\section{\label{intro} Introduction} The presence of uphill diffusion in particle or spin models is a remarkable problem of statistical mechanics, first envisaged by Darken in his pioneering work dating back to 1949 \cite{Darken49}. While Fick's law of diffusion dictates that particles flow \textit{against} the concentration gradient, i.e. from regions of higher to regions of lower concentration (\textit{downhill} diffusion), in the presence of a multi-component system or an external field particles may be found migrating up the gradient \cite{CDMP17bis,CC17}, thus giving rise to the so-called \textit{uphill} currents, confirmed both experimentally and theoretically in numerous studies \cite{Lesher1994,Frink2000,Ferri2006,Kuhl2007,Yu2007,Sundman2009,Vielzeuf2011,Boudin2012,Rougier2013,Laeuerer2015}. Remarkably, in \cite{CDMP16,CDMP17} numerical and theoretical evidence supported the conclusion that uphill diffusion may also occur in single-component systems in the presence of a phase transition. In these works, the authors consider a 1D lattice gas model coupled to external particle reservoirs, in which particles that hop on the lattice are subject to an exclusion principle and are equipped with a long-range, Kac-like interaction \cite{Presutti09}, which gives rise, at sufficiently low temperatures, to a phase transition. By adopting spin variables, the authors show that when the absolute value of the magnetization at the boundaries is larger than the equilibrium mean field magnetization, a sharp interface (called the ``instanton'' therein) appears at the center of the lattice, accompanied by a downhill current. The novelty comes when the absolute magnetization at the boundary is lowered below the equilibrium mean field value: the interface, referred to as the ``bump'', is then observed moving towards one of the boundaries, and the current changes sign, namely positive spins move from the reservoir with lower magnetization towards the one with higher magnetization. One might consider, therefore, an experimental set-up in which two finite reservoirs, equipped with metastable values of magnetization, are connected by two channels: one crossed by an uphill current and another one (in which the long-range Kac interaction is absent) featuring a Fickian current. The result would indeed be a closed circuit in which, in the presence of a phase transition, current spontaneously flows across the ring. This phenomenon was indeed observed and reproduced in \cite{CDMP17} by performing extensive Monte Carlo simulations of the model (note that no violation of the thermodynamic principles occurs therein, as the total energy of the system is not a conserved quantity). A step forward was then taken in \cite{CGGV18}, in which a similar phenomenology was also observed in a 2D Ising model on a square lattice coupled to two external magnetization reservoirs attached at the right and left boundaries (whose magnetizations are equal, respectively, to $m_+>0$ and $m_-=-m_+$). The presence of a stationary uphill current is induced by the reservoirs, whose updating mechanism at the boundaries breaks the condition of local detailed balance, cf. also \cite[Eq. 1.6]{ELS1990}. In particular, the authors show that, in the presence of a ferromagnetic phase transition, a certain critical value $m_\textrm{crit}$ marks the transition between two different regimes (referred to, in \cite{CGGV18}, as the \textit{stable} and \textit{metastable} ones), characterized by downhill and uphill currents when $m_+$ is, respectively, larger or smaller than $m_\textrm{crit}$. 
We recall that in the absence of reservoirs and in the infinite volume limit, the equilibrium 2D Ising model undergoes a phase transition at the inverse critical temperature \cite{Onsager1944} \begin{equation} \label{betac} \beta_c = \frac{\ln(1+\sqrt{2})}{2} \approx 0.440686\, . \end{equation} For inverse temperatures $\beta>\beta_c$ the 2D Ising model (with vanishing external magnetic field) exhibits a spontaneous magnetization given by \cite{Yang1952} \begin{equation} \label{mbeta} m_\beta = \left[1-\frac{1}{\sinh^{4}\left(2\beta\right)}\right]^{1/8}\, . \end{equation} In \cite{CGGV18} it is claimed that the critical value $m_\textrm{crit}$, evaluated at $\beta=1$, can also be estimated by measuring the magnetization value $m_\textrm{eq}$ (which approaches, in the large volume limit, the spontaneous magnetization $m_\beta$ in \eqref{mbeta} \cite{Onsager1944,Yang1952}) evaluated at the rightmost column of an Ising model in equilibrium conditions (i.e. in the absence of reservoirs and external magnetic fields) and characterized by a conservative dynamics. The claim above thus indicates the possibility that a ``nonequilibrium'' observable, such as $m_\textrm{crit}$, characterizing the dynamics of a boundary-driven Ising model, may be estimated from the analysis of an equilibrium Ising model. We shall tackle this question carefully in Section\ \ref{sec:results} and will show that the two quantities $m_\textrm{crit}$ and $m_\textrm{eq}$, evaluated for some fixed $L$ and $\beta$, are generally different from one another, their deviation becoming negligible only for large values of $\beta$ and for suitable choices of boundary conditions (b.c.s). At $\beta=1$ and with the b.c. considered in \cite{CGGV18}, the two observables indeed almost coincide. In this work, we continue along the path traced in \cite{CGGV18} and aim to investigate in more detail the effect of different b.c.s for the considered 2D Ising model, cf. also \cite{Jan1983}, and to unveil novel regimes other than the stable and metastable ones, obtained by lowering further the absolute value of the magnetization of the reservoirs. In particular, we shall discuss the behavior of the \textit{equation of state}, namely the relation between the stationary current and the value of the magnetization at the boundaries, thus extending the preliminary results obtained in \cite{CGGV18}, cf. Fig. 2 therein. The manuscript is organized as follows. The geometry and the bulk dynamics of our model are introduced in Section \ref{sec:model}, together with the boundary conditions in Section \ref{sec:bc}, and details of the spin-updating mechanisms (due to the coupling with the reservoirs) in Section \ref{sec:spin-update}. The role of the chosen spin-updating mechanism, and the comparison with a different mechanism based on detailed balance, are clarified in the same section. Definitions of observables and implementation details are provided in Sections \ref{sec:obs} and \ref{sec:implem}. The results of our simulations are presented and discussed in Section \ref{sec:results}. Section \ref{sec:concl} is devoted to conclusions. \section{The models} \label{sec:model} We consider non-equilibrium Monte Carlo (MC) dynamics of the nearest-neighbor ferromagnetic Ising model on a finite 2D square $L\times L$ lattice $\Lambda$ of linear size $L$, which is periodic only in one dimension (the vertical or, synonymously, the $y$--direction). We shall denote by $\sigma_{i}\in\{-1,+1\}$ the ``bulk'' spin state at the lattice coordinate $i=(x,y)\in \Lambda$. 
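For later reference, the equilibrium benchmarks \eqref{betac} and \eqref{mbeta}, as well as a spin configuration of the kind just introduced, can be set up in a few lines. The following minimal sketch (in Python with NumPy; illustrative only, not the production code used for our simulations) reproduces the numerical values quoted above:
\begin{verbatim}
import numpy as np

beta_c = np.log(1.0 + np.sqrt(2.0)) / 2.0   # cf. eq. (betac), ~0.440686

def m_beta(beta):
    # Onsager-Yang spontaneous magnetization, cf. eq. (mbeta);
    # it vanishes for beta <= beta_c.
    if beta <= beta_c:
        return 0.0
    return (1.0 - np.sinh(2.0 * beta) ** (-4)) ** 0.125

L = 40
rng = np.random.default_rng(0)
# sigma[x, y] in {-1, +1}: the x-direction is open (reservoirs attached
# there), the y-direction is periodic; stored 0-indexed here.
sigma = rng.choice([-1, 1], size=(L, L))

print(beta_c, m_beta(1.0))   # 0.440686..., 0.999275...
\end{verbatim}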
The system exhibits conservative exchange dynamics in the bulk (Section\ \ref{sec:bulk}), and is coupled to two infinite and interaction-free magnetization reservoirs along the horizontal direction (the $x$--direction, see \figref{fig0} for a description of the set-up), denoted as $\mathcal R_+$ and $\mathcal R_{-}$, located, respectively, at the right and the left ends of the lattice. To this end, we define a bounded domain $\Lambda^o$, consisting of two vertical stripes, each made of $L$ lattice sites, located at $x=\bar{x}\in\{0,L+1\}$. We shall then denote by $\sigma^o_{i}$, $i=(\bar{x},y)\in \Lambda^o$ the ghost spins, taking values in a set which depends on the chosen boundary model, as described in Section\ \ref{sec:bc}. We denote by $N_b=2L^2+L$ the total number of bonds in the system, consisting of $L^2+L(L-1)$ bulk bonds between adjacent spins in $\Lambda$ and $2L$ horizontal bonds connecting the spins at the boundaries of $\Lambda$ (hereafter called \textit{boundary spins}, located at $x=x_b\in\{1,L\}$) with the $2L$ so-called \textit{ghost spins} in $\Lambda^o$. Note that the Ising model is equivalent to a lattice gas model via the standard mapping between the $L^2$ spin variables $\sigma_{i}$ and occupation variables $\eta_{i}=(1+ \sigma_{i})/2\in \{0,1\}$, with $\eta_{i} = 1$ (resp. $\eta_{i} = 0$) denoting the presence (resp. absence) of a particle. \begin{figure}[t] \centering \includegraphics[width=0.7\textwidth]{Figures-manuscript-schematic4.pdf} \caption{Schematic picture of the 2D Ising model coupled to ideal reservoirs at the left and right boundaries, characterized by the constant magnetizations $m_-$ and $m_+$, respectively. Spins up and spins down are represented, respectively, with black and green circles, whereas the shaded circles at $x=0$ and $x=L+1$ represent the so-called \textit{ghost spins}, defining the b.c. of the model (they are not part, in general, of the reservoirs). The horizontal double-headed arrow at the center of the lattice indicates that a Kawasaki-type dynamics holds in the bulk, whereas the vertical double-headed arrows at the bottom and at the top of the lattice recall that periodic b.c.s apply along the vertical direction. Finally, the single-headed arrows pointing towards one ghost spin at $x=L+1$ represent the different b.c.s considered for this model.} \label{fig0} \end{figure} Calling $\sigma=\{\sigma_i\}$, $i\in \Lambda$ and $\sigma^o=\{\sigma^o_i\}$, $i\in \Lambda^o$, respectively, the \textit{spin configuration} and the \textit{boundary condition}, we define the Hamiltonian \begin{equation} \label{H-Ising} H(\sigma|\sigma^o) = - \frac{1}{2}\sum_{\underset{i,j\in \Lambda}{\langle i,j\rangle}} \sigma_i \sigma_j- \sum_{\underset{i\in \Lambda, j\in \Lambda^o}{\langle i,j\rangle}} \!\!\!\sigma_i \sigma^o_j, \end{equation} where $\langle i,j\rangle$ denotes a nearest neighbor pair, and the magnetization of a spin configuration $\sigma$ is given by \begin{equation} \label{magn} m(\sigma) = \frac{1}{L^2} \sum_{i\in\Lambda} \sigma_i. \end{equation} Both the properties of the ghost spins at $\bar{x}$ (Section\ \ref{sec:bc}) and the dynamics of the boundary spins at $x_b$ (Section\ \ref{sec:spin-update}) are eventually affected by the two reservoirs $\mathcal R_+$ and $\mathcal R_{-}$. These reservoirs are fully characterized by their constant magnetizations $m_\textrm{res}$, equal to $m_+ \in [0,1]$ and $m_-=-m_+$, respectively. 
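As a concrete illustration of the bookkeeping of bulk and ghost bonds in \eqref{H-Ising} and \eqref{magn}, a direct (if inefficient) evaluation may be sketched as follows (Python/NumPy; storing the ghost columns as two separate arrays is our own illustrative choice):
\begin{verbatim}
import numpy as np

def hamiltonian(sigma, ghost_left, ghost_right):
    # cf. eq. (H-Ising); sigma is an (L, L) array of +-1 spins,
    # open in x (axis 0) and periodic in y (axis 1). The factor 1/2
    # in eq. (H-Ising) compensates for double counting of ordered
    # pairs; here each bond is summed exactly once instead.
    # Horizontal bulk bonds: L*(L-1) pairs, no wrap in x.
    H = -np.sum(sigma[:-1, :] * sigma[1:, :])
    # Vertical bulk bonds: L^2 pairs, np.roll implements the wrap in y.
    H -= np.sum(sigma * np.roll(sigma, -1, axis=1))
    # The 2L boundary bonds to the ghost spins at x=0 and x=L+1.
    H -= np.sum(sigma[0, :] * ghost_left)
    H -= np.sum(sigma[-1, :] * ghost_right)
    return H

def magnetization(sigma):
    # cf. eq. (magn): mean spin over the L^2 bulk sites.
    return sigma.mean()
\end{verbatim}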
In the sequel, when convenient, we will use \begin{equation} \beta h_\textrm{res}=\tanh^{-1}(m_\textrm{res}) = \frac{1}{2}\ln \left(\frac{1+m_\textrm{res}}{1-m_\textrm{res}}\right), \qquad m_\textrm{res}\in\{m_-,m_+\} \label{defbetah} \end{equation} in which $h_\textrm{res}$ can be regarded as the strength of a fictitious, dimensionless magnetic field, taking values $h_+\ge0$ and $h_-=-h_+$ at the right and the left reservoirs, respectively. The reason for this notation will become clear in Section\ \ref{sec:spin-update}: when the mechanism of interaction with the reservoirs is based on \textit{detailed balance}, $\tanh^{-1}(m_+)$ indeed takes the form of a dimensionless magnetic field multiplied by the inverse temperature $\beta$. Table\ \ref{Table1} provides the translation from the $m_+$ values to the $\beta h_+$ values considered in most plots of the following sections. \begin{table}[tb] \centering \begin{tabular}{r@{\quad}r@{\qquad\qquad}r@{\quad}r@{\qquad\qquad}r@{\quad}r@{\qquad\qquad}r@{\quad}r} \hline\hline $m_+$ & $\beta h_+$ & $m_+$ & $\beta h_+$ & $m_+$ & $\beta h_+$ & $m_+$ & $\beta h_+$\\ \hline 0.000000 & 0.00 & 0.716298 & 0.90 & 0.874053 & 1.35 & 0.998778 & 3.70 \\ 0.148885 & 0.15 & 0.781806 & 1.05 & 0.970452 & 2.10 & 0.998894 & 3.75 \\ 0.291313 & 0.30 & 0.817754 & 1.15 & 0.978026 & 2.25 & 0.999329 & 4.00 \\ 0.537050 & 0.60 & 0.833655 & 1.20 & 0.998650 & 3.65 & 0.999955 & 5.35 \\ 0.604368 & 0.70 & 0.848284 & 1.25 & 0.999273 & 3.96 & 1.000000 & $\infty$ \\ \hline\hline \end{tabular} \caption{Sorted list of the $(m_+,\beta h_+)$ pairs used in this manuscript.} \label{Table1} \end{table} \subsection{Kawasaki dynamics in the bulk} \label{sec:bulk} We consider the Ising system with reservoirs at inverse temperature $\beta$ and let the spins evolve following a continuous-time stochastic dynamics with two contributions: a conservative exchange dynamics in the bulk and spin flip mechanisms at the two opposing vertical boundaries. More precisely, in the bulk, the spins follow a Kawasaki dynamics, i.e. the two spins of a selected bond $\langle i,j\rangle$, with $i,j\in \Lambda$, exchange (``bond flip'') their values with rate \begin{eqnarray} \label{kawa} c_{i,j}^\textrm{bulk} = \textrm{min}(1,e^{-\beta \Delta H}), \qquad \Delta H = H(\sigma^{ij}|\sigma^o) - H(\sigma|\sigma^o), \end{eqnarray} where $\sigma^{ij}$ denotes the configuration obtained from $\sigma$ by exchanging the spins at sites $i$ and $j$. In the sequel we shall investigate the stationary dynamics corresponding to two different spin-updating mechanisms at the two vertical boundaries, due to the interaction with the reservoirs $\mathcal R_-$ and $\mathcal R_{+}$, and three different b.c.s. \subsection{Boundary conditions} \label{sec:bc} Periodic b.c.s hold along the vertical direction of the considered model. The focus, hereafter, will be the investigation of different b.c.s imposed along the horizontal direction. These will indeed be found to affect the stationary magnetization profiles $\overline{m}_x$ (as a function of the grid coordinate $x$) as well as the stationary, $x$-independent current $J$, which is defined as the time-averaged rate of change of a boundary spin value. The ``L4'' b.c., inspired by \cite{BP2003} and recently implemented in \cite{CGGV18}, is defined as follows \begin{equation} \mathrm{{\bf [L4]}}\qquad \sigma_{(\bar{x},y)}^o = \sigma_{(x_b,y-L/4)}, \qquad \forall {\bar{x}\in\{x_b\pm 1\}}, \forall {y\in\{1,..,L\}}. 
\end{equation} The distance $L/4$ (the index $y-L/4$ being understood modulo $L$, owing to the vertical periodicity) was chosen so as to make the two spins $ \sigma_{(x_b,y)}$ and $ \sigma_{(x_b,y-L/4)}$ sufficiently uncorrelated. The second b.c. we consider is the so-called ``free boundary'' (FB) condition, in which one takes \begin{equation} \mathrm{{\bf [FB]}}\qquad\sigma_{i}^o=0, \quad \forall {i\in\Lambda^o}\,. \end{equation} Note that in both the L4 and FB b.c.s, the properties of the reservoirs do not enter directly. The third b.c. we shall consider will be referred to, hereafter, as the ``stochastic boundary'' (SB) condition. The SB rule dictates that each ghost spin $\sigma_i^o\in\{-1,+1\}$ is a random variable whose distribution is fixed by the magnetization $m_\textrm{res}$ of the adjacent reservoir, in such a way that the average value of $\sigma_i^o$ equals $m_\textrm{res}$: \begin{equation} \label{ghsp} \mathrm{{\bf [SB]}}\qquad\textrm{prob}(\sigma_{i}^o) = \frac{1 + \sigma_{i}^o m_\textrm{res}}{2}, \quad \forall i\in\Lambda^o\, , \end{equation} where $m_\textrm{res} \in \{m_-,m_+\}$ is the magnetization of the reservoir close to $x=x_b$. \subsection{Spin-updating mechanisms} \label{sec:spin-update} We shall now consider two different spin-updating mechanisms at the vertical boundaries of the lattice, which mimic the action of the two reservoirs on the system. As shown in \figref{fig0}, the system is coupled, at its vertical boundaries, to the two reservoirs $\mathcal R_+$ and $\mathcal R_{-}$, having magnetizations equal, respectively, to $m_+ \in [0,1]$ and $m_-=-m_+$. The first mechanism we shall consider in this work is the one adopted in \cite{CGGV18}, called \textit{independent spin flip} (ISF). According to the ISF mechanism, a selected boundary spin $\sigma_{i}$, $i=(x_b,y)$ is updated to its new value $\sigma'_{i}\in\{-1,1\}$ (which may coincide with $\sigma_{i}$) with a rate $c^\textrm{ISF}$ which is independent of the current state $\sigma_i$ and only dictated by the magnetization of the adjacent reservoir, i.e.: \begin{equation} \label{isf} c^\textrm{ISF}(\sigma'_{i}) = \frac{1 + \sigma_i' m_\textrm{res}}{2} \end{equation} where $m_\textrm{res} \in \{m_-,m_+\}$ is the magnetization of the reservoir close to $x=x_b$. We thus flip $\sigma_i$ with probability $(1-\sigma_i m_\textrm{res})/2$. Note further that each reservoir plays a double role with the ISF-SB dynamics: (i) it updates the boundary spins $\sigma_i\rightarrow \sigma'_i$, $i=(x_b,y)\in\Lambda$ with a rate defined by \eqref{isf}, (ii) it acts as a real ``boundary'' surrounding the lattice $\Lambda$ by fixing the stochastic b.c. $\sigma^o$ (i.e. the stochastic character of all the ghost spins in $\Lambda^o$), according to \eqref{ghsp}. A second spin-updating mechanism is obtained by considering a model that is fully decoupled from the reservoirs, namely no spin flip (NSF) takes place at the leftmost and rightmost vertical boundaries of the lattice. For this model, the bulk dynamics is still defined by \eqref{kawa}, but \eqref{isf} is replaced by \begin{equation} \label{nsf} c^\textrm{NSF}=0\, . \end{equation} Since reservoirs are not part of the description for the NSF dynamics, the SB b.c. as defined in Section\ \ref{sec:bc} is no longer relevant here. However, we can still define the SB b.c. in the NSF case by considering \eqref{ghsp} with a given $m_+$ (and $m_-=-m_+$), even if these values are not associated with a reservoir. 
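To make the updating rules concrete, the following sketch (Python; an illustrative pseudo-implementation under our own conventions, not the production code) implements the Kawasaki rate \eqref{kawa}, the ISF rule \eqref{isf} and the SB drawing of the ghost spins \eqref{ghsp}:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng()

def kawasaki_flip(sigma, i, j, beta, dH):
    # Exchange the spins of bond <i,j> with rate min(1, exp(-beta*dH)),
    # cf. eq. (kawa); dH = H(sigma^{ij}|sigma^o) - H(sigma|sigma^o) is
    # assumed to have been computed from the local neighborhood.
    if dH <= 0 or rng.random() < np.exp(-beta * dH):
        sigma[i], sigma[j] = sigma[j], sigma[i]

def isf_update(sigma, i, m_res):
    # Independent spin flip, cf. eq. (isf): the new value is drawn from
    # the reservoir distribution irrespective of the current sigma[i],
    # so sigma[i] is flipped with probability (1 - sigma[i]*m_res)/2.
    sigma[i] = 1 if rng.random() < (1 + m_res) / 2 else -1

def sb_ghosts(m_res, L):
    # Stochastic boundary, cf. eq. (ghsp): each ghost spin equals +1
    # with probability (1 + m_res)/2, so that its mean is m_res.
    return np.where(rng.random(L) < (1 + m_res) / 2, 1, -1)
\end{verbatim}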
It is readily seen that the NSF dynamics conserves, at any time step, the total magnetization, which is then fixed by the initial configuration. The presence of a spin-updating mechanism induced by the reservoirs, or the lack thereof, significantly affects the critical value of the magnetization corresponding to a regime of vanishing current, as will be seen in Section\ \ref{sec:critical}. Let us also briefly dwell, here, on some of the features of the so-called detailed balance (DB) dynamics \cite{CMW11,ELS1990}. We say that the spins located at one boundary of the lattice are (locally) in \textit{equilibrium} with the nearest reservoir if the rate $c^\textrm{DB}$ at which a spin value $\sigma_{i}'$ is introduced at the boundary site $i$ obeys the following DB condition: \begin{equation} \label{db} c^\textrm{DB}(\sigma_i') = \textrm{min}\left[1,\left(\frac{1+m_\textrm{res}\sigma_i'}{1+m_\textrm{res}\sigma_i}\right)e^{-\beta \Delta H}\right], \qquad \Delta H = H(\sigma'|\sigma^o) - H(\sigma|\sigma^o), \end{equation} where $\sigma'$ denotes the configuration obtained from $\sigma$ by updating the spin $\sigma_i$ to $\sigma_i'$, and the term $(1+m_\textrm{res}\sigma_i')/2$ corresponds to the probability of drawing at random the spin $\sigma_i'$ from the reservoir with magnetization $m_\textrm{res}$. We may then use an identity that follows from \eqref{defbetah}, \begin{equation} \label{mh} \frac{1+m_\textrm{res}\sigma_i'}{1+m_\textrm{res}\sigma_i}=e^{\beta h_\textrm{res}(\sigma_i'-\sigma_i)}, \qquad \forall \sigma_i,\sigma_i'\in\{-1,+1\} \end{equation} to rewrite \eqref{db} more conveniently as \begin{equation} \label{db2} c^\textrm{DB}(\sigma_i') = \textrm{min}\left(1,e^{-\beta \Delta \mathcal{H}}\right), \qquad \Delta \mathcal{H} = \mathcal{H}(\sigma'|\sigma^o) - \mathcal{H}(\sigma|\sigma^o) \end{equation} where \begin{equation} \mathcal{H}(\sigma|\sigma^o)= H(\sigma|\sigma^o)- \sum_{i\in(x_b,y)} h_\textrm{res} \sigma_i \end{equation} corresponds to the Hamiltonian of an Ising model with opposing magnetic fields $h_\textrm{res}$ acting on all boundary spins. \subsection{Observables} \label{sec:obs} The presence of the two reservoirs with different magnetizations at the boundaries of the domain $\Lambda$ induces a magnetization flow across the system. The magnetization current along a given horizontal bond is measured by counting the number of positive spins that cross this bond from left to right, minus the number of those crossing this bond in the opposite direction, and dividing this number by time. The stationary current $J$ then corresponds to the long time limit of this rate \cite{CGGV18}, averaged over all bonds or, alternatively, over boundary bonds only, provided a stationary limit exists. 
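In a simulation, this counting amounts to updating a signed counter whenever a bond flip transports a positive spin; a minimal sketch (with counter and function names of our own choosing):
\begin{verbatim}
# Signed crossings of one horizontal bond <(x,y),(x+1,y)>; the
# stationary current estimate is crossings / elapsed_time, averaged
# over all bonds (or over boundary bonds only).
crossings = 0

def record_exchange(s_left, s_right):
    # Call just before an accepted bond flip is applied.
    global crossings
    if s_left == 1 and s_right == -1:
        crossings += 1   # a positive spin moves left -> right
    elif s_left == -1 and s_right == 1:
        crossings -= 1   # a positive spin moves right -> left
\end{verbatim}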
\begin{table} \centering \begin{tabular}{lll} \hline\hline Symbol & Definition & Description \\ \hline $m_+$ & Parameter & Magnetization of reservoir ${\cal R}_+$ \\ $m_-$ & $=-m_+$ & Magnetization of reservoir ${\cal R}_-$\\ $m_\textrm{res}$ & res\ $\in\{-,+\}$ & Stands for $m_+$ or $m_-$ \\ $m_\beta$ & \eqref{mbeta} & Onsager's equilibrium magnetization\\ $\overline{m}_x$ & \eqref{defmeanmx} & Mean magnetization at position $x\in \{1,..,L\}$\\ $\overline{m}_L$ & $=\overline{m}_{x=L}$ & Mean magnetization at the right boundary \\ $\overline{m}_L^\textrm{X}$ & & $\overline{m}_L$ obtained using model X \\ $m_\textrm{eq}$ & $=\overline{m}_L$ & Mean absolute magnetization at the right boundary for the ${\cal E}$--models\\ $m_\textrm{eq}^\textrm{X}$& $=\overline{m}_L^\textrm{X}$ & $m_\textrm{eq}$ obtained using model X $\in\mathcal{E}$ \\ $m_\textrm{crit}$ & & Largest $m_+$ value for which $J=0$\\ $m_\textrm{crit}^\textrm{X}$ & & $m_\textrm{crit}$ obtained using model X $\in\mathcal{N}$ \\ $m_\textrm{bump}$ & & Maximum $\overline{m}_x$ value of the bump profile, close to a boundary\\ $m_o$ & $=\overline{m}_L^\textrm{DB-L4}$ & Evaluated for the case $m_+=0$\\ $\overline{m}$ & $=L^{-1}\sum_{x=1}^L\overline{m}_x$ & Mean bulk magnetization\\ $m(r)$ & & Magnetization at reduced coordinate $r\in[0,1]$\\ $m_u$ & & Lower end of the bump stability region $m_+\in(m_u,m_\textrm{crit})$ \\ $m_+^\textrm{eff}$ & \eqref{meff} & Effective magnetization, $\overline{m}_L^\textrm{ISF-L4}(\beta,m_+^\textrm{eff}) = \overline{m}_L^\textrm{DB-L4}(\beta,m_+)$\\ \hline\hline \end{tabular} \caption{\label{tabnotation}Notation for the magnetizations considered in this work. Whenever a symbol $h$ is used, it is related to the corresponding magnetization by $m=\tanh(\beta h)$, where $\beta$ denotes the inverse temperature, cf. Tab.\ \ref{Table1}.} \end{table} A typical stationary state configuration can be surveyed by introducing the {\em stationary magnetization profile} $\overline{m}_x$, $x=1,\ldots,L$, which is obtained as follows: we measure the average magnetization along the column $x$, \begin{equation} m_x(t) = \frac 1 L\sum_{y=1}^{L} \sigma_{(x,y)}(t), \qquad \forall x\in \{1,..,L\} \label{defmx} \end{equation} at each MC step $t$, and then take the average $\overline{m}_x$ over the set of values collected in the course of the MC run, with $T$ steps in total, \begin{equation}\label{defmeanmx} \overline{m}_x = \frac{1}{T}\sum_{t=1}^T m_x(t), \qquad \forall x\in \{1,..,L\}\, . \end{equation} In practice, the expensive sums are never evaluated during the MC run, as each spin or bond flip causes only incremental changes of $m_x(t)$; see the sketch after this paragraph. The mean boundary magnetizations are $\overline{m}_1$ and $\overline{m}_L$. For the ${\cal E}$--models $\overline{m}_L$ is denoted by $m_\textrm{eq}$. The notation used in this work is summarized in Tab.\ \ref{tabnotation}. By combining reservoir mechanisms and b.c.s we have defined nine different models: a set of three equilibrium models ${\cal E}\equiv\{$NSF-L4, NSF-FB, NSF-SB$\}$, a set of three local equilibrium models ${\cal L}\equiv\{$DB-L4, DB-FB, DB-SB$\}$, and the corresponding three nonequilibrium models ${\cal N}\equiv\{$ISF-L4, ISF-FB, ISF-SB$\}$. The investigation of the ${\cal E}$, ${\cal L}$, and ${\cal N}$ dynamics, equipped with the different b.c.s, will allow us to comment, in Section\ \ref{sec:results}, on the relation between the magnetizations $m_\textrm{crit}$ and $m_\textrm{eq}$, mentioned above, for different values of $\beta$. 
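The incremental bookkeeping mentioned above may be sketched as follows (Python/NumPy; the variable names are ours and the MC loop itself is only indicated):
\begin{verbatim}
import numpy as np

L, T = 40, 10**6
rng = np.random.default_rng(1)
sigma = rng.choice([-1, 1], size=(L, L))
col_sum = sigma.sum(axis=1).astype(float)  # L * m_x(t), cf. eq. (defmx)
profile_acc = np.zeros(L)                  # running sum, eq. (defmeanmx)

def on_spin_flip(x, new_value):
    # A single flip at column x changes the column sum by 2*new_value;
    # no O(L) re-summation is required.
    col_sum[x] += 2 * new_value

# Inside the MC loop, after each attempted move (calling on_spin_flip
# for the one or two spins that changed):
#     profile_acc += col_sum / L
# After T recorded steps, the stationary profile of eq. (defmeanmx) is:
#     m_bar = profile_acc / T
\end{verbatim}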
\subsection{Implementation details} \label{sec:implem} All results to be presented are obtained for 2D grids with $L=40$. Each MC step corresponds to a single attempted bond or boundary spin flip at a randomly selected site. If not otherwise mentioned, (i) the initial configuration, referred to as ``{\em configuration A}'', has $\sigma_i=-1$ for $x\in\{1,..,20\}$ and $\sigma_i=+1$ for $x\in\{21,..,40\}$ (the configuration shown for $h_+=5.35$ in \figref{schematic-beta=1-ISF-L4} later below), and (ii) each simulation for a given set of parameters $\beta$, $m_+$ runs for $\tau=10^{13}/N_b$ steps. Contributions to observables like the current $J$ and the magnetization profiles $\overline{m}_x$ are computed at each MC step, as each bond or spin flip gives rise to an increment or decrement of these quantities; this is in contrast with the total energy, which is for this reason only calculated every $N_b$ MC steps. To save computing time further, all nine models ${\cal E}$, ${\cal L}$, and ${\cal N}$ are calculated during a single run and share their random numbers. In this way identical bonds or spins are selected for an attempted MC move simultaneously, and the cost for the calculation of acceptance rates is minimized. Exponentials $\exp(-\beta \Delta H)$ are nowhere calculated on the fly, but looked up from their very limited set of possible values. Relevant neighbors for the calculation of $\Delta H$ are saved once for each bond and each model, which completely eliminates the run-time evaluation of b.c.s such as L4 or the periodic b.c.s in the $y$--direction. We have calculated the current $J$ independently from the spins at the left and right boundaries. For all stationary results to be presented, the two estimates are identical within statistical errors. To capture the details as a function of $m_+$, we plot results versus the dimensionless quantity $\beta h_+\equiv \tanh^{-1}(m_+)$. This representation is advantageous in that it blows up the region of $m_+$ close to 1, where interesting phenomena appear in a very narrow interval of $m_+$ \cite{CGGV18}. To ensure that measurement points are equidistantly spaced in this representation, we choose $\beta h_+$ values equidistantly spaced between $0$ and $6.5$, at a resolution of $0.05$. In addition, we calculate the result for $m_+=1$ (corresponding to $\beta h_+=\infty$) and add it to the plots as a filled marker. The 2D maps are obtained at a resolution of $\Delta \beta=0.01$ and $\Delta(\beta h_+)=0.05$. A function like $m_\textrm{crit}(\beta)$ is extracted from the $J$--map as the contour line at $J=0$. The magnetization $m^\textrm{eff}_+$ is extracted by first inverting the $\overline{m}_L^\textrm{ISF-L4}$ map, using a continuous interpolant. The quantities $m^\textrm{eff}_+$ and $\overline{m}_L^\textrm{ISF-L4}$ are introduced in Section\ \ref{sec:critical}, see also Tab.\ \ref{tabnotation}. \section{Results and discussion} \label{sec:results} In this section we present the results of our numerical simulations of the nonequilibrium steady state (NESS) attained by the ${\cal N}$--models ISF-L4, ISF-FB, and ISF-SB introduced in Section\ \ref{sec:model}. Moreover, when convenient, we shall also compare the results obtained with the same b.c.s in the presence of DB dynamics, i.e., the ${\cal L}$--models DB-L4, DB-FB, and DB-SB. The equilibrium ${\cal E}$--models NSF-L4, NSF-FB and NSF-SB will also be considered for the purpose of comparison. We aim at elucidating the nature of the NESS and the role played by the details of the b.c.s and reservoir mechanisms. 
In particular, we will focus on the stationary magnetization profiles $\overline{m}_x$ and currents $J$, defined in Section\ \ref{sec:obs}. In all the simulations to be considered below, the initial datum is the ``configuration A'' introduced in Section\ \ref{sec:implem}, whose total magnetization is zero. Since our models depend on $L$, $\beta$ and $m_+$, the corresponding NESS will also depend on the same set of parameters in a manner which is far from obvious. It is therefore of crucial importance to choose these quantities properly in our simulations. The dependence of the NESS on the inverse temperature $\beta$ is likely to be dictated by the presence of the ferromagnetic phase transition in the equilibrium system. Thus, different behaviors are to be expected in the two temperature regimes: standard phenomena (e.g. Fick's law) at high temperatures and a more complex and intriguing picture in the low temperature phase. As representative instances of the two different regimes, we have chosen $\beta=0.3$ and $\beta=1$, respectively lower and higher than the critical value $\beta_c$ in \eqref{betac}. We recall that the linear size of the lattice $\Lambda$ in our simulations, for which results are presented, is chosen as $L=40$, as in \cite{CGGV18}. Indeed, to the best of our knowledge, this lattice size appears sufficiently large to suppress the most evident finite-volume effects that can be observed for $L$ ranging between 10 and 40, while being sufficiently small to allow for a detailed numerical study within an acceptable computational workload. However, we cannot exclude the existence of significant finite-size effects for systems with sizes of the order of $L=40$. These issues will be discussed further in what follows. The main results of \cite{CGGV18}, which will be extended and deepened here, concern the behavior of the stationary current $J$ of the ISF-L4 model, as the magnetization $m_+$ ($m_-$) of the reservoir(s) is varied. We recall them briefly here. \begin{enumerate} \item As the value $m_+$ is decreased from its maximum value 1, the flux $J(m_+)$ is first negative and, crossing a critical value $m_\textrm{crit}$, it becomes positive. More precisely, for $m_+>m_\textrm{crit}$ the current is negative, flowing from the reservoir with positive magnetization $\mathcal R_+$ to the one with negative magnetization $\mathcal R_{-}$ (in agreement with Fick's law). For $m_+<m_\textrm{crit}$ the current is reversed, flowing against the magnetization gradient, that is from $\mathcal R_{-}$ to $\mathcal R_+$ (in violation of Fick's law). In the latter case we have the phenomenon called {\em uphill diffusion}. \bigskip \item The flip of the current in passing from downhill to uphill diffusion is connected to a change in the structure of the NESS \cite{CGGV18}. This can be detected by studying the stationary magnetization profile $\overline{m}_{x}$ or by inspection of the typical spin configuration of the NESS. In \cite{CGGV18} two typical magnetization profiles have been identified: the {\em instanton} and the {\em bump}. \medskip The {\em instanton} profile corresponds, in the spin configurations, to the presence of an interface separating two equally sized plus and minus phases, close to $\mathcal R_+$ and $\mathcal R_-$ respectively, that is with the interface placed in the middle of the lattice (cf. also \cite{SOS16} for an interpretation of the interface based on nonequilibrium thermodynamics). 
A sketch of the instanton profile, in a continuous limit representation $m(r)$, where $r=(x-1)/(L-1)$, is given in \figref{schematic-profile}. The figure shows that, for $r$ moving from the right reservoir $\mathcal R_+$ ($r=1$) to the center of the lattice ($r=1/2$), the profile $m(r)$ decreases from $m_+$ (the magnetization imposed by $\mathcal R_+$) to a value which is expected to be the equilibrium spontaneous magnetization $m_\beta$ given in \eqref{mbeta}. At the center of the system, $r=1/2$, the magnetization jumps down to $-m_\beta$ and then keeps decreasing up to the left reservoir ($r=0$), where it takes the value $m_-$. Obviously, at finite volumes the sharp jump is replaced by a layer, i.e. a narrow region of sites in which the profile changes rapidly. Examples of steady magnetization profiles with an instanton-like shape are shown in the panel with $\beta h_+=5.35$ of \figref{fig1.d1} for the ISF-L4, ISF-FB, ISF-SB models. The corresponding spin configuration, for the ISF-L4 case, is shown in \figref{schematic-beta=1-ISF-L4}, where it is also evident that $J$ is negative in this state. \begin{figure}[tb] \centering \includegraphics[width=0.6\textwidth]{Fig3.pdf} \caption{\label{schematic-profile}Sketch of the magnetization profile $m(r)$ in the macroscopic coordinate $r=(x-1)/(L-1)\in[0,1]$, when $m_+=1$ and $\beta>\beta_c$. For $\beta<\beta_c$ one has $m_\beta=0$ and the gap is absent; the profile becomes linear at $\beta=0$.} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=0.8\textwidth]{Figures-manuscript-fig-msfig-mx-beta=1} \caption{\label{fig1.d1}Stationary magnetization profiles $\overline{m}_x$ versus $x$ for the three nonequilibrium models at $\beta=1$, i.e. $\beta \gg \beta_c$, a temperature well below the critical temperature of the corresponding equilibrium system. $40\times 40$ grid. $\tau=10^{13}/N_b$ for each $h_+$ value. Note that $h_+=\beta h_+$ in the present case. } \end{figure} \begin{figure} \centering \includegraphics[width=0.75\textwidth]{Figures-manuscript-schematic-beta=1-ISF-L4-july2018.pdf} \caption{\label{schematic-beta=1-ISF-L4}Nonequilibrium current $J$ versus $h_+$ for $\beta=1$ (ISF-L4), together with selected configurations at time $t=\tau=10^{13}/N_b$. The corresponding magnetization profiles are shown in \figref{fig1.d1} in concert with profiles for the other two models (ISF-FB and ISF-SB). The metastable region at $h_+\approx 3.7$ ($m_+\approx 0.9988$) is investigated in more detail below. The current changes sign three times: between the panels marked by $h_+=1.15$ and $1.26$, between $h_+=1.36$ and $2.24$, and between $h_+=3.75$ and $4.0$. It reaches its maximum value at about $h_+=0.71$. Color code: spin +1 (black), spin -1 (olive). $40\times 40$ grid. } \end{figure} \medskip The {\em bump} profile occurs when the typical spin configuration of the NESS presents an interface close to one of the vertical boundaries. In this case a sea of (say) positive spins emanating from $\mathcal R_+$ invades most of the lattice $\Lambda$, leaving a small strip adjacent to $\mathcal R_-$ for the negative spins. The profile of $\overline{m}_{x}$, for $x$ ranging from the right ($x=L$) to the left ($x=1$) boundary, increases steadily from $m_+$ up to a maximum value $m_\textrm{bump}>m_+$, which is reached for $x$ close to 1. Then $\overline{m}_{x}$ jumps suddenly from $m_\textrm{bump}$ down to $m_-$ at $x=1$, forming a boundary layer close to the left boundary. 
A similar profile is obtained by changing the sign of the magnetization in the previous description and exchanging left with right. In any case, in the presence of a bump the current flows in the `wrong' direction, producing {\em uphill diffusion}. The bump appears, for instance, in the ISF-L4 model for $\beta h_+=3.75$, see \figref{fig1.d1}. \figureref{schematic-beta=1-ISF-L4} shows, for the same value of $\beta h_+$, the spin configuration with the boundary layer close to $\mathcal R_+$ and the evidence for the uphill current, i.e. $J>0$. \bigskip \item The instanton profile, which sustains a negative current, is stable for $m_+\in(m_\textrm{crit},1)$. This means that, perturbing the \textit{configuration A} initial datum or even taking a random initial condition for $m_+>m_\textrm{crit}$, the dynamics leads asymptotically to the instanton profile for $\overline{m}_{x}$. This is the {\em stable phase}. The situation is quite different and much more intricate for $m_+\in (0, m_\textrm{crit})$. Here the instanton loses stability and we have numerical evidence that, at least for not too low $m_+$ values, the bump profile is stable. Thus, there should exist a further critical value $m_{u} <m_\textrm{crit}$ that defines the stability region $(m_u, m_\textrm{crit})$ for the bump. The nature of $m_u$ is not well understood, but it is conjectured that its presence should be a finite-volume effect, since in the large volume limit the interval $(m_u, m_\textrm{crit})$ is supposed to shrink to the empty set. Indeed, some heuristic reasoning leads to the estimate $O(L^{-2/3})$ for its length. This phase has been termed {\em metastable} in \cite{CGGV18}. Below $m_{u}$ the picture is much more complex and less studied. In the next section we will show that, decreasing $m_+$ further towards 0, the system undergoes an intricate sequence of different stationary states. This is the {\em unstable phase}, which was also referred to as the ``weakly unstable'' or ``chaotic region'' in \cite{CGGV18}. \end{enumerate} \subsection{Behavior of $J$ in external magnetization-temperature space} We shall discuss here the 2D maps for the current $J$ in the parameter space spanned by $ \beta h_+=\tanh^{-1}(m_+)$ and $\beta$ for the three different models ISF-L4, ISF-FB and ISF-SB. The MC results are portrayed in \figref{Jmap} (left column). Remarkably, the ISF-L4 and ISF-SB dynamics seem to yield similar results for the current, while the FB b.c. gives rise to a different scenario, in which the onset of uphill currents is somehow hindered. It should also be noted that, with the L4 and SB b.c.s, Onsager's result for $m_\beta$, denoted by the yellow line in \figref{Jmap}, is close, for large enough values of $\beta$, to $m_\textrm{crit}$, since it closely marks the transition between regions with positive currents (colored red in the figure) and regions with negative currents (colored green). For smaller values of $\beta$, the yellow line no longer stays at the border between the red and green regions, hence $m_{\beta}$ and $m_\textrm{crit}$ substantially differ from one another, as will also be seen in \figref{fig3.1}. 
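Operationally, as mentioned in Section\ \ref{sec:implem}, $m_\textrm{crit}(\beta)$ is read off from the simulated $J$--map as the contour line at $J=0$. A sketch of such an extraction, assuming the map has been stored as a 2D array on the $(\beta,\beta h_+)$ grid (illustrative only; the placeholder data below stand in for the measured currents):
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

# Grid resolutions as in the implementation section: 0.01 in beta,
# 0.05 in beta*h_+.
beta = np.arange(0.0, 1.0 + 1e-9, 0.01)
beta_h = np.arange(0.0, 6.5 + 1e-9, 0.05)
# J_map[k, n]: measured current at (beta[k], beta_h[n]); a smooth
# placeholder is used here in lieu of simulation data.
J_map = np.sin(3.0 * np.outer(beta, np.ones_like(beta_h))
               - np.outer(np.ones_like(beta), beta_h))

# The J = 0 contour; its uppermost branch gives (beta*h)_crit(beta),
# whence m_crit = tanh(beta*h_crit), cf. eq. (defbetah).
cs = plt.contour(beta, beta_h, J_map.T, levels=[0.0])
zero_branches = cs.allsegs[0]   # list of (n_points, 2) arrays
\end{verbatim}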
\begin{figure*} \centering \includegraphics[width=6cm]{Figures-manuscript-fig-prelim-MAP-ISF-L4} \includegraphics[width=6cm]{Figures-manuscript-fig-prelim-MAP-DB-L4}\\ \includegraphics[width=6cm]{Figures-manuscript-fig-prelim-MAP-ISF-FB} \includegraphics[width=6cm]{Figures-fig-prelim-MAP-DB-FB}\\ \includegraphics[width=6cm]{Figures-manuscript-fig-prelim-MAP-ISF-SB} \includegraphics[width=6cm]{Figures-fig-prelim-MAP-DB-SB} \caption{2D maps showing the values of $J$ as a function of $\beta$ (horizontal) and $\beta h_+$ (vertical) for the three ${\cal N}$--models (left column) and the ${\cal L}$--models (right column). For the $\mathcal{E}$--models $J$ vanishes by definition. Regions with positive $J$ are colored red, adjacent regions with negative $J$ are colored green, and the remaining regions with $J<0$ use a grayscale; $J=0$ at $\beta h_+=0$, i.e., $m_+=0$. As is evident from these 2D maps, and depending on the pathway in $\beta$--$\beta h_+$--space, $J$ changes its sign several times for all but the DB-L4 and DB-FB models. The yellow line has been added as a reference; it corresponds to Onsager's exact result for the 2D Ising model, $\beta h_{\beta}=\tanh^{-1}(m_\beta)$ with $m_\beta$ from \eqref{mbeta}. Height profiles of the three ISF-maps along two vertical lines, at $\beta=0.3$ and $\beta=1$, are shown at better resolution in \figref{fig1.e}. The uppermost contour line $J=0$ (borderline between red and green) for all ISF-models is shown in \figref{fig3.1}. } \label{maps} \label{Jmap} \end{figure*} In Section\ \ref{sec:critical} we shall focus, in particular, on the behavior of the current as a function of $\beta h_+$ along the vertical lines at $\beta=0.3$ and $\beta=1$. An inspection of \figref{Jmap} reveals that positive ``uphill'' currents $J>0$ (colored red) are observed, with the ISF mechanism, when the parameters $\beta$ and $m_+$ of our model are suitably tuned. In particular, the parameter $\beta$ is required to be larger than a certain critical value $\beta_\textrm{crit}$. Note that the latter, due to finite size effects, might even significantly differ from the critical value $\beta_c$ in \eqref{betac} determined by Onsager (pertaining to the infinite volume Ising model). The ISF mechanism is an essential ingredient to observe uphill currents. It is worth comparing the ISF results shown in \figref{Jmap} (left column) with the corresponding results obtained with the DB dynamics equipped with the L4, FB and SB b.c.s, see \figref{Jmap} (right column). An inspection of \figref{Jmap} confirms that while the L4 and FB b.c.s give rise to a purely Fickian behavior, the DB dynamics equipped with the SB b.c. defined in \eqref{ghsp} yields uphill currents. The onset of positive currents even in the presence of DB dynamics stems from the choice of the aforementioned boundary condition, which, as it stands, does not properly match the DB dynamics. This aspect will be clarified in Section \ref{sec:mL}. \subsection{Critical value $m_\textrm{crit}$, NSF equilibrium magnetization $m_\textrm{eq}$ and DB equilibrium magnetization $m_o$} \label{sec:critical} The critical value $m_\textrm{crit}$ separating the stable and the metastable phases is the threshold at which the current changes sign, i.e. where the current vanishes, $J(m_\textrm{crit})=0$. As already mentioned in the Introduction, it has been conjectured in \cite{CGGV18} that in the large volume limit $m_\textrm{crit}$ should approach the equilibrium magnetization $m_\beta$. 
However, in order to take into account the finite size effects, in \cite{CGGV18} it is also claimed that $m_\textrm{crit}$ can be measured by evaluating the magnetization $m_\textrm{eq}$ of the rightmost column of $\Lambda$ {\em in the absence of the reservoirs}, i.e. at equilibrium (NSF models). The choice of the L4 b.c.s should entail that in the large volume limit $m_\textrm{eq}$ approaches $m_\beta$. Some numerical evidence of this fact is given in \cite{CGGV18} for the ISF-L4 case, where $m_\textrm{eq}$ is computed for $L$ ranging from 10 up to 40 and $\beta=1$. Here, we study the dependence of $m_\textrm{eq}$ and $m_\textrm{crit}$ on the parameter $\beta$, and compare the ISF-L4 case with ISF-FB and ISF-SB. We denote by $m_\textrm{crit}^\textrm{X}$, with X\ $\in {\cal N}$, the critical magnetization computed according to the nonequilibrium model X. Analogously, $m_\textrm{eq}^\textrm{X}$, with X\ $\in {\cal E}$, is the quantity $\overline{m}_L$ defined by \eqref{defmeanmx} computed in the equilibrium model X. For later use, we also introduce a quantity, named $m_o$, which is defined as $\overline{m}_L^\textrm{DB-L4}$ at $h_+=0$. We remark that, while $m_\textrm{eq}$ is defined for the equilibrium bulk system in which the sole conservative Kawasaki dynamics acts, in defining $m_o$ we retain, in the presence of local equilibrium, the action of the two reservoirs, albeit with the same magnetization $m_+=m_-=0$ (corresponding to $h_+=0$). In this situation there is no net flux across the system induced by the gradient of magnetization at the boundaries, and a NESS is not established. Thus we argue that, at least for sufficiently large systems, the two systems (i.e. the canonical NSF-L4 and the grand canonical DB-L4 with $h_+=0$) should be equivalent and hence $m_\textrm{eq}$ and $m_o$ should agree. \begin{figure}[tb] \centering (a)\includegraphics[width=0.4\textwidth]{Figures-critical_z} (b)\includegraphics[width=0.4\textwidth]{Figures-fig-prelim-L4-special-values} \caption{\label{fig3.1} {\bf (a)} Behavior of $m_\textrm{eq}$, $m_\textrm{crit}$, and $m_\beta$ (shown is $\tanh^{-1}(m)$ in each case) as functions of $\beta$. Error bars are comparable with symbol sizes. {\bf (b)} Behavior of $m_\textrm{eq}$, $m_\textrm{crit}$, $m_o$, all evaluated with the L4 b.c., and $m_\beta$ as functions of $\beta$. The data points for $m_o$ within the range $\beta\in[0.45,0.55]$ are marked by red crosses, as they are not expected to match $m_\beta$. At these high temperatures the system frequently changes its overall magnetization, and the mean magnetization at the boundary vanishes. The crosses represent the mean absolute magnetization at the boundary. } \end{figure} In \figref{fig3.1} we show the critical values $m_\textrm{crit}^\textrm{X}$ for $\textrm{X}\in {\cal N}$, the spontaneous magnetization $m_\beta$, see \eqref{mbeta}, and $m_\textrm{eq}^\textrm{X}$, X\ $\in {\cal E}$, for $\beta \in (0, 1)$. $m_\textrm{crit}^\textrm{X}$ is the $m_+$--value closest to $1$ at which the current vanishes. The figure shows that the status of the conjecture regarding $m_\textrm{eq}^\textrm{NSF-L4}$ and $m_\textrm{crit}^\textrm{ISF-L4}$ in \cite{CGGV18} is strongly dependent on $\beta$. Indeed, for small $\beta$ (say $ \beta<0.6$) the two quantities are appreciably different from one another and from $m_\beta$. On the other hand, $m_\textrm{eq}^\textrm{NSF-L4}$ and $m_\textrm{crit}^\textrm{ISF-L4}$ get closer and closer to $m_\beta$ as $\beta$ is increased. 
For $\beta=1$, the value studied in \cite{CGGV18}, they almost coincide. The right panel of \figref{fig3.1} also shows the behavior of $m_o$, defined for the DB-L4 dynamics, as a function of $\beta$. The role of the b.c.s is also evident: for all $\beta$, the values $m_\textrm{crit}^\textrm{ISF-FB}$ and $m_\textrm{eq}^\textrm{NSF-FB}$ are clearly different from $m_\textrm{crit}^\textrm{ISF-L4}$, from $m_\textrm{eq}^\textrm{NSF-L4}$ and from $m_\beta$. Moreover, they seem to approach each other as $\beta$ increases. It is also evident that ISF-L4 and ISF-SB coincide in the whole range of $\beta$. This is the first instance of a situation that we will meet frequently, i.e. the equivalence of the ISF-L4 and ISF-SB models. Finally, we observe that all the quantities relative to the models with L4 and SB b.c.s converge to $m_\beta$ as $\beta$ increases. \subsection{Average magnetization at the boundaries} \label{sec:mL} Another relevant feature of the ISF mechanism, outlined in \cite{CGGV18}, is that the stationary magnetization (averaged over the vertical direction) on the rightmost column, denoted as $\overline{m}_L$, see Tab.\ \ref{tabnotation}, attains a value close to $m_+$ when $\beta$ is large. \figureref{fig:mL} contains the results of MC simulations showing the behavior of $\overline{m}_L$ as a function of $\beta h_+$ for different values of $\beta$, and for the various b.c.s considered above, for the dynamics undergoing the ISF (top panels) and the DB (bottom panels) updating mechanisms. It is worth noticing that, for fixed $\beta$ and $\beta h_+$, the b.c.s seem not to affect significantly the resulting value of $\overline{m}_L$ with the ISF updating mechanism. Another observation concerns the dependence of $\overline{m}_L$ on $\beta$, for a given $\beta h_+$. \figureref{fig:mL} shows that, with the ISF mechanism (top panels), $\overline{m}_L$ attains a value that is closer and closer to $m_+$ when $\beta$ is large, e.g. at $\beta=1$. For small values of $\beta$ the two values $\overline{m}_L$ and $m_+$ start to deviate significantly, because then spin diffusion from the boundaries to the bulk of the lattice plays a major role. It is also worth noticing that in \figref{fig:mL} the behaviors of the DB-L4 and DB-SB models differ from one another. The origin of the peculiar behavior of the DB-SB model lies in the SB b.c. defined in \eqref{ghsp}, which equips the ghost spins with an average magnetization close to $m_+$. On the other hand, because of the local equilibrium between the reservoir and the rightmost column that is produced by the detailed balance, ${\cal R}_+$ does not impose its ``nominal'' magnetization $m_+$ on $\overline{m}_L$ (as it does with the ISF dynamics): the measured value of $\overline{m}_L$ is, indeed, typically larger than $m_+$ (cf. e.g. the plot referring to the DB-L4 dynamics). The outcome of the two competing effects is shown in the bottom right panel of \figref{fig:mL}. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{Figures-fig-check-ISF-hL-versus-hplus}\\ \includegraphics[width=0.8\textwidth]{Figures-fig-check-DB-hL-versus-hplus}\ \caption{Behavior of $\tanh^{-1}(\overline{m}_L)$ as a function of the dimensionless magnetic field $\beta h_+$, with $L=40$, for different values of $\beta$ and for the six $\mathcal{N}$-- and ${\cal L}$--models (from left to right): ISF-L4, ISF-FB, ISF-SB (top panels), and DB-L4, DB-FB, DB-SB (bottom panels). 
The dotted line corresponds to the curve $\overline{m}_L=m_+$, the dashed line represents the theoretical behavior of $\overline{m}_L$ for $\beta=0$, cf. \eqref{beta0theory}. The large circles in panel DB-L4 at $\beta h_+=0$ (absence of external field) correspond to $\tanh^{-1}(m_\beta)$, cf. \eqref{mbeta}.} \label{fig:mL} \end{figure} \begin{figure} \centering (a)\includegraphics[height=5.15cm]{Figures-fig-check-DB-hL-versus-hplus-SCHEMATIC} (b)\includegraphics[height=4.85cm]{Figures-fig-meff-vs-mplus} \caption{{\bf (a)} Here we reproduce the bottom left panel of \figref{fig:mL} for the mean magnetization at the boundary, to visualize the definition of $m^\textrm{eff}_+$, exemplified for the case of $\beta=1$ (green line). An arbitrary value of $m_+$ is singled out on the horizontal axis (blue bullet). The red bullet represents the ``effective magnetization'' $m^\textrm{eff}_+$. It yields, when used instead of $m_+$ during ISF-L4 dynamics, the same value $\overline{m}_L$ obtained with DB-L4 dynamics at $m_+$. $\overline{m}_L^\textrm{ISF-L4}\approx m_+$ (dashed black line) holds for this rather large value of $\beta$. $m^\textrm{eff}_+$ is larger than $m_\beta$ (green bullet). {\bf (b)} Effective $m^\textrm{eff}_+$ versus $m_+$ for the same $\beta$'s. Also included are the values $m_\beta$ (short horizontal lines). We find that $m^\textrm{eff}_+>m_\beta$ for all $m_+$ and all $\beta$ investigated. Because the calculation of $m^\textrm{eff}_+$ requires an inversion of $\overline{m}_L^\textrm{ISF-L4}$ with respect to $m_+$, the error bars are of the order of the visible undulations, and some data points at large $m_+$ have even been skipped. ISF-L4 (and also DB-SB) dynamics equipped with $m^\textrm{eff}_+$ instead of $m_+$ exhibit just standard Fickian diffusion (results not shown).} \label{fig:DB} \end{figure} Remarkably, only Fickian diffusion is observed with the DB-L4 dynamics. To see this, let us first introduce the effective magnetization $m^\textrm{eff}_+$, defined as the reservoir magnetization which, with an ISF dynamics, gives rise to the same value of the boundary magnetization $\overline{m}_L$ obtained with a DB dynamics (equipped with the same b.c.s as the ISF dynamics) in the presence of a reservoir magnetization $m_+$. Precisely, considering the L4 b.c.s, $m^\textrm{eff}_+$ solves the following relation: \begin{equation} \overline{m}_L^\textrm{ISF-L4}(\beta,m_+^\textrm{eff}) = \overline{m}_L^\textrm{DB-L4}(\beta,m_+)\, . \label{meff} \end{equation} Thus, \figref{fig:DB} reveals that, for any $\beta$, $m^\textrm{eff}_+$ is larger than $m_{\beta}$, which, as shown in \figref{fig3.1}, is also close to $m_\textrm{crit}^\textrm{ISF-L4}$. The result is, then, the onset of a stable phase, accompanied by a downhill current, cf. Section\ \ref{sec:results}. Turning to the DB-SB model, the effective magnetization obtained from the solution of an equation similar to \eqref{meff} turns out to be smaller, for certain values of $m_+$, than $m_\textrm{crit}^\textrm{ISF-SB}$, in which case uphill currents can indeed be observed, see \figref{maps}. Our results seem to indicate that purely Fickian diffusion is restored with the above DB-SB dynamics if one defines the SB boundary condition in \eqref{ghsp} differently, namely by replacing $m_+$ with the $m^\textrm{eff}_+$ obtained from the solution of \eqref{meff}, cf. the right panel of \figref{fig:DB} (the corresponding 2D map is not shown since it looks similar to the upper right panel of \figref{maps}). 
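In practice, the inversion required by \eqref{meff} can be performed with a monotone interpolant of the tabulated $\overline{m}_L^\textrm{ISF-L4}(\beta,\cdot)$ curve; a minimal sketch (assuming the curve has been measured on the $m_+$ grid and increases monotonically, which is what we observe):
\begin{verbatim}
import numpy as np

def m_eff_plus(mL_isf_tab, m_plus_grid, mL_db_value):
    # Solve eq. (meff): find m_+^eff such that
    #   mL_ISF-L4(beta, m_+^eff) = mL_DB-L4(beta, m_+) = mL_db_value.
    # mL_isf_tab must be tabulated on m_plus_grid and monotonically
    # increasing, so the curve can be inverted by swapping the axes.
    return np.interp(mL_db_value, mL_isf_tab, m_plus_grid)
\end{verbatim}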
Furthermore, the following argument allows us to interpret the behavior of $\overline{m}_L$ as a function of $m_+$ at $\beta=0$ for the ISF models. At $\beta=0$ all attempted bulk bond flips are accepted, $c_{i,j}^\textrm{bulk}=1$, and the dynamics is equivalent to that of a simple exclusion process \cite{simpleexclusion}. This implies that the magnetization profile $\overline{m}_x$ varies strictly linearly with $x$, as can be seen, for instance, by using duality \cite{CGGR2013}. The two coefficients characterizing the linear profile are obtained from the following two conditions: (i) for symmetry reasons $\overline{m}_1=-\overline{m}_L$, i.e., $\overline{m}_{(L+1)/2}=0$, and (ii) the magnetization at the boundary is determined with equal weight by bond flips from the adjacent layer with magnetization $\overline{m}_{L-1}$ and by the ISF updating scheme enforcing magnetization $m_+$. The resulting linear magnetization profile $\overline{m}_x$ and its values at the boundaries are \begin{eqnarray} \overline{m}_x = \frac{2x-1-L}{L+1}m_+, \qquad \overline{m}_L = -\overline{m}_1 = \frac{L-1}{L+1}m_+ \qquad (\beta=0)\, . \label{beta0theory} \end{eqnarray} As the case of $\beta=0$ provides a lower bound, and $(L-1)/(L+1)\to 1$ for large $L$, $\overline{m}_L\approx m_+$ is the $\beta$-independent result for large system sizes. The observed departures reflect the finite system size and can actually be used to measure it. In the following subsections we will present the results of a set of simulations at high ($\beta=0.3$) and low ($\beta=1$) temperatures. For both cases we will describe the asymptotic magnetization profile and the corresponding current. These results are taken from a much larger set of simulations and are selected in order to highlight the complex dynamics exhibited by our models. As could be expected, the scenario below the critical temperature is much more complex than above it, where no anomalies occur. \vskip .2cm \subsection{High temperature regime} \begin{figure}[tb] \centering \includegraphics[width=0.8\textwidth]{{Figures-manuscript-fig-msfig-mx-beta=0.3}.pdf} \caption{\label{fig1.a1}Same as \figref{fig1.d1} for $\beta=0.3$, i.e. $\beta < \beta_c$: an array of panels showing the magnetization profiles $\overline{m}_x(\tau)$ versus $x$ for different values of $\beta h_+$, for the three nonequilibrium models. } \end{figure} \begin{figure} \centering \includegraphics[width=0.7\textwidth]{Figures-manuscript-schematic-beta=03-ISF-L4-july2018.pdf} \caption{\label{schematic-beta=03-ISF-L4}Nonequilibrium current $J$ versus the dimensionless magnetic field $\beta h_+$ for $\beta=0.3$ (ISF-L4), together with selected spin configurations at time $t=\tau=10^{13}/N_b$. The corresponding magnetization profiles are shown in \figref{fig1.a1} in concert with profiles for the other two nonequilibrium models (ISF-FB and ISF-SB). Color code: spin +1 (black), spin -1 (olive).} \end{figure} \begin{figure} \centering \includegraphics[width=0.6\textwidth]{{Figures-manuscript-fig-msfig-E-beta=0.3}.pdf} \caption{\label{E-beta=03-ISF-L4}Mean total Hamiltonian, as defined in \eqref{H-Ising}, per bond versus $\beta h_+$ for $\beta=0.3$, after $\tau=10^{13}/N_b$ MC steps for each $h_+$ value. } \end{figure} We consider $\beta=0.3$ as a representative case of the high temperature regime. In \figref{fig1.a1} we display the stationary magnetization profiles $\overline{m}_{x}$ for different values of $\beta h_+$. In each panel the three models ISF-L4, ISF-FB and ISF-SB are considered. 
As is evident from the figure, the profiles for the three different b.c.s are qualitatively comparable, in that they all are non-decreasing, at least for not too small $\beta h_+$, say for $\beta h_+\ge 0.6$. However, the b.c.s play a role: while L4 and SB are essentially indistinguishable over the whole range of $\beta h_+$, FB coincides with the others only at high $\beta h_+$, including $m_+=1$; upon decreasing $\beta h_+$, the curvature of the FB profile appears clearly different from the others. For $\beta h_+<0.6$ the L4 and SB curves develop, possibly through the appearance of slightly pronounced local maxima and minima, an almost flat region in the bulk. A similar behavior also develops in the FB case, but for smaller $\beta h_+$. Such profiles sustain a negative current, as we can see in \figref{schematic-beta=03-ISF-L4}, where $J$ is plotted versus $\beta h_+$. Moreover, $J$ appears as a smooth curve that decreases monotonically as $\beta h_+$ increases from $0$. As $\beta h_+\to 0$ the system gets close to equilibrium, $m_+$ and $m_-=-m_+$ being close to each other since $m_+ \to 0$. Thus $\overline{m}_{x}$ approaches the flat equilibrium profile, as observed above, and the flux approaches 0. The analysis of the corresponding spin configurations shows a weak phase separation (without a sharp interface, see the slope in the center of the curve for $\beta h_+=2.1$) that becomes less and less visible approaching $\beta h_+=0$. This indicates that, as the system approaches equilibrium, the positive and negative phases become more and more random and intertwined: the configuration loses the instanton profile structure. The fact that the spin configuration becomes more disordered as $\beta h_+$ decreases is confirmed in \figref{E-beta=03-ISF-L4}, where the mean total Hamiltonian per bond $\overline{H}/N_b$ is represented for the ISF-L4, ISF-FB and ISF-SB models. Since the Hamiltonian can be written as $H(\sigma|\sigma^o)=2N_{-}-N_b$, where $N_{-}$ is the number of broken bonds (a broken bond has antiparallel spins), this quantity, which lies between $-1$ and $1$, gives a measure of the fraction of broken bonds over the total number of lattice bonds. Its value is $-1$ when no bond is broken and $1$ when all the bonds are broken, while for a sharp vertical interface (with essentially $L$ broken bonds) this ratio is $(1-2L)/(1+2L)$, whose value is $\approx -0.975$ for $L=40$. Also in this respect the L4 and SB models are the same, while FB follows a different, though qualitatively similar, path. \subsection{Low temperature regime} As a case study for the low temperature regime, we consider $\beta=1$, and write $h_+$ instead of $\beta h_+$ in this section, to simplify notation. The analysis of the time averaged magnetization profiles shown in \figref{fig1.d1} offers a rich scenario as $h_+$ decreases: starting from the instanton profile, which is common to all three ISF models in the stable parameter region (see the panel for $h_+=5.35$), we find a metastable phase where the instanton is replaced by the bump (see the panel for $h_+=3.7$). Then, continuing to decrease $h_+$, the system enters the weakly unstable phase with double-bump profiles (see the panel for $h_+=2.25$). In this case two (almost) flat regions with values close to $m_+$ and $m_-$ develop in the vicinity of the boundary with the opposite magnetization. Then, moving towards the boundary, the profile of $\overline{m}_x$ jumps sharply to the value imposed by the reservoir. 
The double-bump behavior described above corresponds to a configuration with an interface in the middle, but with the plus and minus phases interchanged, i.e.\ close, respectively, to ${\cal R}^-$ and ${\cal R}^+$. Because of this, two boundary layers appear. A configuration with this behavior is clearly visible in \figref{schematic-beta=1-ISF-L4} for $h_+=2.24$. Decreasing $h_+$ further, sharper maxima and minima appear, see e.g. $h_+=1.05$. Observe that, for a given $h_+$, while the behaviors of the ISF-L4 and ISF-SB models are similar (possibly after applying the left-right and spin flip symmetries, see e.g. $h_+=1.35$), the free b.c.s of the ISF-FB model generate quite different nonequilibrium stationary states. However, as in the high temperature case, the three curves seem to converge as $h_+\to 0$ (that is, approaching equilibrium), where the bulk profiles tend to flatten, although with two peaks close to the reservoirs that resemble those found by Rieder et al.\ \cite{Rieder1967} or the widely observed boundary resistance effects.

\begin{figure*} \centering \includegraphics[width=0.55\textwidth]{Figures-manuscript-fig-hysteresis-beta=1-run1.pdf} \caption{\label{fig-hysteresis-beta=1-run1}Zoom into the results for $J$ at $\beta=1$: a more detailed investigation of the hysteretic region at $h_+$ slightly below the critical $h_{\beta=1}\approx 3.96158$, where the results depend on the initial configuration. Cyan circles mark results obtained after $\tau=10^{13}/N_b$ starting from the default initial configuration A. The open squares are obtained by slowly increasing $h_+$ with a positive rate equal to $6.690\times10^{-14}$ per MC step, starting from the cyan configuration at $h_+=3.6$; the filled squares are obtained by slowly decreasing $h_+$ with a negative rate equal to $-9.315\times10^{-14}$ per MC step, starting from the cyan configuration at $h_+=3.7$.} \end{figure*}
\begin{figure} \centering \includegraphics[width=0.5\textwidth]{Figures-manuscript-fig-msfig-E-beta=1} \caption{\label{E-beta=1-ISF-L4}Mean total Hamiltonian, as defined in \eqref{H-Ising}, divided by the number of bonds vs $h_+$ for $\beta=1$. The dashed line marks the Onsager $h_{\beta=1}\approx 3.9616$ according to \eqref{mbeta}. In the limit $h_+\rightarrow\infty$ ($m_+=1$) the mean energy $\overline{H}/N_b$ approaches the energy of the default initial configuration A with a single straight line interface, $\overline{H}/N_b=(1-2L)/(1+2L)\approx -0.975$, while there are clearly about three line interfaces for $h_+\in[1.5,3.6]$, namely the ones visible in \figref{schematic-beta=1-ISF-L4}. $40\times 40$ grid. $\tau=10^{13}/N_b$ for each $h_+$. } \end{figure}

\figureref{schematic-beta=1-ISF-L4} shows the current for the ISF-L4 model: in contrast with the simpler high temperature case, in this low temperature region the current curve, reflecting the complexity of the magnetization profiles, is neither smooth nor monotonic. It crosses the zero line three times and shows the presence of a hysteretic region (close to $h_+=4$) where the observed stationary state depends on the initial configuration chosen in the MC simulation. A blow-up of this phenomenon is shown in \figref{fig-hysteresis-beta=1-run1}. Cyan circles mark results obtained after $\tau=10^{13}/N_b$ MC steps starting from the default initial configuration A. The open squares are obtained by slowly \emph{increasing} $h_+$, starting from the cyan configuration at $h_+=3.6$, while the filled squares are obtained by slowly \emph{decreasing} $h_+$, starting from the cyan configuration at $h_+=3.7$.
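For concreteness, the sweep protocol behind the open and filled squares can be summarized in schematic form. In the Python sketch below, \texttt{load\_configuration}, \texttt{mc\_step} and \texttt{measure\_current} are hypothetical stubs standing in for the actual ISF-L4 dynamics and measurement routines, which we do not reproduce here; only the quoted rates and starting fields are taken from the protocol above.
\begin{verbatim}
import numpy as np

# All helpers are hypothetical placeholders for the ISF-L4 dynamics.
def load_configuration(tag):           # placeholder: stored 'cyan' state
    return np.ones((40, 40), dtype=int)

def mc_step(sigma, beta, h_plus):      # placeholder: one MC step
    return sigma

def measure_current(sigma):            # placeholder: instantaneous current
    return 0.0

rate, h = 6.690e-14, 3.6       # quoted rate; use a larger one for a toy run
sigma, samples = load_configuration("cyan_h=3.6"), []
while h < 4.0:                 # increasing sweep (open squares)
    sigma = mc_step(sigma, beta=1.0, h_plus=h)
    h += rate                  # the field drifts slowly upward
    samples.append((h, measure_current(sigma)))
# Decreasing sweep (filled squares): rate = -9.315e-14, starting at h = 3.7.
\end{verbatim}
The quoted rates are tiny on the scale of a single MC step, so the sweep is quasi-static and the system can track (meta)stable branches.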
At these rates the hysteretic region is seen to extend over the range $h_+\in[3.46,3.91]$. The spin configurations corresponding to the magnetization profiles of ISF-L4 (represented by the red square lines in \figref{fig1.d1}) are shown in the surrounding panels of \figref{schematic-beta=1-ISF-L4}. Note that the mean total Hamiltonian per bond $\overline{H}/N_b$ exhibits, see \figref{E-beta=1-ISF-L4}, wide intervals in which the curves are almost constant, meaning that in those regions the interfaces separating the plus and minus phases are maintained. It is worth noting the difference with respect to the high temperature case, see \figref{E-beta=03-ISF-L4}, in which $\overline{H}/N_b$ varies smoothly without flat regions.

\begin{figure*}[tb] \centering (a)\includegraphics[width=0.46\textwidth]{{Figures-manuscript-fig-msfig-J-beta=0.3}.pdf} (b)\includegraphics[width=0.46\textwidth]{Figures-manuscript-fig-msfig-J-beta=1.pdf} \caption{\label{fig1.e}Current $J$ vs $\beta h_+$, reporting the different curves corresponding to (a) $\beta=0.3$ and (b) $\beta=1.0$. The dashed line in (b) marks the Onsager $h_{\beta=1}\approx 3.9616$ according to \eqref{mbeta}. Each point marks an independent simulation with $\tau=10^{13}/N_b$. $40\times 40$ grid. Initial configuration A. The red curves (ISF-L4) have already been shown together with configurations in \figsref{schematic-beta=1-ISF-L4}{schematic-beta=03-ISF-L4}. Note the different vertical scales in (a) and (b). } \end{figure*}

\figureref{fig1.e} shows the current $J$ as a function of $\beta h_+$ for the two cases $\beta=0.3$ (left panel) and $\beta=1$ (right panel). Together with the current curves already shown in \figsref{schematic-beta=03-ISF-L4}{schematic-beta=1-ISF-L4} for the ISF-L4 model, we also report those for the ISF-FB and ISF-SB models. The comparison confirms that also for this quantity the ISF-SB model behaves like the ISF-L4 model, while the ISF-FB model differs from the first two, though maintaining qualitatively similar trends within the same temperature regime. Again, the difference between the high and low temperature cases is evident. \figureref{fig-m} shows that at low temperature ($\beta=1$) there are intervals of $\beta h_+$ where the absolute value of the average magnetization $|\overline{m}|$ is non-zero, while at high temperature the mean magnetization vanishes for all $\beta h_+$.

\section{Conclusions}
\label{sec:concl}

We have investigated the behavior of the current $J$ and the boundary magnetization $\overline{m}_L$ for some 2D Ising models coupled to external magnetization reservoirs and equipped with different b.c.s, called respectively L4, FB and SB. In particular, we examined the sign of the stationary currents in the parameter space spanned by the variables $\beta$ and $m_+$. Our MC simulations indicate that stationary uphill currents do appear for certain values of the parameters of the model, as a result of the ISF spin-updating mechanism, which breaks the condition of detailed balance. We also highlighted the relation between $m_\textrm{crit}$ and $m_\textrm{eq}$, the former being an observable characterizing the nonequilibrium dynamics of the 2D Ising model in contact with external reservoirs, while the latter refers to an equilibrium Ising model with conservative dynamics. Our simulations show that, for fixed $L$, the two quantities both tend to Onsager's magnetization $m_{\beta}$ when $\beta$ is large.
Moreover, we studied in detail the behavior of the equation of state, namely the relation $J$ vs. $m_{+}$, for $\beta=0.3$ and $\beta=1$, representative, respectively, of the high and low temperature regimes. Lowering the parameter $m_+$ leads to novel stationary states, in which the current changes sign multiple times and more interfaces appear in the microscopic spin configurations. One open question, not addressed in this manuscript, concerns the presence of uphill diffusion in the infinite volume limit; more theoretical work is needed to clarify whether uphill currents persist in that limit. The analysis of 2D Ising models on a finite lattice, coupled to external reservoirs that break the condition of detailed balance, may be relevant in a variety of applications, e.g. in the investigation of mesoscopic systems, in which the notion of ``local equilibrium'' is not guaranteed. We thus expect uphill currents to play a major role in such systems, which do not follow the basic tenets of irreversible thermodynamics.

\begin{figure} \centering \includegraphics[width=0.5\textwidth]{{Figures-manuscript-fig-msfig-m-beta=1}.pdf} \caption{\label{fig-m}Absolute mean total magnetization $|\overline{m}|$ vs $h_+$ for $\beta=1$ (at $\tau=10^{13}/N_b$), obtained from the magnetizations $m(\sigma)$ defined in \eqref{magn} or, alternatively, from the averaged magnetization profiles $\overline{m}_x$ defined after \eqref{defmx}. For $\beta=0.3$ the mean magnetization vanishes for all $h_+$.} \end{figure}

\paragraph*{Acknowledgments.} The authors wish to thank Anna De Masi (University of L'Aquila) and Errico Presutti (Gran Sasso Science Institute) for inspiring this work and for many useful comments and discussions. C.~Giardin\`{a} is acknowledged for useful discussions. MC acknowledges financial support from FFABR 2017. CG and CV acknowledge financial support from Fondo di Ateneo per la Ricerca 2016 and 2017 (UniMoRe).

\bibliographystyle{plain}
{ "redpajama_set_name": "RedPajamaArXiv" }
8,033
Mili est un atoll des îles Marshall. Géographie Mili est composé de 92 îlots d'une superficie totale de , entourant un lagon de de superficie. Mili est l'atoll des Marshall qui comporte le plus de terres émergées après celui de Kwajalein mais son lagon est beaucoup plus petit. Il fait partie des îles Ratak. L'atoll était autrefois connecté à Nadikdik. De nos jours, les deux atolls sont séparés par le passage Klee. Démographie Le recensement de 1999 y dénombre habitants. Notes et références Atoll aux îles Marshall
{ "redpajama_set_name": "RedPajamaWikipedia" }
5,190
ANN ARBOR, Mich., July 28, 2017 (GLOBE NEWSWIRE) -- Zomedica Pharmaceuticals Corp. (TSX-V:ZOM), a veterinary pharmaceutical and health care solutions company, today announced the closing of the first tranche of a non-brokered private placement offering of up to 4,563,636 common shares (each a "Common Share") at a price of C$2.75 per Common Share for aggregate gross proceeds of up to C$12,550,000. An aggregate of 1,502,691 Common Shares have been issued under this first tranche for aggregate gross proceeds of C$4,132,400. All of the Common Shares issued in connection with this financing are subject to a statutory four-month hold period in accordance with applicable securities laws, which will expire on November 29, 2017. The balance of the offering remains ongoing, with additional tranches expected to be completed upon receipt of subscription materials and subscription funds. Except for statements of historical fact, this news release contains certain "forward-looking information" within the meaning of applicable securities law. Forward-looking information is frequently characterized by words such as "plan", "expect", "project", "intend", "believe", "anticipate", "estimate" and other similar words, or statements that certain events or conditions "may" or "will" occur. In particular, forward-looking information in this press release includes, but is not limited to the potential aggregate proceeds to be raised under the offering, the intended use of proceeds and the potential closing of additional tranches under the offering. Although we believe that the expectations reflected in the forward-looking information are reasonable, there can be no assurance that such expectations will prove to be correct. We cannot guarantee future results, performance or achievements. Consequently, there is no representation that the actual results achieved will be the same, in whole or in part, as those set out in the forward-looking information.
{ "redpajama_set_name": "RedPajamaC4" }
7,846
DoorMat::Engine.routes.draw do if DoorMat.configuration.define_door_mat_routes get '/sign_up' => 'sign_up#new', as: 'sign_up' post '/sign_up' => 'sign_up#create' get '/sign_in' => 'sign_in#new', as: 'sign_in' post '/sign_in' => 'sign_in#create' get '/sign_out' => 'sign_in#destroy', as: 'sign_out' post '/terminate_session/:guid' => 'sessions#terminate', as: 'terminate_session' get '/reconfirm_password' => 'reconfirm_password#new', as: 'reconfirm_password' post '/reconfirm_password' => 'reconfirm_password#create' get '/add_email' => 'manage_email#new', as: 'add_email' post '/add_email' => 'manage_email#create' post '/delete_email' => 'manage_email#destroy' post '/set_primary_email' => 'manage_email#set_primary_email' get '/email_confirmation_required' => 'static#email_confirmation_required', as: 'email_confirmation_required' get '/confirm_email/:token/:email' => 'activities#confirm_email', as: 'confirm_email' post '/resend_email_confirmation' => 'activities#resend_email_confirmation', as: 'resend_email_confirmation' post '/download_recovery_key' => 'activities#download_recovery_key', as: 'download_recovery_key' get '/change_password' => 'change_password#new', as: 'change_password' post '/change_password' => 'change_password#create' get '/sign_up_success' => 'static#sign_up_success', as: 'sign_up_success' get '/sign_in_success' => 'static#sign_in_success', as: 'sign_in_success' get '/add_email_success' => 'static#add_email_success', as: 'add_email_success' get '/confirm_email_success' => 'static#confirm_email_success', as: 'confirm_email_success' get '/change_password_success' => 'static#change_password_success', as: 'change_password_success' get '/reconfirm_password_success' => 'static#reconfirm_password_success', as: 'reconfirm_password_success' # skip_before_filter :require_valid_session for these routes get '/access_token/:token_for(/:token)' => 'password_less_session#access_token', :as => 'access_token_token_for_token' post '/access_token' => 'password_less_session#access_token_post' get '/sign_out_success' => 'static#sign_out_success' get '/forgot_password_verification_mail_sent' => 'static#forgot_password_verification_mail_sent' get '/forgot_password' => 'forgot_passwords#new' post '/forgot_password' => 'forgot_passwords#create' get '/choose_new_password/:token/:email' => 'forgot_passwords#choose_new_password', as: 'choose_new_password' post '/reset_password' => 'forgot_passwords#reset_password', as: 'reset_password' end end
{ "redpajama_set_name": "RedPajamaGithub" }
3,338
{"url":"https:\/\/wikieducator.org\/Secondary_Education_Commission_(1952-53)","text":"# Introduction\n\n*The secondary education appointed by the government of India in term of their Resolution number F 9-5\/52 B-1 dated 23 September 1952\n\n*The commission appointed by the government of India .\n\n*Dr A. Lakshmanswami Mudaliar (vice-chancellor madras university chairman).\n\n## AIMS OF SECONDARY EDUCATION COMMISSION\n\n*Development of democratic citizenship\n\n*Development of personality\n\n*Improvement of vocational capability and efficiency\n\n*Concept of world citizenship\n\n## EXAMINATIONAL REFORM\n\n*Balance mixture of essay type, short answer type and objective type question\n\n*Thought provoking question\n\n*There should be no optional questions\n\n*Question should be made to cover the maximum course\n\n*In place of one paper of three hour duration in a particular subject their should be two paper each of three hour\n\n*Class work be given some consideration\n\n*External exams may be supplemented by vivavoce\n\n*Marking norms should be carefully determined and prescribed\n\n*Difficult, as well as easy question should place in question paper\n\n*There should be no compulsory public examination\n\n*The number of external exam should be reduce\n\n*Cumulative records in respect of every child should be introduced and and maintained\n\n*Symbol \/ Grades in place of numerical marking should be introduced\n\n\t*Student may be allow to appear in two or three subject and complete the examinant in two consecutive year or so","date":"2022-05-28 14:12:15","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.18450437486171722, \"perplexity\": 6607.623801085752}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-21\/segments\/1652663016853.88\/warc\/CC-MAIN-20220528123744-20220528153744-00015.warc.gz\"}"}
null
null
{"url":"https:\/\/math.stackexchange.com\/questions\/3127609\/how-to-compute-t-n-1-i-i-in-t-for-lexicographic-ordering","text":"# How to compute $T'=\\{n + 1 - i : i \\in T \\}$ for lexicographic ordering?\n\nI have a question that came up during one of my combinatorial algorithm lectures, and could use some help. One of the theorems our book provides states that:\n\nLet $$S$$ consist of all $$k$$-element subsets of the $$n$$-set $$S=\\{1,\\dots,n\\}$$. Suppose that $$\\operatorname{rank}_L$$ denotes rank in the lexicographic ordering, and $$\\operatorname{rank}_C$$ denotes rank in the co-lexicographic ordering. Then, for any $$k$$-set $$T\\in S$$, we have $$\\operatorname{rank}_L(T)+\\operatorname{rank}_C(T')=\\binom{n}{k}-1,$$ where $$T'=\\{n+1-i : i \\in T \\}$$.\n\nThere was also an example provided where one subset $$T=\\{1,2,3\\}$$ had the corresponding $$T'=\\{3,4,5\\}$$, and I'm not entirely sure how this conclusion was reached. In this example $$n=5$$ and $$k=3$$.\n\nHow was $$T'$$ computed in this instance, and what would $$n$$ and $$i$$ be in the equation for $$T'$$?\n\n\u2022 Should that be $T'=\\{n-i+1:i\\in T\\}?$ \u2013\u00a0saulspatz Feb 26 at 16:17\n\u2022 @saulspatz in the textbook I'm using it has the formula as $T'=\\{n + 1 - i : i \\in T \\}$ \u2013\u00a0Ted Feb 26 at 16:21\n\u2022 That looks like a typo to me. If you use the formula I suggest, you'll see how to compute $T'.$ \u2013\u00a0saulspatz Feb 26 at 16:22\n\u2022 @saulspatz just confirmed that the textbook did have a typo! I'll fix my post accordingly. Still, what would $i$ be? For example, for the set $\\{1,2,3\\}$, would $n$ be 5, and $i$ be 1? That still wouldn't yield $\\{3,4,5\\}$ \u2013\u00a0Ted Feb 26 at 16:26\n\u2022 \"$\\left\\{n+1-i : i \\in T\\right\\}$\" is an instance of set-builder notation. Speaking in dynamical terms: you let $i$ run through $T$ and write down the resulting values of $n+1-i$; then you pack these resulting values into a set. So for $T = \\left\\{1,2,3\\right\\}$ and $n = 5$, you write down the values $5,4,3$ (obtained for $i$ being $1,2,3$, respectively) and pack them into a set; the resulting set is $\\left\\{5,4,3\\right\\} = \\left\\{3,4,5\\right\\}$. \u2013\u00a0darij grinberg Feb 26 at 17:00","date":"2019-04-25 10:34:28","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 18, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8737684488296509, \"perplexity\": 316.2130555194799}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-18\/segments\/1555578716619.97\/warc\/CC-MAIN-20190425094105-20190425120105-00057.warc.gz\"}"}
null
null
Q: Construct Ellipses and Ellipsoids in MATLAB from MATRICES Can some one explain how to draw ellipses and ellipsoids in MATLAB for two cases: Viz, for square and non-square matrices. Case 1) A = [25 28 31; 27 26 29; 30 27 28]; Case 2) B = [25 28 31; 27 26 29; 30 27 28; 29 27 38; 31 25 29]; Furthermore, how to calculate semi-axis length (i.e.xr, yr, and zr) of a given matrix for following MATLAB function. I know xc, yc, and zc are the mean for respective columns (i.e. x, y, and z) of the matrix. [x,y,z] = ellipsoid(xc,yc,zc,xr,yr,zr,n) Pleae note that I am new to both Quadric shapes and MATLAB, so please give more descriptive and detailed answer! Many Thanks A: Drawing an ellipse given the covariance matrix C: th = linspace(0, 2*pi, 500 ); xy = [cos(th);sin(th)]; RR = chol( C ); % cholesky decomposition exy = xy'*RR; %//' figure; plot( 2*exy(:,1)+mx, 2*exy(:,2)+my, 'r', 'LineWidth', 2 );
{ "redpajama_set_name": "RedPajamaStackExchange" }
6,876
#include "greentea-client/test_env.h" #include "inttypes.h" #include "val_greentea.h" void pal_mbed_os_compliance_test_initialize(void); void pal_mbed_os_compliance_test_destroy(void); #ifdef __cplusplus extern "C" { #endif /* globals */ test_status_buffer_t g_status_buffer; void mbed_val_test_init(uint32_t test_num, char8_t *desc, uint32_t test_bitfield) { /*global init*/ g_status_buffer.state = 0; g_status_buffer.status = VAL_STATUS_INVALID; mbed_val_print(PRINT_ALWAYS, "\nTEST: %d | DESCRIPTION: ", test_num); mbed_val_print(PRINT_ALWAYS, desc, 0); GREENTEA_SETUP(100, "default_auto"); mbed_val_set_status(RESULT_START(VAL_STATUS_SUCCESS)); pal_mbed_os_compliance_test_initialize(); return; } void mbed_val_test_exit(void) { uint32_t status = mbed_val_get_status(); pal_mbed_os_compliance_test_destroy(); /* return if test skipped or failed */ if (IS_TEST_FAIL(status) || IS_TEST_SKIP(status)) { GREENTEA_TESTSUITE_RESULT(false); } else { GREENTEA_TESTSUITE_RESULT(true); mbed_val_set_status(RESULT_END(VAL_STATUS_SUCCESS)); } } /** @brief - This function executes given list of tests from non-secure sequentially This covers non-secure to secure IPC API scenario @param - test_num : Test_num @param - tests_list : list of tests to be executed @param - server_hs : Initiate a server handshake @return - val_status_t **/ val_status_t mbed_val_execute_non_secure_tests(uint32_t test_num, client_test_t *tests_list, bool_t server_hs) { val_status_t status = VAL_STATUS_SUCCESS; int32_t test_status = VAL_STATUS_SUCCESS; psa_handle_t handle; uint32_t i = 1; test_info_t test_info; char testcase_name[100] = ""; bool continue_test = true; test_info.test_num = test_num; mbed_val_print(PRINT_TEST, "[Info] Executing tests from non-secure\n", 0); while (tests_list[i] != NULL) { memset(testcase_name, 0, 100); sprintf(testcase_name, "Check%" PRIu32, i); GREENTEA_TESTCASE_START(testcase_name); if (server_hs == TRUE) { /* Handshake with server tests */ test_info.block_num = i; status = mbed_val_execute_secure_test_func(&handle, test_info, SERVER_TEST_DISPATCHER_SID); if (VAL_ERROR(status)) { mbed_val_set_status(RESULT_FAIL(status)); mbed_val_print(PRINT_ERROR, "[Check%d] START\n", i); return status; } else { mbed_val_print(PRINT_DEBUG, "[Check%d] START\n", i); } } /* Execute client tests */ test_status = tests_list[i](CALLER_NONSECURE); if (server_hs == TRUE) { /* Retrive Server test status */ status = mbed_val_get_secure_test_result(&handle); } if (test_status != VAL_STATUS_SUCCESS) { status = VAL_STATUS_ERROR; } if (IS_TEST_SKIP(status)) { mbed_val_set_status(status); mbed_val_print(PRINT_DEBUG, "[Check%d] SKIPPED\n", i); GREENTEA_TESTCASE_FINISH(testcase_name, 1, 0); continue_test = false; } else if (VAL_ERROR(status)) { mbed_val_set_status(RESULT_FAIL(status)); if (server_hs == TRUE) { mbed_val_print(PRINT_ERROR, "[Check%d] FAILED\n", i); } GREENTEA_TESTCASE_FINISH(testcase_name, 0, 1); continue_test = false; } else { if (server_hs == TRUE) { mbed_val_print(PRINT_DEBUG, "[Check%d] PASSED\n", i); } GREENTEA_TESTCASE_FINISH(testcase_name, 1, 0); continue_test = true; } if (!continue_test) { return status; } i++; } return status; } /** @brief - Records the state and status of test @return - val_status_t **/ val_status_t mbed_val_set_status(uint32_t status) { g_status_buffer.state = ((status >> TEST_STATE_BIT) & TEST_STATE_MASK); g_status_buffer.status = (status & TEST_STATUS_MASK); return VAL_STATUS_SUCCESS; } /** @brief - Updates the state and status for a given test @return - test status **/ uint32_t 
mbed_val_get_status(void) { return ((g_status_buffer.state) << TEST_STATE_BIT) | (g_status_buffer.status); } /** @brief - This function is used to handshake between: - nonsecure client fn to server test fn - secure client fn and server test fn - nonsecure client fn to secure client test fn @param - handle : handle returned while connecting given sid @param - test_info : Test_num and block_num to be executed @param - sid : RoT service to be connected. Partition dispatcher sid @return - val_status_t **/ val_status_t mbed_val_execute_secure_test_func(psa_handle_t *handle, test_info_t test_info, uint32_t sid) { uint32_t test_data; val_status_t status = VAL_STATUS_SUCCESS; psa_status_t status_of_call = PSA_SUCCESS; *handle = pal_ipc_connect(sid, 0); if (*handle < 0) { mbed_val_print(PRINT_ERROR, "Could not connect SID. Handle=%x\n", *handle); return VAL_STATUS_CONNECTION_FAILED; } test_data = ((uint32_t)(test_info.test_num) | ((uint32_t)(test_info.block_num) << BLOCK_NUM_POS) | ((uint32_t)(TEST_EXECUTE_FUNC) << ACTION_POS)); psa_invec data[1] = {{&test_data, sizeof(test_data)}}; status_of_call = pal_ipc_call(*handle, data, 1, NULL, 0); if (status_of_call != PSA_SUCCESS) { status = VAL_STATUS_CALL_FAILED; mbed_val_print(PRINT_ERROR, "Call to dispatch SF failed. Status=%x\n", status_of_call); pal_ipc_close(*handle); } return status; } /** @brief - Print module. This is client interface API of secure partition mbed_val_print_sf API for nspe world @param - verbosity: Print verbosity level - string : Input string - data : Value for format specifier @return - val_status_t **/ val_status_t mbed_val_print(print_verbosity_t verbosity, const char *string, uint32_t data) { if (data != 0) { printf(string, data); } else { printf(string); } return VAL_STATUS_SUCCESS; } /** @brief - This function is used to retrive the status of previously connected test function using mbed_val_execute_secure_test_func @param - handle : handle of server function. Handle of Partition dispatcher sid @return - The status of test functions **/ val_status_t mbed_val_get_secure_test_result(psa_handle_t *handle) { uint32_t test_data; val_status_t status = VAL_STATUS_SUCCESS; psa_status_t status_of_call = PSA_SUCCESS; test_data = (TEST_RETURN_RESULT << ACTION_POS); psa_outvec resp = {&status, sizeof(status)}; psa_invec data[1] = {{&test_data, sizeof(test_data)}}; status_of_call = pal_ipc_call(*handle, data, 1, &resp, 1); if (status_of_call != PSA_SUCCESS) { status = VAL_STATUS_CALL_FAILED; mbed_val_print(PRINT_ERROR, "Call to dispatch SF failed. Status=%x\n", status_of_call); } pal_ipc_close(*handle); return status; } /** * @brief Connect to given sid @param -sid : RoT service id @param -minor_version : minor_version of RoT service @param -handle - return connection handle * @return val_status_t */ val_status_t mbed_val_ipc_connect(uint32_t sid, uint32_t minor_version, psa_handle_t *handle) { *handle = pal_ipc_connect(sid, minor_version); if (*handle < 0) { return VAL_STATUS_CONNECTION_FAILED; } return VAL_STATUS_SUCCESS; } /** * @brief Call a connected Root of Trust Service.@n * The caller must provide an array of ::psa_invec_t structures as the input payload. * * @param handle Handle for the connection. * @param in_vec Array of psa_invec structures. * @param in_len Number of psa_invec structures in in_vec. * @param out_vec Array of psa_outvec structures for optional Root of Trust Service response. * @param out_len Number of psa_outvec structures in out_vec. 
* @return val_status_t */ val_status_t mbed_val_ipc_call(psa_handle_t handle, psa_invec *in_vec, size_t in_len, psa_outvec *out_vec, size_t out_len) { psa_status_t call_status = PSA_SUCCESS; call_status = pal_ipc_call(handle, in_vec, in_len, out_vec, out_len); if (call_status != PSA_SUCCESS) { return VAL_STATUS_CALL_FAILED; } return VAL_STATUS_SUCCESS; } /** * @brief Close a connection to a Root of Trust Service. * Sends the PSA_IPC_DISCONNECT message to the Root of Trust Service so it can clean up resources. * * @param handle Handle for the connection. * @return void */ void mbed_val_ipc_close(psa_handle_t handle) { pal_ipc_close(handle); } /** * @brief reprogram the watchdog timer * always succeeds on mbed-greentead testing. * * @param timeout_type type of timeout. * @return val_status_t */ val_status_t mbed_val_wd_reprogram_timer(wd_timeout_type_t timeout_type) { return VAL_STATUS_SUCCESS; } #ifdef __cplusplus } // extern "C" #endif
{ "redpajama_set_name": "RedPajamaGithub" }
7,354
Divisionen 2021-22 var turneringen om mesterskabet på niveau 2 i dansk ishockey i sæsonen 2021-22. Holdene spillede udelukkende om mesterskabet på andet niveau i ligasystemet, og der var ingen automatisk oprykning til Metal Ligaen. Turneringen havde deltagelse af ni hold, der først spillede et grundspil opdelt i to geografiske kredse med fire hold i vest-kredsen og fem hold i øst-kredsen, hvorefter de fire bedste hold i hver kreds gik videre til slutspillet. Turneringen blev vundet af Rødovre SIK, som i finaleserien besejrede Hvidovre IK med 3-0 i kampe. Den afgørende kamp blev spillet den 18. april 2022 i Frihedens Idrætscenter i Hvidovre, hvor udeholdet fra Rødovre vandt den tredje kamp i træk i finaleserien med 6-2. Finalen var et opgør mellem de to bedste hold i grundspillets østkreds - Hvidovre IK, der havde vundet kredsen med tre points forspring til Rødovre SIK. Det var første gang siden sæsonen 2014-15, at Rødovre SIK vandt den næstbedste række. Hold Ligaen havde deltagelse af ni hold: reserveholdene for fem af holdene i Metal Ligaen og førsteholdet fra fire andre klubber. I forhold til den foregående sæson havde Esbjerg IK, Frederikshavn IK, Herlev IK og Herning IK forladt divisionen. Grundspil Grundspillet blev spillet i perioden 2. september 2021 - 26. marts 2022, og de ni hold var opdelt i to geografiske kredse: en østkreds med fem hold og en vestkreds med fire hold. Der blev uddelt point efter IIHF's trepointsystem: Sejr i ordinær spilletid gav 3 point. Sejr i forlænget spilletid eller straffeslagskonkurrence gav 2 point. Nederlag i forlænget spilletid eller straffeslagskonkurrence gav 1 point. Nederlag i ordinær spilletid gav 0 point. I kampe, der blev aflyst pga. COVID-19-pandemien, blev begge hold tildelt 1,5 point. Øst-kredsen De fem hold i øst-kredsen spillede en femdobbelt-turnering alle-mod-alle, hvilket gav 20 kampe til hvert hold. Kampprogram Vest-kredsen De fire hold spillede en firedobbelt-turnering alle-mod-alle, hvilket gav 12 kampe til hvert hold. Kampprogram Slutspil Slutspillet havde deltagelse af de fire bedste hold fra hver grundspilspulje. Kvart- og semifinaleserierne spilledes bedst af tre kampe, mens finaleserien afvikledes bedst af fem kampe. Uafgjorte kampe blev forlænget med op til 5 minutters sudden death med tre markspillere på hvert hold. Hvis dette ikke frembragte en afgørelse, blev kampene afgjort i straffeslagskonkurrence. Hold Kvartfinaler I kvartfinalerne parredes holdene på tværs af de to geografiske puljer, således at nr. 1 fra øst-kredsen mødte nr. 4 fra vest-kredsen, nr. 2 fra øst-kredsen mødte nr. 3 fra vest-kredsen osv. Semifinaler I semifinalerne blev kampene sammensat ud fra holdenes pointgennemsnit pr. spillet kamp i grundspillet. Holdet med det højeste pointgennemsnit, Hvidovre IK (2,15 point pr. kamp), mødte holdet med det laveste pointgennemsnit, IC Gentofte Stars (1,525 point pr. kamp), i den ene semifinale, mens holdene med det næsthøjeste og næstlaveste gennemsnit, Rødovre SIK (2,00 point pr. kamp) og Vojens IK (1,92 point pr. kamp), mødtes i den anden semifinale. Finale Se også Superisligaen 2021-22 Metal Final4 2021-22 Kilder / eksterne henvisninger DIU - Division Vest 2021-22 DIU - Division Øst 2021-22 DIU - Divisionen Slutspil 2021-22 Noter 2021-22 Ishockey i 2021 Ishockey i 2022
{ "redpajama_set_name": "RedPajamaWikipedia" }
5,816
from apache.thermos.config.schema import * # TODO(wickman) Bind {{mesos.instance}} to %shard_id% class MesosContext(Struct): # The instance id (i.e. replica id, shard id) in the context of a task instance = Required(Integer) class UpdateConfig(Struct): batch_size = Default(Integer, 1) restart_threshold = Default(Integer, 60) watch_secs = Default(Integer, 45) max_per_shard_failures = Default(Integer, 0) max_total_failures = Default(Integer, 0) rollback_on_failure = Default(Boolean, True) wait_for_batch_completion = Default(Boolean, False) pulse_interval_secs = Integer class HealthCheckConfig(Struct): initial_interval_secs = Default(Float, 15.0) interval_secs = Default(Float, 10.0) timeout_secs = Default(Float, 1.0) max_consecutive_failures = Default(Integer, 0) endpoint = Default(String, '/health') expected_response = Default(String, 'ok') expected_response_code = Default(Integer, 0) class HttpLifecycleConfig(Struct): # Named port to POST shutdown endpoints port = Default(String, 'health') # Endpoint to hit to indicate that a task should gracefully shutdown. graceful_shutdown_endpoint = Default(String, '/quitquitquit') # Endpoint to hit to give a task it's final warning before being killed. shutdown_endpoint = Default(String, '/abortabortabort') class LifecycleConfig(Struct): http = HttpLifecycleConfig DisableLifecycle = LifecycleConfig() DefaultLifecycleConfig = LifecycleConfig(http = HttpLifecycleConfig()) class Announcer(Struct): primary_port = Default(String, 'http') # Portmap can either alias two ports together, e.g. # aurora <= http # Or it can be used to alias static ports to endpoints, e.g. # http <= 80 # https <= 443 # aurora <= https portmap = Default(Map(String, String), { 'aurora': '{{primary_port}}' }) # The executorConfig populated inside of TaskConfig. class MesosTaskInstance(Struct): task = Required(Task) instance = Required(Integer) role = Required(String) announce = Announcer environment = Required(String) health_check_config = Default(HealthCheckConfig, HealthCheckConfig()) lifecycle = LifecycleConfig class Parameter(Struct): name = Required(String) value = Required(String) class Docker(Struct): image = Required(String) parameters = Default(List(Parameter), []) class Container(Struct): docker = Docker class MesosJob(Struct): name = Default(String, '{{task.name}}') role = Required(String) contact = String cluster = Required(String) environment = Required(String) instances = Default(Integer, 1) task = Required(Task) announce = Announcer tier = String cron_schedule = String cron_collision_policy = Default(String, "KILL_EXISTING") update_config = Default(UpdateConfig, UpdateConfig()) constraints = Map(String, String) service = Default(Boolean, False) max_task_failures = Default(Integer, 1) production = Default(Boolean, False) priority = Default(Integer, 0) health_check_config = Default(HealthCheckConfig, HealthCheckConfig()) # TODO(wickman) Make Default(Any, LifecycleConfig()) once pystachio #17 is addressed. lifecycle = Default(LifecycleConfig, DefaultLifecycleConfig) task_links = Map(String, String) # Unsupported. See AURORA-739 enable_hooks = Default(Boolean, False) # enable client API hooks; from env python-list 'hooks' container = Container Job = MesosJob Service = Job(service = True)
{ "redpajama_set_name": "RedPajamaGithub" }
6,516
Q: Ad Block Plus Blocking jQuery Script? I have a script that pulls data from my CMS and then allows a person to vote on a poll. The script works fine. However, I have Ad Block Plus Plugin installed in Firefox. When that is enabled to blocks the script from submitting the form correctly. It appears to submit correctly in the front end but is never registered in the back end. Why does Ad Block Plus block my script that has nothing to do with ads? The script is below: $(document).ready(function () { var Engine = { ui: { buildChart: function() { if ($("#pieChart").size() === 0) { return; } var pieChartData = [], totalVotes = 0, $dataItems = $("ul.key li"); // grab total votes $dataItems.each(function (index, item) { totalVotes += parseInt($(item).data('votes')); }); // iterate through items to draw pie chart // and populate % in dom $dataItems.each(function (index, item) { var votes = parseInt($(item).data('votes')), votePercentage = votes / totalVotes * 100, roundedPrecentage = Math.round(votePercentage * 10) / 10; $(this).find(".vote-percentage").text(roundedPrecentage); pieChartData.push({ value: roundedPrecentage, color: $(item).data('color') }); }); var ctx = $("#pieChart").get(0).getContext("2d"); var myNewChart = new Chart(ctx).Pie(pieChartData, {}); }, // buildChart pollSubmit: function() { if ($("#pollAnswers").size() === 0) { return; } var $form = $("#pollAnswers"), $radioOptions = $form.find("input[type='radio']"), $existingDataWrapper = $(".web-app-item-data"), $webAppItemName = $existingDataWrapper.data("item-name"), $formButton = $form.find("button"), bcField_1 = "CAT_Custom_1", bcField_2 = "CAT_Custom_2", bcField_3 = "CAT_Custom_3", $formSubmitData = ""; $radioOptions.on("change", function() { $formButton.removeAttr("disabled"); // enable button var chosenField = $(this).data("field"), // gather value answer_1 = parseInt($existingDataWrapper.data("answer-1")), answer_2 = parseInt($existingDataWrapper.data("answer-2")), answer_3 = parseInt($existingDataWrapper.data("answer-3")); if (chosenField == bcField_1) { answer_1 = answer_1 + 1; $formSubmitData = { ItemName: $webAppItemName, CAT_Custom_1: answer_1, CAT_Custom_2: answer_2, CAT_Custom_3: answer_3 }; } if (chosenField == bcField_2) { answer_2 = answer_2 + 1; $formSubmitData = { ItemName: $webAppItemName, CAT_Custom_1: answer_1, CAT_Custom_2: answer_2, CAT_Custom_3: answer_3 }; } if (chosenField == bcField_3) { answer_3 = answer_3 + 1; $formSubmitData = { ItemName: $webAppItemName, CAT_Custom_1: answer_1, CAT_Custom_2: answer_2, CAT_Custom_3: answer_3 }; } prepForm($formSubmitData); }); function prepForm(formSubmitData) { $formButton.click(function(e) { e.preventDefault(); logAnonUserIn("anon", "anon", formSubmitData); // log user in }); // submit } // prepForm function logAnonUserIn(username, password, formSubmitData) { $.ajax({ type: 'POST', url: '/ZoneProcess.aspx?ZoneID=-1&Username=' + username + '&Password=' + password, async: true, beforeSend: function () {}, success: function () {}, complete: function () { fireForm(formSubmitData); } }); } // logAnonUserIn function fireForm(formSubmitData) { // submit the form var url = "/CustomContentProcess.aspx?A=EditSave&CCID=13998&OID=3931634&OTYPE=35"; $.ajax({ type: 'POST', url: url, data: formSubmitData, async: true, success: function () {}, error: function () {}, complete: function () { window.location = "/"; } }); } } // pollSubmit } // end ui }; Engine.ui.buildChart(); Engine.ui.pollSubmit(); }); A: As it turns out easylist contains this filter: .aspx?zoneid= This is why my 
script is being blocked. I was told I can try this exception filter: @@||example.com/ZoneProcess.aspx?*$xmlhttprequest I could also ask easylist to add an exception. Answer comes from Ad Block Plus Forums.
{ "redpajama_set_name": "RedPajamaStackExchange" }
1,660
\section{Introduction} \label{sec:intro} A subgraph $H$ of an edge-coloured graph $G$ is called \defn{rainbow} if all its edges have different colours. Rainbow colourings appear for example in canonical Ramsey theory, and many open problems in combinatorics such as the Ryser--Brualdi--Stein conjecture on partial transversals in Latin squares and the graceful labelling conjecture can be phrased as rainbow subgraph problems. The central question is under which conditions on $G$ and its edge colouring a rainbow copy of $H$ is guaranteed. Here, $H$ is usually a spanning subgraph such as a perfect matching~\cite{CPS:17,CP:17,HS:08,KY:17,pokrovskiy:16}, Hamilton cycle~\cite{AFR:95,APS:17,BM:17,BPS:17,CPS:17,ENR:83,FR:93,HT:86}, spanning tree~\cite{BLM:17,BPS:17,FK:08,PS:17}, or a general bounded degree graph~\cite{BKP:12,KSV:17,SV:17}. Closely related questions concern properly coloured subgraphs and rainbow decompositions, which we shall discuss briefly in Section~\ref{subsec:related stuff}. Clearly, a necessary condition for the existence of a rainbow copy of $H$ in $G$ is that $H$ is at least a subgraph of~$G$. Thus, the best one can hope for is to find a rainbow copy of $H$ in~$G$ `whenever' $H$ is a subgraph of~$G$. The blow-up lemma of Koml\'os, S\'ark\"ozy and Szemer\'edi is a powerful tool to find spanning subgraphs, which, since its invention roughly 20~years ago, has significantly shaped the landscape of extremal combinatorics~\cite{BST:09,KSS:98,KSS:98a,KSS:01,KO:09,KO:13}. It is tailored to be used after an application of Szemer\'edi's regularity lemma and roughly says that super-regular pairs behave like complete bipartite graphs in terms of embedding bounded degree subgraphs. In the present paper, we prove a rainbow blow-up lemma. As one application, we transfer the bandwidth theorem of B\"ottcher, Schacht and Taraz~\cite{BST:09} to the rainbow setting. It would be interesting to find out whether other results can be transferred in a similar way. In many of the classical rainbow problems, the host graph $G$ is complete. For instance, Erd\H{o}s and Stein asked for the maximal $k$ such that any $k$-bounded edge colouring of $K_n$ contains a rainbow Hamilton cycle (cf.~\cite{ENR:83}). Here, an edge colouring is \defn{$k$-bounded} if each colour appears on at most $k$ edges. After several subsequent improvements,\COMMENT{(see~e.g.~\cite{ENR:83,FR:93,HT:86})} Albert, Frieze, and Reed~\cite{AFR:95} showed that $k=\Omega(n)$, i.e.~there exists a constant $\mu>0$ such that for any $\mu n$-bounded edge colouring of~$K_n$, there exists a rainbow Hamilton cycle. Note that this is best possible up to the value of the constant~$\mu$. A natural generalization is to ask for general rainbow (spanning) subgraphs. For example, Frieze and Krivelevich~\cite{FK:08} showed that there exists some $\mu>0$ such that any almost spanning tree with bounded degree is contained as a rainbow copy in $K_n$ for any $\mu n$-bounded edge colouring of~$K_n$. This was greatly improved by B\"ottcher, Kohayakawa, and Procacci~\cite{BKP:12}, who showed the following very general result. Given any $n/(51\Delta^2)$-bounded edge colouring of $K_n$ and any graph $H$ on $n$ vertices with $\Delta(H)\le \Delta$, one can find a rainbow copy of~$H$. Their proof is based on the Lopsided Lov\'asz local lemma as well as the framework of Lu and Sz\'ekely~\cite{LS:07} for random injections. Using these tools, they show that a random injection $V(H)\to V(K_n)$ yields with positive probability a rainbow copy of~$H$. 
Kam\v{c}ev, Sudakov, and Volec~\cite{KSV:17} recently extended this result to the setting where $G$ is complete multipartite, and Sudakov and Volec~\cite{SV:17} considered the case when the number of cherries in $H$ is restricted (instead of the maximum degree). There is a major stumbling stone if one wants to consider the above problem for incomplete host graphs, say, for example quasi-random graphs with density $d$ for some arbitrarily small (fixed) $d>0$. If $G$ is complete, then any injection $V(H)\to V(G)$ yields a valid embedding of $H$ (similarly for the multipartite setting). However, this is not the case for general host graphs $G$, where a random injection yields a valid embedding with exponentially small probability. Restricting the probability space to the `valid' injections does not seem to work with the Lu--Sz\'ekely framework, as the latter relies on the perfect symmetry of the setup. \COMMENT{Thus, the Lu--Sz\'ekely framework does not seem to be the right tool to study general host graphs~$G$.} Some recent results on rainbow subgraphs in incomplete host graphs with $\mu n$-bounded edge colourings were obtained using the so-called `switching method'. For example, Coulson and Perarnau~\cite{CP:17} found rainbow perfect matchings in Dirac bipartite graphs, improving an approximate result of \cite{CPS:17}. The crucial property is that given a perfect matching $M$ and an edge $e\in M$ (which is in conflict with another edge in~$M$), there are many ways of `switching $e$ out of $M$' to obtain a new perfect matching which does not contain~$e$. As another example, Coulson, Keevash, Perarnau, and Yepremyan~\cite{CKPY:18} show the existence of a rainbow $F$-factor in a graph $G$ whenever $\delta(G)\geq (\delta_F+o(1))n$, where $\delta_F$ is the minimum degree threshold for the existence of an $F$-factor (cf.~\cite{KO:09}). (Here $F$ is an arbitrary fixed hypergraph as their results also apply to hypergraphs.)\COMMENT{Divisibility} However, the switching method seems to be limited to `simply structured' spanning graphs~$H$ with rich symmetry properties. We are motivated by the following question: \begin{quote} \emph{Given a (dense) graph $G$ on $n$ vertices with a $\mu n$-bounded edge colouring and a (bounded degree) graph $H$ on $n$ vertices, is there a rainbow copy of $H$ in~$G$?} \end{quote} By proving a rainbow blow-up lemma, we provide a tool which allows for the systematic study of this question, profiting from various techniques and methods which have been developed in the non-coloured setting. In particular, we give affirmative answers to the above question if $G$ is quasi-random (see Corollary~\ref{cor:quasirandom}) or has sufficiently high minimum degree (see Section~\ref{sec:apps}). We remark that the constant $\mu>0$ we obtain is very small. Nevertheless, our rainbow blow-up lemma has applications beyond this setting, in at least the following two aspects. Firstly, it can still be applied even if the number of available colours is only slightly larger than the number of edges in the desired subgraph. Secondly, one can even obtain approximate decompositions, for example into Hamilton cycles and $H$-factors. This has recently been demonstrated by Kim, K\"uhn, Kupavskii and Osthus (see Section~\ref{subsec:related stuff} for further details). We will discuss the blow-up lemma in more detail in the next subsection. 
As mentioned above, commonly used techniques like the Lu--Sz\'ekely framework and the switching method do not seem capable of dealing with quasi-random host graphs $G$ and/or general graphs $H$. (Here, the idea would have been to use the original blow-up lemma as a `blackbox' result.) Another natural question is whether the proof of the blow-up lemma can be adapted to work in the rainbow setting. In the original proof due to Koml\'os, S\'ark\"ozy and Szemer\'edi, the vertices of $H$ are embedded one by one using a randomized algorithm, until all but a small fraction of the vertices are embedded, and the embedding is then completed using the K\"onig-Hall theorem. Note that this approach is extremely vulnerable in the rainbow setting, as already a constant number of choices can render the algorithm unsuccessful (as a vertex may be incident to only a constant number of different colours), which seems rather hopeless. To overcome these obstacles, several new ideas are needed. As an underlying strategy, we employ the alternative proof of the blow-up lemma given by R\"odl and Ruci\'nski, and combine it with various techniques such as the partial resampling algorithm, the switching method, and a parallelization of the embedding procedure. We will provide a detailed proof overview in Section~\ref{sec:sketch}. Sections~\ref{sec:colour splitting} and~\ref{sec:main proof} contain the main proof. In Section~\ref{sec:apps}, we demonstrate the applicability of our rainbow blow-up lemma. \subsection{Rainbow blow-up lemma}\label{subsec:blow-up intro} In order to state a simplified version of our rainbow blow-up lemma, we need some more terminology. For $k\in \bN$, we write $[k]_0:=[k]\cup \Set{0}=\Set{0,1,\dots,k}$. We say that $(H,G,R,(X_i)_{i\in[r]_0},(V_i)_{i\in[r]_0})$ is a \defn{blow-up instance} if the following hold: \begin{itemize} \item $H$ and $G$ are graphs, $(X_i)_{i\in[r]_0}$ is a partition of $V(H)$ into independent sets, $(V_i)_{i\in[r]_0}$ is a partition of $V(G)$, and $|X_i|=|V_i|$ for all $i\in[r]_0$; \item $R$ is a graph on $[r]$ such that for all distinct $i,j\in [r]$, the graph $H[X_i,X_j]$ is empty if $ij\notin E(R)$. \end{itemize} Here, $X_0$ and $V_0$ are so-called `exceptional sets'. For simplicity, we assume in this subsection that they are empty. For a graph $G$ and two disjoint subsets $S,T\In V(G)$, denote by $e_G(S,T)$ the number of edges of $G$ with one endpoint in $S$ and the other one in $T$, and define $$d_G(S,T):=\frac{e_G(S,T)}{|S||T|}$$ as the \defn{density} of the pair $S,T$ in $G$. We say that the bipartite graph $G[V_1,V_2]$ is \defn{lower $(\eps,d)$-super-regular} if \begin{itemize} \item for all $S\In V_1$ and $T\In V_2$ with $|S|\ge \eps|V_1|$, $|T|\ge \eps |V_2|$, we have $d_G(S,T)\ge d-\eps$; \item for all $i\in [2]$ and $v\in V_i$, we have $|N_G(v)\cap V_{3-i}|\ge (d-\eps)|V_{3-i}|$. \end{itemize} We say that the blow-up instance $(H,G,R,(X_i)_{i\in[r]_0},(V_i)_{i\in[r]_0})$ is \defn{lower $(\eps,d)$-super-regular} if for all $ij\in E(R)$, the bipartite graph $G[V_i,V_j]$ is lower $(\eps,d)$-super-regular. We now state a simplified version of the rainbow blow-up lemma. The full statement can be found in Lemma~\ref{lem:blow up} and also allows exceptional vertices and candidate sets. Moreover, it does not only apply to rainbow embeddings, but to slightly more general conflict-free embeddings. 
\begin{lemma}[Rainbow blow-up lemma---simplified]\label{lem:blow-up simple} For all $d,\Delta,r$, there exist $\eps=\eps(d,\Delta)$, $\mu=\mu(d,\Delta,r)>0$ and an $n_0\in \mathbb{N}$ such that the following holds for all $n\geq n_0$. Let $(H,G,R,(X_i)_{i\in[r]},(V_i)_{i\in[r]})$ be a lower $(\eps,d)$-super-regular blow-up instance. Assume further that \begin{enumerate}[label={\rm (\roman*)}] \item $\Delta(H), \Delta(R)\leq \Delta$, \item $|V_i|=(1\pm \eps)n/r$ for all $i\in[r]$. \end{enumerate} Then, given any $\mu n$-bounded edge colouring of $G$, there exists a rainbow embedding of $H$ into $G$ (where $X_i$ is mapped to $V_i$ for all $i\in [r]$). \end{lemma} Observe that $\eps$ does not depend on $r$, which is crucial in applications. Originally the blow-up lemma was formulated with $R$ being the clique on at most $\Delta$ vertices. In essentially all applications it is applied to many clique blow-ups iteratively. However, in order to have a useful rainbow blow-up lemma, we cannot apply it independently to two or more clique blow-ups as in both parts the blow-up lemma may use the same colour. We will discuss this issue further in Section~\ref{sec:sketch}. \subsection{A rainbow bandwidth theorem and quasirandom host graphs} \label{subsec:intro bandwidth} One of the fundamental results of extremal combinatorics is the Erd\H{o}s--Stone theorem, stating that for a fixed graph $H$, every large graph $G$ on $n$ vertices with average degree at least $(1-1/(\chi(H)-1)+o(1))n$ contains $H$ as a subgraph. The bandwidth theorem can be viewed as an analogue of the Erd\H{o}s--Stone theorem when $H$ is a spanning subgraph of~$G$. Clearly, in this setting, one has to replace the average degree condition by a minimum degree condition in order to obtain sensible results. Let $\ell:=\chi(H)$ and assume that $H$ has bounded degree. A long line of research confirmed for various cases that $\delta(G)\geq (1-1/\ell+o(1))n$ suffices to find $H$ as a spanning subgraph in $G$, for instance when $H$ is a spanning tree, a (power of a) Hamilton cycle, or a clique factor. A conjecture of Bollob\'as and Koml\'os, which became known as the bandwidth conjecture, attempted to generalize all the mentioned results by claiming that $\delta(G)\geq (1-1/\ell+o(1))n$ suffices whenever $H$ has not too strong expansion properties (in this case measured by the parameter bandwidth). The \emph{bandwidth} of a graph $H$ with vertex set $[n]$ is defined as $\min_{\pi}\max \{|\pi(i)-\pi(j)|\colon ij\in E(H)\}$ where the minimum ranges over all permutations on $[n]$. B\"ottcher, Schacht and Taraz~\cite{BST:09} proved the bandwidth conjecture roughly ten years ago. We refer to their paper for more information on the history of the conjecture. Graph classes with appropriately small bandwidth include for example powers of Hamilton cycles, $F$-factors and bounded degree trees and planar graphs. We also remark that the minimum degree bound given by the bandwidth theorem is not necessarily the optimal threshold (cf.~Section~\ref{subsec:bandwidth}). Here, we extend the bandwidth theorem to the rainbow setting. We will prove the following theorem in Section~\ref{subsec:bandwidth} by replacing the usual blow-up lemma with our rainbow blow-up lemma in the approach of~\cite{BST:09}. \begin{theorem}\label{thm:bandwsimple} For all $\delta,\Delta,\ell$, there are $\beta,\mu>0$ and an $n_0\in \mathbb{N}$ such that the following holds for all $n\geq n_0$. Suppose $G$ is a graph on $n$ vertices with $\delta(G)\geq (1-1/\ell+\delta)n$. 
Suppose $H$ is a graph on $n$ vertices with $\Delta(H)\leq \Delta$, bandwidth at most $\beta n$, and $\chi(H)\le \ell$. Then, given any $\mu n$-bounded edge colouring of $G$, the graph $G$ contains a rainbow copy of $H$. \end{theorem} A particular example which is covered by the bandwidth theorem is the case when $H$ is a tree. This was known as Bollob\'as' conjecture and was proved by Koml\'os, S\'ark\"ozy and Szemer\'edi~\cite{KSS:95}. In Section~\ref{sec:appstree}, we present an easy proof of this result in the rainbow setting. Our main motivation is to give the reader a straightforward example of how to apply our rainbow blow-up lemma. (The proof in~\cite{KSS:95} is without the blow-up lemma and thus much longer.) \medskip Another straightforward corollary of our rainbow blow-up lemma is the existence of rainbow bounded degree graphs in quasi-random host graphs. Let us say a graph $G$ on $n$ vertices is \emph{$(\eps,d)$-dense} if for all disjoint $S,T\subseteq V(G)$ with $|S|,|T|\geq \eps n$, we have $e_G(S,T)\geq d|S||T|$. \begin{cor} \label{cor:quasirandom} For all $d,\Delta$, there are $\eps,\mu>0$ and an $n_0\in \mathbb{N}$ such that the following holds for all $n\geq n_0$. Suppose $G,H$ are graphs on $n$ vertices, $\delta(G)\geq dn$, the graph $G$ is $(\eps,d)$-dense, and $\Delta(H)\leq \Delta$. Then, given any $\mu n$-bounded edge colouring of $G$, the graph $G$ contains a rainbow copy of $H$. \end{cor} \COMMENT{ \proof By the Hajnal--Szemer\'edi-theorem, there is a partition $(X_1,\ldots,X_{\Delta+1})$ of $V(H)$ into independent sets such that their sizes differ by at most $1$. Observe that with probability at least $1/2$ a random partition $(V_1,\ldots,V_{\Delta+1})$ of $V(G)$ such that $|V_i|=|X_i|$ for all $i\in [\Delta+1]$ induces lower $(2\eps,d/2)$-super-regular graphs $G[V_j,V_{j'}]$ for all distinct $j,j'\in [\Delta+1]$ by some standard Chernoff-type inequality. Let $(V_1,\ldots,V_{\Delta+1})$ be such a partition. Now the application of Lemma~\ref{lem:blow-up simple} with $R=K_{\Delta+1}$ completes the proof. \endproof} This is the first result that considers spanning rainbow embeddings in graphs of arbitrarily small (but fixed) density. \subsection{Further applications and related problems} \label{subsec:related stuff} An edge colouring is \defn{locally $k$-bounded} if each colour appears at most $k$ times on the edges which are incident to a single vertex. (For instance, a locally $1$-bounded colouring is proper.) For many of the aforementioned results, there is a corresponding result where the colouring is not $k$-bounded, but only locally $k$-bounded, and instead of aiming for a rainbow copy of $H$, one aims for a properly coloured copy of $H$. Our proof of the rainbow blow-up lemma transfers to this setting as well (with $k=\mu n$). In fact, it is then much easier, because there are only local conflicts and no global conflicts, and a simple modification of the original proofs already yields such a result. Note that every locally $O(1)$-bounded edge colouring is (globally) $O(n)$-bounded. For example, every proper edge colouring is $n/2$-bounded. Quite a few open problems are formulated in this setting. For example, the Ryser--Brualdi--Stein conjecture asks for a rainbow matching of size $n-1$ in every properly $n$-edge-coloured~$K_{n,n}$, and Andersen~\cite{andersen:89} conjectured that there is a rainbow path of length $n-2$ in every properly edge-coloured~$K_n$. The currently best approximate result for the first problem is due to Hatami and Shor~\cite{HS:08}. 
An approximate result for the second problem was achieved in a breakthrough result by Alon, Pokrovskiy and Sudakov~\cite{APS:17} and slightly improved by Balogh and Molla~\cite{BM:17}. One may also ask for any other rainbow subgraph with at most $n$ edges, for instance a particular (almost) spanning tree or an (almost) spanning collection of cycles. (Observe that a proper edge-colouring may only use $n$ colours. Hence it only makes sense to ask for rainbow subgraphs with at most $n$ edges.) Recently, Montgomery, Pokrovskiy and Sudakov~\cite{MPS:18a} showed that $K_n$ equipped with a locally $k$-bounded edge colouring contains any tree with at most $(1-o(1))n/k$ vertices as a rainbow subgraph. This also implies an approximate version of Ringel's conjecture on decompositions of $K_n$ into a given tree. Instead of finding a single rainbow subgraph, it is also natural to ask for decompositions into rainbow subgraphs. For instance, there are three conjectures due to Brualdi--Hollingsworth, Kaneko--Kano--Suzuki, and Constantine regarding the decomposition of properly edge-coloured complete graphs into rainbow (isomorphic) spanning trees. Montgomery, Pokrovskiy and Sudakov~\cite{MPS:18b} obtain strong results which prove these conjectures approximately (for not necessarily isomorphic trees). Independently, Kim, K\"uhn, Kupavskii and Osthus~\cite{KKKO:18} use our blow-up lemma to prove several related results. In their setting, the given colouring does not need to be proper, but can be locally $n/\log^5 n$-bounded, say. On the other hand, the global condition is slightly more restrictive than in a proper colouring. As one example, they show the following. \begin{theorem}[Kim, K\"uhn, Kupavskii and Osthus~\cite{KKKO:18}] For any $\eps > 0$, there exists $n_0\in \bN$ such that for all $n\geq n_0$, any $(1 - \eps)n/2$-bounded, locally $n/\log^5 n$-bounded edge-colouring of $K_n$ contains $(1/2- \eps)n$ edge-disjoint rainbow Hamilton cycles. \end{theorem} This also establishes approximate solutions to the above tree decomposition conjectures. Analogous results hold for approximate decompositions into rainbow $H$-factors~\cite{KKKO:18}. The following observation indicates why our rainbow blow-up lemma can be applied in this setting. Suppose for simplicity that we consider a properly edge-coloured complete graph. If one chooses a random subset $U$ of vertices of size $\mu n$, then with high probability, the colouring restricted to this subset is $\mu |U|$-bounded,\COMMENT{expected number of edges in $U$ of fixed colour is at most $(|U|/n)^2 n/2 =\mu |U|/2 $, appear independently because colouring is proper.} and the blow-up lemma can be applied to $U$ to complete a partial embedding (constructed outside $U$ using different methods, e.g.~via a hypergraph matching argument). Rainbow decomposition results may in fact hold in a more general setting. Kim, K\"uhn, Osthus and Tyomkyn~\cite{KKOT:ta} proved a blow-up lemma for approximate decompositions into arbitrary (bounded degree) graphs. In combination with our result and methods there is hope for a rainbow blow-up lemma for approximate decompositions. \section{Proof overview} \label{sec:sketch} Our starting point is the alternative proof of the blow-up lemma given by R\"odl and Ruci\'nski~\cite{RR:99}. We briefly sketch their approach, before considering the rainbow setting. Suppose that $(H,G,R,(X_i)_{i\in[r]},(V_i)_{i\in[r]})$ is a blow-up instance. 
Assume that $\Delta(H)\le \Delta$ and that $G[V_i,V_j]$ is super-regular (the regularity notions used here are defined formally in the notation section below) for all $1\le i<j\le r$, i.e.~$R=K_r$. We want to find an embedding $\phi$ of $H$ into $G$ (where $X_i$ is mapped onto $V_i$). The strategy is to embed $H$ in $r$ rounds, where in round $i$, all vertices of $X_i$ are embedded simultaneously into~$V_i$. Clearly, this corresponds to a perfect matching between $X_i$ and~$V_i$. Yet not every perfect matching yields a valid embedding. They keep track of this by defining a (dynamic) `candidacy graph' $A^j$ for each pair of clusters $(X_j,V_j)$. This candidacy graph depends on the partial embedding up to round $i$. More precisely, for $j>i$, let $A_i^j$ denote the candidacy graph for $(X_j,V_j)$ after $X_1,\dots,X_i$ have been embedded, where $xv\in E(A_i^j)$ if and only if $v$ is still a candidate to be the image of~$x$. In the beginning, all candidacy graphs are complete bipartite graphs, but during the embedding, they become gradually sparser. The main idea is to maintain super-regularity of the candidacy graphs. More precisely, given an embedding of the clusters $X_1,\dots,X_i$, we assume that the candidacy graphs $A_{i}^{i+1},\dots,A_i^{r}$ are super-regular (which is true for $i=0$), and now have to find a perfect matching $\sigma$ of $A_i^{i+1}$ (which defines an embedding of $X_{i+1}$ into $V_{i+1}$) in such a way that the updated candidacy graphs $A_{i+1}^{i+2},\dots,A_{i+1}^{r}$ are again super-regular. Fortunately, it turns out that if $\sigma$ is chosen randomly among all perfect matchings of $A_i^{i+1}$, then this desired property holds with high probability. The key tool to show this is the so-called Four graphs lemma from \cite{RR:99} (see~Lemma~\ref{lem:four graphs}). For this to work, we actually need a stronger assumption on~$H$, that is, we require that the clusters $X_i$ are not only independent in~$H$, but $2$-independent. However, using the Hajnal--Szemer\'edi theorem, it is not hard to reduce the general case to this setting. This will also be the first step in our proof (see Section~\ref{subsec:matching split}). Now, assume that $c\colon E(G)\to C$ is a $\mu n$-bounded edge colouring of $G$. We might try to proceed as above. Suppose that we have already found a rainbow embedding $\phi_i$ of $H[X_1\cup \dots \cup X_i]$ into $G[V_1\cup \dots \cup V_i]$ and that the candidacy graphs $A_{i}^{i+1},\dots,A_i^{r}$ are super-regular. Clearly, the definition of candidacy graphs needs to be adapted to the rainbow setting. Consider $x\in X_j$ and $v\in V_j$ for $j>i$. Let $F_{xv}$ denote the set of edges $\phi_i(y)v$ where $y\in N_H(x) \cap (X_1\cup \dots \cup X_i)$. Previously, $v$ was a candidate for $x$ if and only if $F_{xv}\In E(G)$, where $F_{xv}$ is the set of edges in $G$ which are used if $x$ is embedded at $v$. Now, we need in addition that the edges of $F_{xv}$ have mutually distinct colours, and do not have a colour which is already used by $\phi_i$. Suppose that this is the case and $A_{i}^{i+1}$ is the candidacy graph for $(X_{i+1},V_{i+1})$. In the uncoloured setting, any perfect matching $\sigma$ of $A_{i}^{i+1}$ yields a valid embedding of $H[X_1\cup \dots \cup X_{i+1}]$ into $G[V_1\cup \dots \cup V_{i+1}]$ (and almost all of them leave the updated candidacy graphs super-regular). In contrast to this, far from every perfect matching $\sigma$ of $A_{i}^{i+1}$ yields a rainbow embedding of $H[X_1\cup \dots \cup X_{i+1}]$. 
For this to be the case, we need that $c(F_{x\sigma(x)})\cap c(F_{x'\sigma(x')})=\emptyset$ for all distinct $x,x'\in X_{i+1}$. This can be modelled by defining a suitable conflict system on the edges of $A_{i}^{i+1}$, where two edges $xv,x'v'$ \defn{conflict} if $c(F_{xv})\cap c(F_{x'v'})\neq \emptyset$. Observe that a conflict-free perfect matching would yield a rainbow embedding of $H[X_1\cup \dots \cup X_{i+1}]$. Crucially, since $c$ is $\mu n$-bounded, every edge conflicts with at most $\Delta(H)\mu n$ other edges.\COMMENT{Here already assume $2$-independence} Using the switching method, we can show that a uniformly random perfect matching is conflict-free with positive probability. This step uses a recent result of Coulson and Perarnau~\cite{CP:17}. For the embedding procedure to work, we have to make sure that the updated candidacy graphs are again super-regular. However, this might easily be wrong, as we might accidentally isolate a vertex. For instance, vertex $v\in V_{i+2}$ might only see $\mu^{-2}$ colours, say, but all of those colours might already be used in the embedding $\phi_{i+1}$. Then vertex $v$ cannot be a candidate for any $x\in X_{i+2}$ and thus the embedding is stuck. To overcome this issue, we reserve an exclusive set of colours for each embedding round in the beginning. For instance, we might partition the colour set $C$ into sets $(C_{ij})_{1\le i<j\le r}$, and instead of embedding $H$ into $G$, we embed $H$ into the graph $G^\ast$ which between clusters $V_i,V_j$ only contains those edges of $G[V_i,V_j]$ whose colour is contained in $C_{ij}$. This decouples much of the colour dependency between the embedding rounds and allows us to analyse the updated candidacy graphs as before using the Four graphs lemma. Of course, this strategy only works if the pairs $G^\ast[V_i,V_j]$ are super-regular. A natural approach to show the existence of a suitable partition of $C$ is to partition $C$ randomly. Simplified, the problem we face here is as follows: Consider the complete bipartite graph $K_{n,n}$ with any $\mu n$-bounded edge colouring. If we `activate' each colour independently with probability $1/2$, what is the probability that the activated edges induce a $(\sqrt{\mu},1/2)$-super-regular pair, say? This seems to be an interesting problem in its own right, and we are not aware of any coverage in the literature. Heuristically one expects this to be a rare event of exponentially small probability. However, the Lov\'asz local lemma does not seem flexible enough to solve this problem. We will use the partial resampling algorithm introduced by Harris and Srinivasan~\cite{HS:18} to deal with this problem (see Section~\ref{sec:colour splitting}). One remaining issue arises from the applicability of the blow-up lemma. In the original version of the blow-up lemma, the regularity parameter~$\eps$ needs to be small as a function of the number of clusters~$r$. However, in a regularity partition obtained via an application of Szemer\'edi's regularity lemma, the number of clusters~$r$ is large as a function of~$\eps$. Usually, this is dealt with roughly as follows: one finds a vertex partition of the reduced graph $R$ into $r/k$ cliques of order $k$, where $k$ is `small', and then applies the blow-up lemma to each clique independently. This approach is not feasible in the rainbow setting, as the union of rainbow subgraphs does not necessarily yield a rainbow subgraph. 
Also, splitting the colour set in the beginning in order to reserve an exclusive set of colours for each application of the blow-up lemma does not solve this problem, as the density of the regular pairs is divided by $r/k$ through the splitting and will thus be too small compared to~$\eps$. We thus prove a `global' version of the blow-up lemma which allows the number of clusters $r$ to be large, but requires that $\Delta(R)\le \Delta$. Such a version of the blow-up lemma was first proved by Csaba~\cite{csaba:07} following the proof in~\cite{KSS:97}. The proof of R\"odl and Ruci\'nski~\cite{RR:99} can also be adapted to work in this setting (see e.g.~\cite{KKOT:ta}). However, we have to be careful with the colour splitting, for the very reason discussed above: if we reserved an exclusive colour set for each pair $V_i,V_j$ with $ij\in E(R)$, the density of the regular pairs would be far too low. This can be dealt with by `parallelizing' the embedding (for different reasons, this has also been done in~\cite{KKOT:ta}). We split $V(R)$ into $2$-independent sets $J_1,\dots,J_T$, where we might take $T=\Delta(R)^2+1$. We then reserve an exclusive colour set $C_{t_1t_2}$ for each pair $1\le t_1<t_2\le T$. The sparsified graph $G^\ast$ contains between clusters $V_i,V_j$ only those edges whose colour is contained in $C_{t_1t_2}$, where $i\in J_{t_1}$ and $j\in J_{t_2}$. We now embed $H$ in just $T$ rounds, where in round $t\in [T]$, we embed all clusters $X_i$ with $i\in J_t$ simultaneously. Consequently, the perfect matching $\sigma$ is not drawn as a uniform perfect matching of one candidacy graph, but is the union of $|J_t|$ such perfect matchings. Fortunately, this still works nicely with the switching method and the Four graphs lemma. The resulting proof can be found in Section~\ref{subsec:main proof}. \section{Notation} We only consider finite, undirected and simple graphs. For a graph $G$, we let $V(G)$ and $E(G)$ denote the vertex set and edge set, respectively. Moreover, for a vertex $v$, let $d_G(v)$ denote the degree of $v$, and let $d_G(v,A)$ denote the number of neighbours of $v$ in $A\In V(G)$. If we deal with several graphs simultaneously, we say \defn{$G$-neighbour} to clarify which graph we refer to. As usual, $\delta(G)$ and $\Delta(G)$ denote the minimum and maximum degree of $G$, respectively. For disjoint subsets $A,B\In V(G)$, let $G[A,B]$ denote the bipartite subgraph of $G$ between $A$ and $B$ and $G[A]$ the subgraph in $G$ induced by $A$. We let $G^2$ denote the square of $G$, i.e.~the graph obtained from $G$ by adding edges between vertices which have a common neighbour in~$G$. A subset $X\In V(G)$ is $2$-independent if it is independent in~$G^2$. Let $G$ be a graph. Given a set $C$, a function $c\colon E(G)\to 2^C$ is called an \emph{edge set colouring} of~$G$. A colour $\alpha \in C$ \emph{appears} on an edge $e$ if $\alpha\in c(e)$. We say that $c$ is \defn{$k$-bounded} if each colour appears on at most $k$ edges. Moreover, we say that $c$ is \defn{$(k,\Delta)$-bounded} if it is $k$-bounded and $|c(e)|\le \Delta$ for all $e\in E(G)$. For $C'\In C$, let $G_{C'}$ denote the subgraph of $G$ with vertex set $V(G)$ and edge set $\set{e\in E(G)}{c(e)\In C'}$. We let $d_G^{\alpha}(v)$ denote the number of edges incident to $v$ on which $\alpha$ appears. If $G$ is clear from the context, we may simply write $d_{C'}(v)$ and $d^\alpha(v)$ instead of $d_{G_{C'}}(v)$ and $d_G^{\alpha}(v)$, respectively. 
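To make the colouring notation concrete, the following minimal Python sketch (the code and all identifiers are ours and purely illustrative, not part of the formal development) represents an edge set colouring as a map from edges to sets of colours and computes the quantities $G_{C'}$ and $d^\alpha(v)$, together with a $k$-boundedness check.

\begin{verbatim}
# Minimal illustration of the edge set colouring notation (ad-hoc names).
# An edge is a frozenset {u, v}; c maps each edge to a set of colours.

def is_k_bounded(c, k):
    # c is k-bounded if each colour appears on at most k edges
    count = {}
    for cols in c.values():
        for a in cols:
            count[a] = count.get(a, 0) + 1
    return all(m <= k for m in count.values())

def edges_of_G_C(c, C_prime):
    # edge set of G_{C'}: edges e with c(e) a subset of C'
    return {e for e, cols in c.items() if cols <= C_prime}

def d_alpha(c, v, alpha):
    # d^alpha(v): number of edges incident to v on which alpha appears
    return sum(1 for e, cols in c.items() if v in e and alpha in cols)

# toy example on a triangle with a (2,2)-bounded edge set colouring
c = {frozenset({1, 2}): {"red"},
     frozenset({2, 3}): {"red", "blue"},
     frozenset({1, 3}): {"green"}}
assert is_k_bounded(c, 2)
assert edges_of_G_C(c, {"red", "blue"}) == {frozenset({1, 2}),
                                            frozenset({2, 3})}
assert d_alpha(c, 2, "red") == 2
\end{verbatim}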
Given $\eps>0$ and $d\in[0,1]$, a bipartite graph $G$ with vertex classes $V_1,V_2$ is called \defn{$(\eps,d)$-regular} if for all pairs $S\In V_1$ and $T\In V_2$ with $|S|\ge \eps|V_1|$, $|T|\ge \eps |V_2|$, we have $|d_G(S,T)-d|\le \eps$, where $d_G(S,T):=e_G(S,T)/(|S||T|)$ denotes the density of the pair $(S,T)$. If we only require the one-sided bound $d_G(S,T)\ge d-\eps$ for all such pairs $S,T$, then $G$ is called \defn{lower $(\eps,d)$-regular}. We simply say that $G$ is \defn{$\eps$-regular} if $G$ is $(\eps,d_G(V_1,V_2))$-regular. Moreover, an $(\eps,d)$-regular graph $G$ is called \defn{$(\eps,d)$-super-regular} if for all $v\in V_1$, we have $d_G(v)=(d\pm \eps)|V_2|$ and for all $v\in V_2$ we have $d_G(v)=(d\pm \eps)|V_1|$, and a lower $(\eps,d)$-regular graph $G$ is called \defn{lower $(\eps,d)$-super-regular} if for all $v\in V_1$, we have $d_G(v)\ge (d-\eps)|V_2|$ and for all $v\in V_2$ we have $d_G(v)\ge (d-\eps)|V_1|$. Clearly, if $G$ is $(\eps,d)$-super-regular, it is also lower $(\eps,d)$-super-regular. For a finite set $S$ and $i\in \mathbb{N}$, we write $\binom{S}{i}$ for the set of all subsets of $S$ of size $i$ and $2^S$ for the powerset of $S$. For a set $\Set{i,j}$, we sometimes simply write $ij$. For $a,b,c\in \mathbb{R}$, we write $a=b\pm c$ whenever $a\in [b-c,b+c]$. For $a,b,c\in (0,1]$, we sometimes write $a\ll b \ll c$ in our statements meaning that there are increasing functions $f,g:(0,1]\to (0,1]$ such that whenever $a\leq f(b)$ and $b \leq g(c)$, then the subsequent result holds. \section{Colour splitting} \label{sec:colour splitting} The goal of this section is to prove the following lemma, which will allow us to reserve exclusive colour sets for our embedding rounds in the proof of the rainbow blow-up lemma. Our main tools are the partial resampling algorithm introduced by Harris and Srinivasan~\cite{HS:18} and McDiarmid's inequality. \begin{lemma} \label{lem:separate colours} Let $1/n \ll \mu \ll 1/r,\eps, 1/t,d,1/\Delta$. Suppose that $G$ is a graph with vertex partition $V(G)=V_1\cupdot\dots\cupdot V_r$ and $R_S\In R$ are graphs on $[r]$ such that \begin{enumerate}[label={\rm (\roman*)}] \item $n\le |V_i|\le 2n$ for all $i\in[r]$; \item $G[V_i,V_j]$ is (lower) $(\eps,d)$-regular for all $ij\in E(R)$; \item $G[V_i,V_j]$ is (lower) $(\eps,d)$-super-regular for all $ij\in E(R_S)$. \end{enumerate} Let $c\colon E(G)\to 2^C$ be a $(\mu n,\Delta)$-bounded edge set colouring of $G$ with $|C|\le \mu^{-1.3}n$. Then there exists a partition $(C_\ell)_{\ell\in [t]}$ of $C$ such that for all $\ell \in [t]$ and all $ij \in E(R)$, there is a subgraph $G_\ell^{ij}\In G_{C_\ell}[V_i,V_j]$ such that $G_\ell^{ij}$ is (lower) $(2\eps,d/t^\Delta)$-regular, and (lower) $(2\eps,d/t^\Delta)$-super-regular if $ij \in E(R_S)$. \end{lemma} We remark that if $c$ is uniform, i.e.~$|c(e)|=\Delta$ for all $e\in E(G)$, then we can take $G_\ell^{ij}= G_{C_\ell}[V_i,V_j]$. In particular, this applies if $c$ is a normal edge colouring (with $\Delta=1$). In the general case, we introduce `dummy colours' to make $c$ uniform. In the proof of Lemma~\ref{lem:separate colours}, we will partition $C$ randomly and then show that with positive probability, all the obtained subpairs are (super-)regular. In order to motivate our approach, we briefly discuss the following simplified variant of the problem (which will be the running example throughout this section): Consider $K_{n,n}$ with any $\mu n$-bounded edge colouring. If we `activate' each colour independently with probability $1/2$, what is the probability that the activated edges induce a super-regular pair? (An edge is activated if its colour is activated.) 
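Before analysing this question formally, it is instructive to look at the experiment empirically. The following Monte Carlo sketch (entirely ours; the parameters are ad hoc and only meant for illustration) activates each colour of a $\mu n$-bounded colouring of $K_{n,n}$ with probability $1/2$ and records the density of the activated edges between a fixed pair of linear-sized sets.

\begin{verbatim}
# Monte Carlo look at the colour activation experiment on K_{n,n}
# (illustrative sketch; all parameters are ad hoc)
import random

n, mu = 60, 0.1
cap = int(mu * n)                       # each colour on at most mu*n edges
edges = [(u, v) for u in range(n) for v in range(n)]   # left u, right v

random.seed(0)
random.shuffle(edges)
colour = {e: i // cap for i, e in enumerate(edges)}    # mu*n-bounded

S = set(range(n // 2))                  # subset of the left class
T = set(range(n // 2))                  # subset of the right class

densities = []
for _ in range(200):
    # activate each colour independently with probability 1/2
    active = {a for a in set(colour.values()) if random.random() < 0.5}
    e_ST = sum(1 for (u, v) in edges
               if u in S and v in T and colour[(u, v)] in active)
    densities.append(e_ST / (len(S) * len(T)))

print(min(densities), max(densities))   # typically both close to 1/2
\end{verbatim}

In line with the discussion that follows, this `global' statistic concentrates around $1/2$ in such runs; the delicate part of the problem is the degree of a single vertex.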
Using McDiarmid's inequality, it is not hard to see that with high probability, the activated subgraph is indeed $(\eps,1/2)$-regular. We first recall McDiarmid's inequality and then prove a more elaborate version of what we just indicated. \begin{theorem}[McDiarmid's inequality, see~\cite{mcdiarmid:89}\COMMENT{Lemma~1.2}] \label{thm:McDiarmid} Suppose $X_1,\dots,X_m$ are independent Bernoulli random variables and suppose $w_1,\dots,w_m\in [0,W]$. Suppose $X$ is a real-valued random variable determined by $X_1,\dots,X_m$ such that changing the outcome of $X_i$ changes $X$ by at most $w_i$ for all $i\in [m]$. Then, for all $t>0$, we have $$\prob{|X-\expn{X}|\ge t} \le 2 \eul^{-\frac{2t^2}{W\sum_{i=1}^m w_i}}.$$ \end{theorem} \begin{prop}\label{prop:regularity slicing} Let $1/n \ll \mu \ll \eps , p,d,1/\Delta$. Suppose that $G$ is a (lower) $(\eps,d)$-regular graph between $V,V'$ with $\eps n\le |V|,|V'|\le 2n$ and let $c\colon E(G)\to \binom{C}{\Delta}$ be a $\mu n$-bounded edge set colouring of $G$. Suppose we choose a random subset $C'$ of $C$ by including every colour $\alpha \in C$ independently with probability $p$. Then with probability at least $1-\eul^{-\mu^{-0.9} n}$, the graph $G_{C'}$ is (lower) $(2\eps,p^{\Delta}d)$-regular. \end{prop} \proof We only consider the case when $G$ is $(\eps,d)$-regular. If $G$ is lower $(\eps,d)$-regular, the proof is similar. Consider sets $S\In V$, $T \In V'$ with $|S|\ge \eps |V|$ and $|T|\ge \eps |V'|$. Let $X(S,T)=e_{G_{C'}}(S,T)$ denote the number of edges $e$ between $S$ and $T$ such that $c(e)\subseteq C'$. Clearly, $\expn{X(S,T)}=p^\Delta e_G(S,T) = p^\Delta d |S||T| \pm \eps |S||T|$. Moreover, $X(S,T)$ is a random variable where a single choice of whether $\alpha\in C'$ changes $X(S,T)$ by at most $\mu n$. Hence, by Theorem~\ref{thm:McDiarmid}, \begin{align*} \prob{|X(S,T)-\expn{X(S,T)}|> \eps |S||T|} \le 2\eul^{-\frac{2(\eps |S||T|)^2}{\mu n \Delta e(G)}} \le 2\eul^{-\frac{\eps^{10}}{2\mu \Delta} n}. \end{align*} Thus, a union bound shows that the activated subgraph is not $(2\eps,p^\Delta d)$-regular with probability at most $2^{4n}2\eul^{-\frac{\eps^{10}}{2\mu \Delta}n}\le \eul^{-\mu^{-0.9} n}$. \endproof However, in order to show that the activated subgraph is super-regular, we have to control the degree of every vertex as well. Whilst for sets $S,T$ as above, the activation or deactivation of one colour has only a small effect, this is very different for the degree of a single vertex. To appreciate this issue, note that there might be only $\mu^{-1}$ colours present at a particular vertex. Hence, if we suppose the `activated' degrees of the vertices are mutually independent, then the probability that all `activated' degrees are in the desired range decays exponentially with~$n$. As a consequence, we cannot hope that a random partition satisfies the desired properties with high probability. We note that under the additional assumption that the given colouring is locally $o(n/\log n)$-bounded, we could apply McDiarmid's inequality for each vertex and conclude with a union bound that our desired properties hold with probability close to~$1$. We remark that in \cite{APS:17}, the authors considered a problem of a similar kind. They prove that if the colours of a proper edge colouring of $K_n$ are activated with probability~$1/2$, say, then with high probability, the activated subgraph has very good expansion properties. Our efforts in the rest of this section focus on overcoming this local barrier. 
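The following small sketch (again ours, with ad-hoc parameters) makes the barrier tangible: at a vertex whose $n$ incident edges repeat only $1/\mu$ colours, the activated degree moves in jumps of size $\mu n$, so it lands in a window of size $O(\sqrt{n})$ around $n/2$ only with probability bounded away from~$1$.

\begin{verbatim}
# Illustration of the local barrier (ad-hoc parameters)
import random

n, mu = 1000, 0.05
k = int(1 / mu)            # only 1/mu colours present at the vertex v
cap = int(mu * n)          # each colour on exactly mu*n of the n edges at v

random.seed(1)
trials, balanced = 10000, 0
for _ in range(trials):
    active = sum(random.random() < 0.5 for _ in range(k))
    deg = cap * active     # activated degree of v: jumps of size mu*n
    if abs(deg - n / 2) <= 2 * n ** 0.5:    # 'balanced' on a sqrt(n) scale
        balanced += 1

# roughly 0.5 in our runs; if m vertices behaved independently like
# this, all of them being balanced would have probability about 0.5**m
print(balanced / trials)
\end{verbatim}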
The approach we pursue is motivated by the Lov\'asz local lemma, but the Lov\'asz local lemma itself does not seem flexible enough for our purposes. We will discuss this in more detail in the next subsection. In order to motivate the next result, which allows us to split the degree of each vertex evenly among the colour sets, we note that (the simplified version of) this problem can be phrased as a `column-sparse assignment-packing problem'. Let $G$ be a graph on $n$ vertices and $c\colon E(G)\to C$ a $\mu n$-bounded edge colouring. Our aim is to find a subset $C'$ of `activated' colours such that for every vertex $v\in V(G)$, we have $d_{C'}(v)=d_G(v)/2\pm 2\mu n$, say. Define the matrix $\hat{A}$ whose rows are indexed by $V(G)$ and whose columns are indexed by $C$, where $\hat{A}_{v,i}:=d^i(v)$ is the number of edges at $v$ coloured~$i$. Let $$D:=\max_{i\in C}\sum_{v\in V(G)}d^i(v)$$ denote the maximum column sum of $\hat{A}$. Let $\hat{b}$ denote the vector indexed by $V(G)$ with $\hat{b}_v:=d_G(v)/2$. Define \begin{align*} \text{$A:= \left( \begin{array}{c} \hat{A}\\ -\hat{A} \end{array} \right)$ and $b:= \left( \begin{array}{c} \hat{b}\\ -\hat{b} \end{array} \right)$.} \end{align*} Let $x$ be a vector indexed by~$C$. Clearly, the linear system $Ax=b$ has the solution $x^\ast$ given by $x_i^\ast=1/2$ for all $i\in C$. The rounding theorem of Karp et al.~\cite{KLRTVV:87} implies the existence of an integral vector $x$ such that $|x_i-x^\ast_i|<1$ for all $i\in C$ and such that $(\hat{A}x)_v= (\hat{A} x^\ast)_v \pm D$ for all $v\in V(G)$. Clearly, $x$ is a $0/1$-vector. Let $C'$ be the subset of $C$ which represents the $1$-coordinates of $x$. Then we have $d_{C'}(v)=(\hat{A}x)_v=\hat{b}_v\pm D=d_G(v)/2\pm D$ for all $v\in V(G)$. Crucially, since $c$ is $\mu n$-bounded, we have that $D\le 2\mu n$, as desired. This is good news for our problem. However, in order to prove Lemma~\ref{lem:separate colours}, we need that a random partition of $C$ satisfies the desired degree properties with high enough probability so that after applying Proposition~\ref{prop:regularity slicing}, we can use a union bound to conclude that the sliced pairs are super-regular. We will deal with this in the next two subsections. The following corollary is the result of these efforts. With this tool at hand, we can deduce that the sliced pairs inherit not only regularity but also super-regularity, which is all we need to prove Lemma~\ref{lem:separate colours}. \begin{cor} \label{cor:degree split advanced} Suppose $1/n\ll \mu \ll 1/t,d,1/\Delta,1/r$ and let $\kappa:=4\sqrt{t^\Delta d^{-1}\mu}$. Let $G$ be a graph on $n$ vertices and let $c\colon E(G)\to \binom{C}{\Delta}$ be $\mu n$-bounded. Suppose for every vertex $v\in V(G)$, we are given a partition $S^v_1,\ldots,S_{m_v}^v$ of $N_G(v)$ into at most $r$ sets of size at least $drn$ each.\COMMENT{Allows us to use same $\kappa$ as in Lemma~\ref{lem:degree split advanced}} Suppose we split $C$ randomly into $C_1,\dots,C_t$, by independently assigning $\alpha \in C$ to $C_\ell$ with probability $1/t$. Then with probability at least $\frac{\kappa}{3}(1+\kappa)^{-|C|}$, the following holds: $d_{C_\ell}(v,S_i^v)= t^{-\Delta} |S_i^v| \pm (2t)^\Delta \kappa r n$ for all $v\in V(G)$, $i\in [m_v]$, and $\ell\in [t]$. \end{cor} \lateproof{Lemma~\ref{lem:separate colours}} We only consider the case when $G[V_i,V_j]$ is in fact $(\eps,d)$-(super)-regular. If $G[V_i,V_j]$ is only lower $(\eps,d)$-(super)-regular, the proof is similar. 
If $2\eps \ge d$, there is nothing to prove, so let us assume that $\eps\le d/2$. In order to apply Proposition~\ref{prop:regularity slicing} and Corollary~\ref{cor:degree split advanced}, we need the edge set colouring to be uniform. We thus add a set of new `dummy' colours $C'$ with $|C'|= \lceil \mu^{-1}\Delta n \rceil$. Let $$C^\ast:=C \cupdot C'$$ be the extended colour set and note that $|C^\ast|\le 2\mu^{-1.3}n$. Let $c'\colon E(G) \to \binom{C'}{\Delta}$ be any $\mu n$-bounded edge set colouring. Clearly, $c'$ exists since $\frac{\Delta e(G)}{\mu n} \le |C'|$. Now, let $c^*\colon E(G) \to \binom{C^\ast}{\Delta}$ be such that \begin{align} \text{$c^* (e) \In c(e) \cup c'(e)$ and $c(e) \In c^* (e)$ for all $e\in E(G)$.} \label{filled colour sets} \end{align} Clearly, $c^*$ is $\mu n$-bounded. Split $C^\ast$ randomly into $C_1^*,\dots,C_t^*$, by independently assigning $\alpha\in C^\ast$ to $C_\ell^*$ with probability $1/t$. By Proposition~\ref{prop:regularity slicing}, we conclude that with probability at least $1-tr^2\eul^{-\mu^{-0.9} n}$, the graph $G_\ell^{ij}:=G_{C_\ell^*}[V_i,V_j]$ is $(2\eps,d/t^\Delta)$-regular for all $\ell \in [t]$ and all $ij \in E(R)$. In order to apply Corollary~\ref{cor:degree split advanced}, define $G^\ast:=\bigcup_{ij\in E(R_S)}G[V_i,V_j]$. Consider $i\in[r]$ and $v\in V_i$. For $j\in N_{R_S}(i)$, define $S^v_j:=N_G(v)\cap V_j$. Note that $(S^v_j)_{j\in N_{R_S}(i)}$ partitions $N_{G^\ast}(v)$, and that $|S^v_j|= (d\pm \eps)|V_j|$ since $G[V_i,V_j]$ is $(\eps,d)$-super-regular. In particular, $|S^v_j|\ge d|V_j|/2 $. By Corollary~\ref{cor:degree split advanced} (with $G^\ast$, $d/4r^2$ playing the roles of $G,d$), we have with probability at least $\mu (1+ \sqrt{\mu})^{-|C^\ast|} $ that \begin{align} d_{G^{\ast}_{C_\ell^*}}(v,V_j)= t^{-\Delta}(d\pm \eps)|V_j| \pm \mu^{1/3} n \label{splitted degree} \end{align} for all $ij\in E(R_S)$, $v\in V_i$, and $\ell\in [t]$. Since $|C^*|\le 2\mu^{-1.3}n$, we have\COMMENT{with $(1+ \sqrt{\mu})^{-|C^*|}\ge \eul^{\mu^{1/2}(-2\mu^{-1.3}n)}=\eul^{-2\mu^{-0.8}n}$} \begin{align*} \mu(1+ \sqrt{\mu})^{-|C^*|}-tr^2\eul^{-\mu^{-0.9} n} \geq \mu \eul^{-2\mu^{-0.8} n }-tr^2\eul^{-\mu^{-0.9} n} >0. \end{align*} Therefore, there exists a partition $(C_\ell^*)_{\ell\in[t]}$ of $C^\ast$ with the above properties. From \eqref{splitted degree}, we can infer that $G_\ell^{ij}$ is $(2\eps,d/t^\Delta)$-super-regular for all $\ell \in [t]$ and all $ij\in E(R_S)$. Finally, define $C_\ell:=C\cap C_\ell^*$ for all $\ell\in [t]$. Clearly, $(C_\ell)_{\ell\in[t]}$ is a partition of $C$, and we have $G_\ell^{ij}\In G_{C_\ell}[V_i,V_j]$ by~\eqref{filled colour sets} for all $\ell \in [t]$ and all $ij \in E(R)$. \endproof \subsection{The partial resampling algorithm} The proof of Corollary~\ref{cor:degree split advanced} is based on the partial resampling algorithm introduced by Harris and Srinivasan, which is a relative of the Moser-Tardos resampling algorithm. These algorithms are set in the so-called `variable model' of the Lov\'asz local lemma. Let $X_1,\dots,X_N$ be independent random variables, and suppose we want to avoid a set of bad events $\cB$. For every $A\in \cB$, let $vbl(A)$ be the set of variables which determine $A$. This naturally defines a dependency graph $\Gamma$ on $\cB$ as follows: $AB \in E(\Gamma)$ if $vbl(A)\cap vbl(B)\neq\emptyset$. 
The (symmetric) Lov\'asz local lemma implies that if each bad event has probability at most $p$ and $4p\Delta(\Gamma)\le 1$, then there exists an assignment of the variables $X_1,\dots, X_N$ which avoids all bad events. In their well-known paper, Moser and Tardos~\cite{MT:10} provided an algorithmic version thereof. Their so-called resampling algorithm is as simple as it could be: Start with a random assignment of the variables $X_1,\dots,X_N$. As long as some bad event $A\in \cB$ is still true, resample all the variables in $vbl(A)$ (according to their respective distribution). Clearly, if the algorithm terminates, then this yields an assignment of the variables which avoids all bad events. Moser and Tardos showed that, under the assumptions of the Lov\'asz local lemma, the algorithm terminates with probability~$1$. In our (simplified) setting, let $X_i$ denote the Bernoulli random variable indicating whether colour $i\in C$ is activated or not. For each vertex $v\in V(G)$, let $A_v$ be the bad event that $\sum_{i\in C}d_G^i(v)\mathds{1}_{\Set{X_i=1}} \neq d_G(v)/2\pm \sqrt{\mu}n$, say. Note that $A_v$ depends on all colours which are present at~$v$. This may lead to a very dense dependency graph, possibly far too dense to apply the Lov\'asz local lemma. Nonetheless, many of these dependencies might be very weak. For example, given a vertex $v$ and colours $i,j\in C$, where $d^i(v)$ is much larger than $d^j(v)$, the dependency of $A_v$ on $X_i$ should intuitively be much more significant than its dependency on~$X_j$. A variant of the resampling algorithm of Moser and Tardos tailored to this problem could thus vaguely look as follows: Start with a random assignment of the variables $(X_i)_{i\in C}$. As long as some bad event $A_v$ is still true, choose randomly a subset $C'$ of colours, where a colour is more likely to be chosen if many edges at $v$ are coloured with this colour. Now only resample $(X_i)_{i\in C'}$. This `partial resampling algorithm' falls into a very general framework, which was introduced by Harris and Srinivasan~\cite{HS:13b,HS:13a,HS:18}. They eliminated the need for a dependency graph by introducing `fractional hitting sets' instead, which capture `how dependent' a bad event $A$ is on a variable $X_i$ (or more generally, a set of variables). We now introduce this framework, following their exposition in~\cite{HS:18}. Let $X_1,\dots,X_N$ be independent random variables, where each $X_i$ has a finite set $L_i$ of possible assignments. For $i\in [N]$ and $j\in L_i$, let $p_{i,j}=\prob{X_i=j}$. Hence, $\sum_{j\in L_i}p_{i,j}=1$. Let $(\Omega,\mathbb{P})$ be the respective product probability space. An ordered pair $(i,j)$ with $i\in [N]$ and $j\in L_i$ is referred to as an \defn{element}, and $\cX$ denotes the set of all elements. An \defn{atomic event} is a set $Y\In \cX$ such that if $(i,j),(i,j')\in Y$, then $j=j'$. Let $\cA$ denote the family of all atomic events. A set $\cB\In \cA$ is called a \defn{complex event}. An \defn{assignment} is an atomic event $A$ such that $|A|=N$. We say that an assignment $A$ \defn{avoids $\cB\In \cA$} if there is no $B\in \cB$ with $B\In A$. These definitions naturally relate to the more standard notation for events. For an element $\omega=(j_1,\dots,j_N)$ of the probability space $\Omega$, define $A_{\omega}:=\Set{(1,j_1),\dots,(N,j_N)}$. Then $A_{\omega}$ is an assignment. For an event $\cE\In \Omega$, define $\cB_{\cE}:=\set{A_{\omega}}{\omega \in \cE}$. 
Clearly, we have $\omega\in \cE$ if and only if $A_\omega$ does not avoid $\cB_{\cE}$. Note that our complex events $\cB_{\cE}$ only contain assignments. In general, it is also possible to deal with complex events which contain atomic events $A$ with $|A|<N$. Given a function $\lambda\colon \cX \to \bR$, we use the following shorthand notation: for an element $(i,j)\in \cX$, we write $\lambda_{i,j}:=\lambda((i,j))$, and for a set $Y\In \cX$, we write $$\lambda^Y:=\prod_{(i,j)\in Y}\lambda_{i,j}.$$ For $Y\In \cX$ and $i\in [N]$, we write $Y\sim i$ if there exists some $j\in L_i$ such that $(i,j)\in Y$. \begin{defin}[fractional hitting set, cf.~{\cite[{Definition~2.1}]{HS:18}}] Let $Q\colon \cA \to [0,1]$. For a set $B\in \cA$, we say that $Q$ is a \defn{fractional hitting set for $B$} if $Q(\emptyset)=0$ and $$\sum_{Y\In B}Q(Y)\ge 1.$$ For a complex event $\cB\In \cA$, we say that $Q$ is a \defn{fractional hitting set for $\cB$} if it is a fractional hitting set for all $B\in \cB$. Moreover, for an event $\cE\In \Omega$, we say that $Q$ is a \defn{fractional hitting set for $\cE$} if it is a fractional hitting set for $\cB_{\cE}$. \end{defin} The intention of this definition is to offer a more flexible notion of dependency within the framework, which finally eliminates the need for a dependency graph. Suppose we want to find an assignment that avoids the complex events $\cB_1,\ldots,\cB_K$, for which we are given fractional hitting sets $Q_1,\dots,Q_K$, respectively. The \defn{partial resampling algorithm (PRA)} starts with a random assignment of the~$X_i$. Then it repeats the following, as long as some bad event is true: choose any $B\in \cB_k$ which is true. Now, select randomly a set $Y\In B$, where the probability of selecting $Y$ is given by $\frac{Q_k(Y)}{\sum_{Y'\In B}Q_k(Y')}$. Then resample all $X_i$ with $Y\sim i$ (according to their distribution). Clearly, if the algorithm terminates, it outputs an assignment of the $X_i$ which avoids all bad events. The next definition is the final ingredient before we can state the result of Harris and Srinivasan. Therein, the function $\lambda$ may be thought of as an `inflated' probability vector. \begin{defin} Let $Q\colon \cA\to [0,1]$ and $\lambda\colon \cX \to [0,\infty)$. We define $$\Gamma(Q,\lambda):= \sum_{Y\in \cA}Q(Y)\lambda^Y,$$ and for each $i\in [N]$, we define $$\Gamma_i(Q,\lambda):= \sum_{Y\in \cA \colon Y\sim i} Q(Y)\lambda^Y.$$\COMMENT{Note that we do not have $\Gamma(Q,\lambda)=\sum_{i\in [N]}\Gamma_i(Q,\lambda)$.} \end{defin} \begin{theorem}[Harris and Srinivasan {\cite[{Theorem~3.8, see also Equation~(10)}]{HS:18}}] \label{thm:PRA} \COMMENT{By Equation (10), we can use $\Gamma(Q_k,\lambda)$ instead of $S(\cB_k,Q_k,\lambda)$} Let $\cB_1,\dots,\cB_K \In \cA$ be complex (bad) events. Let $\lambda\colon \cX \to [0,\infty)$ be such that $p_{i,j}=\frac{\lambda_{i,j}}{\sum_{j\in L_i}\lambda_{i,j}}$. Suppose we are given fractional hitting sets $Q_1,\dots,Q_K$ for $\cB_1,\dots,\cB_K$, respectively. Assume that the following conditions are satisfied: \begin{enumerate}[label=\rm{(\roman*)}] \item for all $k\in [K]$, we have $\Gamma(Q_k,\lambda)<1$; \item for all $i\in [N]$, we have $\sum_{k\in [K]}\frac{\Gamma_i(Q_k,\lambda)}{1-\Gamma(Q_k,\lambda)} +1 \le \sum_{j\in L_i}\lambda_{i,j}$. \end{enumerate} Then the expected number of resamplings of the PRA for these parameters is at most $\sum_{(i,j)\in \cX}\lambda_{i,j}$. In particular, the PRA terminates with probability~$1$. 
\end{theorem} \subsection{Proof of Corollary~\ref{cor:degree split advanced}} Theorem~\ref{thm:PRA} guarantees that there exists an assignment which avoids all of $\cB_1,\dots,\cB_K$. However, it does not state a lower bound on the probability (in the space $(\Omega,\mathbb{P})$) that no bad event happens. Fortunately, we can deduce the following corollary, which yields such a lower bound. For this, we use the PRA in an indirect proof. It would be interesting to know whether there exists an appropriately formulated generalized Lov\'asz local lemma with a direct proof. \begin{cor}\label{cor:PRA explicit prob} Let $\cE_1,\dots,\cE_K\In \Omega$ be (bad) events. Let $\lambda\colon \cX \to [0,\infty)$ be defined as $\lambda_{i,j}:=(1+\kappa)p_{i,j}$, where $\kappa\in (0,1)$.\COMMENT{Could even allow another $\kappa_i$ for every $i$.} Suppose there are fractional hitting sets $Q_1,\dots,Q_K$ for $\cE_1,\dots,\cE_K$, respectively, which satisfy the following conditions: \begin{enumerate}[label=\rm{(\roman*)}] \item for all $k\in [K]$, we have $\Gamma(Q_k,\lambda)< 1$; \item for all $i\in [N]$, we have $\sum_{k\in [K]}\frac{\Gamma_i(Q_k,\lambda)}{1-\Gamma(Q_k,\lambda)} \le \kappa/2$. \end{enumerate} Then $\prob{\bigcap_{k\in[K]}\overline{\cE_k}} \ge \frac{\kappa}{3}(1+\kappa)^{-N}$. \end{cor} \proof For any event $\cE\In \Omega$, we have that $\prob{\cE}=\sum_{\omega \in \cE}\prob{\Set{\omega}}=\sum_{\omega \in \cE}\prod_{(i,j)\in A_{\omega}}p_{i,j}$. Therefore, \begin{align}\label{eq:E*} \prob{\bigcap_{k\in[K]}\overline{\cE_k}}=\prob{\cE^\ast}=\sum_{A\in \cB_\ast}\prod_{(i,j)\in A} p_{i,j}, \end{align} where $\cE^\ast:=\Omega\sm \bigcup_{k\in[K]}\cE_k$, $\cB_\ast:=\cB_{\cE^\ast}$, and $\cB_k:=\cB_{\cE_k}$ for all $k\in [K]$. Observe that $\bigcup_{k\in [K]\cup \Set{\ast}}\cB_k = \cB_{\Omega}$ and hence there is no assignment which avoids all of $\cB_1,\dots,\cB_K,\cB_\ast$. Suppose, for a contradiction, that $\prob{\cE^\ast}\le p^\ast:=\frac{\kappa}{3}(1+\kappa)^{-N}$. Let $Q_\ast\colon \cA \to [0,1]$ be the trivial fractional hitting set for $\cB_\ast$, that is $Q_\ast(Y):=1$ if $Y\in \cB_\ast$ and $Q_\ast(Y):=0$ otherwise. We emphasize that $\cB_\ast$ only contains assignments and hence \begin{align} \Gamma(Q_\ast,\lambda) =\sum_{Y\in \cB_\ast}\lambda^Y = \sum_{Y\in \cB_\ast}\prod_{(i,j)\in Y} (1+\kappa)p_{i,j} \stackrel{(\ref{eq:E*})}{=} (1+\kappa)^N\prob{\cE^\ast} \le p^\ast (1+\kappa)^N = \kappa/3. \label{gamma upper bound} \end{align} Trivially, for all $i\in[N]$, we have $\Gamma_i(Q_\ast,\lambda)\le\Gamma(Q_\ast,\lambda)\le \kappa/3$, which implies $\frac{\Gamma_i(Q_\ast,\lambda)}{1-\Gamma(Q_\ast,\lambda)}\le \kappa/2$. We now apply Theorem~\ref{thm:PRA} to the complex events $\cB_1,\dots,\cB_K,\cB_\ast$ with fractional hitting sets $Q_1,\dots,Q_K,Q_\ast$, respectively. For all $k\in[K]\cup \Set{\ast}$, we have $\Gamma(Q_k,\lambda)< 1$ by~(i) and~\eqref{gamma upper bound}. Moreover, for all $i\in [N]$, we have $$\sum_{k\in [K]\cup \Set{\ast}}\frac{\Gamma_i(Q_k,\lambda)}{1-\Gamma(Q_k,\lambda)} +1 \le \kappa/2+\kappa/2 +1 = \kappa +1 =\sum_{j\in L_i}\lambda_{i,j}.$$ Thus, by Theorem~\ref{thm:PRA}, there exists an assignment $A\In \cX$ which avoids all of $\cB_1,\dots,\cB_K,\cB_\ast$. This is clearly a contradiction, which completes the proof. \endproof We now use Corollary~\ref{cor:PRA explicit prob} to prove Corollary~\ref{cor:degree split advanced}. To this end, we need to introduce some more notation, which we do after the following illustrative aside. 
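The following Python sketch (entirely ours and not part of the formal argument) shows the skeleton of the PRA in the special case of a normal edge colouring (i.e.~$\Delta=1$) and a toy notion of a `bad' vertex: when some vertex $v$ has too many colours of some class $\ell$ on its edges, we do not resample all colours seen at $v$, but a single colour chosen with probability proportional to its multiplicity at $v$, mirroring the fractional hitting set weights $d_I(v)$ used below. The Moser--Tardos algorithm would instead resample every colour appearing at~$v$.

\begin{verbatim}
# Skeleton of the PRA for splitting colours into t classes so that
# every vertex degree splits roughly evenly (toy sketch, Delta = 1;
# termination is not guaranteed if slack is chosen too small).
import random

def pra_split(edges, colour, t, slack):
    cols = sorted({colour[e] for e in edges})
    X = {a: random.randrange(t) for a in cols}   # random initial assignment
    deg = {}                                     # d_G(v) for every vertex v
    for (u, v) in edges:
        for w in (u, v):
            deg[w] = deg.get(w, 0) + 1

    def violated():
        # bad event: some vertex w and class l with d_{C_l}(w) too large
        dwl = {}
        for (u, v) in edges:
            l = X[colour[(u, v)]]
            for w in (u, v):
                dwl[(w, l)] = dwl.get((w, l), 0) + 1
        for (w, l), d in dwl.items():
            if d > deg[w] / t + slack:
                return w, l
        return None

    while (bad := violated()) is not None:
        w, l = bad
        # partial resampling: among the colours currently contributing
        # to the bad event at w, pick one with probability proportional
        # to its multiplicity at w, and resample only that colour
        weight = {}
        for (u, v) in edges:
            a = colour[(u, v)]
            if w in (u, v) and X[a] == l:
                weight[a] = weight.get(a, 0) + 1
        a = random.choices(list(weight), weights=list(weight.values()))[0]
        X[a] = random.randrange(t)
    return X                    # X[a] = class of colour a
\end{verbatim}

With a $\mu n$-bounded colouring, one would take \texttt{slack} on the scale of the error term $\kappa n$ in Lemma~\ref{lem:degree split advanced} below.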
Let $\cS_\Delta^t:=\set{(s_1,\dots,s_t)}{\sum_{j=1}^t s_j=\Delta,s_j\in \bN_0}$ denote the $t$-dimensional discrete simplex. For $\mathbf{s}=(s_1,\dots,s_t)\in \cS_\Delta^t$, let $\binom{\Delta}{\mathbf{s}}:=\frac{\Delta!}{s_1!\cdots s_t!}$ denote the multinomial coefficient. Recall that \begin{align} \sum_{\mathbf{s} \in \cS_\Delta^t}\binom{\Delta}{\mathbf{s}} = t^\Delta. \label{multinomial identity} \end{align} Let $G$ be a graph, $c\colon E(G)\to \binom{C}{\Delta}$ and suppose that $C_1,\ldots, C_t$ is a (random) partition of $C$. For a vertex~$v$ and $\mathbf{s}=(s_1,\dots,s_t)\in \cS_\Delta^t$, let $d_{\mathbf{s},v}$ be the (random) number of edges~$e$ incident to~$v$ with $|c(e)\cap C_j|=s_j$ for all $j\in[t]$. For instance, for $\mathbf{s}=(\Delta,0,\dots,0)$, we have $d_{\mathbf{s},v}=d_{G_{C_1}}(v)$. Observe that if each $i\in C$ is independently assigned to $C_j$ with probability $1/t$, then $\expn{d_{\mathbf{s},v}}=\binom{\Delta}{\mathbf{s}} t^{-\Delta} d_G (v)$. \begin{lemma} \label{lem:degree split advanced} Suppose $1/n\ll \mu \ll 1/t,d,1/\Delta$ and let $\kappa:=4\sqrt{t^\Delta d^{-1}\mu}$. Let $G$ be a graph on $n$ vertices with $\delta(G)\ge dn$, and let $c\colon E(G)\to \binom{C}{\Delta}$ be $\mu n$-bounded. Suppose we split $C$ randomly into $C_1,\dots,C_t$, by independently assigning $i\in C$ to $C_j$ with probability $1/t$. Then with probability at least $\frac{\kappa}{3}(1+\kappa)^{-|C|}$, the following holds: $$d_{\mathbf{s},v}=\binom{\Delta}{\mathbf{s}} t^{-\Delta} d_G (v) \pm (2t)^\Delta \kappa n$$ for all $v\in V(G)$ and $\mathbf{s}\in \cS_\Delta^t$. \end{lemma} We remark that for any $1\le K\le 2^{-\Delta}\kappa^{-1}$, the above probability bound can be improved to $\frac{\kappa}{3K}(1+\kappa/K)^{-|C|}$ at the expense of an extra factor $K$ in the degree error term, with the same proof.\COMMENT{Define $\lambda=(1+\kappa/K)/t$ and apply Corollary~\ref{cor:PRA explicit prob} with $\kappa'=\kappa/K$} \proof We may assume that $C=[N]$. Let $X_1,\dots,X_N$ be independent random variables taking values in $[t]$ uniformly at random. (Hence, the probability space is $\Omega=[t]^N$ and the set of elements is $\cX=[N]\times [t]$.) For $j\in[t]$, define the random sets $C_j:=\set{i\in [N]}{X_i=j}$. Thus, the $C_j$ are as in the statement of the lemma. For every vertex $v\in V(G)$ and every $\mathbf{s}\in \cS_\Delta^t$, we define $\cE_{v,\mathbf{s}}$ as the (bad) event that $d_{\mathbf{s},v} \ge \binom{\Delta}{\mathbf{s}} t^{-\Delta} d_G (v) + 2^\Delta \kappa n$. We will show that \begin{align} \prob{\bigcap_{v\in V(G),\mathbf{s}\in \cS_\Delta^t}\overline{\cE_{v,\mathbf{s}}}}\ge \frac{\kappa}{3}(1+\kappa)^{-N}.\label{target prob advanced} \end{align} Observe that \eqref{target prob advanced} would complete the proof: if no bad event $\cE_{v,\mathbf{s}}$ happens, we have $d_{\mathbf{s},v} \le \binom{\Delta}{\mathbf{s}} t^{-\Delta} d_G (v) + 2^\Delta\kappa n$ for every vertex $v\in V(G)$ and every $\mathbf{s}\in \cS_\Delta^t$. We can then use \eqref{multinomial identity} to also establish the lower bound \begin{align*} d_{\mathbf{s},v} &= d_G(v)-\sum_{\mathbf{s}'\in \cS_\Delta^t\sm\Set{\mathbf{s}}}d_{\mathbf{s}',v} \ge \left(1-\sum_{\mathbf{s}'\in \cS_\Delta^t\sm\Set{\mathbf{s}}} \binom{\Delta}{\mathbf{s}'} t^{-\Delta}\right)d_G(v) - 2^\Delta(|\cS_\Delta^t|-1)\kappa n \\ &\ge \binom{\Delta}{\mathbf{s}} t^{-\Delta} d_G (v) - (2t)^\Delta\kappa n. \end{align*} We will obtain~\eqref{target prob advanced} by applying Corollary~\ref{cor:PRA explicit prob}. 
To this end, we first define fractional hitting sets for our bad events. \begin{NoHyper} \begin{step} Defining fractional hitting sets \end{step} \end{NoHyper} Fix $v\in V(G)$ and $\mathbf{s}\in \cS_\Delta^t$ throughout Step~1. For $I\in \binom{[N]}{\Delta}$, let $d_I(v)$ denote the number of edges $e$ incident to $v$ with $c(e)=I$. We say that $Y\in \cA$ is of type $(I,\mathbf{s})$ if $Y=\Set{(i_1,j_1),\dots,(i_\Delta,j_\Delta)}$ with $\Set{i_1,\dots,i_\Delta}=I$ and for each $j\in [t]$, the number of indices $k\in [\Delta]$ with $j_k=j$ is equal to $s_j$. Let $\cT(I,\mathbf{s})$ denote the set of all $Y\in \cA$ of type $(I,\mathbf{s})$. This means that the edges~$e$ at~$v$ with $c(e)=I$ contribute to the value of the random variable $d_{\mathbf{s},v}$ if and only if $Y$ is a subset of the assignment given by the outcome of $X_1,\ldots,X_N$. More formally, given an assignment $A$, we have \begin{align}\label{eq:dIAfix} \sum_{I\in \binom{[N]}{\Delta},Y\in \cT(I,\mathbf{s})\colon Y\In A}d_I(v)= d_{\mathbf{s},v}(A), \end{align} where $d_{\mathbf{s},v}(A)$ refers to the value of $d_{\mathbf{s},v}$ if $X_1,\ldots,X_N$ have outcome $A$. Observe that \begin{align}\label{eq:Is} |\cT(I,\mathbf{s})|=\binom{\Delta}{\mathbf{s}}. \end{align} We make another remark for later use. We have \begin{align}\label{eq:dI} \sum_{I\in \binom{[N]}{\Delta},Y\in \cT(I,\mathbf{s})}d_I(v) \stackrel{\eqref{eq:Is}}{=}\sum_{I\in \binom{[N]}{\Delta}}\binom{\Delta}{\mathbf{s}}d_I(v) = \binom{\Delta}{\mathbf{s}} d_G(v). \end{align} Next we define a fractional hitting set $Q_{v,\mathbf{s}}$ for the event $\cE_{v,\mathbf{s}}$. Define $Q_{v,\mathbf{s}}\colon \cA \to [0,\infty)$ for all $I\in \binom{[N]}{\Delta}$ and all $Y\in \cT(I,\mathbf{s})$ as \begin{align} Q_{v,\mathbf{s}}(Y):=\frac{d_I(v)}{\binom{\Delta}{\mathbf{s}} t^{-\Delta} d_G (v) + 2^\Delta \kappa n}\label{def fractional advanced} \end{align} and set $Q_{v,\mathbf{s}}(Y):=0$ for all other $Y\in \cA$. We claim that $Q_{v,\mathbf{s}}$ is indeed a fractional hitting set for $\cE_{v,\mathbf{s}}$. Recall that $\cB_{v,\mathbf{s}}$ is the corresponding set of assignments. Hence we need to check that $Q_{v,\mathbf{s}}$ is a fractional hitting set for all assignments $A\in \cB_{v,\mathbf{s}}$. Consider first any assignment $A$. Then \begin{align*} \sum_{Y\In A}Q_{v,\mathbf{s}}(Y) =\sum_{I\in \binom{[N]}{\Delta},Y\in \cT(I,\mathbf{s})\colon Y\In A} Q_{v,\mathbf{s}}(Y) \stackrel{\eqref{eq:dIAfix},\eqref{def fractional advanced}}{=}\frac{d_{\mathbf{s},v}(A)}{\binom{\Delta}{\mathbf{s}} t^{-\Delta} d_G (v) + 2^\Delta \kappa n}. \end{align*} Since $d_{\mathbf{s},v}(A) \ge \binom{\Delta}{\mathbf{s}} t^{-\Delta} d_G (v) + 2^\Delta \kappa n$ for all $A\in\cB_{v,\mathbf{s}}$ by definition of $\cE_{v,\mathbf{s}}$, we conclude that for any $A\in \cB_{\cE_{v,\mathbf{s}}}$, we have $\sum_{Y\In A}Q_{v,\mathbf{s}}(Y)\ge 1$, as required. \begin{NoHyper} \begin{step} Verification of the conditions in Corollary~\ref{cor:PRA explicit prob} \end{step} \end{NoHyper} Now, define $\lambda\colon \cX\to [0,\infty)$ as $\lambda_{i,j}:=(1+\kappa)/t$ for all $(i,j)\in \cX$. We need to check conditions (i) and~(ii) from Corollary~\ref{cor:PRA explicit prob}. In order to check~(i), consider $v\in V(G)$ and $\mathbf{s}\in \cS_\Delta^t$. 
We have \begin{eqnarray*} \Gamma(Q_{v,\mathbf{s}},\lambda)&=& \sum_{Y\in \cA}Q_{v,\mathbf{s}}(Y)\lambda^Y = \sum_{I\in \binom{[N]}{\Delta},Y\in \cT(I,\mathbf{s})} Q_{v,\mathbf{s}}(Y)\left(\frac{1+\kappa}{t}\right)^\Delta \\ &\overset{\eqref{eq:dI},\eqref{def fractional advanced}}{=}& \left(\frac{1+\kappa}{t}\right)^\Delta \cdot \frac{\binom{\Delta}{\mathbf{s}}d_G(v)}{\binom{\Delta}{\mathbf{s}} t^{-\Delta} d_G (v) + 2^\Delta \kappa n} \\ &\le& \frac{(1+\kappa)^\Delta}{1+2^\Delta \kappa} \le \frac{1+(2^\Delta-1) \kappa}{1+2^\Delta\kappa} = 1-\frac{\kappa}{1+2^\Delta\kappa} \end{eqnarray*} \COMMENT{$= \frac{\binom{\Delta}{\mathbf{s}}d_G(v)}{\binom{\Delta}{\mathbf{s}} d_G (v) + 2^\Delta t^\Delta \kappa n} = \frac{1}{1+\frac{2^\Delta \kappa n t^\Delta}{\binom{\Delta}{\mathbf{s}} d_G(v)}} $ and $n\ge d_G(v)$ and $t^\Delta\ge \binom{\Delta}{\mathbf{s}}$} and thus (as $2^\Delta \kappa \le 1 $) \begin{align} \Gamma(Q_{v,\mathbf{s}},\lambda)&\le 1-\kappa/2 <1.\label{1st gamma condition new} \end{align} We now check (ii). Consider $i\in [N]$. First observe that for any $v\in V(G),\mathbf{s}\in \cS_\Delta^t$, we have \begin{align} \Gamma_i(Q_{v,\mathbf{s}},\lambda) &=\sum_{Y\in \cA\colon Y\sim i }Q_{v,\mathbf{s}}(Y)\lambda^Y = \sum_{I\in \binom{[N]}{\Delta},Y\in \cT(I,\mathbf{s})\colon I\ni i} Q_{v,\mathbf{s}}(Y)\left(\frac{1+\kappa}{t}\right)^\Delta .\label{eq:Gammai} \end{align} Crucially, we have for all $i\in [N]$ \begin{align} \sum_{v\in V(G)}\sum_{I\in \binom{[N]}{\Delta}\colon I\ni i} d_I(v) \le 2\mu n \label{column sparse} \end{align} since $c$ is $\mu n$-bounded and every edge $e\in E(G)$ with $i\in c(e)$ is counted twice on the left hand side. Therefore, for any $\mathbf{s}\in \cS_\Delta^t$, we have \begin{eqnarray*}\notag \sum_{v\in V(G)} \sum_{I\in \binom{[N]}{\Delta},Y\in \cT(I,\mathbf{s})\colon I\ni i} Q_{v,\mathbf{s}}(Y) &\overset{\eqref{eq:Is},\eqref{def fractional advanced},\eqref{column sparse}}{\le}& \frac{\binom{\Delta}{\mathbf{s}} \cdot 2\mu n}{\binom{\Delta}{\mathbf{s}} t^{-\Delta} d_G (v) + 2^\Delta \kappa n} \le t^\Delta \frac{2\mu }{d}. \end{eqnarray*} Using~\eqref{multinomial identity} and \eqref{eq:Gammai}, we deduce that $$\sum_{v\in V(G),\mathbf{s}\in \cS_\Delta^t}\Gamma_i(Q_{v,\mathbf{s}},\lambda) \le |\cS_\Delta^t|\left(\frac{1+\kappa}{t}\right)^\Delta \cdot t^{\Delta} \cdot \frac{2\mu }{d} \le \frac{3t^\Delta \mu}{d}.$$ Altogether, with \eqref{1st gamma condition new}, we conclude that $$\sum_{v\in V(G),\mathbf{s}\in \cS_\Delta^t}\frac{\Gamma_i(Q_{v,\mathbf{s}},\lambda)}{1-\Gamma(Q_{v,\mathbf{s}},\lambda)} \le \frac{2}{\kappa} \cdot \frac{3t^\Delta \mu}{d} \le \kappa/2$$ by definition of $\kappa$. Thus, \eqref{target prob advanced} follows from Corollary~\ref{cor:PRA explicit prob}. \endproof For the proof of Lemma~\ref{lem:separate colours}, we need the additional flexibility that not only the total degree of a vertex is of interest, but its degree into each of the clusters of a regularity partition. By splitting every vertex we can deduce this easily from Lemma~\ref{lem:degree split advanced}. \lateproof{Corollary~\ref{cor:degree split advanced}} We apply Lemma~\ref{lem:degree split advanced} to the graph $G^*$, which is obtained from $G$ by splitting each vertex as follows: for each $v\in V(G)$, let $v_1,\dots,v_{m_v}$ be new vertices. Let $V(G^\ast)=\bigcup_{v\in V(G)}\Set{v_1,\dots,v_{m_v}}$. For every edge $uv\in E(G)$, there are unique $i\in [m_u]$ and $j\in[m_v]$ such that $v\in S^u_i$ and $u\in S^v_j$. 
Add the edge $u_iv_j$ to $G^\ast$ and define $c^\ast(u_iv_j):=c(uv)$. Clearly, $n^\ast:=|V(G^\ast)|\le rn$ and for all $v\in V(G)$ and $i\in[m_v]$, we have $|N_{G^\ast}(v_i)|=|S^v_i|$. In particular, $\delta(G^\ast)\ge drn\ge d n^\ast$. Moreover, $c^\ast\colon E(G^\ast)\to \binom{C}{\Delta}$ is $\mu n^\ast$-bounded. By Lemma~\ref{lem:degree split advanced} (with $G^\ast$ playing the role of $G$), the following holds with probability at least $\frac{\kappa}{3}(1+\kappa)^{-|C|}$: $$d_{G^\ast_{C_\ell}}(v_i) = t^{-\Delta} d_{G^\ast} (v_i) \pm (2t)^\Delta \kappa n^\ast$$ for all $v\in V(G)$, $i\in[m_v]$ and $\ell\in[t]$. Since $d_{C_\ell}(v,S_i^v)=d_{G^\ast_{C_\ell}}(v_i)$ and $|S_i^v| =d_{G^\ast}(v_i)$, this completes the proof. \endproof \section{Proof of the rainbow blow-up lemma} \label{sec:main proof} In this section, we state and prove the full version of the rainbow blow-up lemma. As indicated in Section~\ref{subsec:blow-up intro}, we also include exceptional vertices and candidacy graphs in the statement. In~\cite{KSS:97}, the authors provided a version of the blow-up lemma which also allowed for the assignment of certain `candidate sets' for some of the vertices. More precisely, for sufficiently small $\alpha$, one may assign to $\alpha|X_i|$ vertices $x$ of each cluster $X_i$ a candidate set $A_x\In V_i$ of size at least $\beta |V_i|$, such that the blow-up lemma still holds with the additional property that the image of $x$ will lie inside $A_x$. We keep track of candidate sets by storing the essential information in a `candidacy graph' $A^i$ with bipartition $(X_i,V_i)$, where $N_{A^i}(x)$ encodes the candidate set for~$x$. Thus, we allow candidate sets for all vertices, not only a small fraction. However, we require that the resulting candidacy graph for each cluster is lower super-regular. Clearly, this includes the original framework. More formally, a bipartite graph $A^i$ with bipartition $(X_i,V_i)$ is a \defn{candidacy graph for $(X_i,V_i)$}. We say that the blow-up instance $(H,G,R,(X_i)_{i\in[r]_0},(V_i)_{i\in[r]_0})$ with candidacy graphs $(A^i)_{i\in[r]}$ is \defn{(lower) $(\eps,d)$-super-regular} if for all $ij\in E(R)$, the bipartite graph $G[V_i,V_j]$ is (lower) $(\eps,d)$-super-regular, and for all $i\in [r]$, the graph $A^i$ is (lower) $(\eps,d)$-super-regular. Candidate sets are especially helpful if a part of $H$ is already embedded, for example if one has to deal with `exceptional vertices' before the application of the blow-up lemma. For instance, if $x_0$ is an exceptional vertex which has already been assigned its image $v_0$, and $x$ is an $H$-neighbour of $x_0$ to be embedded by the blow-up lemma, then the image of $x$ must lie in $N_G(v_0)$. This can be achieved by assigning $x$ a candidate set which is a subset of $N_G(v_0)$. In the rainbow setting, we face the additional challenge that, depending on which of the candidates $v$ we pick as the image of $x$, the edge $vv_0$ will already use a colour which is then forbidden for the rest of the embedding. In the full statement of our rainbow blow-up lemma, we thus already include the exceptional vertices. For the proof to work, we put the following restrictions on the partial embedding of the exceptional vertices. We will see in Section~\ref{sec:apps} that these criteria can be met in applications easily. 
Given a blow-up instance $(H,G,R,(X_i)_{i\in[r]_0},(V_i)_{i\in[r]_0})$ with candidacy graphs $(A^i)_{i\in[r]}$ and an edge set colouring $c\colon E(G)\to 2^C$, we say that the bijection $\phi_0\colon X_0\to V_0$ is \defn{$D$-feasible} if the following hold: \begin{enumerate}[label=\rm{(EXC\arabic*)}] \item for all $x\in X_0$, $j\in[r]$ and $y\in N_H(x)\cap X_j$, we have $N_{A^j}(y)\subseteq N_G(\phi_0(x))$; \label{exc condition:neighbourhoods} \item for all $j\in [r]$, $x\in X_j$, $v\in N_{A^j}(x)$ and distinct $x_0,x_0'\in N_H(x)\cap X_0$, we have $c(\phi_0(x_0)v)\cap c(\phi_0(x_0')v)=\emptyset$; \label{exc condition:rainbow} \item for all colours $\alpha \in C$, we have $\sum_{x\in X_0} d^{\alpha}_G (\phi_0(x))\cdot d_H(x) \le D$. \label{exc condition:high degrees} \end{enumerate} Condition \ref{exc condition:neighbourhoods} ensures that whenever we pick for $y\in V(H)\sm X_0$ an image~$v$ from its candidate set, then $v$ is appropriately connected to~$V_0$. Condition \ref{exc condition:rainbow} in turn ensures that no conflict arises from this, i.e.~for every vertex $x\in V(H)\sm X_0$ and every candidate $v$ for $x$, the star formed by the edges between $v$ and the images of the neighbours of $x$ in $X_0$ is rainbow. Condition \ref{exc condition:high degrees} is designed for an application where the exceptional vertices are not required to have bounded degree. Note that \ref{exc condition:neighbourhoods}--\ref{exc condition:high degrees} are trivially satisfied if $X_0$ is empty (and $D\ge 0$). Moreover, \ref{exc condition:rainbow} is clearly satisfied if $X_0$ is $2$-independent. Note that if $c$ is $k$-bounded and $\Delta(H)\le \Delta$, then \ref{exc condition:high degrees} holds with $D=2\Delta k$. Given an edge set colouring $c$ of $G$, we say that a subgraph $H$ is \defn{rainbow} if $c(e)\cap c(e')=\emptyset$ for all distinct $e,e'\in E(H)$. This allows us to model slightly more general systems of conflicts. For instance, if $c_1,\dots,c_{\Delta}$ are edge colourings of $G$, we can define $c^\ast(e):=\Set{c_1(e),\dots,c_\Delta(e)}$ for all $e\in E(G)$. If $H$ is rainbow with respect to the edge set colouring $c^\ast$, then $H$ is simultaneously rainbow with respect to all the~$c_i$. We also note that, even if the given colouring of $G$ is a `normal' edge colouring, the ability to handle edge set colourings is crucial when dealing with exceptional vertices (see Step~\ref*{step:colour splitting} in the proof of Lemma~\ref{lem:blow up matchings}). We now state the full version of the rainbow blow-up lemma. It clearly implies Lemma~\ref{lem:blow-up simple}. \begin{lemma}[Rainbow blow-up lemma]\label{lem:blow up} Let $1/n \ll \mu , \eps \ll d,1/\Delta$ and $\mu \ll 1/r$. Let $(H,G,R,(X_i)_{i\in[r]_0},(V_i)_{i\in[r]_0})$ with candidacy graphs $(A^i)_{i\in[r]}$ be a lower $(\eps,d)$-super-regular blow-up instance and assume further that \begin{enumerate}[label={\rm (\roman*)}] \item $\Delta(R)\le \Delta$ and $d_H(x)\le \Delta$ for all $x\in V(H)\sm X_0$; \item $|V_i|=(1\pm \eps)n/r$ for all $i\in[r]$; \item for all $i\in[r]$, at most $(2\Delta)^{-4}|X_i|$ vertices in $X_i$ have a neighbour in $X_0$.\label{connected to exceptional} \end{enumerate} Let $c\colon E(G)\to 2^C$ be $(\mu n,\Delta)$-bounded. Suppose a $2\Delta\mu n$-feasible bijection $\phi_0 \colon X_0 \to V_0$ is given. Then there exists a rainbow embedding $\phi$ of $H$ into $G$ which extends $\phi_0$ such that $\phi(x)\in N_{A^i}(x)$ for all $i\in [r]$ and $x\in X_i$. 
\end{lemma} In the remaining four subsections of this section, we prove Lemma~\ref{lem:blow up}. In the first subsection, we deduce Lemma~\ref{lem:blow up} from a similar statement (Lemma~\ref{lem:blow up matchings}), where we impose considerably stronger assumptions on $G$ and $H$. In Subsection~\ref{sec:4GL}, we introduce the so-called `Four graphs lemma' from~\cite{RR:99}, which we use in the proof of Lemma~\ref{lem:blow up matchings}. In Subsection~\ref{sec:conflictM}, we discuss `conflict-free' matchings and see how a recent result of Coulson and Perarnau~\cite{CP:17} via the switching method can be used to find perfect conflict-free matchings in super-regular pairs. This will be another important ingredient for the proof of Lemma~\ref{lem:blow up matchings}. In the last subsection we finally prove Lemma~\ref{lem:blow up matchings}. \subsection{Split into matchings} \label{subsec:matching split} Our first step in proving Lemma~\ref{lem:blow up} is to reduce it to a similar statement where $H$ is highly structured in the sense that $H$ only induces (perfect) matchings between its partition classes. The main steps of this reduction are essentially the same as in~\cite{RR:99}: we apply the Hajnal--Szemer\'edi theorem to $H^2[X_i]$ for each vertex class $X_i$ to obtain a refined partition of $H$ where every vertex class is now $2$-independent. We refine the partition of $G$ randomly such that super-regularity is preserved. Our reduction is more intricate than the one in~\cite{RR:99} as we also consider candidacy graphs and exceptional vertices and allow the cluster sizes to be slighty unbalanced. We now state the auxiliary lemma, which we will prove in Section~\ref{subsec:main proof}. In this subsection, we deduce the rainbow blow-up lemma (Lemma~\ref{lem:blow up}) from Lemma~\ref{lem:blow up matchings}. \begin{lemma} \label{lem:blow up matchings} Suppose $1/n \ll \mu \ll \eps \ll d,1/\Delta$ and $\mu \ll 1/r$ such that $r$ divides~$n$. Let $(H,G,R,(X_i)_{i\in[r]_0},(V_i)_{i\in[r]_0})$ with candidacy graphs $(A^i)_{i\in[r]}$ be an $(\eps,d)$-super-regular blow-up instance and assume further that \begin{enumerate}[label={\rm (\roman*)}] \item $\Delta(R)\le \Delta$ and $d_H(x,X_0)\leq \Delta$ for all $x\in V(H)\sm X_0$; \item $|V_i|=n/r$ for all $i\in[r]$; \item for all $ij\in \binom{[r]}{2}$, the graph $H[X_i,X_j]$ is a perfect matching if $ij\in E(R)$ and empty otherwise. \end{enumerate} Let $c\colon E(G)\to 2^C$ be $(\mu n,\Delta)$-bounded. Suppose a $\mu n$-feasible bijection $\phi_0 \colon X_0 \to V_0$ is given. Then there exists a rainbow embedding $\phi$ of $H$ into $G$ which extends $\phi_0$ such that $\phi(x)\in N_{A^j}(x)$ for all $j\in [r]$ and $x\in X_j$. \end{lemma} In the reduction, we will need the classical Hajnal--Szemer\'edi theorem and two facts about regular pairs. \begin{theorem}[\cite{HS:70}] \label{thm:HS} Let $G$ be a graph on $n$ vertices with $\Delta(G)< k \le n$. Then $V(G)$ can be partitioned into $k$ independent sets of size $\lfloor \frac{n}{k}\rfloor$ or $\lceil \frac{n}{k}\rceil$. \end{theorem} \begin{prop}[{\cite[Fact~2]{RR:99}}] \label{prop:lower to super} Let $1/n\ll \eps \ll \eps' \ll d$. Let $G$ be a bipartite graph with bipartition $(V_1,V_2)$, where $|V_1|=|V_2|=n$. If $G$ is lower $(\eps,d)$-super-regular, then $G$ contains a spanning subgraph $G'$ which is $(\eps',d^2/2)$-super-regular. \end{prop} \begin{fact} \label{fact:regularity} Let $G$ be a bipartite graph with vertex partition $(A,B)$. 
Suppose $G$ is lower $(\eps,d)$-regular and $Y\In B$ with $|Y|\ge \eps|B|$. Then all but at most $\eps|A|$ vertices of $A$ have at least $(d-\eps)|Y|$ neighbours in~$Y$. \end{fact} \COMMENT{ Let $A_1\In A$ be the set of vertices which have less than $(d-\eps)|Y|$ neighbours in~$Y$. Suppose for a contradiction that $|A_1|\ge \eps |A|$. Then, since $G[A,B]$ is lower $(\eps,d)$-regular, $d_G(A_1,Y)\ge d-\eps$ and hence $e_G(A_1,Y)\ge (d-\eps)|A_1||Y|$. On the other hand, by definition of $A_1$, we have $e_G(A_1,Y)< |A_1|(d-\eps)|Y|$, a contradiction.} \lateproof{Lemma~\ref{lem:blow up}} The proof divides into four steps. Firstly, we modify the partition of $H$. From this, we obtain a few more exceptional vertices. In Step~2, we then extend $\phi_0$ to the new exceptional set. Subsequently, in Step~3 we refine the partition of $G$ accordingly, and finally we apply Lemma~\ref{lem:blow up matchings}. Choose a new constant $\eps^\ast$ such that $1/n \ll \mu , \eps \ll \eps^\ast \ll d,1/\Delta$. Let $n'$ be the largest integer not exceeding $(1-\eps)n/r$ which is divisible by $\Delta^2$. For $i\in[r]$, define $a_i:=|X_i|-n'$. Note that \begin{align} 0\le a_i \le 3\eps n/r. \label{additional exceptionals} \end{align} Let $R^\ast$ be the graph with vertex set $[r]\times [\Delta^2]$ and edges $(i,j)(i',j')\in E(R^\ast)$ whenever $ii'\in E(R)$. Clearly, $\Delta(R^\ast) \le \Delta^3$. Let $r^\ast:=r\Delta^2$ and $n^\ast:=n'r$. So $(1-2\eps)n\le n^\ast \le n$. Later, we will apply Lemma~\ref{lem:blow up matchings} with $n^\ast,r^\ast,R^\ast$ playing the roles of $n,r,R$, respectively. \begin{NoHyper} \begin{step} Refining $H$ \end{step} \end{NoHyper} First, we move $a_i$ vertices from each cluster $X_i$ to the exceptional set in order to adjust the sizes. For vertex sets $U_1\In U_2 \In V(H)$, we say that $(U_2,U_1)$ is \defn{$(2,1)$-independent} if $U_2$ is independent in $H$ and whenever $u,u'\in U_2$ have a common neighbour in $H$, then $u,u'\in U_1$. \begin{claim} There is a set $B\In V(H)\sm X_0$ such that $|B\cap X_i|=a_i$ for all $i\in[r]$ and such that $(X_0\cup B,X_0)$ is $(2,1)$-independent in~$H$. \end{claim} \claimproof The set $B$ can be constructed greedily. Assume that for some $i\in[r-1]_0$, we have found a set $B_i$ such that $|B_i\cap X_j|=a_j$ for all $j\in[i]$ and $(X_0\cup B_i,X_0)$ is $(2,1)$-independent in~$H$. (Note that $B_0=\emptyset$ satisfies this for $i=0$.) Using \eqref{additional exceptionals} and the fact that $\Delta(R)\le \Delta$ and $\Delta(H)\le \Delta$, it is not hard to see that at most $(2\Delta)^{-2} n/r$ vertices in $X_{i+1}$ are at distance at most $2$ from $X_0\cup B_i$.\COMMENT{Indeed, by~\ref{connected to exceptional}, at most $(2\Delta)^{-4}|X_{i+1}|$ vertices of $X_{i+1}$ are in distance $1$ to $X_0$, and using \eqref{additional exceptionals}, there are at most $\sum_{j\in N_R(i+1)}\Delta a_j \le 3\Delta^2 \eps n/r$ vertices in $X_{i+1}$ which are in distance $1$ of $B_i$. Similarly, there are at most $3\Delta^4 \eps n/r$ vertices in $X_{i+1}$ which are in distance $2$ of $B_i$ (but are not in distance $1$ of $X_0$, those are already counted before). Using \eqref{additional exceptionals} again, there are at most $\sum_{j\in N_R(i+1)}\Delta(2\Delta)^{-4}|X_{j}| \le (4\Delta)^{-2}(1+\eps) n/r$ vertices in $X_{i+1}$ which are in distance $2$ of $X_0$.} Now, since $\Delta(H^2[X_{i+1}])\le \Delta^2-1$, there exists a $2$-independent subset of $X_{i+1}$ of size at least $|X_{i+1}|/\Delta^2$. 
Thus, since $|X_{i+1}|/\Delta^2 - (2\Delta)^{-2} n/r \ge a_{i+1}$, we can pick $a_{i+1}$ $2$-independent vertices from $X_{i+1}$ which are at distance at least $3$ from $X_0\cup B_i$ and add them to $B_i$ to obtain~$B_{i+1}$. Observe that then $(X_0\cup B_{i+1},X_0)$ is $(2,1)$-independent in~$H$. \endclaimproof Define $X_0^\ast:=X_0\cup B$ and $X_i':=X_i\sm B$ for $i\in[r]$. Clearly, we have $|X_i'|=n'$. For all $i\in[r]$, apply the Hajnal--Szemer\'edi theorem (Theorem~\ref{thm:HS}) to $H^2[X_i']$. Note that $\Delta(H^2[X_i'])\le \Delta(\Delta-1)<\Delta^2$. Recall that $n'$ is divisible by $\Delta^2$. Thus, there exists a partition of $X_i'$ into $2$-independent sets $X^\ast_{i,1},\dots,X^\ast_{i,\Delta^2}$ in $H$ of size exactly $n'/\Delta^2$ each. For all $(i,j),(i',j')\in [r]\times [\Delta^2]$, we have that $H[X^\ast_{i,j},X^\ast_{i',j'}]$ is a matching if $(i,j)(i',j')\in E(R^\ast)$ and empty otherwise. Thus, $R^\ast$ is a suitable reduced graph for the new partition $(X^\ast_{i,j})_{(i,j)\in [r]\times [\Delta^2]}$ of $V(H)\sm X_0^\ast$, and $X_0^\ast$ is the new exceptional set. Note that we can add edges to $H$ to obtain a supergraph $H^\ast$ such that $H^\ast[X^\ast_{i,j},X^\ast_{i',j'}]$ forms a perfect matching for all $(i,j)(i',j')\in E(R^\ast)$. Clearly, any rainbow embedding of $H^\ast$ induces a rainbow embedding of $H$. We observe that by the $(2,1)$-independence of $(X_0^\ast,X_0)$, we have that for all $i\in[r]$ and $y\in X_i'$, exactly one of the following alternatives applies: \begin{enumerate}[label=\rm{(\alph*)}] \item $N_H(y)\cap X_0^\ast=N_H(y)\cap X_0$; \label{exceptional neighbourhood a} \item $N_H(y)\cap X_0^\ast=\Set{x}$ for a unique $x\in X_j$ with $j\in N_R(i)$; \label{exceptional neighbourhood b} \item $N_H(y)\cap X_0^\ast=\emptyset$. \end{enumerate} Let $W_a$ be the set of all vertices $y\in V(H)\sm X_0^\ast$ for which \ref{exceptional neighbourhood a} applies, and let $W_b$ be the set of all vertices $y\in V(H)\sm X_0^\ast$ for which \ref{exceptional neighbourhood b} applies. For $i\in[r]$, we have \begin{align} |W_b\cap X_i|\le \sum_{j\in N_R(i)}\Delta a_j \overset{\eqref{additional exceptionals}}{\le} \Delta^2 \cdot 3 \eps n/r. \label{new neighbours of exceptionals} \end{align} \begin{NoHyper} \begin{step} Extending $\phi_0$ \end{step} \end{NoHyper} We want to find a suitable set $V_0^\ast \supseteq V_0$ and extend $\phi_0$ to a bijection $\phi_0^\ast \colon X_0^\ast \to V_0^\ast$ such that $\phi_0^\ast(x) \in N_{A^j}(x)$ for all $j\in[r]$ and $x\in X_0^\ast\cap X_j$, and such that for all $x\in X_0^\ast\sm X_0$, all $i\in[r]$ and all $y\in N_H(x)\cap X_i$, we have \begin{align} |N_{A^i}(y) \cap N_G(\phi_0^\ast(x))| \ge (d-\eps)|N_{A^i}(y)|.\label{new exceptional nbhds} \end{align} We can find $V_0^\ast$ and $\phi_0^\ast$ by successively picking a suitable image for each $x\in X_0^\ast\sm X_0$. Suppose we have already defined images for $Z\In X_0^\ast\sm X_0$ and now want to find a suitable image for $x\in X_0^\ast\sm (X_0\cup Z)$. Let $j\in[r]$ be such that $x\in X_j$. We want to pick $\phi_0^\ast(x)$ from $N_{A^j}(x)$. A vertex $v\in N_{A^j}(x)$ is a suitable image if for all $i\in[r]$ and all $y\in N_H(x)\cap X_i$, we have $d_G(v,N_{A^i}(y))\ge (d-\eps)|N_{A^i}(y)|$. For fixed $i\in[r]$ and $y\in N_H(x)\cap X_i$, since $ij\in E(R)$ and $G[V_i,V_j]$ is lower $(\eps,d)$-regular, Fact~\ref{fact:regularity}\COMMENT{with $Y=N_{A^i}(y)$} implies that all but at most $\eps |V_j|$ vertices in $V_j$ have at least $(d-\eps)|N_{A^i}(y)|$ neighbours in $N_{A^i}(y)$.
Thus, at most $\Delta \eps |V_j|$ vertices of $V_j$ are not suitable images for $x$. Moreover, at most $a_j$ vertices of $V_j$ have already been used as images for the vertices in $Z$. Since $A^j$ is lower $(\eps,d)$-super-regular, we have $d_{A^j}(x)\ge (d-\eps)|V_j|>\Delta \eps |V_j|+a_j$, and thus there exists a suitable image $\phi_0^\ast(x)$ for $x$. Let $V_0^\ast:=\phi_0^\ast(X_0^\ast)$. Clearly, we have $|V_0^\ast \cap V_i|=a_i$ for all $i\in[r]$. Before we continue to refine the partition of $G$, we have to redefine the neighbourhoods of the vertices in $W_b$ in their respective candidacy graphs in order to meet condition~\ref{exc condition:neighbourhoods} for $\phi_0^\ast$ (see Step~4). Consider $i\in[r]$. Let $A'^i$ be the spanning subgraph of $A^i$ obtained by deleting for every vertex $y\in X_i\cap W_b$ all edges $yv$ for which $v\notin N_G(\phi_0^\ast(x))$, where $x$ is the unique $H$-neighbour of $y$ in $X_0^\ast$ (cf.~\ref{exceptional neighbourhood b}). Note that $A'^i$ still contains vertices from $X_0^\ast$ and $V_0^\ast$. We might as well remove them, but it is more convenient to leave them in for now. Note that by~\eqref{new exceptional nbhds}, we still have that $d_{A'^i}(y) \ge (d-\eps)|N_{A^i}(y)| \ge d^2 |V_i|/2$ for all $y\in X_i\cap W_b$. Moreover, since $|X_i\cap W_b|\le 4\Delta^2\eps |X_i|$ by \eqref{new neighbours of exceptionals}, it is easy to see that $A'^i$ is still lower $(8 \Delta^2\eps,d^2/2)$-super-regular.\COMMENT{Clearly, the degrees are fine. Moreover, for sets $Y\In X_i$ and $Z\In V_i$ with $|Y|,|Z|\ge 8 \Delta^2\eps |X_i|$, we have $e(Y,Z)\ge e(Y\sm W_b,Z) \ge (d-\eps)|Y\sm W_b||Z| \ge (d-\eps)|Y||Z|/2$.} \begin{NoHyper} \begin{step} Refining $G$ \end{step} \end{NoHyper} Let $d':=d^2/2$. For $i\in[r]$, let $V_i':=V_i\sm V_0^\ast$. Clearly, $|V_i'|=|X_i'|=n'$. We will partition each $V_i'$ randomly into $\Delta^2$ equal-sized parts and then match those with the refined parts $X^\ast_{i,j}$ of $X_i'$. In order to obtain super-regular new candidacy graphs, we need to take special care of some vertices. Consider $i\in [r]$. We say that a vertex $v\in V_i'$ is \defn{good} if \begin{align} d_{A'^i}(v,X^\ast_{i,j}) \ge (d'-8 \Delta^2\eps)|X^\ast_{i,j}| \label{def good vertex} \end{align} for all $j\in[\Delta^2]$. By Fact~\ref{fact:regularity} and since $A'^i$ is lower $(8 \Delta^2\eps,d')$-regular, all but at most $8 \Delta^4 \eps |V_i|$ vertices of $V_i$ are good. Let $V_i''$ be the set of good vertices of $V_i'$. Thus, \begin{align} |V_i\sm V_i''|\le 8 \Delta^4 \eps |V_i| + a_i \le \sqrt{\eps} |V_i|. \label{most suitable} \end{align} We now partition $V_i'$ into equal-sized parts $V^\ast_{i,1},\dots,V^\ast_{i,\Delta^2}$ of size exactly $n'/\Delta^2=n^\ast/r^\ast$ each. First, we take care of the vertices which are not good. For every $v\in V_i'\sm V_i''$, choose $j\in [\Delta^2]$ such that $d_{A'^i}(v,X^\ast_{i,j}) \ge (d'-\sqrt{\eps})|X^\ast_{i,j}|$. Clearly, such an index $j\in [\Delta^2]$ exists since $d_{A'^i}(v)\ge (d'-8 \Delta^2\eps)|X_i| \ge (d'-\sqrt{\eps})|X_i'|$. For $j\in [\Delta^2]$, let $V'_{i,j}$ be the set of all vertices $v\in V_i'\sm V_i''$ which have been assigned to $j$ in this way.
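Let us record a sanity check for this assignment, which is implicit in the construction below: by \eqref{most suitable}, for each $j\in[\Delta^2]$ we have \begin{align*} |V'_{i,j}|\le |V_i'\sm V_i''|\le \sqrt{\eps}\,|V_i|\le n'/\Delta^2, \end{align*} as $\eps\ll 1/\Delta$; in particular, the parts $V''_{i,j}$ of size $n'/\Delta^2-|V'_{i,j}|$ chosen below indeed exist.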
Now, for each $i\in [r]$, let $V''_{i,1},\dots,V''_{i,\Delta^2}$ be a partition of $V_i''$ such that $|V''_{i,j}|=n'/\Delta^2-|V'_{i,j}|$ and such that the following hold: for all $i\in[r]$, $i'\in N_R(i)$, $v\in V_i$ and $j'\in[\Delta^2]$, we have \begin{align} d_G(v,V''_{i',j'}) \ge (d-3 \sqrt{\eps})|V''_{i',j'}|, \label{G partition random degree} \end{align} and for all $i\in[r]$, $x\in X_i$ and $j\in[\Delta^2]$, we have \begin{align} d_{A'^i}(x,V''_{i,j}) \ge (d'-3 \sqrt{\eps})|V''_{i,j}|. \label{G partition random degree candidates} \end{align} That such partitions exist can be seen using a probabilistic argument as follows: For each $i\in[r]$, let $V''_{i,1},\dots,V''_{i,\Delta^2}$ be a partition of $V_i''$ such that $|V''_{i,j}|=n'/\Delta^2-|V'_{i,j}|$, chosen uniformly at random amongst all such partitions. (We may also assume that the partitions of $V_{i}''$ and $V_{i'}''$, say, are independent, but this is not even necessary.) In particular, for all $(i,j)\in [r]\times [\Delta^2]$, the set $V''_{i,j}$ is a uniformly random subset of $V_i''$ of size $n'/\Delta^2-|V'_{i,j}|$. Note that for all $i\in[r]$, $i'\in N_R(i)$, $v\in V_i$, we have $$d_G(v,V''_{i'}) \overset{\eqref{most suitable}}{\ge} (d-\eps)|V_{i'}|-\sqrt{\eps} |V_{i'}| \ge (d-2\sqrt{\eps})|V_{i'}''|.$$ Thus, for all $j'\in[\Delta^2]$, we have $\expn{d_G(v,V''_{i',j'})} \ge (d-2 \sqrt{\eps})|V''_{i',j'}|$. Similarly, for all $i\in[r]$, $x\in X_i$ and $j\in[\Delta^2]$, we have $\expn{d_{A'^i}(x,V''_{i,j})}\ge (d'-2 \sqrt{\eps})|V''_{i,j}|$. Using a Chernoff--Hoeffding-type bound for the hypergeometric distribution and a union bound, we can see that \eqref{G partition random degree} and \eqref{G partition random degree candidates} are satisfied with positive probability. Now, for $(i,j)\in [r]\times [\Delta^2]$, let $V^\ast_{i,j}:= V'_{i,j} \cup V''_{i,j}$. We claim that for all $(i,j)(i',j')\in E(R^\ast)$, the bipartite graph $G[V^\ast_{i,j},V^\ast_{i',j'}]$ is lower $(3\sqrt{\eps},d)$-super-regular. Indeed, it follows simply from the lower $(\eps,d)$-regularity of the pairs $G[V_i,V_{i'}]$ for $ii'\in E(R)$ that $G[V^\ast_{i,j},V^\ast_{i',j'}]$ is lower $(2\Delta^2\eps,d)$-regular, say. That every vertex has large enough degree in the respective pair follows from~\eqref{G partition random degree}. Similarly, for all $(i,j)\in [r]\times [\Delta^2]$, the new candidacy graph $A^{(i,j)}:=A'^i[X^\ast_{i,j},V^\ast_{i,j}]$ is lower $(3\sqrt{\eps},d')$-super-regular. Here, every vertex $x\in X^\ast_{i,j}$ has sufficiently high degree in $V^\ast_{i,j}$ by~\eqref{G partition random degree candidates}. Moreover, all good vertices of $V_i'$ have automatically sufficiently high degree by~\eqref{def good vertex}, and all vertices which are not good have sufficiently high degree in their new candidacy graph because of their assignment to a set $V_{i,j}'$. Finally, using Proposition~\ref{prop:lower to super}, we can transition to a spanning subgraph $G^\ast$ of $G$ such that $G^\ast[V^\ast_{i,j},V^\ast_{i',j'}]$ is $(\eps^\ast,d^2/2)$-super-regular for all $(i,j)(i',j')\in E(R^\ast)$, and for each $(i,j)\in [r]\times[\Delta^2]$, we can transition to a spanning subgraph $A^{\ast(i,j)}$ of $A^{(i,j)}$ such that $A^{\ast(i,j)}$ is $(\eps^\ast,d'^2/2)$-super-regular. Note that we do not delete any edges incident to $V_0^\ast$. \begin{NoHyper} \begin{step} Applying Lemma~\ref{lem:blow up matchings} \end{step} \end{NoHyper} We can now complete the proof. It remains to check that $\phi_0^\ast$ is feasible.
First, consider $x\in X_0^\ast$, $(i,j)\in[r]\times[\Delta^2]$ and $y\in N_{H^\ast}(x)\cap X^\ast_{i,j}$. If $y\in W_a$, then we must have $x\in X_0$ and thus $N_{A^i}(y) \In N_G(\phi_0(x))$ by \ref{exc condition:neighbourhoods} for $\phi_0$. If $y\in W_b$, then we have $N_{A'^i}(y)\In N_G(\phi_0^\ast(x))$ by the definition of $A'^{i}$. In both cases, we conclude that $N_{A'^i}(y)\In N_{G^\ast}(\phi_0^\ast(x))$ since edges in $G$ are only removed between regular pairs. Since $A^{\ast(i,j)}$ is a subgraph of $A'^i$, we have $N_{A^{\ast(i,j)}}(y) \In N_{G^\ast}(\phi_0^\ast(x))$. Thus, \ref{exc condition:neighbourhoods} holds for $\phi_0^\ast$. Condition~\ref{exc condition:rainbow} also holds for $\phi_0^\ast$: only the vertices in $W_{b}$ gained a new neighbour in~$X_{0}^\ast$, and as each $y\in W_b$ has exactly one neighbour in $X_0^\ast$, no new pairs of exceptional neighbours arise; for all other pairs the condition is inherited from $\phi_0$. Finally, consider \ref{exc condition:high degrees}. Let $\alpha\in C$. Note that $d_{H^\ast}(x)=d_H(x)\le \Delta$ for all $x\in X_0^\ast\sm X_0$, and also $d_{H^\ast}(x)=d_H(x)$ for all $x\in X_0$, as $H^\ast$ only adds edges between the sets $X^\ast_{i,j}$. Thus, since $\alpha$ appears on at most $\mu n$ edges of $G$, $$\sum_{x\in X_0^\ast\sm X_0} d^{\alpha}_{G^\ast} (\phi^\ast_0(x))\cdot d_{H^\ast}(x) \le \Delta \sum_{v\in V(G)} d^{\alpha}_{G}(v) \le 2\Delta \mu n$$ and hence, using \ref{exc condition:high degrees} for the $2\Delta\mu n$-feasible bijection $\phi_0$, we obtain $\sum_{x\in X_0^\ast} d^{\alpha}_{G^\ast} (\phi^\ast_0(x))\cdot d_{H^\ast}(x) \le 2\Delta \mu n + 2\Delta \mu n \le \mu^{0.9}n^\ast$. Therefore, $\phi_0^\ast$ is $\mu^{0.9}n^\ast$-feasible. Now apply Lemma~\ref{lem:blow up matchings} as follows: \medskip { \noindent { \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c} $n$ & $\mu$ & $\eps$ & $d$ & $\Delta$ & $r$ & $H$ & $G$ & $R$ & $X_i$ & $V_i$ & $A^i$ & $c$ & $\phi_0$ \\ \hline $n^\ast$ & $\mu^{0.9}$ & $\eps^\ast$ & $d^4/8$ & $\Delta^3$ & $r^\ast$ & $H^\ast$ & $G^\ast$ & $R^\ast$ & $X_{i,j}^\ast$ & $V^\ast_{i,j}$ & $A^{\ast (i,j)}$ & $c{\restriction_{E(G^\ast)}}$ & $\phi^\ast_0$ \end{tabular} } } \medskip This yields a rainbow embedding $\phi$ of $H^\ast$ into $G^\ast$ which extends $\phi^\ast_0$ such that $\phi(x)\in N_{A^{\ast (i,j)}}(x)$ for all $(i,j)\in [r] \times [\Delta^2]$ and $x\in X_{i,j}^\ast$. Clearly, $\phi$ is thus a rainbow embedding of $H$ into $G$ which extends $\phi_0$ such that $\phi(x)\in N_{A^i}(x)$ for all $i\in [r]$ and $x\in X_i$. \endproof \subsection{The Four graphs lemma}\label{sec:4GL} In this subsection, we state the so-called `Four graphs lemma' due to R\"odl and Ruci\'nski, which is an important ingredient in the proof of the blow-up lemma. Let $V_1,V_2,V_3$ be three disjoint sets of size $n$. Suppose $G_{i,j}$, for $1\leq i<j\leq 3$, is an $(\eps,d_{i,j})$-super-regular bipartite graph with vertex partition $(V_i,V_j)$ and for all $v_1v_2\in E(G_{1,2})$ (with $v_1\in V_1$), we have \begin{align} |N_{G_{1,3}}(v_1)\cap N_{G_{2,3}}(v_2)| = (d_{1,3}d_{2,3}\pm \eps)n. \label{four graphs condition} \end{align} In such a case we say that the triple $(G_{1,2},G_{1,3},G_{2,3})$ is $(\eps,d_{1,2},d_{1,3},d_{2,3})$-regular. For a perfect matching $\sigma\colon V_1\to V_2$ of $G_{1,2}$, let $A_\sigma$ be the spanning subgraph of $G_{1,3}$ such that $v_1v_3\in E(A_\sigma)$ (with $v_i\in V_i$ for $i\in \{1,3\}$) if $\sigma(v_1)v_3\in E(G_{2,3})$. \begin{lemma}[Four graphs lemma, see~\cite{RR:99}] \label{lem:four graphs} Suppose $1/n\ll a \ll \eps \ll \eps' \ll d_{1,2},d_{1,3},d_{2,3}$. Suppose $(G_{1,2},G_{1,3},G_{2,3})$ is $(\eps,d_{1,2},d_{1,3},d_{2,3})$-regular on the vertex set $V_1\cup V_2\cup V_3$.
Suppose $\sigma\colon V_1\to V_2$ is a perfect matching of $G_{1,2}$ drawn uniformly at random. Then \begin{align*} \prob{A_\sigma \mbox{ is }(\eps',d_{1,3}d_{2,3})\mbox{-super-regular}}\ge 1-(1-a)^n. \end{align*} \end{lemma} The following proposition is useful when the graphs $G_{i,j}$ are super-regular, but \eqref{four graphs condition} is not satisfied. One can always delete a small fraction of the edges of $G_{1,2}$ such that \eqref{four graphs condition} is then satisfied. The proof is based on standard regularity methods and thus omitted. (See also Fact~1 in~\cite{RR:99} for a very similar statement.) \begin{prop}\label{prop:four graphs condition} Let $1/n \ll \eps \ll d,1/k$. Let $V_1,\dots,V_{k+2}$ be disjoint vertex sets of size~$n$. Suppose that $G_{1,2}$ is an $(\eps,d_{1,2})$-super-regular bipartite graph with bipartition $(V_1,V_2)$, where $d_{1,2}\ge d$, and for all $i\in [2]$ and $j\in \Set{3,\dots,k+2}$, $G_{i,j}$ is an $(\eps,d_{i,j})$-super-regular bipartite graph with bipartition $(V_i,V_j)$, where $d_{i,j}\ge d$. Let $G_{1,2}'$ be the spanning subgraph of $G_{1,2}$ consisting of those edges $v_1v_2\in E(G_{1,2})$ (with $v_1\in V_1,v_2\in V_2$) which satisfy the following: $|N_{G_{1,j}}(v_1)\cap N_{G_{2,j}}(v_2)| = (d_{1,j}d_{2,j}\pm \eps)n$ for all $j\in \Set{3,\dots,k+2}$. Then $G_{1,2}'$ is still $(2k\sqrt{\eps},d_{1,2})$-super-regular. \end{prop} We close this subsection by giving some intuition as to how the Four graphs lemma is applied. Suppose that we embed cluster $X_i$ into $V_i$ by choosing a perfect matching $\sigma$ in a suitable candidacy graph $A^i$ with bipartition $(X_i,V_i)$. Suppose we have not embedded cluster $X_j$ yet. In order to proceed with the embedding, we need to update the candidacy graph $A^j$. This involves the graphs $H[X_i,X_j]$, $A^i$, $A^j$, and $G[V_i,V_j]$. More precisely, suppose that $x\in X_j$ and that $v\in V_j$ is a candidate for $x$ before the embedding of $X_i$. After embedding $X_i$, vertex $v$ remains a valid candidate for $x$ if and only if it is suitably connected to the images of the neighbours of $x$ which are already embedded. Now, if $H[X_i,X_j]$ is a perfect matching, we can define the bijection $\pi\colon X_j\to X_i$ where $\pi(x)$ is the unique $H$-neighbour of $x$ in $X_i$. With this notation, $v$ remains a valid candidate for $x$ if and only if $\sigma(\pi(x))v\in E(G[V_i,V_j])$. We can now identify $X_j$ with $X_i$ according to~$\pi$. More precisely, define the graph $P$ with bipartition $(X_i,V_j)$ which is isomorphic to $A^j$, where $\pi(x)$ plays the role of $x$. We are now left with three vertex sets $X_i,V_i,V_j$ and the three graphs $A^i,P,G[V_i,V_j]$. In this setting, we can apply the Four graphs lemma, which yields a super-regular spanning subgraph $P_\sigma$ of $P$, which, via $\pi$, translates back to the updated candidacy graph for $(X_j,V_j)$. \subsection{Conflict-free perfect matchings}\label{sec:conflictM} A \defn{system of conflicts} in a graph $G$ is a set $\cF$ of unordered pairs of edges. If $\Set{e,f}\in \cF$, we say that $e$ and $f$ \defn{conflict}. We say that $\cF$ is \defn{$k$-bounded} if every edge is contained in at most $k$ conflicts. A subgraph $H$ is \defn{conflict-free} if no two edges of $H$ conflict. Let $G$ be a bipartite graph with vertex classes $A,B$ such that $|A|=|B|=n$. Suppose $M$ is a perfect matching of $G$ and $e=a_1b_1\in M$.
An edge $ab\in E(G)\sm M$ is \defn{$(e,M)$-switchable} if $a_1b_2,a_2 b_1\in E(G)$, where $a_2$ and $b_2$ are matched to $b$ and $a$ by $M$, respectively, i.e.~$a_2b,ab_2\in M$. The following lemma from~\cite{CP:17} is another important tool in our proof. Its proof is based on the Lopsided Lov\'asz local lemma. In~\cite{CP:17}, it is used to show the existence of conflict-free perfect matchings in Dirac bipartite graphs. \begin{lemma}[Coulson and Perarnau~{\cite[{Lemma~6}]{CP:17}}] \label{lem:switchings} Suppose $1/n\ll \mu \ll \gamma$. Let $G$ be a bipartite graph with bipartition $(A,B)$ such that $|A|=|B|=n$. Assume that $G$ has at least one perfect matching, and for every perfect matching $M$ of $G$ and for every $e\in M$ there are at least $\gamma n^2$ edges in $G$ that are $(e,M)$-switchable. Then, given any $\mu n$-bounded system of conflicts for $G$, a uniformly chosen perfect matching of $G$ is conflict-free with probability at least $\eul^{-\mu^{1/2} n}$. \end{lemma} We will apply it to find conflict-free perfect matchings of our candidacy graphs. It is easy to see that there are many switchings in an $(\eps,d)$-super-regular graph. \begin{prop}\label{prop:many switchings} Let $1/n \ll \eps \ll d$. Let $G$ be a bipartite graph with bipartition $(A,B)$ such that $|A|=|B|=n$ and $G$ is lower $(\eps,d)$-super-regular. Then $G$ has a perfect matching. Moreover, for every perfect matching $M$ of $G$ and every $e\in M$, there are at least $(d-2\eps)^3 n^2$ edges in $G$ that are $(e,M)$-switchable. \end{prop} \proof It is well known that $G$ has a perfect matching. Let $M$ be a perfect matching of $G$ and suppose $e\in M$. Let $a\in A$, $b\in B$ with $e=ab$. Define $N_a:=N_G(a)$ and $N_b:=N_G(b)$. Moreover, let $N_a'\In A$ be the set of vertices which are matched to the vertices in $N_a$ by $M$, and let $N_b'\In B$ be the set of vertices which are matched to the vertices in $N_b$ by $M$. Clearly, $|N_a'|=|N_a|$ and $|N_b'|=|N_b|$. Note that all edges in $G[N_a',N_b']$ are $(e,M)$-switchable, except those which already belong to~$M$. Since $d_G(a),d_G(b)\ge (d-\eps)n\ge \eps n$, we have $d_G(N_a',N_b') \ge d-\eps$ and hence $e_G(N_a',N_b')-|M| \ge (d-\eps)|N_a'||N_b'|-n \ge (d-2\eps)^3 n^2$. \endproof \subsection{Proof of Lemma~\ref{lem:blow up matchings}} \label{subsec:main proof} We are now ready to prove the auxiliary blow-up lemma. We split the proof into four steps. In Step 1, we show that we may assume that $|C|$ is not too large. This is needed for the application of Lemma~\ref{lem:separate colours}. We embed $H$ into $G$ in a number of rounds, which depends only on $\Delta$; in particular, all vertices that belong to the same cluster are embedded simultaneously. As $r$ may be much larger than $\Delta$, we even have to embed several clusters in a single round (cf.~Section~\ref{sec:sketch}). In Step 2, we reserve for each round a set of colours. During the embedding procedure, we will in each round only use colours of $G$ which were assigned to this round. In Step 3, we set up the induction statement and in Step 4, we perform the induction step. \lateproof{Lemma~\ref{lem:blow up matchings}} Let $n':=|V(G)|=n+|V_0|$. By removing isolated vertices from $X_0$ and their images determined by $\phi_0$ from $V_0$, we can assume that $|V_0|\le \Delta n$ and hence $n'\le (\Delta+1)n$. We may clearly assume that $V_0,\dots,V_r$ are independent in $G$.
Let $$T:=\Delta^2+1,\quad d':=\frac{d}{\binom{T+1}{2}^{\Delta^2}}\quad \mbox{and }\eps_0:=2\eps.$$ Choose new constants $a,\eps_1,\dots,\eps_T$ such that $$1/n \ll \mu \ll a \ll \eps \ll \eps_1 \ll \dots \ll \eps_T \ll d,1/\Delta.$$ \begin{NoHyper} \begin{step}\label{step:linear colours} Modifying the colouring assignment \end{step} \end{NoHyper} We claim that we may assume that $|C|\leq 7\mu^{-1}\Delta^2 n$. Roughly speaking, if two colours $\alpha,\beta$ appear on at most $\mu n/2$ edges each, say, then we wish to merge them into a single colour. Clearly, the new colouring will still be $(\mu n,\Delta)$-bounded and any rainbow embedding of $H$ in the new colouring is also a rainbow embedding in the original colouring. However, we have to be a bit careful not to violate the feasibility of $\phi_0$. We start with a simple observation: \begin{align*} \sum_{\alpha\in C}\sum_{x\in X_0} d^{\alpha}_G (\phi_0(x))\cdot d_H(x) \le \sum_{x\in X_0} d_H(x) \Delta d_G (\phi_0(x)) \le \Delta n \sum_{x\in X_0} d_H(x) \leq \Delta^2 n^2. \end{align*}\COMMENT{Using $\sum_{\alpha\in C}d^{\alpha}_G (v_0) \le \Delta d_G(v_0) \le \Delta n$, as $V_0$ is independent, and $\sum_{x\in X_0} d_H(x)\le \sum_{x\in V(H)\sm X_0}d_H(x,X_0) \le \Delta n$.} Hence there are at most $2\mu^{-1}\Delta^2 n$ colours $\alpha \in C$ for which $\sum_{x\in X_0} d^{\alpha}_G (\phi_0(x))\cdot d_H(x)\geq \mu n/2$ holds. Let us call these colours \emph{critical}. In order to ensure that~\ref{exc condition:high degrees} still holds, we will not merge colours where one is critical. We say $\alpha,\beta\in C$ \emph{block} each other if there exist distinct $x,x'\in X_0, x''\in N_H(x)\cap N_H(x')$ and $v\in N_{A^i}(x'')$ for some $i\in [r]$ such that $\alpha\in c(\phi_0(x)v)$ and $\beta\in c(\phi_0(x')v)$. For~\ref{exc condition:rainbow} to be preserved, we must not merge colours which block each other. Next we seek an upper bound on the number of pairs $\alpha,\beta$ that block each other. For any $x,x'\in X_0, x''\in N_H(x)\cap N_H(x')$, there are clearly at most $n$ vertices $v\in N_{A^i}(x'')$ and for each such $v$, there are at most $\Delta^2$ such pairs. We claim that there are at most $\Delta^2 n$ choices for $x,x',x''$ as above. Indeed, there are clearly at most $n$ choices for $x''$ as $x''\in V(H)\setminus X_0$, and as $d_H(x'',X_0)\leq \Delta$, there are at most $\Delta^2$ choices for the pair $x,x'$. Hence there are at most $n \cdot \Delta^2 \cdot n \Delta^2=\Delta^4n^2$ pairs $\alpha,\beta$ that block each other. Therefore, any set of colours of size, say, $10 \Delta^2 n$ contains a pair $\alpha,\beta$ such that $\alpha,\beta$ do not block each other. The following observation motivates the discussion above. If $\alpha,\beta$ do not block each other, neither of them is critical, and there are at most $\mu n$ edges on which $\alpha$ or $\beta$ appears, then we may replace every appearance of $\beta$ by $\alpha$ (some colour sets may become smaller); the new edge set colouring is still $(\mu n,\Delta)$-bounded and $\phi_0$ is still $\mu n$-feasible. In addition, any rainbow embedding of $H$ in the new colouring is also a rainbow embedding in the original colouring. Therefore, we may assume that there are at most $2\mu^{-1}\Delta^2 n+ 10\Delta^2 n \leq 3\mu^{-1}\Delta^2 n$ colours $\alpha\in C$ such that $\alpha$ appears on at most $\mu n/2$ edges.
Hence $(|C|-3\mu^{-1}\Delta^2 n)\cdot\mu n/2 \leq \Delta e(G)\le 2\Delta^2 n^2$, which implies $|C|\leq 7\mu^{-1}\Delta^2 n$. \begin{NoHyper} \begin{step} \label{step:colour splitting} Colour splitting \end{step} \end{NoHyper} Let $\psi\colon V(R) \to [T]$ be a proper vertex colouring of $R^2$ (such a colouring exists since $\Delta(R^2)\le \Delta^2$ and hence $\chi(R^2)\le \Delta^2+1=T$). Moreover, set $\psi(0):=0$. For $t \in[T]_0$, let $$J_t:=\psi^{-1}(t) \text{ and } J_t^\ast:=\bigcup_{\ell\in [t]_0}J_{\ell}.$$ Note that the sets $(J_t)_{t\in [T]}$ are $2$-independent in $R$, and $J_0=\Set{0}$. In round~$t\in [T]$, we will embed all vertices from clusters $X_i$ with $i\in J_t$. In order to reserve colours for the respective rounds, we first partition $E(R)$ according to the colouring~$\psi$. For $t_1t_2\in \binom{[T]}{2}$, let $E_{t_1t_2}:=\set{ij\in E(R)}{\psi(i)=t_1,\psi(j)=t_2}$. Clearly, $(E_{t_1t_2})_{t_1t_2\in \binom{[T]}{2}}$ is a partition of $E(R)$ since $\psi(i)\neq \psi(j)$ for all $ij\in E(R)$. We will also reserve colours for the edges that have an endpoint in $V_0$. For $t\in [T]$, define $E_{0t}:=\set{\Set{j,j+r}}{j \in J_t}$ (here the index $j+r$ will play the role of the cluster $X_j$ in the auxiliary graph $G^{exc}$ defined below). Our aim is now to use Lemma~\ref{lem:separate colours} to reserve for each of the sets $(E_{t_1t_2})_{t_1t_2\in \binom{[T]_0}{2}}$ an exclusive set of colours, and sparsify $G$ accordingly. This sparsification also shrinks the neighbourhoods of the images of exceptional vertices, and thus we need to update the candidacy graphs $A^j$. To ensure that the new candidacy graphs are again super-regular, we define an auxiliary colouring of the candidacy graphs and apply Lemma~\ref{lem:separate colours} to the somewhat artificial graph $G^{exc}:=G[V(G)\sm V_0]\cup \bigcup_{j\in[r]}A^j$, which is the union of $G[V(G)\sm V_0]$ and all the candidacy graphs. For $j\in[r]$, let $V_{j+r}:=X_j$. Let $R^{exc}$ be the graph on $[2r]$ which is the union of $R$ and the perfect matching $\set{\Set{j,j+r}}{j\in [r]}$. Note that $R^{exc}$ is a reduced graph for $G^{exc}$, and $(E_{t_1t_2})_{t_1t_2\in \binom{[T]_0}{2}}$ is a partition of $E(R^{exc})$. We now transfer the colouring of $G[V_0,V(G)\sm V_0]$ onto the candidacy graphs in a natural way: Consider $i\in [r]$, $x\in X_i$ and $v\in V_i$ with $xv\in E(A^i)$. Define \begin{align} c^{exc}(xv):=\bigcup_{x'\in N_H(x)\cap X_0}c(\phi_0(x')v).\label{exceptional colouring lifted} \end{align} Note that $|c^{exc}(xv)|\le \Delta^2$ since $c$ is $(\mu n, \Delta)$-bounded and $d_H(x,X_0)\le \Delta$. We also define $c^{exc}(e):=c(e)$ for all $e\in E(G[V(G)\sm V_0])$. We claim that $c^{exc}$ is $(2\mu n,\Delta^2)$-bounded. Consider any $\alpha \in C$. Since $\phi_0$ is $\mu n$-feasible, we can deduce from~\ref{exc condition:high degrees} that $\alpha$ appears on at most $\mu n$ edges $xv\in \bigcup_{j\in[r]}E(A^j)$. Moreover, there are at most $\mu n$ edges $e\in E(G[V(G)\sm V_0])$ on which $\alpha$ appears. Recall from Step~\ref*{step:linear colours} that we can assume that $|C|\leq 7\mu^{-1}\Delta^2 n$. We now apply Lemma~\ref{lem:separate colours} (with $G^{exc}$, $c^{exc}$, $R^{exc}$, $\Delta^2$, $\binom{T+1}{2}$ playing the roles of $G,c,R_S,\Delta,t$) to obtain a partition $(C_{t_1t_2})_{t_1t_2\in \binom{[T]_0}{2}}$ of $C$ such that for all $t_1t_2\in \binom{[T]_0}{2}$ and all $ij\in E(R^{exc})$, there is a subgraph $G^{ij}_{t_1t_2}$ of $G^{exc}_{C_{t_1t_2}}[V_i,V_j]$ which is $(\eps_0,d')$-super-regular. We will only keep those subgraphs $G^{ij}_{t_1t_2}$ for which $ij\in E_{t_1t_2}$.
More precisely, for all $t_1t_2\in \binom{[T]}{2}$ and all $ij\in E_{t_1t_2}$, we let $G^{ij} := G^{ij}_{t_1t_2}\In G_{C_{t_1t_2}}[V_i,V_j]$, and for all $t\in [T]$ and all $j\in J_t$, we let $A_0^j:=G^{0j}_{0t} \In A^j_{C_{0t}}$.\footnote{The `0' in $A^j_0$ refers to the fact that these graphs are candidacy graphs after round 0. Later we will also define $A_t^j$ for $t\geq 1$.} This means that we reserve the colours in $C_{t_1t_2}$ for the edges in $G[V_i,V_j]$ with $ij\in E_{t_1t_2}$. The colours in the sets $C_{0t}$ are `reserved' for the candidacy graphs, which we now transfer back to the exceptional edges. For all $t\in [T]$ and all $j\in J_t$, define the bipartite graph $G^{0j}$ with bipartition $(V_0,V_j)$ and edge set $E(G^{0j})=\set{v_0v\in E(G)}{v_0\in V_0,v\in V_j,c(v_0v)\In C_{0t}}$. We now transition from $G$ to the spanning subgraph $G^\ast$ defined as $$G^\ast:= \bigcup_{j\in[r]} G^{0j} \cup \bigcup_{ij\in E(R)} G^{ij} $$ which is the spanning subgraph of $G$ containing those edges of $G$ which are `admissibly' coloured. More precisely, for all $t_1t_2\in \binom{[T]_0}{2}$ and all $ij\in E_{t_1t_2}$, we have \begin{align} \text{$c(vw)\In C_{t_1t_2}$ for all $vw\in G^\ast[V_i,V_j]$.} \label{admissible colours} \end{align} The following is the reason why we transferred the colouring of the exceptional edges onto the candidacy graphs before applying Lemma~\ref{lem:separate colours}. Observe that for all $j\in [r]$, $x_0\in X_0$ and $x\in N_H(x_0)\cap X_j$, we have \begin{align} N_{A_0^j}(x)\subseteq N_{G^\ast}(\phi_0(x_0)),\label{exceptional containment updated} \end{align} i.e.~the new candidacy graphs satisfy~\ref{exc condition:neighbourhoods} with respect to the sparsified graph $G^{\ast}$. Indeed, given $j,x_0,x$ as above and $v\in N_{A_0^j}(x)$, we have $xv\in A^j$ and $c^{exc}(xv)\In C_{0t}$, where $t:=\psi(j)$. By~\ref{exc condition:neighbourhoods}, we have $\phi_0(x_0)v\in E(G)$. Moreover, by definition of $c^{exc}$, we have $$c(\phi_0(x_0)v) \overset{\eqref{exceptional colouring lifted}}{\In} c^{exc}(xv)\In C_{0t}$$ and thus $\phi_0(x_0)v\in E(G^{0j})\In E(G^\ast)$, as claimed. From now on, we do not need the colouring $c^{exc}$ anymore. \begin{NoHyper} \begin{step} Candidacy graphs and the inductive statement \end{step} \end{NoHyper} We will embed $H$ into $G^\ast$. For brevity, define for $t\in[T]_0$: \begin{align*} \bX_t &:=\bigcup_{i\in J_t}X_i, & \bV_t &:=\bigcup_{i\in J_t}V_i, & C_t &:=\bigcup_{k\in[t-1]_0}C_{kt}, \\ \bX_t^\ast &:=\bigcup_{i\in J_t^\ast}X_i, & \bV_t^\ast &:=\bigcup_{i\in J_t^\ast}V_i, & C_t^\ast &:= \bigcup_{\ell \in[t]_0} C_\ell, \\ H_t &:= H[\bX_t^\ast], & G_t &:= G^\ast[\bV_t^\ast]. & \end{align*} Note that $\bX_t^\ast$ contains $X_0$ and $\bV_t^\ast$ contains $V_0$ for all $t\in [T]_0$. Moreover, $\bX_0^\ast=\bX_0=X_0$, $\bV_0^\ast=\bV_0=V_0$, and $C_0^\ast=C_0=\emptyset$. After round $t$, we want to have embedded $H_t$ into $G_t$, only using colours from $C_t^\ast$. Given a partial embedding of $H$ into $G^\ast$, an edge of $G^\ast$ is called \defn{used} if it is the image of an edge of $H$, and a colour is called \defn{used} if some edge is used on which this colour appears. A bijection $\phi\colon \bX_t^\ast \to \bV_t^\ast$ is \defn{valid} if $\phi{\restriction_{X_0}}=\phi_0$ and for all $j\in J_t^\ast\sm\Set{0}$ and all $x\in X_j$, we have $\phi(x)\in N_{A_0^j}(x)$. In particular, this implies that $\phi(X_j)=V_j$ for all $j\in J_t^\ast$.
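Let us also record a simple consequence of the colour reservation: since $(C_{t_1t_2})_{t_1t_2\in \binom{[T]_0}{2}}$ is a partition of $C$ and every pair $t_1t_2$ with $t_1<t_2$ contributes to $C_{t_2}$ only, the sets $C_1,\dots,C_T$ are pairwise disjoint and \begin{align*} C_T^\ast=\bigcup_{t\in[T]}C_t=\bigcup_{t_1t_2\in \binom{[T]_0}{2}}C_{t_1t_2}=C, \end{align*} i.e.~every colour is reserved for exactly one round. In particular, $C_{t+1}\cap C_t^\ast=\emptyset$ for all $t\in[T-1]_0$, which we will use in Step~4.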
The following claim ensures that edges which are embedded in different rounds have automatically distinct colours. Moreover, it also ensures that the edges embedded at one vertex in the same round have distinct colours. \begin{NoHyper} \begin{claim} \label{claim:colour separated} Let $t\in [T]_0$ and suppose that $\phi\colon H_t \to G_t$ is a valid embedding. Then $\phi$ uses only colours from $C_t^\ast$. Moreover, for all $z\in X_k$, $v\in V_k$ with $zv\in E(A^k)$ and $\ell:=\psi(k) >t$ and all distinct $x,y\in N_H(z)\cap \bX_t^\ast$ such that $\phi(x)v,\phi(y)v\in E(G^\ast)$, we have $c(\phi(x)v),c(\phi(y)v)\In C_\ell$ and $c(\phi(x)v)\cap c(\phi(y)v) = \emptyset$. \end{claim} \end{NoHyper} \claimproof To prove the first part of the claim, assume that $vw\in E(G_t)$. Thus, there are unique $i,j\in J_{t}^\ast$ such that $v\in V_i$ and $w\in V_j$; let $t_1t_2\in \binom{[T]_0}{2}$ be such that $ij\in E_{t_1t_2}$. By~\eqref{admissible colours}, we have $c(vw)\In C_{t_1t_2}$. Since $i,j\in J_{t}^\ast$, we have $t_1,t_2\le t$ and thus $C_{t_1t_2}\In C_t^\ast$, as claimed. To prove the second part of the claim, suppose that $z,v,\ell,k,x,y$ are as in the statement. There are unique $i,j\in J_t^\ast$ with $x\in X_i$ and $y\in X_j$. Let $\ell_1:=\psi(i)$ and $\ell_2:=\psi(j)$. Without loss of generality, assume that $\ell_1\le \ell_2\le t<\ell$. By~\eqref{admissible colours}, we have that $c(\phi(x)v)\In C_{\ell_1\ell}$ and $c(\phi(y)v)\In C_{\ell_2\ell}$. If $\ell_1\neq \ell_2$, then $C_{\ell_1\ell}$ and $C_{\ell_2\ell}$ are disjoint subsets of $C_\ell$, and the claim follows. Now, if $i,j>0$, then $ik,jk\in E(R)$, and since $\psi$ is a proper colouring of $R^2$, we conclude that $\ell_1\neq \ell_2$. If $i=0$ and $j>0$, we have $\ell_1=0$ and $\ell_2>0$. The remaining case is when $x,y\in X_0$. But then, since $zv\in E(A^k)$, we have $c(\phi(x)v)\cap c(\phi(y)v) = \emptyset$ by \ref{exc condition:rainbow}, since $\phi_0$ is feasible. \endclaimproof Given $t\in [T]_0$ and any bijection $\phi\colon \bX_t^\ast \to \bV_t^\ast$ with $\phi{\restriction_{X_0}}=\phi_0$ (e.g.~a valid embedding as above), we define (for round $t+1$) updated candidacy graphs for the clusters still to be embedded. Let $j\in [r]\sm J_t^\ast$. For vertices $x\in X_j$ and $v\in V_j$, we say that $v$ is a \defn{candidate for $x$} if $xv \in E(A_0^j)$ and for all $y\in N_{H}(x)\cap \bX_t^\ast$ we have that $\phi(y)v\in E(G^\ast)$. Let $A_t^j$ be the bipartite auxiliary graph with bipartition $(X_j,V_j)$ and edge set \begin{align} E(A_t^j):=\set{xv}{x\in X_j,v\in V_j\mbox{ and }v \mbox{ is a candidate for }x}. \label{def candidate set} \end{align} Note that $A_t^j$ depends on the specific choice of $\phi$. We might write $A_t^j(\phi)$ to indicate this dependency, but will just write $A_t^j$ if $\phi$ is clear from the context. Note that $A_0^j(\phi_0)=A_0^j$ by~\eqref{exceptional containment updated}. We will prove by induction that the following statement \ind{t} holds for all $t\in[T]_0$. \begin{itemize} \item[\ind{t}.] There exists a valid rainbow embedding $\phi_t\colon H_t \to G_t$ such that for all $j\in [r]\sm J_t^\ast$, the candidacy graph $A_t^j(\phi_t)$ is $(\eps_t,d_j)$-super-regular for some $d_j\ge d'^{t+1}$. \end{itemize} Statement \ind{0} holds since the graphs $(A_0^j)_{j\in[r]}$ are $(\eps_0,d')$-super-regular. Statement \ind{T} completes the proof since then $\phi_T$ is a valid rainbow embedding of $H$ into $G$.
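Observe also how the densities in \ind{t} evolve: each round weakens the density guarantee of a candidacy graph by a factor of at most $d'$ (cf.~Claim~\ref*{claim:candidate update} below), so throughout the proof all candidacy graphs have density at least \begin{align*} d'^{\,T+1}=\bigg(\frac{d}{\binom{T+1}{2}^{\Delta^2}}\bigg)^{\Delta^2+2}, \end{align*} a constant depending only on $d$ and $\Delta$; in particular, this is compatible with the hierarchy $\eps_T\ll d,1/\Delta$ in all applications of Proposition~\ref{prop:many switchings} and Lemma~\ref{lem:four graphs} below.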
\begin{NoHyper} \begin{step} The inductive step \end{step} \end{NoHyper} Now, assume the truth of \ind{t} for some $t\in[T-1]_0$, and let $\phi_t$ be as in \ind{t}. We will extend $\phi_t$ to $\phi_{t+1}$ such that \ind{t+1} holds. Any bijection $\sigma\colon \bX_{t+1} \to \bV_{t+1}$ induces a bijection $\phi_{t+1}\colon \bX_{t+1}^\ast \to \bV_{t+1}^\ast$ which extends $\phi_t$ as follows: \begin{align} \phi_{t+1}(x):=\begin{cases} \phi_t(x) & \mbox{if } x\in \bX_{t}^\ast, \\ \sigma(x) & \mbox{if } x\in \bX_{t+1}. \end{cases} \label{def new phi} \end{align} We will choose $\sigma$ as a perfect matching in a suitably defined bipartite graph with bipartition $(\bX_{t+1}, \bV_{t+1})$. The natural choice for this graph is $\bigcup_{i\in J_{t+1}} A_t^i$. Indeed, if we pick for every $x\in \bX_{t+1}$ its image $\sigma(x)$ among the neighbours of $x$ in $A_t^i$, then, since every such neighbour $v$ is a candidate for $x$, we will obtain a valid embedding of $H_{t+1}$ into $G_{t+1}$. However, not every perfect matching $\sigma$ in $\bigcup_{i\in J_{t+1}} A_t^i$ induces a $\phi_{t+1}$ which satisfies \ind{t+1}. For this, we also need to ensure that the embedding is rainbow, and that the new candidacy graphs are again super-regular. In order to show that a suitable $\sigma$ exists, we pick $\sigma$ randomly and show that the desired properties hold with positive probability. We will use Lemma~\ref{lem:switchings} to show that $\phi_{t+1}$ is rainbow (see~Claim~\ref*{claim:conflict free}), and the Four graphs lemma (Lemma~\ref{lem:four graphs}) to show that the new candidacy graphs are again super-regular (see~Claim~\ref*{claim:candidate update}). We now prepare the application of the Four graphs lemma by finding a suitable spanning subgraph $\tilde{A}_t$ of $\bigcup_{i\in J_{t+1}} A_t^i$ from which we will pick the perfect matching $\sigma$ uniformly at random. (The reason for the transition to $\tilde{A}_t$ is to satisfy condition~\eqref{four graphs condition}.) Recall that for every $ij\in E(R)$, the graph $H[X_i,X_j]$ is a perfect matching. For $i,j\in V(R)$ with $ij\in E(R)$, we define the bijection $\pi_{j\to i}\colon X_j\to X_i$ where $\pi_{j\to i}(x)$ is the unique $H$-neighbour of $x$ in $X_i$. Clearly, $\pi_{j\to i}^{-1}=\pi_{i \to j}$. Let $i\in J_{t+1}$. For $j\in N_R(i)\sm J_{t+1}^\ast$, we define the new bipartite auxiliary graph $P^j_i$ with bipartition $(X_i,V_j)$, where $xv\in E(P^j_i)$ if and only if $\pi_{i\to j}(x)v\in E(A_t^j)$. Clearly, $P^j_i$ is isomorphic to $A_{t}^j$ by construction. The reason for the definition of $P^j_i$ is that it forms a super-regular triple with $A_t^i$ and $G^\ast[V_i,V_j]$ on vertex set $X_i\cupdot V_i \cupdot V_j$ (cf.~Figure~\ref{fig:induction step}). We would like to apply the Four graphs lemma with $A_t^i$, $P^j_i$, and $G^\ast[V_i,V_j]$ playing the roles of $G_{1,2}$, $G_{1,3}$, and $G_{2,3}$, respectively. However, in order to satisfy condition~\eqref{four graphs condition}, we define $\tilde{A}_t^i$ as the spanning subgraph of $A_t^i$ which contains only those edges $xv\in E(A_t^i)$, $x\in X_i,v\in V_i$, that satisfy the following: for all $j\in N_R(i)\sm J_{t+1}^\ast$, we have \begin{align} |N_{P^j_i}(x)\cap N_{G^\ast[V_i,V_j]}(v)|=(d_jd'\pm \eps_t)n/r.
\label{four graphs condition satisfied} \end{align} By Proposition~\ref{prop:four graphs condition}, we still have that for every $i\in J_{t+1}$, \begin{align} \text{$\tilde{A}_t^i$ is $(2\Delta \sqrt{\eps_t},d_i)$-super-regular.} \label{sparsified regular} \end{align} Now, define \begin{align} \tilde{A}_t:=\bigcup_{i\in J_{t+1}} \tilde{A}_t^i. \end{align} Choose $\sigma$ uniformly at random among all perfect matchings of $\tilde{A}_t$. We will show that with positive probability, $\phi_{t+1}$ as defined in~\eqref{def new phi} satisfies \ind{t+1}, which then completes the proof. Note first that the graphs $(\tilde{A}_t^i)_{i\in J_{t+1}}$ are all vertex-disjoint. In particular, every perfect matching $\sigma$ of $\tilde{A}_t$ is the union of $|J_{t+1}|$ perfect matchings $\sigma_i\colon X_i \to V_i$, where $\sigma_i$ is a perfect matching in $\tilde{A}_t^i$. Consequently, every choice of $\sigma$ yields a $\phi_{t+1}$ which is valid. Moreover, a simple but crucial observation is that since $\sigma$ is chosen uniformly at random, each $\sigma_i$ is a uniformly random perfect matching of $\tilde{A}_t^i$. In order to make sure that $\phi_{t+1}$ is a rainbow embedding, we define a suitable system of conflicts for $\tilde{A}_t$ and then use Lemma~\ref{lem:switchings} to obtain a lower bound on the probability that $\sigma$ is conflict-free. We will subsequently use this lower bound in a union bound, hence a mere existence result would not be sufficient here. Consider $xv\in E(\tilde{A}_t)$ with $x\in \bX_{t+1},v\in \bV_{t+1}$. There is a unique $i\in J_{t+1}$ with $xv\in E(\tilde{A}_{t}^i)$, i.e.~$x\in X_i$, $v\in V_i$. Let $$F_{xv}:=\set{\phi_t(y)v}{y\in N_{H}(x)\cap \bX_t^\ast}.$$ By definition of $A_{t}^i$, we have that $F_{xv}\In E(G^\ast)$. In particular, $F_{xv}$ is the set of edges of $G^\ast$ which would be used in the embedding of $H$ if $x$ is mapped to $v$. For our embedding to be rainbow, we clearly need that the colour sets $(c(e))_{e\in F_{xv}}$ of the edges in $F_{xv}$ are pairwise disjoint, and disjoint from the set of colours used to embed $H_t$. That this is the case follows from Claim~\ref*{claim:colour separated}: the embedding of $H_t$ only uses colours from $C_t^\ast$, and since $\psi(i)=t+1>t$, the second part of the claim implies that the sets $(c(e))_{e\in F_{xv}}$ are pairwise disjoint subsets of $C_{t+1}$. Recall that $C_{t+1}\cap C_t^\ast=\emptyset$. Let $$Z_{xv}:=\bigcup_{e\in F_{xv}} c(e)$$ be the set of all colours which appear on the edges in $F_{xv}$ (cf.~Figure~\ref{fig:induction step}). By the above, $Z_{xv}\In C_{t+1}$. We define a system of conflicts $\cF$ for $E(\tilde{A}_t)$ as follows: two edges $xv,x'v'\in E(\tilde{A}_t)$ conflict, i.e.~$\Set{xv,x'v'}\in \cF$, if and only if $Z_{xv}\cap Z_{x'v'}\neq \emptyset$. Hence, if $\sigma$ is a conflict-free perfect matching of $\tilde{A}_t$, then $\phi_{t+1}\colon H_{t+1}\to G_{t+1}$ is a valid rainbow embedding.
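To illustrate these definitions, consider the situation depicted in Figure~\ref{fig:induction step}: there, the already embedded neighbours of $x$ are $x'$ and $x''$, so $F_{xv}=\Set{\phi_t(x')v,\phi_t(x'')v}$, and if, say, $c(\phi_t(x')v)=\Set{\alpha}$ and $c(\phi_t(x'')v)=\Set{\beta}$, then $Z_{xv}=\Set{\alpha,\beta}$; accordingly, $xv$ conflicts with precisely those edges of $\tilde{A}_t$ whose colour set contains $\alpha$ or $\beta$.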
\begin{figure}[t] \begin{center} \begin{tikzpicture}[scale = 0.7,every text node part/.style={align=center}] \def\ver{0.1} \draw[black!20,fill=black!20] (8,-1) rectangle (14,1); \draw (11,0) node {$G^*[V_i,V_{j}]$}; \draw (16,6) node {$H$}; \draw (16,0) node {$G^*$}; \draw (1,7.5) node {$X_{i'}$}; \draw (5,7.5) node {$X_{i''}$}; \draw (9,7.5) node {$X_{i}$}; \draw (15,7.5) node {$X_{j}$}; \draw (1,-1.5) node {$V_{i'}$}; \draw (5,-1.5) node {$V_{i''}$}; \draw (9,-1.5) node {$V_{i}$}; \draw (15,-1.5) node {$V_{j}$}; \filldraw[fill=black!20, draw=black!50] (7.3,6)--(8.7,6) -- (8.7,0) -- (7.3,0) -- cycle; \draw[] (8,5.8) -- (14,5.8) (8,6) -- (14,6) (8,6.2) -- node[anchor=south] {$\pi_{i\rightarrow j}$}(14,6.2); \draw[fill=black!50,black!50] (13.5,0) rectangle (14.5,6) ; \draw (16,3) node {$A_t^{j}$ \color{black!70}{$\to A_{t+1}^j$}}; \draw (6.8,4) node {$\tilde{A}_t^{i}$}; \draw (12.6,3) node {$P_i^{j}$}; \filldraw[fill=black!50, draw=black!50] (13.5,0) -- (14.5,0.5) -- (8.5,6.5) -- (7.5,6) --cycle; \draw[thick,->] (14,4) arc (90:130:4cm); \draw (12.5,4.2) node {$\pi_{j \rightarrow i}$}; \draw[very thick, fill=white] (0,0) ellipse (1 and 1.5) (4,0) ellipse (1 and 1.5) (8,0) ellipse (1 and 1.5) (14,0) ellipse (1 and 1.5) (0,6) ellipse (1 and 1.5) (4,6) ellipse (1 and 1.5) (8,6) ellipse (1 and 1.5) (14,6) ellipse (1 and 1.5); \filldraw[fill=black!50, draw=black!60] (8,6) -- (7.6,0) -- (8.4,0) -- cycle; \draw[fill=black!50, thick] (8,0) ellipse (0.4 and 0.6); \draw[fill] (8,6) circle (\ver) node[anchor=south west] {$x$} ; \draw[fill] (4,6) circle (\ver) node[anchor=south east] {$x''$} (0,6) circle (\ver) node[anchor=south east] {$x'$} (4,0) circle (\ver) node[anchor=south] {$\phi(x'')$} (0,0) circle (\ver) node[anchor=south] {$\phi(x')$} (8,0) circle (\ver) node[anchor=north] {$v$} ; \draw[thick] (0,6) .. controls (2,7) and (6,7) .. (8,6) (4,6) .. controls (5,6.5) and (7,6.5) .. (8,6) (0,0) .. controls (2,-1) and (6,-1) .. node[near end,anchor=north] {$\alpha$} (8,0) (4,0) .. controls (5,-0.5) and (7,-0.5) .. node[anchor=south] {$\beta$} (8,0) (8,0) -- node[anchor=west] {$\{\alpha,\beta\}$} (8,6) ; \draw[ultra thick,->] (0,4.5) -- node[anchor=west] {$\phi_t$}(0,1.5); \draw[ultra thick,->] (4,4.5) -- node[anchor=west] {$\phi_t$} (4,1.5) ; \draw[ultra thick,->,dashed] (7.5,4.7) -- node[anchor=east] {$\sigma_i$} (7.5,1.3) ; \end{tikzpicture} \end{center} \caption{A sketch of the induction step. Here, $i\in J_{t+1}$, that is, the cluster $X_i$ in the middle is currently being embedded into $V_i$ by choosing a random perfect matching $\sigma_i$ in the candidacy graph $\tilde{A}_t^{i}$. To the left, we have clusters that are already embedded, so $i',i''\in J_t^\ast$. The edge $xv$ receives all colours from edges from $v$ to already embedded neighbours of~$x$. This produces a conflict system for $\tilde{A}_t^{i}$. To the right, we have a cluster which will be embedded in a later round, so $j\in [r]\sm J_{t+1}^*$. The current candidacy graph $A_t^j$ is projected using the perfect matching $H[X_i,X_j]$. The random choice of $\sigma_i$ produces a subgraph $P^j_i(\sigma_i)$ of this projected candidacy graph $P_i^j$, which translates back to the new candidacy graph $A_{t+1}^j$ for $(X_j,V_j)$.} \label{fig:induction step} \end{figure} \begin{NoHyper} \begin{claim}\label{claim:conflict free} The probability that $\sigma$ is conflict-free is at least $\eul^{-\mu^{1/3} n}$. \end{claim} \end{NoHyper} \claimproof We first claim that $\cF$ is $4\mu \Delta^2 n$-bounded. Fix any edge $xv\in E(\tilde{A}_t)$.
Clearly, $|Z_{xv}|\le 2\Delta^2$ since $d_H(x,\bX_t^\ast)\le 2\Delta$ and $c$ is $(\mu n,\Delta)$-bounded. Fix a colour $\alpha\in Z_{xv}$. Note that if $\alpha \in Z_{x'v'}$, where $x'\in \bX_{t+1}$, $v'\in \bV_{t+1}$ and $x'v'\in E(\tilde{A}_t)$, then $\alpha$ was \defn{forced on $x'v'$} by some edge $v'v''\in E(G^\ast)$ with $v''\in \bV_{t}^*$, $\phi_t^{-1}(v'')x'\in E(H)$ and $\alpha\in c(v'v'')$. Note that if $v''\in \bV_{t}^*\sm V_0$, then $x'':=\phi_t^{-1}(v'')\in V(H)\sm X_0$ satisfies $d_H(x'',\bX_{t+1})=1$ by definition of $\psi$. Thus, $v'v''$ forces $\alpha$ on at most one edge $x'v'\in E(\tilde{A}_t)$. Since there are at most $\mu n$ edges $v'v''$ with $\alpha\in c(v'v'')$, we conclude that $\alpha$ is forced on at most $\mu n$ edges $x'v'\in \tilde{A}_t$ in this way. On the other hand, if $v''\in V_0$, then $v'v''$ forces $\alpha$ on at most $d_H(\phi_0^{-1}(v''),\bX_{t+1})\le d_H(\phi_t^{-1}(v''))$ edges $x'v'\in \tilde{A}_t$. Since there are at most $d_G^\alpha (v'')$ vertices $v'$ in $\bV_{t+1}$ such that $\alpha\in c(v'v'')$, the total number of edges $x'v'\in \tilde{A}_t$ on which $\alpha$ is forced by some $v'v''$ with $v''\in V_0$, can be bounded from above by $$\sum_{v''\in V_0} d^{\alpha}_G (v'') \cdot d_H(\phi_0^{-1}(v''),\bX_{t+1}) \le \sum_{x''\in X_0} d^{\alpha}_G (\phi_0(x''))\cdot d_H(x'') \overset{\ref{exc condition:high degrees}}{\leq} \mu n.$$ Therefore, every colour $\alpha$ satisfies $\alpha\in Z_{x'v'}$ for at most $2\mu n$ edges $x'v'\in E(\tilde{A}_t)$. Since $|Z_{xv}|\le 2\Delta^2$, the edge $xv$ lies in at most $4\mu \Delta^2 n$ conflicts, and hence $\cF$ is $4\mu \Delta^2 n$-bounded. By~\eqref{sparsified regular} and Proposition~\ref{prop:many switchings} (applied for each $i\in J_{t+1}$), there exists a perfect matching of $\tilde{A}_t$. Let $M$ be any perfect matching of $\tilde{A}_t$, and $e\in M$. We need to show that there are many $(e,M)$-switchable edges. Suppose that $e\in E(\tilde{A}_t^i)$ for $i\in J_{t+1}$. Let $M_i$ be the perfect matching of $\tilde{A}_t^i$ induced by $M$. Clearly, every $(e,M_i)$-switchable edge is also $(e,M)$-switchable. By~\eqref{sparsified regular}, we have that $\tilde{A}_t^i$ is $(2\Delta \sqrt{\eps_t},d_i)$-super-regular. By Proposition~\ref{prop:many switchings} (with $n/r,2\Delta \sqrt{\eps_t},d'^{t+1}$ playing the roles of $n,\eps,d$), there are at least $\frac{d'^{3(t+1)}}{2} (n/r)^2$ edges in $\tilde{A}_t^i$ that are $(e,M_i)$-switchable, and thus $(e,M)$-switchable. Hence, by Lemma~\ref{lem:switchings} (with $|J_{t+1}|n/r, 4\mu r\Delta^2/|J_{t+1}| , d'^{3(t+1)}/(2|J_{t+1}|^2)$ playing the roles of $n,\mu,\gamma$),\COMMENT{$n^\ast=|J_{t+1}|n/r$. Need that $4\mu \Delta^2 n \le \mu^\ast n^\ast$, i.e. $\mu^\ast\ge 4\mu r\Delta^2/|J_{t+1}|$. Need that $ \frac{d'^{3(t+1)}}{2} (n/r)^2 \ge \gamma^\ast n^{*2}$ and thus $\gamma^\ast \ge \frac{d'^{3(t+1)}}{2|J_{t+1}|^2}$} the probability that $\sigma$ is conflict-free is at least $\eul^{-\mu^{1/3} n}$. \endclaimproof It remains to show that with high enough probability, the updated candidacy graphs $A_{t+1}^j$ are super-regular. For this, we use the Four graphs lemma. \begin{NoHyper} \begin{claim}\label{claim:candidate update} For all $j\in [r]\sm J_{t+1}^\ast$, with probability at least $1-(1-a)^{n/r}$, the candidacy graph $A_{t+1}^j$ is $(\eps_{t+1},d'')$-super-regular for some $d''\ge d'^{t+2}$. \end{claim} \end{NoHyper} \claimproof Crucially, $A_{t+1}^j$ only depends on (at most) one of the $\sigma_i$. Consider $x\in X_j$ and $v\in V_j$.
Observe first that by~\eqref{def candidate set} and~\eqref{def new phi}, we have $xv\in E(A_{t+1}^j (\phi_{t+1}))$ if and only if $xv\in E(A_{t}^j (\phi_{t}))$ and $\sigma(y)v\in E(G^\ast)$ for all $y\in N_H(x)\cap \bX_{t+1}$. Recall that $j$ has at most one $R$-neighbour in $J_{t+1}$. If $j$ has no such neighbour, then $A_{t+1}^j=A_{t}^j$ and thus there is nothing to prove. Assume now that $i$ is the unique $R$-neighbour of $j$ in $J_{t+1}$. Thus, $\pi_{j\to i}(x)$ is the unique $H$-neighbour of $x$ in $\bX_{t+1}$. We conclude that $xv\in E(A_{t+1}^j)$ if and only if $xv\in E(A_{t}^j)$ and $\sigma_i(\pi_{j\to i}(x))v\in E(G^\ast)$. Thus, $A_{t+1}^j$ only depends on $\sigma_i$. Moreover, recalling the definition of $P^j_i$, we have that \begin{align} xv\in E(A_{t+1}^j) \quad \Leftrightarrow \quad \pi_{j\to i}(x)v \in E(P^j_i) \mbox{ and } \sigma_i(\pi_{j\to i}(x))v\in E(G^\ast). \label{four graphs identification} \end{align} Consider the triple $(\tilde{A}_t^i,P^j_i,G^\ast[V_i,V_j])$ on vertex set $X_i\cupdot V_i \cupdot V_j$. Recall that $\sigma_i\colon X_i\to V_i$ is a uniformly random perfect matching of $\tilde{A}_t^i$. Define $P^j_i(\sigma_i)$ as the spanning subgraph of $P^j_i$ which contains all edges $x'v\in E(P^j_i)$, with $x'\in X_i,v\in V_j$, for which $\sigma_i(x')v\in E(G^\ast[V_i,V_j])$. (Thus, $P^j_i(\sigma_i)$ is the graph $A_\sigma$ defined in the Four graphs lemma.) By~\eqref{four graphs identification}, $P^j_i(\sigma_i)$ is isomorphic to $A_{t+1}^j$. By~\eqref{sparsified regular}, $\tilde{A}_t^i$ is $(2\Delta \sqrt{\eps_t},d_i)$-super-regular. Moreover, by \ind{t}, $P^j_i$ (which is isomorphic to $A_t^j$) is $(\eps_t,d_j)$-super-regular. Also, recall that $G^\ast[V_i,V_j]$ is $(\eps_0,d')$-super-regular. Finally, condition~\eqref{four graphs condition} is satisfied by~\eqref{four graphs condition satisfied}. Therefore, we can apply the Four graphs lemma (Lemma~\ref{lem:four graphs}) as follows: \medskip { \noindent { \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c} $n/r$ & $a$ & $\eps_t^{1/3}$ & $\eps_{t+1}$ & $\tilde{A}_t^i$ & $P^j_i$ & $G^\ast[V_i,V_j]$ & $d_i$ & $d_j$ & $d'$ & $\sigma_i$ & $P^j_i(\sigma_i)$ \\ \hline $n$ & $a$ & $\eps$ & $\eps'$ & $G_{1,2}$ & $G_{1,3}$ & $G_{2,3}$ & $d_{1,2}$ & $d_{1,3}$ & $d_{2,3}$ & $\sigma$ & $A_\sigma$ \end{tabular} } } \medskip With probability at least $1-(1-a)^{n/r}$, the graph $P^j_i(\sigma_i)$ (and thus $A_{t+1}^j$) is $(\eps_{t+1},d_jd')$-super-regular. Note that $d_jd'\ge d'^{t+2}$, as required. \endclaimproof Using a union bound, it follows from Claims~\ref*{claim:conflict free} and~\ref*{claim:candidate update} that with probability at least $$\eul^{-\mu^{1/3} n}-r(1-a)^{n/r}>0,$$ $\sigma$ has the property that $\phi_{t+1}$ satisfies \ind{t+1}. This completes the proof. \endproof \section{Applications} \label{sec:apps} \subsection{Rainbow spanning trees in Dirac graphs}\label{sec:appstree} Although the following result is implied by the rainbow bandwidth theorem which we prove in the next subsection, we state and prove it here to illustrate an easy application of our blow-up lemma. \begin{theorem}\label{thm:diractrees} Suppose $1/n\ll\mu\ll \delta,1/\Delta$. Suppose $G$ is a graph on $n$ vertices such that $\delta(G)\geq (1/2+\delta)n$. Then, given any $\mu n$-bounded edge colouring of $G$, the graph $G$ contains any tree $T$ on $n$ vertices with $\Delta(T)\leq \Delta$ as a rainbow subgraph. 
\end{theorem} Koml\'os, S\'ark\"ozy and Szemer\'edi \cite{KSS:95} proved this result in the non-rainbow setting even before the development of the blow-up lemma (some of the ideas which later led to the blow-up lemma were already present there). With a blow-up lemma at hand, the proof becomes much shorter. It essentially boils down to distributing the vertices of $T$ evenly among the clusters of the regularity partition, with the reduced graph being a Hamilton cycle. For this, we need the following result, which essentially appears in \cite{JKKO:ta} and uses the fact that the symmetric Markov chain on an odd cycle mixes rapidly. \begin{lemma}[\cite{JKKO:ta}]\label{lem:randomwalk} Suppose $1/n\ll 1/r, 1/\Delta$ and $r$ is odd. Suppose $T$ is a tree on $n$ vertices with $\Delta(T)\leq \Delta$. Then there is a partition $(X_i)_{i\in[r]}$ of $V(T)$ such that \begin{enumerate}[label={\rm (\roman*)}] \item the endvertices of any edge of $T$ lie in two consecutive sets $X_{i},X_{i+1}$ (where $X_{r+1}$ is identified with $X_{1}$), \item $|X_i|=n/r\pm n/\log^2 n$ for all $i\in [r]$, and \item for every $i\in[r]$, the vertex set $X_{i+1}$ contains at least $2^{-\Delta-3}n/r$ vertices $v\in V(T)$ such that $N_T(v)\subseteq X_{i}$. \end{enumerate} \end{lemma} We remark here that for a cycle on $n$ vertices it is easy to find a vertex decomposition as described in Lemma~\ref{lem:randomwalk}. As the following proof only uses these properties of $T$ and the fact that $\Delta(T)\leq \Delta$, (the proof of) Theorem~\ref{thm:diractrees} also implies the existence of a rainbow Hamilton cycle under the same assumptions on $G$. \lateproof{Theorem~\ref{thm:diractrees}} Let $G$ and $T$ be as in the statement; in particular, we assume that some $\mu n$-bounded edge colouring of $G$ is given. Choose new constants $\eps,d$ such that $\mu\ll \eps \ll d \ll \delta,1/\Delta$. We apply the regularity lemma to $G$ and obtain a vertex partition $(V_i)_{i\in[r]_0}$ such that \begin{itemize} \item $|V_1|=\ldots = |V_r|$ for some odd $r$ (we may assume $\mu \ll 1/r \ll \eps$; in particular, $|V_i|\le n/r$), \item $|V_0| \leq \eps n$, and \item for every $i\in [r]$ and for all but at most $\eps r$ integers $j\in [r]\sm\{i\}$, the graph $G[V_i,V_j]$ is $\eps$-regular. \end{itemize} Let $R$ be the graph with vertex set $[r]$ where two vertices $i,j\in[r]$ are joined by an edge if $G[V_i,V_j]$ is lower $(\eps,d)$-regular. As $\delta(G)\geq (1/2+\delta)n$, it is not hard to see that $\delta(R)\geq (1/2+\delta/2)r$. Hence $R$ contains a Hamilton cycle $C$. Without loss of generality, we may assume that $1,2,\ldots,r$ appear in this order on~$C$. Below we always identify vertex sets $S_{r+1}$ with $S_1$. By Fact~\ref{fact:regularity}, one can remove at most $2\eps n/r$ vertices from every $V_i$ (call the new vertex sets $\widetilde{V}_i'$) such that $G[\widetilde{V}_i',\widetilde{V}_{i+1}']$ is lower $(4\eps,d)$-super-regular for all $i\in [r]$. For purposes that will become clear below, we remove even more vertices. So let us remove from every $V_i$ exactly $\eps^{1/2}n/r$ vertices (in total) and add them to $V_0$ (call the new vertex sets $V_i'$ and the enlarged exceptional set $V_0'$) such that $G[V_i',V_{i+1}']$ is lower $(2\eps^{1/2},d)$-super-regular for all $i\in [r]$ (where $V_{r+1}':=V_1'$).
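Such a removal is indeed possible by a standard argument: by Fact~\ref{fact:regularity}, for each $i\in[r]$ all but at most $2\eps n/r$ vertices $v\in V_i$ satisfy $d_G(v,V_{i\pm 1})\ge (d-\eps)|V_{i\pm 1}|$; removing these vertices first and then further arbitrary vertices until exactly $\eps^{1/2}n/r$ vertices have been removed from each $V_i$, every remaining vertex $v\in V_i'$ satisfies \begin{align*} d_G(v,V_{i+1}')\ge (d-\eps)|V_{i+1}|-\eps^{1/2}n/r\ge (d-2\eps^{1/2})|V_{i+1}'|, \end{align*} and similarly for $V_{i-1}'$; moreover, the lower $(2\eps^{1/2},d)$-regularity of $G[V_i',V_{i+1}']$ is inherited from the lower $(\eps,d)$-regularity of $G[V_i,V_{i+1}]$, as subsets of size at least $2\eps^{1/2}|V_i'|$ still have size at least $\eps|V_i|$.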
We claim that \begin{enumerate}[label=(\roman*)] \item $\eps^{1/2}n \leq |V_0'|\leq (\eps^{1/2}+\eps)n$, \item for every vertex $v\in V_0'$, there are at least $(1/2+\delta/2)r$ integers $i\in [r]$ such that $v$ has at least $d n/2r$ neighbours in $V_i'$, and \item for every $i\in [r]$, there are at least $(1/2+\delta/3)|V_0'|$ vertices in $V_0'$ that have at least $dn/2r$ neighbours in $V_i'$. \end{enumerate} Indeed, (i) is trivial. To see (ii), consider $v\in V_0'$. Clearly, $v$ has at most $|V_0'|\leq 2\eps^{1/2} n$ neighbours in $V_0'$ and at most $dn$ neighbours in clusters $V_i'$ in which $v$ has fewer than $d n/2r$ neighbours. As $d\ll \delta$ and $\delta(G)\geq (1/2+\delta)n$, statement (ii) follows. To see (iii), we fix some $i\in [r]$ and let $j\in N_R(i)$. As $G[V_i,V_j]$ is lower $(\eps,d)$-regular, among the $\eps^{1/2}n/r$ vertices we moved from $V_j$ to $V_0$, at least $(\eps^{1/2}-2\eps)n/r$ have at least $(d-\eps)|V_i|\ge dn/2r$ neighbours in $V_i$ by Fact~\ref{fact:regularity}. Now (iii) follows as $d_R(i)\geq (1/2+\delta/2)r$ and by (i). Next we apply Lemma~\ref{lem:randomwalk} to $T$ and obtain a vertex partition $(X_i)_{i\in[r]}$ of $V(T)$ such that the endvertices of every edge lie in consecutive parts (modulo $r$) and $|X_i|=n/r\pm n/\log^2 n$ for all $i\in[r]$. Moreover, $X_{i+1}$ contains at least $2^{-\Delta-3}n/r$ vertices such that all their neighbours lie in $X_{i}$; among those vertices we select a $2$-independent set $\widehat{X}_{i+1}$ of size exactly $|X_{i+1}|-|V_{i+1}'|$. For all $i\in[r]$, since $|X_i|= n/r \pm n/\log^2 n$ and $|V_i'|= (1- \eps^{1/2} \pm \eps)n/r$, we have that \begin{align}\label{eq:size hat X} |\widehat{X}_{i}|=(1\pm \delta/10)\eps^{1/2}n/r. \end{align} Let $X_i':=X_i\setminus \widehat{X}_{i}$. Hence $|X_i'|=|V_i'|$. Let $X_0':=\widehat{X}_1\cup \dots \cup \widehat{X}_r$. It is easy to see that $X_0'$ is $2$-independent and $|V_0'|=|X_0'|$. We wish to apply the rainbow blow-up lemma to the blow-up instance $(T,G,C,(X_i')_{i\in[r]_0},(V_i')_{i\in[r]_0})$. Beforehand, we need to find a feasible partial embedding $\phi_0\colon X_0'\to V_0'$ and define appropriate candidacy graphs for the remaining vertices. Next, we want to match the vertices in $V_0'$ and $X_0'$ such that whenever $v\in V_0'$ is matched to $x\in \widehat{X}_{i+1}$, the vertex $v$ has at least $dn/2r$ neighbours in $V_i'$. To this end, let $H$ be the bipartite graph with bipartition $(V_0',X_0')$ and join $v\in V_0',x\in \widehat{X}_{i+1}$ whenever $d_G(v,V_i')\ge dn/2r$. By (iii), we have $d_H(x)\geq (1/2+\delta/3)|V_0'|$ for all $x\in X_0'$. Moreover, for all $v\in V_0'$, we have $$d_H(v) \overset{{\rm (ii)},\eqref{eq:size hat X}}{\geq} (1/2+\delta/2)r\cdot (1-\delta/10)\eps^{1/2}n/r \overset{\rm(i)}{\geq} (1/2+\delta/3)|V_0'|.$$ Clearly, by Hall's theorem, $H$ contains a perfect matching $\phi_0\colon X_0'\to V_0'$. Now, consider $i\in [r]$. We need to define a suitable candidacy graph $A^i$. Note that at most $2\Delta \eps^{1/2}|X_i'|$ vertices of $X_i'$ have a neighbour in $X_0'$.\COMMENT{In particular, condition (iii) from the rainbow blow-up lemma is satisfied.} For these vertices we have to restrict their candidate sets. Now suppose that $x\in X_i'$ has a (unique) neighbour $x'$ in $X_0'$. Observe that we must have $x'\in \widehat{X}_{i+1}$. We define $N_{A^i}(x):=N_G(\phi_0(x'),V_i')$. By definition of $H$, we have that $|N_{A^i}(x)|\ge dn/2r$. For the vertices $x\in X_i'\setminus N_T(X_0')$, we can define $N_{A^i}(x):=V_i'$. 
Thus, $A^i$ is lower $(\eps^{1/3},d/2)$-super-regular. Moreover, we have met condition~\ref{exc condition:neighbourhoods}, and since $X_0'$ is $2$-independent, $\phi_0$ is $2\Delta\mu n$-feasible. Thus, since the blow-up instance $(T,G,C,(X_i')_{i\in[r]_0},(V_i')_{i\in[r]_0})$ with candidacy graphs $(A^i)_{i\in[r]}$ is lower $(\eps^{1/3},d/2)$-super-regular, an application of the rainbow blow-up lemma (Lemma~\ref{lem:blow up}) yields a rainbow embedding of $T$ into $G$. \endproof \subsection{Rainbow bandwidth theorem} \label{subsec:bandwidth} In this subsection, we apply the rainbow blow-up lemma to prove a rainbow bandwidth theorem (Theorem~\ref{thm:bandwsimple}). Due to the proof in~\cite{BST:09} being carried out in a very modular way, we can directly make use of important parts of the original proof. In fact, B\"ottcher, Schacht and Taraz~\cite{BST:09} proved a slightly more general result (and so do we). As mentioned earlier, the minimum degree threshold in the bandwidth theorem is in general not optimal. For example, if $H$ is a Hamilton cycle and $n$ is odd, then $\chi(H)=3$, yet $\delta(G)\ge n/2$ already suffices to find~$H$. Roughly speaking, this is because $H$ is essentially $2$-colourable, except that we need an additional colour for one vertex. This motivates the following definition. Suppose the vertex set of $H$ is $[n]$. For $\ell,x,y\in \mathbb{N}$, we say an $(\ell + 1)$-colouring $\sigma\colon V (H) \to [\ell]_0$ of $H$ is \defn{$(x, y)$-zero free} if for each $t \in [n]$, there exists a $t'$ with $t \leq t' \leq t+x$ such that $\sigma(u) \neq 0$ for all $u \in [t', t'+y]$. The following theorem can provide the optimal threshold for such graphs $H$, for example if $H$ is the $(\ell-1)$-st power of a Hamilton cycle. We remark however that it does not always yield the optimal threshold, e.g.~for $C_5$-factors the optimal threshold is $3/5$ (cf.~\cite{KO:09}). \begin{theorem}\label{thm:bandwidth} Suppose $1/n\ll \mu , \beta \ll \delta,1/\Delta,1/\ell$. Let $H$ be a graph on vertex set $[n]$ with $\Delta(H)\leq \Delta$. Assume that $H$ has an $(\ell+1)$-colouring that is $(8\ell \beta n,4\ell \beta n)$-zero free and uses colour $0$ at most $\beta n$ times. Also assume that $\max \{|i-j|\colon ij\in E(H)\}\le \beta n$. Suppose $G$ is a graph on $n$ vertices with $\delta(G)\geq (1-1/\ell+\delta)n$. Then, given any $\mu n$-bounded edge colouring of $G$, there is a rainbow copy of $H$ in~$G$. \end{theorem} Obviously Theorem~\ref{thm:bandwidth} implies Theorem~\ref{thm:bandwsimple}, and the remaining part of this section is devoted to the proof of Theorem~\ref{thm:bandwidth}. Its proof in~\cite{BST:09} without the rainbow condition is based on three results, the so-called `Lemma for $G$', `Lemma for $H$' and `partial embedding lemma' (see Lemmas~6, 8 and~9 in~\cite{BST:09}). The first two are strong structural decomposition results for $G$ and $H$, respectively, which prepare the application of the blow-up lemma and require the main part of the work done in~\cite{BST:09}. Fortunately, for the proof of Theorem~\ref{thm:bandwidth} we can use the Lemmas for $G$ and $H$ without any alterations. The partial embedding lemma deals with a very small set of exceptional vertices that cannot be handled by the blow-up lemma directly. In~\cite{BST:09}, this is by far the easiest part and follows by standard techniques. However, it does not easily translate to the rainbow setting and hence we give a proof later in this section using the methods developed in this paper. 
We now state the partial embedding lemma for the rainbow setting, which replaces Lemma 9 from \cite{BST:09}. Roughly speaking, it yields a rainbow embedding $\phi$ of a small exceptional set $X$ such that every outside neighbour $y\in Y:=N_H(X)\sm X$ of vertices in $X$ still has a large candidate set (and will later be embedded by the blow-up lemma). \begin{lemma}\label{lem:partemb} Suppose $1/n\ll\mu, \eps \ll d' \ll d,1/\Delta$ and $\mu \ll 1/r$. Suppose $R$ is a graph on~$[r]$. Suppose $G$ is a graph on $n$ vertices with vertex partition $(V_i)_{i\in [r]}$ such that $G[V_i,V_j]$ is lower $(\eps,d)$-regular whenever $ij\in E(R)$ and $|V_i|=(1\pm \eps)n/r$. Suppose $H$ is a graph on at most $\eps n/r$ vertices with vertex partition $(X_1,\ldots,X_r,Y_1,\ldots,Y_r)$ such that $\Delta(H)\leq \Delta$ and $Z_i:=X_i\cup Y_i$ is an independent set for all $i\in [r]$. Moreover, whenever $z_iz_{j}\in E(H)$ with $z_i\in Z_i$, $z_{j}\in Z_{j}$, then $ij\in E(R)$. Let $X:= X_1\cup \ldots \cup X_r$ and $Y:=Y_1\cup \ldots \cup Y_r$. Given any $\mu n$-bounded edge colouring $c\colon E(G)\to C$ of $G$, there exists a set of colours $C'\In C$, a rainbow embedding $\phi\colon H[X]\to G_{C\sm C'}$ and candidate sets $S_y\subseteq V_i$ for all $y\in Y_i$ and $i\in [r]$ such that the following hold: \begin{enumerate}[label={\rm (\roman*)}] \item $\phi(X_i)\subseteq V_i$ for all $i\in [r]$; \item for all $y\in Y$, we have $|S_y|\geq d' n/r$ and for all $x\in N_H(y)\cap X$ we have $S_y\In N_{G_{C'}}(\phi(x))$; \item for all $y\in Y$, all distinct $x,x'\in N_H(y)\cap X$ and all $v\in S_y$, we have $c(\phi(x)v)\neq c(\phi(x')v)$. \end{enumerate} \end{lemma} With the Lemmas for $G$ and $H$ at hand, the original proof of the bandwidth theorem in~\cite{BST:09} works essentially verbatim also for Theorem~\ref{thm:bandwidth}, by replacing the partial embedding lemma with the rainbow partial embedding lemma and the blow-up lemma with the rainbow blow-up lemma. We provide a proof for completeness, but focus on the differences to the original proof. For integers $r,\ell$, let $K^r_\ell$ denote the vertex-disjoint union of $r$ cliques of order~$\ell$. This will be a spanning subgraph of the reduced graph $R$ indicating super-regular pairs, and the blow-up lemma will be applied with reduced graph $K^r_\ell$. In~\cite{BST:09}, $K^r_\ell$ is found within $R$ inside a so-called `backbone', that is, the $\ell$-cliques in $K^r_\ell$ form a sequence where two consecutive cliques are joined almost completely in~$R$. The purpose of this backbone is to allow changing the sizes of the clusters $V_i$ in $G$ slightly to match the cluster sizes of the $H$-partition given by the Lemma for~$H$. \lateproof{Theorem~\ref{thm:bandwidth}} Choose new constants $d,d',\eps,\xi,r',r''$ such that we have the following hierarchy of parameters \begin{align*} 1/n\ll \mu , \beta \ll \xi \ll 1/r'' \ll 1/r' \ll \eps \ll d'\ll d \ll \delta, 1/\Delta, 1/\ell. \end{align*} Suppose now that $G$ and $H$ are as in the statement. 
In exactly the same way as in~\cite{BST:09}, we first use the structural results for $G$ (Lemma~6 in~\cite{BST:09}) and $H$ (Lemma~8 in~\cite{BST:09}), respectively, to obtain a partition $(V_i)_{i\in [\ell r]}$ of $V(G)$, a partition $(X_i\cup X_i')_{i\in [\ell r]}$ of $V(H)$ and a graph $R$ on $[\ell r]$ for some $r'\leq r\leq r''$ such that the following hold: \begin{enumerate}[label=(\roman*)] \item $|\bigcup_{i\in [\ell r]}X_i'|\leq \xi n$; \item $|X_i\cup X_i'|= |V_i|= (1\pm \xi)\frac{n}{\ell r}$ for all $i\in [\ell r]$; \item $R$ contains a $K_\ell$-factor $K^r_\ell$ such that $G[V_i,V_j]$ is lower $(\eps,d)$-regular for all $ij\in E(R)$ and even lower $(\eps,d)$-super-regular if $ij\in E(K^r_\ell)$; \item for all $xy\in E(H)$ with $x\in X_i\cup X_i'$ and $y\in X_j\cup X_j'$, we have $ij\in E(R)$, and moreover we have $ij\in E(K_\ell^r)$ if $x\in X_i$ and $y\in X_j$. \end{enumerate} For the first time, we need to make an easy amendment for the rainbow setting. Let $c\colon E(G) \to C$ be any $\mu n$-bounded edge colouring. Note that we may assume that $|C|\le 3\mu^{-1}n$, say. (Otherwise, there would be two colour classes with combined size at most $\mu n$ and we could merge them.) We split the colour set $C$ into two sets $C_1,C_2$ such that $G_{C_i}$ still has $R$ as a reduced graph and all lower (super-)regular pairs halve their density. This can be achieved with Lemma~\ref{lem:separate colours}. Let $X_0:=\bigcup_{i\in [\ell r]}X_i'$. Next we use the (rainbow) partial embedding lemma (Lemma~\ref{lem:partemb} instead of Lemma~9 in~\cite{BST:09}), with $X_0,N_H(X_0)\sm X_0,H[X_0\cup N_H(X_0)]$ playing the roles of $X,Y,H$, to obtain a set $C'\In C_1$, a rainbow embedding $\phi_0\colon H[X_0] \to G_{C_1\sm C'}$ and a candidate set $S_y$ for each $y\in N_H(X_0)\sm X_0$ such that the following hold: \begin{enumerate}[label={\rm (\alph*)}] \item $\phi_0(X_i')\subseteq V_i$ for all $i\in [\ell r]$; \label{partial embedding location} \item for all $y\in N_H(X_0)\sm X_0$ with $y\in X_i$ for some $i\in[\ell r]$, we have $S_y\In V_i$ and $|S_y|\geq d' n/\ell r$, and for all $x\in N_H(y)\cap X_0$, we have $S_y\In N_{G_{C'}}(\phi_0(x))$; \label{partial embedding candidates} \item for all $y\in N_H(X_0)\sm X_0$, all distinct $x,x'\in N_H(y)\cap X_0$ and all $v\in S_y$, we have $c(v\phi_0(x))\neq c(v\phi_0(x'))$. \label{partial embedding rainbow} \end{enumerate} Let $V_0':=\phi_0(X_0)$ and $V_i':=V_i\setminus V_0'$ for all $i\in [\ell r]$. By~\ref{partial embedding location}, we have $|V_i'|=|X_i|$ for all $i\in [\ell r]$. Let $H':=H-H[X_0]$. Now we want to use the rainbow blow-up lemma to embed $H'$ into $G_{C_2\cup C'}$.\COMMENT{Adding edges with colours in $C'$ doesn't destroy the lower regularity of $G_{C_2}$} As a blow-up instance we use $(H',G_{C_2\cup C'},K^r_\ell,(X_i)_{i\in [\ell r]_0},(V_i')_{i\in [\ell r]_0})$ with the obvious candidacy graphs, where the candidate set of $y\in N_H(X_0)\sm X_0$ is~$S_y$ and there are no restrictions for other vertices. From~\ref{partial embedding candidates},~\ref{partial embedding rainbow} and the fact that $\Delta(H)\leq \Delta$, we conclude that $\phi_0$ is $2\Delta \mu n$-feasible. Thus, Lemma~\ref{lem:blow up} yields a rainbow embedding $\phi$ of $H'$ into $G_{C_2\cup C'}$ which extends~$\phi_0$. Since $\phi_0$ and $\phi$ use distinct colours, this is a rainbow embedding of~$H$, completing the proof. 
\endproof It remains to prove Lemma~\ref{lem:partemb}, for which we need the following straightforward consequence of Lemma~\ref{lem:switchings} and Proposition~\ref{prop:many switchings}. \begin{cor}\label{cor:manyswitch} Suppose $1/n\ll \mu, \eps \ll d$ and $\mu\ll 1/r$. Let $V_1,\ldots,V_r$ be disjoint vertex sets such that $|V|=n$, where $V:=\bigcup_{i\in[r]}V_i$, and $n/(2r)\leq |V_i|\leq 2n/r$ for all $i\in [r]$. Suppose $X_1,\ldots,X_r$ are disjoint vertex sets such that $|X_i|\leq \eps n/r$ for all $i\in [r]$. Let $X:=\bigcup_{i\in[r]}X_i$. Suppose $G$ is a bipartite graph with vertex partition $(X,V)$ and suppose every edge in $G$ joins $X_i$ and $V_i$ for some $i\in [r]$. Suppose that $d_G(x)\geq dn/r$ for all $x\in X$. Then, given any $\mu n$-bounded system of conflicts for $G$, there is a conflict-free matching covering $X$. \end{cor} \proof Define a supergraph $G'$ of $G$ on $2n$ vertices as follows. First we add to every set $X_i$ exactly $|V_i|-|X_i|$ new vertices and call the new set $X_i'$. Let $X':=\bigcup_{i\in[r]}X_i'$. Next, we join every vertex $x\in X_i'\sm X_i$ to all vertices in $V_i$. It is easy to see that $G'[X_i',V_i]$ is lower $(4\eps,d/2)$-super-regular for all $i\in [r]$. Every conflict system of $G$ easily transfers to a conflict system of $G'$. Observe that by Proposition~\ref{prop:many switchings}, for every $i\in [r]$, the graph $G'[X_i',V_i]$ has a perfect matching, and thus $G'$ has one. In addition, for every perfect matching $M$ of $G'$ and every edge $e\in M$, there are at least $d^3/9\cdot n^2/(4r^2)$ edges that are $(e,M)$-switchable. Hence, Lemma~\ref{lem:switchings} implies the existence of a conflict-free perfect matching $M'$ in $G'$. Let $M\subseteq M'$ be the matching obtained by deleting all edges incident to $X'\setminus X$. Consequently, $M$ is a conflict-free matching of $G$ covering $X$. \endproof The following proof is reminiscent of our main proof in Section~\ref{subsec:main proof} in that we proceed in rounds and reserve exclusive colours for each round. It is much simpler though since we only need to embed a small fraction of all vertices. \lateproof{Lemma~\ref{lem:partemb}} As $\Delta(H)\leq \Delta$, we can colour $H^2$ with $T:=\Delta^2+1$ colours. Hence, there exists a partition of $X$ into sets $X^1,\ldots,X^{T}$ which are $2$-independent in~$H$.\COMMENT{independent partition of $X_1,\dots,X_r$} We define $X^{T+1}:=Y$. We will proceed in $T$ rounds and embed in round $t\in [T]$ all vertices of $X^t$, whilst keeping track of candidate sets for the remaining vertices. Beforehand, we apply Lemma~\ref{lem:separate colours}\COMMENT{actually just McDiarmid here} to partition $C$ into sets $\{C_{t_1t_2}\colon t_1t_2\in \binom{[T+1]}{2}\}$ such that $G_{C_{t_1t_2}}[V_i,V_j]$ is lower $(2\eps,\hat{d})$-regular for all $t_1t_2\in \binom{[T+1]}{2}$ and all $ij\in E(R)$, where $\hat{d}:=d/\binom{T+1}{2}$. Define $G^{t_1t_2}:=G_{C_{t_1t_2}}$. In the beginning we set $S_z(0):=V_i$ for all $i\in [r]$ and $z\in Z_i$. 
We claim that after round~$t$, we have \begin{enumerate}[label=(\alph*)] \item a rainbow embedding $\phi^t$ of $H^t:=H[X^1\cup \dots \cup X^t]$ such that for every edge $xy\in E(H^t)$ with $x\in X^{t_1}$ and $y\in X^{t_2}$, we have $c(\phi^t(xy))\in C_{t_1t_2}$; \item $|S_z(t)|\geq (\hat{d}/2)^t\cdot n/2r$ and $S_z(t)\subseteq V_{i}\setminus \phi^{t}(V(H^{t}))$ for all $i\in[r]$ and $z\in Z_i \sm V(H^{t})$; \item for all $t'\in [t]$, $t''\in [T+1]\setminus [t]$, and all $x'\in X^{t'},x''\in N_H(x')\cap X^{t''}$, we have $S_{x''}(t)\subseteq N_{G^{t't''}}(\phi^t(x'))$. \end{enumerate} Indeed, our claim is true for $t=0$. Now let $t>0$ and suppose it is true for $t-1$. We first slightly restrict the candidate sets of the vertices $x\in X^t$. Consider any $x\in X^t$. By Fact~\ref{fact:regularity}, for all but at most $\eps^{1/2}n/r$ vertices $v\in S_x(t-1)$, we have \begin{align} d_{G^{tt'}}(v,S_{x'}(t-1))\ge 3\hat{d}/4 \cdot |S_{x'}(t-1)| \overset{\rm{(b)}}{\ge} 3\hat{d}/4 \cdot (\hat{d}/2)^{t-1} \cdot n/2r \label{candidate shrink} \end{align} for all $t'>t$ and $x'\in N_H(x)\cap X^{t'}$. Let $S_x(t)$ be obtained from $S_x(t-1)$ by removing all those vertices $v\in S_x(t-1)$ for which~\eqref{candidate shrink} does not hold for some $t',x'$. This will ensure that (b) and (c) hold again for the next step. We now embed $X^t$ by picking for every $x\in X^t$ a suitable image from $S_x(t)$. Let $V:=V(G)$. We define a bipartite graph $J$ with bipartition $(X^t,V\setminus \phi^{t-1}(V(H^{t-1})))$ and we join every $x\in X^t$ to all vertices in $S_x(t)$. To ensure that the new embedding is again rainbow, we define a system of conflicts for $E(J)$. For an edge $xv\in E(J)$ with $x\in X^t$ and $v\in S_x(t)$, let $F_{xv}$ be the set of edges $\phi^{t-1}(x')v$ for $x'\in N_H(x)\cap V(H^{t-1})$. By (c), we have $F_{xv}\In E(G)$. Moreover, the colours of the edges in $F_{xv}$ are distinct as the sets $X^{t'}$ are $2$-independent. We let two distinct edges $xv,x'v'\in E(J)$ conflict if $c(F_{xv})\cap c(F_{x'v'})\neq \emptyset$. It is easy to see that this gives a $\mu n$-bounded conflict system. We apply Corollary~\ref{cor:manyswitch} and obtain a conflict-free matching $\sigma \colon X^t\to V\setminus \phi^{t-1}(V(H^{t-1}))$. Now, let $\phi^t$ be the extension of $\phi^{t-1}$ defined by $\phi^t(x):=\sigma(x)$ for all $x\in X^t$. Observe that $\phi^t$ is a rainbow embedding of $H^t$ with the properties required for~(a). Next we update the candidate sets $S_x$ for all $x\in X^{t'}$ with $t'>t$. If $x$ has no neighbour in $X^t$, we only remove $\phi^t(X^t)$ from $S_x(t-1)$ to obtain $S_x(t)$. If $x'\in X^t$ is the neighbour of $x$ (there is at most one for every $x$), we define $$S_x(t):=(S_x(t-1)\cap N_{G^{tt'}}(\sigma(x')))\sm \phi^t(X^t).$$ This automatically ensures that (c) holds again for~$t$. Moreover, by~\eqref{candidate shrink} and since $|X^t|\leq \eps n/r$, we deduce that $|S_x(t)|\ge 3\hat{d}/4 \cdot |S_{x}(t-1)|- \eps n/r \ge (\hat{d}/2)^t\cdot n/2r$, hence (b) also holds for $t$. Assume now that the claim holds for $T$. Let $C':=\bigcup_{t\in [T]}C_{t,T+1}$. By (a), $\phi:=\phi^T$ is a rainbow embedding of $H[X]$ into $G_{C\sm C'}$. For all $y\in Y$, let $S_y:=S_y(T)$. It remains to check that (i)--(iii) hold. Clearly, (i) holds as we have $S_z\In V_i$ for all $z\in Z_i$ and all $i\in[r]$ throughout the embedding. Since $d'\leq (\hat{d}/2)^{T}/2$, we have $|S_y|\geq d' n/r$ for all $y\in Y$ by~(b). 
Moreover, from (c) we deduce that for all $y\in Y$, $t\in [T]$ and $x\in N_H(y)\cap X^t$, we have $S_y\In N_{G^{t,T+1}}(\phi(x))$. Since $G^{t,T+1}\In G_{C'}$, this establishes (ii). Finally, for all $y\in Y$, all distinct $x,x'\in N_H(y)\cap X$ and all $v\in S_y$, we have $c(\phi(x)v)\neq c(\phi(x')v)$, i.e.~(iii) holds. Indeed, by the above, we have $c(\phi(x)v)\in C_{t,T+1}$ and $c(\phi(x')v)\in C_{t',T+1}$, where $x\in X^t,x'\in X^{t'}$. Since we partitioned $X$ into $2$-independent sets, $t$ and $t'$ are distinct and thus $C_{t,T+1}\cap C_{t',T+1}=\emptyset$. This completes the proof. \endproof \section{Concluding remarks} \label{sec:conclusion} We have proved a rainbow blow-up lemma for $\mu n$-bounded edge colourings and have used it to prove a rainbow bandwidth theorem in this setting and to show that every bounded degree spanning graph exists as a rainbow subgraph in quasi-random graphs of arbitrarily small fixed density. We conclude this paper with the following remarks: \begin{itemize} \item In fact, our blow-up lemma applies to slightly more general systems of conflicts (see beginning of Section~\ref{sec:main proof}), allowing us, for instance, to obtain embeddings which are simultaneously rainbow with respect to several given edge colourings. A natural question is whether a blow-up lemma still holds for arbitrary $\mu n$-bounded conflict systems (as defined in Section~\ref{sec:conflictM}). The bottleneck in the current proof is that it relies on the colour splitting technique, which seems to be limited to `highly transitive' conflict systems. \item Note that in order to guarantee a rainbow copy of a graph $H$ with maximum degree $\Delta$ in a graph $G$ of density $d$, the given edge-colouring of $G$ needs to be $dn/\Delta $-bounded, as otherwise there may be fewer than $\Delta n/2$ different colours available. In particular, our theorems are optimal up to the value of the constant~$\mu$. As noted before, the constant $\mu$ in our theorems is very small. In particular, in an embedding obtained with the rainbow blow-up lemma, only a small fraction of the colours available in $G$ is used for the embedding. By contrast, affirmative answers to many open rainbow conjectures would imply that (almost) all colours need to be used. As mentioned in Section~\ref{sec:intro}, Kim, K\"uhn, Kupavskii and Osthus used the rainbow blow-up lemma on a small random subset (of vertices and colours) to complete a partial embedding (even a partial approximate decomposition), thus effectively using almost all colours. We expect further applications in this direction. \item There has been some exciting progress towards rainbow decompositions of properly edge-coloured complete graphs, for instance in~\cite{BLM:17,PS:17} it is shown that there are $\Omega(n)$ edge-disjoint rainbow spanning trees in every properly edge-coloured~$K_n$. For $\mu n$-bounded edge colourings, it might be possible to achieve (approximate) rainbow decompositions for any prescribed collection of bounded degree graphs. For instance, it was proved in~\cite{AST:95} that for every $\mu n$-bounded edge colouring of $K_{n,n}$, there is a decomposition into rainbow perfect matchings provided $\mu$ is small enough and $n$ is a power of~$2$. We conjecture that for any $\mu n$-bounded edge colouring of~$K_n$, there exists a decomposition into rainbow Hamilton cycles (provided $n$ is odd). 
Similarly, we conjecture that for any collection of $n$-vertex graphs $H_1,\dots,H_t$ with bounded degree and $\sum_{i=1}^t e(H_i) \le (1-\alpha)e(K_n)$ and any $\mu n$-bounded edge colouring of~$K_n$, where $\mu \ll \alpha$, the graphs $H_1,\dots,H_t$ pack edge-disjointly into $K_n$ such that each subgraph is rainbow. In the uncoloured setting, this was proved in~\cite{KKOT:ta}. \end{itemize} \section*{Acknowledgement} We are grateful to David Harris as well as Matthew Coulson and Guillem Perarnau for helpful discussions on their papers. \bibliographystyle{amsplain_v2.0customized}
{ "redpajama_set_name": "RedPajamaArXiv" }
3,635
<?php /** * Author: Bui Cong Dang (dangbcd6591@seta-asia.com.vn) * File Class/Controler/Model **/ class Model_Mpartner extends Fuel\Core\Model_Crud { protected static $_table_name = 'm_partner'; protected static $_primary_key = 'partner_code'; protected static $_properties = array( 'partner_code', 'user_id', 'edit_data', 'type', 'master_num', 'branch_name', 'zipcode', 'addr1', 'addr2', 'addr3', 'tel', 'fax', 'billing_department', 'billing_tel', 'billing_fax', 'billing_deadline_day', 'payment_day', 'billing_start_date', 'bank_name', 'bank_branch_name', 'bank_account_number', 'notes', 'status', 'created_at', 'updated_at', 'bank_type', 'm_group_id', 'department_id', 'usami_branch_code', ); protected static $_observers = array( 'Orm\Observer_CreatedAt' => array( 'events' => array('before_insert'), 'mysql_timestamp' => true, 'property' => 'created_at', 'overwrite' => true, ), 'Orm\Observer_UpdatedAt' => array( 'events' => array('after_update'), 'mysql_timestamp' => true, 'property' => 'updated_at', 'overwrite' => true, ), ); public static function _set($data = []) { $fields = array(); if( ! isset($data['zipcode'])) $fields['zipcode'] = $data['zipcode_p1'].$data['zipcode_p2']; if (isset($data['tel_1']) && isset($data['tel_1']) != '') { $data['tel'] = $data['tel_1'].'-'.$data['tel_2'].'-'.$data['tel_3']; } if (isset($data['fax_1']) && isset($data['fax_1']) != '') { $data['fax'] = $data['fax_1'].'-'.$data['fax_2'].'-'.$data['fax_3']; } foreach ($data as $k => $v) { if(in_array($k,self::$_properties)) { $fields[$k] = ($v != '') ? $v : null; } } return $fields; } /** * @author DangBc <dang6591@seta-asia.com.vn> * @param is where */ public function _where($filter = array(), $select = '') { if($select == 'autocomplete') { $is_where = DB::select('m_group.name','m_partner.branch_name','m_partner.partner_code')->from('m_partner')->join('m_group', 'INNER')->on('m_partner.m_group_id', '=', 'm_group.m_group_id')->order_by('m_partner.created_at','desc'); } else { $is_where = DB::select('*')->from('m_partner')->join('m_group', 'INNER')->on('m_partner.m_group_id', '=', 'm_group.m_group_id')->order_by('m_partner.created_at','desc'); } if (isset($filter['type']) && $filter['type'] != '') { $is_where->where('type', '=', $filter['type']); } if (isset($filter['addr1']) && $filter['addr1'] != '') { $is_where->where('addr1', '=', $filter['addr1']); } if (isset($filter['partner_code']) && $filter['partner_code'] != '') { $is_where->where('partner_code', '=', $filter['partner_code']); } if (isset($filter['status'])) { $is_where->where('status', '=', $filter['status']); } if (isset($filter['keyword']) && $filter['keyword'] != '') { $arr_keyword = array_filter(preg_split('/\s|\s+| /', trim($filter['keyword']))); $is_where->and_where_open(); $is_where->and_where_open(); foreach($arr_keyword as $k => $v) { $is_where->where(\Fuel\Core\DB::expr('CONCAT(m_group.name, m_partner.branch_name)'), 'like', '%'.$v.'%'); } $is_where->and_where_close(); $is_where->and_where_close(); } if(isset($filter['department_id']) && $filter['department_id'] != '') { $is_where->where('department_id','=',$filter['department_id']); } if(isset($filter['group_id']) && $filter['group_id'] != '') { $is_where->where('m_partner.m_group_id','=',$filter['group_id']); } if (isset($filter['limit'])) { $is_where->limit($filter['limit']); } if (isset($filter['offset'])) { $is_where->offset($filter['offset']); } return $is_where; } /** * @author DangBc <dang6591@seta-asia.com.vn> * @param count data */ public function count_data($filters = array()) { $query = 
$this->_where($filters); return count($query->execute()); } /** * @author DangBc <dang6591@seta-asia.com.vn> * @param Get filter_partner */ public function get_filter_partner($filter, $select = '*') { $result = $this->_where($filter,$select); return $result->execute()->as_array(); } /* * Get list by type * * @since 05/11/2015 * @author Ha Huu Don<donhh6551@seta-asia.com.vn> */ public function get_list_by_type($type = 1) { $query = DB::select(DB::expr('DISTINCT m_group_id')) ->from(self::$_table_name) ->where('type', $type) ->execute() ->as_array(); $list_partner = array(); if($query) { $list_partner = array_column($query, 'm_group_id'); } return $list_partner; } /* * Get info by user_id * * @since 16/12/2015 * @author Ha Huu Don<donhh6551@seta-asia.com.vn> */ public function get_info_by_userid($user_id) { return DB::select('*') ->from(self::$_table_name) ->where('user_id', $user_id) ->execute() ->as_array(); } //Filter User in department public static function get_filter_user_department($id = null) { $select = DB::select('user_id','name')->from('m_user')->where('department_id','=',$id); return $select->execute()->as_array(); } //Get department to user_id public static function get_department_user($user_id = null) { $department_id = null; if( ! isset($user_id)) return false; if(\Model_Muser::find_by_pk($user_id)) $department_id = \Model_Muser::find_by_pk($user_id)->department_id; return $department_id; } //Get partner code in department public static function get_partnercode_department($department_id) { $select = \Fuel\Core\DB::select('partner_code')->from(self::$_table_name)->where('department_id','=',$department_id); return $select->execute()->as_array(); } /** * @author DangBc <dang6591@seta-asia.com.vn> * @param Get partner group using presenter */ public function delete_partner($partner_id) { if( ! isset($partner_id) or ! $partner = Model_Mpartner::find_by_pk($partner_id)) { Session::set_flash('error','取引先は存在しません'); Response::redirect('master/partners/?'.Session::get('url_filter_partner')); } try{ if($partner->delete()) { return true; } else return false; } catch(Exception $ex) { return false; } } public function approval_partner($partner_id) { if( ! isset($partner_id) || ! $partner = \Model_Mpartner::find_by_pk($partner_id)) { Session::set_flash('error','取引先は存在しません'); Response::redirect('master/partners/?'.Session::get('url_filter_partner')); } //Get json from field edit_data $edit_data = json_decode($partner->edit_data,true); //Set array partner to save array field //Check group in json exits if($edit_data and ! 
Model_Mgroups::find_by_pk($edit_data['m_group_id'])) { Session::set_flash('error','取引先グループは存在しません'); Response::redirect('master/partners/?'.Session::get('url_filter_partner')); } if(isset($edit_data)) $arr_partner = \Model_Mpartner::_set($edit_data); $arr_partner['status'] = \Constants::$_status_partner['approval']; $arr_partner['edit_data'] = null; $partner->set($arr_partner); if($partner->save()) { return true; } return false; } /** * @author DangBc <dang6591@seta-asia.com.vn> * @param Get all partner name */ public function get_partner_name() { $select = DB::select('branch_name'); $select->from('m_partner')->distinct(true)->order_by('created_at', 'desc'); return $select->execute()->as_array(); } /** * @author DangBc <dang6591@seta-asia.com.vn> * @param Get partner group using presenter */ public function get_partner_group($idgroup = null, $type = null) { $select = DB::select('partner_code','branch_name','m_group_id'); $select->from('m_partner'); if(isset($idgroup)) { $select->where('m_group_id','=',$idgroup); } if(isset($type)) { $select->where('type','=',$type); } return $select->execute()->as_array(); } /** * @author NamDD <namdd6566@seta-asia.com.vn> * @param string $where * @return object */ public function get_list_partner($where ='type=1') { $sql = 'SELECT * FROM m_partner WHERE '.$where; return Fuel\Core\DB::query($sql)->execute(); } public function get_list_partner_login($user_id) { $sql = 'SELECT * FROM m_partner WHERE user_id = '.$user_id; return Fuel\Core\DB::query($sql)->execute(); } public function get_list_data($config) { $obj = static::forge()->find($config); if(count($obj)) { return $obj; } return array(); } }
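/*
 * Usage sketch (illustrative only, not part of this model): how a controller
 * might call the filtering helpers above. The filter values and the
 * surrounding FuelPHP bootstrap are assumptions made for the example.
 *
 *     $model  = Model_Mpartner::forge();
 *     $filter = array(
 *         'type'    => 1,        // partner type to list
 *         'keyword' => 'Tokyo',  // matched against CONCAT(group name, branch name)
 *         'limit'   => 20,
 *         'offset'  => 0,
 *     );
 *     $rows  = $model->get_filter_partner($filter);    // one page of rows
 *     $total = $model->count_data(array('type' => 1)); // total, for pagination
 */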
{ "redpajama_set_name": "RedPajamaGithub" }
251
\section{Introduction} Cosmological data provide strong evidence for two mechanisms beyond the minimal big-bang model: `inflation' and `dark energy'. In a broad sense, `inflation' is whatever makes the universe homogeneous and correlated on large scales; `dark energy' whatever corrects the luminosity distance for its observed redshift-dependence~\cite{ruth}. One might well suspect these two phenomena to be related. After all, taken at face value, both the high redshift behavior of luminosity distances and the temperature fluctuations entering our causal horizon concern the infrared physics of Hubble scales and beyond: why not try to give them a unified explanation? Such an `Ockham's razor' attitude clashes with general relativity (GR). In a homogeneous universe, luminosity distance and expansion are related in a very specific way. It follows that dark energy must be very recent, or the successful expansion history since, say, nucleosynthesis would be spoiled. On the other hand, the decelerating phases of radiation and matter domination cannot explain homogeneity: we need to assume \emph{another} epoch of acceleration to happen before the onset of deceleration, at much earlier times and higher energies. Still, it looks curious having to invoke accelerating expansion \emph{twice}. Before definitely embracing the more and more successful `DE-CDM+Inflation' paradigm, it is worth asking whether some theoretical framework beyond GR could suggest a unified explanation, a dramatic simplification. The task looks challenging because causality and expansion are entangled in GR at the pure kinematical level, \emph{i.e.}, just by the form of the Friedmann-Robertson-Walker (FRW) line element. Models of massive gravity, like any other modification of the Einstein equations (see e.g.~\cite{costas}), do not impact the basic kinematics of a FRW universe. The infra-red modification that we are after needs to be \emph{geometrical} in the first place, because it has to disentangle causality and luminosity distances from the expansion history. In this paper we explore a geometrical modification of the FRW paradigm at large scales defined by a modified expansion law for the comoving observers which also depends on their distances, \begin{equation} \label{basic} \dot X = H X \left[1+g(X;t)\right] . \end{equation} In the above, $X(t)$ is the distance between any pair of comoving observers, a dot means derivative with respect to proper time, $H(t)$ is the Hubble parameter and $g$ a function that starts quadratically in $X$. Note that in any FRW universe the separation among comoving observers grows proportionally to the scale factor $a(t)$, and therefore $g = 0$. In this paper we try to reconcile~\eqref{basic} with homogeneity and make sense of it being valid for \emph{any} pair of comoving observers. This approach is suggested by the `ultra-strong', or `extreme' equivalence principle (EEP)~\cite{usep1}. EEP postulates that there is a vacuum state for test fields on which the energy-momentum expectation value is the same as in flat space (i.e. local and non-local space-time dependent contributions to $\langle T_{\mu \nu}\rangle_{\rm bare}$ are absent), and that the geometry of spacetime as described in GR should be modified accordingly. Stationary spacetimes (e.g. Minkowski and AdS) already allow such a state and therefore are immune from the invoked modification. On the contrary, EEP suggests that time-dependent spacetimes as described in GR are only local approximations of the `real theory'. 
When applied to a minimally coupled scalar field in a FRW universe, EEP can be enforced~\cite{usep2} by a modified dispersion relation, which in real space translates into a modified expansion law of the above type with \begin{equation} \label{basic2} g(X;t) = \frac{X^2}{2}(\dot H + H^2) + {\cal O}(X^4). \end{equation} Such a modification is infra-red because it starts being effective at Hubble scales. Here we show that its impact on cosmology can be stronger than earlier analyses~\cite{usep2,usep3} suggested. \section{FRW without GR} In GR spatial homogeneity and isotropy are realized by imposing a group of isometries on the metric tensor, which uniquely leads to the FRW metric. In this section we reconstruct some properties of a FRW universe by using looser ingredients than the metric. This will allow us in the next section to generalize the concept of cosmological homogeneity beyond GR. In a spatially flat FRW, at every value $t$ of her proper time, a co-moving observer describes the set of all simultaneous events as a 3-dimensional Euclidean space $\mathbb R^3$. At a later time, the expansion of the universe has taken all comoving observers further apart. The observer at the origin represents this state of affairs with the \emph{Hubble velocity field}, $\vec h(\vec X)$, representing the Hubble flow at position $\vec X$ (both $\vec X$ and $\vec h$ are 3-dimensional vectors). The position of the comoving observer $\vec X$ after a time interval $d t$ is \begin{equation} \label{shift} \vec X(t+ d t) = \vec X(t) + \vec h(\vec X(t))\ d t\, . \end{equation} In a FRW universe the Hubble velocity field $\vec h(\vec X)$ is proportional to the distance, \begin{equation} \label{frw_hubble} \vec h(\vec X) = H \vec X\, , \end{equation} where $H=\dot a/a$ is the Hubble parameter. The mapping~\eqref{shift}-\eqref{frw_hubble} appears centered around the origin. However, homogeneity can be enforced by assigning a transformation law for the velocity fields when they are ``seen'' by different observers. In GR such a transformation is Galilean: velocities are simply added to the Hubble flow. A velocity field $\vec v (\vec X)$ `seen by' the point $\vec A$ reads \begin{equation} \label{frw_trans} \vec v_{\vec A} (\vec X) = \vec v (\vec X) - H \vec A\, , \end{equation} where the index means ``as seen from'' -- no index implies ``as seen from the origin''. This transformation can in particular be applied to the Hubble-velocity field $\vec h(\vec X)$, giving \begin{equation} \label{homo} \vec h_{\vec A} (\vec X) = \vec h(\vec X - \vec A)\, . \end{equation} The above crucial relation is what defines homogeneity in this framework: the expansion looks the same for any observer. As a corollary, $\vec h_{\vec A}(\vec A) = 0$ as expected. Note that, in this description, what happens at another point might look totally unphysical. For example, the Hubble velocity at $X > H^{-1}$ (beyond the cosmological horizon) is superluminal. What needs to be at most unity, because directly measurable, is the velocity at a point $\vec X$ \emph{seen} at the same point $\vec X$: $v_{\vec X}({\vec X}) \leq 1$. This is the element of \emph{general} (rather than special-) relativistic physics in this construction. We will call $v_{\vec X}({\vec X})$ \emph{local at} $\vec X$. By definition, a light ray always has unit local velocity and thus is defined as a function $\vec L(t)$ satisfying $d \vec L_{\vec L}(t)/dt = 1$. 
By using~\eqref{frw_trans} we obtain the correct equation for a light ray, \begin{equation} \label{GRlight0} \frac{d L}{d t} = 1 + H L\, . \end{equation} By switching to conformal coordinates $\vec l = \vec L/a(t)$ and time $d \tau = dt/a(t)$, we recover the familiar \begin{equation} \label{GRlight} \frac{d l}{d \tau} = 1 . \end{equation} Given an expansion history $a(t)$, the trajectories of comoving observers~\eqref{frw_hubble} and light rays~\eqref{GRlight0} define the basic kinematics and causality of a FRW universe. The descriptions that different comoving observers give are related by the transformation law for velocities~\eqref{frw_trans} and by simple space translations. Nowhere in the above construction is the concept of a four-dimensional metric used. \section{Beyond FRW} We now turn to a more general Hubble velocity field than~\eqref{frw_hubble}, such as that of equation~\eqref{basic}, \begin{equation} \label{hubble} \vec h(\vec X) = H \vec X \left[1+ g(X;t)\right] \, . \end{equation} We recall that $g$ is a function that starts quadratically in $X$ and that, according to the EEP~\cite{usep1,usep2}, it is given by~\eqref{basic2}. While keeping $g$ general whenever possible, we find the absence of mass parameters in~\eqref{basic2} very appealing; therefore, we assume at least that $X$ always appears in $g$ multiplied by some power of $H$ and its derivatives, rather than by some given mass parameter $m$. If we assume a constant equation of state, this means that $g$ is in fact a function of $(X/t)^2$. Since we insist that the universe be homogeneous, we want to enforce a transformation law for velocities replacing~\eqref{frw_trans} such that the modified Hubble expansion looks the same at any point, which is our definition of homogeneity. Unfortunately, such a transformation is not unique. Here we consider what looks like the simplest and most reasonable choice, \begin{equation} \label{trans} \vec v_{\vec A} (\vec X) = \left(\frac{\vec v (\vec X)}{1 + g(X)} - H \vec A\right)\left(1+ g(|\vec X - \vec A|)\right)\, , \end{equation} but we should warn the reader that, to some extent, the implications that we derive later depend on such a choice. In a more symmetric fashion, \begin{equation} \frac{\vec v_{\vec A} (\vec X)}{1+ g(|\vec X - \vec A|)} - \frac{\vec v_{\vec B} (\vec X)}{1+ g(|\vec X - \vec B|)} = H(\vec B - \vec A)\, . \end{equation} From~\eqref{trans}, the transformation law of a \emph{local} velocity vector at $\vec A$ as seen from the origin is found: \begin{equation}\label{translocal} \vec v (\vec A) = \left(1 + g(A)\right) \left(H \vec A + \vec v_{\vec A} (\vec A) \right) \, . \end{equation} Similarly to the GR case, we define null rays as those always having unit local velocity. So they satisfy the equation \begin{equation} \frac{d L}{dt} = (1+HL)\left(1+g(L)\right)\, . \end{equation} We can, again, switch to `comoving coordinates' $\vec x = \vec X/a(t)$, which we denote here with lower-case Latin letters. However, notice that in this modified framework comoving observers are no longer characterized by $\vec x = {\rm const}$. Rather, according to~\eqref{hubble}, \begin{equation} \label{anomalous} \dot {\vec x} = \vec x H g(a x)\, . \end{equation} By using comoving coordinates and conformal time, the equation of a light ray can be written as \begin{equation} \label{light} \frac{d\, l(\tau)}{d \tau} = 1 + g(L)(1 + H L ), \end{equation} where $L = a(\tau) l(\tau)$. 
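As a simple worked illustration (keeping only the leading quadratic term of~\eqref{basic2}), consider matter domination, $a\propto t^{2/3}$: then $H = 2/(3t)$ and $\dot H + H^2 = \ddot a/a = -2/(9 t^2)$, so that \begin{equation} g(X;t) \simeq -\frac{X^2}{9\, t^2}\, , \end{equation} which is indeed a function of $(X/t)^2$ and is negative in this decelerating phase. The physical light-ray equation then reads \begin{equation} \frac{d L}{d t} = \left(1+\frac{2 L}{3 t}\right)\left(1-\frac{L^2}{9\, t^2}\right)\, , \end{equation} so that the anomalous factor suppresses the growth of $L$ with respect to its GR counterpart~\eqref{GRlight0}. 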
In~\cite{usep2,usep3}, the equation of a light ray was written without properly taking into account the implications of homogeneity, by simply adding the anomalous piece~\eqref{anomalous} to the usual GR one---eqs.~\eqref{GRlight0} and~\eqref{GRlight}. \section{Geometrical Considerations} The non-linearity of velocity transformations~\eqref{trans} has inevitable geometrical consequences. Consider two observers $A$ and $C$ (Figure~\ref{figure1}, left panel) described by trajectories $\vec A(t)$ and $\vec C(t)$. $A$ is comoving while $C$ is not. Say that, at some time $t_0$, $C$ intersects $A$, $\vec C(t_0) = \vec A(t_0)$, with some local velocity $\vec v_{\vec A}(\vec A)$. This means that at a later time $t_0+dt$, the distance between $A$ and $C$ is $ d\vec L = \vec v_{\vec A}(\vec A) dt$. Let us see how the same situation is seen from the faraway origin. The velocity of $C$ can be read from~\eqref{translocal}. By subtracting the velocity of $A$, the distance between $A$ and $C$ at $t_0+dt$ is calculated as $ \vec v_{\vec A}(\vec A) (1+ g(A))dt$. We arrive at the puzzling conclusion that the same line element $d\vec L$, when accounted for from a distance $X$, gets rescaled by a conformal factor $1+g(X)$: \begin{equation}\label{puzzle} d\vec L(`{\rm seen\ from\ distance}\ X ') = (1+ g(X))d\vec L\, . \end{equation} \begin{figure}[htbp] \begin{center} \vspace{-.5cm} \includegraphics*[width=3.7in]{AC}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\includegraphics*[width=3.7in]{circle}% \end{center}\vspace{-.5cm} \caption{\small Left panel: the situation of the two observers $A$ and $C$ described in the text: $A$ is comoving -- its velocity vector is $\vec h(\vec A)$, while $C$ has velocity $\vec v(\vec A)$. The non-linearity of the velocity transformation~\eqref{translocal} makes the length interval $d L$ dependent on the observer. Right panel: a simple pictorial way to make sense of a distance that is not additive and of a metric space that is a Riemannian manifold only ``\emph{locally}''.} \label{figure1} \end{figure} In one dimension we can intuitively make sense of this puzzling behavior in a simple way. Consider a circle ${\cal C}$ of radius $R$ (Figure~\ref{figure1}, right panel) and define the distance between any two points as the length of the \emph{chord} (rather than the arc!) between them: two points at a relative angle $\theta$ have distance $L(\theta) = 2 R \sin(\theta/2)$. Equipped with such a distance, ${\cal C}$ is a metric space; it is `homogeneous', in the sense that there is a set of transformations (rotations) that preserve the mutual distances between the points. However, ${\cal C}$ is not a Riemannian manifold, as there is no line element that, integrated, gives the chosen distance $L$. It is tempting to say that this metric space is `\emph{locally}' a Riemannian manifold, because, at small angles, $L$ and the Riemannian distance $\theta R$ coincide. Going back to the puzzle of our two observers, we are led to conclude that in our modified FRW universe distances are not additive and therefore cannot be expressed as an integral of a line element. Looking again at the example of the circle, we can reproduce~\eqref{puzzle} explicitly in one dimension. Since $A$ and $C$ are very close to each other, $dL = R d \theta$. 
On the other hand, if we attempt to define the distance between $A$ and $C$ as a \emph{difference} of distances, we differentiate $L(\theta)$ and get something analogous to~\eqref{puzzle}, \begin{equation} dL({\rm seen\ from\ distance}\ L) = R \cos\left(\frac{\theta}{2}\right) d\theta = \left(1 - \frac{L^2}{8 R^2} + \dots\right) dL\, . \end{equation} In 3 dimensions it is not clear how to make sense of~\eqref{puzzle} in such a pictorial way. In particular, it is not clear whether or not, besides the abstract notion of distance between two points, we can also consistently associate a length with any space-like curve, nor whether that is needed. Giving up the `principle' of additivity of space-like distances may look grotesque. On the other hand, any firm operational basis for such a principle, as far as we have been able to see, seems rooted in the possibility of repeating some measurement operation (e.g. laying down a ruler) many times in the same physical conditions. This looks possible only in stationary spacetimes, which are in fact immune from the EEP's modification. While leaving to future work a better understanding of various issues related to this modified geometry, here we content ourselves with the mnemonic rule~\eqref{puzzle} which, together with equation~\eqref{light}, is the main message of this paper. \section{Outlook} In this note we considered an infra-red modification of the FRW model that corrects the usual expansion law $\dot X = H X$ by distance-dependent terms (eq.~\ref{basic}), and tried to make it consistent with homogeneity. We derived a modified equation for the light rays~\eqref{light}. Perhaps more importantly, we found that the clash between the modified expansion~\eqref{basic} and homogeneity can be resolved at the price that space-like distances are no longer additive in time-dependent spacetimes. In order to illustrate this point we used the one-dimensional example of a circle, with distances defined by chords rather than arcs. Non-additivity of distances looks incompatible with a Riemannian manifold, which is at the basis of the GR description of physical events. As radical as it is, this approach contains potential pay-offs and opportunities. First, the proposed modification impacts the calculation of any cosmological distance without changing the (local) expansion history. The mnemonic rule~\eqref{puzzle} suggests, roughly, that distances calculated in GR should in fact be divided by $1+g(z)$. Since $g$ defined in~\eqref{basic2} is negative in a decelerating universe, this prescription amplifies cosmological distances and goes in the direction of an effective acceleration. Second, the causality and horizon structure of a FRW universe are also modified, again, without the local expansion being touched. It will be interesting to see how comoving volumes are `swept' by the light rays according to the modified equations that we have derived. Whether or not these effects can address dark energy and inflation, they are permanently at work and active at space-like separations of order Hubble, rather than `happening' at given times and energy scales. Finally, the proposed modification~\eqref{basic}, \eqref{basic2} does not contain any adjustable mass parameter, and the numerical ones can be fixed~\cite{usep2} by enforcing the EEP. 
While the prevalent model-building attitude demands at least one new parameter for each cosmological problem to solve -- but potentially many more -- this framework does not contain in principle any more free parameters than GR itself. Had the universe done us the favor of being simple, and not merely beautiful, venturing into such an uncommon theoretical set-up could prove rewarding. \section*{Acknowledgements} I thank George Smoot for his comments on the manuscript, including recommending a new name -- `extreme' rather than `ultra-strong' -- for the principle originally proposed in~\cite{usep1}. I also thank Ruth Durrer, Francesco Nitti, Daniele Steer and Filippo Vernizzi.
{ "redpajama_set_name": "RedPajamaArXiv" }
623
Q: Casting an interface to another interface that it does not inherit
I'm hoping someone here can explain what incorrect assumptions I'm making. In C# 4.0, I have 2 interfaces and a class that implements them both. In a method I declare a variable with the type of the first interface, instantiate it using the class that implements both interfaces and can somehow cast it successfully to the second interface, as in the following code:

public interface IFirstInterface
{
    void Method1();
}

public interface ISecondInterface
{
    void Method2();
}

public class InterfaceImplementation : IFirstInterface, ISecondInterface
{
    public void Method1() { }

    public void Method2() { }
}

public class SomeClass
{
    public void SomeMethod()
    {
        IFirstInterface first = new InterfaceImplementation();
        first.Method1();

        // Shouldn't the next line return null?
        ISecondInterface second = first as ISecondInterface;

        // second is not null and the call to Method2() works fine
        second.Method2();
    }
}

I'm trying to understand why the casting is successful. Yes, the class implements both interfaces, but I would think that since the first variable is declared as IFirstInterface (which doesn't inherit from ISecondInterface), the casting should still fail. I've also tried restructuring my code in other ways, such as not using 'as', but the cast is still successful. What am I missing?

A: The actual type of the instance that first points to implements both interfaces. So obviously both Method1 and Method2 are available on the object. The static type of first only lets you access Method1. The static type of second only lets you access Method2. If you declare a reference to the object using either of the interfaces, you just select a view of the instance as an object fulfilling the selected contract (the interface). As InterfaceImplementation implements both interfaces, you have the option of referring to the instance using either of the interfaces.

A: From your example, you should be good by testing the type before calling any of the functionality. The first creation will create a full "InterfaceImplementation" instance that supports both interfaces. However, you are putting it into a declared type of only the first interface. So from the "first" object's perspective, it only cares about anything associated as an IFirstInterface implementation. Now, on to your second... Even though you've created the object, you can still ask... By the way... are you also a Second Interface? If so, do this...

IFirstInterface first = new InterfaceImplementation();
if (first is ISecondInterface)
    // typecast since the second interface is legit, then call its Method2
    ((ISecondInterface)first).Method2();

A: If you look from the concrete object's point of view, you can say "I'm an IFirstInterface, but I'm also an ISecondInterface". Is that what you mean? The cast you described just stays within the object's own inheritance/implementation chain.

A: The only thing you're missing is that that's exactly how it's meant to be, and that's a useful feature, not a problem. When casting, you can think of the code as basically saying, "I don't care what I knew this object's type was, I want to see if it can be converted to type T". In this case, since the underlying object is of type InterfaceImplementation, regardless of the fact that it's currently known as an IFirstInterface, the answer is that yes, it can be converted to an ISecondInterface.

A: Welcome to polymorphism. The object first is always going to be an instance of InterfaceImplementation. 
How you choose to reference it doesn't affect what the object truly "is." This is how the concept of abstraction works as a whole.

A: This really indicates a design flaw. The client sort of knows that both interfaces are implemented by the same object. For your example that's fine, but if those interfaces were implemented separately, you wouldn't be able to jump from the first to the second one. Ideally it would be better to have some kind of query interface where you could go from one type to the other.
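To make the contrast concrete, here is a minimal sketch (IThirdInterface is a hypothetical interface that InterfaceImplementation does not implement) showing the one case where 'as' does return null:

public interface IThirdInterface
{
    void Method3();
}

public class CastDemo
{
    public void Demo()
    {
        IFirstInterface first = new InterfaceImplementation();

        // Succeeds: the runtime type implements ISecondInterface.
        ISecondInterface second = first as ISecondInterface; // not null

        // Fails: the runtime type does not implement IThirdInterface.
        IThirdInterface third = first as IThirdInterface;    // null

        // Safe pattern (C# 4.0): test before casting.
        if (first is ISecondInterface)
        {
            ((ISecondInterface)first).Method2();
        }
    }
}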
{ "redpajama_set_name": "RedPajamaStackExchange" }
7,429
West Ham rallies to defeat Swansea
London, England (SportsNetwork.com) - West Ham United rallied at Upton Park on Sunday to collect a 3-1 win over 10-man Swansea City.
The Hammers controlled the early part of the match, but Swansea managed to draw first blood against the run of play when Wilfried Bony steered home a square pass from Jefferson Montero in the 19th minute.
Swansea nearly made it to the halftime interval with the lead intact, but Andy Carroll brought West Ham level in the 41st minute when he got on the end of a cross from Carl Jenkinson and nodded a looping header into the upper corner.
Carroll then handed West Ham the lead at the hour mark with another header, this time guiding a corner kick from Stewart Downing into the back of the net.
Swansea was left with no way back when goalkeeper Lukasz Fabianski was issued a red card in the 68th minute for blocking Diafra Sakho on a breakaway, and Sakho added insult to injury for the visitors by latching on to a flick from Carroll and smashing home the dagger three minutes from time.
West Ham has won three straight in Premier League play to improve to 27 points and climb to third place, though the Hammers could be ousted from the position by Monday following Manchester United's meeting with Southampton. Swansea remains on 22 points to drop to eighth place.
Also on Sunday, Aston Villa came from behind to collect a 2-1 win over struggling Leicester City at Villa Park. Jose Leonardo Ulloa handed Leicester the lead in the 13th minute, but Ciaran Clark equalized within five minutes. The winner came from Alan Hutton in the 71st minute, and a red card to Paul Konchesky 10 minutes from time put the Foxes to the sword.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
1,509
{"url":"https:\/\/www.kb.cert.org\/vuls\/id\/667933","text":"# Software Engineering Institute\n\n## Pulse Connect Secure Samba buffer overflow\n\n#### Vulnerability Note VU#667933\n\nOriginal Release Date: 2021-05-24 | Last Revised: 2021-06-17\n\n### Overview\n\nPulse Connect Secure (PCS) gateway contains a buffer overflow vulnerability in Samba-related code that may allow an authenticated remote attacker to execute arbitrary code.\n\n### Description\n\nCVE-2021-22908\n\nPCS includes the ability to connect to Windows file shares (SMB). This capability is provided by a number of CGI scripts, which in turn use libraries and helper applications based on Samba 4.5.10. When specifying a long server name for some SMB operations, the smbclt application may crash due to either a stack buffer overflow or a heap buffer overflow, depending on how long of a server name is specified. We have confirmed that PCS 9.1R11.4 systems are vulnerable, targeting a CGI endpoint of: \/dana\/fb\/smb\/wnf.cgi. Other CGI endpoints may also trigger the vulnerable code.\n\nSpecifying a long server name to this endpoint may result in a PCS events log entry that may look like the following:\n\nCritical ERR31093 2021-05-24\u00a014:05:37\u00a0-\u00a0ive\u00a0-\u00a0[127.0.0.1]\u00a0Root::System()[]\u00a0-\u00a0Program smbclt recently failed.\n\n\nSuccessful exploitation of this vulnerability may not produce such a log entry if the program is cleanly exited during exploitation, or if the log files are sanitized after successful exploitation.\n\nIn order to be vulnerable, a PCS server must have a Windows File Access policy that allows \\\\* or it must have some other policy set that would allow an attacker to connect to an arbitrary server. In the administrative page for the PCS, see Users -> Resource Policies -> Windows File Access Policies to view your current SMB policy. Any PCS device that started as version 9.1R2 or earlier will have a default policy that allows connecting to arbitrary SMB hosts. Starting with 9.1R3, this policy was changed from a default allow to a default deny.\n\nNote that the vendor implies that the Files, Window[sic] access feature can be disabled for user roles in order to protect against this vulnerability. This is NOT the case. The vulnerable CGI endpoints are still reachable in ways that will trigger the smbclt application to crash, regardless of whether the Files, Windows user role is enabled or not. These steps are only included in the advisory to limit excessive errors showing up in PCS logs after the XML workaround has been installed.\n\nIn our testing, an attacker would need either valid PCS user credentials, or a DSID value from an authenticated user to successfully reach the vulnerable code on a PCS server that has an open Windows File Access policy. We have created a PoC utility to test for PCS systems vulnerable to CVE-2021-22908 as well as which mitigations may be applied.\n\n### Impact\n\nBy performing certain SMB operations with a specially-crafted server name, an authenticated attacker may be able to execute arbitrary code with root privileges on a vulnerable PCS server.\n\n### Solution\n\n#### Apply an XML workaround\n\nPulse Secure has published advisory SA44800 that mentions a Workaround-2105.xml file that contains a mitigation to protect against this vulnerability. Importing this XML workaround will activate the protections immediately and does not require any downtime for the VPN system. 
This workaround will block requests that match the following URI patterns:\n\n^\/+dana\/+fb\/+smb\n^\/+dana-cached\/+fb\/+smb\n\n\nWorkaround-2105.xml will automatically deactivate the mitigations applied by Workaround-2104.xml when it is installed. As such, it is imperative that a PCS system is running 9.1R11.4 before applying the Workaround-2105.xml mitigation, which will ensure that the vulnerabilities outlined in SA44784 are not reintroduced as the result of applying this workaround.\n\nNote that installing this workaround will block the ability to use the following feature:\n\n\u2022 Windows File Share Browser\n\n#### Set a Windows File Access Policy\n\nThis vulnerability relies on the ability to connect to an arbitrary SMB server name to trigger the vulnerability. A PCS system that started as version 9.1R3 or later will have a default Initial File Browsing Policy of Deny for \\\\* SMB connections. If you have a PCS system that started as 9.1R2 or earlier, it will retain the default Initial File Browsing Policy of Allow for \\\\* SMB connections, which will expose this vulnerability. In the administrative page for the PCS, see Users -> Resource Policies -> Windows File Access Policies to view your current SMB policy.\n\nIf your PCS has a policy that explicitly allows \\\\* or otherwise may allow users to initiate connections to arbitrary SMB server names, you should configure the PCS to Deny connections to such resources to minimize your PCS attack surface.\n\n### Acknowledgements\n\nThis vulnerability was reported by Will Dormann of the CERT\/CC.\n\nThis document was written by Will Dormann.\n\n667933\n\n### Ivanti Affected\n\nNotified:\u00a0\u00a02021-06-09 Updated:\u00a02021-06-17\n\n CVE-2021-22908 Affected\n\n#### Vendor Statement\n\nWe have not received a statement from the vendor.\n\n### Pulse Secure Affected\n\nNotified:\u00a0\u00a02021-05-06 Updated:\u00a02021-05-25\n\nStatement Date:\u00a0\u00a0 May 25, 2021\n\n CVE-2021-22908 Affected\n\n#### Vendor Statement\n\nBuffer Overflow in Windows File Resource Profiles in 9.X allows a remote authenticated user with privileges to browse SMB shares to execute arbitrary code as the root user. 
As of version 9.1R3, this permission is not enabled by default.\n\n### Other Information\n\n CVE IDs: CVE-2021-22908 Date Public: 2021-05-24 Date First Published: 2021-05-24 Date Last Updated: 2021-06-17 20:42 UTC Document Revision: 8","date":"2021-06-22 05:07:16","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.21121563017368317, \"perplexity\": 6418.8173031202305}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-25\/segments\/1623488507640.82\/warc\/CC-MAIN-20210622033023-20210622063023-00050.warc.gz\"}"}
null
null
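The mitigation described in the vulnerability note above can be sanity-checked from outside. Below is a minimal, hypothetical Python sketch (it is not the CERT/CC PoC utility mentioned in the note) that probes the endpoints assembled from the advisory's URI patterns on a gateway you administer; the host name is a placeholder, and treating HTTP 403/404 as "blocked" is an assumption for illustration only.

```python
# Hypothetical reachability check for the CGI endpoints named in VU#667933.
# Run only against a PCS gateway you are authorized to test.
import requests
import urllib3

urllib3.disable_warnings()  # PCS appliances often present self-signed certificates

HOST = "https://vpn.example.com"  # placeholder host, not from the advisory
PATHS = ["/dana/fb/smb/wnf.cgi", "/dana-cached/fb/smb/wnf.cgi"]

for path in PATHS:
    try:
        r = requests.get(HOST + path, verify=False, timeout=10)
        # Assumption: a gateway with Workaround-2105.xml installed rejects
        # requests matching ^/+dana/+fb/+smb instead of routing them to smbclt.
        state = "blocked" if r.status_code in (403, 404) else "reachable"
        print(f"{path}: HTTP {r.status_code} ({state})")
    except requests.RequestException as exc:
        print(f"{path}: request failed ({exc})")
```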
Flower Drum Song (subtitled A Jazz Interpretation by the Mastersounds) is an album by The Mastersounds, led by vibraphonist Buddy Montgomery with pianist Richie Crabtree, bassist Monk Montgomery and drummer Benny Barth, featuring performances of tunes from Richard Rodgers and Oscar Hammerstein II's musical Flower Drum Song, recorded in 1958 and released on the World Pacific label. Reception The Allmusic review by Scott Yanow stated: "In general, the music is pretty and relaxed, but not too invigorating. None of the themes from the show ended up catching on. The Mastersounds, who cannot help sounding a bit like the Modern Jazz Quartet in spots, play quite well, but this LP falls short of being essential". Track listing All compositions by Richard Rodgers and Oscar Hammerstein II "Overture: Chop Suey/Grant Avenue/Sunday/You Are Beautiful" - 7:30 "Sunday" - 3:40 "Love Look Away" - 4:37 "Grant Avenue" - 4:47 "Chop Suey" - 3:03 "You Are Beautiful" - 4:07 "I'm Going to Like It Here" - 6:00 Personnel Buddy Montgomery - vibraphone Richie Crabtree - piano Monk Montgomery - Fender electric bass Benny Barth - drums References Buddy Montgomery albums Monk Montgomery albums 1959 albums World Pacific Records albums
{ "redpajama_set_name": "RedPajamaWikipedia" }
7,566
Q: Drupal 9. Adding a color picker for the product price color. I'm experimenting with paragraphs, and I wanted to try an exercise: putting a color picker from the Color Field plugin inside a paragraph. I know the module is installed correctly because everything works fine in another component, and I have configured my paragraph's field: field_color_precio_componente. It already appears as a color picker when creating/editing that paragraph. Now I'm trying to make my component's price take on the color from that color picker; the code I'm implementing is the following: <span {% if content.field_color_precio_componente|render is not empty %} style=" font-size:100px; font-family:fantasy; {{content.field_color_precio_componente.0}}; " {% endif %} >{{content.field_precio_component.0}}</span> <span style="font-size:30px;"> {{content.field_tipo_moneda.0}} color:{{content.field_color_precio_componente.0}} </span> I printed the value it returns as text in order to debug it a little, and to my surprise it returns a hexadecimal code + 1 extra number: #c92222 1. That is why I think it isn't working. So my question is: why does that '1' appear there, separated? I checked the field's storage format (field_color_precio_componente) and the #123ABC type is selected, so I don't really understand where that 1 comes from. Any suggestions? A: I found the solution. Apparently that second number is the amount of opacity; since I wasn't interested in that argument, I went to the field's settings and told it not to save the opacity, so now only the hexadecimal value appears.
{ "redpajama_set_name": "RedPajamaStackExchange" }
4,418
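For anyone debugging the same symptom outside Drupal, here is a small illustrative Python sketch (not Drupal/Twig code) of what the accepted answer above describes: the Color Field widget was storing a color plus an opacity value, so the rendered string has to be split before the hex part can be dropped into a CSS color: property. The exact "#rrggbb opacity" layout is an assumption based on the value quoted in the question.

```python
# Split the value the question's debug output showed ("#c92222 1") into the
# hex color and the stored opacity, as the accepted answer explains.
rendered = "#c92222 1"  # value printed by {{ content.field_color_precio_componente.0 }}

parts = rendered.split()
hex_color = parts[0]                                  # "#c92222"
opacity = float(parts[1]) if len(parts) > 1 else 1.0  # trailing "1" = full opacity

print(f'style="color: {hex_color}; opacity: {opacity};"')
```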
Q: How to put modals in partials? I can't figure out how to put modals in partials. Help me, please! I have a button: <a class="btn btn-large" data-toggle="modal" href="#show_me_modal" onclick="printpage()">Name of button<sup>TM</sup> message in action</a> and a modal div: <div class="modal hide fade" id="show_me_modal"> <div class="modal-header"> <a class="close" data-dismiss="modal">×</a> </div> <div class="modal-body"> <p>some text</p> <ul class="media-grid"> <%= image_tag("/images/pic02.png") %> </ul> </div> <div class="modal-footer"> <b><%= link_to "#", '#' %></b> </div> </div> A: I created a ModalHelper for one project. It helps dynamically create modals and links to them. Hope it helps you: Helper code: Create the file app/helpers/modal_helper.rb module ModalHelper def modal(css_id, header_text, hidden = true, &block) content_tag(:div, :class => 'modal', :id => css_id, :style => ("display:none;" if hidden) ) do concat modal_header(header_text) concat modal_body(&block) end end def modal_button(link_text, href) modal_caller link_text, href, :button end def modal_link(link_text, href) modal_caller link_text, href end private def modal_caller(link_text, href, type = nil) options = { :"data-toggle" => "modal" } options.merge!({ :class => "btn" }) if type == :button link_to link_text, "#" + href, options end def modal_header(header_text) content_tag(:div, :class => 'modal-header') do concat content_tag(:button, 'x', :class => 'close', :"data-dismiss" => 'modal') concat content_tag(:h3, header_text) end end def modal_body content_tag(:div, :class => 'modal-body') do yield end end end Link to modal: Generates a link to your modal. <%= modal_link('Sign up', "myModal") %> Rendering the modal: Contains the code you want to render (note the <%= on the inner render so its output is actually emitted into the modal body). <%= modal('myModal', 'Registration') do %> <%= render 'devise/registrations/register' %> <% end %>
{ "redpajama_set_name": "RedPajamaStackExchange" }
9,258
Keyword Analysis & Research: george soros wsj
Keyword Research: People who searched george soros wsj also searched: george soros wsj; george soros wsj china
Is George Soros headed for a Wall Street showdown with Fink? Talk about a Wall Street showdown. George Soros is battling it out with Larry Fink over investing in China. The world's largest money manager, BlackRock, has raised around $1 billion for a mutual fund for Chinese individuals, the first and only foreign firm allowed to do so in the country.
Is George Soros investing in China? George Soros is battling it out with Larry Fink over investing in China. The world's largest money manager, BlackRock, has raised around $1 billion for a mutual fund for Chinese individuals, the first and only foreign firm allowed to do so in the country. More may soon follow suit.
Who is Mr George Soros? Mr George Soros (BSc Philosophy 1951, MSc Philosophy 1954), Chairman, Soros Fund Management. ^ Greenwald, Glenn (October 20, 2010). "George Soros' 'foreign' money".
What did George Soros say about the United States in 2006? When Soros was asked in 2006 about his statement in The Age of Fallibility that "the main obstacle to a stable and just world order is the United States", he responded that "it happens to coincide with the prevailing opinion in the world. And I think that's rather shocking for Americans to hear."
Search Results related to george soros wsj on Search Engine
Xi's Dictatorship Threatens the Chinese State - WSJ https://www.wsj.com/articles/xi-jinping-deng-xiaoping-dictatorship-ant-didi-economy-communist-party-beijing-authoritarian-11628885076 Aug 14, 2021 · George Soros, Aug. 13, 2021 5:12 pm ET: Xi Jinping, the ruler of China, suffers from several internal inconsistencies which greatly …
BlackRock's China Blunder - WSJ https://www.wsj.com/articles/blackrock-larry-fink-china-hkex-sse-authoritarianism-xi-jinping-term-limits-human-rights-ant-didi-global-national-security-11630938728 Sep 06, 2021 · George Soros, Sept. 6, 2021 11:42 am ET: BlackRock, the world's largest asset manager, has begun a major initiative in China. On Aug. 30 it launched a set of mutual funds and other...
Sparring With Soros on America and China - WSJ https://www.wsj.com/articles/george-soros-china-xi-trade-11629439029 Aug 20, 2021 · Reading George Soros's op-ed ... for The Wall Street Journal.
BlackRock-Soros Feud Is a Microcosm of Wall Street's China Dilemma - wsj.com https://www.wsj.com/articles/blackrock-soros-feud-is-a-microcosm-of-wall-streets-china-dilemma-11631184772 Sep 09, 2021 · BlackRock is making a big bet on the hunger of China's individual investors for professional asset management—and on the fraught...
George Soros and the 'Caravan' - WSJ https://www.wsj.com/articles/george-soros-and-the-caravan-1525635094 May 07, 2018 · Left-wing NGOs circle the wagons around a rogue U.N. commission. George Soros, founder and chairman of the Open Society Foundation, before the start of a meeting ...
Soros says BlackRock's China investments likely to lose money - reuters.com https://www.reuters.com/business/finance/soros-says-blackrocks-china-investments-likely-lose-money-wsj-2021-09-07/ Sep 07, 2021 · Billionaire investor George Soros said BlackRock Inc (BLK.N) investing billions of dollars into China now is a "mistake" and will …
George Soros Blasts China - But Here's the Real Reason - bongino.com https://bongino.com/george-soros-blasts-china-but-heres-the-real-reason/ Aug 16, 2021 · Leftist megadonor George Soros, who praised China's communist regime in 2010 as functioning better than the U.S. government, does a 180 in a new Wall Street Journal article titled "Xi's Dictatorship Threatens the Chinese State." In an attempt to appeal to the WSJ's audience, he even showers some rare praise on the U.S.
George Soros says BlackRock is on wrong side of 'life and ... - marketwatch.com https://www.marketwatch.com/story/george-soros-criticizes-blackrock-for-a-second-time-on-china-11631006083 Sep 07, 2021 · Legendary investor George Soros has criticized index powerhouse BlackRock for a second time, blasting the fund manager over its stance on Chinese investments. In a Wall Street Journal op-ed, the...
George Soros: Xi Jinping Is the 'Most Dangerous Enemy' … https://www.breitbart.com/asia/2021/08/16/george-soros-xi-jinping-most-dangerous-enemy-free-world/ 16 Aug 2021 2:47 · Left-wing billionaire George Soros once again warned that Chinese leader Xi Jinping is "the most dangerous enemy of open societies in the world" in an op-ed published Friday by the Wall Street Journal (WSJ).
https://en.wikipedia.org/wiki/George_Soros
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
2,392
{"url":"https:\/\/mathematica.stackexchange.com\/questions\/180027\/listlineplot-of-a-3d-function-over-one-of-the-arguments-with-a-range-of-values","text":"# ListLinePlot of a 3D function, over one of the arguments, with a range of values for the other 2 arguments\n\nI have a function of 3 arguments\n\nmu[kx_, ky_, lambda_] := -(kx^2 + ky^2) - (kx^2*(kx^2 + 5*ky^2)*lambda)\/(kx^2 + ky^2)^4\n\n\nI want to plot mu against lambda for a range of values for each of kx and ky - for each pair {kx,ky} there will be one straight line of mu[lambda] as shown in the attached screenshot.\n\nI got the screenshot using Plot, but that is very slow (prohibitively slow if I want to decrease epsilon to get better resolution).\n\nQ1 - will it be quicker if I use ListLinePlot to plot this set of straight lines? (My guess is that it would be much quicker as Mathematica will only need to calculate two points (mu[lambdaMin] and mu[lambdaMax]) for each line.)\n\nQ2 - how do I do it with ListLinePlot?! My guess is to first Table out the values of kx,ky I want to use, but this gives me a square matrix. Then I think I need to reshape this into a 2-by-many matrix then expand this by Table-ing over lambda. But I can't work out how.\n\nThanks in advance, apologies for such a basic question, still fairly new to Mathematica etc.\n\n### how do I do it with ListLinePlot?\n\n\u03f5 = .2;\nListLinePlot[Join @@ Table[{l, mu[x, y, l]}, {x, \u03f5, 1, \u03f5}, {y, \u03f5, 1, \u03f5}, {l, {-5, 5}}]]\n\n\n### will it be quicker if I use ListLinePlot to plot this set of straight lines?\n\ntimings = Table[{\u03f5 , First@RepeatedTiming@\nPlot[Evaluate[Join @@ Table[mu[x, y, l], {x, \u03f5, 1, \u03f5}, {y, \u03f5,\u00a01, \u03f5}]], {l, -5, 5}],\nFirst@ RepeatedTiming@\nListLinePlot[Join @@ Table[{l, mu[x, y, l]}, {x, \u03f5, \u00a01, \u03f5}, {y, \u03f5, 1, \u03f5},\n{l, {-5, 5}}], PlotRange -> {-20, 20}]}, {\u03f5 , .05, .2, .05}]\n\nGrid[Prepend[timings, {\"\u03f5\", \"Plot\", \"ListLinePlot\"}]] \/.\nx_Real :> NumberForm[x, {5, 3}] \/\/ TeXForm\n\n\n$\\begin{array}{ccc} \\epsilon & \\text{Plot} & \\text{ListLinePlot} \\\\ 0.050 & 7.060 & 0.158 \\\\ 0.100 & 0.532 & 0.048 \\\\ 0.150 & 0.099 & 0.024 \\\\ 0.200 & 0.057 & 0.019 \\\\ \\end{array}$\n\n\u2022 Thanks very much! Bonus question, which is off-topic but refers to your answer, how did you get markdown to write out the epsilons in code sections without using TeX? In my question I tried copy-pasting from Mathematica but it came out as \\[Epsilon]. I'd like to use special characters like Greek letters in future questions. Thanks again! Aug 15 '18 at 8:24\n\u2022 @jms547, I use Silvia's UnicodeCopy.\n\u2013\u00a0kglr\nAug 15 '18 at 8:27\n\u2022 @jms547 An alternative is to use the MMA SE userscript. 
It adds a few useful buttons that can replace those sequences with the correct characters and more Aug 15 '18 at 8:29","date":"2021-12-07 23:46:29","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.4219357669353485, \"perplexity\": 3011.769335909954}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-49\/segments\/1637964363420.81\/warc\/CC-MAIN-20211207232140-20211208022140-00117.warc.gz\"}"}
null
null
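The accepted answer's trick in the Mathematica thread above is language-independent: because mu is linear in lambda for fixed (kx, ky), each curve is fully determined by its two endpoint values, so precomputing those beats adaptive sampling. A rough Python/matplotlib analog of the ListLinePlot approach (an illustration, not a translation of the timing benchmark) might look like this:

```python
# Draw one straight line per (kx, ky) pair from just its two endpoints,
# mirroring the ListLinePlot answer above.
import numpy as np
import matplotlib.pyplot as plt

def mu(kx, ky, lam):
    return -(kx**2 + ky**2) - (kx**2 * (kx**2 + 5 * ky**2) * lam) / (kx**2 + ky**2)**4

eps = 0.2
lams = np.array([-5.0, 5.0])  # endpoints suffice: mu is linear in lambda
for kx in np.arange(eps, 1.0 + eps / 2, eps):
    for ky in np.arange(eps, 1.0 + eps / 2, eps):
        plt.plot(lams, mu(kx, ky, lams))

plt.xlabel("lambda")
plt.ylabel("mu")
plt.ylim(-20, 20)
plt.show()
```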
Xlendi is a village in Malta situated in the south-east of the island of Gozo. It is surrounded by the villages of Munxar, Fontana and Kerċem. The village is administered by Munxar, but it enjoys a degree of autonomy and keeps its own coat of arms and motto. Since March 2010 Xlendi has had its own system of government, with 5 politicians forming a small body that administers the main activities of the area. Etymology The name Xlendi is of Byzantine origin. It derives from a galley of the Byzantine era, named Shilandi, that sank in the bay. Evidence that the galley really existed was found in the 1960s on the seabed at the entrance to the bay. Since then, the site has become a popular diving spot. Historic sites Punic tombs Tombs of the Punic-Byzantine era were found in Xlendi, some at St. Simon Point (below St. Simon street) and others in the Xlendi valley. The Romans used to moor in the port of Xlendi, since the bay and its cliffs sheltered them from the wind. In the middle of the bay there is a reef that caused many shipwrecks. These sunken ships left a large number of Roman amphorae on the seabed at the mouth of the bay. Xlendi Tower The Xlendi tower guarding the mouth of the bay was built under Grand Master Juan de Lascaris-Castellar on 29 June 1650. It was built so that pirates or Turks could not launch attacks from this bay. It was later abandoned, with significant damage to the tower's outer walls. Responsibility for the tower passed to the Local Council in 2010, and restoration works began during 2011. The tower was quite important to the British army in Malta, as it was the only tower in the south-west of the island. It was called Tower B to mark its importance. Chapels It is curious that Xlendi once had a total of 4 chapels. These were the chapel of St Simon (at St Simon Point), which also had a cemetery and, when it was deconsecrated, the bishop ordered a cross to be cut into the rock; St Domenica, an underground chapel set into the cliffs of the Xlendi valley on the Munxar side, hard to reach, so that it was deconsecrated soon after its creation; St Catherine, established above Xlendi on the cliffs on the Kercem side of the village, built on a cliff that bears the same name, and it is said that a small community lived in the area of this chapel; and the chapel of "Forn il-Gir", which was little visited. It stood between Munxar and Xlendi, but very little is known about it. All these chapels were deconsecrated between 1650 and 1680. The church, dedicated to Our Lady of Mount Carmel, was dedicated in 1974, but some parts of the building are much older, dating from 1868. Every year, on the first Sunday of September, a feast dedicated to the patron saint is celebrated. In the afternoon water games are held in the bay with the traditional 'gostra', a greased pole along which players must walk to grab a flag. In the evening a procession with the statue of Our Lady of Mount Carmel is carried around Xlendi. Underground emergency mill In 1955 the Xlendi mill was excavated into the cliffs behind the church of Mount Carmel. The excavation was a major undertaking, consisting at first of an entrance tunnel, about 30 metres long, 2.5 metres high and 3 metres wide, which opened into a large chamber. 
This chamber was divided into three floors and housed the storage, grinding and milling equipment. At the back of the mill is the silo, with a storage capacity of approximately 1,000 tonnes of wheat, connected to the milling machinery by mechanical augers. Power was supplied by an 80 hp diesel engine and alternator. The silo can also be reached through openings from above. The mill was built as the Cold War escalated and nuclear conflict became a possibility; it was intended to keep working in a nuclear emergency. The mill, however, was never used after it was built. Topography The village has a striking topography, with rather steep cliffs to the sides and a valley at the back that carries rainwater from the surrounding villages (Kerċem, Munxar, Fontana and Victoria) down to the bay. Xlendi Bay Under British rule the bay of Xlendi was sandy but, with the passage of time, the water coming down the valley and human interference, it is no longer so. The bay is still known for the rocks on its left side, which are good for sunbathing and diving. Valleys The Xlendi valley starts from Fontana, continuing from the Lunzjata valley and Wied l-Ghawdxija, and runs down to the bay and into the sea. The Xlendi valley collects almost all the rain that falls on the adjacent villages of Kerċem, Munxar and Fontana. The rainwater flows through Xlendi, and this is a problem for many of the residents, who can be cut off by the fast-flowing water. It also causes flooding in the buildings along the main road where the valley water passes. This valley is one of the few homes of the Maltese freshwater crab. Il-Kantra is a valley on the left of the bay, right next to the tower. The name Kantra derives from the Spanish-Sicilian Alcántara, on account of the entrance to the valley and its shape: from its entrance it looked like an arch. This valley hosts many kinds of flora and fauna because few people go there. The Xlendi tower was reached by a bridge built by the Knights of St John over the Kantra valley. Caves There are many caves, small and large, on the sides of the bay. The main and best-known ones are these. Caroline's cave is a cave in the cliffs on the right side of the bay. It was the property of Caroline Cauchi, a wealthy woman from Victoria. She later founded the Augustinian sisters in Gozo and donated almost all of her land, including the cave and other land at Xlendi. During the summers the sisters began to stay at Xlendi. They would go swimming in this cave, which was isolated and could only be reached by a flight of steps, so they could use it as their own without being seen by other people in the bay. The cave of Catherine of Siena lies outside the bay on the right-hand side. It is known for its very clear blue water. At one time people lived in the area around the cave and built a church just above it, and the cave took its name from the saint to whom the church was dedicated. The undeveloped land around Xlendi is home to many species of flora and fauna, some of them rare. Among them are the gulls, the Maltese freshwater crab and the "Widnet il-Bahar". Today Xlendi is one of the most developed areas of the island, a feature that harms the biodiversity of the area. 
The 3 km stretch of cliffed coast running west from Xlendi Bay to Wardija Point forms the Xlendi Bay to Wardija Point marine Important Bird Area, identified as such by BirdLife International for its importance to two species of shearwater. References Geography of Malta
{ "redpajama_set_name": "RedPajamaWikipedia" }
2,945
Scandium (21Sc) has 26 known isotopes, from 36Sc to 61Sc, but only one of them, 45Sc, is stable. A total of 25 radioisotopes of the element are known; the most stable is 46Sc with a half-life of 83.79 days, followed by 47Sc (3.35 days) and 48Sc (43.67 hours). The remaining isotopes have half-lives shorter than 4 hours, mostly under 2 minutes. The least stable is 39Sc, with a half-life below 300 nanoseconds (the half-lives of 36Sc, 37Sc and 38Sc are not known). The element also has 10 nuclear isomers, the most stable of which is 44m2Sc with a half-life of 58.61 hours. Isotopes lighter than the stable 45Sc mostly undergo beta-plus decay or electron capture into calcium, while the heavier ones undergo beta-minus decay into titanium. List of isotopes References External links Scandium Scandium
{ "redpajama_set_name": "RedPajamaWikipedia" }
119
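As a worked illustration of the two decay directions described in the isotope article above (standard nuclear notation; the specific nuclides are example choices consistent with the text, which names calcium and titanium as the daughters):

```latex
\begin{align*}
  {}^{44}\mathrm{Sc} &\longrightarrow {}^{44}\mathrm{Ca} + e^{+} + \nu_e
      && (\beta^{+}\text{ decay, lighter than } {}^{45}\mathrm{Sc})\\
  {}^{46}\mathrm{Sc} &\longrightarrow {}^{46}\mathrm{Ti} + e^{-} + \bar{\nu}_e
      && (\beta^{-}\text{ decay, heavier than } {}^{45}\mathrm{Sc})
\end{align*}
```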
{"url":"http:\/\/math.stackexchange.com\/questions\/649378\/how-come-complex-numbers-represent-coordinates","text":"# How come complex numbers represent coordinates?\n\nI'm wondering why complex numbers represent coordinates without being on the form of a tuple (a,b). The complex numbers come in the form: $a+bi$ where $a$ denotes the real part and $bi$ denotes the imaginary part. This doesn't represent an ordered pair in my eyes, but more like a sum. How does it represent coordinates on a grid?\n\n-\nWell, there is a very clear 1-1 correspondence between ordered pairs $(a,b)$ and numbers of the form $a+ib$. \u2013\u00a0Old John Jan 23 '14 at 23:30\nThe difference between (a,b) and a + bi is just notation. \u2013\u00a0Tobias Brandt Jan 23 '14 at 23:30\nIt is the same thing, it is usually more convenient (and clear) to write in the form $a+ib$, for example, $e^{i\\theta}$ would be $e^{(0,1)\\theta}$. \u2013\u00a0copper.hat Jan 23 '14 at 23:36\n\nOne could actually define the complex numbers as $\\mathbb R^2$ with the usual sum of vectors, and an additional multiplication which is: $(a,b)(c,d)=(ac-bd,bc+ad)$. With this we already have the complex numbers, however the multiplication is more naturally visualized if we write $$(a+bi)(c+di)=ac+(ad+bc)i+bdi^2=(ac-bd)+(ad+bc)i$$\nSo basically, the rule for multiplication is also verified if we write $(x,y)$ as $x+yi$, and multiply using the normal rules (distributive, associative, etc.), therefore we are justified in using this notation, and we do so because it's more comfortable. Of course the definition given is a modern construct, made in order to have a solid, rigorous theory; the historical definition was the $x+yi$ notation, with the particularity that $i^2=-1$. Also, other definitions exist.","date":"2016-05-06 20:59:08","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9786753058433533, \"perplexity\": 273.7870118218331}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2016-18\/segments\/1461862134822.89\/warc\/CC-MAIN-20160428164854-00146-ip-10-239-7-51.ec2.internal.warc.gz\"}"}
null
null
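A one-line check, using only the multiplication rule $(a,b)(c,d)=(ac-bd,\,bc+ad)$ quoted in the answer above, makes the identification concrete: the pair $(0,1)$ really does behave like $i$.

```latex
\[
  (0,1)\cdot(0,1) = \bigl(0\cdot 0 - 1\cdot 1,\; 1\cdot 0 + 0\cdot 1\bigr) = (-1,0),
  \qquad\text{i.e. } i^2 = -1 \text{ under } (x,y) \leftrightarrow x + yi.
\]
```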
Q: How can I import .csv files with a loop in R? I've got a series of .csv files called Natalidad[i], where [i] is the year of the data from 1996 to 2020. I want to make a loop in order to load each of them. I've tried the following code, but it isn't working. for (i in 1996:2020) { nacimientos[i] <- read.csv("Natalidad [i]_p.csv", header = TRUE, sep = ";") } I've also tried without [] and substituting with (). My question might be basic. I'm not familiar with loops in R, so there's probably something essential that I'm missing. A: As you originally tagged the question as Python, it could be done as follows: import pandas as pd nacimientos = [pd.read_csv(f"Natalidad [{year}]_p.csv", sep=";") for year in range(1996, 2021)] This creates a list of dataframes with the first being for 1996. It uses a Python "list comprehension" to build the list. This would be equivalent to: nacimientos = [] for year in range(1996, 2021): df = pd.read_csv(f"Natalidad [{year}]_p.csv", sep=";") nacimientos.append(df) A: If you do this in R, you have to paste the file name string together first; R cannot interpolate a variable inside a string literal. E.g. paste0("first part", i, "second part"), where i is the variable. Also store the data frames in a list, using double brackets for assignment. Try nacimientos <- list() for (i in 1996:2020) { nacimientos[[as.character(i)]] <- read.csv(paste0("Natalidad [", i, "]_p.csv"), header = TRUE, sep = ";") }
{ "redpajama_set_name": "RedPajamaStackExchange" }
303
Hiram Edmund Deats (May 20, 1870 – March 16, 1963) was an American philatelist, historian and publisher from Flemington, Hunterdon County, New Jersey. He was especially acclaimed for his collection of revenue stamps. Life and family Hiram Edmund Deats was born on May 20, 1870 in the Brookville section of Stockton to Hiram Deats (1810–1887) and Elmira Stevenson (1830–1908). In 1929, he donated several pieces of agricultural equipment made by the Deats company, started by his father, to Rutgers University under the care of Professor Wabun C. Krueger. This collection, including the Deats plow, patented by his grandfather, John Deats, became important in the creation of the New Jersey Museum of Agriculture in 1990. He died on March 16, 1963. Collecting interests As a youth, Deats started collecting postage stamps of the United States and the Confederate States of America and eventually created one of the finest collections of his era, eventually selling the collection. Deats specialized in the collecting of United States revenue stamps, and his collection, which in 1888 included the revenue collection of Edward Boker Sterling, was unsurpassed. George L. Toppan and Alexander Holland used this collection as a basis for writing, in 1899, An Historical Reference List of the Revenue Stamps of the United States Including the Private Die Proprietary Stamps, which was re-printed in 1979 as The Boston Revenue Book. Deats also formed one of the finest libraries of philatelic books and literature in the United States, which, in 1952, he donated to the Free Library of Philadelphia. Philatelic activity At the age of 16, Deats joined the American Philatelic Association (later renamed the American Philatelic Society) and served the society in various ways, including serving as president and generally attending at conventions. Historian From 1890 to 1957, he was the librarian for the Hunterdon County Historical Society. The library is now known as the Hiram E. Deats Memorial Research Library. From 1891 to 1905, he was editor and publisher of The Jerseyman, a journal of local history and genealogy. Honors and awards Deats signed the Roll of Distinguished Philatelists in 1933 and was named to the American Philatelic Society Hall of Fame in 1963. See also Philately Philatelic literature References External links 1870 births 1963 deaths Philatelic literature American philatelists People from Flemington, New Jersey People from Stockton, New Jersey Signatories to the Roll of Distinguished Philatelists American Philatelic Society Deats family
{ "redpajama_set_name": "RedPajamaWikipedia" }
8,564
Rudolf Silvan (born 22 September 1967 in Mödling, Lower Austria) is an Austrian politician (SPÖ) and trade unionist (regional managing director of the Gewerkschaft Bau-Holz, the construction and woodworkers' union, in Lower Austria). On 23 October 2019 Silvan was sworn in as a member of the Austrian National Council. Politics and career Rudolf Silvan grew up in Brunn am Gebirge, where he attended primary school from 1973 to 1977, secondary school from 1977 to 1981 and the polytechnic school in 1981-1982. Silvan began an apprenticeship as a bricklayer but had to give it up shortly afterwards because of a cement allergy. He then trained as a baker and, after his national service, worked shifts at the blast furnace of the glass factory in Brunn am Gebirge. From 1990 onwards Silvan worked for employees' rights within the trade union. He attended the trade union school from 1990 to 1992, passed the apprenticeship-trainer examination in 1992 and in the same year attended the works-council academy of the Chamber of Labour for Lower Austria (AKNÖ). In 2003 Silvan obtained the university entrance qualification at the University of Vienna and studied law for two semesters at the University of Linz from 2003 to 2004. From 2006 to 2008 he was a member of the rehabilitation committee of the Austrian general accident insurance institution (AUVA). From 2008 to 2019 Silvan was chairman of the AUVA for Vienna, Lower Austria and Burgenland. In 2009 he became regional managing director and a member of the federal executive of the Gewerkschaft Bau-Holz. In 2011 he obtained a diploma as a mental coach. Since 2015 Rudolf Silvan has chaired the committee for health policy and worker protection of the AKNÖ. Silvan played a major part in founding the AUVA Trauma Centre Vienna (a trauma centre on the two sites UKH Lorenz Böhler and UKH Meidling). In 2019 Silvan moved to the newly created administrative board of the AUVA. The unionist was also a board member of the Lower Austrian Regional Health Insurance Fund (NÖ GKK) from 2016 to 2019. On 5 July 2019 the state party council of the SPÖ Lower Austria chose Silvan as its lead candidate for the National Council election of September 2019. On 23 October 2019 Silvan was sworn in as a member of the National Council. External links Rudolf Silvan on the website of the SPÖ parliamentary club Rudolf Silvan on the website of the Gewerkschaft Bau-Holz Rudolf Silvan on the website of the Allgemeine Unfallversicherungsanstalt Rudolf Silvan on www.meineabgeordneten.at References Member of the National Council (Austria) Politician (Lower Austria) SPÖ member Austrians Born 1967 Men
{ "redpajama_set_name": "RedPajamaWikipedia" }
3,128
True... but I'm trying to shave the weight/bulk of traveling with my folding chair. This may not be the solution, but there has to be a better way. I'm a T2 with absolutely no trunk muscles and I didn't have any issues with the seat. But, I always left my chair right beside me too - that way there was always something to grab, and to rest my feet on. When I really want to travel light but with a more stable chair I use my Nuprodx 3000tx. I know they seem pricey at first, but that thing has paid for itself time and again. I like the fact that I can use the 3000tx as a shower chair too, even if the "wheelchair accessible room with a barrier free shower" you booked really has a bath-tub when you get there. The Lumex seat cushion does nothing for you in the shower. I can throw 2 days' worth of clothing and toilet supplies in the bag Nuprodx gives you with the chair. It truly is awesome. I weigh around 225lbs and after 9 or so years of use the only thing I've needed to replace is the nylon back rest. I sprung for the wider legs when I encountered a toilet with a really wide base - the kind that are wall mounted. Those are $99 and worth every penny.
{ "redpajama_set_name": "RedPajamaC4" }
8,260
#ifndef __SYSTEM_HH__ #define __SYSTEM_HH__ #include <string> #include <vector> #include "base/loader/symtab.hh" #include "base/misc.hh" #include "base/statistics.hh" #include "config/full_system.hh" #include "cpu/pc_event.hh" #include "enums/MemoryMode.hh" #include "mem/port.hh" #include "params/System.hh" #include "sim/sim_object.hh" #if FULL_SYSTEM #include "kern/system_events.hh" #endif class BaseCPU; class ThreadContext; class ObjectFile; class PhysicalMemory; #if FULL_SYSTEM class Platform; class FunctionalPort; class VirtualPort; #endif class GDBListener; class BaseRemoteGDB; class System : public SimObject { public: static const char *MemoryModeStrings[3]; Enums::MemoryMode getMemoryMode() { assert(memoryMode); return memoryMode; } /** Change the memory mode of the system. This should only be called by the * python!! * @param mode Mode to change to (atomic/timing) */ void setMemoryMode(Enums::MemoryMode mode); PhysicalMemory *physmem; PCEventQueue pcEventQueue; std::vector<ThreadContext *> threadContexts; int _numContexts; ThreadContext *getThreadContext(ThreadID tid) { return threadContexts[tid]; } int numContexts() { assert(_numContexts == (int)threadContexts.size()); return _numContexts; } /** Return number of running (non-halted) thread contexts in * system. These threads could be Active or Suspended. */ int numRunningContexts(); #if FULL_SYSTEM Platform *platform; uint64_t init_param; /** Port to physical memory used for writing object files into RAM at * boot.*/ FunctionalPort *functionalPort; VirtualPort *virtPort; /** kernel symbol table */ SymbolTable *kernelSymtab; /** Object pointer for the kernel code */ ObjectFile *kernel; /** Beginning of kernel code */ Addr kernelStart; /** End of kernel code */ Addr kernelEnd; /** Entry point in the kernel to start at */ Addr kernelEntry; /** Mask that should be ANDed for binary/symbol loading. * This allows two different OS requirements for the same ISA to be * handled. Some OSes are compiled for a virtual address and need to be * loaded into physical memory that starts at address 0, while other * bare metal tools generate images that start at address 0. */ Addr loadAddrMask; #else Addr pagePtr; protected: uint64_t nextPID; public: uint64_t allocatePID() { return nextPID++; } /** Amount of physical memory that is still free */ Addr freeMemSize(); /** Amount of physical memory that exists */ Addr memSize(); #endif // FULL_SYSTEM protected: Enums::MemoryMode memoryMode; uint64_t workItemsBegin; uint64_t workItemsEnd; std::vector<bool> activeCpus; public: /** * Called by pseudo_inst to track the number of work items started by this * system. */ uint64_t incWorkItemsBegin() { return ++workItemsBegin; } /** * Called by pseudo_inst to track the number of work items completed by * this system. */ uint64_t incWorkItemsEnd() { return ++workItemsEnd; } /** * Called by pseudo_inst to mark the cpus actively executing work items. * Returns the total number of cpus that have executed work item begin or * ends. */ int markWorkItem(int index) { int count = 0; assert(index < activeCpus.size()); activeCpus[index] = true; for (std::vector<bool>::iterator i = activeCpus.begin(); i < activeCpus.end(); i++) { if (*i) count++; } return count; } #if FULL_SYSTEM /** * Fix up an address used to match PCs for hooking simulator * events on to target function executions. See comment in * system.cc for details. 
*/ virtual Addr fixFuncEventAddr(Addr addr) = 0; /** * Add a function-based event to the given function, to be looked * up in the specified symbol table. */ template <class T> T *addFuncEvent(SymbolTable *symtab, const char *lbl) { Addr addr = 0; // initialize only to avoid compiler warning if (symtab->findAddress(lbl, addr)) { T *ev = new T(&pcEventQueue, lbl, fixFuncEventAddr(addr)); return ev; } return NULL; } /** Add a function-based event to kernel code. */ template <class T> T *addKernelFuncEvent(const char *lbl) { return addFuncEvent<T>(kernelSymtab, lbl); } #endif public: std::vector<BaseRemoteGDB *> remoteGDB; std::vector<GDBListener *> gdbListen; bool breakpoint(); public: typedef SystemParams Params; protected: Params *_params; public: System(Params *p); ~System(); void initState(); const Params *params() const { return (const Params *)_params; } public: #if FULL_SYSTEM /** * Returns the address the kernel starts at. * @return address the kernel starts at */ Addr getKernelStart() const { return kernelStart; } /** * Returns the address the kernel ends at. * @return address the kernel ends at */ Addr getKernelEnd() const { return kernelEnd; } /** * Returns the address of the entry point to the kernel code. * @return entry point of the kernel code */ Addr getKernelEntry() const { return kernelEntry; } #else Addr new_page(); #endif // FULL_SYSTEM int registerThreadContext(ThreadContext *tc, int assigned=-1); void replaceThreadContext(ThreadContext *tc, int context_id); void serialize(std::ostream &os); void unserialize(Checkpoint *cp, const std::string &section); virtual void resume(); public: Counter totalNumInsts; EventQueue instEventQueue; //////////////////////////////////////////// // // STATIC GLOBAL SYSTEM LIST // //////////////////////////////////////////// static std::vector<System *> systemList; static int numSystemsRunning; static void printSystems(); }; #endif // __SYSTEM_HH__
{ "redpajama_set_name": "RedPajamaGithub" }
4,833
The Roosevelt School District (RSD) may refer to: Roosevelt School District (Arizona) Roosevelt School District (New York State) Roosevelt School District (Washington)
{ "redpajama_set_name": "RedPajamaWikipedia" }
1,851
Bang & Olufsen has announced it is expanding its BeoPlay H6 headphones range with three new limited edition models designed especially for the autumn/winter season. The headphones have been released in a number of special editions in the past, including a green pair last October and a Pepsi Live for Now custom edition in May earlier this year. The latest additions joining the H6 party are in Blue Stone, Bronzed Hazel and Graphite Blush. We got the chance to see the new limited edition models that will be available alongside the Classic Black and Natural versions during a Christmas in July event and we loved them. The new colours are rich and fantastic, featuring the same soft leather and metallic metal finish found on the current and previous BeoPlay H6 headphones and they look as good as you would expect. The BeoPlay H6 headphones are designed to bring maximum comfort and audio quality while providing a decent level of sound isolation, but the key is that they mix BeoPlay with B&O's premium headphone range. The cowhide leather from New Zealand is beautiful, comfortable and it oozes a premium feel, while the coloured aluminium discs on the outside of the cups are subtly etched with the B&O logo and make you feel proud to have them on your head. As the special edition headphones offer the exact same design, except for the colours, as the previous BeoPlay H6 headphones that have gone before them, we have the same Jakob Wagner to thank for their beauty. However, although we have been big fans of the past models, the new additions really stand out from the crowd - especially the Blue Stone and Graphite Blush. The colours work seamlessly together and they really are some special-looking headphones. They aren't cheap, but they don't look it either and they hit the shelves with the same £329 price tag as the current BeoPlay H6 models. They are fairly compact and they bring great audio-quality with them, offering plenty of sound detail but not too much punchy bass. There are a set of 40mm drivers and an internal bass port and everything is clean, simple and well-balanced. You'll find a 3.5mm headphone jack in the side of the H6, which allows you to daisy chain headphones together if you want to share what you are listening to with a friend and there is also an in-line remote and microphone for hands-free smartphone use. We found the comfort factor to be one of the H6's best attributes when we tried them out before, but we really do love these new limited edition colours so if there is ever a time to pick a pair of these up, it's now. The limited-edition BeoPlay H6 over-ear headphones will be available from Selfridges, John Lewis and Mr Porter from August for £329.
{ "redpajama_set_name": "RedPajamaC4" }
1,576
package hu.scelight.util.sc2rep; import hu.scelight.gui.icon.Icons; import hu.scelight.service.env.Env; import hu.sllauncher.service.job.Job; import java.io.IOException; import java.nio.file.FileVisitOption; import java.nio.file.FileVisitResult; import java.nio.file.Files; import java.nio.file.Path; import java.nio.file.SimpleFileVisitor; import java.nio.file.attribute.BasicFileAttributes; import java.nio.file.attribute.FileTime; import java.util.Collections; /** * Latest replay searcher job. * * @author Andras Belicza */ public class LatestRepSearchJob extends Job { /** Folder in which to search. */ private final Path folder; /** Tells if replays contained in sub-folders are also to be searched. */ private final boolean recursive; /** Latest replay. */ private Path latestReplay; /** Last modified time of the latest replay. */ private FileTime latestTime = FileTime.fromMillis( 0 ); /** * Creates a new {@link LatestRepSearchJob}. * * @param folder folder in which to search * @param recursive tells if replays contained in sub-folders are also to be searched */ public LatestRepSearchJob( final Path folder, final boolean recursive ) { super( "Latest Replay Search: " + folder, Icons.F_BINOCULAR_ARROW ); this.folder = folder; this.recursive = recursive; } @Override public void jobRun() { if ( !Files.exists( folder ) || !Files.isDirectory( folder ) ) return; try { Files.walkFileTree( folder, Collections.< FileVisitOption > emptySet(), recursive ? Integer.MAX_VALUE : 1, new SimpleFileVisitor< Path >() { @Override public FileVisitResult visitFile( final Path file, final BasicFileAttributes attrs ) throws IOException { if ( !mayContinue() ) return FileVisitResult.TERMINATE; if ( !attrs.isDirectory() && RepUtils.hasRepExt( file ) && attrs.lastModifiedTime().compareTo( latestTime ) > 0 ) { latestReplay = file; latestTime = attrs.lastModifiedTime(); } return FileVisitResult.CONTINUE; } } ); } catch ( final IOException ie ) { Env.LOGGER.error( "Could not search replays in folder: " + folder, ie ); latestReplay = null; return; } if ( cancelRequested ) latestReplay = null; } /** * Returns the latest replay, the result of the search. * * @return the latest replay file in the specified folder; <code>null</code> if the specified folder does no exists or is not a folder or if the search was * aborted or some error occurred */ public Path getLatestReplay() { return latestReplay; } /** * Returns the last modified time of the latest replay. * * @return the last modified time of the latest replay */ public FileTime getLatestTime() { return latestTime; } }
{ "redpajama_set_name": "RedPajamaGithub" }
2,807
Q: GWT application on Facebook How can I publish my GWT application on Facebook? EDITED: Facebook requires the application to have some FacebookSDK configuration in the <body> tag on the index.html page. The GWT-generated index.html doesn't contain a <body> tag at all (it has <bodies> instead o_O ). How do I do that? I tried to add these lines to the <bodies> tag and upload the app to Facebook, but when I click "next" on the App QuickStart page I get the "Something went wrong. We're working on getting it fixed as soon as we can." Facebook popup. The app is generated by libgdx and works perfectly in a browser on my own server. Is there a better way to publish a GWT app? There are some docs and questions here and there, but it looks like they are related to an old approach. Thanks!
{ "redpajama_set_name": "RedPajamaStackExchange" }
2,133
{"url":"http:\/\/mathoverflow.net\/feeds\/question\/19644","text":"What is the definition of \"canonical\" ? - MathOverflow most recent 30 from http:\/\/mathoverflow.net 2013-05-19T08:45:45Z http:\/\/mathoverflow.net\/feeds\/question\/19644 http:\/\/www.creativecommons.org\/licenses\/by-nc\/2.5\/rdf http:\/\/mathoverflow.net\/questions\/19644\/what-is-the-definition-of-canonical What is the definition of \"canonical\" ? Konrad Waldorf 2010-03-28T18:16:39Z 2013-02-16T17:18:02Z <p>I just received a referee report critizing that I would too often use the word \"canonical\". I have a certain understanding of what \"canonical\" should stand for, but the report shows me that other people might think differently. So I am asking:<\/p> <ol> <li><p>Is there a definition of \"canonical\"?<\/p><\/li> <li><p>What are examples where the use of \"canonical\" is undoubtedly correct?<\/p><\/li> <li><p>What are examples where the use of \"canonical\" is undoubtedly incorrect?<\/p><\/li> <\/ol> http:\/\/mathoverflow.net\/questions\/19644\/what-is-the-definition-of-canonical\/19645#19645 Answer by Reid Barton for What is the definition of \"canonical\" ? Reid Barton 2010-03-28T18:36:26Z 2010-03-28T20:07:32Z <ol> <li><p>Not a <em>definition<\/em>, exactly; I would say the situation is similar to that of <a href=\"http:\/\/mathoverflow.net\/questions\/19405\/definition-of-forgetful-functor\" rel=\"nofollow\">forgetful functor<\/a>. If I say there is a canonical isomorphism between X and Y, then what I mean is that if asked, pretty much everyone would choose the same isomorphism. A canonical isomorphism is very often a natural isomorphism in the sense of category theory, but the converse need not hold. A canonical isomorphism does not need to be the unique isomorphism between X and Y, though sometimes it is when X and Y are considered as equipped with some additional structure.<\/p><\/li> <li><p>\"There is a canonical isomorphism between the set of elements of a ring R and the set of ring maps $\\mathbb{Z}[x] \\to R$.\" Obviously, I mean for $r \\in R$ to correspond to the ring map sending $x$ to $r$, although I could just as well send $x$ to $-r$.<\/p><\/li> <li><p>\"There is a canonical isomorphism between a finite-dimensional vector space V and its dual.\" No explanation needed, I suppose.<\/p><\/li> <\/ol> <p>Maybe more interesting would be an example where the word \"canonical\" is arguably correct or incorrect; I can't think of one off-hand.<\/p> <hr> <p>Addendum, after reading some of the other answers: I would emphasize that for me there is a difference between \"natural\" in the formal category-theoretic sense and \"canonical\". For one thing there is a linguistic distinction: if I am considering an isomorphism F between X and Y then \"Theorem: F is a natural isomorphism\" is perfectly acceptable but \"Theorem: F is a canonical isomorphism\" is very strange to me. 
There should be only one canonical isomorphism between two things, though what that isomorphism is could depend on context, e.g., "the canonical isomorphism $A \otimes B \to B \otimes A$" where $A$ and $B$ are graded abelian groups might mean different things to an algebraic geometer and an algebraic topologist.

Finally, this is hardly a definition, more of a rule of thumb: there is a canonical isomorphism between X and Y if and only if you would feel comfortable writing "X = Y".

http://mathoverflow.net/questions/19644/what-is-the-definition-of-canonical/19647#19647 Answer by Dmitri Pavlov for What is the definition of "canonical"? Dmitri Pavlov 2010-03-28T18:52:01Z 2010-03-28T18:52:01Z

For me the word "canonical" always means "functorial in some sense", usually without using any form of the axiom of choice. For example, every finite-dimensional vector space is canonically isomorphic to its double dual, because there is an isomorphism of functors id → **, but there is no canonical isomorphism between a finite-dimensional vector space and its dual, because one cannot construct an isomorphism of functors id → * without using some form of the axiom of choice. Likewise, the construction of an algebraic closure is not canonical, because there is no functor that sends a field to its algebraic closure, even though any two algebraic closures are (non-canonically) isomorphic.

I presume that one can allow using the axiom of choice and still get the same results, but in this case one needs to use the language of 2-categories. For every well-pointed elementary topos T (basically, a set theory), we can construct the category of finite-dimensional vector spaces in this topos and an isomorphism of functors id → **. I think that this isomorphism depends 2-functorially on T. On the other hand, even if we use the axiom of choice to construct an isomorphism of functors id → * for every well-pointed elementary topos T, there is no way to make it depend functorially on T. I must say that I have never tried to prove any of these statements, so they might as well be totally wrong.

http://mathoverflow.net/questions/19644/what-is-the-definition-of-canonical/19650#19650 Answer by Spencer for What is the definition of "canonical"? Spencer 2010-03-28T19:07:50Z 2010-03-28T23:00:28Z

I would say that "canonical" ought to be used to describe situations where no choices have been made.

A nice example of a non-canonical identification: a principal bundle is made up of principal homogeneous spaces for the action of a Lie group. These are spaces which are homeomorphic, but non-canonically isomorphic, to the Lie group. For example, I might have a circle bundle. My Lie group would be a 'concrete version' of the group, such as $\{|z| = 1\}$, but my fibres are simply circles. I would need to *choose* a base point on each of the circles to make them into groups in the same way. This amounts to taking a global section, and it can't always be done (e.g. a circle bundle on the sphere has no global section, by the hairy ball theorem), so the non-canonical-ness might actually be important.

The labelling of identifications as canonical and non-canonical is common in linear algebra: since one chooses bases so often, it is worth pointing out when such a choice is avoided.

To prove that $V^* \otimes V^*$ is isomorphic to $(V \otimes V)^*$, one ought to work with elements of the spaces directly rather than their representations in some basis. I would therefore call the resulting isomorphism 'canonical'.

http://mathoverflow.net/questions/19644/what-is-the-definition-of-canonical/19655#19655 Answer by François G. Dorais for What is the definition of "canonical"? François G. Dorais 2010-03-28T19:36:53Z 2010-03-28T20:00:27Z

I think there is a multi-level classification associated to "canonicalness," which explains why some clashes of definition occur.

- *Arbitrary*: no requirements.
- *Uniform*: there may be a few options, but these options can be selected by making a few global choices.
- *Canonical*: as in the uniform case, but there is only one natural choice of options which applies globally.

Canonical examples à la Russell:

- *Choose one sock from each pair in a collection of sock pairs*: there is no way to make a uniform choice.
- *Choose one shoe from each pair in a collection of shoe pairs*: there are two obvious global solutions, left shoe or right shoe, but no way to prefer one over the other.
- *Choose one object from each set in a collection of sets each consisting of a bowtie and possibly other items*: there is only one obvious global solution.

I think the main point of contention is distinguishing *uniform* and *canonical*. Some will argue that it's not canonical if there is a choice to be made, while some will argue that a finite number of global choices is still canonical.

There is yet another use of *canonical* to mean something like 'universally sanctioned' (this is closer to the religious term). The second occurrence of canonical above is of this type.

http://mathoverflow.net/questions/19644/what-is-the-definition-of-canonical/19663#19663 Answer by Kevin Buzzard for What is the definition of "canonical"? Kevin Buzzard 2010-03-28T20:27:54Z 2010-03-28T20:27:54Z

I always had the following working definition of canonical (which I think Gordon James told me, and he might have said it was due to Conway? Not sure): a map A -> B is canonical if you construct a candidate, and the guy in the office next to you constructs a candidate, and you end up with the same map twice.

Somehow there is something more to it than that, though. For example, if A is an abelian group and we want a map A -> A, then I will choose the identity, but I know for sure that the wag in the office next door to me will choose the map sending a to -a, because that's his sense of humour. What has happened here is that there are in fact two canonical maps A -> A. This issue shows up in class field theory, which is an isomorphism between two rather fancy abelian groups X and Y, and where no-one could decide for a long time which one of the two canonical isomorphisms was "best". So you often see statements in number theory papers saying "we normalise our class field theory isomorphisms so that geometric Frobenii go to uniformisers" (the alternative being the inverse of this). It also shows up in the Weil pairing on an elliptic curve: it's canonical, but because we're in an abelian situation, its inverse is too. So you see in e.g. Katz-Mazur an explicit spelling out of which of the two canonical choices one is going to make (and hang all the non-canonical ones!).

http://mathoverflow.net/questions/19644/what-is-the-definition-of-canonical/19666#19666 Answer by Sergei Ivanov for What is the definition of "canonical"? Sergei Ivanov 2010-03-28T20:58:26Z 2010-03-28T20:58:26Z

I was taught to think that there is a precise definition of "canonical" in differential topology, at least in the context of linear algebra constructions. A construction is canonical if it is a smooth functor. (There is a Wikipedia page about smooth functors, but it is not very insightful.) And since it is hard to invent a non-smooth functor, it practically boils down to just being a functor.

The categories involved are usually not mentioned explicitly, and they are not things like vector spaces with linear maps. They are rather things like vector spaces with linear *isomorphisms* as morphisms. Or, more generally, isomorphisms of whatever structure you happen to have on them. For example, the dual vector space is a canonical construction, but an isomorphism between a vector space and its dual is not. On the other hand, there is a canonical one if your spaces carry a Euclidean structure.

The idea is that a canonical construction can be applied fiber-wise in fiber bundles. Sometimes this feature is advertised as a poor man's definition of "canonical", but this is not quite correct: for example, every vector bundle (over a paracompact base) is isomorphic to its dual, but this is not really canonical.

http://mathoverflow.net/questions/19644/what-is-the-definition-of-canonical/19669#19669 Answer by Fabrizio Polo for What is the definition of "canonical"? Fabrizio Polo 2010-03-28T21:44:01Z 2010-03-28T21:44:01Z

I was always under the impression that canonical meant, precisely, that no arbitrary choices were necessary, but that it was occasionally used less formally, in a more standard-English sort of way, to mean traditional/obvious/well known. The informal meaning is usually used as a cheap way to avoid explaining something that's easier for the reader to guess anyway.

Ex 1: Two vector spaces of the same dimension are isomorphic. The isomorphism is not canonical.

Ex 2: A finite-dimensional vector space is canonically isomorphic to its double dual.

Ex 3: Let $\pi: S^3 \to S^2$ be the canonical fibration.

I never really liked it when people use canonical as in example 3. It seems like using it this flexibly detracts from the useful technical interpretation of the word.

I've also heard some more complicated category-theoretic interpretations of what canonical means. But, after more scrutiny, it seems that these "definitions" are specific cases of the "no arbitrary choices" principle.

http://mathoverflow.net/questions/19644/what-is-the-definition-of-canonical/19670#19670 Answer by Grant Olney Passmore for What is the definition of "canonical"?
Grant Olney Passmore 2010-03-28T22:03:30Z 2010-03-29T12:54:35Z

Not a definition, but an example of use in logic:

In model theory, "canonical" is often used in the phrase "the canonical model" to mean "intended structure." For instance, in first-order logic, one may speak of "the canonical model of Peano Arithmetic" to mean the structure of the natural numbers, or "the canonical model of the theory of real-closed fields" to mean the field of real numbers. Intuitively, "the canonical model" of a theory is the structure one was trying to pin down when the axiomatisation of the theory was written. It's just that in first-order logic, it is hard to pin down (infinite) structures! No first-order theory admitting infinite models is categorical (such theories admit non-isomorphic models; indeed, they admit models of every infinite cardinality), and compactness/ultraproduct/(many other) constructions can often be used to build "non-standard" models of theories. "Non-standard" models of Peano Arithmetic or of the theory of real-closed fields would in this context be called "non-canonical" (even though there are many canonically studied "non-standard" models of those theories!).

But many commonly studied theories do not have a notion of "canonical model." For instance, one would not say "the canonical model of group theory."

http://mathoverflow.net/questions/19644/what-is-the-definition-of-canonical/19682#19682 Answer by Douglas S. Stones for What is the definition of "canonical"? Douglas S. Stones 2010-03-29T02:33:27Z 2010-03-29T14:02:57Z

For me, if we have a partition P of a set S, then we can define a set of representatives, one from each part of P, each of which is called *canonical*.

Typically, the partition P is formed by the orbits of a group G acting on S. If we choose G so that every element in S has a trivial stabiliser, then we can find |S| by instead counting the canonical representatives, since |S| = |G|*|P| by the Orbit-Stabiliser Theorem.

Often, the elements that are chosen to be canonical can be quite contrived - e.g. just because your program outputs a certain element of P first, i.e. "lexicographical order".

To add some examples:

a) An orthomorphism of $\mathbb{Z}_n$ is a permutation $\sigma$ such that $i \mapsto \sigma(i)-i \pmod n$ is also a permutation. We partition the orthomorphisms of $\mathbb{Z}_n$ into equivalence classes under the transformations $E_g$, for which $E_g[\sigma](i)=\sigma(i)+g \pmod n$. Therefore the parts each have cardinality $n$, and we define the canonical representatives to be the orthomorphisms $\sigma$ for which $\sigma(0)=0$. Therefore the total number of orthomorphisms is $n$ times the number of canonical orthomorphisms.

b) A Latin square is an $n \times n$ matrix containing $n$ distinct symbols in which each symbol occurs exactly once in each row and each column. For instance, $$\begin{matrix} 1 & 3 & 2 \\ 3 & 2 & 1 \\ 2 & 1 & 3 \end{matrix}$$ is a $3 \times 3$ Latin square. We can put it in a canonical form (which I call normalised) by permuting the columns so that the first row is in order, i.e. $$\begin{matrix} 1 & 2 & 3 \\ 3 & 1 & 2 \\ 2 & 3 & 1 \end{matrix}$$ Here the total number of Latin squares is $n!$ times the number of normalised Latin squares. There's another canonical form (which I call reduced) which has the first row and first column in order.

http://mathoverflow.net/questions/19644/what-is-the-definition-of-canonical/19687#19687 Answer by Tran Chieu Minh for What is the definition of "canonical"? Tran Chieu Minh 2010-03-29T05:06:46Z 2010-03-29T05:06:46Z

Hopefully I don't say something too stupid. I just wonder whether the definition of canonical might be relative.

For example, if we look at $\mathbb Z / p$ as an additive group, then in fact no non-zero element stands out. But if we look at $\mathbb Z / p$ as a field, $1$ stands out as a non-zero element.

Another example: from geometry alone, there is no good reason to choose a positive direction (essentially there is no way to distinguish the left hand from the right hand). But in a universe where there is an electromagnetic force, we then have a canonical way to choose a positive direction.

Yet another example: there is a canonical way to choose whether you want a left shoe or a right shoe: if you are left-handed then choose the left one, if you are right-handed choose the right one.

Perhaps what counts as canonical depends on where we are standing. A suggestion for a heuristic definition: canonical means definable with respect to the structure you are standing in.

http://mathoverflow.net/questions/19644/what-is-the-definition-of-canonical/19724#19724 Answer by Thomas Kragh for What is the definition of "canonical"? Thomas Kragh 2010-03-29T13:07:39Z 2010-03-29T13:07:39Z

Vague definition of canonical:

Let $X$ and $Y$ be collections (often sets) for which assumptions have been made (they have been given structures and/or are related somehow). A function $f\colon X \to Y$ is canonical if it is given by a rule using only the already given structure.

This explains the relation to the Greek word for rule (kanon). The precise meaning of the above is open to interpretation: how much structure can the rule itself contain (maybe this can be made precise)! This "definition" somewhat contradicts many of the other answers, which for some reason are under the impression that canonical implies unique (or almost unique), which in my point of view is very wrong, since different rules may define different maps. E.g. if we let $X$ be the objects in the category of abelian groups and $Y$ the morphisms, then the definition makes all the group homomorphisms $A \to A$ given by multiplication by an element of $\mathbb{Z}$ canonical, which to me is not a problem.

Usually when there is an especially simple rule, it is often assumed without mention that this is the rule defining the function. E.g. most will understand the following: "there is a canonical endomorphism of any object in a category". This emphasizes multiplication by 1 above as somehow special or "more canonical" than the rest. This is simply because the rule works in much greater generality and is shorter.

Usually if a rule is very simple, the function will have nice properties. E.g. simple rules in category theory often define functors, natural transformations, etc. This leads many people to confuse the notion of canonical with "something behaving nicely".

I am somewhat puzzled by the use of the word uniform in one of the answers. The nature of the word uniform is "of the same form" and relates more to symmetries and things looking the same everywhere.
This often leads to canonical maps, since a choice at one point can sometimes be extended to a choice at every point. Please someone comment on this, since maybe this is just a use of the word I have not seen before!

http://mathoverflow.net/questions/19644/what-is-the-definition-of-canonical/98808#98808 Answer by David Corwin for What is the definition of "canonical"? David Corwin 2012-06-04T20:28:53Z 2012-06-04T20:28:53Z

On page vii of the Introduction to the 1996 edition of *Sheaf Theory* by Glen E. Bredon, the author discusses the difference between "canonical" and "natural" and points to a historical context:

    "Occasionally, we use the equal sign to mean a 'canonical' isomorphism, perhaps not, strictly speaking, an equality. The word 'canonical' is often used for the concept for which the word 'natural' was used before category theory gave that word a precise meaning. That is, 'canonical' certainly means natural when the latter has meaning, but it means more: that which might be termed 'God-given.' We shall make no attempt to define that concept precisely. (Thanks to Dennis Sullivan for a theological discussion in 1969.)"

http://mathoverflow.net/questions/19644/what-is-the-definition-of-canonical/98817#98817 Answer by Patrick I-Z for What is the definition of "canonical"? Patrick I-Z 2012-06-04T21:40:39Z 2012-06-05T15:54:19Z

This is not a definition, but it must have to do with the way our brain picks out a sample from among a few. It must be the minimum of some function which can actually be implemented in the brain. I do not believe that there is a purely logical definition of "canonical" independent of the way our brain works. Experiment: give me a number. What do you answer? 0 or 1, rarely $\pi$, or even 115674. The numbers 0 and 1 are canonical in some sense. Give me a basis of ${\bf R}^3$. The same holds: $((1,0,0),(0,1,0),(0,0,1))$. I minimize the number of different digits, and I pick them from my "basis" of canonical numbers. Well, interesting question.

What is the canonical circle? The circle in ${\bf R}^2$ centered at $(0,0)$ with radius $1$.

I know two numbers, $0$ and $1$. The radius cannot be $0$, because then it is not a (true) circle, so the radius is $1$. Now, the center could be $(0,0)$, $(0,1)$, $(1,0)$ or $(1,1)$? I prefer $(0,0)$; $0$ is simpler than $1$. How do you fit this example with category arguments?

*BTW I have nothing against category theory, I like it. But I'm curious to see if this example fits general categorical arguments.*

http://mathoverflow.net/questions/19644/what-is-the-definition-of-canonical/121989#121989 Answer by Matthieu Romagny for What is the definition of "canonical"? Matthieu Romagny 2013-02-16T14:43:07Z 2013-02-16T17:18:02Z

Regarding the (widely considered to be false) statement that "there is a canonical isomorphism between a finite-dimensional vector space V and its dual" in Reid Barton's answer, I think that the situation is a bit more interesting than that. It is a good illustration of the idea that an object may be defined to be "canonical" if it is constructed without making any choices, and the interesting point here is that there are various degrees of (in-)tolerance to choices. If we work with vector spaces of fixed finite dimension, then an isomorphism $i_E:E\to E^*$ between a vector space $E$ and its dual $E^*$ may be called *canonical* if

1. it does not depend on the choice of a basis for $E$, but we need a basis to define it, or
2. it does not depend on the choice of a basis and may be defined without choosing a basis, or
3. it does not depend on basis choices as above and does not even depend on $E$, in the sense that whenever $u:E\to F$ is an isomorphism between vector spaces of the same finite dimension, then $u^*\circ i_F\circ u=i_E$, where $u^*$ is the transpose of $u$. This third notion of canonicity is essentially functoriality.

Concretely, given a basis $B=\{e_j\}$ of $E$ with dual basis $B^*=\{e_j^*\}$, we can construct an isomorphism $i=i_{E,B}:E\to E^*$ that maps $e_j$ to $e_j^*$. This map does not depend on the choice of basis if and only if $u^*\circ i\circ u=i$ for all $u\in \text{GL}(E)$. It is easily seen that this is equivalent to the fact that $\text{GL}_n(k)=\text{O}_n(k)$, where $k$ is the base field, $n$ is the dimension, and $\text{O}_n(k)$ is the orthogonal group of the standard (sum of squares) quadratic form. Exercise: this equality holds if and only if $n=1$ and $k$ has at most $3$ elements. Thus for $n=1$ and $\text{card}(k)\leq 3$ the map $i_{E,B}:E\to E^*$ does not depend on $B$, so we may write it simply $i_E$. When $\text{card}(k)=2$ this is not so surprising, because any two one-dimensional vector spaces over the field with two elements are uniquely (hence canonically, in whichever sense you like) isomorphic, but for $\text{card}(k)=3$ this is a bit more exotic. Having reached this point, we might think that we are in the funny notion 1 of canonicity (and this is what I thought some minutes ago). But in fact, still assuming that $n=1$ and $\text{card}(k)\leq 3$, we can exhibit an isomorphism $i:E\to E^*$ without any reference to a basis. Namely, define $i(0)=0$, and if $x\in E$ is nonzero, then it is a basis of $E$, and we can define $i(x)=x^*$, the only element of the dual basis. The point is that since $a^2=1$ for all nonzero scalars in $k$, this map $i$ is linear.

Conclusion: if $n=1$ and $\text{card}(k)\leq 3$, there is an isomorphism $i:E\to E^*$ that is constructed without a choice of basis, and it is functorial for isomorphisms of one-dimensional vector spaces. If $n\ge 2$ or $\text{card}(k)\ge 4$, the map $i_{E,B}:E\to E^*$ is not independent of the basis $B$.

I would guess that it is possible to find examples of phenomena like 1 above.

http://mathoverflow.net/questions/19644/what-is-the-definition-of-canonical/121991#121991 Answer by ACL for What is the definition of "canonical"? ACL 2013-02-16T15:02:42Z 2013-02-16T15:02:42Z

I have two competing interpretations of the word canonical.

One, apparently the one used by Bourbaki, is mathematically informal. In various contexts, some objects (maps, modules, etc.) are defined unambiguously and *called* canonical. For example: the canonical basis of the free module $A^{(I)}$ over a set $I$, the canonical surjection from a set $X$ to its set $X/R$ of equivalence classes with respect to some equivalence relation $R$, the canonical bilinear map from a product of two modules to their tensor product, etc.

The other interpretation is categorical. The given context defines (often implicitly) some categories, and the canonical object is functorial with respect to *isomorphisms*. It is more or less what Kevin Buzzard says in his answer, when he defines canonical by the property that he and a colleague, when asked to define the object, would agree on the same object.
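A quick check of the $n=1$ case of Romagny's exercise above: an element $u\in\text{GL}_1(k)$ is multiplication by a scalar $a\neq 0$, and replacing the basis vector $e$ by $ae$ replaces the dual vector $e^*$ by $a^{-1}e^*$, hence $i_{E,B}$ by $a^{-2}i_{E,B}$. So $i_{E,B}$ is basis-independent exactly when $$a^2=1 \quad \text{for every } a\in k^\times,$$ which forces $k^\times\subseteq\{\pm 1\}$, i.e. $\text{card}(k)\leq 3$; and for $n\geq 2$ a shear such as $u=\begin{pmatrix}1&1\\0&1\end{pmatrix}$ already violates $u^*\circ i\circ u=i$.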
Why: Support those who are helping to keep our waters clean. The H2O Trash Patrol uses stand-up paddleboards (SUPs) and volunteers to paddle the waterways in harbors and lakes, removing trash that is otherwise unreachable and would eventually end up in our oceans. What: Enjoy great food, live music, and good people, and bid in a silent auction to help raise money for this organization. Visit: h2otrashpatrol.org for more info.
{ "redpajama_set_name": "RedPajamaC4" }
4,011
General Motors has published an image of the 2020 Chevy Bolt electric SUV ahead of its launch in 2020. The image shows that the new electric crossover shares a lot of resemblance with the standard Chevy Bolt EV, although it also presents its own design identity. GM CEO Mary Barra presented the details of the company's playbook at the Barclays Global Automotive Conference held in New York on Wednesday last week. The CEO also gave investors a detailed slide presentation of the brand's plans for electric vehicles. It was at this event that viewers got a glimpse of the future electric SUV alongside the Bolt EV ahead of its market debut in 2020. GM's spokesperson suggested that the image was a sign of a brand-new crossover utility vehicle segment, but not necessarily a new product. Barra's long-term strategy for EVs, according to her presentation last week, includes autonomous ride-sharing services. GM's outlined plans for the electric-vehicle platform will eventually involve five SUVs in total. The SUVs will include two luxury crossovers, two luxury vehicles, and a ride-sharing vehicle. Barra said we should expect vehicles from the new platform to reach dealers by 2021. General Motors wants to build profitable EVs using this new platform. The company has on different occasions maintained that the new platform would significantly reduce battery costs from $145 per kWh for the Bolt EV to less than $100 per kWh. Given that the 2020 Chevy Bolt electric SUV shares its platform with the Bolt, it is likely to feature the 150 kW/360 Nm electric drivetrain and 60 kWh lithium-ion battery pack. Meanwhile, we expect an increase in range from the current 238 miles of the Bolt EV up to 300 miles. The carmaker may make this crossover front-wheel-drive only. The slide showing the 2020 Chevy Bolt electric SUV alongside the Bolt EV appears to be without badges. However, it shows a host of GM design cues. For example, the new crossover has rear roof pillars similar to those of the first-generation GMC Terrain. The future model has a more aggressive face consisting of slim, angular headlights along with upright fog lights. It also looks a little wider, and the rear bumper is chunkier than the Chevy Bolt crossover's. The new SUV gets a floating-roof design, in line with the trend at most manufacturers these days. The 2020 Chevy Bolt electric SUV is one of two crossovers that will appear within the next 18 months, so the two vehicles will probably grace showrooms before 2020. The other one will borrow ideas from the Chevy FNR-X concept that was presented at the Shanghai Motor Show a few months ago. GM divulged last month that it was planning to offer up to 20 vehicles by 2023. Some of them will run on pure battery power while others will make do with hydrogen fuel cells. Some of these vehicles will likely share the same structure with the Bolt EV. Stay tuned for future reports about the price of the 2020 Chevy Bolt electric SUV.
{ "redpajama_set_name": "RedPajamaC4" }
8,336
Q: How to get column data into a variable

I need to look up how many points the user who is currently logged in has. I have a userId column, and the logged-in user's id is stored in the session. I've got this PHP script to look up which userId somebody has.

ob_start();
session_start();
require_once 'dbconnect.php';

// if session is not set this will redirect to login page
if( !isset($_SESSION['user']) ) {
    header("Location: index.php");
    exit;
}

// select logged-in user's details
$res = mysql_query("SELECT * FROM users WHERE userId=" . $_SESSION['user']);
$userRow = mysql_fetch_array($res);
$userId = $_SESSION['user'];

But every user in the table has points. How do I look up how many points a user has?
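A: One minimal sketch of a way to do this, staying with the question's legacy mysql_* API and assuming the column is simply named points (the question never shows the actual column name):

<?php
// Continues from the question's script: dbconnect.php has already opened a
// connection with the legacy mysql_* extension, and the session is set.
// The column name `points` is a hypothetical name -- adjust to your schema.
$userId = (int) $_SESSION['user'];   // cast to int to avoid SQL injection

$res = mysql_query("SELECT points FROM users WHERE userId=" . $userId);
$row = mysql_fetch_assoc($res);      // false if no matching row
$points = $row ? (int) $row['points'] : 0;

echo "User $userId has $points points.";
?>

Note that on PHP 7 and later the mysql_* extension no longer exists, so mysqli or PDO with a prepared statement (SELECT points FROM users WHERE userId = ?) would be the idiomatic replacement, and it avoids building SQL from session values by hand.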
{ "redpajama_set_name": "RedPajamaStackExchange" }
965
template <class T> class wxBitset
{
    friend class wxEnumData;

public:
    // creates a wxBitset<> object with all flags initialized to 0
    wxBitset() { m_data = 0; }

    // creates a wxBitset<> object initialized according to the bits of the
    // integral value val
    wxBitset(unsigned long val) { m_data = val; }

    // copies the content in the new wxBitset<> object from another one
    wxBitset(const wxBitset &src) { m_data = src.m_data; }

    // creates a wxBitset<> object that has the specific flag set
    wxBitset(const T el) { m_data = 1 << el; }

    // returns the integral value that the bits of this object represent
    unsigned long to_ulong() const { return m_data; }

    // assignment
    wxBitset &operator=(const wxBitset &rhs)
    {
        m_data = rhs.m_data;
        return *this;
    }

    // bitwise or operator, sets all bits that are set in rhs and leaves
    // the rest unchanged
    wxBitset &operator|=(const wxBitset &rhs)
    {
        m_data |= rhs.m_data;
        return *this;
    }

    // bitwise exclusive-or operator, toggles the value of all bits
    // that are set in rhs and leaves all others unchanged
    wxBitset &operator^=(const wxBitset &rhs) // difference
    {
        m_data ^= rhs.m_data;
        return *this;
    }

    // bitwise and operator, resets all bits that are not set in rhs and
    // leaves all others unchanged
    wxBitset &operator&=(const wxBitset &rhs) // intersection
    {
        m_data &= rhs.m_data;
        return *this;
    }

    // bitwise or operator, returns a new bitset that has all bits set that
    // are set in bitset2 or in this bitset
    wxBitset operator|(const wxBitset &bitset2) const // union
    {
        wxBitset<T> s;
        s.m_data = m_data | bitset2.m_data;
        return s;
    }

    // bitwise exclusive-or operator, returns a new bitset that has all bits
    // set that are set either in bitset2 or in this bitset but not in both
    wxBitset operator^(const wxBitset &bitset2) const // difference
    {
        wxBitset<T> s;
        s.m_data = m_data ^ bitset2.m_data;
        return s;
    }

    // bitwise and operator, returns a new bitset that has all bits set that
    // are set both in bitset2 and in this bitset
    wxBitset operator&(const wxBitset &bitset2) const // intersection
    {
        wxBitset<T> s;
        s.m_data = m_data & bitset2.m_data;
        return s;
    }

    // sets the appropriate bit to true
    wxBitset &set(const T el) // add element
    {
        m_data |= 1 << el;
        return *this;
    }

    // clears the appropriate flag to false
    wxBitset &reset(const T el) // remove element
    {
        m_data &= ~(1 << el);
        return *this;
    }

    // clears all flags
    wxBitset &reset()
    {
        m_data = 0;
        return *this;
    }

    // true if this flag is set
    bool test(const T el) const { return (m_data & (1 << el)) ? true : false; }

    // true if no flag is set
    bool none() const { return m_data == 0; }

    // true if any flag is set
    bool any() const { return m_data != 0; }

    // true if both have the same flags
    bool operator==(const wxBitset &rhs) const { return m_data == rhs.m_data; }

    // true if both differ in their flags set
    bool operator!=(const wxBitset &rhs) const { return !operator==(rhs); }

    bool operator[](const T el) const { return test(el); }

private:
    unsigned long m_data;
};

#define WX_DEFINE_FLAGS( flags ) \
class WXDLLEXPORT flags \
{ \
public: \
    flags(long data=0) : m_data(data) {} \
    long m_data; \
    bool operator==(const flags &rhs) const { return m_data == rhs.m_data; } \
};
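A minimal usage sketch (the enum and its values are hypothetical, not part of the header above):

enum MyFlag { Flag_A = 0, Flag_B = 1, Flag_C = 2 };

int main()
{
    wxBitset<MyFlag> flags;                 // all bits start at 0
    flags.set(Flag_A).set(Flag_C);          // set() returns *this, so calls chain
    wxBitset<MyFlag> other(Flag_B);         // single-flag constructor
    wxBitset<MyFlag> both = flags | other;  // union: Flag_A, Flag_B and Flag_C
    return (both.test(Flag_B) && flags[Flag_C]) ? 0 : 1;
}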
{ "redpajama_set_name": "RedPajamaGithub" }
3,502
import { ThemeIcon, TreeItem, TreeItemCollapsibleState } from 'vscode';
import { ViewFilesLayout } from '../../configuration';
import { GitUri } from '../../git/gitUri';
import { GitFile } from '../../git/models';
import { makeHierarchical } from '../../system/array';
import { gate } from '../../system/decorators/gate';
import { debug } from '../../system/decorators/log';
import { map } from '../../system/iterable';
import { joinPaths, normalizePath } from '../../system/path';
import { cancellable, PromiseCancelledError } from '../../system/promise';
import { sortCompare } from '../../system/string';
import { ViewsWithCommits } from '../viewBase';
import { FileNode, FolderNode } from './folderNode';
import { ResultsFileNode } from './resultsFileNode';
import { ContextValues, ViewNode } from './viewNode';

export interface FilesQueryResults {
    label: string;
    files: GitFile[] | undefined;
    filtered?: {
        filter: 'left' | 'right';
        files: GitFile[];
    };
}

export class ResultsFilesNode extends ViewNode<ViewsWithCommits> {
    constructor(
        view: ViewsWithCommits,
        parent: ViewNode,
        public readonly repoPath: string,
        public readonly ref1: string,
        public readonly ref2: string,
        private readonly _filesQuery: () => Promise<FilesQueryResults>,
        private readonly direction: 'ahead' | 'behind' | undefined,
        private readonly _options: { expand?: boolean } = {},
    ) {
        super(GitUri.fromRepoPath(repoPath), view, parent);
        this._options = { expand: true, ..._options };
    }

    override get id(): string {
        return `${this.parent!.id}:results:files`;
    }

    private _filter: 'left' | 'right' | false = false;
    get filter(): 'left' | 'right' | false {
        return this._filter;
    }
    set filter(value: 'left' | 'right' | false) {
        if (this._filter === value) return;

        this._filter = value;
        this._filterResults = undefined;

        void this.triggerChange(false);
    }

    get filterable(): boolean {
        return this.filtered || (this.ref1 !== this.ref2 && this.direction === undefined);
    }

    get filtered(): boolean {
        return Boolean(this.filter);
    }

    async getChildren(): Promise<ViewNode[]> {
        const results = await this.getFilesQueryResults();
        const files = (this.filtered ? results.filtered?.files : undefined) ?? results.files;
        if (files == null) return [];

        let children: FileNode[] = [
            ...map(
                files,
                s => new ResultsFileNode(this.view, this, this.repoPath, s, this.ref1, this.ref2, this.direction),
            ),
        ];

        if (this.view.config.files.layout !== ViewFilesLayout.List) {
            const hierarchy = makeHierarchical(
                children,
                n => n.uri.relativePath.split('/'),
                (...parts: string[]) => normalizePath(joinPaths(...parts)),
                this.view.config.files.compact,
            );

            const root = new FolderNode(this.view, this, this.repoPath, '', hierarchy);
            children = root.getChildren() as FileNode[];
        } else {
            children.sort((a, b) => a.priority - b.priority || sortCompare(a.label!, b.label!));
        }

        return children;
    }

    async getTreeItem(): Promise<TreeItem> {
        let label;
        let icon;
        let files: GitFile[] | undefined;
        let state;

        try {
            const results = await cancellable(this.getFilesQueryResults(), 100);
            label = results.label;
            files = (this.filtered ? results.filtered?.files : undefined) ?? results.files;

            if (this.filtered && results.filtered == null) {
                label = 'files changed';
                icon = new ThemeIcon('ellipsis');
            }

            state =
                files == null || files.length === 0
                    ? TreeItemCollapsibleState.None
                    : this._options.expand
                    ? TreeItemCollapsibleState.Expanded
                    : TreeItemCollapsibleState.Collapsed;
        } catch (ex) {
            if (ex instanceof PromiseCancelledError) {
                ex.promise.then(() => queueMicrotask(() => this.triggerChange(false)));
            }

            label = 'files changed';
            icon = new ThemeIcon('ellipsis');

            // Need to use Collapsed before we have results or the item won't show up in the view until the children are awaited
            // https://github.com/microsoft/vscode/issues/54806 & https://github.com/microsoft/vscode/issues/62214
            state = TreeItemCollapsibleState.Collapsed;
        }

        const item = new TreeItem(
            `${this.filtered && files != null ? `Showing ${files.length} of ` : ''}${label}`,
            state,
        );
        item.id = this.id;
        item.iconPath = icon;
        item.contextValue = `${ContextValues.ResultsFiles}${this.filterable ? '+filterable' : ''}${
            this.filtered ? `+filtered~${this.filter}` : ''
        }`;

        return item;
    }

    @gate()
    @debug()
    override refresh(reset: boolean = false) {
        if (!reset) return;

        this._filterResults = undefined;
        this._filesQueryResults = this._filesQuery();
    }

    private _filesQueryResults: Promise<FilesQueryResults> | undefined;
    private _filterResults: Promise<void> | undefined;

    async getFilesQueryResults() {
        if (this._filesQueryResults === undefined) {
            this._filesQueryResults = this._filesQuery();
        }

        const results = await this._filesQueryResults;
        if (
            results.files == null ||
            !this.filterable ||
            this.filter === false ||
            results.filtered?.filter === this.filter
        ) {
            return results;
        }

        if (this._filterResults === undefined) {
            this._filterResults = this.filterResults(this.filter, results);
        }

        await this._filterResults;

        return results;
    }

    private async filterResults(filter: 'left' | 'right', results: FilesQueryResults) {
        let filterTo: Set<string> | undefined;

        const ref = this.filter === 'left' ? this.ref2 : this.ref1;

        const mergeBase = await this.view.container.git.getMergeBase(
            this.repoPath,
            this.ref1 || 'HEAD',
            this.ref2 || 'HEAD',
        );
        if (mergeBase != null) {
            const files = await this.view.container.git.getDiffStatus(this.uri.repoPath!, `${mergeBase}..${ref}`);
            if (files != null) {
                filterTo = new Set<string>(files.map(f => f.path));
            }
        } else {
            const commit = await this.view.container.git.getCommit(this.uri.repoPath!, ref || 'HEAD');
            if (commit?.files != null) {
                filterTo = new Set<string>(commit.files.map(f => f.path));
            }
        }
        if (filterTo == null) return;

        results.filtered = {
            filter: filter,
            files: results.files!.filter(f => filterTo!.has(f.path)),
        };
    }
}
{ "redpajama_set_name": "RedPajamaGithub" }
8,108
The Minister of Labor and Social Policy of Macedonia, Mila Carovska, accompanied by her team, representatives of the Unit for Roma Inclusion Policies, the director of the National Employment Service, and a delegation of the United Nations, visited the Fundación Secretariado Gitano to study in depth the programmes we carry out, focusing on the Acceder programme, which they are considering transferring to the Balkan country. The Acceder programme is recognized as good international practice by numerous institutions, such as the International Labor Organization and the European Commission. It was launched in 2000 and, owing to its strong results, has been transferred to other countries such as Bosnia; it is under study in Italy, and now the Republic of Macedonia is considering launching the programme to boost the employment of Roma people in that country. The study visit lasted four days, from Monday, July 2nd to Thursday, July 5th. The first day took place at the FSG headquarters, where the delegation learnt about the Acceder programme, its philosophy, implementation and results. On the second day, the delegation travelled to Valladolid to see first-hand the partnership philosophy of Acceder and visited some of the collaborating companies, such as IKEA and ISS. On the third day, the minister and her delegation returned to the FSG headquarters for meetings on the integrated approach to the inclusion of Roma people through other programmes, such as Promociona, or the Calí programme for the equal treatment of Roma women. Apart from these meetings with the FSG, the minister met with representatives responsible for employment and social policies at national, regional and local level.
{ "redpajama_set_name": "RedPajamaC4" }
6,128
Q: Is there a way to show multiple metrics on a Google Data Studio Geo Map?

I'm trying to use Google Data Studio (GDS) to display a dashboard with a map. I want to show 2 metrics in the tooltip - Number of Trips and Number of Customers. But GDS lets me select only one metric at a time as the default and makes everything else optional. Is there a way to show both metrics simultaneously?

I'm able to show both metrics on the map using Tableau Public. However, I would prefer staying within the Google ecosystem, if possible. For reference, I've attached a screenshot below comparing the two visualizations.

A: Geo Maps in Google Data Studio currently only display a single metric at a time (no optional or additional tooltips for Geo Maps at the moment). However, as alluded to, Optional Metrics would allow viewers to choose a single metric from a range of different metrics.
{ "redpajama_set_name": "RedPajamaStackExchange" }
6,623
\section{Introduction} Recently there has been extended interest in weakly interacting Bose-Einstein condensates for use as atomic interferometers \cite{fattori2008a} and also to probe magnetic dipolar interactions in condensates \cite{fattori2008b}. This work was based on $^{39}$K atoms, where a broad Feshbach resonance exists at a magnetic field strength of $B_0=402.4$G \cite{errico07}, which allows a large tunability of the atomic interaction in experiments \cite{roati2007}. Similar tunability has also been reported in a condensate of $^{7}$Li \cite{pollack2009}. The atomic interaction can be reduced by tuning the scattering length, $a$, to zero, also known as zero-crossing. In a Gross-Pitaevskii mean-field picture we can thus neglect the usual non-linear term proportional to $a$. The question is then what other interactions are relevant. As shown in \cite{fattori2008b}, the magnetic dipole will contribute here. In the Gross-Pitaevskii picture we might also ask whether higher-order terms in the interaction can contribute around zero-crossing. Recently it was shown that effective-range corrections can in fact influence the stability of condensates around zero-crossing \cite{zinner2009,thoger2009}. The Feshbach resonances used thus far in experiments have typically been very broad, and as a result the effective range, $r_e$, will be small, rendering the higher-order terms negligible. However, around narrow resonances this is not necessarily the case, and finite-range corrections are not necessarily negligible. \begin{figure}[ht!] \includegraphics*[angle=0,scale=0.55]{fig1.pdf} \caption{Scattering length and effective range for the $s$-wave scattering of fermionic $^{40}$K atoms around the Feshbach resonance at $B_0=202.1$ G, demonstrating the divergence in a coupled-channel calculation (symbols) \cite{nygaard06} and in a zero-range model (full lines). The difference between the zero-range and coupled-channel models is caused by the presence of a bound state close to threshold in the open channel.} \label{fig-scat} \end{figure} The systematic inclusion of finite-range effects through derivative terms in zero-range models was begun in the study of nuclear matter decades ago \cite{skyrme1956}. Later on, the intricacies of the cut-off problems that arise in this respect were considered by many authors, both for the relativistic and the non-relativistic case (see \cite{phil98} for discussion and references). In the context of cold atoms and Feshbach resonances, we need to use a two-channel model \cite{bruun05} in order to take the lowest-order finite-range term into account. Similar models were already introduced in \cite{kokkelmanns02} and denoted resonance models (see e.g. \cite{braaten08} for a comprehensive review of scattering models for ultracold atoms). We note that whereas resonance models treat the closed-channel molecular state as a point boson, the model of \cite{bruun05} treats the molecule more naturally as a composite object of two atoms. In the end, the parameters of the two models turn out to be similarly related to the physical parameters of Feshbach resonances. In Fig.~(\ref{fig-scat}) we show calculations of the scattering length and effective range for the Feshbach resonance at $B=202.1$ G in $^{40}$K in both a coupled-channel model \cite{nygaard06} and the zero-range model discussed here. We see the effective range being roughly constant at resonance and then starting to diverge at zero-crossing.
The zero-range model provides a good approximation to the full calculations and for many-body purposes it is preferable due to its simplicity. Whereas the earlier work of \cite{kokkelmanns02} considered the regime close to the resonance, we will be exclusively concerned with zero-crossing. To our knowledge the intricacies of this region have not been addressed in the literature in the context of Feshbach resonances. Around zero-crossing the Feshbach model turns out to have a badly behaved effective-range expansion. The parameters obtained from the effective-range expansion should therefore be used with extreme caution as the series is divergent at this point. However, as we show in this article, the finite-range corrections obtained from the full T-matrix at low momenta via an effective potential turn out to be the same as one would naively expect based on the effective-range expansion. After introducing the effective potential we consider its applicability and importance in the case of Bose-Einstein condensates and for two-component Fermi gases where the attractive nature of the effective interaction at zero-crossing could lead to collapse above a certain critical particle number or to pairing instability and superfluidity. In general, we find that tight external confinement is a necessary condition for the higher-order effects to dominate the magnetic dipole interaction and be experimentally observable. \section{Two-Channel Model} We consider a two-channel $s$-wave Feshbach model with zero-range interactions \cite{bruun05} for which the on-shell open-open channel T-matrix as a function of magnetic field, $B$, is \begin{equation}\label{full} T_{oo}(B)=\frac{\frac{4\pi \hbar^2}{m} a_{bg}}{\left( 1+\frac{\Delta\mu \Delta B}{\frac{\hbar^2 q^2}{m}-\Delta\mu(B-B_0) }\right)^{-1}+i a_{bg}q}, \end{equation} where $\Delta\mu$ is the difference between the magnetic moments in the open and closed channel, $q$ is the relative momentum of the atoms of mass $m$, $a_{bg}$ is the scattering length away from the resonance at magnetic field $B_0$, and $\Delta B$ is the width of the resonance. We can compare this to the standard vacuum expression for the $T$-matrix in terms of the phase-shift given by \begin{equation}\label{tvac} T_v(B)=\frac{\frac{4\pi\hbar^2 }{m}a }{-qa\cot\delta(q)+iaq}, \end{equation} where $a(B)=a_{bg}\left(1-\frac{\Delta B}{B-B_0}\right)$ as in the commonly employed single-channel models. From Eqs.~\eqref{full} and \eqref{tvac} we obtain the relation for the phase-shift \begin{equation}\label{fulleff} q\cot\delta(q)=\frac{-1}{a_{bg}}\left( 1+\frac{\Delta\mu \Delta B}{\frac{\hbar^2 q^2}{m}-\Delta\mu(B-B_0) }\right)^{-1}. \end{equation} We now expand the right-hand side in powers of $q$ as is usually done in an effective-range expansion. This yields \begin{align}\label{expand} q\cot\delta(q)=&\frac{-1}{a(B)}&\nonumber\\&+\sum_{n=1}^{\infty}\frac{-1}{a_{bg}}\left[\frac{-a_{bg}r_{e0}}{2}\right]^{n}\left[\frac{a_{bg}}{a(B)}-1\right]^{n+1}q^{2n},& \end{align} where $r_{e0}=-2\hbar^2/(m\Delta B\Delta\mu a_{bg})$ is the background value of the effective range around the resonance. From Eq.~\eqref{expand} we can now read off all coefficients in an effective-range expansion with their full $B$-field dependence. For instance, the effective range is given simply by $r_e=r_{e0}\left[\tfrac{a_{bg}}{a}-1\right]^2$, which is divergent when $a(B)\rightarrow0$. We also clearly see that all the other coefficients are divergent in that limit. 
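Explicitly, the $n=1$ term of Eq.~\eqref{expand} is \begin{align} -\frac{1}{a_{bg}}\left[\frac{-a_{bg}r_{e0}}{2}\right]\left[\frac{a_{bg}}{a(B)}-1\right]^{2}q^{2}=\frac{r_{e0}}{2}\left[\frac{a_{bg}}{a(B)}-1\right]^{2}q^{2}, \end{align} which, upon comparison with the standard low-energy form $q\cot\delta(q)=-1/a+r_{e}q^{2}/2+O(q^{4})$, yields the expression for $r_e$ just quoted.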
This is also signaled before doing the full expansion in $q$, as the first term in Eq.~\eqref{expand} diverges at zero-crossing. However, in effective potentials derived from the T-matrix these problems are not transparent, as the lowest-order coefficient is proportional to $a(B)$ (see Eq.~\eqref{seff}). Below we will discuss what kind of constraints this introduces on the applicability of the effective-range expansion near zero-crossing. We note that similar issues were briefly discussed in a different context in \cite{massignan06}, where an equivalent to Eq.~\eqref{teq} below was obtained. Let us first consider the low-$q$ limit and compare the full T-matrix with the effective-range expansion as zero-crossing is approached. Taking the low-$q$ limit of Eq.~\eqref{fulleff} at zero-crossing, where $\Delta B/(B-B_0)=1$, we find \begin{align} q\cot\delta(q)\rightarrow \frac{-1}{a_{bg}}-\frac{\Delta\mu\Delta B}{\frac{\hbar^2q^2}{m}}, \end{align} which diverges as $q^{-2}$. Therefore the coefficients of the expansion in Eq.~\eqref{expand} must necessarily diverge in order to retain any hope of describing the low-$q$ behavior. Furthermore, since the expansion is an alternating and therefore slowly convergent series, we also conclude that many terms must be retained for a fair approximation at very small but non-zero $q$. The same conclusion can be reached by considering the radius of convergence of Eq.~\eqref{expand}, which we find by locating the pole in Eq.~\eqref{fulleff} at $\hbar^2 q^2/m=\Delta\mu(B-B_0-\Delta B)$. This radius indeed goes to zero at zero-crossing. We are thus forced to conclude that the effective-range expansion breaks down near zero-crossing. \subsection{Effective Potential at Zero-crossing} Since the effective-range expansion is insufficient, we consider the full T-matrix in the low-$q$ limit at zero-crossing. To lowest order we have \begin{equation}\label{teq} T_{oo}(B=B_0+\Delta B)=-\frac{4\pi\hbar^2a_{bg}}{m}\frac{\hbar^2 q^2}{m\Delta\mu\Delta B}+O(q^4). \end{equation} Using the expression for $r_{e0}$, this can be written \begin{equation} \frac{4\pi\hbar^2}{m}\frac{a_{bg}^{2}r_{e0}}{2}q^2. \end{equation} Knowing the T-matrix at low $q$, we can now proceed to find an effective low-$q$ potential through the Lippmann-Schwinger equation \begin{align} V=T-TG_0 V, \end{align} where $G_0=(E-H_0+i\delta)^{-1}$ is the free-space Green's function \cite{pet02}. This equation can be solved for $T(q,q')\propto{q}^2+{q'}^2$ (the symmetrized version of the full T-matrix) in an explicit cut-off approach \cite{phil98,pet02} and then be expanded to order $q^2$ for consistency with the input T-matrix. In the long-wavelength limit we can take the cut-off to zero \cite{pet02}, and for the on-shell effective potential we then obtain the obvious answer \begin{align}\label{qpot} V(q)=\frac{4\pi\hbar^2}{m}\frac{a_{bg}^{2}r_{e0}}{2}q^2 \end{align} in momentum space. The effective potential in real space is now easily found by canonical substitution ($\bm q\rightarrow -i\nabla$) and appropriate symmetrization \cite{roth01}. We have \begin{align}\label{effpot} V(\bm r)=-\frac{4\pi\hbar^2}{m}\frac{a_{bg}^{2}r_{e0}}{2}\frac{1}{2}\left[\overleftarrow{\nabla}_{\bm r}^2\delta(\bm r) +\delta(\bm r)\overrightarrow{\nabla}_{\bm r}^2\right]. \end{align} Notice that the Lippmann-Schwinger approach is non-perturbative, as opposed to the perturbative energy-shift method \cite{roth01,pet07}.
\subsection{Comparison to Effective-Range Expansion and Energy-Shift Method} Away from zero-crossing one can easily relate the effective-range expansion to an effective potential through the perturbative energy-shift method \cite{phil98,roth01,pet07}. To second order the $s$-wave effective potential is \begin{align}\label{seff} V(\bm r)=\frac{4\pi\hbar^2 a}{m}\left[\delta(\bm r)+\frac{g_2}{2}\left(\overleftarrow{\nabla}_{\bm r}^2\delta(\bm r) +\delta(\bm r)\overrightarrow{\nabla}_{\bm r}^2\right)\right], \end{align} where the first term is the effective interaction usually employed in mean-field theories of cold atoms \cite{pet02}. In terms of $a$ and $r_e$, we have $g_2=a^2/3-ar_e/2$ \cite{roth01,pet07} with the field-dependent $a=a(B)$ and $r_e=r_e(B)$. At zero-crossing the first term in Eq.~\eqref{seff} vanishes, and one might expect the second term to vanish as well. However, in the naive effective-range expansion of the two-channel model discussed above we saw that $r_e$ diverges as $a^{-2}$, and we therefore have \begin{align}\label{const} \lim_{a\rightarrow 0} ag_2=-\frac{a_{bg}^2 r_{e0}}{2}, \end{align} since $ag_2=a^3/3-a^2r_e/2$ and $a^2r_e=r_{e0}\left(a_{bg}-a\right)^2\rightarrow r_{e0}a_{bg}^2$ as $a\rightarrow 0$. In particular, if we for a moment ignore $q^4$ terms in the effective-range expansion, we recover exactly the same effective potential as in Eq.~\eqref{effpot} at zero-crossing. The finite limiting result in Eq.~\eqref{const} shows that the potential in Eq.~\eqref{seff} is well-defined as $a\rightarrow 0$, provided that appropriate regularization and renormalization are performed. Eq.~\eqref{seff} thus applies equally well at resonance ($a\rightarrow\infty$), where the gradient terms are small, and at zero-crossing, where the lowest-order delta function term is unimportant. It is thus a well-defined effective potential over the entire range of a Feshbach resonance. We therefore see that even though the effective-range expansion has divergent coefficients at zero-crossing, the lowest order does in fact give the same effective potential as the full T-matrix if we apply it naively. The effective-range expansion should thus be viewed as an asymptotic series. However, we cannot use the effective-range expansion to estimate the validity of the second-order effective potential, since the radius of convergence goes to zero at zero-crossing as discussed above. \section{Relation to Experiments} Above we only retained terms of order $q^2$ in the full T-matrix. We now estimate the energy regime in which this expression is valid. Demanding that the $q^4$ term be smaller than the $q^2$ term gives the criterion \begin{align}\label{cond} \frac{\hbar^2q^2}{m} \ll \frac{\hbar^2}{m|a_{bg}r_{e0}|}. \end{align} We relate this condition to recent experiments with bosonic condensates of $^{39}$K working around zero-crossing \cite{fattori2008a}. The resonance used there is very broad ($\Delta B=-52$G) with $a_{bg}=-29a_0$ and $r_{e0}=-58a_0$ ($a_0$ is the Bohr radius). The right-hand side of Eq.~\eqref{cond} is $2.3\cdot 10^{-7}$ eV, corresponding to a temperature of about 3 mK. Since the experiments are performed at much lower temperatures, the approximation above is certainly valid. However, as $a_{bg}$ and particularly $r_{e0}$ are small, the front factor in Eq.~\eqref{effpot} is also small. The relevant scale of comparison is the outer trap parameter $b$ \cite{zinner2009}, which is typically of order $1\mu$m, yielding a vanishing ratio $|a_{bg}^{2}r_{e0}|/b^3\sim 10^{-9}$. For broad Feshbach resonances the higher-order interactions can thus be safely ignored.
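These scales are straightforward to reproduce numerically. The short script below (illustrative only; the $^{39}$K mass is approximated by $39$ atomic mass units) evaluates the right-hand side of Eq.~\eqref{cond} and the ratio $|a_{bg}^{2}r_{e0}|/b^{3}$ for the broad resonance: \begin{verbatim}
# Illustrative check of the broad-resonance estimates above.
hbar = 1.054571817e-34        # J s
kB   = 1.380649e-23           # J / K
eV   = 1.602176634e-19        # J
a0   = 5.29177210903e-11      # Bohr radius in m
m    = 39 * 1.66053906660e-27 # approximate mass of 39K in kg

a_bg, r_e0, b = 29*a0, 58*a0, 1e-6  # magnitudes of a_bg, r_e0; b = 1 micron

E_max = hbar**2 / (m * a_bg * r_e0) # right-hand side of Eq. (cond)
print(E_max / eV)                   # ~2.3e-7 eV, as quoted
print(E_max / kB)                   # ~2.6e-3 K, i.e. about 3 mK
print(a_bg**2 * r_e0 / b**3)        # ~7e-9, the vanishing ratio
\end{verbatim}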
For very narrow resonances the situation potentially changes, as $r_{e0}$ can be very large and make the potential in Eq.~\eqref{effpot} important. As an example, we consider the narrow resonance in $^{39}$K at $B_0=25.85$G with $\Delta B=0.47$G, $a_{bg}=-33a_0$, and $r_{e0}=-5687a_0$ \cite{errico07}. The right-hand side of Eq.~\eqref{cond} is now $2\cdot 10^{-9}$ eV, corresponding to 24 $\mu$K. This is again much higher than experimental temperatures. A more careful argument can be made from the energy per particle of the non-condensed cloud. Ignoring the trap, we have $E/N=0.770k_B T_c (T/T_c)^{5/2}$ ($T_c$ is the critical temperature) \cite{pet02}. For a sample of $3\cdot 10^4$ atoms, a critical temperature of 100 nK was reported in \cite{roati2007}. Using this $T_c$, we find that $T\ll 900$nK for Eq.~\eqref{cond} to hold. Again this is within the experimental regime. The effective potential approach should therefore be applicable around zero-crossing for narrow resonances. However, even with this narrow resonance we find $|a_{bg}^{2}r_{e0}|/b^3\sim 10^{-7}$, and the effect is still completely negligible. In order to increase the relevance of the higher-order term, we now consider some very narrow resonances that have been found in $^{87}$Rb, in particular the resonance at $B_0=9.13$G \cite{widera04}, which was recently utilized in nonlinear atom interferometry \cite{gross10}. We have $\Delta B=0.015$G, $a_{bg}=99.8a_0$, and $\Delta\mu=2.00\mu_B$ \cite{chin10}, which gives $r_{e0}=-19.8\cdot 10^3a_0$ and a ratio $|a_{bg}^{2}r_{e0}|/b^3=2.92\cdot 10^{-5}(1\mu\text{m}/b)^3$. A trap length of $b\sim 0.5\mu$m as used in \cite{gross10} would thus yield $10^{-4}$, demonstrating that higher-order corrections can safely be neglected. For a ratio of 1 we need $b\sim 0.03\mu$m, which is unrealistically small in current traps or optical lattices. However, a resonance of width $\Delta B=0.0004$G is known in the same system at $B_0=406.2$G \cite{marte02} with $a_{bg}=100a_0$ and $\Delta\mu=2.01\mu_B$ \cite{chin10}. In this case we find $r_{e0}=-7.4\cdot 10^5a_0$ and a much more favorable ratio of $|a_{bg}^{2}r_{e0}|/b^3=0.001(1\mu\text{m}/b)^3$. Here we see that a ratio of 1 is achieved already for $b\sim 0.1\mu$m, which is not far from tight-trap or optical-lattice dimensions. In terms of temperature we still have to be in the ultralow regime of $T\lesssim 30$nK according to Eq.~\eqref{cond} for the latter resonance. Consider now a fermionic two-component system where $s$-wave interactions are dominant. Since we have $r_{e0}<0$ for all Feshbach resonances \cite{chin10}, the effective potential in Eq.~\eqref{qpot} is attractive, and the system could potentially be unstable toward a paired state or become unstable to collapse above a critical particle number. For simplicity we will use the semi-classical Thomas-Fermi approach to describe a gas with equal population of the two components and estimate the critical particle number. Assuming an isotropic trapping potential with length scale $b=\sqrt{\hbar/m\omega}$, where $\omega$ is the trap frequency, the ground-state density, $\rho(\bm x)$, can be found by minimization and satisfies \begin{align}\label{eom} \left[\frac{\mu}{\hbar\omega} -\frac{1}{2}\left(\frac{\bm x}{b}\right)^2\right]=&\frac{1}{2}(k_F(\bm x)b)^2 -\frac{4}{30\pi}\alpha(k_F(\bm x)b)^5, \end{align} where $\rho(\bm x)=k_{F}(\bm x)^3/6\pi^2$ and $\alpha=a_{bg}^{2}|r_{e0}|/b^3$.
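Viewed as a function of $k_F(\bm x)b$, the right-hand side of Eq.~\eqref{eom} grows quadratically at small momenta and then turns over at the point where \begin{align} \frac{d}{d(k_Fb)}\left[\frac{1}{2}(k_Fb)^{2}-\frac{4\alpha}{30\pi}(k_Fb)^{5}\right]=k_Fb-\frac{2\alpha}{3\pi}(k_Fb)^{4}=0. \end{align}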
The maximum allowed momentum and chemical potential, $\mu$, are found by solving for the turning point of the right-hand side of Eq.~\eqref{eom}, which gives \begin{align} k_{max}b=\left[\frac{3\pi}{2\alpha}\right]^{1/3}\quad\text{and}\quad\mu_{max}=\frac{3}{10}\hbar\omega(k_{max}b)^2. \end{align} We can now compare this $k_{max}$ to the value obtained from the non-interacting density within the Thomas-Fermi approximation at the center of the trap. In terms of the number of particles in each component, $N$, at the center of the trap we have $k_{F}(0)b\approx 1.906N^{1/6}$ \cite{pet02}. By equating these two expressions we obtain an estimate for the critical number of particles, $N_{max}$. Inserting the relevant units, we have \begin{align} N_{max}=2\cdot 10^{25}\left(\frac{a_0}{a_{bg}}\right)^{4}\left(\frac{a_0}{r_{e0}}\right)^{2}\left(\frac{b}{1\,\mu\text{m}}\right)^{6}, \end{align} where $a_0$ is the Bohr radius. We note that the scaling $N_{max}\propto \alpha^{-2}$ can also be obtained by considering the point at which the monopole mode becomes unstable. Typical numbers for common fermionic species $^{6}$Li or $^{40}$K in the lowest hyperfine states \cite{chin10} lead to $N_{max}\sim 10^{12}$ for $b=1\,\mu\text{m}$. This is of course a huge number, and experiments are well within this limit. Even if one reduced the trap length by a factor of ten and made the presumably unrealistic assumption that the particle number remains the same, we would still have $N\ll N_{max}$. The reason is that the $s$-wave Feshbach resonances utilized in the two-component gases are generally broad in order to study the universal regime. If we consider the narrow resonance at $B_0=543.25$G in $^{6}$Li \cite{strecker03} with $\Delta B=0.1$G, $a_{bg}=60a_0$, and $\Delta\mu=2.00\mu_B$ \cite{chin10}, we have $N_{max}\sim 2\cdot 10^{13} (b/1\mu\text{m})^6$. This is somewhat better, but we still need $b\sim 0.06\mu$m to get to an experimentally relevant $N_{max}\sim 10^6$. We have to conclude that higher-order $s$-wave interactions are highly unlikely to be observable through monopole instabilities. The instability toward Cooper pairing around zero-crossing can also be estimated in simple terms. In general the critical temperature is $T_c\sim T_F \exp(-1/N_0 |U|)$, where $N_0=mk_F(0)/2\pi^2\hbar^2$ is the density of states at the Fermi energy in the trap center and $U<0$ is a measure of the attraction. For the latter we use the effective potential in momentum space from Eq.~\eqref{qpot} and make the assumption that $q\sim k_F(0)$. Using the expression for $k_F(0)$ in terms of $N$ above, we find \begin{align} \frac{1}{N_0|U|}=\frac{1.5\cdot 10^{12}}{\sqrt{N}}\left(\frac{b}{1\,\mu\text{m}}\right)^3\left(\frac{a_0}{a_{bg}}\right)^2\frac{a_0}{|r_{e0}|}. \end{align} For broad resonances in $^{6}$Li or $^{40}$K this exponent is of order $10^3$, and $T_c$ is thus vanishingly small. However, the scaling with trap size can help, and if we imagine reducing to $b=0.1\,\mu\text{m}$, we find $T_c\lesssim 0.5T_F$ for $N=10^6$ atoms. For the narrow resonance in $^{6}$Li discussed above, we find that $T_c\sim 0.5T_F$ with $N=10^6$ can be achieved for $b\sim 0.5\mu$m, and $T_c\sim 0.1T_F$ for $N=10^5$.
Thus there may be a possibility to reach the pairing instability near zero-crossing if high particle numbers can be cooled in tight traps and narrow resonances are used. \subsection{Dipole-Dipole Interactions} The discussion above ignores the dipole-dipole interaction discussed in the introduction, which will compete against the higher-order effective potential from the Feshbach resonance. A simple estimate can be made along the lines of the discussion in \cite{pet02}. The external trapping potential sets the characteristic scale of spatial variations, and we thus find a ratio, $r$, of the magnetic dipole-dipole interaction strength, $U_{md}$, to the higher-order $s$-wave zero-range interaction strength, $U_{2}$, which can be written as \begin{align} r=\frac{U_{md}}{U_{2}}=\frac{a_0 b^2}{a_{bg}^{2}|r_{e0}|}= 35.7\left[\frac{b}{1\mu\text{m}}\right]^2 \left[\frac{100a_0}{a_{bg}}\right]^2 \frac{1000a_0}{|r_{e0}|}. \end{align} For $r<1$ the higher-order interaction term will therefore dominate the magnetic dipole term. For the case of the narrow resonances in $^{87}$Rb discussed above, we find $r\sim 0.11(b/1\mu\text{m})^2$ for the resonance at $B_0=9.13$G and $r\sim 0.05(b/1\mu\text{m})^2$ for the one at $B_0=406.2$G. For the narrow resonance in $^{6}$Li at $B_0=543.25$G we find $r\sim 1.4 (b/1\mu\text{m})^2$. These ratios clearly indicate that magnetic dipole-dipole interactions can be suppressed relative to higher-order zero-range terms for narrow Feshbach resonances and standard trap sizes. This dominance becomes even stronger for the tight traps needed for the realization of the effects discussed above, and we thus conclude that interference from the magnetic dipole-dipole term is not a major concern. \section{Conclusions} In this article we have discussed the effective potential around a Feshbach resonance as the scattering length is tuned to zero and finite-range corrections become important. We showed that the effective-range expansion is badly behaved and that the effective potential must be defined from the T-matrix. We have demonstrated that the low-momentum effective potential obtained from the full T-matrix agrees with the one obtained naively from the effective-range expansion when the scattering length goes to zero. Thus even though the effective-range expansion has divergent coefficients at zero-crossing, the first terms of the associated effective potential yield consistent results. We then estimated the effects of the terms on different condensates. Since the effective potential at zero-crossing is attractive, it may induce various instabilities, which we considered for the case of a two-component Fermi gas under harmonic confinement. For the broad Feshbach resonances used in current experiments the effective potential discussed here is negligible, and the dipole-dipole interaction dominates completely at zero-crossing. However, for narrow resonances in very tightly confined systems some of the effects might be detectable. The competing dipole interaction is small for narrow resonances in tight confinement. However, it is conceivable that effects of spherically symmetric higher-order terms could be separated from dipolar effects, which change with system geometry \cite{fattori2008b}. \paragraph{Acknowledgments} The author would like to thank Martin Th{\o}gersen for very fruitful collaborations. Correspondence with Georg Bruun about two-channel models is highly appreciated. We are grateful to Nicolai Nygaard for discussions and for producing Fig.~(\ref{fig-scat}). This work was supported by the Villum Kann Rasmussen foundation.
{ "redpajama_set_name": "RedPajamaArXiv" }
3,006
\section{Introduction} Recently many authors have offered diverse approaches to feedforward Neural Network (NN) algorithms \cite{MR4505410,MR4505203,MR4496811,MR4491367}, as well as optimization terms based on kernels. Here we establish some new results in operator theory, and we bring them to bear on the problem. The list of applications of feedforward NN models includes a variety of machine learning settings, and deep NNs based on kernels \cite{MR4505882,MR4503771,MR4492099,MR4500409,MR4376564,MR4268857,MR4134776,MR4131039}. A common theme in feedforward NN models is specific prescribed iterations which entail (i) ReLu functions \cite{MR4473797,MR4409717,MR4399726,MR4476907,MR4468133,MR4458444}, and (ii) substitution from prescribed systems of affine mappings. Moreover, (iii) each step is then linked to the next with a choice of an activation function. In this paper we show that there are natural positive definite kernels associated with the three steps going into feedforward NN constructions, as well as with their iteration. We believe that this then yields a more direct tool for kernel-based feedforward NN models. This advantage of our approach is based on two facts. First, we identify a direct notion of kernel iteration which accounts for traditional function-theoretic feedforward NN steps. Secondly, our approach offers more direct and natural choices of kernels which govern approximations involved in deep NN models, for example graph NN constructions. While positive definite kernels and their associated reproducing kernel Hilbert spaces have found diverse applications in pure and applied mathematics, we shall focus here on a new role of kernels in feedforward network models. In more detail, the main purpose of our paper is a presentation of choices of particular families of positive definite kernels which serve as powerful tools in analyses of multiple-layer feedforward Neural Networks. In general, reproducing kernel constructions and the corresponding RKHSs are powerful tools in diverse applications. In the present framework of kernel neural networks (KNNs), their role may be summarized as follows: starting with the problem at hand, when we build our RKHS$(\mu)$ via IFS iterations (e.g., via Cantor-like fractal limits), the Cantor-like $\mu$ activation functions arise as relative reproducing dipole functions for RKHS$(\mu)$ as in \figref{rmu} below. \section{\label{sec:nn}Neural networks (NN), and reproducing kernel Hilbert spaces (RKHS)} A main theme in our paper is a development of new tools for the design of feedforward Neural Network constructs. For this purpose we point out the use of positive definite kernels, and associated generating functions for the NN algorithms. These kernel-based functions include the more familiar ReLu functions; see Theorems \ref{thm:hk} and \ref{thm:hmu} below. We stress that the particular RKHS constructs will be relative in the sense of Theorems \ref{thm:hk} and \ref{thm:hmu}, i.e., the inner product reproduces differences of function values. Our approach to the use of kernels and functions for feedforward Neural Network (NN) algorithms is based on a systematic study of two classes of operators. They act as follows: (i) operators acting between prescribed kernel Hilbert spaces, and (ii) operators acting at indexed levels in the network, i.e., at fixed levels, and hence within a fixed choice of kernel Hilbert space.
Case (i) includes a systematic study of composition operators (see Corollary \ref{cor:b7} and Lemma \ref{lem:b9}) in the context of kernel Hilbert spaces; and case (ii), the study of multiplier operators and their adjoints, see e.g., \thmref{b15}. We emphasize that the two classes of operations discussed below depend on choices of kernels at each level in particular NN-network models. Together these families of operators allow for realizations of black box filter-entries in associated generalized multi-resolution systems, including operators which consist of composition followed by multiplication. Specific 3D applications are presented in the subsequent sections, Sections \ref{sec:kact} and \ref{sec:fwidth}. \textbf{Conventions. }Inside the paper we shall work with Hilbert spaces of functions, e.g., reproducing kernel Hilbert spaces (RKHSs), $L^{2}$ spaces, and Sobolev spaces. It will be assumed that these are Hilbert spaces of real valued functions. Inner products will be written $\left\langle \cdot,\cdot\right\rangle $, and we shall use subscripts on $\left\langle \cdot,\cdot\right\rangle $ to indicate the Hilbert space under consideration. Moreover, in our use of differentiation, or differential operators, we shall mean weak derivatives, i.e., differentiation in the sense of distributions, or making use of the natural duality for the spaces under consideration. Our restriction here to the real valued case is dictated by our present applications to feedforward Neural Networks. However, many of our general results in \secref{nn} below extend to complex RKHS theory. The latter in turn are important in the study of geometry and potential theory of complex domains, see e.g., \cite{MR1340173}. The power of kernel machines derives in part from the following facts. First, kernel machines serve to map points in low-dimensional data sets (typically nonlinear) into higher dimensions. The dimensionality of this linear \textquotedblleft hyperspace\textquotedblright{} may be infinite but is designed for optimization and efficient encoding of features. Hence the kernel method allows one to find coefficients of separating hyperplanes for the problem at hand via RKHS-inner products, one selected for each pair of high-dimensional features. While kernel machines of various types have been used for decades, it was with the invention of support vector machines (SVMs) that kernels took center stage (see e.g., \cite{zbMATH01669138,MR4329806,MR2849119,MR3108145,MR2274418,MR2246374}). By now, SVMs are used in diverse applications, including in bioinformatics (for finding similarities between different protein sequences), machine vision, and handwriting recognition. Deep neural networks (to be discussed in Sections \ref{sec:kact} and \ref{sec:fwidth} below) are made of layers of artificial neurons: an input layer, an output layer, and multiple hidden layers in-between. Deeper networks have more hidden layers. The parameters of the network represent the strengths of the connections between layers. Training a network yields the values of these parameters. Once trained, the ANN represents a model for turning an input (say, an image) into an output (a label or category). On the variety of uses of feedforward Neural Network algorithms, the recent literature is substantial and diverse, especially with regard to applications. See e.g., \cite{MR4512468,MR4505888,MR4395164,MR4185345,MR4072078,MR3457582}. The following lemma is a basic result in the theory of RKHSs.
For details, see e.g., \cite{MR738131,zbMATH06526193,zbMATH06526192,MR4250453,Szabook}, and also \cite{MR4295177} and the papers cited therein. \begin{lem} \label{lem:B1}Fix a p.d. kernel $X\times X\xrightarrow{\;K\;}\mathbb{R}\left(\text{or \ensuremath{\mathbb{C}}}\right)$, and let $\mathscr{H}_{K}$ denote the corresponding RKHS. Then a function $F$ on $X$ is in $\mathscr{H}_{K}$ if and only if there exists a constant $C_{F}<\infty$ such that the following estimate holds for all $n\in\mathbb{N}$, all $\left(\xi_{i}\right)_{i=1}^{n}$, $\xi_{i}\in\mathbb{R}\left(\text{or \ensuremath{\mathbb{C}}}\right)$, and all $\left(x_{i}\right)_{i=1}^{n}$, $x_{i}\in X$: \begin{equation} \left|\sum_{i=1}^{n}\xi_{i}F\left(x_{i}\right)\right|^{2}\leq C_{F}\sum_{i=1}^{n}\sum_{j=1}^{n}\overline{\xi}_{i}\xi_{j}K\left(x_{i},x_{j}\right).\label{eq:rhks1} \end{equation} \end{lem} \begin{rem} With the construction $K\mapsto\mathscr{H}_{K}$ (referring to the RKHS of a fixed p.d. kernel $K$), we arrive at the following two conclusions: \begin{enumerate} \item For all $x\in X$, the function $K_{x}:=K\left(\cdot,x\right)$ is in $\mathscr{H}_{K}$; and \item For all $F\in\mathscr{H}_{K}$, and $x\in X$, we have \begin{equation} F\left(x\right)=\left\langle F,K\left(\cdot,x\right)\right\rangle _{\mathscr{H}_{K}},\label{eq:b2} \end{equation} i.e., the values of functions $F$ in $\mathscr{H}_{K}$ are \emph{reproduced} via the inner product $\left\langle \cdot,\cdot\right\rangle _{\mathscr{H}_{K}}$, and the kernel functions. \end{enumerate} In addition to (\ref{eq:b2}), we shall also consider \emph{relative reproducing kernels}, and \emph{relative} RKHSs. As noted in \cite{MR3251728}, the \emph{relative }reproducing property takes the following form \begin{equation} F\left(y\right)-F\left(x\right)=\left\langle F,v_{x,y}\left(\cdot\right)\right\rangle _{\mathscr{H}_{rel}},\label{eq:b3} \end{equation} now valid for all pairs of points $x,y\in X$. So this entails double-indexed kernel functions $v_{x,y}\in\mathscr{H}_{rel}$. A particular class of $\mathscr{H}_{rel}$ spaces is considered in \thmref{hmu} below. There the setting is $X=\mathbb{R}$, and the relative kernel functions $v_{a,b}$ take the form of activation functions for classes of feedforward-NN-algorithms, see e.g., \figref{rmu}. A systematic study of (\ref{eq:b3}) is undertaken in \cite{MR3251728} where it is shown that the setting of relative reproducing is characterized by \emph{conditionally negative definite functions}. \end{rem} We now recall the RKHS for the standard 1-dimensional Brownian motion. (See e.g., \cite{MR4295177,MR4302453,MR4274591,MR4472250}.) \begin{lem} When $K$ is the Brownian motion kernel on $\mathbb{R}_{\geq}\times\mathbb{R}_{\geq}$, i.e., \begin{equation} K\left(x,y\right)=x\wedge y=\frac{\left|x\right|+\left|y\right|-\left|x-y\right|}{2},\quad x,y\in\mathbb{R}_{\geq},\label{eq:B2} \end{equation} the corresponding RKHS $\mathscr{H}_{K}$ is the Hilbert space of absolutely continuous functions $f$ on $\mathbb{R}_{\geq}$ such that the derivative $f'=df/dx$ is in $L^{2}\left(\mathbb{R}_{\geq}\right)$, and $f\left(0\right)=0$.
Moreover, \begin{equation} \left\Vert f\right\Vert _{\mathscr{H}_{K}}^{2}=\int_{\mathbb{R}_{\geq}}\left|f'\left(x\right)\right|^{2}dx,\quad\text{for all \ensuremath{f\in\mathscr{H}_{K}}.}\label{eq:1} \end{equation} \end{lem} \begin{proof} The key observation is that, if $x>0$, the function \begin{equation} \mathbb{R}_{\geq}\ni y\longmapsto F_{x}\left(y\right):=K\left(y,x\right)=\begin{cases} y & \text{if \ensuremath{y\leq x}}\\ x & \text{if \ensuremath{y>x}} \end{cases} \end{equation} has a weak derivative. Indeed, we have \begin{equation} \frac{dF_{x}}{dy}=\chi_{\left[0,x\right]}, \end{equation} i.e., the indicator function of the interval $\left[0,x\right]$. Hence if $f$ is a function with $f'\in L^{2}\left(\mathbb{R}_{\geq}\right)$ and $f\left(0\right)=0$, then \begin{equation} f\left(x\right)=f\left(x\right)-f\left(0\right)=\int_{0}^{x}f'\left(y\right)dy=\int_{\mathbb{R}_{\geq}}F_{x}'\left(y\right)f'\left(y\right)dy,\label{eq:a4} \end{equation} and the RHS in (\ref{eq:a4}) is the inner product from the Hilbert space defined by the RHS in (\ref{eq:1}). The converse implication follows from the general theory of RKHSs. Recall that the RKHS of a kernel is a Hilbert space completion of the functions \begin{equation} y\longmapsto K\left(x,y\right) \end{equation} as $x$ varies over $\mathbb{R}_{\geq}$. Moreover, for $K\left(x,y\right)=x\wedge y$, \[ \left\langle K\left(\cdot,x_{1}\right),K\left(\cdot,x_{2}\right)\right\rangle _{\mathscr{H}_{K}}=K\left(x_{1},x_{2}\right)=x_{1}\wedge x_{2}, \] and we can compute as follows: \begin{align*} \int_{\mathbb{R}_{\geq}}\left(\frac{d}{dy}K\left(\cdot,x_{1}\right)\right)\left(\frac{d}{dy}K\left(\cdot,x_{2}\right)\right)dy & =\int_{\mathbb{R}_{\geq}}\chi_{\left[0,x_{1}\right]}\left(y\right)\chi_{\left[0,x_{2}\right]}\left(y\right)dy\\ & =\lambda\left(\left[0,x_{1}\right]\cap\left[0,x_{2}\right]\right)\\ & =x_{1}\wedge x_{2}=K\left(x_{1},x_{2}\right) \end{align*} where $\lambda=dy$ denotes the Lebesgue measure. \end{proof} \begin{rem} Note that the functions $v_{a,b}$ (called dipoles) in $\mathscr{H}_{K}$ which satisfy \[ f\left(b\right)-f\left(a\right)=\left\langle f,v_{a,b}\right\rangle ,\quad\text{for all \ensuremath{f\in\mathscr{H}_{K}}} \] (see (\ref{eq:1}) and (\ref{eq:a4})) are as follows: \[ v_{a,b}\left(x\right)=\begin{cases} 0 & \text{if }x<a\\ x-a & \text{if }a\leq x<b\\ b-a & \text{if }x\geq b, \end{cases} \] as illustrated in \figref{dp}. Also compare with \thmref{hmu} and \figref{rmu}, and the iterations in \secref{fwidth}. \end{rem} \begin{figure}[H] \includegraphics[width=0.5\textwidth]{dipole1} \caption{\label{fig:dp}The generating dipole function $\left\{ v_{a,b}\right\} $ indexed by pairs $a,b$ such that $a<b$. Compare with \figref{rmu} below.} \end{figure} \textbf{Induced metrics.} For a general p.d. kernel $K$ on $X\times X$, there is an induced metric on $X$, \[ d_{K}:X\times X\rightarrow\mathbb{R}_{+} \] defined as (see e.g., \cite{MR4302453}) \begin{equation} d_{K}\left(x,y\right)=\left\Vert K\left(\cdot,x\right)-K\left(\cdot,y\right)\right\Vert _{\mathscr{H}_{K}}^{2}. \end{equation} In particular, \[ d_{K}\left(x,y\right)=K\left(x,x\right)+K\left(y,y\right)-2\Re\left\{ K\left(x,y\right)\right\} . \] Note that $d_{K}^{1/2}$ is also a metric on $X\times X$.
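Both the dipole identity and the induced metric are easy to check numerically. The following sketch is our illustration (grid size and the test function are arbitrary choices, and it anticipates the example that follows): it uses $K(x,y)=x\wedge y$ on $[0,1]$, the $L^{2}$ pairing of derivatives from (\ref{eq:1}), and a discrete Riemann sum.
\begin{verbatim}
# Sketch for K(x,y) = min(x,y) on [0,1]: the dipole v_{a,b}, the
# relative reproducing identity f(b) - f(a) = <f, v_{a,b}>_{H_K}
# (the inner product is the L^2 pairing of derivatives), and the
# induced metric d_K(s,t) = |s - t|.
import numpy as np

K = lambda x, y: np.minimum(x, y)

def dipole(x, a, b):
    # v_{a,b}: 0 for x < a, x - a on [a,b), constant b - a for x >= b
    return np.clip(x - a, 0.0, b - a)

x = np.linspace(0.0, 1.0, 200_001)
f = np.sin(3 * x) * x                  # any f with f(0) = 0, f' in L^2
a, b = 0.2, 0.7

dx = x[1] - x[0]
lhs = np.interp(b, x, f) - np.interp(a, x, f)
rhs = np.sum(np.gradient(f, x) * np.gradient(dipole(x, a, b), x)) * dx
print(lhs, rhs)                        # agree up to grid error

s, t = 0.3, 0.8                        # d_K(s,t) = K(s,s)+K(t,t)-2K(s,t)
print(K(s, s) + K(t, t) - 2 * K(s, t), abs(s - t))   # 0.5  0.5
\end{verbatim}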
\begin{example} For $K\left(x,y\right)=x\wedge y$ on $\mathbb{R}\times\mathbb{R}$ as in (\ref{eq:B2}), \begin{align*} \left\Vert K\left(\cdot,s\right)-K\left(\cdot,t\right)\right\Vert _{\mathscr{H}_{K}}^{2} & =\left\Vert \left(\cdot\wedge s\right)'-\left(\cdot\wedge t\right)'\right\Vert _{L^{2}}^{2}\\ & =\left\Vert \chi_{\left[0,s\right]}-\chi_{\left[0,t\right]}\right\Vert _{L^{2}}^{2}\\ & =\left|s-t\right|. \end{align*} \end{example} The results below deal with a general framework of pairs of sets $X$ and $Y$, each equipped with a positive definite kernel, $K$ resp., $L$, $K$ on $X$, and $L$ on $Y$. With view to realization of feedforward Neural Network-functions, we will present an explicit framework (see (\ref{eq:b9}) and (\ref{eq:ad1})) which allows us to pass from (nonlinear) functions $f:X\rightarrow Y$ to linear operators $T_{f}$ acting between the respective RKHSs $\mathscr{H}_{K}$ and $\mathscr{H}_{L}$. This will be a representation in the sense that composition of functions will map into products of the corresponding linear operators. Some care must be exercised as the linear operators $T_{f}$ will in general be unbounded. Nonetheless, we shall show that the operators still fall in a class where spectral resolutions are available, see \thmref{b9} and \corref{b10}. \begin{thm} \label{thm:mc}Consider p.d. kernels $X\times X\xrightarrow{\;K\;}\mathbb{R}$ and $Y\times Y\xrightarrow{\;L\;}\mathbb{R}$. Let $f:X\rightarrow Y$ be Lipschitz continuous with respect to the induced metrics $d_{K},d_{L}$, i.e., \[ d_{L}\left(f\left(x\right),f\left(y\right)\right)\leq c_{f}d_{K}\left(x,y\right),\quad x,y\in X, \] for some constant $c_{f}$. Define the operator $T_{f}:\mathscr{H}_{K}\rightarrow\mathscr{H}_{L}$ by \[ T_{f}\left(K_{x}\right)\left(y\right)=L\left(f\left(x\right),y\right) \] and extend it by linearity and density. Then, for any fixed $c\in Y$, the function \begin{equation} F_{f}:X\rightarrow\mathbb{R},\quad F_{f}\left(x\right):=L\left(f\left(x\right),c\right)\label{eq:b9} \end{equation} is in the RKHS $\mathscr{H}_{K}$ if, and only if \begin{equation} L_{c}\in dom(T_{f}^{*}), \end{equation} the domain of the adjoint operator. Moreover, \begin{align*} F_{f,y}\left(x\right) & :=L\left(f\left(x\right),y\right)\in\mathscr{H}_{K},\;\forall y\in Y\\ & \Updownarrow\\ T_{f}\: & \text{is closable}. \end{align*} (See also \thmref{mc2}.) \end{thm} \begin{proof} Let the setting be as in the statement of the theorem, i.e., $X\xrightarrow{\;f\;}Y$ assumed continuous with respect to the two metrics, $d_{K}$ on $X$ and $d_{L}$ on $Y$; so in particular, for pairs of points $x_{1},x_{2}\in X$, we have \begin{align} d_{K}\left(x_{1},x_{2}\right) & =\left\Vert K\left(\cdot,x_{1}\right)-K\left(\cdot,x_{2}\right)\right\Vert _{\mathscr{H}_{K}}^{2}\label{eq:d1}\\ & =K\left(x_{1},x_{1}\right)+K\left(x_{2},x_{2}\right)-2K\left(x_{1},x_{2}\right). \end{align} We further fix a point $c\in Y$, and set $F=F_{f,c}$, specified as follows: \[ F\left(x\right)=L\left(f\left(x\right),c\right),\;\text{for all \ensuremath{x\in X},} \] so $F:X\rightarrow\mathbb{R}_{+}\cup\left\{ 0\right\} $. 
Now, for every $N$, and every subset $S_{N}=\left(x_{1},x_{2},\dots,x_{N}\right)\subset X$, consider the following matrix operations (in $N$ dimensions): \begin{equation} \underset{\text{column vector}}{\underbrace{F\big|_{N}:=\begin{bmatrix}F\left(x_{1}\right)\\ \vdots\\ F\left(x_{N}\right) \end{bmatrix}}},\;\text{and}\quad\underset{\text{matrix of a rank-1 operator}}{\underbrace{Q_{N}:=\left|F\big|_{N}\left\rangle \right\langle F\big|_{N}\right|}}, \end{equation} i.e., the rank-1 operator on $\mathbb{R}^{N}$ written in Dirac's notation, defined as \begin{equation} Q_{N}\left(\xi\right)=\left\langle F\big|_{N},\xi\right\rangle F\big|_{N} \end{equation} for all $\xi\in\mathbb{R}^{N}$. Set \begin{equation} K_{N}:=\left(K\left(x_{i},x_{j}\right)\right)_{i,j=1}^{N}=\begin{bmatrix}K\left(x_{1},x_{1}\right) & \cdots & K\left(x_{1},x_{N}\right)\\ \vdots & \ddots & \vdots\\ K\left(x_{N},x_{1}\right) & \cdots & K\left(x_{N},x_{N}\right) \end{bmatrix}, \end{equation} a sample matrix. For the convex cone of all positive definite $N\times N$ matrices, we introduce the following familiar ordering: $K_{N}\ll_{C}K'_{N}$ iff (Def.) $\exists C<\infty$ such that \begin{equation} \xi^{T}K_{N}\xi\leq C\xi^{T}K'_{N}\xi\;\text{for all }\xi\in\mathbb{R}^{N}. \end{equation} Now an application of \lemref{B1} above shows that the assertion in the theorem is equivalent to the existence of a finite constant $C$ (independent of $S_{N}=\left(x_{i}\right)_{i=1}^{N}$) satisfying $Q_{N}\ll_{C}K_{N}$, i.e., the estimate \begin{equation} \left|\sum_{i}\xi_{i}L\left(f\left(x_{i}\right),c\right)\right|^{2}\leq C\sum_{i}\sum_{j}\xi_{i}K\left(x_{i},x_{j}\right)\xi_{j} \end{equation} holds for all $N$, all $S_{N}=\left\{ x_{i}\right\} _{i=1}^{N}$, and all $\xi\in\mathbb{R}^{N}$. We get this from the assumption on $f$ in the theorem. See details below. \end{proof} \textbf{Summary of \thmref{mc}:} Start with $K$ p.d. on $X\times X$, $L$ p.d. on $Y\times Y$, and $f:X\rightarrow Y$. We introduce the metrics $d_{K}$ on $X$, and $d_{L}$ on $Y$, and we consider $f$ continuous, or Lipschitz. To get the desired conclusion \[ \left(x\longmapsto L\left(f\left(x\right),y\right)\right)\in\mathscr{H}_{K}, \] we must introduce an operator $T_{f}:\mathscr{H}_{K}\rightarrow\mathscr{H}_{L}$. The right choice is \[ L_{y}\in dom(T_{f}^{*}). \] See details below: fix two kernels $K$ and $L$, assumed p.d. on $X\times X$ and on $Y\times Y$, and pass to the corresponding RKHSs $\mathscr{H}_{K}$ and $\mathscr{H}_{L}$. \begin{problem*} Find conditions on functions $X\xrightarrow{\;f\;}Y$ with the property that, for all $y\in Y$, the induced function \begin{equation} \underset{F_{f,y}\left(\cdot\right)\text{ as a function on \ensuremath{X}}}{\underbrace{\left(X\ni x\longmapsto L\left(f\left(x\right),y\right)\right)}}\in\mathscr{H}_{K}.\label{eq:ad1} \end{equation} \end{problem*} The argument stressed below is via dual operators (bounded) \[ \xymatrix{\mathscr{H}_{K}\ar@/{}^{1.3pc}/[rr]^{T_{f}} & & \mathscr{H}_{L}\ar@/{}^{1.3pc}/[ll]^{T_{f}^{*}}} ; \] but the unbounded case is also interesting. Some remarks follow on the definition of the operator $T_{f}:\mathscr{H}_{K}\rightarrow\mathscr{H}_{L}$ in the case when no additional assumptions are placed on $X\xrightarrow{\;f\;}Y$.
We define \[ T_{f}\left(K_{x}\right)\left(y\right)=L\left(f\left(x\right),y\right); \] and we extend $T_{f}$ to linear combinations: \begin{equation} \mathscr{D}_{K}:=\underset{\text{function on \ensuremath{X}}}{\big\{\underbrace{\sum_{i}c_{i}K_{x_{i}}}\big\}}\xrightarrow{\quad T_{f}\quad}\underset{\text{function on \ensuremath{Y}}}{\underbrace{\sum_{i}c_{i}L\left(f\left(x_{i}\right),\cdot\right)}}.\label{eq:ad2} \end{equation} But to make sense of (\ref{eq:ad2}) so it is well defined, we must be careful with equivalence classes. If $f:X\rightarrow Y$ is a general function, the (generalized) operator (\ref{eq:ad2}) may be non-closable. However, we can still define the adjoint $T_{f}^{*}$, but its domain might be ``small''. Set \begin{equation} \mathscr{D}_{K}:=span\left\{ K_{x}\right\} _{x\in X},\label{eq:ad3} \end{equation} then (by definition) a vector $\psi\in\mathscr{H}_{L}$ is in $dom(T_{f}^{*})$ iff $\exists C_{\psi}<\infty$ s.t. \begin{equation} \left|\left\langle T_{f}\varphi,\psi\right\rangle _{\mathscr{H}_{L}}\right|\leq C_{\psi}\left\Vert \varphi\right\Vert _{\mathscr{H}_{K}},\quad\forall\varphi\in\mathscr{D}_{K}.\label{eq:ad4} \end{equation} We then set $T_{f}^{*}\psi=$ the solution to \begin{equation} \left\langle T_{f}\varphi,\psi\right\rangle _{\mathscr{H}_{L}}=\left\langle \varphi,T_{f}^{*}\psi\right\rangle _{\mathscr{H}_{K}},\quad\forall\varphi\in\mathscr{D}_{K}.\label{eq:ad5} \end{equation} Let $K,L,f$ be as specified, and assume, for some $y\in Y$, that $L_{y}\in dom(T_{f}^{*})$; then (\ref{eq:ad5}), applied to $T_{f}^{*}:\mathscr{H}_{L}\rightarrow\mathscr{H}_{K}$ with $\varphi=K_{x}$ and $\psi=L_{y}$, yields \[ x\longmapsto T_{f}^{*}\left(L_{y}\right)\left(x\right)=L\left(f\left(x\right),y\right)\in\mathscr{H}_{K}. \] So the conclusion of \thmref{mc} is that \begin{equation} \left(x\longmapsto L\left(f\left(x\right),y\right)\right)\in\mathscr{H}_{K}\label{eq:ad6} \end{equation} holds iff $L_{y}\in dom(T_{f}^{*})$. In this case there are no difficulties with (\ref{eq:ad2}) and we get a dual pair $T_{f}$ and $T_{f}^{*}$, \begin{equation} \left\langle T_{f}\varphi,\psi\right\rangle _{\mathscr{H}_{L}}=\left\langle \varphi,T_{f}^{*}\psi\right\rangle _{\mathscr{H}_{K}}\label{eq:ad7} \end{equation} for all $\varphi\in\mathscr{D}_{K}$, and $\psi\in\mathscr{D}_{L}$, $\xymatrix{\mathscr{H}_{K}\ar@/{}^{0.7pc}/[r]^{T_{f}} & \mathscr{H}_{L}\ar@/{}^{0.7pc}/[l]^{T_{f}^{*}}} $. Setting $\varphi=K_{x}$, and $\psi=L_{y}$, (\ref{eq:ad7}) implies \begin{equation} T_{f}^{*}\left(L_{y}\right)\left(x\right)=L\left(f\left(x\right),y\right)=\left(T_{f}\left(K_{x}\right)\right)\left(y\right).\label{eq:ad8} \end{equation} But the previous condition $L_{y}\in dom(T_{f}^{*})$ (compare (\ref{eq:ad7})) amounts to the assertion that $T_{f}^{*}\left(L_{y}\right)\in\mathscr{H}_{K}$, and by (\ref{eq:ad8}), this is then the conclusion of \thmref{mc}. \begin{cor}[composition operators] \label{cor:b7} Let $X$, $Y$, $K$ and $L$ be as specified above; in particular, $K$ is a fixed p.d. kernel on $X\times X$, and the RKHS $\mathscr{H}_{K}$ is a Hilbert space of scalar valued functions on $X$. Similarly, $\mathscr{H}_{L}$ is a Hilbert space of scalar valued functions on $Y$. Both $\mathscr{H}_{K}$ and $\mathscr{H}_{L}$ satisfy the defining axioms for RKHSs; see \lemref{B1} above.
As noted, every function $f$, $X\xrightarrow{\;f\;}Y$, induces a linear operator \begin{equation} \mathscr{H}_{K}\xrightarrow{\;T_{f}\;}\mathscr{H}_{L},\label{eq:st1} \end{equation} with dense domain $\mathscr{D}_{K}$; see the statement of \thmref{mc}. For the adjoint operator $T_{f}^{*}$, \begin{equation} \mathscr{H}_{L}\xrightarrow{\;T_{f}^{*}\;}\mathscr{H}_{K},\label{eq:st2} \end{equation} we have the following: For a function $\psi$ in $\mathscr{H}_{L}$, the two characterizations (\ref{eq:st3a}) and (\ref{eq:st3b}) are equivalent: \begin{gather} \psi\in dom(T_{f}^{*})\label{eq:st3a}\\ \Updownarrow\nonumber \\ \psi\circ f\in\mathscr{H}_{K}.\label{eq:st3b} \end{gather} In the affirmative, \begin{equation} T_{f}^{*}\left(\psi\right)=\psi\circ f:X\rightarrow\mathbb{R},\label{eq:st4} \end{equation} i.e., $T_{f}^{*}$ is the composition operator. \end{cor} \begin{proof} (\ref{eq:st3a})$\Rightarrow$(\ref{eq:st3b}). Assume (\ref{eq:st3a}); we then apply (\ref{eq:ad4}) and get $C_{\psi}<\infty$ with: \begin{equation} \left|\sum_{i}c_{i}\left(T_{f}^{*}\left(\psi\right)\right)\left(x_{i}\right)\right|^{2}\leq C_{\psi}\sum_{i}\sum_{j}c_{i}c_{j}K\left(x_{i},x_{j}\right).\label{eq:st5} \end{equation} But \begin{align} T_{f}^{*}\left(\psi\right)\left(x_{i}\right) & =\left\langle K_{x_{i}},T_{f}^{*}\left(\psi\right)\right\rangle _{\mathscr{H}_{K}}\label{eq:st6}\\ & =\left\langle T_{f}\left(K_{x_{i}}\right),\psi\right\rangle _{\mathscr{H}_{L}}\nonumber \\ & =\left\langle L\left(f\left(x_{i}\right),\cdot\right),\psi\right\rangle _{\mathscr{H}_{L}}\nonumber \\ & =\psi\left(f\left(x_{i}\right)\right),\nonumber \end{align} where we used the RKHS property for $\mathscr{H}_{L}$ in the last step. Substitution into (\ref{eq:st5}) yields \begin{equation} \left|\sum_{i}c_{i}\psi\left(f\left(x_{i}\right)\right)\right|^{2}\leq C_{\psi}\sum_{i}\sum_{j}c_{i}c_{j}K\left(x_{i},x_{j}\right),\label{eq:st7} \end{equation} and so, by \lemref{B1} applied to $F=\psi\circ f$, the conclusion (\ref{eq:st3b}) follows. (\ref{eq:st3b})$\Rightarrow$(\ref{eq:st3a}). Assume (\ref{eq:st3b}); we then reverse the above reasoning to get \begin{equation} \Big|\Big\langle T_{f}\underset{\in\mathscr{D}_{K}}{\underbrace{\left(\sum_{i}c_{i}K_{x_{i}}\right)}},\psi\Big\rangle_{\mathscr{H}_{L}}\Big|\leq\sqrt{C_{\psi}}\left\Vert \sum_{i}c_{i}K_{x_{i}}\right\Vert _{\mathscr{H}_{K}}\label{eq:st8} \end{equation} which states that $\psi\in dom(T_{f}^{*})$, which is condition (\ref{eq:st3a}). Now combine this with (\ref{eq:st6}), and we conclude that (\ref{eq:st4}) is satisfied for $\psi$, i.e., that $T_{f}^{*}\psi=\psi\circ f$ holds. \end{proof} \subsection{Basis approach} Let $X,Y,K,L,f$ be as usual, and define $T_{f}:\mathscr{H}_{K}\rightarrow\mathscr{H}_{L}$. Since $K$ is p.d. on $X\times X$, the RKHS $\mathscr{H}_{K}$ admits an ONB $\left\{ h_{i}\right\} _{i\in\mathbb{N}}$, $h_{i}\in\mathscr{H}_{K}$; by general theory, we get the pointwise formula: \begin{equation} K\left(x_{1},x_{2}\right)=\sum_{i\in\mathbb{N}}h_{i}\left(x_{1}\right)h_{i}\left(x_{2}\right).\label{eq:bd1} \end{equation} Then our condition in \thmref{mc} is equivalent to the following: \begin{gather*} \left(X\ni x\longmapsto L\left(f\left(x\right),y\right)\right)\in\mathscr{H}_{K}\tag{{a}}\\ \Updownarrow\\ \sum_{i\in\mathbb{N}}\left|\left(T_{f}\left(h_{i}\right)\right)\left(y\right)\right|^{2}<\infty.\tag{{b}} \end{gather*} \begin{proof} $\left(a\right)\Rightarrow\left(b\right)$ is detailed below; the converse implication follows by the same argument.
So by $\left(a\right)$, $L_{y}\in dom(T_{f}^{*})$ and therefore: \begin{equation} T_{f}^{*}\left(L_{y}\right)\left(\cdot\right)\in\mathscr{H}_{K}.\label{eq:bd2} \end{equation} Since $\left\{ h_{i}\right\} $ is an ONB in $\mathscr{H}_{K}$, \begin{equation} \sum_{i}\Big|\big\langle h_{i},\underset{\in\mathscr{H}_{K}}{\underbrace{T_{f}^{*}\left(L_{y}\right)}}\big\rangle\Big|^{2}=\left\Vert T_{f}^{*}\left(L_{y}\right)\right\Vert _{\mathscr{H}_{K}}^{2}<\infty.\label{eq:bd3} \end{equation} But from the $\text{LHS}_{\left(\ref{eq:bd3}\right)}$: \[ \left\langle h_{i},T_{f}^{*}\left(L_{y}\right)\right\rangle _{\mathscr{H}_{K}}=\left\langle T_{f}\left(h_{i}\right),L_{y}\right\rangle _{\mathscr{H}_{L}}=\left(T_{f}\left(h_{i}\right)\right)\left(y\right), \] and $\left(b\right)$ follows. \end{proof} \textbf{Key Question: When is $F_{f,y}\left(\cdot\right)\in\mathscr{H}_{K}$?} The cleanest answer to the question of what functions $X\xrightarrow{\;f\;}Y$ have the property that \begin{equation} F_{f,y}\left(x\right)=L\left(f\left(x\right),y\right)\text{ is in \ensuremath{\mathscr{H}_{K}} }\label{eq:cd1} \end{equation} is this: \begin{thm} \label{thm:mc2}Let $K,L$ and $f$ be given; then \begin{equation} F_{f,y}\;\text{in }\left(\ref{eq:cd1}\right)\;\text{is in }\mathscr{H}_{K}\Longleftrightarrow L_{y}\in dom(T_{f}^{*}),\label{eq:cd2} \end{equation} where the operator $T_{f}:\mathscr{H}_{K}\rightarrow\mathscr{H}_{L}$ is given by \[ T_{f}\left(K_{x}\right):=L\left(f\left(x\right),\cdot\right). \] Moreover, (\ref{eq:cd2}) holds for all $y\in Y\Longleftrightarrow T_{f}$ is closable. \end{thm} \subsection{Dual pairs of operators} Consider a symmetric pair of operators with dense domains: \[ \xymatrix{\mathscr{H}_{K}\ar@/{}^{1.3pc}/[rr]^{T_{f}} & & \mathscr{H}_{L}\ar@/{}^{1.3pc}/[ll]^{T_{f}^{*}}} \] ($T=T_{f}$, since it will depend on $f$) where \begin{equation} span\left\{ K_{x}\right\} _{x\in X}\:\text{is dense in \ensuremath{\mathscr{H}_{K}}}\label{eq:dd2} \end{equation} and \begin{equation} span\left\{ L_{y}\right\} _{y\in Y}\:\text{is dense in \ensuremath{\mathscr{H}_{L}}}\label{eq:dd3} \end{equation} such that \begin{align} K_{x} & \in dom(T_{f}),\;\text{and}\label{eq:dd4}\\ L_{y} & \in dom(T_{f}^{*})\label{eq:dd5} \end{align} where ``$dom$'' denotes the respective operator domains. \begin{note*} We note that \[ T_{f}\left(K_{x}\right)\left(\cdot\right)=L\left(f\left(x\right),\cdot\right)\in\mathscr{H}_{L} \] is always well defined, with dense domain, but the crux of the matter is $T_{f}^{*}$. Also note that (\ref{eq:dd5}) is the condition in \thmref{mc}. \end{note*} Let $f:X\rightarrow Y$ be as before, with the two p.d. kernels $K$ and $L$ fixed. We then introduce the corresponding (densely defined) operator $T_{f}:\mathscr{H}_{K}\rightarrow\mathscr{H}_{L}$ by setting \begin{equation} T_{f}\left(K_{x}\right)=L\left(f\left(x\right),\cdot\right)\in\mathscr{H}_{L}.\label{eq:cc1} \end{equation} \textbf{Notation and convention.}
$K_{x}\left(\cdot\right)$ is the kernel function in $\mathscr{H}_{K}$ as usual: \begin{align} K_{x}\left(t\right) & =K\left(x,t\right),\;\forall t\in X\:\text{and similarly,}\label{eq:cc2}\\ L_{y}\left(u\right) & =L\left(y,u\right),\;\forall u\in Y.\label{eq:cc3} \end{align} \begin{lem} \label{lem:b9}When $L_{y}\in dom(T_{f}^{*})$, then \begin{equation} \left(T_{f}^{*}\left(L_{y}\right)\right)\left(x\right)=L\left(f\left(x\right),y\right)\;\text{on \ensuremath{X},}\label{eq:cc4} \end{equation} equivalently, \begin{equation} T_{f}^{*}\left(L_{y}\right)\left(\cdot\right)=L\left(f\left(\cdot\right),y\right)\;\text{on \ensuremath{X}.}\label{eq:cc4'} \end{equation} \end{lem} \begin{proof}[Proof of (\ref{eq:cc4})] The conclusion (\ref{eq:cc4}) is equivalent to the following assertion: \[ \xymatrix{\xyC{0pc}\Big\langle\underset{L\left(f\left(x\right),\cdot\right)}{\underbrace{T_{f}\left(K_{x}\right)\left(\cdot\right)}},L_{y}\Big\rangle_{\mathscr{H}_{L}}\ar[dr] & = & \Big\langle K_{x},\stackrel[T_{f}^{*}\left(L_{y}\right)]{\in\mathscr{H}_{K}}{\underbrace{\overbrace{L\left(f\left(\cdot\right),y\right)}}}\Big\rangle_{\mathscr{H}_{K}}\ar[dl]\\ & L\left(f\left(x\right),y\right) } \] The conclusion (\ref{eq:cc4}) follows since the respective kernel functions span dense subspaces. \end{proof} Recall, \begin{align*} \text{the function }x & \longmapsto F_{f,y}\left(x\right)=L\left(f\left(x\right),y\right)\in\mathscr{H}_{K}\\ & \Updownarrow\\ L_{y} & \in dom(T_{f}^{*}). \end{align*} Assume that $L_{y}\in dom(T_{f}^{*})$; then apply the condition for functions in $\mathscr{H}_{K}$ to $F_{f,y}\left(\cdot\right)$, so $\forall n$, $\forall\left(x_{i}\right)_{1}^{n}$, $\forall\left(c_{i}\right)_{1}^{n}$, $c_{i}\in\mathbb{R}$: \begin{eqnarray*} \left|\sum_{i}c_{i}F_{f,y}\left(x_{i}\right)\right|^{2} & = & \left|\sum_{i}c_{i}L\left(f\left(x_{i}\right),y\right)\right|^{2}\\ & = & \left|\left\langle \sum_{i}c_{i}K_{x_{i}},T_{f}^{*}\left(L_{y}\right)\right\rangle _{\mathscr{H}_{K}}\right|^{2}\\ & \underset{\text{Schwarz}}{\leq} & \left\Vert \sum_{i}c_{i}K_{x_{i}}\right\Vert _{\mathscr{H}_{K}}^{2}\left\Vert T_{f}^{*}\left(L_{y}\right)\right\Vert _{\mathscr{H}_{K}}^{2}\\ & = & \sum_{i}\sum_{j}c_{i}c_{j}K\left(x_{i},x_{j}\right)\left\Vert T_{f}^{*}\left(L_{y}\right)\right\Vert _{\mathscr{H}_{K}}^{2}. \end{eqnarray*} \begin{lem} The implication below holds in both directions: \begin{gather*} X\ni x\longmapsto L\left(f\left(x\right),y\right)\in\mathscr{H}_{K}\;\text{for \ensuremath{\forall y}}\\ \Updownarrow\\ \text{the condition in \thmref{mc} is satisfied}.
\end{gather*} Even if we fix $y\in Y$, we have \begin{equation} L_{y}\in dom(T_{f}^{*})\Longleftrightarrow\left(x\longmapsto L\left(f\left(x\right),y\right)\right)\in\mathscr{H}_{K}.\label{eq:f8} \end{equation} \end{lem} \begin{proof}[Proof sketch] By definition, $L_{y}\in dom(T_{f}^{*})$ iff there exists $C_{y}<\infty$ such that \[ \left|\left\langle T_{f}\varphi,L_{y}\right\rangle _{\mathscr{H}_{L}}\right|=\left|\left(T_{f}\left(\varphi\right)\right)\left(y\right)\right|\leq C_{y}\left\Vert \varphi\right\Vert _{\mathscr{H}_{K}} \] holds for all $\varphi\in span\left\{ K_{x}\right\} _{x\in X}.$ But \begin{align} T_{f}\left(K_{x}\right)\left(y\right) & =L\left(f\left(x\right),y\right),\;\text{and}\label{eq:f9}\\ \Big|\sum_{i}c_{i}\underset{F_{f,y}\left(x_{i}\right)}{\underbrace{L\left(f\left(x_{i}\right),y\right)}}\Big|^{2} & =\left|\Big\langle\sum_{i}c_{i}K_{x_{i}},T_{f}^{*}\left(L_{y}\right)\Big\rangle_{\mathscr{H}_{K}}\right|^{2}\\ & \leq\left\Vert T_{f}^{*}\left(L_{y}\right)\right\Vert _{\mathscr{H}_{K}}^{2}\underset{<\infty}{\underbrace{\sum_{i}\sum_{j}c_{i}c_{j}K\left(x_{i},x_{j}\right)}} \end{align} and so by the basic lemma for $\mathscr{H}_{K}$ (see the proof of \thmref{mc}), we conclude that the function $F_{f,y}\in\mathscr{H}_{K}$, i.e., $\left(x\longmapsto L\left(f\left(x\right),y\right)\right)\in\mathscr{H}_{K}$. \textbf{Conclusion:} the bi-implication $\Longleftrightarrow$ in (\ref{eq:f8}) is valid. \end{proof} \subsection{Functions and Operators} In general if $T:\mathscr{H}_{1}\rightarrow\mathscr{H}_{2}$ is an operator with dense domain $\mathscr{D}\subset\mathscr{H}_{1}$, where $\mathscr{H}_{i}$, $i=1,2$, are two Hilbert spaces, we know that $T$ is closable $\Longleftrightarrow$ $T^{*}$ is densely defined, i.e., iff $dom(T^{*})$ is dense in $\mathscr{H}_{2}$ (see e.g., \cite{MR4274591}). So we apply this to $T=T_{f}$, $\mathscr{H}_{1}=\mathscr{H}_{K}$, $\mathscr{H}_{2}=\mathscr{H}_{L}$, and the condition in \thmref{mc} holds $\Longleftrightarrow$ $L_{y}\in dom(T_{f}^{*})$ $\forall y\in Y$. Since $span\left\{ L_{y}\right\} _{y\in Y}$ is dense in $\mathscr{H}_{L}$, the condition in \thmref{mc} $\Longrightarrow$ $T_{f}$ is closable. Given $K$ and $L$ as above, introduce \begin{align} \mathscr{F}_{ub}\left(K,L\right) & =\left\{ f:T_{f}\text{ is closable}\right\} ,\;\text{and}\label{eq:ff6}\\ \mathscr{F}_{b}\left(K,L\right) & =\left\{ f:T_{f}\text{ is bounded from \ensuremath{\mathscr{H}_{K}} into \ensuremath{\mathscr{H}_{L}}}\right\} .\label{eq:ff7} \end{align} In both cases, the operators $T=T_{f}$ depend on the choice of function $X\xrightarrow{\;f\;}Y$, but the two conditions (\ref{eq:ff6}) and (\ref{eq:ff7}) are different: \begin{equation} \left(T_{f}\left(K_{x}\right)\right)\left(y\right)=L\left(f\left(x\right),y\right)=\left(\left(T_{f}\right)^{*}\left(L_{y}\right)\right)\left(x\right),\label{eq:ff8} \end{equation} for all $x\in X$, and $y\in Y$. See details below: Some general comments about the operator $T_{f}:\mathscr{H}_{K}\rightarrow\mathscr{H}_{L}$. As before, $K$ and $L$ are fixed p.d. kernels, and $f:X\rightarrow Y$ is a function. We need to understand the conclusion from \thmref{mc}, i.e., when is \begin{equation} \left(X\ni x\longmapsto L\left(f\left(x\right),y\right)\right)\in\mathscr{H}_{K}\;\text{for all \ensuremath{y\in Y}}?\label{eq:cao1} \end{equation} Answer: (\ref{eq:cao1}) $\Longleftrightarrow$ $L_{y}\in dom(T_{f}^{*})$. Note that then the function in (\ref{eq:cao1}) is $T_{f}^{*}\left(L_{y}\right)$; see (\ref{eq:ff8}).
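On finite samples, the duality (\ref{eq:ff8}) reduces to identities between matrices, and is easy to sanity-check numerically. The sketch below is our illustration, not part of the source: the choices $K=L=\min$ and $f(x)=x^{2}$ are arbitrary, and the last check instantiates the composition-operator formula of \corref{b7} on a finite linear combination of kernel functions.
\begin{verbatim}
# Finite-sample check of (ff8): with K = L = min on [0,1] and f(x)=x^2,
#   (T_f K_{x_i})(y_j) = L(f(x_i), y_j) = (T_f* L_{y_j})(x_i),
# and T_f* acts as composition with f on spans of kernel functions.
import numpy as np

K = L = lambda u, v: np.minimum(u, v)
f = lambda x: x**2                      # arbitrary map [0,1] -> [0,1]

xs = np.array([0.1, 0.4, 0.9])          # sample points in X
ys = np.array([0.2, 0.5, 0.8])          # sample points in Y

# M[i, j] = L(f(x_i), y_j): the matrix of T_f on span{K_{x_i}} paired
# against the kernel functions L_{y_j}, via <L_u, L_v>_{H_L} = L(u, v).
M = L(f(xs)[:, None], ys[None, :])

# psi = sum_j c_j L_{y_j};  (T_f* psi)(x_i) = sum_j c_j M[i, j],
# which must agree with the composition (psi o f)(x_i).
c = np.array([1.0, -2.0, 0.5])
psi = lambda t: (c * L(np.asarray(t)[..., None], ys)).sum(-1)
assert np.allclose((c * M).sum(axis=1), psi(f(xs)))
print(M)
\end{verbatim}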
But note that, starting with a function $X\xrightarrow{\;f\;}Y$, there are requirements for having (\ref{eq:ff8}) yield a well defined linear operator $T_{f}$ with dense domain in $\mathscr{H}_{K}$, s.t. \begin{equation} T_{f}\left(K_{x}\right)\left(\cdot\right)=L\left(f\left(x\right),\cdot\right).\label{eq:cao9} \end{equation} The case when $T_{f}$ is bounded is easy since then $dom(T_{f}^{*})=\mathscr{H}_{L}$. Notationally, $L\left(f\left(x\right),\cdot\right)=L_{f\left(x\right)}\in\mathscr{H}_{L}$, but we must also verify that $T_{f}$ respects the null relations of the kernel, i.e., that for all finite sums: \[ \sum_{i}\sum_{j}c_{i}c_{j}K\left(x_{i},x_{j}\right)=0\Longrightarrow\sum_{i}\sum_{j}c_{i}c_{j}L\left(f\left(x_{i}\right),f\left(x_{j}\right)\right)=0. \] \subsection{The case when $K=L$} As demonstrated in \secref{kact} below, for applications to multi-level NNs, the recursive constructions simplify when the same p.d. kernel $K$ is used at each level. Hence below, we specialize to the case when $X=Y$, and $K=L$; see the setting in Theorems \ref{thm:mc} and \ref{thm:mc2}. \begin{thm} \label{thm:b9}Consider a positive definite kernel $K$ on $X\times X$, and the corresponding RKHS $\mathscr{H}_{K}$, i.e., the Hilbert completion of $\left\{ K_{x}\right\} _{x\in X}$ where $K_{x}:=K\left(\cdot,x\right)$. Fix a function $X\xrightarrow{\;f\;}X$ with the property (see \thmref{mc}) that \begin{equation} \left(X\ni x\longmapsto K\left(f\left(x\right),y\right)\right)\in\mathscr{H}_{K}\;\text{for all \ensuremath{y\in X}.}\label{eq:KL1} \end{equation} Hence, the operator $T_{f}:\mathscr{H}_{K}\rightarrow\mathscr{H}_{K}$ defined by \begin{equation} T_{f}\left(K\left(\cdot,y\right)\right):=K\left(\cdot,f\left(y\right)\right)\label{eq:KL2} \end{equation} is a densely defined operator from $\mathscr{H}_{K}$ into $\mathscr{H}_{K}$, with domain \begin{equation} \mathscr{D}_{K}:=span\left\{ K_{x}\right\} _{x\in X}.\label{eq:KL3} \end{equation} \begin{enumerate} \item \label{enu:KL1}Then the closure of $T_{f}$ (also denoted $T_{f}$) is well defined and normal, i.e., the two operators $T_{f}$ and $T_{f}^{*}$ commute. \item \label{enu:KL2}In particular, $T_{f}$ has a projection-valued spectral resolution, i.e., there is a projection-valued measure $Q\left(\cdot\right)$ on $\mathscr{B}_{\mathbb{C}}\left(=\text{the Borel subsets in \ensuremath{\mathbb{C}}}\right)$ such that \begin{equation} T_{f}=\int_{spect\left(T_{f}\right)}\lambda\,Q\left(d\lambda\right):\mathscr{D}_{K}\rightarrow\mathscr{H}_{K}.\label{eq:KL4} \end{equation} \end{enumerate} \end{thm} \begin{proof} Note that part (\ref{enu:KL2}) follows from (\ref{enu:KL1}) and the Spectral Theorem for normal operators (in the Hilbert space $\mathscr{H}_{K}$.) Part (\ref{enu:KL1}). When the operator $T_{f}^{*}$ is introduced, we get the following commutativity: \begin{figure}[H] \[ \xymatrix{\xyC{4pc}K\left(\cdot,x\right)\ar[r]\sb(0.4){T_{f}}\ar[d]_{T_{f}^{*}} & K\left(\cdot,f\left(x\right)\right)\ar[d]^{T_{f}^{*}}\in\mathscr{D}_{K}\\ K\left(f\left(\cdot\right),x\right)\ar[r]\sb(0.4){T_{f}} & K\left(f\left(\cdot\right),f\left(x\right)\right)\in\mathscr{H}_{K} } \] \caption{Commutativity of $T_{f}$ and $T_{f}^{*}$.} \end{figure} which is the desired conclusion (\ref{enu:KL1}). \end{proof} Given a function $f:X\rightarrow X$ as in \thmref{b9}.
Below we make use of the corresponding projection valued measure $Q^{\left(f\right)}$ from \thmref{b9} in order to establish an assignment from pairs $(x,y)$ of points in $X$, into systems of complex measures $\mu_{\left(x,y\right)}$ on the spectrum of $T_{f}$. In this assignment, $n$-fold composition-iteration of the function $f$ yields the $n$th moment of each of the measures $\mu_{\left(x,y\right)}$. \begin{cor} \label{cor:b10}Let $K,X$ and $f$ be as specified in \thmref{b9}, and let $Q=Q^{\left(f\right)}\left(\cdot\right)$ be the corresponding projection valued measure in (\ref{eq:KL4}). Then for every pair $x,y\in X$, we get a corresponding Borel measure \begin{equation} \mu_{x,y}^{\left(f\right)}\left(B\right)=\left\langle K_{x},Q\left(B\right)K_{y}\right\rangle _{\mathscr{H}_{K}}=\left(Q\left(B\right)\left(K_{y}\right)\right)\left(x\right), \end{equation} for all $B\in\mathscr{B}_{\mathbb{C}}$. Inductively, setting \[ f^{\circ n}=\underset{\text{\ensuremath{n} fold}}{\underbrace{f\circ\cdots\circ f}} \] we arrive at the following moment formula for the respective complex measures: \begin{equation} \mu_{f^{\circ n}\left(x\right),y}^{\left(f\right)}\left(B\right)=\int_{B}\lambda^{n}\,\mu_{x,y}^{\left(f\right)}\left(d\lambda\right). \end{equation} \end{cor} We now turn to the role of \emph{multipliers} in the RKHS $\mathscr{H}_{K}$. \begin{defn} A scalar valued function $\varphi$ on $X$ is said to be a \emph{multiplier} for $\mathscr{H}_{K}$ iff one of the two following equivalent conditions holds: \begin{enumerate} \item \label{enu:m1}The multiplication operator $M_{\varphi}$ acting on $\mathscr{H}_{K}$ via $M_{\varphi}F=\varphi F$ (via pointwise product) leaves $\mathscr{H}_{K}$ invariant. \item \label{enu:m2}We have the following identity for the adjoint operator: \begin{equation} M_{\varphi}^{*}\left(K_{x}\right)=\varphi\left(x\right)K_{x}\;\text{for all \ensuremath{x\in X}}\label{eq:KL6} \end{equation} where $K_{x}$ denotes the kernel function $K_{x}=K\left(\cdot,x\right)$. \end{enumerate} \end{defn} \begin{rem} The equivalence of (\ref{enu:m1}) and (\ref{enu:m2}) follows from standard references on RKHSs; see e.g., \cite{MR4274591}. \end{rem} \begin{thm} \label{thm:b15}Let $K$ be a fixed p.d. kernel on $X\times X$, and let $\mathscr{H}_{K}$ be the corresponding RKHS. Let $X\xrightarrow{\;f\;}X$ be a function such that (\ref{eq:KL1}) holds, i.e., $\left(X\ni x\longmapsto K\left(f\left(x\right),y\right)\right)\in\mathscr{H}_{K}$ for all $y\in X$. Then for every multiplier $\varphi$ for $\mathscr{H}_{K}$, we have: \begin{equation} M_{\left(\varphi\circ f\right)}T_{f}^{*}=T_{f}^{*}M_{\varphi}.\label{eq:KL7} \end{equation} \end{thm} \begin{proof} It is clear that the conclusion (\ref{eq:KL7}) has the following equivalent form: \begin{equation} T_{f}M_{\left(\varphi\circ f\right)}^{*}=M_{\varphi}^{*}T_{f};\label{eq:KL8} \end{equation} and below we shall prove (\ref{eq:KL8}). Let $f$ and $\varphi$ be as specified in the theorem.
We then have the following commutative diagram: \begin{figure}[H] \[ \xymatrix{\xyC{4pc}K\left(\cdot,x\right)\ar[r]\sb(0.4){M_{\left(\varphi\circ f\right)}^{*}}\ar[d]_{T_{f}} & \varphi\left(f\left(x\right)\right)K\left(\cdot,x\right)\ar[d]^{T_{f}}\\ K\left(\cdot,f\left(x\right)\right)\ar[r]\sb(0.4){M_{\varphi}^{*}} & \varphi\left(f\left(x\right)\right)K\left(\cdot,f\left(x\right)\right) } \] \caption{\label{fig:KL2} Commutative diagram corresponding to (\ref{eq:KL8}).} \end{figure} In the verification of the assertions in \figref{KL2}, we used the conclusions in Theorems \ref{thm:mc} and \ref{thm:mc2} above. \end{proof} \section{\label{sec:kact}Neural Network-activation functions from p.d. kernels} In the previous section we introduced the use of positive definite kernels, and associated generating functions for the NN algorithms. Below we make use of the kernel analysis in the design of the generating NN functions. The next definition makes use of the iterative generation of feedforward functions as in the literature, e.g., \cite{MR4185345,MR4399726,MR3457582}. The recursive steps used here in the definition and \lemref{c2} below serve as applications of our general framework from Theorems \ref{thm:mc} and \ref{thm:mc2} above. \begin{defn} Let $K$ be a positive definite kernel on $\mathbb{R}$. An $l$-layer feedforward network with kernel $K$ is a function of the form \begin{multline*} x\mapsto y_{1}=K\left(A_{1}x+b_{1},c_{1}\right)\mapsto y_{2}=K\left(A_{2}y_{1}+b_{2},c_{2}\right)\mapsto\cdots\\ \cdots\mapsto y_{l}=K\left(A_{l}y_{l-1}+b_{l},c_{l}\right)\mapsto y_{out}=K\left(\left\langle a_{l+1},y_{l}\right\rangle +b_{l+1},c_{l+1}\right) \end{multline*} where \begin{itemize} \item $x\in\mathbb{R}^{n_{0}}$; \item $A_{j}\in\mathbb{R}^{n_{j}\times n_{j-1}}$, $b_{j},c_{j}\in\mathbb{R}^{n_{j}}$ for $j=1,\cdots,l$; \item $a_{l+1}\in\mathbb{R}^{n_{l}}$, $b_{l+1},c_{l+1}\in\mathbb{R}$; \end{itemize} and for vectors $x,y\in\mathbb{R}^{m}$, \[ K\left(x,y\right):=\left[K\left(x_{1},y_{1}\right),\cdots,K\left(x_{m},y_{m}\right)\right]. \] \end{defn} \begin{lem} \label{lem:c2}Let $K\left(x,y\right)=\min\left(x,y\right)$, and let $a,b,c,d,e,f$ be constants with $a,d>0$. Then \begin{enumerate} \item $K\left(ax+b,c\right)=aK\left(x,a^{-1}\left(c-b\right)\right)+b$; \item $K\left(K\left(x,a\right),b\right)=K\left(x,K\left(a,b\right)\right)$; \item $K\left(dK\left(ax+b,c\right)+e,f\right)=daK\left(x,K\left(a^{-1}\left(c-b\right),a^{-1}\left(d^{-1}\left(f-e\right)-b\right)\right)\right)+db+e$. \end{enumerate} \end{lem} \begin{proof} ~ \begin{enumerate} \item $K\left(ax+b,c\right)=\begin{cases} ax+b & x<a^{-1}\left(c-b\right)\\ c & x\geq a^{-1}\left(c-b\right) \end{cases}$ \item Assume $a<b$; then \[ K\left(K\left(x,a\right),b\right)=\begin{cases} x & x<a\\ a & x\geq a \end{cases}=K\left(x,a\right). \] The case $a>b$ is similar.
\item This follows from (1)--(2): \begin{eqnarray*} & & K\left(dK\left(ax+b,c\right)+e,f\right)\\ & = & dK\left(K\left(ax+b,c\right),d^{-1}\left(f-e\right)\right)+e\\ & = & dK\left(aK\left(x,a^{-1}\left(c-b\right)\right)+b,d^{-1}\left(f-e\right)\right)+e\\ & = & d\left\{ aK\left(K\left(x,a^{-1}\left(c-b\right)\right),a^{-1}\left(d^{-1}\left(f-e\right)-b\right)\right)+b\right\} +e\\ & = & daK\left(K\left(x,a^{-1}\left(c-b\right)\right),a^{-1}\left(d^{-1}\left(f-e\right)-b\right)\right)+db+e\\ & = & daK\left(x,K\left(a^{-1}\left(c-b\right),a^{-1}\left(d^{-1}\left(f-e\right)-b\right)\right)\right)+db+e \end{eqnarray*} \end{enumerate} \end{proof} In what follows, all the networks are restricted to be defined on compact subsets $\Omega$ in $\mathbb{R}^{d}$, e.g., $\Omega=\left[0,1\right]^{d}$ (hypercubes). This reflects the standard normalizations used in training neural networks. In Theorems \ref{thm:hk} and \ref{thm:hmu}, we present in detail the particular \emph{relative Reproducing Kernel Hilbert Spaces} which have as their respective dipole system (see (\ref{eq:c3})) the generalized ReLu functions illustrated here in Figures \ref{fig:dp} and \ref{fig:rmu}. Here we specify the kernel $K_{1}$ for Brownian motion $W$ indexed by $\mathbb{R}$; the corresponding p.d. kernel on $\mathbb{R}\times\mathbb{R}$ is as follows: \begin{equation} K_{1}\left(x,y\right)=\begin{cases} \left|x\right|\wedge\left|y\right|=\min\left(\left|x\right|,\left|y\right|\right) & \text{if}\;xy\geq0\;\text{(so same sign)}\\ 0 & \text{if \ensuremath{xy<0,} so opposite sign.} \end{cases}\label{eq:c1} \end{equation} \begin{proof} The connection between the kernel $K_{1}$ and the Brownian motion $\left\{ W_{x}\right\} _{x\in\mathbb{R}}$ is as follows: \begin{equation} K_{1}\left(x,y\right)=\mathbb{E}\left(\left(W_{x}-W_{0}\right)\left(W_{y}-W_{0}\right)\right)\label{eq:c2} \end{equation} for all $x,y\in\mathbb{R}$. The asserted formula (\ref{eq:c1}) follows from this, combined with the independence of increments for Brownian motion. \end{proof} \begin{thm} \label{thm:hk}Let $K_{1}$ be the p.d. kernel (\ref{eq:c1}) on $\mathbb{R}\times\mathbb{R}$, with the corresponding RKHS \[ \mathscr{H}_{K_{1}}=\left\{ f:f'\in L^{2}\right\} ,\quad\left\Vert f\right\Vert _{\mathscr{H}_{K_{1}}}^{2}=\int\left|f'\right|^{2}d\lambda_{1}. \] On $\Omega=\left[0,1\right]^{d}$, consider the p.d. kernel \[ K_{d}\left(x,y\right)=K_{1}\left(x_{1},y_{1}\right)\cdots K_{1}\left(x_{d},y_{d}\right), \] so that \[ \mathscr{H}_{K_{d}}=\left\{ f:\nabla f\in L^{2}\right\} ,\quad\left\Vert f\right\Vert _{\mathscr{H}_{K_{d}}}^{2}=\int\left|\nabla f\right|^{2}d\lambda_{d}, \] where $\lambda_{d}$ denotes the $d$-dimensional Lebesgue measure. Given $f:\Omega\rightarrow\mathbb{R}$, and a fixed $c\in\mathbb{R}$, set \[ F:\Omega\rightarrow\mathbb{R},\quad F\left(x\right)=K_{1}\left(f\left(x\right),c\right). \] Then, \[ F\in\mathscr{H}_{K_{d}}\Longleftrightarrow\int_{f^{-1}\left(\left[0,c\right]\right)}\left|\nabla f\right|^{2}d\lambda_{d}<\infty. \] \end{thm} \begin{thm} \label{thm:hmu}Let $\mu$ be a non-atomic $\sigma$-finite measure on $\left(\mathbb{R},\mathscr{B}_{\mathbb{R}}\right)$, and consider Stieltjes measures $dF$ on $\left(\mathbb{R},\mathscr{B}_{\mathbb{R}}\right)$ such that \begin{equation} dF\ll\mu \end{equation} \textup{(absolutely continuous).} Then the relative RKHS $\mathscr{H}_{\mu}$ for the p.d.
kernel \begin{equation} K_{\mu}\left(A,B\right)=\mu\left(A\cap B\right) \end{equation} consists of the functions $F$ such that \begin{equation} F\left(b\right)-F\left(a\right)=\left\langle v_{a,b}^{\left(\mu\right)},F\right\rangle _{\mathscr{H}_{\mu}}\label{eq:c3} \end{equation} with \begin{equation} \int_{\mathbb{R}}\left|\frac{dF}{d\mu}\right|^{2}d\mu<\infty, \end{equation} where the relative kernels $v_{a,b}^{\left(\mu\right)}$ are as follows: \[ \frac{dv_{a,b}^{\left(\mu\right)}}{d\mu}\left(x\right)=\chi_{\left[a,b\right]}\left(x\right), \] see \figref{rmu}. \end{thm} \begin{figure}[H] \includegraphics[width=0.5\textwidth]{rmu}\caption{\label{fig:rmu}Illustration of dipole functions that reproduce differences of values of functions in the space $\mathscr{H}_{\mu}$. Compare with \figref{dp} above.} \end{figure} \begin{proof} See \cite{MR3251728,doi:https://doi.org/10.1002/9781119414421.ch2} and the details in the proof of \thmref{hk}. \end{proof} \begin{rem} The positive definite kernel $K_{\mu}$ which is ``responsible'' for the relative RKHS $\mathscr{H}_{\mu}$ is defined on $\mathscr{B}\times\mathscr{B}$, where $\mathscr{B}$ denotes the Borel $\sigma$-algebra of subsets of $\mathbb{R}$. Using \cite{doi:https://doi.org/10.1002/9781119414421.ch2}, one checks that \begin{equation} K_{\mu}\left(A,B\right):=\mu\left(A\cap B\right)\;\text{for all \ensuremath{A,B\in\mathscr{B}.}} \end{equation} We further note that $K_{\mu}$ is the covariance for the generalized $\mu$-Brownian motion $\{W_{A}^{\left(\mu\right)}\}_{A\in\mathscr{B}}$, i.e., subject to \begin{equation} \mathbb{E}\left(W_{A}^{\left(\mu\right)}W_{B}^{\left(\mu\right)}\right)=\mu\left(A\cap B\right)\;\text{for all \ensuremath{A,B}\ensuremath{\in\mathscr{B}}.} \end{equation} The corresponding Ito lemma for $W^{\left(\mu\right)}$, for twice differentiable functions $f$ on $\mathbb{R}$, reads \begin{equation} f\left(W_{A}^{\left(\mu\right)}\right)-f\left(0\right)=\int_{A}f'\left(W_{t}^{\left(\mu\right)}\right)dW_{t}^{\left(\mu\right)}+\frac{1}{2}\int_{A}f''\left(W_{t}^{\left(\mu\right)}\right)\mu\left(dt\right). \end{equation} In particular, the measure $\mu$ is the quadratic variation of $W_{t}^{\left(\mu\right)}$. \end{rem} \section{\label{sec:fwidth}Applications to fractal images} In recent decades, it has become evident that fractal features arise in diverse datasets, in time series and in image analysis, to mention two. (See e.g., \cite{MR4472252,MR4320089}.) Perhaps the best known examples of fractal features include precise symmetries of scales. Via a prescribed system of affine maps, they take the form of self-similarity. A special case includes iterated function systems (IFS), and maximal-entropy measures, also called IFS measures. The more familiar Cantor constructions, e.g., scaling by 3 or scaling by 4, are examples of IFS measures. For each of these cases, the RKHS framework we present in \secref{kact} serves as an ideal tool for such adapted NN algorithms. In particular, this may be illustrated with large numbers of images, say 5000 generated images, each one a fractal, either 2D or 3D, with random rotation, zooming, and coloring; half of them have scaling 3, the other half have scaling 4. This leads to training of a network serving to classify the images by scaling factors. In particular, the Cantor-type activation functions, or the cumulative functions of Cantor-like measures (\figref{rmu}), have vanishing derivatives over structured subintervals of $\left[0,1\right]$.
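To make this concrete, here is a small numerical sketch (our construction, not from the source): a finite-depth approximation of the classical Cantor function, i.e., the cumulative function of the scaling-3 Cantor measure. It is constant on every removed middle-third interval, so its derivative vanishes there; Cantor-like measures with other scalings, and the $\mu$-dipole activations of \figref{rmu}, can be approximated the same way.
\begin{verbatim}
# Finite-depth approximation of the Cantor function ("devil's
# staircase"), the cumulative function of the scaling-3 Cantor measure.
# The residual s*x term interpolates linearly at the bottom level, so
# the approximation is exact on all plateaus reached within `depth`.
import numpy as np

def cantor(x, depth=30):
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    y = np.zeros_like(x)                 # accumulated staircase value
    s = np.ones_like(x)                  # remaining dyadic scale
    for _ in range(depth):
        left = x < 1/3
        mid = (x >= 1/3) & (x <= 2/3)
        y = y + np.where(left, 0.0, s / 2)   # middle and right thirds
        s = np.where(mid, 0.0, s / 2)        # plateau: stop refining
        x = np.where(left, 3 * x, np.where(x > 2/3, 3 * x - 2, x))
    return y + s * x

print(cantor([0.0, 1/3, 0.5, 2/3, 1.0]))   # [0.  0.5 0.5 0.5 1. ]
\end{verbatim}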
This feature may lead to several benefits in neural networks. For example, such functions can introduce sparsity and regularization into the network, which improves its generalization performance and reduces the risk of overfitting. Additionally, these functions can make the network more robust to noise and other perturbations in the input data, which improves its performance on unseen data. Furthermore, activation functions whose derivative is zero over subintervals allow the network to learn more complex and non-linear patterns in the data. This can improve the expressiveness and flexibility of the network, making it more accurate and effective for a wider range of tasks. Additionally, these functions can make the network easier to optimize and train, since the gradient of the activation function is well-structured, thus reducing the computational complexity and improving the convergence rate of the training algorithm. More generally, a neural network with a custom activation function (see e.g. the dipoles in \figref{dp}) uses a non-standard activation function with adjustable parameters that can be trained and optimized during the learning process. This allows the network to learn more complex and non-linear relationships between the input and output data, which can improve the accuracy of the network's predictions. The use of a custom activation function with trainable parameters can be useful in a variety of applications, such as image recognition, natural language processing, and time series forecasting. It can also be used to improve the performance of other machine learning algorithms, such as decision trees and support vector machines (see e.g., \cite{zbMATH01669138,MR4329806,MR2849119,MR3108145,MR2274418,MR2246374}). Below we apply a custom activation function in a ConvNet to classify fractal images. In this setting, the activation function should be designed to capture the complex, self-similar patterns that are characteristic of the fractal images. The network is trained on a dataset of fractal images with corresponding labels. It is optimized using a gradient-based algorithm, such as stochastic gradient descent. Once trained, the network can be used to classify new fractal images and predict their classes with high accuracy. In the example below, a dataset\footnote{Available at https://www.kaggle.com/dsv/4791103.} of 15,000 Cantor-like 3D images is generated in Mathematica. Parameters of each image, such as zoom factor, viewing angle, and scaling factor, are uniformly distributed. A sample of the images is shown in \figref{sample}. The images are split into three categories according to their scaling factors, labeled as class ``1'', ``2'' and ``3'', respectively. The entire dataset is divided into a training set (size = 10,000), a validation set (size = 2,500) and a test set (size = 2,500). The task is to train a ternary classifier using the training set, along with the validation set (for model selection), whose performance is then tested on the test images. In the experiment, a small ConvNet is implemented in Keras; its architecture is shown in \figref{cnn}, and a schematic version is sketched below. The loss and accuracy of the model are recorded for 20 epochs (\figref{compare}). In comparison with a standard Relu network (\figref{relu-act}) of the same architecture, the use of Cantor-like activation is better at reducing overfitting (\figref{cantor-act}); it is expected that with systematic hyperparameter tuning, such a network has the potential to outperform Relu networks in certain applications.
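The experimental setup just described can be reproduced along the following lines. The snippet below is our schematic reconstruction in Keras, not the authors' code: the layer sizes follow \figref{cnn}, while the particular activation (a finite-depth Cantor staircase applied after a sigmoid squashing), the depth, and the optimizer and loss settings are illustrative assumptions.
\begin{verbatim}
# Schematic Keras ConvNet for the ternary fractal-image classifier,
# with a Cantor-like custom activation.  Layer sizes follow the
# architecture table; activation details are illustrative choices.
import tensorflow as tf
from tensorflow.keras import layers, models

def cantor_act(x, depth=8):
    # Squash to [0,1], then apply a finite-depth Cantor staircase.
    # The residual s*t term keeps a nonzero slope off the plateaus,
    # so gradients can still flow during training.
    t = tf.sigmoid(x)
    y = tf.zeros_like(t)
    s = tf.ones_like(t)
    for _ in range(depth):
        left = t < 1/3
        mid = (t >= 1/3) & (t <= 2/3)
        y = y + tf.where(left, tf.zeros_like(s), s / 2)
        s = tf.where(mid, tf.zeros_like(s), s / 2)
        t = tf.where(left, 3 * t, tf.where(t > 2/3, 3 * t - 2, t))
    return y + s * t

model = models.Sequential([
    tf.keras.Input(shape=(128, 128, 3)),
    layers.Rescaling(1.0 / 255),
    layers.Conv2D(16, 3, activation=cantor_act),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation=cantor_act),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation=cantor_act),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation=cantor_act),
    layers.Flatten(),
    layers.Dense(3, activation="softmax"),  # classes "1", "2", "3"
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
\end{verbatim}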
\begin{figure}[H] \begin{tabular}[t]{>{\centering}p{0.1\columnwidth}>{\centering}m{0.25\columnwidth}>{\centering}m{0.25\columnwidth}>{\centering}m{0.25\columnwidth}} Class 1 & \includegraphics[width=0.25\textwidth]{1-1} & \includegraphics[width=0.25\textwidth]{1-2} & \includegraphics[width=0.25\textwidth]{1-3}\tabularnewline Class 2 & \includegraphics[width=0.25\textwidth]{2-1} & \includegraphics[width=0.25\textwidth]{2-2} & \includegraphics[width=0.25\textwidth]{2-3}\tabularnewline Class 3 & \includegraphics[width=0.25\textwidth]{3-1} & \includegraphics[width=0.25\textwidth]{3-2} & \includegraphics[width=0.25\textwidth]{3-3}\tabularnewline \end{tabular}\caption{\label{fig:sample} A random sample of the dataset of 3D Cantor images.} \end{figure} \begin{figure}[H] \begin{tabular}{>{\raggedright}p{0.5\textwidth}>{\raggedright}p{0.25\textwidth}>{\raggedright}p{0.15\textwidth}} Model: \textquotedbl Cantor-activation\textquotedbl{} & & \tabularnewline \hline Layer (type) & Output Shape & Param \#\tabularnewline \hline input\_1 (InputLayer) & (None, 128, 128, 3) & 0\tabularnewline rescaling (Rescaling) & (None, 128, 128, 3) & 0\tabularnewline conv2d (Conv2D) & (None, 126, 126, 16) & 448\tabularnewline max\_pooling2d (MaxPooling2D) & (None, 63, 63, 16) & 0\tabularnewline conv2d\_1 (Conv2D) & (None, 61, 61, 32) & 4640\tabularnewline max\_pooling2d\_1 (MaxPooling2D) & (None, 30, 30, 32) & 0\tabularnewline conv2d\_2 (Conv2D) & (None, 28, 28, 64) & 18496\tabularnewline max\_pooling2d\_2 (MaxPooling2D) & (None, 14, 14, 64) & 0\tabularnewline conv2d\_3 (Conv2D) & (None, 12, 12, 128) & 73856\tabularnewline flatten (Flatten) & (None, 18432) & 0 \tabularnewline dense (Dense) & (None, 3) & 55299\tabularnewline \hline \end{tabular} \caption{\label{fig:cnn} A ConvNet for fractal image classification.} \end{figure} \begin{figure}[H] \subfloat[\label{fig:relu-act}Relu activation]{% \begin{tabular}{c} \includegraphics[width=0.45\textwidth]{relu1}\tabularnewline \includegraphics[width=0.45\textwidth]{relu2}\tabularnewline \end{tabular} }\hfill{}\subfloat[\label{fig:cantor-act}Cantor-like activation]{% \begin{tabular}{c} \includegraphics[width=0.45\textwidth]{cantor322-1}\tabularnewline \includegraphics[width=0.45\textwidth]{cantor322-2}\tabularnewline \end{tabular} } \caption{\label{fig:compare} Training loss and validation loss, illustrated with use of a ConvNet.} \end{figure} \pagebreak{} \bibliographystyle{amsalpha}
{ "redpajama_set_name": "RedPajamaArXiv" }
5,081
{ "redpajama_set_name": "RedPajamaC4" }
932
Do you like Sambazon Juice? Get it for FREE at Whole Foods with the new $2/1 coupon. You can also get a FREE coupon if you 'like' them on Facebook, and there are reports that Whole Foods will accept the coupon.

Comments:

coupons.com still offers $2.00 off at zip code 33133 under beverages. Just printed 2 more. The Whole Foods sale might be off – I'm not sure – but maybe someone else has this juice on sale! If you know of any sales, let me know!

Okay, the link is working again and you are right... it's now $1.50. But I changed the zip code to 97225 and that is at $2.00! Better hurry and print before they change it!

The coupon printed says it's for $1.50, not $2. Is that true for everyone?

Thank you Rebecca! I'm going to turn on my 2nd computer and print 2 more and make the trip!

That's a sale price. They're on sale 2 for $4. I believe their regular price is $2.99. I'm not sure when the sale ends, though I know it's been on sale for a few days now, and I think their new sales start on Wed or Thurs depending on the area. I just saw a picture of the actual shelf sticker, and it looks like it says the sale is good until 1/31/12. It's blurry, but that's what I think it says.
{ "redpajama_set_name": "RedPajamaC4" }
6,364
The LIU Sharks women's basketball team represents Long Island University in NCAA Division I basketball competition. They play their home games at their Brooklyn campus in the Steinberg Wellness Center and are members of the Northeast Conference. Their current head coach is Rene Haynes, who was hired in April 2019. The LIU Sharks are the result of the July 1, 2019 unification of the athletic departments which had previously represented two separate campuses of LIU: the Division I LIU Brooklyn Blackbirds and the Division II LIU Post Pioneers.

NCAA tournament results

As the Blackbirds, Long Island went to the NCAA Tournament once; their record is 0-1.

References

External links

Official website
{ "redpajama_set_name": "RedPajamaWikipedia" }
9,686
\section*{Abstract}
In this project, the Quadrature Phase Shift Keying (QPSK) digital modulation scheme was implemented using Software Defined Radios (SDRs). For this system, a deep learning based detector was proposed and implemented alongside the conventional method. The implementation was successfully achieved for both the conventional and deep learning based data detection techniques, despite the challenges faced. The results show that the proposed deep learning method is able to outperform the conventional detector. The code of this project is made publicly accessible at \url{https://github.com/ABadi13/QPSK_SDR_DNN_Detector}.

\section{Introduction}
\subsection{Objectives}
To implement, experimentally demonstrate and discuss the QPSK digital modulation scheme; furthermore, to utilize deep learning techniques to design and develop a data detection technique for the QPSK modulation scheme.

\subsection{Background}
As the world's technology becomes ever more sophisticated and powerful, techniques such as Artificial Intelligence (AI), once thought of as a pipe dream, have now been applied to almost every field imaginable. The unique abilities and characteristics of Artificial Neural Networks (ANNs) have made this possible, most notably their reduced complexity when compared to conventional methods of the same performance. This reduced complexity enables not only faster computation, but also lower energy consumption and computational requirements. Furthermore, ANNs can provide better awareness of their operational environments, with the ability to respond to imperfections that cannot be accounted for using conventional methods.

ANNs and Deep Neural Networks (DNNs) have been undergoing substantial advancements in the field of wireless communication, replacing conventional methods in many applications and opening up opportunities in many others. Their reduced complexities allow for cheaper equipment, reduced latency, reduced power consumption, etc. Their greater awareness enables end-to-end learning, allowing them to jointly perform multiple functions, even further simplifying the system and reducing components \cite{web:ai}.

Channel estimation, in particular, has benefited greatly from DNNs. For instance, the authors in \cite{Ye2018} proposed a DNN method for joint channel estimation and symbol detection for OFDM systems. The results were promising, with the DNN performing comparably to the conventional Minimum Mean Square Error (MMSE) channel estimation technique and outperforming it in low-overhead situations. Additionally, the authors in \cite{Xiang2020} proposed a DNN pair for joint channel estimation and signal detection for the developing spatial modulation scheme; it was concluded that the proposed DNN pair outperforms conventional methods in highly dynamic channel conditions. Instances such as these, and many others, show the potential of deep learning in channel estimation.

Cognitive radio is built upon making intelligent decisions to improve the overall utilization of the frequency spectrum. It is a method that allows an unlicensed user to locate and utilize unused licensed spectrum, while also ensuring no interference is caused to the licensed user. One challenge faced especially in cognitive radio is Automatic Modulation Classification (AMC); in \cite{Meng2018} a Convolutional Neural Network (CNN) based AMC approach is proposed.
The simulation results showed that the proposed CNN based approach was able to outperform the feature based approach and closely match the much more complex maximum likelihood approach, while maintaining a significantly lower computational complexity.

There are many downsides to using DNNs in wireless communication; one important drawback is the DNN's inherent need for large training datasets. The increased number of layers and neurons needs more data to learn the considered system without overfitting. Acquiring more data is not much of an issue in other fields; in wireless communication, however, it is much more difficult. As opposed to regular ANNs, where features are typically extracted manually from the dataset, DNNs typically extract features internally, meaning that for DNNs there is a much larger emphasis on the data used. To cope with the need for large amounts of data, mathematical model-based channels have been utilized to simulate real-world environments. Given full Channel State Information (CSI), the model can estimate the channel very effectively. However, the CSI is not always fully available, and some channels can be very dynamic in nature. If the data used to train the DNN is not representative of the real-world channel, then a significant loss in performance is expected.

Software Defined Radios (SDRs) provide the ability to transmit a signal generated and processed in software, or to receive a signal and then manipulate it in software. This device practically gives the user the freedom to create any conceivable system, within the realm of the SDR's capability. Real-world data can easily be produced using this device, enabling the direct implementation of ANNs and DNNs. Developments in SDR technology have been rather rapid in recent years: several low-cost consumer-focused SDRs have been released, such as the HackRF One, RTL-SDR, BladeRF, etc. With such success for SDRs, more development and competition is bound to happen in the SDR market, paving the way for technological advancements and accessibility.

Academic research utilizing SDR technology has seen significant success. For instance, the authors in \cite{Zhang2010} implemented a cooperative communication scheme using SDR technology; the results show that the cooperative approach achieves significant performance enhancements. Furthermore, the authors in \cite{Ru2009} presented a digitally enhanced SDR receiver robust to out-of-band interference. More recently, ANNs have been implemented using SDR technology; instances such as \cite{Jagannath2018} and \cite{Gecgel2019} show the potential of utilizing SDR technology for real-world implementation of wireless communication techniques. The authors in \cite{Jagannath2018} designed an ANN based AMC that is able to perform over a wide range of Signal to Noise Ratios (SNRs) and implemented it on an SDR test-bed, proving the feasibility of the proposed technique, while in \cite{Gecgel2019} the authors implement two ANNs to jointly determine the presence of a jammer as well as its characteristics; the results showed that the proposed methods can detect jamming with an $85\%$ accuracy. Despite instances such as these and many others, and with deep learning techniques becoming more relevant than ever in the field of wireless communication, research on the real-world implementation of ANNs leaves much to be desired.
\section{Proposed System Architecture}
This section outlines and discusses the system architecture and algorithms necessary to modulate and demodulate a randomly generated bitstream, using a QPSK digital modulation scheme with a DNN based data detector, implemented using SDRs.

\subsection{QPSK System Architecture}
The QPSK system is illustrated in the form of a block diagram in Fig. \ref{fig:sdr_block}. It is made up of two main sections, the transmitter and the receiver, separated by a wireless channel. Both the transmit and receive SDRs were connected to the same computer; their signal processing chains, however, remain entirely independent of each other, with no feedback loop.

\begin{figure} \centering \includegraphics[width=\linewidth]{Figs/sdr_block.png} \caption{QPSK system utilizing SDR with a DNN based detector.} \label{fig:sdr_block} \end{figure}

At the transmitter side the GNU Radio Companion software is used to generate a random bit stream, which is then QPSK baseband modulated. The modulated signal is then sent to the \emph{HackRF} SDR to be transmitted over the wireless Radio Frequency (RF) channel. The implemented channel was a short, clear Line Of Sight (LOS) channel, about a meter in length; the reason for this choice was the SDR's incapability of high power transmission. The sample rate for both SDRs was 2 MHz (samples per second), the lowest common sample rate both SDRs can operate on. For synchronization and channel estimation, there is a pilot in every frame; this pilot is omitted when carrying out any performance evaluation. The pilot structure must also remain consistent throughout the entire process.

On the receiver side, the \emph{RTL-SDR} receives the signal sent by the HackRF and directs it to the GNU Radio Companion software. The received signal is then filtered using an LPF to remove any unnecessary noise. It is then synchronized using a Polyphase Clock Synchronizer and a Costas loop. Once the signal is synchronized, it is stored so that it can be decoded using both the conventional method and the proposed DNN. Using the conventional method helps form a baseline for the performance results of the DNN.

\subsection{Algorithm for Creating the Datasets}
\begin{figure} \centering \includegraphics{Figs/sdr_flow.png} \caption{Flowchart outlining the algorithm used for producing the datasets for training and testing.} \label{fig:sdr_flow} \end{figure}

The flowchart shown in Fig. \ref{fig:sdr_flow} illustrates the algorithm this system uses to create datasets and evaluate them using the conventional and the DNN methods. In order to adequately train DNNs with supervised learning techniques, datasets with accurate labels need to be made. These datasets contain the original bitstream, the received bitstream decoded using the conventional method, and the synchronized received signal. The flowchart starts off as explained in the previous section: a randomly generated bitstream is modulated, transmitted with the HackRF, received with the RTL-SDR, and then synchronized. After this, the synchronized signal is decoded using conventional methods. Using the decoded bitstream, a pilot search algorithm is run; if no pilots are found, an error has occurred and the process of transmission is repeated. If the pilots are found, however, another algorithm is run in order to find the delay of the decoded bitstream. If the delay cannot be found, then there is an error and the transmission is repeated.
If both the phase and the delay can be found, then the training and testing datasets can be created. The training dataset is fed to the DNN to be trained; the testing dataset is then evaluated using both the conventional and the proposed DNN methods. It is important to note here that the deployment of a trained DNN is not dependent on the conventional system. It is, however, dependent on the conventional system during training, but only to determine the system delay, in order to accurately label the input samples for supervised learning; it does not use the bitstream decoded by the conventional method.

\subsection{GNU Radio Companion Flowchart}
The signal processing is carried out through the open-source software GNU Radio Companion. This software uses blocks that perform basic operations; when connected together, these blocks make up a flowgraph, like that shown in Fig. \ref{fig:gnu_flow}. With the flexibility this software allows, a QPSK transmitter and receiver were assembled; their corresponding sections are shown in Fig. \ref{fig:gnu_flow}.

\begin{sidewaysfigure}[htbp] \centering \includegraphics[trim= 0cm 0cm 0cm 0cm, clip, width=\linewidth]{Figs/gnu_flow.pdf} \caption{GNU Radio Companion flowchart of the considered QPSK system.} \label{fig:gnu_flow} \end{sidewaysfigure}

Firstly, the random message signal is generated, and for synchronization a pilot is then inserted once every frame. The random bitstream is then QPSK modulated at the baseband using the flexible Constellation Modulator, which modulates according to its constellation object parameter. This block also uses a Root Raised Cosine (RRC) pulse shaping filter to reduce Inter-Symbol Interference (ISI). The modulated signal is then sent to the HackRF SDR, which is supported by the OsmoSDR standard. The chosen sample rate was $2$ MHz, with $8$ samples/symbol, in order to reduce the signal processing load on the computer. The operating frequency was set to $1.033$ GHz for no reason other than that the band was unoccupied.

After the signal passes through the wireless channel, it is received by the RTL-SDR and sent back to the computer for processing. An LPF is used in order to remove any unnecessary noise from the received signal. After this, the signal is timing synchronized with the Polyphase Clock Sync. It is equalized using the Constant Modulus Adaptive (CMA) equalizer, which is used for constant-amplitude signals (as the name suggests), such as the QPSK signal used in this system. Carrier synchronization is achieved using the Costas Loop: the carrier frequency is extracted and the signal is reverted back to the baseband, eliminating any frequency offset experienced by the signal. Finally, after the signal has been synchronized, it can be demapped. This signal was then stored and demapped using both the conventional method, utilizing the Constellation Decoder block, and the proposed DNN discussed in the following section. The Unpack blocks are used in order to convert the symbols into bitstreams, so that they can be more easily manipulated when creating the datasets.

\subsection{DNN Architecture}
The proposed DNN system takes the place of the Constellation Decoder. By utilizing the synchronization pilots, the DNN is able to perform channel estimation and potentially improve on the Bit Error Rate (BER) of the conventional method. The proposed DNN performs the decoding on a frame-by-frame basis, meaning that it takes the entire frame into consideration when performing the decoding.
Its inputs are all the samples of the synchronized signal in a frame, separated into their real and imaginary parts. The hidden layers utilize the \emph{ReLU} activation function. The output layer uses the \emph{Sigmoid} activation function and attempts to decode the entire frame as bits, excluding any pilots. Due to the inherent nature of the DNN requiring large amounts of data for training, the network is deployed in two separate stages, namely \emph{training} and \emph{testing}. The training stage consists of producing the necessary datasets, which is achieved by transmitting a modulated signal known a priori and training the network accordingly using supervised learning. After the training stage has been completed, the testing stage is carried out using the produced dataset; in this stage the received modulated signal is demodulated on a frame-by-frame basis. \label{ssec:dnn_arc}

\section{Methodology and Challenges}
In this section, the methods used and the challenges encountered during the implementation of the proposed DNN based data detector for the QPSK digital modulation scheme are outlined and discussed.

\subsection{Deep Learning in Wireless Communications}
Deep learning has seen a substantial amount of research regarding its simulation and how it can be introduced in many applications; however, real-world implementation of the suggested machine learning and deep learning techniques is not as common as one would assume. Implementing deep learning presents many challenges, and potentially enables further advances both in research and in the techniques used to implement it. The Python TensorFlow environment was used to train and test the DNN, due to its robustness, speed, and flexibility.

\subsection{The SDR Platform}
Until very recently, implementation using SDRs could only be accomplished with the high-cost Universal Software Radio Peripheral (USRP) devices. Fortunately, low-cost SDRs such as the HackRF One, BladeRF and RTL-SDR have recently seen a huge rise in popularity, finding their way into the hands not only of researchers and academics, but also of enthusiasts and consumers. As stated earlier, SDRs allow the signal processing to be performed in software, opening up many avenues for flexibility in design and configuration. One such option is the ability to implement machine learning systems, which would otherwise need very specific equipment; when using the SDR platform, all that is required is a computer alongside the SDRs.

The SDRs chosen for this system were the \emph{HackRF One} as the transmitter and the \emph{RTL-SDR} as the receiver. The HackRF One is a half-duplex transceiver; it is a low-cost open-source hardware project, which explains its popularity in the SDR community. The RTL-SDR, on the other hand, is only a receiver; due to its low price, it is also very popular. Together, they allow for one-way transmission at a reasonable price. When implementing custom systems for the SDR platform, there are few software options to choose from; the most popular is the open-source GNU Radio software, due to its wide compatibility with operating systems and SDRs. It is primarily written in Python and C++, and is flexible enough to allow the user to write their own signal processing blocks.
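Before turning to the practical challenges, it is useful to see how compact the detector of Section \ref{ssec:dnn_arc} is in TensorFlow/Keras. The following is a minimal sketch rather than the exact training script: the layer sizes ($8$, $100$, $50$, $20$ and $6$) are those reported in the implementation results, while the choice of optimizer and loss, the number of epochs and the variable names are illustrative assumptions.

\begin{verbatim}
from tensorflow import keras

# Inputs: real and imaginary parts of the 4 synchronized symbols of a
# frame (8 values). Outputs: the 6 data bits of the frame (the pilot
# symbol is excluded from the labels).
model = keras.Sequential([
    keras.layers.Input(shape=(8,)),
    keras.layers.Dense(100, activation="relu"),
    keras.layers.Dense(50, activation="relu"),
    keras.layers.Dense(20, activation="relu"),
    keras.layers.Dense(6, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# X_train: (n_frames, 8) array of synchronized samples;
# y_train: (n_frames, 6) array of the true bits of each frame.
# model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=20)
# hard_bits = (model.predict(X_test) > 0.5).astype(int)
\end{verbatim}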
\subsection{Synchronization}
Synchronization is important for any wireless communication system; it helps ensure that the received data actually has meaning and can be utilized, as otherwise it may be difficult to make sense of it. Throughout this project, many types of synchronization were encountered and studied, such as timing and carrier synchronization for analog signals, and data synchronization for the decoded digital signal. Despite it being such an important concept, it is difficult to find texts that truly address these challenges, and how to overcome them, when performing a hardware implementation. This created a few difficulties that were eventually overcome with research and rigorous trial and error.

The timing and carrier synchronization used in this system is the same as that in \cite{web:qpsk_tut}, where it is also thoroughly explained; the system in \cite{web:qpsk_tut} in turn became a starting point for the system in this project. Due to the low-cost RTL-SDR, there was a significant frequency offset. To overcome this issue the system was calibrated beforehand, to ensure that the frequency is aligned as well as possible. Timing synchronization was mainly addressed by \cite{web:qpsk_tut}. However, it takes a short period to fully perform the synchronization and lock onto a phase and the carrier frequency. During this period the signal is not synchronized and produces incorrect results when decoded; truncation of this period is thus necessary to ensure that the decoded bitstream is synchronized.

Phase ambiguity, although easy to solve, presented many challenges. Classically, the phase ambiguity is addressed by sending a preamble known a priori, allowing the bits to be remapped to match the preamble. This works quite well provided that the phase lock is consistent; if the phase lock may change during operation, however, pilots (overhead) are required to address this issue, which is called framing synchronization. When the system was first implemented, only a preamble was used, and since the phase lock was inconsistent, especially at low SNR values, this method was not able to properly address the issue. For this reason, pilots were inserted into the bitstream at specific intervals, in order to assist with this synchronization issue.

One of the biggest challenges faced when implementing the SDR system was the delay encountered when transmitting and receiving symbols. Typically, for simple QPSK systems such as that in \cite{web:qpsk_tut}, the delay is expected to be very small and consistent given certain channel conditions; thus, the search for the delay can fairly easily be completed manually. However, for the implemented SDR system the delay was between 0.1 and 0.3 seconds, and considering that the sampling rate was $2$ MHz, a manual search was simply not feasible. To overcome this issue, the transmitted and received (decoded using the conventional decoder) bitstreams needed to be studied closely, in order to understand what was sent and to ensure that the data was truly received without error. A program was then written to iteratively delay the \emph{received} data until it matched the \emph{original} data; this was repeated for several situations, and it was concluded that the delay is inconsistent and needs to be found for every dataset created. In summary, whenever a new dataset is to be created, some of the data at the beginning must be \emph{truncated}, the \emph{phase ambiguity} must be solved, and finally the \emph{delay} must be found.
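The delay search itself reduces to a few lines of Python. The sketch below is a simplified version of the program described above (the function name, the probe-window length and the error tolerance are illustrative choices); it assumes the original and the conventionally decoded bitstreams are available as NumPy arrays, after the initial truncation and the phase-ambiguity correction have already been applied.

\begin{verbatim}
import numpy as np

def find_delay(tx_bits, rx_bits, max_delay, probe=2000, tol=0.05):
    """Iteratively delay the received bits until they match the
    original ones; return the delay (in bits) with the smallest
    mismatch over a probe window, or None if none is good enough."""
    best_delay, best_err = None, 1.0
    for d in range(max_delay):
        window = min(probe, len(rx_bits) - d, len(tx_bits))
        if window <= 0:
            break
        err = np.mean(rx_bits[d:d + window] != tx_bits[:window])
        if err < best_err:
            best_delay, best_err = d, err
    return best_delay if best_err < tol else None
\end{verbatim}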
Using all these techniques, datasets for training and testing the DNN were produced.

\section{Implementation Results}
In this section the results of the implementation are shown. The frequency spectrum of the received signal is shown and compared to the transmitted signal. Then the Bit Error Rate (BER) performance of the QPSK system using the conventional and the proposed DNN detection methods is evaluated and compared. Throughout the experiment, the channel was kept stationary at a length of approximately $1$ m, with a clear LOS between the transmitter and the receiver. The sample rate was set to $2$ MHz and the number of samples per symbol was set to $8$, in order to reduce the computation requirements of the system. A frame size of $4$ symbols was also considered. The proposed DNN uses the framework discussed in Section \ref{ssec:dnn_arc}, with five layers of sizes $8$, $100$, $50$, $20$, and $6$, respectively.

\subsection{The Transmitted and The Received Signal}
\begin{figure} \begin{subfigure}{\textwidth} \includegraphics[width=\textwidth]{Figs/Tx_Rx_15__4.jpg} \caption{All frequency components received by the SDR.} \label{fig:txrx} \end{subfigure} \begin{subfigure}{\textwidth} \includegraphics[width=\textwidth]{Figs/Tx_Rx_zoom_15__4.jpg} \caption{Zoomed in for a better view of the modulated signal.} \label{fig:txrx_zoom} \end{subfigure} \caption{Frequency spectrum of the transmitted signal and the received signal after the LPF.} \end{figure}

Figures \ref{fig:txrx} and \ref{fig:txrx_zoom} show the frequency spectrum of the transmitted signal along with the received signal after being low pass filtered. It is recognized from the transmitted signal that there is a steep dropoff in the magnitude of the frequency components. This is due to the RRC pulse shaping filter, which is also used for timing synchronization for optimal sampling, reducing ISI. The bandwidth of this signal can be increased by increasing the Excess BW parameter of the Constellation Modulator block. Ideally, there would be no frequency components beyond the bandwidth of the signal; however, since the signal is processed digitally, finite floating-point precision introduces high frequency components, which are mostly negligible. These figures show how well the signal was received, with very little distortion, owing to the short LOS channel. Only a slight frequency offset is noticed, due to the SDRs being calibrated beforehand. In the figures, five impulses are also observed in both the transmitted and received signals. These impulses are a result of the pilot symbol being sent so frequently, increasing the power received from the pilots relative to the rest of the symbols.

\begin{figure} \begin{subfigure}{\textwidth} \includegraphics{Figs/Rx_15__4.jpg} \caption{All frequency components received by the SDR.} \label{fig:rx} \end{subfigure} \begin{subfigure}{\textwidth} \includegraphics{Figs/Rx_zoom_15__4.jpg} \caption{Zoomed in for a better view of the modulated signal.} \label{fig:rx_zoom} \end{subfigure} \caption{Frequency spectrum of the received signal before and after being low pass filtered.} \end{figure}

Shown in Fig. \ref{fig:rx} is the LPF output compared to the received signal. Below around 200 kHz the two signals very closely resemble each other; above this frequency, however, heavy attenuation from the LPF eliminates high frequency components, reducing noise and allowing for more accurate demodulation.
\subsection{Constellation Diagram}
\begin{figure} \centering \includegraphics[width=0.8\textwidth]{Figs/Const_15__4.png} \caption{Constellation diagram of the received signal.} \label{fig:const} \end{figure}

The constellation diagram at an SNR of approximately 19 dB is shown in Fig. \ref{fig:const}; it shows the in-phase and quadrature components of the synchronized signal. From this constellation diagram, the clusters can be viewed clearly and separated very easily. Careful inspection also reveals a nonuniformity in the distribution of received symbols among the four quadrants. In other words, one of the symbols is received far more frequently than the others; this is quite clearly due to the pilot symbol, which takes up a quarter of the frame time for the considered system.

\subsection{BER Performance}
\begin{figure}[!htbp] \centering \includegraphics[width=0.8\textwidth]{Figs/ber__4.png} \caption{BER performance of the proposed DNN data detector.} \label{fig:ber} \end{figure}

The BER performance of the conventional and the proposed DNN approaches is shown in Fig. \ref{fig:ber}. The BER performance is measured against variation in the SNR value of the signal. The SNR is measured by finding the difference between the noise floor and the average power of the received signal, which can then be adjusted by altering the RF gain value of the RTL-SDR:
\begin{equation} SNR = P_{R}+G_{R}-N \label{eq:snr} \end{equation}
where $P_{R}$ is the received average power with no receiver gain, $G_R$ is the gain of the receiver, and $N$ is the noise floor. From Fig. \ref{fig:rx_zoom} the noise floor was measured to be approximately $N=-68$ dB, and the average power of the received signal is approximately $P_R+G_R=-49$ dB with the gain of the RTL-SDR set to $G_R=15$ dB. Therefore, assuming that both the noise floor and the received power without receiver gain remain constant, the SNR can be approximately measured directly from the gain of the receiver as
\begin{equation} SNR = 4 \text{ dB} + G_R \end{equation}
It is observed from this figure that the proposed DNN clearly outperforms the conventional approach, proving the potential of DNNs to perform well over real-world channels. The figure also shows that the smooth curves seen in simulation may not be as easy to obtain over real-world channels, further proving the need for implementation studies.

\section{Conclusions and Future Work}
In this project, a DNN based data detection method for the QPSK digital modulation scheme was designed and implemented using the HackRF One and the RTL-SDR. The implementation results show that the signal was received well, with distinctive clusters for each symbol. The BER performance was also evaluated; it showed that the DNN based detector outperforms the conventional method for the considered system. It was also recognized that implementation presents numerous challenges not encountered when performing simulations; fortunately, these challenges were overcome, and they are discussed in this report. Future work will expand this methodology to different digital modulation schemes, such as Amplitude Shift Keying (ASK) and Frequency Shift Keying (FSK). Spatial modulation can also be implemented; however, doing so would require multiple transmit SDRs. Furthermore, cognitive radio applications present numerous avenues to explore, such as modulation classification, reconfigurable smart transceivers, spectrum sensing, etc.
The flexibility of SDRs and the GNU Radio software allows for endless applications.

\section{Acknowledgement}
This work was partially funded by the Libyan Foundation for Projects Development.
{ "redpajama_set_name": "RedPajamaArXiv" }
8,955
{"url":"http:\/\/www.trillionbyte.com\/how-to-teleport\/","text":"# How to teleport?\n\nAfter some thought, here is what I came up with to get you to feel the difference between a classical and a quantum computer. I borrowed this idea from a book called \"Flatland\" by\u00a0Edwin Abbott Abbott. Although this book had nothing to do with quantum computing, I think it will work for us here beautifully. Imagine yourself (of course you can't) to be living in a 2 dimensional flat land. Your society does not have a concept of 3 Dimensional space intuitively (the society must have mathematicians who could cook up the properties of 3D space, but not the plebeians). How would you live in such a land? You can't really turn around here along an azimuthal axis, you can only look up or down or left or right (imagine your land is constrained on XZ plane). Now imagine the the computations that your society can do, can only be done on this azimuthal axis. This means that you can only turn your bits instantaneously from perhaps up, and the down. You can not really have in-between turns, because your dimensions are restricted. Would you believe if I tell you now, that our present form of computers, which of course exist in 3D, still work on the same principles as a \u00a02D or even a 1D flatlander's computer would work? The problem with modern day computers is we only have 1s and 0s. Ups or Downs. We have yet not utilized the full power of a classical computer, which is to have the ability to create those 'in-between' states with our bits. The reason we have not been able to create such states is because classical physics is deterministic and classical computers are based on the mathematical idea of Church-Turing hypothesis (I shall leave this discussion for later). What do you mean when I say in-between states? It is either on, or off right? That is where the problem lies. Humans like Richard Feynman et.al. realized that one could use the power quantum superpositions to create those lovely in-between states using generally spin $\\frac{1}{2}$ fermions. For all intensive purposes here, we shall consider two state systems, which could be composed of single particle fermions or composite particles (atoms) which behave like fermions. We will abstract away the exact system and imagine the two states to be $|1>$ and\u00a0$|0>$. A qubit, or a quantum bit is composed of a superposition of these states and can be imagined to be residing on something called a Bloch-sphere\n\n$|\\psi> = e^{i\\gamma}(cos(\\frac{\\theta}{2}) + e^{i\\phi}sin(\\frac{\\theta}{2}))$\n\nIn general a qubit is in a superposition of |0> and |1> state. The shear amount of freedom that one gets if our system is in a superposition is immense as we saw. For further discussion, we will ignore the global phase factor from here, because it will not be important when we are performing a measurement. 
If we also let $\phi = 0$, then we can simplify our equation further to the following:

$|\psi\rangle = \cos\left(\frac{\theta}{2}\right)|0\rangle + \sin\left(\frac{\theta}{2}\right)|1\rangle$

which can be written simply as

$|\psi\rangle = \alpha |0\rangle + \beta |1\rangle$

Abstracting further, if we have a composite system made up of two qubits, the first living in $H_1$ and the second in $H_2$, where $H_1$ and $H_2$ are Hilbert spaces of the same dimension (2), then a general state can be written as

$|\psi\rangle = \gamma_1 |0\rangle\otimes |0\rangle + \gamma_2 |0\rangle\otimes |1\rangle + \gamma_3 |1\rangle\otimes |0\rangle + \gamma_4 |1\rangle \otimes |1\rangle$

I will avoid explicitly showing the tensor product symbol from now on and simply write it as

$|\psi\rangle = \gamma_1 |0\rangle|0\rangle + \gamma_2 |0\rangle|1\rangle + \gamma_3 |1\rangle|0\rangle + \gamma_4 |1\rangle |1\rangle$

Now that you know how to create these states, without worrying much about how to realize them experimentally, let us have some fun with them. From the above expression you can pretty much see that if I measure the above quantum system $N$ times, I shall get the state $|00\rangle$ about $|\gamma_1|^2 N$ times. Suppose I measure only the first qubit and find it in the state $|0\rangle$; what will our wave function be after the measurement? Well, we know that we have collapsed the wave function so that the first qubit can only give $|0\rangle$, thus our post-measurement state would be

$|\psi\rangle = \frac{\gamma_1 |00\rangle + \gamma_2 |01\rangle}{\sqrt{|\gamma_1|^2 + |\gamma_2|^2}}$

Now that I have hopefully taught you what qubits are, let us look into the various gates we can create using unitary operators. Before we dive directly into the kinds of gates we can build with quantum computers, we will first try to understand what basic properties such gates must satisfy before being called, so to say, official quantum gates. First and foremost, a gate can never change the magnitude of our qubit in its respective Hilbert space. Here, changing the magnitude simply means that the sum of all the squared coefficients must remain 1; this is just a statement of conservation of probability. Due to this constraint, the only thing our gates can do is rotate our qubit in the Hilbert space. Gates in quantum computing are generally represented in two forms, the operator form and the matrix form, and both are in fact simply linear transformations acting on the qubit. Such transformations, which do not affect the magnitude but only rotate a vector (a complex number represented in vector form) in its respective space, that is, those transformations which preserve the inner product of the vector with itself before and after the transformation, are known as 'unitary transforms' in quantum mechanics. Therefore, all our quantum gates have to be unitary transformations. One good example of such a unitary transform is the Hadamard transform, given by

$H|0\rangle = \frac{|0\rangle + |1\rangle}{\sqrt{2}}$

$H|1\rangle = \frac{|0\rangle - |1\rangle}{\sqrt{2}}$

$H^{\otimes 2}|00\rangle = \frac{|00\rangle + |01\rangle + |10\rangle + |11\rangle}{2}$

In general it is given by

$H^{\otimes n}|j_1 j_2 \ldots j_n\rangle = \sum_{k_1,\ldots,k_n \in \{0,1\}} \frac{(-1)^{j_1 k_1 + j_2 k_2 + \cdots + j_n k_n}\,|k_1 k_2 \ldots k_n\rangle}{\sqrt{2^n}}$

One can clearly see that the sum of the squared coefficients before and after the Hadamard transformation is conserved, and since they were initially normalized, they remain so after carrying out the transform.
The cool thing about the Hadamard transformation is that it takes a single basis state as input and creates an equal-probability superposition of all the quantum states possible in the system. The use of this will be demonstrated later in this discussion. Carrying on, most quantum gates come with controlled-"gate" analogs; for example, the Hadamard gate also has a "controlled-Hadamard" or controlled-H gate. In such gates, one qubit is set to be the control and the other to be the target. If the control qubit happens to be $|0\rangle$, then the gate does not act on the target; if the control is $|1\rangle$, however, the controlled-H acts on the target. If we set our first qubit as the control and the second as the target, the following examples may help clarify how it works:

$cH|00\rangle = |00\rangle$

$cH|01\rangle = |01\rangle$

$cH|10\rangle = |1\rangle\otimes \frac{|0\rangle + |1\rangle}{\sqrt{2}} = \frac{|10\rangle + |11\rangle}{\sqrt{2}}$

$cH|11\rangle = |1\rangle\otimes \frac{|0\rangle - |1\rangle}{\sqrt{2}} = \frac{|10\rangle - |11\rangle}{\sqrt{2}}$

The Hadamard gate, along with the self-explanatory controlled-Not gate, can be used in the following circuit (taken from Nature's open-access repository) to produce the various Bell states, which will be important to us for teleportation.

As we can see above, our input is the state $|00\rangle$, which is very simple to realize in the laboratory. We first act with the Hadamard gate on the first qubit, causing the following change:

$(H\otimes I)|00\rangle = \frac{|0\rangle + |1\rangle}{\sqrt{2}}\otimes |0\rangle = \frac{|00\rangle + |10\rangle}{\sqrt{2}}$

After this we act upon it with the controlled-Not gate, where the control is our first qubit (represented by the big black dot) and the target is the second:

$cX\,\frac{|00\rangle + |10\rangle}{\sqrt{2}} = \frac{|00\rangle + |11\rangle}{\sqrt{2}} = |\beta_{00}\rangle$

This is the first Bell state. Similarly we can create all the other Bell states by giving $|01\rangle$, $|10\rangle$ and $|11\rangle$ as inputs to the circuit. Following is a list of how to create such states:

• Bell 01, $|\beta_{01}\rangle$: $(H\otimes I)|01\rangle = \frac{|01\rangle + |11\rangle}{\sqrt{2}}$, then $cX\,\frac{|01\rangle + |11\rangle}{\sqrt{2}} = \frac{|01\rangle + |10\rangle}{\sqrt{2}} = |\beta_{01}\rangle$

• Bell 10, $|\beta_{10}\rangle$: $(H\otimes I)|10\rangle = \frac{|00\rangle - |10\rangle}{\sqrt{2}}$, then $cX\,\frac{|00\rangle - |10\rangle}{\sqrt{2}} = \frac{|00\rangle - |11\rangle}{\sqrt{2}} = |\beta_{10}\rangle$

• Bell 11, $|\beta_{11}\rangle$: $(H\otimes I)|11\rangle = \frac{|01\rangle - |11\rangle}{\sqrt{2}}$, then $cX\,\frac{|01\rangle - |11\rangle}{\sqrt{2}} = \frac{|01\rangle - |10\rangle}{\sqrt{2}} = |\beta_{11}\rangle$
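For the computationally inclined, here is a minimal NumPy sketch (my own illustrative addition; the convention that the first qubit is the leftmost tensor factor is assumed) that feeds the four two-qubit basis states through the Hadamard-then-CNOT circuit above and prints the resulting Bell states:

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I2 = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0],   # control = first qubit,
                 [0, 1, 0, 0],   # target = second qubit
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def bell(a, b):
    """Feed |ab> through H (on the first qubit) and then CNOT."""
    state = np.kron(ket1 if a else ket0, ket1 if b else ket0)
    return CNOT @ (np.kron(H, I2) @ state)

for a in (0, 1):
    for b in (0, 1):
        print(f"|beta_{a}{b}> =", np.round(bell(a, b), 3))
```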
There are many reasons why being able to create such states is important, one of them being Bell's inequality. I shall not go down this path, but rather use these Bell states to reach our ultimate goal in this article, which is to be able to teleport. So far we have learned what qubits are, how quantum gates work and what properties they must satisfy, and how to create quantum circuits which produce Bell states. We need one more input in our toolbox before working on teleportation: the quantum Z gate. The Z gate leaves the basis states themselves, along with the moduli of their coefficients, intact, and only changes their relative phase: it shifts the phase of the $|1\rangle$ state by $\pi$ (i.e., flips its sign) and leaves the $|0\rangle$ state unchanged.

A list of more symbols for quantum circuits is given below (http://bio.freelogy.org/).

The teleportation

The goal here is for, say, Alice to prepare an arbitrary superposition $|\psi\rangle = \alpha |0\rangle + \beta |1\rangle$ and teleport this state to some other person, as always named Bob (circuit source: Nielsen & Chuang, Quantum Computation and Quantum Information).

Let us start from the very beginning. The convention is that the first two qubits belong to Alice, while the third qubit belongs to Bob. The trick is that, just by doing local manipulations on her two qubits, Alice can use the spooky quantum correlations to her advantage and teleport the state $|\psi\rangle$ to Bob. The first thing to be done is to jointly prepare the following state:

$|\psi_0\rangle = |\psi\rangle\otimes |\beta_{00}\rangle = (\alpha |0\rangle + \beta |1\rangle)\otimes \frac{|00\rangle + |11\rangle}{\sqrt{2}}$

which can be expanded as

$|\psi_0\rangle = \frac{1}{\sqrt{2}}\big[\alpha |0\rangle\otimes (|00\rangle + |11\rangle) + \beta |1\rangle\otimes (|00\rangle + |11\rangle)\big]$

As we have already seen in our study of how the controlled-Not (cX) gate works, Alice sends her qubits through a cX gate where the control is the first qubit and the target is the second. Notice, though, that Alice can't see or touch the third qubit, which belongs only to Bob; she can only work with her first two qubits. After the cX gate, the output is

$|\psi_1\rangle = \frac{1}{\sqrt{2}}\big[\alpha |0\rangle\otimes (|00\rangle + |11\rangle) + \beta |1\rangle\otimes (|10\rangle + |01\rangle)\big]$

Alice then proceeds to send her first qubit through the Hadamard gate, obtaining

$|\psi_2\rangle = \frac{1}{2}\big[\alpha (|0\rangle + |1\rangle)\otimes (|00\rangle + |11\rangle) + \beta (|0\rangle - |1\rangle)\otimes (|10\rangle + |01\rangle)\big]$

which can be simplified by regrouping the tensor products:

$|\psi_2\rangle = \frac{1}{2}\big[|00\rangle\otimes(\alpha |0\rangle + \beta |1\rangle) + |01\rangle\otimes (\alpha |1\rangle + \beta |0\rangle) + |10\rangle\otimes (\alpha |0\rangle - \beta |1\rangle) + |11\rangle\otimes (\alpha |1\rangle - \beta |0\rangle)\big]$

Now comes the part where Alice makes a measurement of her two qubits and communicates the outcome to Bob over a classical channel. The measurement collapses Alice's qubits and leads us to the following possibilities, which finally complete our teleportation procedure:

1. If Alice measures $|00\rangle$, Bob does nothing and already holds the original state $|\psi\rangle$.
2. If Alice measures $|01\rangle$, Bob applies $X$: $X(\alpha |1\rangle + \beta |0\rangle) = \alpha |0\rangle + \beta |1\rangle = |\psi\rangle$.
3. If Alice measures $|10\rangle$, Bob applies $Z$: $Z(\alpha |0\rangle - \beta |1\rangle) = \alpha |0\rangle + \beta |1\rangle = |\psi\rangle$.
4. If Alice measures $|11\rangle$, Bob applies $X$ and then $Z$: $ZX(\alpha |1\rangle - \beta |0\rangle) = Z(\alpha |0\rangle - \beta |1\rangle) = \alpha |0\rangle + \beta |1\rangle = |\psi\rangle$.

As you can finally see, we have teleported a quantum state from Alice to Bob. Notice, however, that while this is a breakthrough in the sense that we can transfer a quantum state from one person to another without ever measuring it, we have not teleported the state instantaneously: for Bob to recover the state with probability 1, Alice at some point has to send her measurement outcomes over a classical channel.
The use of classical channels means that we are not really teleporting instantaneously, but rather at most at the speed of light.

Conclusion

The article ignored a lot of mathematical subtleties due to its popular nature, and thus I could not include the rigor which is needed to truly understand such a technique; that is not to say I have compromised whatever physics I have used. Quantum teleportation is not a simple topic to understand, and even this article may only be well understood by a physics undergraduate who has some familiarity with quantum mechanics. The key point was to cause a spark of interest with the catchy headline, and then drag you through the dark and treacherous trail of quantum mechanics and linear algebra to see if you can survive. I hope the article teaches something to everyone, and hopefully I will revise it further to clarify more points as per the feedback.
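And if you want to convince yourself numerically that the protocol works, here is a self-contained NumPy sketch (again my own addition; the random seed and the way the measurement is sampled are arbitrary choices). It prepares a random $|\psi\rangle$, runs Alice's CNOT and Hadamard, samples her measurement outcome, applies Bob's $X$/$Z$ corrections, and checks that Bob's qubit matches $|\psi\rangle$:

```python
import numpy as np

rng = np.random.default_rng(7)

# Single-qubit gates
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def op(gate, qubit):
    """Lift a single-qubit gate to the 3-qubit space (qubit in {0,1,2})."""
    mats = [I2, I2, I2]
    mats[qubit] = gate
    return np.kron(np.kron(mats[0], mats[1]), mats[2])

def cnot(control, target, n=3):
    """CNOT acting on an n-qubit register, as a permutation matrix."""
    dim = 2 ** n
    U = np.zeros((dim, dim), dtype=complex)
    for i in range(dim):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control]:
            bits[target] ^= 1
        j = int("".join(map(str, bits)), 2)
        U[j, i] = 1
    return U

# Random normalized state |psi> = alpha|0> + beta|1> on qubit 0 (Alice's)
alpha, beta = rng.normal(size=2) + 1j * rng.normal(size=2)
norm = np.sqrt(abs(alpha) ** 2 + abs(beta) ** 2)
psi = np.array([alpha, beta]) / norm

# |psi> (x) |beta_00>; qubit 1 is Alice's Bell half, qubit 2 is Bob's
bell00 = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
state = np.kron(psi, bell00)

# Alice: CNOT (control qubit 0, target qubit 1), then H on qubit 0
state = op(H, 0) @ (cnot(0, 1) @ state)

# Alice measures qubits 0 and 1 (sample one outcome m = |m1 m2>)
blocks = state.reshape(4, 2)                 # rows: (q0,q1); cols: Bob
probs = np.sum(np.abs(blocks) ** 2, axis=1)
m = rng.choice(4, p=probs)
post = blocks[m] / np.sqrt(probs[m])         # Bob's conditional state

# Bob's corrections: X if the second bit is 1, then Z if the first is 1
m1, m2 = m >> 1, m & 1
bob = np.linalg.matrix_power(Z, m1) @ np.linalg.matrix_power(X, m2) @ post
print(f"outcome |{m1}{m2}>, overlap |<psi|bob>| = {abs(np.vdot(psi, bob)):.6f}")
# prints an overlap of 1.0: Bob holds |psi>
```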
//
//
// Copyright 2020 gRPC authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
//

#ifndef GRPCPP_XDS_SERVER_BUILDER_H
#define GRPCPP_XDS_SERVER_BUILDER_H

#include <grpc/impl/codegen/port_platform.h>

#include <grpcpp/server_builder.h>

namespace grpc {

class XdsServerServingStatusNotifierInterface {
 public:
  struct ServingStatusUpdate {
    grpc::Status status;
  };

  virtual ~XdsServerServingStatusNotifierInterface() = default;

  // \a uri contains the listening target associated with the notification.
  // Note that a single target provided to XdsServerBuilder can get resolved
  // to multiple listening addresses.
  // The callback is invoked each time there is an update to the serving
  // status. The API does not provide any guarantees around duplicate updates.
  // Status::OK signifies that the server is serving, while a non-OK status
  // signifies that the server is not serving.
  virtual void OnServingStatusUpdate(std::string uri,
                                     ServingStatusUpdate update) = 0;
};

class XdsServerBuilder : public grpc::ServerBuilder {
 public:
  // NOTE: class experimental_type is not part of the public API of this class
  // TODO(yashykt): Integrate into public API when this is no longer
  // experimental.
  class experimental_type : public grpc::ServerBuilder::experimental_type {
   public:
    explicit experimental_type(XdsServerBuilder* builder)
        : ServerBuilder::experimental_type(builder), builder_(builder) {}

    // EXPERIMENTAL: Sets the drain grace period in ms for older connections
    // when updates to a Listener are received.
    void set_drain_grace_time(int drain_grace_time_ms) {
      builder_->drain_grace_time_ms_ = drain_grace_time_ms;
    }

   private:
    XdsServerBuilder* builder_;
  };

  // It is the responsibility of the application to make sure that \a notifier
  // outlasts the life of the server. Notifications will start being made
  // asynchronously once `BuildAndStart()` has been called. Note that it is
  // possible for notifications to be made before `BuildAndStart()` returns.
  void set_status_notifier(XdsServerServingStatusNotifierInterface* notifier) {
    notifier_ = notifier;
  }

  /// NOTE: The function experimental() is not stable public API. It is a view
  /// to the experimental components of this class. It may be changed or
  /// removed at any time.
  experimental_type experimental() { return experimental_type(this); }

 private:
  // Called at the beginning of BuildAndStart().
  ChannelArguments BuildChannelArgs() override {
    ChannelArguments args = ServerBuilder::BuildChannelArgs();
    if (drain_grace_time_ms_ >= 0) {
      args.SetInt(GRPC_ARG_SERVER_CONFIG_CHANGE_DRAIN_GRACE_TIME_MS,
                  drain_grace_time_ms_);
    }
    grpc_channel_args c_channel_args = args.c_channel_args();
    grpc_server_config_fetcher* fetcher = grpc_server_config_fetcher_xds_create(
        {OnServingStatusUpdate, notifier_}, &c_channel_args);
    if (fetcher != nullptr) set_fetcher(fetcher);
    return args;
  }

  static void OnServingStatusUpdate(void* user_data, const char* uri,
                                    grpc_serving_status_update update) {
    if (user_data == nullptr) return;
    XdsServerServingStatusNotifierInterface* notifier =
        static_cast<XdsServerServingStatusNotifierInterface*>(user_data);
    notifier->OnServingStatusUpdate(
        uri, {grpc::Status(static_cast<StatusCode>(update.code),
                           update.error_message)});
  }

  XdsServerServingStatusNotifierInterface* notifier_ = nullptr;
  int drain_grace_time_ms_ = -1;
};

}  // namespace grpc

#endif /* GRPCPP_XDS_SERVER_BUILDER_H */
{ "redpajama_set_name": "RedPajamaGithub" }
7,104
Cortinarius fasciatus (Giovanni Antonio Scopoli, 1772 ex Elias Magnus Fries, 1838), of the division Basidiomycota, family Cortinariaceae and genus Cortinarius, is an inedible, only rarely encountered mushroom that lives in symbiosis, being a mycorrhizal symbiont (it forms mycorrhizae on the roots of trees). No common name is known. In Romania, Bessarabia and Northern Bukovina it grows from hilly to montane regions, in groups on acidic soil, in coniferous forests and in oligotrophic bogs under spruces, preferably among mosses. It appears from (July) August to October (November).

Taxonomy

The binomial name Agaricus fasciatus was established by the Italian naturalist Giovanni Antonio Scopoli in volume 2 of his work Flora carniolica of 1772. Then, in 1838, the renowned Swedish mycologist Elias Magnus Fries transferred the species to the genus Cortinarius, keeping the epithet, as can be verified in his book Epicrisis systematis mycologici, seu synopsis hymenomycetum; this is the currently valid name (2022). Obligate synonyms are Hydrocybe fasciata of the German botanist Friedrich Otto Wünsche from 1877, based on Scopoli's description, and Gomphos fasciatus of his compatriot Otto Kuntze from 1891, based on Fries's recombination. No other renamings are known. The specific epithet is derived from the Latin word (= to wrap, to wind around).

Description

Cap: hygrophanous, with a diameter of 2-6 cm, at first conical, then convex, and in age with a striate margin turned upwards. From the beginning of maturity it always bears a sharp umbo. The cuticle is smooth, shiny in dry weather and sticky to greasy in damp conditions. The colour varies between coppery, rusty brown and reddish brown, darker at the centre, occasionally almost blackish in old age.

Gills: thin and very widely spaced, slightly bulging, interspersed with lamellulae of different lengths, attached to the stipe by a tooth and rounded there. The colour, initially rusty yellow to orange-brown, changes with maturity towards cinnamon brown, the edges being lighter. In the young stage of the mushroom they are covered with very thin whitish-yellow fragments of the partial veil.

Stipe: with a smooth surface, sometimes sprinkled with remnants of the veil, 5-8 cm long and 0.4-0.8 cm wide; rather long, thin, almost cylindrical, strongly fibrous and hollow inside, being bulbous and white-felted towards the base, with a pinkish mycelium. The surface, whitish for a long time, finally becomes light brown to reddish brown, the discoloration starting from the apex. It bears no ring.

Flesh: thin and fibrous, above all in the stipe; whitish yellow in youth, then yellowish brown to flesh-coloured, with no particular smell or taste.

Microscopic characteristics: the spores are ochre-rusty, ellipsoidal with an apiculus, finely punctate-warty on the outside and granular inside, measuring 6-9 x 4-5 microns. The spore print is rusty. The clavate basidia, mostly with 4 sterigmata each, measure 20-22 (25) x 6-7 microns. The cystidia (sterile elements located in the hymenium or among the cells of the cap and stipe cuticle, probably with an excretory role), of the same size, are club-shaped with rounded tips. The pileocystidia (sterile elements on the cap surface) have velar hyphae of 5.5-6.9 µm. Clamp connections are absent.

Chemical reactions: none known.

Possible confusion

The species can be confused first of all with its twin Cortinarius fulvescens (inedible), but also, for example, with Cortinarius brunneus (poisonous), Cortinarius cinnamomeus (poisonous), Cortinarius croceus (poisonous), Cortinarius depressus syn. Cortinarius adalberti (inedible), Cortinarius flexipes (poisonous), Cortinarius gentilis (poisonous), Cortinarius hinnuleus (inedible), Cortinarius malicorius (poisonous), Cortinarius orellanus (deadly), Cortinarius rigens (inedible), Cortinarius rigidus syn. Cortinarius umbrinolens (inedible; earthy, mouldy smell, mild taste, grows under birches), Cortinarius rubellus (deadly), Cortinarius semisanguineus (poisonous) or Cortinarius uraceus (inedible).

Similar species in images

Culinary value

Cortinarius fasciatus is not toxic, but because of the inferior quality of its flesh it is inedible. In addition, the species can easily be confused with some poisonous ones and, given its rarity, it should be spared and left in place.

Notes

Bibliography

Bruno Cetto: "I funghi dal vero", vol. 1-7, Arte Grafiche Saturnia, Trento 1976-1993 (for the research as a whole)
German Josef Krieglsteiner (ed.), Andreas Gminder: "Verbreitungsatlas der Großpilze Deutschlands (West)", Ulmer, Stuttgart 1991, ISBN 3-8001-3536-1
Edmund Michael: "Führer für Pilzfreunde", Salzwasser Verlag GmbH, Paderborn 2010, ISBN 978-3-86195-499-6
Meinhard Michael Moser in Helmut Gams: "Röhrlinge und Blätterpilze - Kleine Kryptogamenflora Mitteleuropas", 5th edition, vol. 2, Gustav Fischer, Stuttgart 1983
Roger Phillips: "Mushrooms: A comprehensive guide to mushroom identification", Macmillan, London and Oxford 2013, ISBN 978-0-330-44237-4
Albert Pilát: "Mushrooms and Other Fungi", P. Nevill, London 1961
Carleton Rea: "British Basidiomycetae: A handbook to the larger British fungi", Cambridge University Press, Cambridge 1922, p. 182
Unione micologica italiana: "Micologia italiana", vol. 29-30, Edagricole, Bologna 2000

External links

Cortinarius fasciatus, more images 1
Cortinarius fasciatus, more images 2 (+ spores)
Cortinarius fasciatus, more images 3 (+ spores)

Categories: Cortinarius, Mycorrhiza, Inedible mushrooms, 1772 in science
{ "redpajama_set_name": "RedPajamaWikipedia" }
8,556
\section{Introduction and background}\label{sec_I} \subsection{Q-learning algorithm working principles}\label{sub_I_A} The term machine learning (ML) was coined in 1959 by Arthur Samuel, an American IBMer and pioneer in the fields of computer gaming and artificial intelligence \cite{Samuel_A}. ML essentially consists in exploiting self-learning algorithms that extract information from data in order to make predictions. In this way, machine learning enables computers to build models from sample data and to base decision-making processes on data inputs \cite{Raschka, Algorithms_for_reinforcement_learning}. The application of machine learning techniques to physics has developed enormously in the last decades, with approaches based on topological optimization, evolutionary strategies, deep learning and reinforcement learning \cite{krenn2020computerinspired, Krenn_2016, Melnikov_2018}. In particular, the application of ML to ``creative tasks'', such as designing new quantum experiments, is not completely explored yet. A few aspects of quantum mechanics lead researchers to think that machine learning and computer-aided techniques offer interesting and promising perspectives in this regard \cite{Melnikov_2018}. For example, as soon as many entangled particles are involved, the Hilbert space dimension becomes so large that the problem is no longer tractable without computer aid. Moreover, to build new experiments physicists usually have to handle a huge number of variables, and ML techniques are already in use to handle this kind of complexity \cite{krenn2020computerinspired}. It is also worth asking whether ML and artificial intelligence (AI) can boost human intuition in dealing with the intrinsically counter-intuitive nature of quantum mechanics, and whether ML can help to handle multi-dimensional and multi-partite entangled systems. An emblematic example is the Melvin algorithm developed by Krenn et al. in 2016 \cite{Krenn_2016}: this algorithm has uncovered solutions to previously unsolved questions and has inspired the discovery of new scientific insights \cite{Krenn_2017}. In this work we present an application of ML techniques to quantum entanglement, a fundamental resource for quantum computation. In particular, we apply ML to design new quantum experiments aimed at engineering quantum mechanical states with desired entanglement properties. Machine learning techniques can be grouped into three basic types: supervised learning, unsupervised learning and reinforcement learning. Q-learning is one of the best known reinforcement learning algorithms \cite{Algorithms_for_reinforcement_learning,Q_learn}. Although the name may suggest otherwise, the Q does not stand for anything quantum-related: it is a historical notation based on the name of the cost function optimized by the algorithm. As such, Q-learning is a type of classical reinforcement learning (RL). Alternative RL methods, such as the Projective Simulation (PS) algorithms \cite{PS}, have been extended to the quantum domain \cite{Davide}, obtaining for the first time a quantum advantage over their classical counterparts. Whereas much activity is currently devoted to quantum reinforcement learning \cite{Vedran,Lamata}, the present work remains classical with respect to the algorithm employed, the goal being the generation of quantum states.
Q-learning, like all RL procedures, involves an agent (the algorithm itself), an environment and a set of actions by which the agent interacts with it (see Fig.\ref{fig:reinforcementLearning}). The environment contains some \textit{objectives} which the agent must reach \cite{sutton2018reinforcement}. Q-learning is a model-free RL algorithm, i.e. it requires no previous knowledge of the environment \cite{Q_learn}: the agent explores the environment by performing stochastic actions and analyzing the feedback it receives. \begin{figure} \includegraphics[width=0.7\linewidth]{Reinforcement_learning.eps} \caption{Flowchart of the reinforcement learning working principle. It consists of an agent (the algorithm itself), an environment and a set of actions by which the agent interacts with the environment and is rewarded.} \label{fig:reinforcementLearning} \end{figure} By interacting with the environment, the agent changes its state and possibly earns rewards when it finds the objectives. By keeping track of this feedback, it can learn how to interact with the environment so as to reach the objectives while maximizing the rewards. Hence, the agent becomes able to select its actions with an optimal policy \cite{Q_learn}, in the sense of reward maximization. A simple illustrative description of the Q-learning algorithm is shown in Fig.\ref{fig:Mouseqlearning} for the mouse-labyrinth example. We can identify the following elements: \begin{enumerate} \item[] \begin{minipage}[t]{0.5\linewidth} \small \begin{enumerate} \item ensemble of actions: all the possible movements that the mouse can perform, \item states of the agent: its positions in the labyrinth, \item rewards: the pieces of cheese that it can reach while walking in the labyrinth. \end{enumerate} \end{minipage} \begin{minipage}[t]{0.3\linewidth} \centering \strut\vspace*{-\baselineskip}\newline\includegraphics[width=0.4\linewidth, height=0.14\textheight]{Mouse_list_2.eps} \end{minipage} \end{enumerate} The actions of the mouse (agent) allow it to explore the labyrinth (environment). The states in which the mouse can be found correspond to all the positions in the labyrinth. The rewards in the environment are encoded by a user-defined function that establishes the ``prizes'' the agent gains during its exploration. Usually this means that, if we want the agent to learn how to end up in specific states of the environment, we have to provide rewards every time it performs actions that lead it into those states. We will call these states the ``objective states''. The reward function is usually implemented as a matrix called the Reward matrix (R-matrix). It has non-zero elements for the \textit{state-action} pairs that lead directly to the objective states. While the agent explores the environment, it needs to record the rewards that it earns and to recall them during the exploration. In particular, it needs a tool to keep track of the \textit{state-action} pairs that can lead to rewards. In Q-learning this tool is the Quality matrix, or Q-matrix. \begin{figure} \includegraphics[width=1\linewidth]{Mouse_q_learning.eps} \caption{Q-learning scheme illustrated through the mouse-labyrinth example. We represent the agent as a mouse that can move around a labyrinth, with the aim of finding the exits, where some cheese-rewards are placed. In the learning process, it selects and performs random movements starting from random positions in the labyrinth. It earns a reward when it approaches the exits and records this event.
By performing random movements and keeping track of the rewards earned, it can trace the optimal path to escape from the labyrinth, i.e. the path that maximizes the rewards earned.} \label{fig:Mouseqlearning} \end{figure} The Q-learning algorithm involves two phases: training and testing \cite{Q_learn}. The training part consists in the exploration of the environment and proceeds by single \textit{episodes}. At each time step $t$ an episode takes place: the agent is placed in a random state $s_t$ of the environment, from which it performs a random action $a_t$, and records the reward $R_t$, if any, that it obtains from the R-matrix. The numerical value inserted in the Q-matrix is calculated with a Bellman equation \cite{bellman_equation}, or Q-learning formula, which updates the old Q-matrix value $Q(s_t,a_t)$ to the new one $Q^{new}(s_t,a_t)$. The agent assigns to each \textit{state-action} pair a quality value (Q-value) that depends not only on the reward earned in the current episode ($R_t$), but also on the rewards received in the past ones. Further details about the calculation of this Q-value are given in the following section. The steps comprising a single episode are the following: \begin{itemize} \item the agent observes its current state $s_t$; \item selects and performs a random action $a_t$; \item observes the subsequent state $s_{t+1}$; \item possibly receives a reward $R_t$; \item updates its $Q(s_t,a_t)$ value to $Q^{new}(s_t,a_t)$ using the Q-learning formula; \item the agent's state is then re-initialized to a new random one for the following episode. \end{itemize} At the end of the training part, the agent has updated its Q-matrix with weighted rewards associated with the \textit{state-action} pairs. Each value in the Q-matrix quantifies how \textit{good} it is to take a specific action in a certain state, in order to reach the objective along an optimal path.\\ \\ After the training part, if the agent went through a sufficiently high number of episodes, the values of its Q-matrix no longer change significantly. The criterion for this convergence must be established depending on the problem treated, but in general the Q-learning algorithm is proven to converge \cite{Q_learn}. In the testing part, by using the Q-matrix values, the agent is able to reach the \textit{a priori} established objectives by following the best rewarded \textit{state-action} pairs, starting from a chosen initial state. The agent selects from the Q-matrix the most rewarded action associated with its current state. It performs the action and changes its state, repeating this procedure until it reaches the final objective. In this way it finds the optimal path towards its goals, maximizing the earned rewards (see Fig.\ref{fig:Mousetesting}). \begin{figure} \includegraphics[width=1\linewidth]{Mouse_testing2.eps} \caption{At the end of the training part, the mouse has updated its Q-matrix with weighted rewards associated with the \textit{state-action} pairs. These values quantify how \textit{good} it is to take a specific action in a certain state, in order to reach the objective. In the grid shown in this picture, the size of the cheese indicates the \textit{goodness} of an action, given the state in which it is performed. Once an initial position is established, the mouse follows its grid of \textit{state-action} pairs to select its next movement, choosing the most rewarded ones.
This will carry it to the exit of the labyrinth.} \label{fig:Mousetesting} \end{figure} \subsection{Q-learning cost function}\label{sub_I_B} As mentioned, the Q-matrix is derived from the R-matrix, and it is updated at each episode of the training process using the Q-learning formula: \begin{gather}\label{eq:Q_formula} Q^{new}(s_{t}, a_{t})\leftarrow \underbrace{Q(s_{t}, a_{t})}_{\text{old value}}+ \underbrace{\alpha}_{\substack{\text{learning}\\ \text{ rate}}} \big(\underbrace{R_{t}}_{\text{reward}}+\\\notag +\underbrace{\gamma}_{\substack{\text{discount}\\ \text{ factor}}} \underbrace{\max_{a}Q(s_{t+1},a)}_{\substack{\text{optimal future}\\ \text{ value}}} - \underbrace{Q(s_{t},a_{t})}_{\text{old value}} \big). \end{gather} This expression is a Bellman equation \cite{bellman_equation}, where $Q^{new}(s_{t}, a_{t})$ expresses the quality value of a \textit{state-action} pair; the variable $t$ represents the discrete time of the episodes. We compute the value $Q^{new}(s_{t}, a_{t})$ starting from the prior value of the Q-matrix for that \textit{state-action} pair, $Q(s_{t}, a_{t})$, and adding to it the reward earned in the current episode, $R_{t}$. To the latter we add an estimate of the maximum reward attainable through future actions, $\max_{a}Q(s_{t+1},a)$, where the index $a$ runs over all the actions that can be performed from the resulting state $s_{t+1}$. If the agent gained rewards in past episodes, the term $\max_{a}Q(s_{t+1},a)$ makes it take those previously recorded rewards into account as a positive contribution to the value $Q^{new}(s_{t}, a_{t})$. This means that, in order to assign a quality value to the current \textit{state-action} pair, the agent looks at the values of the possible next steps; with this procedure, the agent learns to follow the best rewarded path. These contributions are scaled by a \textit{discount factor} $\gamma$. \begin{itemize} \item $\gamma$ is a real number between zero and one, $0<\gamma<1$, and sets how much the reward of future actions influences the new value $Q^{new}(s_{t}, a_{t})$. \end{itemize} If $\gamma$ is chosen close to $0$, it prevents the algorithm from seeing future rewards, making it ``myopic'' \cite{Q_learn}: only current rewards are considered. If it is close to $1$, reaching a high long-term reward becomes hard, because past rewards are weighted much more than the new ones coming from the R-matrix. All these terms are scaled again by the so-called \textit{learning rate} $\alpha$. \begin{itemize} \item $\alpha$ is a real number between $0$ and $1$ which, like $\gamma$, is set to establish the learning speed of the algorithm. \end{itemize} If $\alpha$ is closer to $1$ the algorithm learns quickly, as the values of the Q-matrix are updated with a high weight; otherwise the learning part of the algorithm is slower, but it is better able to ``remember'' the rewards collected in the past. In other words, $\alpha$ defines how much we override the old values of the Q-matrix with the new ones. Hence, setting $\alpha$ requires choosing between two opposite strategies: the first, with $\alpha$ close to $1$, consists in a faster exploration that easily forgets past rewards in favor of the new ones; the second, with $\alpha$ close to $0$, implies a slower exploration in which the algorithm remembers each past reward, but takes a longer time to explore the environment.
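For illustration, a minimal tabular implementation of the update in Eq.\eqref{eq:Q_formula} could read as follows. This is a schematic Python/NumPy sketch, not the code actually used in this work: the environment size, the transition rule, the reward placement and the hyperparameter values are all arbitrary assumptions.
\begin{verbatim}
import numpy as np

n_states, n_actions = 6, 4     # illustrative environment size
alpha, gamma = 0.8, 0.5        # learning rate and discount factor

R = np.zeros((n_states, n_actions))
R[4, 2] = 100.0                # hypothetical reward: action 2 taken in
                               # state 4 leads directly to the objective
Q = np.zeros_like(R)

def step(s, a):
    # hypothetical deterministic transition rule giving s_{t+1}
    return (s + a + 1) % n_states

rng = np.random.default_rng(0)
for episode in range(5000):
    s = rng.integers(n_states)    # random initial state of the episode
    a = rng.integers(n_actions)   # random (exploratory) action
    s_next = step(s, a)
    # Q-learning (Bellman) update of the state-action quality value
    Q[s, a] += alpha * (R[s, a] + gamma * Q[s_next].max() - Q[s, a])
\end{verbatim}
After enough episodes the entries of \texttt{Q} stabilize, which is precisely the convergence behavior discussed next.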
After setting these parameters to suitable values, the algorithm updates the Q-matrix during the training part.\\ The Q-values are proven to converge to those of a matrix $Q^*(s,a)$ representing the optimal policy of the algorithm, i.e. the set of quality values that makes the agent able to reach the objective along an optimal path \cite{Q_learn}. Strictly speaking, the training part should include an infinite number of trials in order to converge exactly to the optimal policy \cite{Q_learn, Francisco_melo}; in practice, the residual error after a finite training is negligible. We will consider the training completed when the changes of the Q-matrix values remain below a certain threshold. \subsection{Entanglement basics}\label{sub_I_C} Entanglement is one of the most important and interesting features of quantum mechanics: it plays a key role in the fields of quantum communication and quantum cryptography \cite{nielsen_chuang_2010,RMP02}, and entangled states are a fundamental ingredient of quantum algorithms and quantum computation \cite{nielsen_chuang_2010, RMP02}.\\ For pure quantum states, if two or more systems are entangled the state of each system cannot be described independently of the others. In other words, for a system with Hilbert space $H=H_A\otimes H_B$ it holds: \begin{gather}\label{entanglement} \ket{\psi}\ne \ket{\psi}_A \otimes \ket{\psi}_B, \end{gather} where $\ket{\psi}$, the state of the whole system in $H$, cannot be factorized into states of the subsystems $A$ and $B$. We focus our discussion on two-level quantum systems, i.e. qubits (such as linearly polarized photons or electron spins), whose generic state reads: \begin{gather} \ket{\phi}=\alpha\ket{0}+\beta\ket{1}, \end{gather} with $\alpha$ and $\beta$ complex coefficients. We can deal with more than one qubit at a time, and thus with multiqubit entanglement. Entanglement between qubits is fundamental for quantum computing and, of course, for the realization of modern prototypes of quantum computers \cite{nielsen_chuang_2010,RMP02}. In this framework, the study of qubit entanglement is of crucial importance: many studies and applications aim at obtaining quantum states with prescribed entanglement properties, and the complexity of entanglement poses major obstacles to this research field \cite{Melnikov_2018}. For this reason, many recent works employ machine learning techniques to help scientists in difficult computational tasks, but also to explore entanglement features that are far from human intuition \cite{krenn2020computerinspired, Krenn_2016}. Against this background, this work explores the design of quantum circuits that can reproduce entangled states of qubits, with the aid of a reinforcement learning algorithm. In particular, we focus our attention on entangled states of four qubits, based on the classification previously established in the literature \cite{Verstraete2, Verstraete_2002,slocc_in_efs}. \subsection{SLOCC classification}\label{sub_I_D} The complexity of quantum entangled states calls for a clear classification scheme. In order to categorize different types of entanglement, we can divide the Hilbert space of a multipartite system into equivalence classes, using an operational definition of equivalence. Following the scheme in \cite{Chitambar_2014, AULBACH_2012} we can use Local Unitary (LU) equivalence, LU operations being deterministic and reversible.
If it is possible to transform a state into another via LU operations, then the two states are LU-equivalent \cite{AULBACH_2012}. Two LU-equivalent states have the same physical properties, in particular the same entanglement properties. However, the LU operations do not include all the possible operations that preserve entanglement and can be performed experimentally; in particular, they do not allow joint operations on spatially separated particles. In order to classify entangled states properly, we need to supplement the conversion operations with classical communication \cite{Bennett_1996}. This leads to the paradigm of Local Operations assisted with Classical Communication (LOCC): quantum states are transformed by performing Local Operations (LOs) on the subsystems while allowing the transmission of classical communication between the spatially separated parties \cite{AULBACH_2012, Bennett_1996}. For pure states, however, it has been shown \cite{Entan_monotones, LUequival} that two states are LOCC-equivalent iff they are LU-equivalent. This means that the classes defined by LOCC are the same as those defined by LU operations. It has been demonstrated that two pure bipartite states are LOCC-equivalent iff they have the same Schmidt coefficients \cite{Majorization, Schmidt, tenso_rank}: \begin{gather} \ket{\psi} \mathop \leftrightarrow \limits^{LU} \ket{\phi} \Leftrightarrow \ket{\psi} \mathop \leftrightarrow \limits^{LOCC} \ket{\phi} \Leftrightarrow \alpha_i = \alpha^{\prime}_i \:, \; \forall i, \end{gather} with $\ket{\psi}\to \left\{ {\alpha_i} \right\} $, $\ket{\phi}\to \left\{ \alpha^{\prime}_i\right\}$ being their Schmidt decompositions. Hence, through these LOCC we can transform one state into another deterministically (with probability of success equal to $1$), i.e. the two states belong to the same LOCC entanglement class \cite{D_r_2000} and have the same entanglement properties; otherwise they belong to different entanglement classes and thus have different entanglement properties. We notice that this criterion is interesting in quantum information theory because all the parties involved can use LOCC-equivalent states for exactly the same tasks \cite{D_r_2000}. The Schmidt criterion of classification becomes useless when dealing with multipartite Hilbert spaces, since the Schmidt decomposition exists only for bipartitions of a system \cite{nielsen_chuang_2010}. The most promising classification for multipartite states is the one based on equivalence under Stochastic Local Operations and Classical Communication (SLOCC). The latter is identical to LOCC equivalence except that the interconversion of two states need not be deterministic: the success probability of a conversion only needs to be non-zero \cite{nielsen_chuang_2010}. Thus LOCC-equivalence implies SLOCC-equivalence, and therefore the partition of the Hilbert space into LOCC equivalence classes is a refinement of the partition into SLOCC classes (Fig.\ref{fig:LOCC-SLOCC}). \begin{figure} \centering \includegraphics[width=0.5\linewidth]{LOCC-SLOCC.eps} \caption{LOCC classification as a refinement of the SLOCC classification. States that are LOCC equivalent are also SLOCC equivalent \cite{AULBACH_2012}.} \label{fig:LOCC-SLOCC} \end{figure} We recall that SLOCC operations cannot increase, on average, the amount of entanglement \cite{AULBACH_2012, nielsen_chuang_2010}.
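As a side remark, the bipartite Schmidt criterion above is easy to check numerically: the Schmidt coefficients of a pure state are the singular values of its coefficient matrix. A minimal sketch (Python/NumPy; the example states and function names are ours, for illustration only):
\begin{verbatim}
import numpy as np

def schmidt_coefficients(psi, dim_A, dim_B):
    # Schmidt coefficients of a pure state of H_A (x) H_B:
    # the singular values of psi reshaped to a dim_A x dim_B matrix
    return np.linalg.svd(psi.reshape(dim_A, dim_B), compute_uv=False)

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
plus = np.array([1, 1]) / np.sqrt(2)
prod = np.kron(np.array([1, 0]), plus)       # |0> (x) |+>, separable

print(schmidt_coefficients(bell, 2, 2))  # [0.707, 0.707]: two coefficients
print(schmidt_coefficients(prod, 2, 2))  # [1., 0.]: single nonzero coefficient
\end{verbatim}
Two pure bipartite states are then LU- (hence LOCC-) equivalent if and only if the two computed spectra coincide.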
In particular, since SLOCC operations cannot increase entanglement, it is not possible to generate entangled states from separable states by SLOCC, even probabilistically \cite{AULBACH_2012, nielsen_chuang_2010}. With two qubits, there exist only two SLOCC equivalence classes: the class of separable states and the class of entangled states. Any pure entangled state of two qubits can be converted via SLOCC into any other pure entangled state with non-zero probability. For three qubits, there exist six SLOCC classes \cite{D_r_2000}, namely the separable class, the three bipartite entangled classes AB-C, AC-B, BC-A, the class with W-type entanglement and the class with GHZ-type entanglement \cite{D_r_2000, perfect_W}. For four qubits, the number of SLOCC classes turns out to be infinite \cite{AULBACH_2012}. Because of this, various attempts have been made to find alternative and physically meaningful classification schemes for the four-qubit case. The approach that we adopt is the one used by F. Verstraete et al. \cite{Verstraete2,Verstraete_2002} and further analyzed by D. Li et al. in \cite{slocc_in_efs}. In \cite{Verstraete_2002} the concept of Entangled Families (EF) was introduced and nine different EFs were found, while in \cite{slocc_in_efs} 49 different SLOCC classes were identified within these EFs. This classification of four-qubit states will be the basis for the application of our quantum circuit design algorithm. \subsection{SLOCC families for four qubits entangled states}\label{sub_I_E} As explained in the previous section, Verstraete et al. categorize the four-qubit entangled states into nine entanglement families on the basis of the SLOCC classification \cite{Verstraete_2002}. Since they prove that each SLOCC equivalence class belongs to exactly one EF \cite{Verstraete_2002}, the SLOCC classes are a refinement of the EFs. In Fig.\ref{fig:EFs-SLOCC} a graphical representation of this relationship is shown. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{EFs-SLOCC.eps} \caption{Graphical representation of the relationship between EFs and SLOCC classes.
Even though each EF does not contain only one SLOCC class, each SLOCC class is proven to belong to only one EF \cite{Verstraete_2002}.} \label{fig:EFs-SLOCC} \end{figure} Here we report all nine EFs for four qubits, adhering to the terminology of \cite{Verstraete_2002}: \begin{small} \begin{alignat}{2}\notag G_{abcd}=&\frac{d+a}{2}(\ket{0000}+\ket{1111})+\frac{a-d}{2}(\ket{0011}+\ket{1100})+\\\notag &+\frac{c-b}{2}(\ket{0110}+\ket{1001})+\frac{b+c}{2}(\ket{0101}+\ket{1010})\\\notag L_{abc_2}=&\frac{a+b}{2}(\ket{0000}+\ket{1111})+\frac{a-b}{2}(\ket{0011}+\ket{1100})\\\notag &+c(\ket{0101}+\ket{1010})+\ket{0110}\\\notag L_{a_2b_2}=&a(\ket{0000}+\ket{1111})+b(\ket{0101}+\ket{1010})+\ket{0110}\\\notag &+\ket{0011}\\\notag L_{ab_3}=&a(\ket{0000}+\ket{1111})+\frac{a+b}{2}(\ket{0101}+\ket{1010})+\\\notag &+\frac{a-b}{2}(\ket{0110}+\ket{1001})+\frac{i}{\sqrt{2}}(\ket{0001}+\ket{0010}+\\\notag &+\ket{0111}+\ket{1011})\\\notag L_{a_4}=&a(\ket{0000}+\ket{0101}+\ket{1010}+\ket{1111})+(i\ket{0001}\\\notag &+\ket{0110}+i\ket{1011})\\\notag L_{a_20_{3\oplus \bar{1}}}=&a(\ket{0000}+\ket{1111})+(\ket{0011}+\ket{0101}+\ket{0110})\\\notag L_{0_{5\oplus\bar{3}}}=&\ket{0000}+\ket{0101}+\ket{1000}+\ket{1110}\\\notag L_{0_{7 \oplus\bar{1}}}=&\ket{0000}+\ket{1011}+\ket{1101}+\ket{1110}\\ L_{0_{3\oplus\bar{1}}0_{3\oplus\bar{1}}}=&\ket{0000}+\ket{0111}, \label{eq:nine_classes} \end{alignat} \end{small}\\ where $a$, $b$, $c$ and $d$ are four complex parameters. The SLOCC classes that can be identified within these nine families depend on constraints applied to those four complex parameters \cite{slocc_in_efs}. The work by D. Li et al. \cite{slocc_in_efs} distinguishes at least 49 true SLOCC entanglement classes among the nine families. For example, for family $G_{abcd}$ they identify 13 different true SLOCC classes; for family $L_{abc_2}$, 19 true SLOCC classes; and so on. They give the complete SLOCC classifications for the families $L_{a_4}$, $L_{a_20_{3\oplus \bar{1}}}$, $L_{0_{5\oplus\bar{3}}}$, $L_{0_{7 \oplus\bar{1}}}$ and $L_{0_{3\oplus\bar{1}}0_{3\oplus\bar{1}}}$, but they do not guarantee that the other families have been completely explored. In Tab.\ref{tab:slocc_classes_all} we summarize the 49 true SLOCC classes, specifying the conditions on the coefficients $a$, $b$, $c$ and $d$; here too we adhere to the terminology used in \cite{slocc_in_efs}. Notice that, for $L_{a_20_{3\oplus \bar{1}}}$, $L_{0_{5\oplus\bar{3}}}$ and $L_{0_{7 \oplus\bar{1}}}$ in Eq.\eqref{eq:nine_classes}, there is a $1:1$ correspondence with SLOCC classes, while $L_{0_{3\oplus\bar{1}}0_{3\oplus\bar{1}}}$ does not contain four-qubit entanglement; in fact, its representative is a product state of the one-qubit state $\ket{0}$ and the three-qubit $\ket{GHZ}$ state.
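For concreteness, each representative in Eq.\eqref{eq:nine_classes} is simply a vector in the sixteen-dimensional four-qubit space. As an illustration, $G_{abcd}$ can be built as follows (a Python/NumPy sketch assuming qubit $A$ is the leftmost bit of the label; the chosen parameter values are only an example):
\begin{verbatim}
import numpy as np

def basis(bits):
    # four-qubit computational basis state |bits>, e.g. basis('0011')
    v = np.zeros(16, dtype=complex)
    v[int(bits, 2)] = 1.0
    return v

def G_abcd(a, b, c, d):
    # representative state of the family G_abcd (not normalized)
    return ((d + a) / 2) * (basis('0000') + basis('1111')) \
         + ((a - d) / 2) * (basis('0011') + basis('1100')) \
         + ((c - b) / 2) * (basis('0110') + basis('1001')) \
         + ((b + c) / 2) * (basis('0101') + basis('1010'))

psi = G_abcd(1, 0, 0, 1)   # b=c=0, a=d: class A1.1, i.e. |0000> + |1111>
\end{verbatim}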
\begin{table} \begin{tabular}{p{2.3cm} m{5.4cm}} \hline \multicolumn{1}{c}{Family, SLOCC class} & {Conditions on coefficients} \\ \hline $G_{abcd}$, A1.1 & $b=c=0, ad\neq0, a=\pm d$ \\ $G_{abcd}$, A1.2 & $b=c=0, ad\neq0, a\neq \pm d, a^2+d^2=0$ \\ $G_{abcd}$, A1.3 & $b=c=0, a^2+d^2 \neq 0$ \\ $G_{abcd}$, A2.1 & $a=d, a \neq \pm b, b=+c, a^2+b^2=0$ \\ $G_{abcd}$, A2.2 & $a=d, b=c, a \neq \pm b, a^2+b^2 \neq 0$ \\ $G_{abcd}$, A3.1 & $a=d, a \neq \pm b, b=-c, a^2+b^2=0$ \\ $G_{abcd}$, A3.2 & $a=d, b=-c, a \neq \pm b, a^2+b^2 \neq 0$ \\ $G_{abcd}$, A4.1 & $a=d,$ either $a= \pm b$ or $\pm c, 2a^2+b^2+c^2 =0$ \\ $G_{abcd}$, A4.2 & $a=d,$ either $a= \pm b$ or $\pm c, 2a^2+b^2+c^2 \neq 0$\\ $G_{abcd}$, A4.3 & $a=d, a \neq \pm b, a \neq \pm c, 2a^2+b^2+c^2 = 0$\\ $G_{abcd}$, A4.4 & $a=d, a \neq \pm b, a \neq \pm c, 2a^2+b^2+c^2 \neq 0$\\ $G_{abcd}$, A4.5 & $a^2+b^2+c^2+d^2=0$ \\ $G_{abcd}$, A4.6 & $a^2+b^2+c^2+d^2 \neq 0$ \\ $L_{abc_2}$, B1.1 & $c=0, a=b \neq 0$ \\ $L_{abc_2}$, B1.2 & $c=0, a=-b \neq 0$ \\ $L_{abc_2}$, B1.3 & $c=0, a \neq \pm b, a^2+b^2=0$ \\ $L_{abc_2}$, B1.4 & $c=0, a \neq \pm b, ab \neq 0, a^2+b^2 \neq 0$ \\ $L_{abc_2}$, B1.5 & $c=0, a \neq \pm b, ab=0$ \\ $L_{abc_2}$, B2.1 & $abc \neq 0, a=b, a= \pm c$ \\ $L_{abc_2}$, B2.2 & $abc \neq 0, a=b, a \neq \pm c, a^2+c^2=0$ \\ $L_{abc_2}$, B2.3 & $abc \neq 0, a=b, a \neq \pm c, a^2+c^2 \neq 0$\\ $L_{abc_2}$, B3.1 & $abc \neq 0, a=-b, a = \pm c$ \\ $L_{abc_2}$, B3.2 & $abc \neq 0, a=-b, a \neq \pm c, a^2+c^2=0$ \\ $L_{abc_2}$, B3.3 & $abc \neq 0, a=-b, a \neq \pm c, a^2+c^2 \neq 0$ \\ $L_{abc_2}$, B4.1 & $abc \neq 0, a \neq \pm b, a= \pm c, 3a^2+b^2=0$\\ $L_{abc_2}$, B4.2 &$abc \neq 0, a \neq \pm b, a= \pm c, 3a^2+b^2 \neq 0$ \\ $L_{abc_2}$, B4.3 & $abc \neq 0, a \neq \pm b, a^2 + b^2 + 2c^2 = 0$\\ $L_{abc_2}$, B4.4 & $abc \neq 0, a \neq \pm b, a^2 + b^2 + 2c^2 \neq 0$\\ $L_{abc_2}$, B5.1 & $c \neq 0, a=b=0$ \\ $L_{abc_2}$, B5.2 & $c \neq 0, a=0, b=c$ \\ $L_{abc_2}$, B5.3 & $c \neq 0, a=0, b \neq \pm c, b^2+2c^2=0$ \\ $L_{abc_2}$, B5.4 & $c \neq 0, a = 0, b \neq \pm c, b^2 + 2c^2 \neq 0$ \\ $L_{a_2b_2}$, V1 & $a= \pm b \neq 0$ \\ $L_{a_2b_2}$, V2 & $a \neq \pm b, ab \neq 0, a^2+b^2=0$ \\ $L_{a_2b_2}$, V3 & $a \neq \pm b, ab \neq 0, a^2+b^2\neq 0$ \\ $L_{a_2b_2}$, V4 & $a \neq \pm b, ab=0$ \\ $L_{ab_3}$, R1.1 & $a=b=0$ \\ $L_{ab_3}$, R1.2 & $a=b \neq 0$ \\ $L_{ab_3}$, R1.3 & $a=-b \neq 0$ \\ $L_{ab_3}$, R2.1 & $a=0, b \neq 0$ \\ $L_{ab_3}$, R2.2 & $a \neq 0, b=0$ \\ $L_{ab_3}$, R3.1 & $a \neq \pm b, ab \neq 0, 3a^2+b^2\neq 0$ \\ $L_{ab_3}$, R3.2 & $a \neq \pm b, ab \neq 0, 3a^2+b^2=0, b=+i\sqrt{3}a$ \\ $L_{ab_3}$, R3.2* & $a \neq \pm b, ab \neq 0, 3a^2+b^2=0, b=-i\sqrt{3}a$ \\ $L_{a_4}$, La.1 & $a=0$ \\ $L_{a_4}$, La.2 & $a \neq 0$ \\ $L_{a_20_{3\oplus\bar{1}}}$& $a \neq 0$ \\ $L_{0_{5\oplus\bar{3}}}$ & no conditions \\ $L_{0_{7\oplus\bar{1}}}$ & no conditions \\ \hline \end{tabular} \caption{The 49 true SLOCC classes identified in \cite{slocc_in_efs} within the nine families of Eq.\eqref{eq:nine_classes}, with the conditions on the parameters $a$, $b$, $c$ and $d$.
Notice that the family $L_{0_{3\oplus\bar{1}}0_{3\oplus\bar{1}}}$ is not included, because it is not characterized by four-qubit entanglement: its representative state is a product state of the one-qubit state $\ket{0}$ and the three-qubit $\ket{GHZ}$ state \cite{slocc_in_efs}.} \label{tab:slocc_classes_all} \end{table} It is clear that the number of different SLOCC classes increases dramatically with the number of qubits, i.e. the complexity of the entanglement grows exponentially with the number of qubits involved. To overcome this complexity and approach entangled states from a computational point of view, we apply the RL algorithm described in Secs.\ref{sub_I_A} and \ref{sub_I_B}, namely Q-learning \cite{Q_learn}, to synthesize quantum circuits that produce four-qubit entangled states belonging to the EFs shown above. We illustrate in detail in the next sections how Q-learning is exploited for our purpose. \section{Quantum circuits design with reinforcement learning}\label{sec_II} \subsection{Q-learning algorithm applied to quantum circuits design}\label{sub_II_A} In order to apply Q-learning to our case, we implement the basic components of the algorithm as follows: \begin{itemize} \item the \textbf{objectives} that we search for are the SLOCC classes (Tab.\ref{tab:slocc_classes_all}) included in the nine EFs (Eq.\eqref{eq:nine_classes}), i.e. the representative states of the four-qubit SLOCC classes; \item the \textbf{environment} lives in the four-qubit state space; \item the \textbf{actions} that the \textbf{agent} (the algorithm itself) can perform consist in the application of quantum gates from a chosen set of gates; \item the \textbf{rewards} are encoded in an R-matrix whose entries correspond to state-action pairs. \end{itemize} Recalling that the goal of the algorithm is to design circuits that generate entangled states of four qubits, the training part consists in updating the Q-matrix, whereas in the testing part the agent reaches the final objective state, producing the desired circuit. More specifically: \paragraph{Objectives.} The algorithm can only focus on one target state at a time, thus we fix a single objective state, the representative of a SLOCC class in Tab.\ref{tab:slocc_classes_all}. To suit the algorithm procedure, the chosen representative state is encoded as the list of its superposition terms: e.g., if the SLOCC class is A1.1 and the representative state reads $\ket{\Psi}_{A1.1}=1/\sqrt{2}\left(\ket{0000}+\ket{1111}\right),$ then it is encoded as $\Psi_{A1.1}=\{0000,1111\}$. This object is a representation of the objective state that the algorithm can process. In general, the state that we want to target can be written as: \begin{gather}\label{eq:obj_state} \ket{\Psi}=\sum_{j=1}^{16}\alpha_j \ket{\psi}_j, \end{gather} where the $\ket{\psi}_j$ are the sixteen basis states of the four-qubit state space and the $\alpha_j$ are the superposition coefficients, which can be either zero or non-zero. This state, for the purpose of the algorithm, is encoded as: \begin{gather}\label{eq:obj_represent} \Psi=\{\psi_1,\dots,\psi_n\} \end{gather} with $n$ the number of terms of the objective quantum state $\ket{\Psi}$. Notice that in Eq.\eqref{eq:obj_represent} only the basis states $\ket{\psi_j}$ with $\alpha_j \neq 0$ are listed.
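A minimal sketch of this encoding (Python/NumPy; the function and variable names are ours, for illustration only):
\begin{verbatim}
import numpy as np

def encode(psi, tol=1e-12):
    # list representation of a four-qubit state: the labels of the
    # basis states with nonzero amplitude (coefficients are discarded)
    return [format(j, '04b') for j in range(16) if abs(psi[j]) > tol]

ghz4 = np.zeros(16)
ghz4[[0, 15]] = 1 / np.sqrt(2)   # (|0000> + |1111>)/sqrt(2)
print(encode(ghz4))              # ['0000', '1111'], i.e. Psi_{A1.1}
\end{verbatim}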
We stress that there is a clear distinction between $\ket{\Psi}$, the physical representation of the quantum state, and the abstract list-object $\Psi$ that represents it in the algorithm procedure. In fact, as we will see in the next sections, after the Q-learning procedure further manipulations of the algorithm output are needed to obtain the physical objective state $\ket{\Psi}$ (see Appendix \ref{Appendix}).\\ \paragraph{Environment.} The environment of our algorithm is made up of quantum states of four qubits. We divide the Hilbert space into sub-sets characterized by states with a fixed number of terms in their superposition, i.e. a single-term set, a double-term set, etc. Moreover, in order to have a finite and discrete number of states to explore, we represent each state of the sub-sets as the list of its superposition terms, without considering the coefficients; recording continuous coefficients would be prohibitive in terms of computational resources. To overcome the issues that can arise from this limitation, we apply a post-processing procedure that allows us to tune the coefficients of the resulting state so as to match the desired ones of the objective quantum state $\ket{\Psi}$ (see Appendix \ref{Appendix}). If the representative state of the class has $n$ terms, then the algorithm explores the sub-sets with $m$ terms, $m \leq n$. Notice that the dimension of the environment (i.e. the total dimension of the sub-sets involved) grows with the number of terms of the target state. This choice allows us to explore only a limited number of sub-sets. Although this method limits the possibility of exploring some of the four-qubit entanglement classes, our algorithm proves to be an efficient way to design quantum protocols for a significant part of them.\\ \paragraph{Actions.} The actions that the agent can perform consist in the application of single quantum gates (both single-qubit and multi-qubit gates). The set of gates that the algorithm can apply is established before the training part. However, depending on our needs we can update the set of gates before each search; for example, in order to reach specific classes, we added new gates with respect to the initial ones.\\ \paragraph{Rewards.} The R-matrix entries correspond to \textit{state-gate} pairs, where the states are those of the $m$-term sub-sets that compose the environment. The only entries carrying a value $\neq 0$ are those for which the \textit{gate} applied to the \textit{state} yields the target state. \subsection{Algorithmic procedure}\label{sub_II_B} When a target state $\ket{\Psi}$ is established and encoded in the list-shaped object $\Psi$ of Eq.\eqref{eq:obj_represent}, the algorithm first builds the R-matrix and initializes the Q-matrix as a zero matrix with the same shape as the R-matrix. We recall that the size of the R-matrix depends on the number of terms of the target state and on the number of gates that we decide to use. \paragraph{Training part.} During each episode of the training part, the algorithm is initialized in a random quantum state among those belonging to the sub-sets involved. Then it applies to the current state a random gate taken from the gate set. It reads the resulting state and checks whether the desired target state has been reached.
By means of the Q-learning cost function (Eq.\eqref{eq:Q_formula}), it updates the value of the Q-matrix entry corresponding to the pair \textit{current state-gate}. This procedure, which constitutes a single episode of the Q-learning algorithm, is repeated a fixed number of times, established before the training part. At the end of this run, we calculate what we call the changing rate (CR): this quantity evaluates how much the Q-matrix values change with respect to the values before the run. We set a threshold under which the Q-matrix is considered substantially unchanged. After a sufficient number of episodes, if the algorithm reaches the desired threshold, we consider the Q-learning at convergence and proceed to the testing part. The number of repetitions needed to explore the whole environment depends on its dimension (which, as mentioned, is linked to the number of terms of the searched state). In Fig.\ref{fig:Class_B1.1_percentage_modification} we show an example of this convergence procedure: the threshold for the CR is set at $10\%$, and the CR of the Q-matrix decreases as the number of episodes increases. \begin{figure} \includegraphics[width=1\linewidth]{Class_B1.1_percentage_modification.eps} \caption[]{Decrease of the Q-matrix changing rate (CR) with the number of episodes in the training part. We consider that the algorithm has successfully updated the Q-matrix when the CR reaches or passes the threshold.} \label{fig:Class_B1.1_percentage_modification} \end{figure} \paragraph{Testing part.} In this part we check whether the algorithm is able to find a suitable protocol to reach the objective state. We select an arbitrary initial state, e.g. $\ket{0000}$, encoded as $\{0000\}$. The algorithm checks the Q-matrix values in the cells linking the current state to the quantum gates, i.e. the entries with coordinates ($\{0000\}$,\textit{gate}$_i)$, with $i$ running over all the gates of the gate set. It then selects the action corresponding to the largest Q-value and applies the gate to the current state. After reading the resulting state, it sets the latter as the new current state and repeats the procedure until the final target state is reached. By recording the sequence of states and applied gates, it builds a quantum protocol that starts from the arbitrary initial state and reaches the objective one. Let us call $\ket{\Psi}_{out}$ the output state of the quantum protocol synthesized by the algorithm. The algorithm representation of the output state should be equal to the searched one, $\Psi$: in other words, $\ket{\Psi}_{out}$ must have the same superposition terms listed in $\Psi$, i.e. the same superposition terms as $\ket{\Psi}$. This outcome proves that the training procedure was completed successfully, and thus that the algorithm is able to produce the quantum circuit that generates the target state. As already mentioned, in order to have an output state $\ket{\Psi}_{out}$ that also matches the coefficients of the quantum state $\ket{\Psi}$, we developed the post-processing procedure shown in Appendix \ref{Appendix}. \section{Machine learning generation of entangled 4-qubit states}\label{sec_III} In this section we apply the Q-learning algorithm to design quantum protocols that reach the four-qubit entanglement classes.
Using the classification in Tab.\ref{tab:slocc_classes_all} \cite{slocc_in_efs}, we present some of the quantum circuits generated by the algorithm to reach the representative states of those classes. We find that not all of these true SLOCC classes can be handled with the aid of our algorithm. Therefore, we point out and explain the features of the circuits that we manage to find for some of the classes, focusing on the specific gates that we introduced. We start our search for quantum circuits with a simple set of gates, focusing first on the EF $L_{0_{3\oplus\bar{1}}0_{3\oplus\bar{1}}}$, i.e. the ninth EF. Although it is characterized by only three-partite entanglement, and thus is not a true SLOCC class, it is a good starting point to test our algorithm, since our results can be validated against the pre-existing literature on three-qubit entangled states \cite{D_r_2000, nielsen_chuang_2010}. Moreover, the representative state of this family has only two terms, meaning that the environment we have to build is extremely limited and thus fast to explore. Therefore, we take the representative state of the ninth family as an instructive example of the algorithm's performance, and we present the results obtained with a simple set of gates, observing also its intrinsic limits. Secondly, we show how this reduced gate set turns out to be unsuccessful in reaching some of the other classes. We take as examples the seventh and eighth EFs, since they coincide with true SLOCC classes (see Eqs.\eqref{eq:nine_classes} and Tab.\ref{tab:slocc_classes_all}). For these and other classes we are not able to produce a suitable circuit with the initial set of gates. To overcome this obstacle, we update the set by adding the Toffoli gate: this allows us to reach some of the classes that were previously precluded. Afterwards, we show that this new gate set is still not sufficient, as it does not allow us to reach some of the classes whose representatives have an odd number of terms, as will be elucidated below. The solution that we found to this problem consists in adding the controlled-Hadamard (C-H) gate to our gate set. By adding new gates to our set we manage to reach a large number of the SLOCC classes in Tab.\ref{tab:slocc_classes_all}, without altering the overall structure of the algorithm. Finally, we summarize in Tabs.\ref{tab:slocc_classes_EF1_EF2}, \ref{tab:slocc_classes_EF3_EF4_EF5} and \ref{tab:slocc_classes_EF6_EF7_EF8} which of the 49 true SLOCC classes (Tab.\ref{tab:slocc_classes_all}) we manage to reach with quantum protocols built by the algorithm, or with our algorithm followed by the post-processing procedure of Appendix \ref{Appendix}. Moreover, we suggest that some of the classes that we did not manage to reach could be approached with phase-dependent gates or with new subroutines appended to our procedure. \subsection{First set of gates}\label{sub_III_A} As a universal set of quantum gates we introduce the set comprising the Hadamard gate, the phase gate, the controlled-not (C-NOT) gate and the Toffoli gate (C-C-NOT). This is equivalent to the standard universal gate set that includes the Hadamard, the phase gate, the C-NOT and the $\pi/8$ gate \cite{nielsen_chuang_2010,RMP02}.
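In practice, the action of a gate on the four-qubit state space is obtained by tensoring its matrix with identities on the remaining qubits. A sketch (Python/NumPy; the helper function is ours, for illustration, with qubit $A$ taken as the leftmost bit of the label):
\begin{verbatim}
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
X = np.array([[0, 1], [1, 0]])                 # quantum NOT
I = np.eye(2)

def on_qubit(gate, k):
    # embed a single-qubit gate acting on qubit k (k = 0 is qubit A)
    # into the sixteen-dimensional four-qubit space
    ops = [I, I, I, I]
    ops[k] = gate
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

psi = np.zeros(16)
psi[0] = 1.0                  # |0000>
psi = on_qubit(H, 1) @ psi    # (|0000> + |0100>)/sqrt(2)
\end{verbatim}
Multi-qubit gates such as the C-NOT and the Toffoli are embedded analogously, as fixed permutation matrices on the relevant qubits.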
\begin{table} \begin{tabular}{lcc} \hline \multicolumn{1}{|c|}{Name} & \multicolumn{1}{c|}{Symbol} & \multicolumn{1}{c|}{Matrix representation} \\ \hline \\ Hadamard (H) & \Qcircuit @C=1em @R=.6em { & \gate{H} & \qw} & $\frac{1}{\sqrt{2}}\bigg( {\begin{array}{*{20}{c}} 1&1\\ 1&{ - 1} \end{array}} \bigg)$ \\ \\ Phase (S) & \Qcircuit @C=1em @R=.6em { & \gate{S} & \qw} & $\bigg( {\begin{array}{*{20}{c}} 1&0\\ 0&i \end{array}} \bigg)$ \\ \\ Controlled-not (C-NOT) & \Qcircuit @C=1em @R=.6em { & \ctrl{1} & \qw \\ & \targ & \qw} & \footnotesize $\left( {\begin{array}{*{20}{c}} 1&0&0&0\\ 0&1&0&0\\ 0&0&0&1\\ 0&0&1&0 \end{array}} \right)$ \normalsize \\ \\ Toffoli (C-C-NOT) & \Qcircuit @C=1em @R=.6em { & \ctrl{1} & \qw \\ & \ctrl{1} & \qw \\ & \targ & \qw} & \tiny $\quad \left( {\begin{array}{*{20}{c}} 1&0&0&0&0&0&0&0\\ 0&1&0&0&0&0&0&0\\ 0&0&1&0&0&0&0&0\\ 0&0&0&1&0&0&0&0\\ 0&0&0&0&1&0&0&0\\ 0&0&0&0&0&1&0&0\\ 0&0&0&0&0&0&0&1\\ 0&0&0&0&0&0&1&0 \end{array}} \right)$ \normalsize \\ \\ \hline \end{tabular} \caption{Universal set of quantum gates \cite{nielsen_chuang_2010}.} \label{tab:universal_set} \end{table} However, as a first attempt we use a reduced set of gates, made up of the C-NOT, the X gate (quantum NOT) and the Hadamard. This choice aims at reaching ``simple'' classes in the most direct way: some of the classes can indeed be obtained trivially with this reduced set of gates, and approaching them with a universal set would introduce unnecessary complications and a heavier computational cost. Thus, we start with a simple set and, once we meet obstacles with this initial toolbox, we add new gates. With this approach, it can be argued that we end up with a set that includes a universal set but has some redundancy, meaning that not all the gates are independent. However, by adding redundancy we gain in efficiency: a richer toolbox improves the performance of the learning part in terms of computational time. Furthermore, by including unconventional gates in the toolbox (such as the controlled-Hadamard and the Toffoli) we help the algorithm find optimal circuits characterized by a smaller number of gates. Moreover, handling different entanglement classes with different sets of gates, growing in complexity, allows us to better observe their entanglement properties. For example, some of the entanglement classes explicitly need gates that act on three qubits (see Sec.\ref{sub_III_B} for the introduction of the Toffoli gate), while other classes do not require this kind of gate. In other words, adding gates step by step helps us to point out their key role in reaching specific SLOCC classes. As anticipated, we first analyze the ninth EF. As its environment is extremely limited, the time needed to check whether the algorithm succeeds is relatively short. In fact, since the representative of the family is: \begin{gather} L_{0_{3\oplus\bar{1}}0_{3\oplus\bar{1}}}=\ket{0000}+\ket{0111}, \label{eq:class9} \end{gather} whose algorithm representation reads $\{0000,0111\}$, the only reward and quality matrices that we need to build are those connecting \textit{actions} (i.e. gate applications) to single-term states (ST) and double-term states (DT).
The ST and DT sets read: \begin{gather*}\small ST=\left\{ {\begin{array}{*{20}{c}} { {{\rm{0000}}} }\\ { {{\rm{0001}}} }\\ { {{\rm{0010}}} }\\ \vdots \end{array}} \right\} \qquad DT=\left\{ {\begin{array}{*{20}{c}} {\{ {{\rm{0000}}{\rm{,0001}}} \}}\\ {\{ {{\rm{0000}}{\rm{,0010}}} \}}\\ {\{ {{\rm{0010}}{\rm{,0011}}} \}}\\ \vdots \end{array}} \right\}. \end{gather*} From now on, we will refer to the representative states of the nine families as $\ket{\Psi}_J$, with $J=1,\dots,9$, and to the representative states of the SLOCC classes with the notation of Tab.\ref{tab:slocc_classes_all} and \cite{slocc_in_efs}. In Fig.\ref{fig:class9gridr1insight} we show a portion of the reward matrix of the ninth EF of Eq.\eqref{eq:class9} for the DT set, carrying rewards only for certain \textit{state-action} pairs, highlighted in green. After training the algorithm, the Q-matrices are built: in Fig.\ref{fig:class9gridqinsight_ST} we show the corresponding portion of the Q-matrix for the ST sub-set. The pattern of weighted rewards, i.e. the quality values $Q(s,a)$, is clearly visible. \begin{figure} \includegraphics[width=1\linewidth]{Class_9_Grid_R1_reloaded.eps} \caption{Reward matrix for the DT states of the EF $\ket{\Psi}_9$, i.e. $L_{0_{3\oplus\bar{1}}0_{3\oplus\bar{1}}}$. The green cells represent the rewards, present only for the \textit{state-action} pairs that lead directly to the objective class.} \label{fig:class9gridr1insight} \end{figure} \begin{figure} \includegraphics[width=1\linewidth]{Class_9_Grid_Q0_reloaded.eps} \caption{Q-matrix of the ninth EF, built for the ST states. The cells represent the \textit{state-action} pairs and the color grade indicates the Q-value itself. While the R-matrix includes just the single-gate rewards, this matrix contains a pattern of weighted rewards that helps the algorithm choose the best gate to apply from a given state.} \label{fig:class9gridqinsight_ST} \end{figure} In Fig.\ref{fig:class9circ} the result of the testing part is displayed in the form of a quantum circuit. The sequential application of the gates produces the output state $\ket{\Psi}_{out}$ which, in terms of its algorithm representation, matches exactly the desired representative state $\Psi_9 \rightarrow \{0000,0111\}$. \begin{figure} \centering \includegraphics[width=0.4\linewidth]{Class_9_circ_no_toff_CH.eps} \caption{Quantum circuit for the ninth family resulting from the reinforcement learning algorithm.} \label{fig:class9circ} \end{figure} Based on this first successful result we can start dealing with more time-demanding classes.\\ In Secs.\ref{sub_III_B} and \ref{sub_III_C} we explain why we decide to add new gates to the reduced toolbox, elucidating the difficulties we encounter in trying to reach some of the entanglement classes. \subsection{Adding the Toffoli gate}\label{sub_III_B} Two of the entanglement classes that the algorithm struggled to handle at this stage are the seventh and the eighth: \begin{gather} \ket{\Psi}_7=\ket{0000}+\ket{0101}+\ket{1000}+\ket{1110},\\ \ket{\Psi}_8=\ket{0000}+\ket{1011}+\ket{1101}+\ket{1110}. \end{gather} For an entangled state with four terms, like $\ket{\Psi}_7$ and $\ket{\Psi}_8$, we have to explore all the sub-sets including states with $n\leq 4$ terms, i.e. the ST sub-set, the DT sub-set, the three-term sub-set (TT) and the four-term sub-set (FT). During the training part the Q-matrices should be updated, meaning that the exploration of these sub-sets spreads the weighted rewards.
The problem is that the quality matrices for these classes are built only partially. Indeed, we notice that, at the end of the training, only the Q-matrix related to the FT states has been updated, while the other three Q-matrices (related to the TT, DT and ST states) remain unchanged. This affects our capability to reach the desired entangled state from an arbitrary quantum state in our environment. Indeed, if the agent is placed in one of the TT, DT or ST states, it has no clue as to which action is best in order to reach the desired state in the four-term sub-set, since no rewards have been spread in the part of the environment where it is located. As a consequence, during the testing part, if the agent is placed in one of the ST, DT or TT sub-sets, it can only take random actions because, the update being incomplete, all the state-action pairs have the same quality weight (equal to zero). We can visualize the connections between quantum states, encoded in the Q-matrix, with the state-link graph (SLG) shown in Fig.\ref{fig:class7graphnotoffchtotal}. The nodes of the graph represent the environment states, and the links connecting two states correspond to the rewarded application of a single gate. Different concentric shells correspond to different sub-sets, i.e. to sub-sets whose states have a different number of terms. The innermost shell is the one related to the four-term states: it is the shell where the representative state of the class is located. The other shells, from the outer one to the inner one, contain the ST states, the DT states and the TT states. The color grade of the nodes refers to the number of connections that each state has with other states, and the colors of the links correspond to the Q-values. We can see that the nodes of the outer shells have no rewarded connections, while the four-term shell has links between its states. \begin{figure} \includegraphics[width=1.1\linewidth]{Class_7_Graph_no_toff_CH.eps} \caption[]{State-link graph (SLG) encoding the information of the Q-matrix for the $\ket{\Psi}_7$ SLOCC class. Each circular shell represents a sub-set of states, from the outer one to the inner one: single-term (ST), double-term (DT), three-term (TT) and four-term (FT). The nodes represent the states, while the rewarded single-gate transformations are represented by the links between nodes. The color grade of the nodes refers to the number of connections that each state has with other states, belonging to the same shell or to different shells. The color of the edges indicates the weighted reward associated with that gate application.} \label{fig:class7graphnotoffchtotal} \end{figure} It is clear that the state space is not completely explored, and this prevents the agent from reaching the objective state from our standard starting state $\ket{\psi}_0=\ket{0000}\rightarrow\psi_0=\{0000\}$. This behavior indicates that the algorithm could not spread the reward pattern in the Q-matrix outside a certain region of the environment. If we want our algorithm to start its path from an arbitrary initial state, we have to provide shortcuts that connect different shells with a rewarded link. Notice that this obstacle cannot be traced back to the number of terms in the superposition of the objective state, because for other classes with four terms the algorithm worked efficiently.
As an example we can look at the entanglement family $G_{abcd}$, in particular the SLOCC class A4.1 in Tab.\ref{tab:slocc_classes_all}, where the parameters are set as $a=d,\; a= \pm b, \;2a^2+b^2+c^2 =0$, with $a=0$: \begin{gather}\notag \ket{\Psi}_{A4.1}(a=0)=\ket{0101}+\ket{1010}+\ket{0110}+\ket{1001}\\ \notag \downarrow \text{algorithm representation}\\ \Psi_{A4.1}(a=0)=\{0101,1010,0110,1001\} \end{gather} For this class the algorithm is able to find the circuit shown in Fig.\ref{fig:classA4.1circ}. \begin{figure} \includegraphics[width=0.5\linewidth]{Class_A4.1_a0_NO_CH_circ.eps} \caption{Quantum circuit for the representative state $\ket{\Psi}_{A4.1}(a=0)$.} \label{fig:classA4.1circ} \end{figure} The reason for this behavior lies in the particular terms of the superposition characterizing the two classes $\ket{\Psi}_8$ and $\ket{\Psi}_7$: both have one or more terms with three qubits in the state $\ket{1}$. In fact $\ket{\Psi}_8$ contains $\ket{1011}$, $\ket{1101}$ and $\ket{1110}$, and $\ket{\Psi}_7$ includes $\ket{1110}$. To account for this feature it becomes necessary to introduce the Toffoli gate (C-C-NOT). It performs a quantum NOT on a target qubit only if the two control qubits are both in the state $\ket{1}$. Indeed, acting on three qubits, the Toffoli gate generates this type of superposition without creating or canceling other terms in the state. For our purpose, the Toffoli gate allows the algorithm to reach superpositions of basis elements that include terms with three qubits in the state $\ket{1}$. Indeed, after the introduction of the C-C-NOT in the set of gates, the algorithm manages to find successful circuits for $\ket{\Psi}_7$, $\ket{\Psi}_8$ and similar cases. In Fig.\ref{fig:class7circ} we show how the quantum circuit for the seventh EF is built. \begin{figure} \centering \includegraphics[width=0.5\linewidth]{Class_7_circ_NO_CH.eps} \caption{Quantum circuit built for $\ket{\Psi}_7$ with the aid of the Toffoli gate.} \label{fig:class7circ} \end{figure} We can also observe how widely the Q-matrices are updated once the Toffoli is introduced: the state-link graph (SLG) appears completely filled with links that connect different shells (Fig.\ref{fig:class7graphtotal}). \begin{figure} \includegraphics[width=1.1\linewidth]{Class_7_Graph_NO_CH.eps} \caption{State-link graph (SLG) for the objective state $\ket{\Psi}_{7}$ after the introduction of the Toffoli gate. In this graph the four shells are completely connected by rewarded single-gate links, in contrast with the situation of Fig.\ref{fig:class7graphnotoffchtotal}.} \label{fig:class7graphtotal} \end{figure} With the addition of the Toffoli gate we have almost reached a universal set of gates, comprising the Hadamard gate, the Z gate, the C-NOT and the Toffoli gate \cite{nielsen_chuang_2010}. However, since we do not keep track of the coefficients, we do not need the phase gate. \subsection{Adding the Controlled-Hadamard gate}\label{sub_III_C} Proceeding with the analysis of the SLOCC classes we find that, at this stage, the algorithm is unable to reach some states with an odd number of terms in their superposition, in particular classes whose representative has three terms. The best candidate to help us reach those states is the Hadamard gate, because it introduces superposition.
\subsection{Adding the Controlled-Hadamard gate}\label{sub_III_C} Proceeding with the analysis of the SLOCC classes we find that, at this stage, the algorithm is unable to reach some states with an odd number of terms in their superposition, in particular states whose representative has three terms. The best candidate to help us reach those states is the Hadamard gate, because it introduces superposition. Its action reads: \begin{gather} H\ket{0}\rightarrow\frac{1}{\sqrt{2}}\left(\ket{0}+\ket{1}\right),\quad H\ket{1}\rightarrow\frac{1}{\sqrt{2}}\left(\ket{0}-\ket{1}\right). \end{gather} In terms of a quantum optics device, taking the polarization as our degree of freedom and considering linearly polarized photons, the Hadamard corresponds to a polarization rotator that performs a rotation of $\pi/4$, i.e., a half-wave plate. Indeed, horizontal and vertical polarizations can be encoded respectively as the $\ket{0}$ and $\ket{1}$ states of the computational basis. What happens to a qubit subject to the action of a Hadamard gate is analogous to what happens to a photon passing through a half-wave plate: it is projected onto the basis $\{\ket{+}=\frac{1}{\sqrt{2}}\left(\ket{0}+\ket{1}\right), \ket{-}=\frac{1}{\sqrt{2}}\left(\ket{0}-\ket{1}\right)\}$ \cite{scully_zubairy_1997}. Applying the Hadamard to a single-term state we obtain a double-term state, and one might expect that applying it to a double-term state could yield a three-term state. However, when we look at what happens when a Hadamard is applied to a double-term state \textit{in a homogeneous superposition}, we can see that there are just a few possible outcomes. There are two different cases, depending on the entanglement. If the state is entangled and in a homogeneous superposition, the action of the Hadamard gate can only lead to one type of result; e.g. on one of the Bell states it reads: \begin{gather} H(A)(\ket{00}+\ket{11})\to \ket{00}+\ket{10}+\ket{01}-\ket{11}, \end{gather} where $H(A)$ is the Hadamard acting on the first qubit. A similar superposition of four terms is obtained from the action of one Hadamard gate on the other three Bell states. The other possibility is that the two-qubit state on which the Hadamard acts is not entangled; then the result can be summarized in the following way: \begin{itemize} \item[\textit{i}] Action on the factorizable qubit\\ $H(A)(\ket{01}+\ket{00})\to H(A)\ket{0}_A(\ket{1}_B+\ket{0}_B)\\ \to \ket{01}+\ket{11}+\ket{00}+\ket{10}.$ \item[\textit{ii}] Action on the non-factorizable qubit\\ $H(B)(\ket{01}+\ket{00})\to \ket{0}_AH(B)(\ket{1}_B+\ket{0}_B)\\ \to \ket{00}.$ \end{itemize} This means that the Hadamard gate, acting on two-qubit states in homogeneous superpositions, can only double the number of terms (from two to four) or halve it, returning a single-term state. This discussion can be generalized to the three- and four-qubit cases. Indeed, as the Hadamard gate is a single-qubit gate, if we want to obtain a recombination of terms, i.e. to sum two or more terms, we need superpositions of terms that differ from each other by a single qubit, and the Hadamard gate must act on that specific qubit: \begin{gather} H(A)\underbrace{(\ket{01}+\ket{11})}_{ \substack{\text{These two terms differ}\\ \text{ by the first qubit A}}}\to \ket{01}+\ket{11}+\ket{01}-\ket{11}\to \ket{01}. \end{gather} If we apply the Hadamard gate to a two-term state of three or four qubits whose terms differ by more than one qubit, we will always obtain four terms as a result, because no recombination of terms is possible, keeping in mind that the Hadamard acts on only one qubit at a time.
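These selection rules can be verified with a small sketch that tracks the terms of an unnormalized superposition together with their signs (a hypothetical Python illustration of the bookkeeping, not our actual environment code):
\begin{verbatim}
def hadamard(state, k):
    # Unnormalized Hadamard on qubit k of a state stored as
    # {bitstring: coefficient}; cancelled terms are dropped.
    out = {}
    for s, c in state.items():
        f = s[:k] + ('1' if s[k] == '0' else '0') + s[k+1:]
        if s[k] == '0':               # H|0> = |0> + |1>
            out[s] = out.get(s, 0) + c
            out[f] = out.get(f, 0) + c
        else:                         # H|1> = |0> - |1>
            out[f] = out.get(f, 0) + c
            out[s] = out.get(s, 0) - c
    return {s: c for s, c in out.items() if c != 0}

print(hadamard({'01': 1, '11': 1}, 0))  # recombination -> {'01': 2}
print(hadamard({'00': 1, '11': 1}, 0))  # Bell state -> four signed terms
\end{verbatim}
The first call halves the number of terms (the two terms differ only by the first qubit), the second doubles it; no choice of input yields three terms, in line with the discussion above.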
The observations above lead to the following results regarding four-qubit states, analogous to those listed for the two-qubit case: \begin{itemize} \item[\textit{i*}] The two terms differ from each other by more than one qubit, and the Hadamard gate acts on one of them\\ $H(B)(\ket{0100}-\ket{0010})\to \ket{0000}-\ket{0100}-\ket{0010}-\ket{0110}.$\\ We obtain a four-term superposition from a two-term one. \item[\textit{ii*}] The two terms differ from one another by more than one qubit and the Hadamard gate acts on the factorizable ones (analogous to case \textit{i})\\ $H(A)(\ket{0100}+\ket{0010})\to \ket{0100}+\ket{1100}+\ket{0010}+\ket{1010}.$\\ We obtain a four-term superposition from a two-term one. \item[\textit{iii*}] The two terms differ by a single qubit and the Hadamard gate acts on the factorizable ones (this case is not different from the previous one \textit{ii*}, because the Hadamard gate can only change one qubit at a time)\\ $H(A)(\ket{0100}+\ket{0000})\to \ket{0100}+\ket{1100}+\ket{0000}+\ket{1000}.$\\ We obtain a four-term superposition from a two-term one. \end{itemize} If the two terms differ by only one qubit, which corresponds to the qubit targeted by the Hadamard gate, then the result can, without loss of generality, be trivially referred back to the two-qubit case \textit{ii}: \begin{itemize} \item[\textit{iv*}] The two terms differ by a single qubit and the Hadamard gate acts on it\\ $H(D)(\ket{0000}+\ket{0001})\to\\ \to \ket{0000}+\ket{0001}+\ket{0000}-\ket{0001}\to \ket{0000}.$ We obtain a state that is no longer in a superposition, i.e. a single-term state from a two-term one. \end{itemize} Here we are performing a rotation from the polarization diagonal basis $\ket{+}_D=\ket{0}_D+\ket{1}_D$, $\ket{-}_D=\ket{0}_D-\ket{1}_D$ to the basis $\ket{0}_D/\ket{1}_D$. Thus, the Hadamard gate \textit{cannot provide three terms in a superposition}. The situation is different if we consider states with four terms and we want to reach five terms: in that case the Hadamard gate, together with the action of NOT or C-NOT gates, can recombine terms, and a state with five terms can be built from a four-term state. Since some of the entanglement classes that we want to reach have representative states with three terms, we choose to add a new gate to the previous set of gates, namely the Controlled-Hadamard (C-H) gate. To see the effectiveness of this choice, we take as an example the SLOCC class B1.1, whose representative state is $\ket{\Psi}_{B1.1}=\ket{0000}+\ket{1111}+\ket{0110}$ \cite{slocc_in_efs}. In Fig.\ref{fig:classB1.1graphtotalnotoffCH} we can see the state-link graph (SLG) related to this entanglement class without the addition of the C-H gate. \begin{figure} \includegraphics[width=1.1\linewidth]{Class_B1.1_Graph_NO_CH.eps} \caption{The state-link graph (SLG) for the objective state $\ket{\Psi}_{B1.1}$ without the C-H gate added, showing that the shells related to the ST states (outer shell) and the DT states (next-to-outer shell) are not connected.} \label{fig:classB1.1graphtotalnotoffCH} \end{figure} We notice that the TT states are isolated from the others, as happened before with $\ket{\Psi}_7$. With the C-H included in the toolbox we can overcome this obstacle and generate states that have three terms in their superposition. Without loss of generality, we can give an example of the action of a C-H on a simple two-qubit state, omitting normalization coefficients: \begin{gather} C\text{-}H(B,A)(\ket{01}_{AB}+\ket{10}_{AB})\rightarrow \ket{01}+\ket{11}+\ket{10}. \end{gather} In this example the C-H gate has qubit B as the control qubit and qubit A as the target. Thanks to the non-symmetric action of this gate, we can reach states with an odd number of terms via a shortcut that connects even-term and odd-term shells in the SLG.
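The conditional action just described can be sketched with the same signed-term bookkeeping (a hypothetical illustration reusing the hadamard() helper defined above; normalization is again omitted):
\begin{verbatim}
def controlled_h(state, control, target):
    # Controlled-Hadamard on a {bitstring: coeff} state: H acts on
    # `target` only for terms whose `control` bit is '1'.
    out = {}
    for s, c in state.items():
        terms = hadamard({s: c}, target) if s[control] == '1' else {s: c}
        for t, tc in terms.items():
            out[t] = out.get(t, 0) + tc
    return {s: c for s, c in out.items() if c != 0}

# the example from the text (qubit order AB, control = B, target = A):
print(controlled_h({'01': 1, '10': 1}, 1, 0))
# -> {'01': 1, '11': 1, '10': 1}: a three-term state
\end{verbatim}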
In Fig.\ref{fig:class_B1.1_graphtotal} we can see how the graph related to $\ket{\Psi}_{B1.1}$ is now connected by edges that allow the objective state to be reached. \begin{figure} \includegraphics[width=1.1\linewidth]{Class_B1.1_Graph.eps} \caption{The state-link graph (SLG) for the objective state $\ket{\Psi}_{B1.1}$ with the C-H gate added, showing how the shells corresponding to all numbers of terms link with one another.} \label{fig:class_B1.1_graphtotal} \end{figure} We report the quantum circuit generating $\ket{\Psi}_{B1.1}$ in Fig.\ref{fig:classB1.1circ}, and the corresponding optimal path in graph form in Fig.\ref{fig:classB1.1path}. \begin{figure} \includegraphics[width=0.4\linewidth]{Class_B1.1_circ.eps} \caption{Quantum circuit obtained for the SLOCC class B1.1 (Tab.\ref{tab:slocc_classes_all}) generated with the reinforcement learning algorithm.} \label{fig:classB1.1circ} \end{figure} \begin{figure} \includegraphics[width=1.1\linewidth]{Class_B1.1_Graph_optimal_path.eps} \caption{Quantum circuit for the class B1.1 in the SLG representation. These links represent connections created by the single gates of the protocol in Fig.\ref{fig:classB1.1circ}. The starting state on the right is $\Psi_0=\{0000\}$.} \label{fig:classB1.1path} \end{figure} \subsection{Complete and incomplete exploration of the entanglement classes}\label{sub_III_E} We summarize in Tabs.\ref{tab:slocc_classes_EF1_EF2}, \ref{tab:slocc_classes_EF3_EF4_EF5} and \ref{tab:slocc_classes_EF6_EF7_EF8} the sub-group of the 49 classes, shown in Tab.\ref{tab:slocc_classes_all}, that we managed to approach with our reinforcement learning algorithm. We recall that these SLOCC classes are identified from the original nine entanglement families in Eqs.\eqref{eq:nine_classes}, with constraints on the complex parameters $a$, $b$, $c$ and $d$ \cite{slocc_in_efs}. The colored dots in the \textit{Feasibility} column have the following meaning: \begin{itemize} \item[ ]\tikzcircle[fill=black!40!green]{4pt} $\rightarrow$ class that we are able to reach with the set of gates \{X, H, C-NOT, C-C-NOT, C-H\}, without the post-processing procedure, \item[ ] \tikzcircle[fill=green]{4pt} $\rightarrow$ class that can be reached with the previously mentioned set of gates plus the post-processing procedure described in Appendix \ref{Appendix}, \item[ ] \tikzcircle[fill=yellow]{4pt} $\rightarrow$ class that requires the post-processing procedure shown in Appendix \ref{Appendix} and the addition of phase gates. \end{itemize} For the SLOCC classes marked with a dark green dot (\tikzcircle[fill=black!40!green]{4pt}), we are able to produce the quantum circuits generating their representative state exploiting only the Q-learning procedure; some of them were shown in the previous sections. These classes have representative states that are homogeneous superpositions with real coefficients, i.e. all their terms have the same real coefficient.
For these classes, we managed to find suitable protocols, reported in Sect.~\ref{Quantum_Circuits}, that reproduce their representative states. We consider these results optimal given the settings of the algorithm. The classes with a light green mark (\tikzcircle[fill=green]{4pt}) required the post-processing procedure described in Appendix \ref{Appendix}. Indeed, even though the circuit produced by the Q-learning procedure has an output state with the same terms as the representative one, we need to tune the coefficients of this output state to match those of the desired state. The classes with a yellow mark (\tikzcircle[fill=yellow]{4pt}) have representative states in which one or more terms have imaginary coefficients. Hence, our algorithm is able to reach states with the right terms of the superposition, but our post-processing procedure is not able to match the coefficients in the end, as it is designed only to tune real coefficients. These states can be reached with a proper addition of the phase gates (S), the controlled-phase gates (C-S) or the $\pi/8$ gates, by means of which the phases of the coefficients can be rearranged to achieve the desired representative state. In any case, our algorithm can be used as a fruitful starting point. There are some classes in Tab.\ref{tab:slocc_classes_all} that are not included in this summary. This is because they are not fully classified \cite{slocc_in_efs}, meaning that they may or may not be true SLOCC classes. Let us recall that this feasibility categorization is based on the capabilities of our algorithm and is intrinsically linked to its working principles. Thus, among the classes listed in this section, we consider \textit{completely explored} the classes marked in dark green and light green, while we consider the others partially explored; their exploration could be finalized along the lines mentioned here or by further developments. We divide the SLOCC classes into three tables. The first one, Tab.\ref{tab:slocc_classes_EF1_EF2}, refers to the SLOCC classes that derive from the first two EFs in Eqs.\eqref{eq:nine_classes}. We notice that the four-qubit GHZ state is included in the first EF, the representative state of class A1.1 being \begin{gather} \ket{\text{GHZ}}_{4}=\frac{1}{\sqrt{2}}\left(\ket{0000}+\ket{1111}\right).
\end{gather} \renewcommand{\arraystretch}{1.6} \begin{table}[h] \centering \begin{tabularx}{\linewidth}{c | X | c} \hline Family, SLOCC class & Conditions on parameters & Feasibility \\ \hline $G_{abcd}$, A1.1 & $b=c=0, ad\neq0, a=\pm d$ & \tikzcircle[fill=black!40!green]{4pt}\\ $G_{abcd}$, A1.2 & $b=c=0, ad\neq0, a\neq \pm d, a^2+d^2=0$ & \tikzcircle[fill=yellow]{4pt}\\ $G_{abcd}$, A2.1 & $a=d, a \neq \pm b, b=+c, a^2+b^2=0$ & \tikzcircle[fill=yellow]{4pt}\\ $G_{abcd}$, A3.1 & $a=d, a \neq \pm b, b=-c, a^2+b^2=0$ & \tikzcircle[fill=yellow]{4pt}\\ $L_{abc_2}$, B1.1 & $c=0, a=b \neq 0$ & \tikzcircle[fill=green]{4pt}\\ $L_{abc_2}$, B1.2 & $c=0, a=-b \neq 0$ & \tikzcircle[fill=green]{4pt}\\ $L_{abc_2}$, B1.3 & $c=0, a \neq \pm b, a^2+b^2=0$ & \tikzcircle[fill=yellow]{4pt}\\ $L_{abc_2}$, B1.5 & $c=0, a \neq \pm b, ab=0$ & \tikzcircle[fill=green]{4pt}\\ $L_{abc_2}$, B2.1 & $abc \neq 0, a=b, a= \pm c$ & \tikzcircle[fill=green]{4pt}\\ $L_{abc_2}$, B2.2 & $abc \neq 0, a=b, a \neq \pm c, a^2+c^2=0$ & \tikzcircle[fill=yellow]{4pt}\\ $L_{abc_2}$, B3.1 & $abc \neq 0, a=-b, a = \pm c$ & \tikzcircle[fill=green]{4pt}\\ $L_{abc_2}$, B3.2 & $abc \neq 0, a=-b, a \neq \pm c, a^2+c^2=0$ & \tikzcircle[fill=yellow]{4pt}\\ $L_{abc_2}$, B5.1 & $c \neq 0, a=b=0$ & \tikzcircle[fill=green]{4pt}\\ $L_{abc_2}$, B5.2 & $c \neq 0, a=0, b=c=1$ & \tikzcircle[fill=green]{4pt}\\ $L_{abc_2}$, B5.3 & $c \neq 0, a=0, b \neq \pm c, b^2+2c^2=0$ & \tikzcircle[fill=yellow]{4pt}\\ \hline \end{tabularx} \caption{In this table we report the feasibility categorization of our algorithm and post-processing procedure for the SLOCC classes belonging to the EFs $G_{abcd}$ and $L_{abc_2}$.} \label{tab:slocc_classes_EF1_EF2} \end{table} In Tab.\ref{tab:slocc_classes_EF3_EF4_EF5} the SLOCC classes deriving from the third, fourth and fifth families in Eqs.\eqref{eq:nine_classes} are reported. \renewcommand{\arraystretch}{1.6} \begin{table}[h] \centering \begin{tabularx}{\linewidth}{c | X | c} \hline Family, SLOCC class & Conditions on parameters & Feasibility \\ \hline $L_{a_2b_2}$, V1 & $a= \pm b \neq 0$ & \tikzcircle[fill=green]{4pt}\\ $L_{a_2b_2}$, V2 & $a \neq \pm b, ab \neq 0, a^2+b^2=0$ & \tikzcircle[fill=yellow]{4pt}\\ $L_{a_2b_2}$, V4 & $a \neq \pm b, ab=0$ & \tikzcircle[fill=green]{4pt}\\ $L_{ab_3}$, R1.1 & $a=b=0$ & \tikzcircle[fill=black!40!green]{4pt}\\ $L_{ab_3}$, R1.2 & $a=b \neq 0$ & \tikzcircle[fill=yellow]{4pt}\\ $L_{ab_3}$, R1.3 & $a=-b \neq 0$ & \tikzcircle[fill=yellow]{4pt}\\ $L_{ab_3}$, R2.1 & $a=0, b \neq 0$ & \tikzcircle[fill=yellow]{4pt}\\ $L_{ab_3}$, R2.2 & $a \neq 0, b=0$ & \tikzcircle[fill=yellow]{4pt}\\ $L_{ab_3}$, R3.2 & $a \neq \pm b, ab \neq 0, 3a^2+b^2=0$ & \tikzcircle[fill=yellow]{4pt}\\ $L_{ab_3}$, R3.2* & $a \neq \pm b, ab \neq 0, 3a^2+b^2=0$ & \tikzcircle[fill=yellow]{4pt}\\ $L_{a_4}$, La.1 & $a=0$ & \tikzcircle[fill=yellow]{4pt}\\ $L_{a_4}$, La.2 & $a \neq 0$ & \tikzcircle[fill=yellow]{4pt}\\ \hline \end{tabularx} \caption{Here we show the feasibility for the EFs $L_{a_2b_2}$, $L_{ab_3}$ and $L_{a_4}$. We can see that, due to the complex coefficients of the EF $L_{ab_3}$ in Eqs.\eqref{eq:nine_classes}, the SLOCC classes related to this EF are not approachable with our post-processing procedure.} \label{tab:slocc_classes_EF3_EF4_EF5} \end{table} In Tab.\ref{tab:slocc_classes_EF6_EF7_EF8} we report the SLOCC classes belonging to the remaining families in Eqs.\eqref{eq:nine_classes}, the sixth, the seventh and the eighth, since the ninth family does not contain four-qubit entanglement.
\begin{table}[h] \centering \begin{tabularx}{\linewidth}{c | X | c} \hline Family, SLOCC class & Conditions on parameters & Feasibility \\ \hline $L_{a_20_{3\oplus\bar{1}}}$& $a \neq 0$& \tikzcircle[fill=green]{4pt}\\ $L_{0_{5\oplus\bar{3}}}$& no constraints&\tikzcircle[fill=black!40!green]{4pt}\\ $L_{0_{7\oplus\bar{1}}}$& no constraints& \tikzcircle[fill=green]{4pt}\\ \hline \end{tabularx} \caption{The last three SLOCC classes that we report are among those shown as examples in this paper, $\ket{\Psi}_6$, $\ket{\Psi}_7$ and $\ket{\Psi}_8$.} \label{tab:slocc_classes_EF6_EF7_EF8} \end{table} \subsection{Quantum circuits for SLOCC classes of 4 qubits}\label{sec_IV} \label{Quantum_Circuits} This subsection summarizes the main results of this work. We report in Tabs.\ref{tab:quantum_protocols_1} and \ref{tab:quantum_protocols_2} the quantum circuits that we found with our reinforcement learning algorithm for some of the classes listed in Tabs.\ref{tab:slocc_classes_EF1_EF2}, \ref{tab:slocc_classes_EF3_EF4_EF5}, \ref{tab:slocc_classes_EF6_EF7_EF8}, marked with a green (\tikzcircle[fill=green]{4pt}) or a dark green dot (\tikzcircle[fill=black!40!green]{4pt}). We can see that in some cases the algorithm uses the Toffoli and the C-H gates extensively: this is due to the optimization of the circuit depth. \begin{table}[h] \begin{tabular}{>{\centering}m{3cm}|m{5cm}} \hline \begin{tabular}{@{}p{3cm}@{}}SLOCC class\\ Representative state\end{tabular} & Quantum protocol \\ \hline \begin{tabular}{@{}m{3cm}@{}}A1.1 \\ $\ket{0000}+\ket{1111}$\end{tabular} & \vspace{0.1cm} \begin{minipage}{0.30\textwidth} \includegraphics[width=0.6\linewidth]{Class_A1.1_circ.eps} \end{minipage} \\ \hline \begin{tabular}{@{}m{3cm}@{}}B1.1\\$\ket{0000}+\ket{1111}+\ket{0110}$\end{tabular} & \vspace{0.2cm} \begin{minipage}{.30\textwidth} \includegraphics[width=0.6\linewidth]{Class_B1.1_circ.eps} \end{minipage} \\ \hline \begin{tabular}{@{}m{3cm}@{}}B1.2\\$\ket{0011}+\ket{1100}+\ket{0110}$\end{tabular} & \vspace{0.2cm} \begin{minipage}{.30\textwidth} \includegraphics[width=0.8\linewidth]{Class_B1.2_circ.eps} \end{minipage} \\ \hline \begin{tabular}{@{}m{3cm}@{}}B2.1\\$\ket{0000}+\ket{1111}+\ket{0101}+\ket{1010}+\ket{0110}$\end{tabular} & \vspace{0.2cm} \begin{minipage}{.30\textwidth} \includegraphics[width=0.8\linewidth]{Class_B2.1_circ.eps} \end{minipage} \\ \hline \begin{tabular}{@{}m{3cm}@{}}B3.1\\$\ket{0011}+\ket{1100}+\ket{0101}+\ket{1010}+\ket{0110}$\end{tabular} & \vspace{0.2cm} \begin{minipage}{.30\textwidth} \includegraphics[width=0.8\linewidth]{Class_B3.1_circ.eps} \end{minipage} \\ \hline \end{tabular} \caption{Quantum protocols for part of the SLOCC classes of four-qubit entangled states.}\label{tab:quantum_protocols_1} \end{table} \begin{table}[h] \centering \begin{tabular}{>{\centering}m{3cm}|m{5cm}} \hline \begin{tabular}{@{}p{3cm}@{}}SLOCC class\\ Representative state\end{tabular} & Quantum protocol \\ \hline \begin{tabular}{@{}m{3cm}@{}}B5.1\\$\ket{0101}+\ket{1010}+\ket{0110}$\end{tabular} & \vspace{0.2cm} \begin{minipage}{.30\textwidth} \includegraphics[width=0.8\linewidth]{Class_B5.1_circ.eps} \end{minipage} \\ \hline \begin{tabular}{@{}m{3cm}@{}}V4\\ $\ket{0000}+\ket{1111}+\ket{0110}+\ket{0011}$\end{tabular} & \vspace{0.2cm} \begin{minipage}{.30\textwidth} \includegraphics[width=0.8\linewidth]{Class_V4_NO_CH_circ.eps} \end{minipage} \\ \hline \begin{tabular}{@{}m{3cm}@{}}R1.1\\ $\ket{0001}+\ket{0010}+\ket{0111}+\ket{1011}$\end{tabular} & \vspace{0.2cm} \begin{minipage}{.30\textwidth}
\includegraphics[width=0.8\linewidth]{Class_R1.1_circ.eps} \end{minipage} \\ \hline \begin{tabular}{@{}m{3cm}@{}}La.1\\ $\ket{0001}+\ket{0110}+\ket{1011}$\end{tabular} & \vspace{0.2cm} \begin{minipage}{.30\textwidth} \includegraphics[width=0.8\linewidth]{Class_La.1_circ.eps} \end{minipage} \\ \hline \begin{tabular}{@{}m{3cm}@{}}$L_{a_20_{3\oplus\bar{1}}}$\\ $\ket{0000}+\ket{1111}+\ket{0011}+\ket{0101}+\ket{0110}$\end{tabular} & \vspace{0.2cm} \begin{minipage}{.30\textwidth} \includegraphics[width=0.8\linewidth]{Class_6_circ.eps} \end{minipage} \\ \hline \begin{tabular}{@{}m{3cm}@{}}$L_{0_{5\oplus\bar{3}}}$\\ $\ket{0000}+\ket{0101}+\ket{1000}+\ket{1110}$\end{tabular} & \vspace{0.2cm} \begin{minipage}{.30\textwidth} \includegraphics[width=0.8\linewidth]{Class_7_circ_NO_CH.eps} \end{minipage} \\ \hline \begin{tabular}{@{}m{3cm}@{}}$L_{0_{7\oplus\bar{1}}}$\\ $\ket{0000}+\ket{1011}+\ket{1101}+\ket{1110}$\end{tabular} & \vspace{0.2cm} \begin{minipage}{.30\textwidth} \includegraphics[width=0.8\linewidth]{Class_8_circ.eps} \end{minipage} \\ \hline \end{tabular} \caption{Quantum protocols for part of the SLOCC classes of four-qubit entangled states.}\label{tab:quantum_protocols_2} \end{table} \section{Conclusions}\label{sec_V} We have shown that, with our implementation of the Q-learning algorithm, we manage to successfully build quantum protocols able to generate representative states for some of the 49 true SLOCC classes of four-qubit entangled states. In particular, we are able to reach at least one true SLOCC class for each of the nine entanglement families. Further, we observe that many of the other SLOCC classes can be approached by adding other quantum gates to the set and by modifying the algorithm accordingly in order to use them. Therefore, this machine learning algorithm is useful in reaching a large number of four-qubit entangled states, and could be employed to better understand their properties and to devise new procedures to construct them in a real experiment. Furthermore, we can discover new connections between specific entanglement features and the role of certain quantum gates. In this sense, thanks to its simplicity and intuitiveness, the Q-learning algorithm turns out to be widely profitable for this kind of task, and thus it may represent a true resource in approaching the study of higher $n$-qubit entangled states. Something similar was discovered with the Melvin algorithm \cite{Krenn_2016,Krenn_2017} for multidimensional quantum states with a small number of parties (a few qutrits, etc.). It is conceivable that those multidimensional quantum states can be addressed with the tools developed in this work. Due to our limited computational resources and the intricacies of some of the entangled states addressed, we have not completed the full generation of all the 49 classes of four-qubit entangled states. Thus, a possible next step towards reaching the representatives of the remaining classes would be to make the algorithm capable of handling states in a non-homogeneous superposition, with complex coefficients. In this way, the number of SLOCC classes for which we are able to provide protocols would further increase. Therefore, even if Q-learning in its current form is not suitable for the complete exploration of the nine entanglement families, it provides reasonable clues as to how to handle the unsolved cases, once further capabilities are included in the resource toolbox of the reinforcement Q-learning algorithm.
We have devised a graphical tool, the state-link graph (SLG), to represent the construction of the Q-matrix for a given objective state belonging to one of the entanglement classes; see the examples in Figs.\ref{fig:class7graphnotoffchtotal} and \ref{fig:class7graphtotal}. These graphs are very useful to detect whether the learning algorithm is exploring the set of multiple terms needed to reconstruct the objective state. In this way, when some of the shells of the SLG are found to be disconnected, it is an indication that the chosen gate set is not rich enough to build the given state, and that it is time to enlarge the quantum gate set. This is precisely the process that we have followed to synthesize the quantum circuits found in Tabs.\ref{tab:quantum_protocols_1} and \ref{tab:quantum_protocols_2}. Some of the results obtained for the synthesis of 4-qubit states with remarkable entanglement properties, such as those in Tabs.\ref{tab:quantum_protocols_1} and \ref{tab:quantum_protocols_2}, may be useful to investigate statements about the local and realistic properties of our universe with experimental means, as was originally proposed with the Melvin algorithm \cite{Krenn_2016,Krenn_2017}. In our Q-learning algorithm, we do not need to know about the concept of Schmidt coefficients, as the Melvin algorithm does. The reinforcement machine learning algorithm relies neither on previous knowledge nor on often flawed intuition. By construction, the quantum circuits found with our reinforcement learning algorithm are optimal with respect to the chosen quantum gate set. This is guaranteed by the convergence of the training part and by the subsequent construction of the quantum circuits in the testing part, which relies on the optimal state-action pairs found in the Q-matrix. This result is useful for automated quantum circuit synthesis (QCS), where optimal implementations of quantum algorithms are designed from quantum logic gates belonging to known universal sets \cite{QCS1,QCS2,QCS3,QCS4,QCS5}. These are the type of automated tasks needed to construct quantum compilers. Machine learning methods to synthesize optimal circuits for continuous-variable quantum computation have been proposed with photonics architectures \cite{nichols2019designing,Arrazola}. It is worth noticing that our reinforcement learning algorithm is not scalable as the number of entangled qubits increases, since the number of terms needed to construct the objective states, and thus the environment, grows exponentially with the number of entangled qubits. Thus, although we can boost the task of automatically constructing the quantum circuits for many of the entanglement classes of 4-qubit states, in the end we will also face the wall of the exponential complexity of quantum entangled states with an arbitrary number of qubits. Nevertheless, the quantum circuits synthesized with reinforcement learning algorithms can serve as a benchmark for more complex quantum compilers. \begin{acknowledgements} We acknowledge support from the CAM/FEDER Project No.~S2018/TCS-4342 (QUITEMAD-CM), the Spanish MINECO/FEDER grant PGC2018-099169-B-I00 FIS2018, MCIN with funding from the European Union NextGenerationEU (PRTR-C17.I1) and the Ministry of Economic Affairs Quantum ENIA project. M. A. M.-D. has been partially supported by the U.S. Army Research Office through Grant No. W911NF-14-1-0103. S.G. acknowledges support from a QUITEMAD grant. \end{acknowledgements}
Q: Generating Sequences with Irregular Patterns in R

I'm attempting to generate a column that shows persistence throughout a field. The field is sequential and numeric, but not conventionally increasing. Essentially, it goes up by 7 (when it ends in 2) and then 3 (when it ends in 9) within each ID. It's possible for an ID to miss one or more steps of the sequence, but then return to the same pattern. The data looks like this:

ID  Col
1   0769
1   0772
1   0779
1   0782
1   0799
1   0802
1   0812
2   0769
2   0772
2   0779
3   0782
3   0799
3   0802
3   0812

What I'm trying to do is generate this:

ID  Col   Persistence
1   0769  1
1   0772  1
1   0779  1
1   0782  1
1   0799  2
1   0802  2
1   0812  3
2   0769  1
2   0772  1
2   0779  1
3   0782  1
3   0799  2
3   0802  2
3   0812  3

A: If you just want to make sure the jump is either 3 or 7, you can write a helper function that increments whenever a jump of a different size occurs:

jumpchange <- function(x) c(0, cumsum(!diff(x) %in% c(3, 7))) + 1

Then you can apply this to each group, most easily with dplyr:

library(dplyr)
dd %>% group_by(ID) %>% mutate(persistence = jumpchange(Col))

Or you can use transform/ave with just base R:

transform(dd, persistence = ave(Col, ID, FUN = jumpchange))
The RTP Streams analysis tool was added in CloudShark 1.9 and allows you to view and play RTP streams that have been detected in a capture file. Each stream is displayed in its own row. The speaker button allows you to switch to the player view for the stream. If the RTP codec is supported, a play button allows you to play an audio stream extracted from the RTP stream. Supported codecs include G.711, G.722, G.729, and GSM. The audio stream will be played back by the audio capabilities of your web browser. Alternatively, you may download the audio file in various formats. The UDP ports used by RTP are normally dynamic. CloudShark depends on the signaling packets to find the RTP streams in a capture file. If the capture file does not contain the signaling packets, you may need to add a Decode Protocol As rule to your capture. Alternatively, you can ask your CloudShark Administrator to modify the decode options to try and detect RTP streams automatically. This is done by setting the rtp.heuristic_rtp: TRUE option.
Q: Use Responsive Image together with Flexbox

I want to use flexbox together with responsive images (images with srcset and sizes). But the result looks broken: the images end up with different heights even though I used flex-grow to keep them aligned, because the browser loads differently sized versions of each image. I want the height of all images to be the same. I also found that the sizes attribute of the images is hard to set when using flexbox; it's quite unpredictable. Is there any way to solve this?

Here is the JSFiddle (please resize the result window larger; note that I used inline CSS for flex-grow).

A: Set the image height property to:

height: 100%;

And if you want all your images to be the same size, set the same value for flex-grow:

flex-grow: 1;
Historic Sites Committee
Toll Gates on the Proof Line Road
9 November 2009 - 12:22pm
Plaque no. 37
Date of plaque unveiling
John Lutman
1110 Richmond Street, London, Ontario
Take a tour of Gates on the Proof Line Road on HistoryPin
Photo credit: PG F 99, Ivey Family London Room, Central Library, London Public Library, 251 Dundas Street, London, Ontario, Henry G. Hines, July 30, 1907

The first settlers to move into this area made their way north along the blazes and stakes of Mahlon Burwell's proof line through the middle of London Township. Laid out as a road allowance, it followed Wharncliffe Road northward, bypassing the riverlands of the Medway Creek and North Thames River, and following the present Western Road and Richmond Street route before continuing northward.

Early settlers were quick to demand road improvements. Many roads were mere dirt trails through bogs and forests. In swampy areas it was necessary to lay down layers of logs to keep horses and carriages from sinking. Ruts from wagon wheels and tree stumps were further obstacles, and in wet weather roads became rivers of mud. To deal with this situation, the Legislature of Upper Canada in 1810 delegated local justices of the peace to appoint surveyors to lay out and regulate proper roads. Roads were to be constructed and maintained with the costs assessed to local landowners.

In 1849, the Provincial Legislature passed legislation permitting private companies to build toll roads. That same year, a local group formed the "Proof Line Road Joint Stock Company" to grade, macadamize, and bridge the Proof Line Road. The completed road had three toll gates and followed the Richmond Street route north through Arva, Birr, and Elginfield. Several hotels and taverns opened along the road, an indication of its heavy use.

By 1882, however, all publicly owned county roads had been declared free of tolls. The Proof Line Road came to be seen as an anachronism, and citizens often detoured to avoid the toll gates. In 1907, local councils and the province bought the Proof Line Road for $11,000. The occasion was marked by a huge celebration in Arva, during which the collected toll gates were burned in a large bonfire.
package com.xmomen.module.report.controller;

import com.xmomen.framework.utils.StringUtilsExt;
import com.xmomen.module.base.constant.AppConstants;
import com.xmomen.module.report.model.*;
import com.xmomen.module.report.service.ReportOrderService;
import com.xmomen.module.stockdaily.service.StockDailyService;
import org.apache.shiro.SecurityUtils;
import org.jeecgframework.poi.excel.entity.ExportParams;
import org.jeecgframework.poi.excel.entity.vo.NormalExcelConstants;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.ui.ModelMap;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RequestParam;

import java.util.List;

/**
 * Created by tanxinzheng on 16/9/3.
 */
@Controller
public class OrderReportController {

    @Autowired
    ReportOrderService reportOrderService;

    @Autowired
    StockDailyService stockDailyService;

    /**
     * Order report export.
     */
    @RequestMapping(value = "/report/order", method = RequestMethod.GET)
    public String exportOrder(@RequestParam(value = "beginTime", required = false) String beginTime,
                              @RequestParam(value = "endTime", required = false) String endTime,
                              @RequestParam(value = "companyId", required = false) Integer companyId,
                              @RequestParam(value = "managerId", required = false) Integer managerId,
                              ModelMap modelMap) {
        ReportQuery reportQuery = buildReportQuery(beginTime, endTime, companyId, managerId);
        List<OrderReport> list = reportOrderService.getOrderReportList(reportQuery);
        fillExcelModel(modelMap, buildFileName(beginTime, endTime, "订单报表"), OrderReport.class, list);
        return NormalExcelConstants.JEECG_EXCEL_VIEW;
    }

    /**
     * Logistics (express) report export.
     */
    @RequestMapping(value = "/report/express", method = RequestMethod.GET)
    public String exportExpress(@RequestParam(value = "beginTime", required = false) String beginTime,
                                @RequestParam(value = "endTime", required = false) String endTime,
                                @RequestParam(value = "companyId", required = false) Integer companyId,
                                @RequestParam(value = "managerId", required = false) Integer managerId,
                                ModelMap modelMap) {
        ReportQuery reportQuery = buildReportQuery(beginTime, endTime, companyId, managerId);
        List<ExpressReport> list = reportOrderService.getExpressReportList(reportQuery);
        fillExcelModel(modelMap, buildFileName(beginTime, endTime, "物流报表"), ExpressReport.class, list);
        return NormalExcelConstants.JEECG_EXCEL_VIEW;
    }

    /**
     * Finance report export.
     */
    @RequestMapping(value = "/report/finance", method = RequestMethod.GET)
    public String exportFinance(@RequestParam(value = "beginTime", required = false) String beginTime,
                                @RequestParam(value = "endTime", required = false) String endTime,
                                @RequestParam(value = "companyId", required = false) Integer companyId,
                                @RequestParam(value = "managerId", required = false) Integer managerId,
                                ModelMap modelMap) {
        ReportQuery reportQuery = buildReportQuery(beginTime, endTime, companyId, managerId);
        List<FinanceReport> list = reportOrderService.getFinanceReportList(reportQuery);
        fillExcelModel(modelMap, buildFileName(beginTime, endTime, "财务报表"), FinanceReport.class, list);
        return NormalExcelConstants.JEECG_EXCEL_VIEW;
    }

    /**
     * Stock snapshot report export. No manager-based filtering is applied here,
     * matching the original behavior of this endpoint.
     */
    @RequestMapping(value = "/report/stockDaily", method = RequestMethod.GET)
    public String exportStockDaily(@RequestParam(value = "beginTime", required = false) String beginTime,
                                   @RequestParam(value = "endTime", required = false) String endTime,
                                   ModelMap modelMap) {
        ReportQuery reportQuery = new ReportQuery();
        if (StringUtilsExt.isNotBlank(beginTime)) {
            reportQuery.setBeginTime(beginTime);
        }
        if (StringUtilsExt.isNotBlank(endTime)) {
            reportQuery.setEndTime(endTime);
        }
        List<StockDailyReport> list = stockDailyService.getStockDailyReport(reportQuery);
        fillExcelModel(modelMap, buildFileName(beginTime, endTime, "库存快照报表"), StockDailyReport.class, list);
        return NormalExcelConstants.JEECG_EXCEL_VIEW;
    }

    /**
     * Builds the common report query. Customer-service managers who hold none of
     * the back-office/admin roles only see their own data: the requested managerId
     * is overridden with the id of the logged-in user.
     */
    private ReportQuery buildReportQuery(String beginTime, String endTime, Integer companyId, Integer managerId) {
        ReportQuery reportQuery = new ReportQuery();
        if (StringUtilsExt.isNotBlank(beginTime)) {
            reportQuery.setBeginTime(beginTime);
        }
        if (StringUtilsExt.isNotBlank(endTime)) {
            reportQuery.setEndTime(endTime);
        }
        reportQuery.setCompanyId(companyId);
        reportQuery.setManagerId(managerId);
        if (mustFilterByCurrentManager()) {
            Integer userId = (Integer) SecurityUtils.getSubject().getSession()
                    .getAttribute(AppConstants.SESSION_USER_ID_KEY);
            reportQuery.setManagerId(userId);
        }
        return reportQuery;
    }

    /**
     * True for customer-service managers without any of the roles that grant
     * access to all managers' data.
     */
    private boolean mustFilterByCurrentManager() {
        return SecurityUtils.getSubject().hasRole(AppConstants.CUSTOMER_MANAGER_PERMISSION_CODE)
                && !SecurityUtils.getSubject().hasRole(AppConstants.CUSTOMER_PERMISSION_CODE)
                && !SecurityUtils.getSubject().hasRole(AppConstants.HOU_TAI_CODE)
                && !SecurityUtils.getSubject().hasRole(AppConstants.ADMIN)
                && !SecurityUtils.getSubject().hasRole(AppConstants.SUPER_ADMIN)
                && !SecurityUtils.getSubject().hasRole(AppConstants.CWU)
                && !SecurityUtils.getSubject().hasRole(AppConstants.WULIUZXB);
    }

    /**
     * Builds the "<begin date>-<end date><report name>" Excel file name using the
     * Chinese date format of the original code. Note: as in the original code,
     * beginTime and endTime are assumed to be present in yyyy-MM-dd form; a
     * missing parameter would cause a NullPointerException here.
     */
    private String buildFileName(String beginTime, String endTime, String reportName) {
        String[] begin = beginTime.split("-");
        String[] end = endTime.split("-");
        return begin[0] + "年" + begin[1] + "月" + begin[2] + "日-"
                + end[0] + "年" + end[1] + "月" + end[2] + "日" + reportName;
    }

    /**
     * Puts the attributes expected by the jeecg Excel view into the model.
     */
    private void fillExcelModel(ModelMap modelMap, String fileName, Class<?> entityClass, List<?> data) {
        modelMap.put(NormalExcelConstants.FILE_NAME, fileName);
        modelMap.put(NormalExcelConstants.PARAMS, new ExportParams());
        modelMap.put(NormalExcelConstants.CLASS, entityClass);
        modelMap.put(NormalExcelConstants.DATA_LIST, data);
    }
}
\section{Introduction} The atomic electron cloud distortion induced by an external field is strongly influenced by the dielectric environment embedding the atom. This distortion ability, referred to as the \textit{induced polarizability}, is one of the key ion-specific effects in the simulation of salt solutions in inhomogeneous media such as the water-air interface or protein-water surfaces~\cite{jun}. The precise knowledge of the change in the polarizability of an isolated ion upon hydration in water is particularly important for the development of polarizable force fields used in these simulations. Moreover, ionic polarizability is also believed to have a substantial effect on the polarity of ionic liquids. Indeed, numerical studies based on ab-initio calculations show that the large dielectric permittivity of ionic liquids such as $[\mathrm{C}_2\mathrm{mim}][\mathrm{NTf}_2]$ and $[\mathrm{C}_2\mathrm{mim}][\mathrm{BF}_4]$, composed of ions with small individual dipole moments, cannot be solely explained by their rotational polarizability~\cite{ionion2}. This suggests that an additional polarization mechanism, resulting from the interaction of the polarizable ion with the surrounding ions in the liquid, must be present. The alteration of ionic gas-phase polarization upon solvation has so far been considered within numerical approaches based on quantum calculations with the polarizable continuum model (PCM) or with explicit solvent. These two approaches interestingly yield diverging pictures of the hydration of polarizable ions. Namely, the calculations with explicit solvent indicate that the ionic polarizability is decreased with respect to the gas phase~\cite{jun2}, whereas PCM approaches yield a higher polarizability in the liquid state~\cite{ionpol,ionpol2} (see also Ref.~\cite{NetzRev} for a review of the computational state of the art). The latter case is also in line with the ab-initio calculations of pure water clusters~\cite{watwat} and ionic liquids~\cite{ionion1}, where the transfer of both types of molecules from the gas to the liquid environment was shown to increase their dipole moment. In order to understand the physics behind the hydration of polarizable molecules, analytical theories offering a deeper understanding are needed. The theoretical formulation of the problem requires in turn an explicit and realistic consideration of the discrete charge structure of solvent molecules and ions. Unfortunately, this level of refinement has been until recently beyond the state of the art of electrostatic theories, which are mostly based on dielectric continuum solvents embedding point charges. The first statistical theory of inhomogeneous electrolytes with explicit solvent was introduced in Ref.~\cite{dundip} in the form of a mean-field (MF) dipolar Poisson-Boltzmann (DPB) equation. This approach, which models the solvent molecules as point dipoles, was later generalized by including the steric interactions between the particles for inhomogeneous charged liquids~\cite{orland1}, and a one-loop extension was presented as well in Ref.~\cite{orland2} to explain the salt-induced dielectric decrement effect in bulk electrolytes. We have recently incorporated surface polarization effects into the DPB approach, which allowed us to significantly improve the agreement of dielectric continuum electrostatics with experimental capacitance data of carbon-based materials~\cite{epl}.
Sophisticated electrostatic formulations accounting for the dipolar and higher-order multipolar moments of ions in the point dipole limit have also been proposed in Refs.~\cite{netzvdw,bohinc1,podgornik,dem}. In a similar context, we can also mention the works of Refs.~\cite{Lue1,pincus,bohinc2}, where the extended charge structure of rigid linear molecules was ingeniously considered. We have recently developed a non-local electrostatic theory of polar liquids with explicit solvent and polarizable ions beyond the point dipole approximation~\cite{nlpb}. The electrolyte model, which treats solvent molecules as finite-size dipoles and polarizable ions as Drude oscillators, was investigated at the MF level. It was shown that the consideration of the extended charge structure of solvent molecules enables us to capture the non-local dielectric response of water at charged interfaces observed in molecular dynamics simulations and atomic force experiments. In this article, we reconsider the model of Ref.~\cite{nlpb} beyond the MF level of approximation in order to characterize the hydration-induced modification of ionic polarizabilities in high-dielectric bulk liquids. We review in Section~\ref{mod} the derivation of the field-theoretic charged liquid model, and derive the closure equations accounting for the correlations between the ions and the solvent molecules. These equations are first solved in Section~\ref{polar} in order to investigate the hydration of a single polarizable ion in a polar solvent such as water. Then, within the same theoretical framework, we consider in Section~\ref{ion} an ionic liquid free of solvent molecules in order to investigate a collective polarization effect in the liquid. It is shown that in both systems, our simple theory can capture the solvation-induced electronic cloud expansion effect observed in ab-initio calculations~\cite{ionpol,ionpol2,ionion1}, and provides a physical explanation in terms of the electrostatic energy released by the ion upon hydration. The limitations of the liquid model and the computation scheme, and the necessary extensions, are discussed in detail in the Conclusion. \section{Model} \label{mod} We briefly review in this section the derivation of the field-theoretic partition function for the polar liquid model previously introduced in Ref.~\cite{nlpb}. Then, starting from the Dyson equation, we derive an integral equation for the dielectric permittivity function embodying the interactions between the polarizable ions and solvent molecules of the bulk liquid. The geometry of the solvent molecules is depicted in Fig.~\ref{fig0}(a). The polar liquid is composed of overall neutral solvent molecules modeled as linear dipoles of length $a$, with two point charges of valency $\pm Q=\pm1$ at the extremities. Furthermore, the solvent contains polarizable molecules of $p$ species, each of them being an oscillating rod of length $b$ (see Fig.~\ref{fig0}(b)). The point charges $e_i$ and $c_i$ at the extremities satisfy the inequality $e_ic_i<0$, where the index $i=1...p$ runs over the ionic species. Moreover, the ionic polarizability is taken into account within the Drude oscillator model~\cite{drude}, \begin{equation}\label{hpol} h_i\left(\mathbf{b}\right)=\frac{\mathbf{b}^2}{4b_{pi}^2}, \end{equation} where the parameter $b_{pi}^2$, setting the variance of the electronic cloud oscillations, is proportional to the induced polarizability $\alpha$ of the ions in the gas phase~\cite{nlpb}.
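The role of $b_{pi}$ as the spread of the electronic cloud can be checked with a quick numerical illustration (a hypothetical Python snippet, not part of the derivation). Averaging $\mathbf{b}^2$ over the isotropic Drude weight $e^{-h_i(\mathbf{b})}$, with the radial measure $b^2e^{-h_i(b)}$ left after the angular integration, reproduces the Gaussian result $\langle b^2\rangle=6b_{pi}^2$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

b_p = 1.0                                # cloud length scale (Angstrom)
h = lambda b: b**2/(4*b_p**2)            # Drude distortion energy
num = quad(lambda b: b**4*np.exp(-h(b)), 0, np.inf)[0]
den = quad(lambda b: b**2*np.exp(-h(b)), 0, np.inf)[0]
print(num/den)                           # -> 6.0, i.e. <b^2> = 6*b_p**2
\end{verbatim}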
Because the length scale $b_{pi}$ offers a more intuitive picture of the electronic cloud fluctuations induced by thermal excitations than the polarizability itself, we will discuss the results in terms of $b_{pi}$. Furthermore, in the present work we will consider exclusively the case of equal ionic polarizabilities for all species, but the analytical results will be given for the general case. We also note that the electroneutrality condition implies the equality $\sum_i\rho_{ib}q_i=0$, with $\rho_{ib}$ the bulk density, and $q_i=e_i+c_i$ the total charge of the polarizable molecules of species $i$. The canonical partition function for the system composed of solvent molecules and ions coupled by electrostatic interactions reads \begin{eqnarray}\label{canpart} Z_c&=&\frac{e^{N_s\epsilon_s}}{N_s!\lambda_{Td}^{3N_s}}\int\prod_{k=1}^{N_s}\frac{\mathrm{d}\mathbf{\Omega}_k}{4\pi}\mathrm{d}\mathbf{x}_k\\ &&\times\prod_{i=1}^{p}\prod_{j=1}^{N_i}\frac{e^{N_i\epsilon_i}}{N_i!\lambda_{Ti}^{3N_i}}\int\frac{\mathrm{d}\mathbf{b}_j}{\left(4\pi b_{pi}^2\right)^{3/2}}\mathrm{d}\mathbf{y}_{ij}\;e^{-h_i\left(\mathbf{b}_j\right)-H_{el}(\mathbf{v})},\nonumber \end{eqnarray} where $N_s$ is the total number of solvent molecules, $N_i$ the number of ions of species $i$, and $\lambda_{Td}$ and $\lambda_{Ti}$ denote respectively the thermal wavelengths of the solvent molecules and ions. We also introduced in Eq.~(\ref{canpart}) the compact notation $\mathbf{v}=\left(\{\mathbf{x}_k\},\{\mathbf{a}_k\},\{\mathbf{y}_{ij}\},\{\mathbf{b}_j\}\right)$ for the vectors characterizing the configuration of the particles, with $\mathbf{x}_k$ and $\mathbf{y}_{ij}$ denoting respectively the coordinates of the charges $+Q$ and $e_i$ of the solvent molecules and polarizable ions, as depicted in Fig.~\ref{fig0}. Furthermore, $\mathbf{\Omega}_k=(\theta_k,\varphi_k)$ is the solid angle characterizing the orientation of the $k$th solvent molecule, $\theta$ being the angle between the oriented dipole and the $z$-axis (see Fig.~\ref{fig0}(a)). We finally note that in Eq.~(\ref{canpart}) we subtracted from the total Hamiltonian the self-energies of the ions and polar molecules in the air, $\epsilon_i=\left(e_i^2+c_i^2\right)v_c(0)/2+e_ic_iv_c(b)$ and $\epsilon_s=Q^2\left[v_c(0)-v_c(a)\right]$. This point will be discussed below in further detail. The Hamiltonian of the bulk liquid is composed of pairwise electrostatic interactions, \begin{equation}\label{Hel} H_{el}(\mathbf{v})=\frac{1}{2}\int_{\mathbf{r}\mathbf{r}'}\left[\rho_{ic}+\rho_{sc}\right]_\mathbf{r} v_c(\mathbf{r}-\mathbf{r}')\left[\rho_{ic}+\rho_{sc}\right]_{\mathbf{r}'}, \end{equation} where the total ionic and solvent charge density operators for the charge compositions depicted in Fig.~\ref{fig0} are defined as \begin{eqnarray}\label{ci} &&\rho_{ic}(\mathbf{r})=\sum_{i=1}^p\sum_{j=1}^{N_i}\left[e_i\delta(\mathbf{r}-\mathbf{y}_{ij})+c_i\delta(\mathbf{r}-\mathbf{y}_{ij}-\mathbf{b}_j)\right]\\ \label{cd} &&\rho_{sc}(\mathbf{r})=\sum_{k=1}^{N_s}Q\left[\delta(\mathbf{r}-\mathbf{x}_k)-\delta(\mathbf{r}-\mathbf{x}_k-\mathbf{a}_k)\right]. \end{eqnarray} We note that $v_c(\mathbf{r}-\mathbf{r}')=\ell_B/|\mathbf{r}-\mathbf{r}'|$ stands for the Coulomb potential in the air medium, with $\ell_B=e^2/\left[4\pi\varepsilon_0k_BT\right]\simeq 55$ nm the Bjerrum length in the air, $\varepsilon_0$ the dielectric permittivity of the air, $e$ the electron charge, and $T=300$ K the ambient temperature.
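As a quick sanity check of the quoted value, the following illustrative snippet (with CODATA values of the constants) evaluates the Bjerrum length in the air/vacuum:
\begin{verbatim}
import math

e, eps0 = 1.602176634e-19, 8.8541878128e-12   # C, F/m
kB, T = 1.380649e-23, 300.0                   # J/K, K
lB = e**2/(4*math.pi*eps0*kB*T)
print(lB*1e9)                                 # ~55.7 nm, as quoted
\end{verbatim}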
We note that in the rest of the article, dielectric permittivities will be expressed in units of the air permittivity $\varepsilon_0$, and energies in units of the thermal energy $k_BT$. In order to transform the partition function~(\ref{canpart}) into a more tractable form, we pass from the particle density to the fluctuating potential representation by performing a standard Hubbard-Stratonovich transformation. In this representation, the grand canonical partition function of the system, defined as $Z_G=\sum_{N_s\geq0}\prod_{i=1}^p\sum_{N_i\geq0}e^{\mu_i N_i}e^{\mu_w N_s}Z_c$, takes the form of a functional integral over the fluctuating electrostatic potential $\phi(\mathbf{r})$, $Z_G=\int \mathcal{D}\phi\;e^{-H[\phi]}$, with the Hamiltonian functional~\cite{nlpb} \begin{eqnarray}\label{HamFunc} H[\phi]&=&\int\mathrm{d}\mathbf{r}\frac{\left[\nabla\phi(\mathbf{r})\right]^2}{8\pi\ell_B}-\Lambda_s\int\frac{\mathrm{d}\mathbf{\Omega}}{4\pi}\mathrm{d}\mathbf{r}\;e^{\epsilon_s+iQ\left[\phi(\mathbf{r})-\phi(\mathbf{r}+\mathbf{a})\right]}\nonumber\\ &&-\sum_i\Lambda_i\int\frac{\mathrm{d\mathbf{b}}}{\left(4\pi b_{pi}^2\right)^{3/2}}\mathrm{d}\mathbf{r}\;e^{-h_i(\mathbf{b})+\epsilon_i}\nonumber\\ &&\hspace{3.5cm}\times e^{ie_i\phi(\mathbf{r})+ic_i\phi(\mathbf{r}+\mathbf{b})}. \end{eqnarray} The first term on the r.h.s. of Eq.~(\ref{HamFunc}) is the electrostatic energy of the freely propagating field in the air. The second term corresponds to the density of the solvent molecules, whose fugacity is denoted by $\Lambda_s$. Finally, the third term on the r.h.s. of Eq.~(\ref{HamFunc}) is the density of the polarizable ions with fugacity $\Lambda_i$. \begin{figure} \includegraphics[width=1.0\linewidth]{fig1.pdf} \caption{(Color online) Charge composition of solvent molecules of size $a$ (a) and polarizable ions with a fluctuating length $b$ (b). In the present work, we consider exclusively the case of ionic valencies $e_i$ and $c_i$ of opposite sign ($e_ic_i<0$), and solvent molecules with monovalent point charges $Q=1$.} \label{fig0} \end{figure} The Hamiltonian~(\ref{HamFunc}) was already derived in Ref.~\cite{nlpb} for the more general case of multipolar solvents embedding polarizable ions, and the saddle-point solution of the partition function, corresponding to the MF approximation, was investigated for polar liquids in contact with charged planes. In order to account for correlation effects in the bulk liquid beyond the MF level, we need to derive the electrostatic correlation function. Our starting point is the following form of the Dyson equation, \begin{equation}\label{Dyson} \int\mathcal{D}\phi\frac{\delta}{\delta\phi(\mathbf{r})}\;e^{-H[\phi]+\int\mathrm{d}\mathbf{r} J(\mathbf{r})\phi(\mathbf{r})}=0, \end{equation} where $J(\mathbf{r})$ is a generalized current introduced for the derivation of the two-point correlation function. A proof of the equality~(\ref{Dyson}) can be found in Ref.~\cite{justin}. We also recall that the derivation of the electrostatic self-consistent equations of the primitive ion model~\cite{netzvar} with the use of this equality was presented in Ref.~\cite{jcp}.
Taking now the functional derivative of Eq.~(\ref{Dyson}) with respect to $J(\mathbf{r}')$ and setting $J(\mathbf{r}')=0$, one obtains the following equation for the two-point correlation function, \begin{eqnarray}\label{corr2} &&\nabla_\mathbf{r}^2\left\langle\phi(\mathbf{r})\phi(\mathbf{r}')\right\rangle\\ &&+4\pi\ell_BiQ\Lambda_s\int\frac{\mathrm{d}\mathbf{\Omega}}{4\pi}\mathrm{d}\mathbf{r}\;e^{\epsilon_s}\;\left\{\left\langle e^{iQ\left[\phi(\mathbf{r})-\phi(\mathbf{r}+\mathbf{a})\right]}\phi(\mathbf{r}')\right\rangle\right.\nonumber\\ &&\hspace{4cm}-\left.\left\langle e^{iQ\left[\phi(\mathbf{r}-\mathbf{a})-\phi(\mathbf{r})\right]}\phi(\mathbf{r}')\right\rangle\right\}\nonumber\\ &&+4\pi\ell_Bi\sum_i\Lambda_i\int\frac{\mathrm{d\mathbf{b}}}{\left(4\pi b_{pi}^2\right)^{3/2}}\mathrm{d}\mathbf{r}\;e^{-h_i(\mathbf{b})+\epsilon_i}\nonumber\\ &&\times\left\{\left\langle\left[e_ie^{ie_i\phi(\mathbf{r})+ic_i\phi(\mathbf{r}+\mathbf{b})}+c_ie^{ie_i\phi(\mathbf{r}-\mathbf{b})+ic_i\phi(\mathbf{r})}\right]\phi(\mathbf{r}')\right\rangle\right\}\nonumber\\ &&=-4\pi\ell_B\delta(\mathbf{r}-\mathbf{r}')\nonumber, \end{eqnarray} where the bracket $\left\langle\cdot\right\rangle$ denotes the field average with the Hamiltonian functional in Eq.~(\ref{HamFunc}). In Eq.~(\ref{corr2}), the dependence of the fluctuating solvent and ion densities (i.e. the functions inside the brackets on the l.h.s.) on the values of the potential at different points around $\mathbf{r}$ is a signature of the non-local electrostatic interactions resulting from the extended charge structure of the solvent molecules and ions~\cite{nlpb}. We emphasize that the formal equation~(\ref{corr2}) is an exact relation. However, because the Hamiltonian of Eq.~(\ref{HamFunc}) is non-linear in the potential $\phi(\mathbf{r})$, an exact analytical evaluation of the averages over the fluctuating potential is impossible. To progress further, we approximate this non-linear Hamiltonian with a quadratic Hamiltonian functional, \begin{equation}\label{gauss} H_0[\phi]=\int\frac{\mathrm{d}\mathbf{r}\mathrm{d}\mathbf{r}'}{2}\phi(\mathbf{r})v_0^{-1}(\mathbf{r},\mathbf{r}')\phi(\mathbf{r}'), \end{equation} where the kernel $v_0(\mathbf{r},\mathbf{r}')$ is chosen as the solution of Eq.~(\ref{corr2}), that is, $v_0(\mathbf{r},\mathbf{r}')=\left\langle\phi(\mathbf{r})\phi(\mathbf{r}')\right\rangle$. At this stage, we note that the spherical symmetry in the bulk liquid implies $v_0(\mathbf{r},\mathbf{r}')=v_0(\mathbf{r}-\mathbf{r}')$, which allows us to expand the correlation function in Fourier space as $v_0(\mathbf{r}-\mathbf{r}')=\int\frac{\mathrm{d}^3\mathbf{q}}{\left(2\pi\right)^3}\;e^{i\mathbf{q}\cdot(\mathbf{r}-\mathbf{r}')}\tilde{v}_0(q)$. Evaluating the averages in Eq.~(\ref{corr2}) with the quadratic functional~(\ref{gauss}) and injecting the Fourier expansion of the correlation function into the result, one finally obtains the explicit form of the potential~\cite{rem} \begin{equation}\label{varpot} \tilde{v}_0^{-1}(q)=\frac{q^2\tilde{\epsilon}(q)}{4\pi\ell_B}+\sum_i\rho_{ib}q_i^2, \end{equation} with the Fourier-transformed dielectric permittivity function \begin{eqnarray}\label{varep} \tilde{\epsilon}(q)&=&1+\frac{\kappa_s^2}{q^2}\left[1-\frac{\sin(qa)}{qa}\right]+\sum_i\frac{\kappa_{ip}^2}{q^2}\left\langle 1-\frac{\sin(qb)}{qb}\right\rangle.\nonumber\\ \end{eqnarray} We introduced in Eq.~(\ref{varep}) the solvent and ionic screening parameters in the air, $\kappa^2_s=8\pi Q^2\ell_B\rho_{sb}$ and $\kappa^2_{ip}=8\pi|e_ic_i|\ell_B\rho_{ib}$.
In Eq.~(\ref{varep}) we also defined the statistical average over the fluctuations of the electronic cloud, \begin{equation}\label{av} \left\langle F(b)\right\rangle=\frac{\int_0^\infty\mathrm{d}bb^2 \;e^{-h_i(b)-\psi_{ip}(b)}F(b)}{\int_0^\infty\mathrm{d}bb^2\;e^{-h_i(b)-\psi_{ip}(b)}}, \end{equation} with the potential of mean force (PMF) \begin{equation}\label{dippmf} \psi_{ip}(b)=-|e_ic_i|\int_0^\infty\frac{\mathrm{d}qq^2}{2\pi^2}\left[1-\frac{\sin(qb)}{qb}\right]\left[\tilde{v}_c(q)-\tilde{v}_0(q)\right], \end{equation} where the Fourier transform of the Coulomb potential in vacuum is given by $\tilde{v}_c(q)=4\pi\ell_B/q^2$. We also note that in deriving Eq.~(\ref{varpot}), we used the thermodynamic relations between the particle fugacities and concentrations, $\rho_{sb}=\Lambda_s\partial\ln Z/\partial\Lambda_s=\Lambda_se^{-\psi_s}$ and $\rho_{ib}=\Lambda_i\partial\ln Z/\partial\Lambda_i=\Lambda_i\int\mathrm{d}\mathbf{b} e^{-h_i(\mathbf{b})-\psi_{ip}(b)}/(4\pi b_{pi}^2)^{3/2}$, with the liquid state self-energies of solvent molecules and ions respectively defined as \begin{eqnarray}\label{PMFs} &&\psi_s=-Q^2\int_0^\infty\frac{\mathrm{d}qq^2}{2\pi^2}\left[1-\frac{\sin(qa)}{qa}\right]\left[\tilde{v}_c(q)-\tilde{v}_0(q)\right]\nonumber\\ &&\\ \label{PMFi} &&\psi_i(b)=-\int_0^\infty\frac{\mathrm{d}qq^2}{2\pi^2}\left\{\frac{e_i^2+c_i^2}{2}+e_ic_i\frac{\sin(qb)}{qb}\right\}\\ &&\hspace{2.9cm}\times\left[\tilde{v}_c(q)-\tilde{v}_0(q)\right].\nonumber \end{eqnarray} One can notice that the energies in Eqs.~(\ref{PMFs}) and~(\ref{PMFi}) correspond to the hydration energies of the solvent molecules and polarizable ions, i.e. the electrostatic cost of transferring the molecules from the gas to the liquid environment. Moreover, one sees that Eqs.~(\ref{dippmf}) and~(\ref{PMFi}) are related as $\psi_i(b)=-q_i^2\left[v_c(0)-v_0(0)\right]/2+\psi_{ip}(b)$, which indicates that the PMF $\psi_{ip}(b)$ brings the net contribution from the polarizability to the ionic hydration energy. Finally, unlike previous point dipole models where the electrostatic energies have to be regularized with an ultraviolet cut-off in Fourier space~\cite{orland2,dem}, our consideration of the finite solvent molecular size and electronic cloud extension results in a cutoff-free theory with well-defined self-energies in Eqs.~(\ref{dippmf})-(\ref{PMFi}). At this stage, we note that our motivation for subtracting from the Hamiltonian the gas phase self-energy of polarizable ions in Eq.~(\ref{canpart}) was twofold. First of all, this step allowed us to avoid the dipolar catastrophe problem. Indeed, the classical Drude oscillator model of Eq.~(\ref{hpol}) does not prevent the electron from falling into the nucleus, and this results in divergent ionic self-energies for $b\to 0$. This problem could be avoided in an alternative way by modifying the Drude model with a cut-off at small $b$, but we found that this technical complication obscures the transparency of the analytical results. Furthermore, the Drude potential is clearly an approximate way of accounting for the quantum mechanical interatomic interactions, which already include the electrostatic coupling between the electron and the nucleus. We also note that the subtracted self-energy of solvent molecules does not affect the statistical average in Eq.~(\ref{av}). The relations~(\ref{varpot})-(\ref{dippmf}) form a set of closure equations that should be solved self-consistently.
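As an illustrative sketch, the average of Eq.~(\ref{av}) and the PMF of Eq.~(\ref{dippmf}) can be evaluated by straightforward quadratures; the grids below are placeholders, and the Drude form $h_i(b)=b^2/(4b_{pi}^2)$ implied by Eq.~(\ref{hpol}) is assumed.

import numpy as np

def trapz(y, x):                       # simple trapezoidal rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# PMF of Eq. (dippmf); v_c and v_0 are arrays tabulated on the q grid
def psi_ip(b, q, v_c, v_0, ec_abs):
    sinc = np.sinc(q * b / np.pi)      # numpy convention: sinc(x) = sin(pi x)/(pi x)
    integrand = q**2 / (2.0 * np.pi**2) * (1.0 - sinc) * (v_c - v_0)
    return -ec_abs * trapz(integrand, q)

# Statistical average of Eq. (av); F, h_i and psi are arrays on the b grid
def average(F, b, h_i, psi):
    w = b**2 * np.exp(-h_i - psi)
    return trapz(w * F, b) / trapz(w, b)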
The two relations~(\ref{varpot}) and~(\ref{dippmf}) can also be interpreted as a single integral equation for the dielectric permittivity function $\tilde{\epsilon}(q)$ in Fourier space. One then notes that by computing the average in Eq.~(\ref{varep}) while neglecting the PMF~(\ref{dippmf}) in Eq.~(\ref{av}), one obtains the MF permittivity function derived in Ref.~\cite{nlpb}, \begin{eqnarray}\label{varepmf} \tilde{\epsilon}_{MF}(q)&=&1+\frac{\kappa_s^2}{q^2}\left[1-\frac{\sin(qa)}{qa}\right]+\sum_i\frac{\kappa_{ip}^2}{q^2}\left[1-e^{-b_{pi}^2q^2}\right].\nonumber\\ \end{eqnarray} Hence, electrostatic correlation effects are incorporated in the hydration PMF $\psi_{ip}(b)$. In the rest of the article, the solution of the closure equations~(\ref{varpot})-(\ref{dippmf}) will be considered in order to investigate the solvation of polarizable ions in high dielectric liquids. \section{Results} In this section, we solve the closure equations~(\ref{varpot})-(\ref{dippmf}) in order to shed light on the electrostatic mechanism behind the hydration effects observed in ab-initio calculations for polarizable ions in high dielectric liquids such as polar solvents~\cite{ionpol,ionpol2} and ionic liquids~\cite{ionion1,ionion2}. We first investigate in Section~\ref{polar} the hydration of a single polarizable ion in a polar liquid such as water, and we characterize in Section~\ref{ion} a similar cooperative solvation mechanism in ionic liquids exclusively composed of polarizable ions. \subsection{Hydration of a single polarizable ion in water} \label{polar} \begin{figure} \includegraphics[width=1.0\linewidth]{fig2.pdf} \caption{(Color online) Drude oscillator potential Eq.~(\ref{hpol}) (blue curve), hydration energy of Eq.~(\ref{pmfdil}) (red curve), and total distortion potential of a hydrated polarizable ion (black curve). Model parameters are $a=1$ {\AA}, $\rho_{sb}=10^{-4}$ M, $b_{pi}=1$ {\AA}, and $|e_ic_i|=2$.} \label{fig1} \end{figure} This section is devoted to the hydration of a single polarizable ion in a strongly polar liquid such as water. In the dilute ion regime, the PMF of Eq.~(\ref{dippmf}) has to be evaluated at the leading order in the ion concentration by neglecting the ionic contributions corresponding respectively to the second and third terms on the r.h.s. of Eqs.~(\ref{varpot}) and~(\ref{varep}). In order to illustrate the hydration mechanism in an intuitive way, we first consider a polarizable ion in a dilute solvent. By expanding Eq.~(\ref{dippmf}) at the order $O\left((\kappa_sa)^2\right)$, which is valid for the solvent molecular size $a=1$ {\AA} in the solvent density regime $\rho_{sb}\lesssim0.1$ M, one obtains for the PMF associated with the polarizability the closed-form expression, \begin{eqnarray}\label{pmfdil} \psi_{ip}(b)&=&-|e_ic_i|\frac{\ell_B}{2a}\left(\kappa_sa\right)^2\left\{\frac{3ab-b^2}{3a^2}\theta(a-b)\right.\\ &&\hspace{2.8cm}+\left.\frac{3ab-a^2}{3ab}\theta(b-a)\right\}.\nonumber \end{eqnarray} The hydration potential $\psi_{ip}(b)$ of Eq.~(\ref{pmfdil}) and the total distortion energy $h_i(b)+\psi_{ip}(b)$ are compared in Fig.~\ref{fig1} with the distortion potential of an isolated ion $h_i(b)$. One sees that the negative hydration potential $\psi_{ip}(b)$ results in a net reduction of the bare distortion energy $h_i(b)$. In other words, the hydration of a polarizable ion favors the expansion of its electronic cloud.
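For illustration, the dilute-solvent PMF of Eq.~(\ref{pmfdil}) and the total distortion potential shown in Fig.~\ref{fig1} can be evaluated as follows; the Drude form $h_i(b)=b^2/(4b_{pi}^2)$ is assumed, lengths are in Angstroms, and the numerical values (including the air Bjerrum length of order $5.5\times10^2$ {\AA}) are rough illustrative inputs rather than the figure's exact parameters.

import numpy as np

def psi_ip_dilute(b, a, kappa_s, l_B, ec_abs):   # Eq. (pmfdil); evaluate on b > 0
    pref = -ec_abs * l_B / (2.0 * a) * (kappa_s * a) ** 2
    inner = (3.0 * a * b - b**2) / (3.0 * a**2)  # theta(a - b) branch
    outer = (3.0 * a * b - a**2) / (3.0 * a * b) # theta(b - a) branch
    return pref * np.where(b < a, inner, outer)

def h_drude(b, b_pi):                            # bare distortion energy
    return b**2 / (4.0 * b_pi**2)

b = np.linspace(0.01, 5.0, 500)                  # b > 0 avoids the unused-branch division
total = h_drude(b, 1.0) + psi_ip_dilute(b, a=1.0, kappa_s=0.03, l_B=550.0, ec_abs=2.0)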
The expansion of the electronic cloud results from the fact that the Born energy of a point charge is proportional to the square of its valency, and the point charges on the polarizable ion are of opposite sign and satisfy the inequality $e_i^2+c_i^2>q_i^2$. As a result, the solvation energy of two separate charges with valencies $e_i$ and $c_i$ is lower than the Born energy of a single ion of valency $q_i$ in Eq.~(\ref{PMFi}), that is $\psi_i(b\to\infty)<\psi_i(b=0)$. It follows from this remark that for a rodlike molecule with the charges $e_i$ and $c_i$ of the same sign, hydration would in turn lead to a compression of the electronic cloud. Furthermore, the black curve in Fig.~\ref{fig1} shows that the total distortion potential $h_i(b)+\psi_{ip}(b)$ exhibits a minimum. This means that the polarizable molecule, which has no average dipole moment in the gas phase, acquires a net dipole moment upon hydration. One finally notes that in Eq.~(\ref{pmfdil}), the hydration potential converges for $b\gtrsim a$ to a constant value $\psi_{ip}=-|e_ic_i|\left(\kappa_sa\right)^2\ell_B/(2a)$. Thus, for dilute solvents, the hydration modifies the electronic cloud rigidity mainly at separation distances below the solvent molecular size. \begin{figure*} \includegraphics[width=.47\linewidth]{fig3a.pdf} \includegraphics[width=.47\linewidth]{fig3b.pdf} \includegraphics[width=.47\linewidth]{fig3c.pdf} \includegraphics[width=.47\linewidth]{fig3d.pdf} \caption{(Color online) (a) Enhancement of the ionic dipole moment $b_{mi}$ introduced in Eq.~(\ref{bm1}), (b) enhancement of the total polarizability $b_{tot,i}$ defined in Eq.~(\ref{btot}), and (c) reduction of the effective intrinsic polarizability $b_{vi}$ of Eq.~(\ref{bm2}) with increasing solvent density. The ion in the polar solvent has gas phase polarizability $b_{pi}=0.20$ {\AA}, and different ionic valencies from $|e_ic_i|=1$ to $3$ are considered. The results obtained from the numerical solution of the self-consistent equations~(\ref{varpot})-(\ref{dippmf}) at a fixed dipole moment $p_0=1$ {\AA} are displayed by solid curves for the solvent molecular size $a=1$ {\AA} and ion valencies $|e_ic_i|=1$ to $3$, and by dash-dotted black curves for $a=3$ {\AA} and $|e_ic_i|=3$. Dotted curves in (a) and (b) denote, for divalent molecules, the point dipole results of Eq.~(\ref{ptd}) obtained in the limit $a\to0$ of Eqs.~(\ref{varpot})-(\ref{dippmf}) at fixed dipole moment, and circles mark the asymptotic equations~(\ref{as1}) and~(\ref{as2}) derived in the same point dipole limit for large solvent concentrations. Dashed horizontal curves correspond to the complete ionic hydration state of Eqs.~(\ref{bmsat})-(\ref{btotsat}). (d) Dielectric permittivity profile around a point ion at $r=0$ for the solvent density $\rho_{sb}=55$ M.} \label{fig2} \end{figure*} To extend the investigation of the hydration-induced modification of the electronic cloud radius and rigidity beyond the dilute solvent regime, we can map Eqs.~(\ref{varpot}) and~(\ref{dippmf}) onto an effective polarizable ion model.
By absorbing the effect of the hydration potential $\psi_{ip}(b)$ into an effective Drude oscillator model \begin{equation}\label{disef} h_i(\mathbf{b})=\frac{(\mathbf{b}-\mathbf{b}_{mi})^2}{4b^2_{vi}}, \end{equation} with the average dipole moment (or electronic cloud radius) $b_{mi}$ and induced ion polarizability $b_{vi}$ in the liquid environment, and evaluating the average in Eq.~(\ref{varep}) with the distortion potential~(\ref{disef}) without the hydration PMF~(\ref{dippmf}), we are left with the effective permittivity function \begin{eqnarray}\label{mf2} \tilde{\epsilon}_{eff}(q)&=&1+\frac{\kappa_s^2}{q^2}\left[1-\frac{\sin(qa)}{qa}\right]\\ &&\hspace{.3cm}+\sum_i\frac{\kappa_{ip}^2}{q^2}\left[1-\frac{\sin(qb_{mi})}{qb_{mi}}e^{-b^2_{vi}q^2}\right].\nonumber \end{eqnarray} The comparison of the function~(\ref{mf2}) with Eq.~(\ref{varepmf}) indicates that at the MF level, the ion has no dipole moment ($b_{mi}=0$), and its polarizability is equal to the gas phase value ($b_{vi}=b_{pi}$). By expanding now Eqs.~(\ref{varep}) and~(\ref{mf2}) in the infrared (IR) regime up to the order $O(q^4)$ and identifying the quadratic and quartic terms in the wavevector $q$, one obtains the coupled equations $6b^2_{vi}+b^2_{mi}=\left\langle b^2\right\rangle$ and $60b^4_{vi}+20b^2_{mi}b^2_{vi}+b^4_{mi}=\left\langle b^4\right\rangle$. The solution of these equations respectively yields for the average dipole moment and induced polarizability of the hydrated ion \begin{eqnarray}\label{bm1} &&b^2_{mi}=\left[\frac{5}{2}\left\langle b^2\right\rangle^2-\frac{3}{2}\left\langle b^4\right\rangle\right]^{1/2}\\ \label{bm2} &&b^2_{vi}=b^2_{tot,i}-\frac{b^2_{mi}}{6}, \end{eqnarray} where we introduced the total ionic polarizability \begin{equation} \label{btot} b^2_{tot,i}=\frac{1}{6}\left\langle b^2\right\rangle. \end{equation} We evaluated the dipole moment and polarizabilities in Eqs.~(\ref{bm1})-(\ref{btot}) with the numerical solution of Eqs.~(\ref{varpot})-(\ref{dippmf}). Figure~\ref{fig2}(a) displays the variation of the ionic dipole moment $b_{mi}$ with solvent density for the gas phase polarizability $b_{pi}=0.2$ {\AA} and various molecular valencies (solid curves). First of all, it is seen that an increase of the solvent concentration is accompanied by a monotonic rise of the dipole moment from zero to $b_{mi}\simeq 4$ {\AA}, until the latter saturates in the density regime $\rho_{sb}> 10$ M where the ion becomes fully hydrated. Then, one notices in Fig.~\ref{fig2}(b) that the expansion of the average electronic cloud radius upon hydration results in turn in a severalfold amplification of the total polarizability $b_{tot,i}$. We note that the increase of the ionic polarizability upon hydration in a high dielectric liquid has been previously observed in ab-initio calculations with PCM solvent~\cite{ionpol,ionpol2}. This peculiarity was also revealed in Ref.~\cite{watwat} for water molecules, whose transfer from the gas to the liquid state was shown to be accompanied by a large amplification of their average dipole moment. In Section~\ref{ion}, it will be shown that a similar hydration mechanism is also present in ionic liquids. Moreover, in Fig.~\ref{fig2}(c), one sees that the effective intrinsic polarizability $b_{vi}$ exhibits in turn a monotonic decrease upon hydration, until it reaches in the fully hydrated state almost half of its gas phase value $b_{pi}$.
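As a cross-check of the moment-matching relations above, the inversion of Eqs.~(\ref{bm1})-(\ref{btot}) is a few lines of arithmetic. For the bare Drude weight one has $\left\langle b^2\right\rangle=6b_{pi}^2$ and $\left\langle b^4\right\rangle=60b_{pi}^4$, and the sketch below then returns $b_{mi}=0$ and $b_{vi}=b_{pi}$, consistent with the MF limit stated in the text.

import numpy as np

def effective_parameters(b2, b4):
    """Invert the moment relations, with b2 = <b^2> and b4 = <b^4>."""
    b_mi2 = np.sqrt(2.5 * b2**2 - 1.5 * b4)   # Eq. (bm1)
    b_tot2 = b2 / 6.0                          # Eq. (btot)
    b_vi2 = b_tot2 - b_mi2 / 6.0               # Eq. (bm2)
    return np.sqrt(b_mi2), np.sqrt(b_vi2), np.sqrt(b_tot2)

# MF sanity check with b_pi = 0.2: returns (0.0, 0.2, 0.2)
print(effective_parameters(6 * 0.2**2, 60 * 0.2**4))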
The decrease of $b_{vi}$ indicates that upon hydration, the electronic cloud of the polarizable molecule increases in size, but also reaches an enhanced rigidity. In other words, the hydration opposes the electronic cloud deformation resulting from thermal fluctuations. Interestingly, comparison of Figs.~\ref{fig2} (a) and (c) shows that the increase of the electron cloud rigidity manifests itself at considerably lower concentrations than its expansion. Furthermore, in Figs.~\ref{fig2} (a) and (b), one notices that a significant departure from the MF behavior with $b_{mi}=0$ and $b_{tot,i}=b_{pi}$ is observed above the characteristic solvent concentration $\rho_{sb}\simeq 10^{-3}$ M. This shows that in Fig.~\ref{fig2}(c), the hardening of the electronic cloud takes place already in the weak electrostatic coupling regime. Finally, in Figs.~\ref{fig2} (a)-(c), we note that although ions with a higher valency are clearly better solvated, the ionic dipole moment and polarizabilities exhibit weaker sensitivity to the molecular charge than the hydration energy in Eq.~(\ref{dippmf}), which depends linearly on the charge $|e_ic_i|$. \begin{figure} \includegraphics[width=1.0\linewidth]{fig4.pdf} \caption{(Color online) Rescaled ionic dipole moment (dashed red curves) and total polarizability (solid black curves) against the gas phase polarizability $b_{pi}$ for the solvent densities (a) $\rho_{sb}=2.0\times10^{-4}$ M and (b) $\rho_{sb}=55.0$ M, molecular charge $|e_ic_i|=2$, and solvent molecular size $a=1$ {\AA}. The curves are from the full numerical calculation, the black and red circles respectively correspond to the limiting laws~(\ref{bmdil}) and~(\ref{b2dil}), and the black and red squares are from the expressions~(\ref{bmsat}) and~(\ref{btotsat}) for the fully hydrated state.} \label{fig3} \end{figure} In order to characterize the scaling of the hydrated polarizabilities with the gas phase polarizability $b_{pi}$, we first consider the electrostatic weak coupling regime of dilute solvents. By evaluating the averages in Eqs.~(\ref{bm1}) and~(\ref{btot}) in the dilute solvent regime at the order $O\left((\kappa_sa)^2\ell_B/a\right)$, one obtains for the ionic dipole moment and the total polarizability \begin{eqnarray}\label{bmdil} \frac{b_{mi}^4}{b^4_{pi}}&=&\frac{12|e_ic_i|}{\sqrt\pi}\frac{\ell_B}{a}\left(\kappa_sa\right)^2f\left(\frac{a}{b_{pi}}\right)\\ \label{b2dil} \frac{b_{tot,i}}{b_{pi}}&=&1+\frac{|e_ic_i|}{3\sqrt\pi}\frac{\ell_B}{a}\left(\kappa_sa\right)^2g\left(\frac{a}{b_{pi}}\right), \end{eqnarray} where we introduced the auxiliary functions \begin{eqnarray} f(x)&=&x^{-1}\left[1-e^{-x^2/4}\right]\\ g(x)&=&x^{-1}-x^{-2}\sqrt\pi\;\mathrm{Erf}\left(\frac{x}{2}\right). \end{eqnarray} In Fig.~\ref{fig3}(a), we compare the prediction of these asymptotic laws (circles) with the numerical solution of Eqs.~(\ref{varpot})-(\ref{dippmf}) (continuous curves) for a dilute liquid with density $\rho_{sb}=2.0\times10^{-4}$ M. One notices that the behavior of the polarizabilities is characterized by two regimes separated by a peak located at $b_{pi}\simeq a/3$. Indeed, the asymptotic limits of Eqs.~(\ref{bmdil}) and~(\ref{b2dil}) indicate that the average electronic cloud radius and total polarizability grow with the gas phase polarizability as $b_{mi}\sim b_{pi}^{5/4}$ and $b_{tot,i}-b_{pi}\sim b_{pi}^2$ for $b_{pi}\ll a/3$ (left branch of the curves in Fig.~\ref{fig3}(a)), and $b_{mi}\sim b_{pi}^{3/4}$ and $b_{tot,i}-b_{pi}\sim\mathrm{const}$ for $b_{pi}\gg a/3$ (right branch of the curves).
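The limiting laws~(\ref{bmdil}) and~(\ref{b2dil}) are straightforward to evaluate numerically; the following sketch (with illustrative parameters) reproduces the two scaling branches discussed above.

import numpy as np
from scipy.special import erf

def f(x):                                # f(x) = [1 - exp(-x^2/4)]/x
    return (1.0 - np.exp(-x**2 / 4.0)) / x

def g(x):                                # g(x) = 1/x - sqrt(pi)*Erf(x/2)/x^2
    return 1.0 / x - np.sqrt(np.pi) * erf(x / 2.0) / x**2

def b_mi_dilute(b_pi, a, kappa_s, l_B, ec_abs):          # Eq. (bmdil)
    r4 = 12.0 * ec_abs / np.sqrt(np.pi) * (l_B / a) * (kappa_s * a)**2 * f(a / b_pi)
    return b_pi * r4**0.25

def b_tot_dilute(b_pi, a, kappa_s, l_B, ec_abs):         # Eq. (b2dil)
    corr = ec_abs / (3.0 * np.sqrt(np.pi)) * (l_B / a) * (kappa_s * a)**2 * g(a / b_pi)
    return b_pi * (1.0 + corr)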
Thus, the transition between these two regimes results from a competition between the solvent molecular size and the gas phase polarizability. In the opposite regime of concentrated solvents, the expansion of Eqs.~(\ref{varpot}) and~(\ref{dippmf}) for $\kappa_sa\gg1$ and $b_{pi}/a\ll1$ yields for the hydration energy the asymptotic limit \begin{equation} \psi_{ip}(b)\simeq-\frac{|e_ic_i|\ell_B}{b}\left[e^{-\kappa_sb}+\kappa_sb-1\right]. \end{equation} Neglecting the exponential term and expanding the total distortion potential $U_i(b)=h_{i}(b)+\psi_{ip}(b)$ around the equilibrium position, we are left with the quadratic potential $U_i(b)=\left(b-b_{mi}\right)^2/\left(4b_{vi}^2\right)$, corresponding to a Gaussian statistical weight, with the average electronic cloud radius and effective intrinsic polarizability \begin{eqnarray}\label{bmsat} \frac{b_{mi}}{b_{pi}}&=&\left(\frac{2|e_ic_i|\ell_B}{b_{pi}}\right)^{1/3}\\ \label{bvsat} \frac{b_{vi}^2}{b_{pi}^2}&=&\frac{1}{3}. \end{eqnarray} Substituting these relations into Eq.~(\ref{bm2}), the total ionic polarizability follows as \begin{equation}\label{btotsat} \frac{b_{tot,i}^2}{b_{pi}^2}=\frac{1}{3}\left[1+\frac{1}{2}\left(2|e_ic_i|\frac{\ell_B}{b_{pi}}\right)^{2/3}\right]. \end{equation} Figures~\ref{fig2}(a)-(c) show that the closed-form expressions in Eqs.~(\ref{bmsat})-(\ref{btotsat}) accurately reproduce the saturation values of the ionic dipole moment and the polarizabilities (dashed horizontal curves). First of all, in Eq.~(\ref{bvsat}), one notes that regardless of the ion charge, transferring the ion from the gaseous phase into the liquid environment reduces its intrinsic polarizability by a factor of three. Moreover, Eqs.~(\ref{bmsat}) and~(\ref{btotsat}) indicate that in the fully hydrated state, the ionic dipole moment and total polarizability grow as the cube root of the ion charge, which explains the weak dependence of the solvation on the molecular charge strength in Figs.~\ref{fig2}(a)-(c). We compare in Fig.~\ref{fig3}(b) the limiting laws~(\ref{bmsat}) and~(\ref{btotsat}) with the full numerical solution of the self-consistent equations for the solvent concentration $\rho_{sb}=55.0$ M. These equations indicate that in the range $b_{pi}=0.1$ {\AA} to $1.0$ {\AA}, the dipole moment and polarizability of the fully hydrated ion grow with the gas phase polarizability according to the $b_{pi}^{2/3}$ power law. We also note that, interestingly, the hydrated polarizabilities in Eqs.~(\ref{bmsat})-(\ref{btotsat}) are independent of the solvent molecular size. This peculiarity stems from the fact that the complete hydration takes place in the parameter regime $\kappa_s\gg a^{-1}$, where the part of the dielectric susceptibility function associated with the rotation of solvent molecules (i.e. the second term on the r.h.s. of Eq.~(\ref{varep})) makes no contribution to the hydration energy $\psi_{ip}(b)$ in Eq.~(\ref{dippmf}). In our previous work on the MF theory of polar liquids at charged interfaces, it was shown that the non-local character of electrostatic interactions in the solvent results from the finite size of solvent molecules~\cite{nlpb}. The effect of non-locality on the hydration mechanism can be estimated by varying the solvent molecular size $a$ at fixed dipole moment $p_0=Qa$.
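Before turning to non-locality, we note that the fully hydrated limit of Eqs.~(\ref{bmsat})-(\ref{btotsat}) reduces to elementary arithmetic; the sketch below evaluates it, with the inputs (e.g. an air Bjerrum length of order $5.5\times10^2$ {\AA}) given as illustrative values.

def saturated(b_pi, l_B, ec_abs):
    ratio = 2.0 * ec_abs * l_B / b_pi
    b_mi = b_pi * ratio ** (1.0 / 3.0)                                # Eq. (bmsat)
    b_vi = b_pi / 3.0 ** 0.5                                          # Eq. (bvsat)
    b_tot = b_pi * ((1.0 + 0.5 * ratio ** (2.0 / 3.0)) / 3.0) ** 0.5  # Eq. (btotsat)
    return b_mi, b_vi, b_tot

# e.g. saturated(0.2, 550.0, 2.0) gives b_mi of a few Angstroms,
# consistent with the saturation plateau of Fig. 2(a)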
To estimate this non-locality effect, we reexpress the dielectric permittivity function~(\ref{varep}) in the form \begin{equation}\label{perres} \tilde{\epsilon}(q)=1+\frac{\left(\kappa_sp_0\right)^2}{\left(Qqa\right)^2}\left[1-\frac{\sin(qa)}{qa}\right], \end{equation} and calculate the total polarizabilities~(\ref{bm1})-(\ref{btot}) with the above permittivity function by varying $a$ with the dipole moment fixed at $p_0=1$ {\AA}. In Figs.~\ref{fig2} (a) and (b), the comparison of the curves with $a=1$ {\AA} and $3$ {\AA} shows that the increase of the solvent molecular size at fixed dipole moment lowers the average electronic cloud radius and the total ionic polarizability. Hence, non-locality weakens the hydration of the polarizable ion. To explain this peculiarity, we note that in the dilute ion regime, the inverse Fourier transform of the potential in Eq.~(\ref{varpot}) is given by a generalized Coulomb law, $v_0(r)=\ell_B/\left[r\varepsilon(r)\right]$, with the local dielectric permittivity function \begin{equation}\label{locdi} \varepsilon(r)=\frac{\pi}{2}\left[\int_0^\infty\frac{\mathrm{d}k}{k}\frac{\sin(kr/a)}{\tilde{\epsilon}(k)}\right]^{-1}, \end{equation} and the dimensionless wavevector $k=qa$. The dielectric permittivity profile of Eq.~(\ref{locdi}) is reported in Fig.~\ref{fig2}(d). First of all, it is seen that the close vicinity of the ion at $r=0$ is characterized by a dielectric void. Then, one notes that the dielectric permittivity function in Eq.~(\ref{locdi}) depends solely on the rescaled distance $r/a$. This means that an increase of the solvent molecular size amplifies the dielectric void around a polarizable molecule, and consequently reduces its hydration energy in Eq.~(\ref{dippmf}). In the opposite point-dipole limit of solvent molecules $a\to 0$, the permittivity function~(\ref{perres}) tends to the bulk permittivity, $\tilde{\epsilon}(q)\to\varepsilon_b=1+4\pi\ell_Bp_0^2\rho_{sb}/3$, and the hydration PMF~(\ref{dippmf}) takes the simple form~\cite{rem2} \begin{equation}\label{psipt} \psi_{ip}(b)=\psi_{ip}(b\to\infty)+\frac{4\Gamma b_{pi}}{b}, \end{equation} with the dimensionless parameter \begin{equation}\label{coup} \Gamma=\frac{\left(\kappa_sa\right)^2}{6+\left(\kappa_sa\right)^2}\frac{|e_ic_i|\ell_B}{4b_{pi}}. \end{equation} Evaluating the integrals in Eq.~(\ref{av}) with the PMF~(\ref{psipt}), the moments of the electronic cloud oscillations can be expressed in terms of Meijer G-functions~\cite{math}, \begin{equation}\label{ptd} \frac{\left\langle b^{2n}\right\rangle}{b_{pi}^{2n}}=\left(2\Gamma\right)^{2n}\frac{\mathrm{G}_{03}^{30}\left(-n-\frac{3}{2},-n-1,0\left|\Gamma^2\right.\right)}{\mathrm{G}_{03}^{30}\left(-\frac{3}{2},-1,0\left|\Gamma^2\right.\right)}. \end{equation} The ionic dipole moment and total polarizability obtained from Eq.~(\ref{ptd}) are reported in Figs.~\ref{fig2}(a) and (b). One notices that the point dipole result is very close to the case with finite solvent molecular size $a=1$ {\AA}. Thus, for the model parameters chosen in this work, non-locality plays a minor role in the hydration process. It is interesting to note that in this parameter regime, the hydration of the polarizable ion can be solely described by the single coupling parameter $\Gamma$.
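As a sketch of how the point-dipole moments of Eq.~(\ref{ptd}) can be evaluated numerically, mpmath's Meijer G-function routine can be used (mpmath's argument conventions for $\mathrm{G}^{30}_{03}$ are assumed below; the value $\Gamma=5$ is illustrative).

from mpmath import meijerg

def moment_ratio(n, gamma):
    """<b^{2n}>/b_pi^{2n} from Eq. (ptd); G_{03}^{30} has m=3, n=0, p=0, q=3."""
    num = meijerg([[], []], [[-n - 1.5, -n - 1, 0], []], gamma**2)
    den = meijerg([[], []], [[-1.5, -1, 0], []], gamma**2)
    return (2 * gamma) ** (2 * n) * num / den

# feed <b^2> and <b^4> into the moment-matching step of Eqs. (bm1)-(btot)
b2, b4 = moment_ratio(1, 5.0), moment_ratio(2, 5.0)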
By expanding Eq.~(\ref{ptd}) in the asymptotic regime $\Gamma\gg1$, one obtains for the ionic dipole moment and total polarizability the following expressions, \begin{eqnarray} \label{as1} &&\frac{b_{mi}}{b_{pi}}=2\Gamma^{1/3}+\frac{2}{3}\Gamma^{-1/3}+O\left(\Gamma^{-1}\right)\\ \label{as2} &&\frac{b_{tot,i}}{b_{pi}}=\sqrt{\frac{2}{3}}\Gamma^{1/3}+\frac{7}{6\sqrt{6}}\Gamma^{-1/3}+O\left(\Gamma^{-1}\right). \end{eqnarray} In Figs.~\ref{fig2}(a) and (b), it is shown that the asymptotic laws~(\ref{as1}) and~(\ref{as2}) can accurately reproduce the increase of the ionic dipole moment and total polarizability from $\rho_{sb}=10^{-3}$ M to complete hydration. These equations indicate that the fully hydrated state of the polarizable ion is reached with increasing solvent concentration through the gradual saturation of the parameter $\Gamma$ in Eq.~(\ref{coup}). We consider next the counterpart of this hydration process in ionic liquids without solvent molecules. \subsection{Cooperative solvation in ionic liquids} \label{ion} \begin{figure} \includegraphics[width=1.0\linewidth]{fig5.pdf} \caption{(Color online) Effective dielectric permittivity of ionic liquids with bare polarizability $b_{pi}=0.2$ {\AA} and valency $n$. Solid curves are from the numerical solution of Eqs.~(\ref{varpot})-(\ref{dippmf}), dotted curves denote the full solvation limit in Eq.~(\ref{dielsat}), and square symbols are from the approximate scheme of Eqs.~(\ref{ptpol1})-(\ref{ptpol2}). The black dashed curve is the MF dielectric permittivity $\varepsilon_b=1+\xi_p$ for $n=3$. Inset: Total ionic polarizabilities (solid curves) and their saturation values from Eq.~(\ref{btotsat}) (dashed horizontal curves).} \label{fig4} \end{figure} Ionic liquids are promising molten-salt solvents that are gradually replacing water in new generation energy storage devices such as graphene-based capacitors~\cite{gr}. Accurate knowledge of the dielectric permittivity of ionic liquids is needed to predict the charge storage ability of these devices. In ab-initio calculations of ionic liquids composed of small ions with negligible dipole moments~\cite{ionion1}, it was found that the contribution from the electronic and orientational polarization of individual ions cannot alone explain the large dielectric permittivities measured in experiments~\cite{crc}. Based on this observation, it was also argued that an additional polarization effect induced by the surrounding ions must be present to explain the high dielectric permittivity values. In order to shed light on this point, we consider in this part the closure equations~(\ref{varpot})-(\ref{dippmf}) for an ionic liquid free of solvent molecules, and composed of two species of polarizable ions with the same bare polarizability $b_{pi}$ and bulk density $\rho_{ib}$. Furthermore, the point charges on the polarizable molecules are $e_{1,2}=\mp1$ and $c_{1,2}=\pm n$, which corresponds to the net molecular charges $q_{1,2}=\pm(n-1)$ (see Fig.~\ref{fig0}(b)). The dielectric permittivity of the medium at large separation distances from a central ion is obtained from the IR limit of Eq.~(\ref{varep}), $\varepsilon_b=\tilde{\epsilon}(q\to0)$, and it is given by \begin{equation}\label{diel} \varepsilon_b=1+\sum_i\kappa_{ip}^2b_{tot,i}^2, \end{equation} where the total ionic polarizability defined in Eq.~(\ref{btot}) has to be computed from the numerical solution of Eqs.~(\ref{varpot})-(\ref{dippmf}).
Indeed, for an ionic liquid, where the hydration of the polarizable ion affects the polarization of the surrounding medium in a self-consistent way, the solution of these equations is more involved. Our numerical scheme consisted of solving these equations by iteration on a discretized Fourier lattice. Namely, at the first iterative level, the MF permittivity of Eq.~(\ref{varepmf}) was used as the input function in the potential Eq.~(\ref{varpot}) in order to evaluate the hydration PMF in Eq.~(\ref{dippmf}), and the latter was injected at the next step into Eq.~(\ref{av}) to obtain the updated dielectric permittivity function from Eq.~(\ref{varep}). This procedure was continued until self-consistency was achieved. We illustrate in Fig.~\ref{fig4} the ionic polarizability (inset) and the dielectric permittivity of the liquid (main plot) obtained from the numerical solution of Eqs.~(\ref{varpot})-(\ref{dippmf}) (solid curves). First of all, it is seen that the increase of the ion density is accompanied by a strong amplification of the total ion polarizability, which in turn results in a rise of the dielectric permittivity of the medium. Then, in the inset of Fig.~\ref{fig4}, we note that unlike the case of a polarizable ion in a polar solvent (see Fig.~\ref{fig2}(a)), the ionic polarizability and the full hydration density exhibit a pronounced dependence on the molecular charge. These effects can be shown to be driven by the self-consistent solvation of polarizable ions by their own field. To this aim, we introduce respectively the charge and dipolar screening parameters \begin{eqnarray} \kappa_c^2&=&4\pi\ell_B\sum_i\rho_{ib}q_i^2=8\pi\ell_B(n-1)^2\rho_{ib}\\ \kappa_p^2&=&\sum_i\kappa_{ip}^2=16\pi\ell_Bn\rho_{ib}, \end{eqnarray} and the corresponding coupling parameters $\xi_c=\left(\kappa_cb_{pi}\right)^2$ and $\xi_p=\left(\kappa_pb_{pi}\right)^2$. In the dilute liquid regime, by expanding the closure relations~(\ref{varpot}) and~(\ref{dippmf}) up to the order $O\left(\xi_c\right)$ and $O\left(\xi_p\right)$, one obtains for the solvation PMF \begin{equation}\label{ionsolv} \psi_{ip}(b)=-\frac{|e_ic_i|\ell_B}{2b_{pi}}\left[\xi_c\frac{b}{b_{pi}}+\xi_pF\left(\frac{b}{b_{pi}}\right)\right], \end{equation} where we introduced the auxiliary function \begin{equation} F(x)=1+\sqrt\pi\frac{x}{4}-\frac{1}{2}e^{-x^2/4}-\sqrt\pi\frac{x^2+2}{4x}\mathrm{Erf}\left(\frac{x}{2}\right). \end{equation} One sees in Eq.~(\ref{ionsolv}) that the solvation energy is composed of a contribution from the charge screening (the first term on the r.h.s.), and a part resulting from the polarizability-induced dielectric screening of the ion by the surrounding ionic liquid (the second term on the r.h.s.). We display in Fig.~\ref{fig5} the PMF of Eq.~(\ref{ionsolv}) for a monovalent ionic solution ($n=2$) with concentration $\rho_{ib}=5\times10^{-5}$ M. It is seen that in this dilute liquid regime, the charge and dielectric screening effects independently lower the bare distortion energy $h_i(b)$ with equal weight, thus favoring the expansion of the electronic cloud. Then, we note that as in the case of a polarizable ion in a polar solvent considered in Section~\ref{polar}, the total distortion potential exhibits a minimum. In other words, in the liquid environment, the polarizable ion acquires a finite dipole moment. We emphasize that this effect has been previously observed in ab-initio calculations of ionic liquids composed of charges with fluctuating geometry~\cite{ionion1}.
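For concreteness, the iterative scheme described above can be sketched as a plain fixed-point loop; the update_eps callable is assumed to chain Eqs.~(\ref{varpot}), (\ref{dippmf}), (\ref{av}) and (\ref{varep}) on the discretized wavevector grid.

import numpy as np

def solve_self_consistently(eps_mf, update_eps, q, tol=1e-6, max_iter=200):
    eps = eps_mf(q)                     # first iterative level: Eq. (varepmf)
    for _ in range(max_iter):
        eps_new = update_eps(eps, q)    # (varpot) -> (dippmf) -> (av) -> (varep)
        if np.max(np.abs(eps_new - eps)) < tol:
            return eps_new              # self-consistency achieved
        eps = eps_new
    raise RuntimeError('self-consistency not reached')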
\begin{figure} \includegraphics[width=1.0\linewidth]{fig6.pdf} \caption{(Color online) Drude oscillator potential Eq.~(\ref{hpol}) (blue curve), the first (charge screening) and second (dielectric screening) terms of the ionic solvation energy in Eq.~(\ref{ionsolv}), denoted respectively by the red dotted and dashed curves, and the total distortion potential (black curve) for an ionic liquid composed of polarizable ions only, with ionic density per species $\rho_{ib}=5\times10^{-5}$ M, gas phase polarizability $b_{pi}=1$ {\AA}, and molecular charge $|e_ic_i|=2$.} \label{fig5} \end{figure} In order to determine the relative weight of the dielectric and charge screening mechanisms in the renormalization of the background dielectric permittivity beyond the dilute regime, we introduce an approximate solution scheme of Eqs.~(\ref{varpot})-(\ref{dippmf}). To this aim, we first redefine the hydration PMF of Eq.~(\ref{dippmf}) by subtracting the constant energy in the dissociated state, $\varphi_{ip}(b)=\psi_{ip}(b)-\psi_{ip}(b\to\infty)$. Introducing the dimensionless wavevector $k=b_{pi}q$ and separation distance $x=b/b_{pi}$, this PMF can be expressed as \begin{eqnarray} \label{dippmf2} \varphi_{ip}(x)&=&\frac{|e_ic_i|\ell_B}{b_{pi}}\frac{2}{\pi}\int_0^\infty\mathrm{d}k \frac{\xi_p\left\langle1-\frac{\sin(kx')}{kx'}\right\rangle\frac{\sin(kx)}{kx}}{k^2+\xi_c+\xi_p\left\langle1-\frac{\sin(kx')}{kx'}\right\rangle},\nonumber\\ \end{eqnarray} where the statistical average of the functions inside the brackets is still evaluated according to Eq.~(\ref{av}), with the dimensionless electronic cloud radius $x'=b'/b_{pi}$ as the integration variable. We now assume that the hydration PMF affects the electron cloud mainly at small separations $x<1$. This implies that in Eq.~(\ref{dippmf2}), only small wavevectors $k<1$ make a significant contribution to the integral. Based on this assumption, by expanding the sinusoidal functions inside the bracket of Eq.~(\ref{dippmf2}) at the order $O\left(k^2\right)$, the integral can be evaluated exactly. Within this approximation, the complicated integral equations~(\ref{varpot})-(\ref{dippmf}) for the dielectric permittivity are reduced to a simpler pair of non-linear relations, \begin{eqnarray}\label{ptpol1} &&\varepsilon_b=1+\frac{\xi_p}{6}\frac{\int_0^\infty\mathrm{d}xx^4e^{-x^2/4-\varphi_{ip}(x)}}{\int_0^\infty\mathrm{d}xx^2e^{-x^2/4-\varphi_{ip}(x)}}\\ \label{ptpol2} &&\varphi_{ip}(x)=\frac{\varepsilon_b-e^{-x\sqrt{\xi_c/\varepsilon_b}}}{\varepsilon_b}\frac{|e_ic_i|\ell_B}{b_{pi}x}, \end{eqnarray} where we made use of Eqs.~(\ref{av}) and~(\ref{diel}). In Fig.~\ref{fig4}, it is shown that the numerical solution of Eq.~(\ref{ptpol1}) can accurately reproduce the dielectric permittivity obtained from the closure equations~(\ref{varpot})-(\ref{dippmf}) over the whole density range. We now note that in the solvation PMF of Eq.~(\ref{ptpol2}), the contributions from the dielectric and charge screening correspond respectively to the first constant term $\varepsilon_b$ and the second exponential function in the numerator. This equation indicates that while increasing the ion concentration from the dilute regime, the exponential term is gradually dominated by the constant term in the numerator and becomes negligible for $\varepsilon_b\gg 1$. Thus, charge screening makes a significant contribution to the dielectric permittivity exclusively at low ion concentrations. To ascertain the latter point, we now consider the strict limit of large liquid densities with $\kappa_{p}b_{pi}\gg1$.
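Before taking that limit, we note that the reduced system~(\ref{ptpol1})-(\ref{ptpol2}) lends itself to a simple fixed-point iteration; the sketch below (with an illustrative grid and plain, undamped iteration) returns $\varepsilon_b$ for given coupling parameters.

import numpy as np

def trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def solve_eps_b(xi_p, xi_c, ec_lB_over_bpi, tol=1e-8, max_iter=500):
    x = np.linspace(1e-3, 30.0, 4000)   # dimensionless cloud radius grid
    eps = 1.0
    for _ in range(max_iter):
        phi = (eps - np.exp(-x * np.sqrt(xi_c / eps))) / eps \
              * ec_lB_over_bpi / x                       # Eq. (ptpol2)
        w = np.exp(-x**2 / 4.0 - phi)
        eps_new = 1.0 + xi_p / 6.0 * trapz(x**4 * w, x) / trapz(x**2 * w, x)  # Eq. (ptpol1)
        if abs(eps_new - eps) < tol:
            return eps_new
        eps = eps_new
    return eps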
Evaluating the PMF of Eq.~(\ref{dippmf}) in this large-density limit, we found that the total ionic polarizability is still given by the expression~(\ref{btotsat}) (see the horizontal lines in the inset of Fig.~\ref{fig4}). Substituting this relation into Eq.~(\ref{diel}), one obtains the dielectric permittivity of the ionic liquid in the fully solvated state, \begin{equation}\label{dielsat} \varepsilon_b=1+\frac{\xi_p}{3}\left[1+\frac{1}{2}\left(2|e_ic_i|\frac{\ell_B}{b_{pi}}\right)^{2/3}\right]. \end{equation} In the main plot of Fig.~\ref{fig4}, it is shown that this closed-form expression is a very good approximation for the dielectric permittivity of the ionic liquid beyond the dilute regime. One can note that in Eq.~(\ref{dielsat}), the dependence of the permittivity on the charge screening parameter $\xi_c$ has disappeared. This shows that close to the full solvation state, the collective solvation mechanism is solely driven by the dielectric screening induced by the polarizable ions. We also compare in Fig.~\ref{fig4} the MF level bulk dielectric permittivity $\varepsilon_b=1+\xi_p$ for the ion valency $n=3$ with the self-consistent result. The MF theory, which neglects the collective ionic solvation, is shown to strongly underestimate the dielectric permittivity of the ionic liquid. This observation is in line with Ref.~\cite{ionion2}, where the rotational polarizability associated with the gas phase dipole moment of ions was shown to be insufficient to explain the high dielectric permittivity of ionic liquids. This suggests that the cooperative hydration mechanism scrutinized in this part brings the main contribution to the dielectric permittivities of ionic liquids. Hence, correlation effects cannot be neglected in polarizable liquids. \section{Conclusion} We have introduced in this article a classical electrostatic theory of polarizable ions in high dielectric liquids. Within this theoretical framework, we have scrutinized the physical mechanism behind the ionic solvation properties observed in ab-initio calculations of polar solvents~\cite{ionpol,ionpol2} and ionic liquids~\cite{ionion1,ionion2}. In the first part of the article, we presented the electrostatic formulation of polarizable ions immersed in polar solvents composed of dipolar molecules with finite size. Then, we derived from the Dyson equation the electrostatic self-consistent relations accounting for the electrostatic correlations between the particles in the liquid. The second part of the article was devoted to the hydration of a single polarizable ion in a polar solvent such as water. It was shown that the electrostatic energy release experienced by the polarizable ion upon hydration results in the expansion of its electronic cloud. As a result, the ion carrying zero dipole moment in the gas phase acquires an average dipole moment in the liquid environment. However, the hydration also amplifies the rigidity of the electronic cloud, thereby opposing its deformation induced by thermal fluctuations. In qualitative agreement with quantum molecular calculations with PCM solvent~\cite{ionpol,ionpol2}, the overall effect was shown to be an enhancement of the gas phase polarizability upon hydration. In the third part of the article, we have investigated a cooperative solvation mechanism in ionic liquids free of solvent molecules.
We have found that, similarly to the case of a polarizable ion in a polar solvent and in agreement with ab-initio calculations of ionic liquids~\cite{ionion1}, each polarizable ion acquires in the liquid a finite dipole moment and an increased polarizability. This effect, resulting from the polarization field generated by the surrounding ions, self-consistently amplifies the dielectric permittivity of the medium. We note that this solvation-induced amplification of the dielectric permittivity is substantial even in the weak electrostatic coupling regime of dilute liquids. This suggests that the self-consistent solvation mechanism brings the dominant contribution to the dielectric permittivity of ionic liquids composed of small ions with a negligible permanent dipole moment in the gas phase~\cite{ionion2}. We have introduced the first microscopic theory of ionic hydration in explicit solvent, and we emphasize that both the model and the theoretical scheme need refinement. First of all, it should be noted that our approach does not account for hydrogen bond formation in the water solvent, which is believed to amplify the dielectric permittivity of water~\cite{ons}. This complication, expected to become significant beyond the dilute liquid regime, should be addressed in a future work by extending our approach beyond the Gaussian field approximation, i.e. by opting for a more sophisticated closure to solve Eq.~(\ref{corr2}). An additional complication for solvents at physiological concentrations comes from the importance of excluded volume effects associated with the finite size of the particles in the liquid. The first step in generalizing the model in this direction consists of including simple hard-core or repulsive Yukawa interactions between the particles, as in Refs.~\cite{duny,jstat,jcp1}. Then, our theoretical scheme should be extended to a second order cumulant expansion of the grand potential around the reference Hamiltonian Eq.~(\ref{gauss}). This generalization would allow us to determine how much our results are quantitatively modified beyond the dilute liquid regime. Indeed, we expect hard-core interactions between solvent molecules and ions to reduce the polarizability increase induced by the electrostatic hydration mechanism. In this sense, the results presented in this article beyond the dilute solvent regime should be considered as an upper bound on the actual ionic cloud expansion effect. Our results should also be compared, as a next step, with MC simulations of the polarizable ion model introduced in Sec.~\ref{mod}, but such simulations are currently unavailable. Finally, the consideration of the induced polarizability with a classical Drude potential is another limitation of the present model. Indeed, the ionic dipole moments in the solvated state provided by our theory are larger than the values observed in ab-initio calculations~\cite{ionpol,ionpol2,ionion1}. In particular, the Pauli exclusion effect neglected by the classical approach is expected to partially suppress the hydration-induced expansion of the electron cloud. However, refinements at the quantum level are of course beyond the scope and main message of the present work.
Indeed, the ability of the theory to qualitatively capture the ionic hydration effects observed in quantum molecular calculations for both polar solvents and ionic liquids, on the one hand, and the presence of these effects in the dilute liquid regime, where the complications discussed above are not expected, on the other hand, confirm the physical consistency of the model with real dielectric liquids. \\ \acknowledgements This work has been in part supported by the Academy of Finland through its Centres of Excellence Program (project no. 251748) and NanoFluid grants.
{ "redpajama_set_name": "RedPajamaArXiv" }
251
Q: How to use jQuery code in Meteor? I can't get jQuery code to run in Meteor! I have added the package with meteor add jquery. Possible solutions: make jQuery run in Meteor, OR convert the jQuery code to plain JS. Does somebody know how to solve this? Thanks a lot for your help! Example: $(document).ready(function() { $('.collapsible').collapsible({ accordion: false }); }); A: You don't need to explicitly add the jquery package inside a Meteor project because it usually already includes packages depending on jQuery (namely the templating packages). However, you can't just copy jQuery code examples into a Meteor app and expect them to work without a little bit of extra work: in particular, you need to initialize jQuery plugins only when the corresponding DOM elements have been inserted in the DOM by Blaze, the Meteor template rendering engine. Assuming that you have the following (MaterializeCSS) template markup: <template name="collapsible"> <ul class="collapsible" data-collapsible="accordion"> <li> <div class="collapsible-header"><i class="mdi-image-filter-drama"></i>First</div> <div class="collapsible-body"><p>Lorem ipsum dolor sit amet.</p></div> </li> <li> <div class="collapsible-header"><i class="mdi-maps-place"></i>Second</div> <div class="collapsible-body"><p>Lorem ipsum dolor sit amet.</p></div> </li> <li> <div class="collapsible-header"><i class="mdi-social-whatshot"></i>Third</div> <div class="collapsible-body"><p>Lorem ipsum dolor sit amet.</p></div> </li> </ul> </template> You'll need to initialize the collapsible plugin inside an onRendered lifecycle event: Template.collapsible.onRendered(function(){ // we're using the template instance scoped jQuery this.$('.collapsible').collapsible({ accordion: false }); });
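A follow-up note (an assumption on my part, not from the original answer): if the list items are rendered dynamically inside an {{#each}} block, you can wrap each item in its own sub-template and re-run the initialization from that sub-template's onRendered callback, so that elements added after the first render are picked up as well. The collapsibleItem template name below is illustrative:

Template.collapsibleItem.onRendered(function () {
  // climb from this item's first DOM node to the parent list and (re)initialize
  $(this.firstNode).closest('.collapsible').collapsible({ accordion: false });
});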
{ "redpajama_set_name": "RedPajamaStackExchange" }
5,259
<?php namespace Sylius\Behat\Page\Admin\Promotion; use Sylius\Behat\Page\Admin\Crud\IndexPageInterface as BaseIndexPageInterface; use Sylius\Component\Promotion\Model\PromotionInterface; /** * @author Arkadiusz Krakowiak <arkadiusz.krakowiak@lakion.com> */ interface IndexPageInterface extends BaseIndexPageInterface { /** * @param PromotionInterface $promotion * * @return int */ public function getUsageNumber(PromotionInterface $promotion); /** * @param PromotionInterface $promotion * * @return bool */ public function isAbleToManageCouponsFor(PromotionInterface $promotion); /** * @param PromotionInterface $promotion * * @return bool */ public function isCouponBasedFor(PromotionInterface $promotion); }
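A hypothetical usage sketch of this interface from a Behat context class (the $this->indexPage property is an illustrative assumption, not part of the shown file; the assertion uses the webmozart/assert library that Sylius depends on):

<?php

use Webmozart\Assert\Assert;

// hypothetical Behat step: coupon-based promotions should report a usage count
if ($this->indexPage->isCouponBasedFor($promotion)) {
    Assert::greaterThan($this->indexPage->getUsageNumber($promotion), 0);
}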
{ "redpajama_set_name": "RedPajamaGithub" }
267
Antara News, Wed, February 02 2011

A farmer picks cocoa at the Rangkahpawon plantation in Kediri, East Java. (Antara Photo/Arief Priyono)

Jakarta (ANTARA News) - With its ability to produce cocoa beans in large quantities, Indonesia is ready to overtake Ivory Coast as the world's largest producer and exporter.

For the last 20 years Indonesia has been the world's third-largest cocoa producer after Ivory Coast and Ghana, contributing export earnings in excess of US$1.4 billion per year. In the late 1980s cocoa growing began in earnest in the regions of Sulawesi island, and considerably lifted the fortunes of cocoa-growing communities over the next two decades.

West Sulawesi Governor Anwar Adnan Saleh said on Wednesday in the provincial city of Mamuju that he remained optimistic Indonesia would become the world's largest cocoa producer and exporter. In mid-2008, the Indonesian Government announced a large national program for the revitalization of the cocoa industry, known as Gernas Pro Kakao.

"Since the announcement of Gernas Pro Kakao program, I remain optimistic that West Sulawesi is able to turn Indonesia into the largest cocoa producing country," Governor Anwar Adnan said in Mamuju.

He said the program was intended to replace up to 70,000 hectares of cocoa, rehabilitate another 140,000 hectares and intensify farming on 300,000 hectares - bringing the total planted area to around 900,000 hectares of productive cocoa.

The West Sulawesi governor said Ivory Coast was at present the world's largest cocoa producing country while Indonesia was in third place, but he was optimistic that the latter would be first in the years to come. Therefore, in his presentation on the national cocoa revitalization program at the National Development Planning Board (Bappenas), Governor Anwar asked that the program, ongoing since 2008, be continued until 2014.

"The Gernas Pro Kakao program has a target to improve the production and quality of the commodity to exceed Ivory Coast in 2014," the governor said.

He said the West Sulawesi provincial government planned to step up cocoa production every year but was hampered by the limited budget from the central government. In 2009 West Sulawesi produced 40,000 tons of cocoa, but production declined in 2010 following the reduction of the budget from the central government.

According to the West Sulawesi governor, 80 percent of national cocoa production came from eastern Indonesian regions while the rest came from other provinces such as Bali, East Nusa Tenggara, and Aceh.

Vice Minister of Trade Mahendra Siregar also said Indonesia has a great opportunity to become the largest cocoa producer in the world, given its ability to produce cocoa beans in sufficiently large quantities. According to him, cocoa is the third-largest contributor to exports in the agricultural products group. As the world's third-largest cocoa producer after Ivory Coast and Ghana, Indonesia should increase its production of the commodity, he said.

He said that besides boosting production, the quality of Indonesian cocoa should also be improved, because the commodity has special characteristics not found in any other country.

Indonesia earlier exported about 80 percent of its cocoa beans, but with the imposition of the export tax, exports could be cut in favor of domestic grinders. The government hopes that in 2011 its exports of cocoa beans will drop from 80 percent to 50 percent.
The government has, since April 1, 2010, imposed a 15 percent tax on cocoa bean exports in order to boost the local processing industry and increase the added value of farmers' cocoa production. About 93 percent of Indonesia's 1.5 million hectares of cocoa plantations are owned by smallholders.

Since the imposition of the regulation last April, several cocoa processing companies have also planned to expand as of 2011, so that next year production of processed cocoa is projected to rise to 300,000 tons, meaning they would be able to process almost 50 percent of total national cocoa bean production.

According to the Indonesian Cocoa Association, cocoa bean exports from Indonesia's main growing region of Sulawesi island increased 2.1 percent last year. The shipments from the world's third-biggest producer of the chocolate ingredient rose to 280,708 metric tons in 2010 from 274,887 metric tons in 2009, according to data from the association.

"Cocoa output in 2010 didn't increase much because rainy weather disrupted harvests, and shipments were delayed," Indonesian Cocoa Association spokesman Zulhefy Sikumbang said.

Sulawesi accounts for about 75 percent of total output and overseas sales of the commodity from the Southeast Asian nation. Therefore the West Sulawesi governor said the province would continue to maintain its cocoa production as a national commodity and to develop it to boost national economic growth. He said cocoa would be included among the 18 categories of superior commodities meant to step up national economic growth.

"The central government earlier did not include cocoa into 18 categories of superior seed plants to be nationally developed in an effort to step up national economic growth," Anwar said. But he added that the West Sulawesi provincial government would maintain cocoa and continue to develop it as a mainstay commodity in this country.

The governor said that following the Indonesian government's announcement of the large national program for the revitalization of the cocoa industry, known as Gernas Pro Kakao, around 25 provinces would implement the program in 2011.

"We are optimistic that the Gernas Pro Kakao program for all cocoa production areas in Sulawesi island and almost all eastern Indonesian regions will make Indonesia the world's largest cocoa producing country," Governor Anwar Adnan Saleh said.

Editor: Aditia Maruli
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
6,920
<?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <modelVersion>4.0.0</modelVersion> <parent> <groupId>org.apache.servicemix.bundles</groupId> <artifactId>bundles-pom</artifactId> <version>10</version> <relativePath>../bundles-pom/pom.xml</relativePath> </parent> <groupId>org.apache.servicemix.bundles</groupId> <artifactId>org.apache.servicemix.bundles.qpid</artifactId> <packaging>bundle</packaging> <version>0.22_2-SNAPSHOT</version> <name>Apache ServiceMix :: Bundles :: ${pkgArtifactId}</name> <description>This OSGi bundle wraps qpid-common and qpid-client ${pkgVersion} jar files.</description> <properties> <pkgGroupId>org.apache.qpid</pkgGroupId> <pkgArtifactId>qpid</pkgArtifactId> <pkgVersion>0.22</pkgVersion> <servicemix.osgi.export.pkg> org.apache.qpid </servicemix.osgi.export.pkg> <servicemix.osgi.import.pkg> javax.jms*, javax.naming*, javax.net*, javax.security*, javax.transaction*, edu.emory.mathcs.backport.java.util.concurrent;resolution:=optional, org.ietf.jgss;resolution:=optional, org.slf4j;resolution:=optional </servicemix.osgi.import.pkg> </properties> <dependencies> <dependency> <groupId>${pkgGroupId}</groupId> <artifactId>${pkgArtifactId}-common</artifactId> <version>${pkgVersion}</version> </dependency> <dependency> <groupId>${pkgGroupId}</groupId> <artifactId>${pkgArtifactId}-client</artifactId> <version>${pkgVersion}</version> </dependency> <!-- sources --> <dependency> <groupId>${pkgGroupId}</groupId> <artifactId>${pkgArtifactId}-common</artifactId> <version>${pkgVersion}</version> <classifier>sources</classifier> </dependency> <dependency> <groupId>${pkgGroupId}</groupId> <artifactId>${pkgArtifactId}-client</artifactId> <version>${pkgVersion}</version> <classifier>sources</classifier> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-shade-plugin</artifactId> <executions> <execution> <phase>package</phase> <goals> <goal>shade</goal> </goals> <configuration> <artifactSet> <includes> <include>${pkgGroupId}:${pkgArtifactId}-common</include> <include>${pkgGroupId}:${pkgArtifactId}-client</include> </includes> </artifactSet> <filters> <filter> <artifact>${pkgGroupId}:${pkgArtifactId}-common</artifact> <excludes> <exclude>**</exclude> </excludes> </filter> <filter> <artifact>${pkgGroupId}:${pkgArtifactId}-client</artifact> <excludes> <exclude>**</exclude> </excludes> </filter> </filters> <promoteTransitiveDependencies>true</promoteTransitiveDependencies> <createDependencyReducedPom>false</createDependencyReducedPom> 
<keepDependenciesWithProvidedScope>true</keepDependenciesWithProvidedScope> </configuration> </execution> </executions> </plugin> </plugins> </build> </project>
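A brief usage note (standard Maven practice rather than anything specific to this pom): the wrapped bundle is built and installed into the local repository with

mvn clean install

after which the resulting OSGi bundle can be deployed to a ServiceMix or Karaf container.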
{ "redpajama_set_name": "RedPajamaGithub" }
3,059
Q: Django DRF without db I need to build a REST API using Django. On GET requests my tool has to capture parameters from the URL and invoke a function for their manipulation. For example: for the input URL myapp/?name=john&birthdate=11July, the function compute(name, birthdate) computes a transformation on the parameters, returning JSON as output. I don't understand how to proceed, considering that every tutorial I followed is about db interaction.

A: urls.py:

url(r'^myapp/$', views.myapp, name='myapp'),

views.py:

from django.http import JsonResponse

def myapp(request):
    name = request.GET.get('name', None)
    birthdate = request.GET.get('birthdate', None)
    if name and birthdate:
        result = compute(name, birthdate)  # assumed to return a dict
        return JsonResponse(result)
    return JsonResponse({'error': 'name and birthdate are required'}, status=400)

You don't need a database for that, and a view like this never touches one. (Django projects are usually configured with a database because built-in apps such as auth and sessions require one, but this endpoint itself does not.)
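Since the question title mentions DRF, here is a sketch of the same endpoint using Django REST Framework's function-based views (assuming compute is your own function returning a dict):

from rest_framework.decorators import api_view
from rest_framework.response import Response

@api_view(['GET'])
def myapp(request):
    name = request.query_params.get('name')
    birthdate = request.query_params.get('birthdate')
    if not (name and birthdate):
        return Response({'error': 'name and birthdate are required'}, status=400)
    return Response(compute(name, birthdate))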
{ "redpajama_set_name": "RedPajamaStackExchange" }
2,516
Fucecchio is an Italian municipality in the Tuscany region, within the metropolitan city of Florence. In 2004 it had a population of 21,912.

Municipalities of Florence
{ "redpajama_set_name": "RedPajamaWikipedia" }
1,090
Q: How to assign a new event for a second enter press I have a phonebook where you can add a contact name and number, either by clicking or by pressing enter. I made an Edit and a Delete button; the delete button works fine, and so does the Edit button. But when I press the Edit button and modify the value, I want to be able to save it by pressing enter, not only by clicking save. A little help would be appreciated. I tried out a few things, and this is the closest I got: here I can save the item by pressing enter, but it will create another table row, since I already have an event assigned to enter and that fires as well.
let form = document.querySelector('.form');
let inputMessage = document.getElementById('inputMessage');
let table = document.querySelector('.book-table');
let tableBody = document.createElement('tbody');
let nameInput = document.getElementById('name');
let numberInput = document.getElementById('number');
let submitBtn = document.querySelector('.btn');
let editBtn = document.querySelector('.edit-btn');
let deleteBtn = document.querySelector('.delete-btn');
let clickCount = 0; // was previously an implicit global

function addNumber() {
    if (nameInput.value != '' && numberInput.value != '') {
        let newRow = document.createElement('tr');
        let firstCol = document.createElement('td');
        let secondCol = document.createElement('td');
        let thirdCol = document.createElement('td');
        let fourthCol = document.createElement('td');
        let editBtn = document.createElement('button');
        let deleteBtn = document.createElement('button');

        firstCol.innerHTML = nameInput.value;
        secondCol.innerHTML = numberInput.value;
        firstCol.className = 'list-name';
        secondCol.className = 'list-number';
        editBtn.className = 'edit-btn';
        deleteBtn.className = 'delete-btn';
        editBtn.innerHTML = 'Edit';
        deleteBtn.innerHTML = 'Delete';

        table.appendChild(tableBody);
        tableBody.appendChild(newRow);
        newRow.appendChild(firstCol);
        newRow.appendChild(secondCol);
        newRow.appendChild(thirdCol);
        newRow.appendChild(fourthCol);
        thirdCol.appendChild(editBtn);
        fourthCol.appendChild(deleteBtn);

        nameInput.value = '';
        numberInput.value = '';

        inputMessage.style.visibility = 'visible';
        inputMessage.innerHTML = 'Contact Added Successfully';
        inputMessage.classList.add('message');
        inputMessage.classList.remove('error-message');
    } else {
        inputMessage.style.visibility = 'visible';
        inputMessage.innerHTML = 'Please fill out all required fields';
        inputMessage.classList.remove('message');
        inputMessage.classList.add('error-message');
    }
}

function editBook(e) {
    if (e.target && e.target.className == 'edit-btn') {
        e.target.innerHTML = 'Save';
        clickCount++;
        const td = e.target.parentNode.parentNode;
        // console.log(td);
        let editName = td.getElementsByTagName('td')[0];
        let editNumber = td.getElementsByTagName('td')[1];
        if (clickCount > 1) {
            // change HTML back to edit
            e.target.innerHTML = 'Edit';
            // set clickCount back to 0
            clickCount = 0;
        }
        // save the values from the input in the same table row
        let tmp = nameInput.value;
        // console.log(tmp, nameInput.value);
        nameInput.value = editName.innerHTML;
        // console.log(nameInput.value, editName.innerHtml);
        editName.innerHTML = tmp;
        // console.log(editName.innerHTML, tmp);
        let tmp2 = numberInput.value;
        numberInput.value = editNumber.innerHTML;
        editNumber.innerHTML = tmp2;
    }
}

let enterCount = 0;
numberInput.addEventListener('keydown', function (e) {
    let dynamicTd = document.getElementsByTagName('td');
    if (e.keyCode == 13) {
        if (numberInput.value != '' && nameInput.value != '' && enterCount < 1) {
            addNumber();
            enterCount++;
        } else if (editBtn.innerHTML == 'Save') {
            if (clickCount > 1) {
dynamicTd[2].innerHTML = 'Edit'; clickCount = 0; } let tmp = nameInput.value; nameInput.value = dynamicTd[0].innerHTML; dynamicTd[0].innerHTML = tmp; let tmp2 = numberInput.value; numberInput.value = dynamicTd[1].innerHTML; dynamicTd[1].innerHTML = tmp2; enterCount = 0; } } }); <form action="" class="form"> <label for="name">Name</label> <input type="text" id="name" name="name" required autocomplete="off"> <label for="number">Phone</label> <input type="number" id="number" name="number" required> </form> <button type="submit" class="btn">Add Contact</button> </div> <!-- Add error and success message --> <div id="inputMessage" class="message"> </div> <table class="book-table"> <thead> <tr> <th>Name</th> <th>Phone Number</th> <th colspan="2">Action</th> </tr> </thead> A: You could keep track of your editing state in a variable. Then when you press enter, have your function check the variable with an if statement. Return out of creating a new row, and then you should only have your edit.
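For concreteness, here is a minimal sketch of that idea. The names `isEditing` and `activeRow` are illustrative, not from the question; the sketch reuses `table`, `nameInput`, `numberInput` and `addNumber()` from the code above and assumes the same markup:

let isEditing = false; // are we currently editing a row?
let activeRow = null;  // the <tr> being edited, if any

// Entering edit mode: remember which row the clicked Edit button belongs to.
table.addEventListener('click', function (e) {
    if (e.target.className === 'edit-btn') {
        isEditing = true;
        activeRow = e.target.parentNode.parentNode;
    }
});

// A single Enter handler that checks the editing state first.
numberInput.addEventListener('keydown', function (e) {
    if (e.key !== 'Enter') return;
    if (isEditing && activeRow) {
        let cells = activeRow.getElementsByTagName('td');
        cells[0].innerHTML = nameInput.value;   // save the edited name
        cells[1].innerHTML = numberInput.value; // save the edited number
        cells[2].firstChild.innerHTML = 'Edit'; // restore the button label
        isEditing = false;
        activeRow = null;
        return; // crucial: do not fall through to addNumber()
    }
    addNumber(); // normal case: Enter adds a new contact
});

The early return is the key point: one listener serves both modes, so the "add" branch and the "save" branch can never both fire on the same key press.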
{ "redpajama_set_name": "RedPajamaStackExchange" }
3,032
Contents with the tag: Alejandro Gaviria Uribe

Alejandro Gaviria Uribe becomes president of the Universidad de los Andes. After having been elected by the Board of Governors, Alejandro Gaviria Uribe became the new president of the Universidad de los Andes on July 26, 2019.

Alejandro Gaviria Uribe is the new President of Universidad de los Andes. The Board of Governors of the Universidad de los Andes has elected the engineer and economist Alejandro Gaviria Uribe as the new President for an initial period of four years.

Professor at Los Andes takes part in a National Geographic cave paintings expedition. Experts research caves in Patagonia. Andrés Burbano from the Department of Design at the university carried out the photogrammetry of the site.

Uniandinos are presented with an award for creating vegan yarn. Design students win the Biodesign Challenge, an award endorsed by PETA, the Stella McCartney brand, and the Stray Dog Capital corporation.

Colombian involved in the construction of the Bayonne Bridge in New York. Daniela Moreno, who studied civil engineering at Los Andes, was involved in the design and construction of the new Bayonne Bridge.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
8,224
import {
    LOGIN_REQUEST,
    LOGIN_SUCCESS,
    LOGIN_FAILURE,
    LOGOUT_SUCCESS
} from './loginActions';

// Reducer for the authentication state. A user counts as authenticated on
// startup if a JWT ('id_token') is already present in localStorage.
const login = (
    state = {
        isFetching: false,
        isAuthenticated: localStorage.getItem('id_token') ? true : false
    },
    action
) => {
    switch (action.type) {
        case LOGIN_REQUEST:
            // A login attempt is in flight; keep the submitted credentials around.
            return Object.assign({}, state, {
                isFetching: true,
                isAuthenticated: false,
                user: action.creds
            });
        case LOGIN_SUCCESS:
            return Object.assign({}, state, {
                isFetching: false,
                isAuthenticated: true,
                errorMessage: ''
            });
        case LOGIN_FAILURE:
            return Object.assign({}, state, {
                isFetching: false,
                isAuthenticated: false,
                errorMessage: action.message
            });
        case LOGOUT_SUCCESS:
            return Object.assign({}, state, {
                isFetching: true,
                isAuthenticated: false
            });
        default:
            return state;
    }
};

export default login;
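A minimal usage sketch (ours, not part of the file), assuming the standard redux package and that './loginActions' exports the action-type constants imported above; the reducer file path is hypothetical:

import { createStore, combineReducers } from 'redux';
import { LOGIN_FAILURE } from './loginActions';
import login from './loginReducer'; // hypothetical path to the reducer above

const rootReducer = combineReducers({ login });
const store = createStore(rootReducer);

// Each dispatch produces a new, immutably updated state slice.
store.dispatch({ type: LOGIN_FAILURE, message: 'Bad credentials' });
console.log(store.getState().login.errorMessage); // 'Bad credentials'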
{ "redpajama_set_name": "RedPajamaGithub" }
3,944
It is critical to apply grease properly when replacing fuser film sleeves, and it is equally important to apply the right type of grease. Let us help you demystify the process of choosing the right grease and save some money.

The grease is used to lubricate fuser film sleeves. If the wrong grease is applied, the grease may dry out after long periods of printing, causing the film sleeve to stop turning and the printer to jam. For step-by-step replacement instructions, check out our article Fuser Film Sleeve Installation Instruction.

Purchasing grease for fuser film sleeves can be confusing. The short answer: HP-300 grease will work for all types of fuser film sleeves. Our HP-300 grease comes in two sizes, 2g (GRS-HP300-2G) and 20g (GRS-HP300-20G). For one fuser film sleeve replacement, the 2g package (GRS-HP300-2G) is sufficient; 2g of HP-300 grease may be enough for two film sleeves at most. The downside of HP-300 grease is that it is very expensive.

At Partsmart, we carry two types of grease. One is a fully fluorinated polymer grease called HP-300. The other is a silicone-based grease. HP-300 grease is for high-speed printers and fuser film sleeves with a metal base, whereas silicone-based grease is for low- to medium-speed (5 ppm to 25 ppm) printers and fuser film sleeves with a polymer base. Below we have listed the ideal grease type for different fuser film sleeves.

Metal-base sleeves are built for newer printer models, such as the HP LaserJet P4014, HP LaserJet Enterprise M601, HP LaserJet 4250, HP LaserJet 4300, and most color printers such as the HP Color LaserJet 4700, HP Color LaserJet CP2025, and HP Color LaserJet CP3525. These printers run fast and require more efficient heat transfer, so the sleeves require grease that provides extraordinary performance under extreme conditions.

GRS-HP-300 in 2g package: 2 grams of grease is enough for 2 film sleeves.
GRS-HP-300 in 20g package: 20 grams of grease is enough for about 20 film sleeves.

For older printer models, such as the HP LaserJet 1200, HP LaserJet 1300, HP LaserJet 2100, HP LaserJet 2200, HP LaserJet 2300, HP LaserJet 2400, HP LaserJet 4000, HP LaserJet 4100, HP LaserJet 6P and HP LaserJet 6L, the fuser film sleeves are made with a polymer base. These fuser film sleeves are suitable for low- to medium-speed printers and only require grease that performs well at relatively high temperature.

GRS-1.0 in 1 oz package: 1 oz of grease is enough for 25 film sleeves.
GRS-5.3 in 5.3 oz package: 5.3 oz of grease is enough for about 130 film sleeves.
{ "redpajama_set_name": "RedPajamaC4" }
6,258
Meet Porter Stansberry, the fraudster behind ominous 'NewAmerica3' ads

Steven Nelson, Associate Editor
November 08, 2011 11:14 PM ET

Many television viewers encounter strange, disjointed ads promoting NewAmerica3.com. The ads plead with viewers to visit the website, where the narrator promises a Nostradamus-like offering of dire prophecies for the future. What most viewers don't know is that the man behind the ad has been found liable in the past for defrauding investors.

Meet Frank Porter Stansberry. In 2003 the Securities and Exchange Commission filed a complaint against him for peddling false information to subscribers of his financial newsletter. The name of his company at the time was Pirate Investor.

According to the SEC, Stansberry told email subscribers that if they paid $1,000, he would provide a hot investment tip based on inside information from a "senior executive inside the company." He encouraged customers to purchase stock, promising they would "make a fortune." The man who supposedly provided Stansberry with the information denied ever doing so, and many investors lost thousands of dollars by acting on the fabricated information.

The SEC complaint declared that Stansberry "engaged in an ongoing scheme to defraud public investors by disseminating false information in several Internet newsletters." Approximately 1,217 individuals took the bait and purchased the report with fraudulent information after reading an email solicitation signed by Stansberry's pseudonym "Jay McDaniel." The solicitation was sent to at least 800,000 people and netted a total of $1,005,000.

In 2007 Stansberry was ordered to pay $1.5 million in restitution and penalties for the scam. The judge ruled that his actions "undoubtedly involved deliberate fraud" and "making statements that he knew to be false." In 2009 an appeal by Stansberry was denied. The Fourth Circuit Court of Appeals ruled that "it would take an act of willful blindness to ignore the fact that Appellants profited from the false statements."

Stansberry, for his part, vehemently denied wrongdoing. Writing in his own defense, he claimed his First Amendment rights as a publisher had been violated, and insisted his predictions were mostly correct.

That case wasn't Stansberry's only foray into questionable marketing tactics. In 2003 he told subscribers that investing in the company behind an unproven AIDS vaccine was "the only realistic chance to make 50 times your money in the short term." Stansberry's promotion of the AIDS vaccine company, VaxGen, was discounted by the company's own spokesman, who described "the tone and manner" of Stansberry's pitch as "very objectionable."

Visitors to NewAmerica3.com are presented with a video ominously warning of an apocalyptic future, and encouraging them to trust Stansberry's proven track record of accurate predictions. The website offers potential customers an amazing discount ($49.50, rather than the usual $99 rate) for a year-long subscription to a monthly online newsletter and access to several research reports provided by Stansberry's company, the Baltimore-based Stansberry & Associates Investment Research.

One of the products subscribers receive is a report titled "The 4 Investment Assets You Do NOT Have to Report to the U.S. Government." A description of the report promises information "vital to getting rich in the coming currency crisis." Other reports encourage investment in gold and silver.

The mass media have largely overlooked the barrage of Stansberry's television advertisements.
A public relations representative for one cable television channel told The Daily Caller that the ads may be distributed over external ad networks, rather than sold directly by channels. The ads can be blocked by individual channels, however, even if they are run by external providers.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
4,537
\section{Introduction and result}

Let $P\subset \mathbb{R}^2$ be a polygon in the plane with $\ell\ge 3$ vertices. We refer to $P$ as a container in what follows. Let $X_1,\ldots, X_n$ be $n\geq 3$ independent random points, distributed uniformly in $P$. We denote by $P_n$ the random convex hull $[X_1,\ldots, X_n]$ of these points. It is a random polygon in the container polygon $P$, see Figure \ref{fig:Polygon}. In this article we are interested in the combinatorial structure of $P_n$, more precisely in the variance expansion and the fluctuations of the number $f_0(P_n)$ of vertices of $P_n$. Note that this quantity is the same as the number $f_1(P_n)$ of edges of $P_n$.

\begin{figure}[t]
\centering
\includegraphics[width=0.5\columnwidth]{polygon2}
\caption{Random polygon in a polygon.}
\label{fig:Polygon}
\end{figure}

The random variable $f_0(P_n)$ has been intensively studied in the literature and has attracted a lot of interest in geometric probability as well as convex and integral geometry. For example, as $n\to\infty$, R\'enyi and Sulanke in their fundamental article \cite{RS63} have established the asymptotics
\begin{equation}\label{eq:expPolUnif}
\mathbb{E} f_0(P_n)= {2 \ell \over 3}\log n + \frac 23 \sum_{i=1}^\ell \log \left(\frac{F_i}{\textup{area}(P)}\right) + \frac{2 \gamma \ell }3 + o(1),
\end{equation}
for the expected number of vertices of $P_n$, where $F_i$, $i=1, \dots, \ell$, are the areas of the triangles formed by three consecutive vertices of $P$ (that is, $F_i=\textup{area}([v_{i-1},v_i,v_{i+1}])$ if $v_1,\ldots,v_\ell$ are the vertices of $P$ with the convention that $v_0=v_\ell$ and $v_{\ell+1}=v_1$). The corresponding variance asymptotics
\begin{equation}\label{eq:varPolUnif}
\operatorname{Var} f_0(P_n)={10\ell\over 27}\log n (1+o(1)),
\end{equation}
is due to Groeneboom \cite{Gro88}. In the same paper, Groeneboom also proved a central limit theorem for $f_0(P_n)$, as $n\to\infty$. Denoting by $\Phi(\,\cdot\,)$ the distribution function of a standard Gaussian random variable, B\'ar\'any and Reitzner have obtained the following quantitative version of the central limit theorem:
\begin{equation}\label{eq:BaraReitz}
\sup_{x\in\mathbb{R}}\Big|\mathbb{P}\Big({f_0(P_n)-\mathbb{E} f_0(P_n)\over \sqrt{\operatorname{Var} f_0(P_n)}}\leq x\Big)-\Phi(x)\Big|\leq c\,{(\log\log n)^{60}\over \sqrt{\log n}},
\end{equation}
where $c>0$ is some constant not depending on $n$ (note that in the published version \cite{BR10} of \cite{BR06} a different model for $P_n$ has been treated, which involves an additional randomization of the number of generating random points and which will also play a prominent role in our analysis). The double-logarithmic factor also appears in a central limit theorem of Pardon \cite{Pardon1, Pardon2}, which holds for general planar convex container sets and is not restricted to polygons or convex sets bounded by a sufficiently smooth closed curve.

The purpose of the present article is to demonstrate how the double-logarithmic factor in \eqref{eq:BaraReitz} can be removed for polygons. Since the resulting rate of convergence is then of the same order as $(\operatorname{Var} f_0(P_n))^{-1/2}$, we believe that, up to numerical constants, our result is in fact optimal. Moreover, our technique allows us at the same time to determine the precise expansion of \eqref{eq:varPolUnif} up to the constant-order term, leading to a second-order analogue of the classical result \eqref{eq:expPolUnif} for the expectation of R\'enyi and Sulanke.
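To put the first-order formulas into perspective, consider the following concrete instance (this illustration is ours). For the unit square we have $\ell=4$, $\textup{area}(P)=1$ and $F_i=\frac12$ for all $i$, so \eqref{eq:expPolUnif} and \eqref{eq:varPolUnif} specialize to
\begin{align*}
\mathbb{E} f_0(P_n) = \frac{8}{3}\log n - \frac{8}{3}\log 2 + \frac{8\gamma}{3} + o(1)
\qquad\text{and}\qquad
\operatorname{Var} f_0(P_n) = \frac{40}{27}\log n\,(1+o(1)).
\end{align*}
In particular, ignoring lower-order terms, of $n=1000$ uniform random points in the unit square only about $\frac{8}{3}\log 1000\approx 18$ are vertices of their convex hull on average.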
Even further, our technique allows us to make the $o(1)$-term in \eqref{eq:expPolUnif} more precise.

\begin{theorem}\label{thm:main}
For every planar polygon $P\subset\mathbb{R}^2$ and $n\geq 3$ we have that
$$ \sup_{x\in\mathbb{R}}\Big|\mathbb{P}\Big({f_0(P_n)-\mathbb{E} f_0(P_n)\over \sqrt{\operatorname{Var} f_0(P_n)}}\leq x\Big)-\Phi(x)\Big|\leq {c\over \sqrt{\log n}}, $$
where $c>0$ is some constant independent of $n$, with
\begin{align*}
\mathbb{E} f_0(P_{n}) &= {2 \ell \over 3}\log n + \frac 23 \sum_{i=1}^\ell \log \left(\frac{F_i}{\textup{area}(P)}\right) + \frac{2 \gamma \ell }3 + O((\log n)^2n^{-1/4})
\intertext{and}
\operatorname{Var} f_0(P_n) &= {10 \ell \over 27}\log n +{10\over 27}\sum_{i=1}^\ell \log \left(\frac{F_i}{\textup{area}(P)}\right) + \frac{(10 \gamma -2 \pi^2 )\ell }{27} + O( (\log n)^4 n^{-1/4}),
\end{align*}
as $n\to\infty$, where $\gamma\approx 0.57721\ldots$ is the Euler-Mascheroni constant.
\end{theorem}

\begin{remark}
Along the way to the proof of Theorem \ref{thm:main} we obtain, in Theorem \ref{thm:BerryEsseenPoisson}, the same statements also for the Poisson random polytope model previously considered in \cite{BR10}, that is, under the additional randomization that the number of generating points follows a Poisson distribution with mean $n$.
\end{remark}

\begin{remark}
Although this paper focusses on the vertex number of the random polygon $P_n$, we briefly discuss a direct consequence of Theorem \ref{thm:main} for the area of $P_n$. First, the classical Efron identity \cite{Ef} connects the expected number of vertices of $P_n$ with its area:
$$ {\mathbb{E}\textup{area}(P_n)\over\textup{area}(P)} = 1- {\mathbb{E} f_0(P_{n+1})\over n+1}. $$
Applying the expansion for $\mathbb{E} f_0(P_{n+1})$ in Theorem \ref{thm:main} we conclude that, as $n\to\infty$,
$$ \frac{\mathbb{E} \textup{area}(P \setminus P_n)}{\textup{area}(P)} = {2 \ell \over 3} (\log n)n^{-1}+ \bigg[\frac {2}3 \sum_{i=1}^\ell \log \left(\frac{F_i}{\textup{area}(P)}\right) + \frac{2 \gamma \ell }3 \bigg] n^{-1} + O( (\log n)^2 n^{-5/4}) . $$
Similarly, one can apply Buchta's identity \cite[Corollary 1]{Bu11}, which connects the variance of the area of $P_n$ with the first two moments of its vertex number:
$$ {\operatorname{Var}\textup{area}(P_n)\over\textup{area}(P)^2} = {\operatorname{Var} f_0(P_{n+2})+A_n-B_n\over(n+1)(n+2)} $$
with $A_n:=(\mathbb{E} f_0(P_{n+2}))^2-{n+2\over n+1}(\mathbb{E} f_0(P_{n+1}))^2$ and $B_n:=(2n+3)\mathbb{E} f_0(P_{n+2})-2(n+2)\mathbb{E} f_0(P_{n+1})$. In connection with the variance expansion in Theorem \ref{thm:main} this leads to
\begin{align*}
\frac {\operatorname{Var} \textup{area}(P_n)}{\textup{area}(P)^2}= {28 \ell \over 27} (\log n)n^{-2} +\bigg[{28\over 27}\sum_{i=1}^\ell \log \left(\frac{F_i}{\textup{area}(P)}\right) + \frac{(28 \gamma -2 \pi^2 )\ell }{27}\bigg]n^{-2} + O( (\log n)^4 n^{-9/4}) ,
\end{align*}
as $n\to\infty$.
\end{remark}

We would like to point out that if the container set $P$ of the random polygon $P_n$ has a $C^2$-smooth boundary with everywhere positive curvature, then the first Berry-Esseen bound for the vertex number in \cite{Reitz05} also contained an additional logarithmic factor, and the presumably optimal Berry-Esseen bound has been found in \cite[Theorem 3.5]{LachSchulteYukich}. Moreover, the approach in \cite{LachSchulteYukich} even allows one to deal with higher-dimensional random polytopes and with other geometric and combinatorial parameters.
However, we would like to stress at this point that the transition from smooth container sets to polygons appears to be highly non-trivial. One reason for this is the observation that, in contrast to smooth containers, the geometry of a random polygon in a polygon is not locally determined in a sufficiently strong sense. For example, for an arbitrary boundary point $x$ of the container set one can ask for the expected number of vertices of the random polygon $P_n$ one can ``see'' from $x$. While in the smooth case this number stays bounded for large $n$, in the case of a polygon it grows to infinity at a double-logarithmic speed. These and similar geometric facts are the reason why already the proof of a sub-optimal Berry-Esseen bound like \eqref{eq:BaraReitz} is considerably more involved compared to its counterpart for smooth container sets. Another aspect to be mentioned in this context is the strong concentration of the vertices of a random polygon in a small neighbourhood around the corners of the container polygon. This concentration phenomenon, which apparently does not take place in smoothly bounded container sets, makes it much harder to approximate the random polygon by the so-called floating body associated with the container. More precisely, even the careful refinement from \cite{BR10} of this approach automatically leads to the double-logarithmic factor in \eqref{eq:BaraReitz}.

We shall now briefly explain how we overcome these difficulties. In particular, this description makes it evident that our approach cannot be extended to deal with the vertex number, or more generally the number of faces of arbitrary dimension, of convex hulls of random points in polytopes of dimension more than two. The main ingredient we use in the proof of Theorem \ref{thm:main} is a decomposition of the boundary of the random polygon $P_n$ into so-called random convex chains, where each chain corresponds to one of the corners of the container polygon, see Figure \ref{fig:Step2}. In fact, the vertex number of $P_n$ is the same as the sum of the numbers of vertices of these chains, which after Poissonization of the construction become independent random variables. By a suitable affine transformation, each chain can be transformed into the following standard form without changing its combinatorial structure (of course, the number $m$ below depends in a suitable way on the size of the corresponding chain in $P_n$). Consider the triangle $T$ with vertices $(0,0)$, $(0,1)$ and $(1,0)$ and let $T_m$ be the convex hull of $(0,1)$ and $(1,0)$ together with $m\geq 1$ independent and uniformly distributed random points in $T$. The following Berry-Esseen bound for the vertex number $f_0(T_m)$ of the random convex chain $T_m$ has been obtained in \cite[Corollary 9]{GT21}:
\begin{equation}\label{eq:CLTChain}
\sup_{x\in\mathbb{R}}\Big|\mathbb{P}\Big({f_0(T_m)-\mathbb{E} f_0(T_m)\over \sqrt{\operatorname{Var} f_0(T_m)}}\leq x\Big)-\Phi(x)\Big|\leq {c\over\sqrt{\log m}},
\end{equation}
for some absolute constant $c>0$ and $m\geq 1$. We would like to mention on this occasion that this result is a consequence of an unexpected connection (which does not persist for polygons different from a triangle) between the random variables $f_0(T_m)$, the location of the zeros of certain orthogonal polynomials related to the probability generating function of $f_0(T_m)$, and a Berry-Esseen bound for sums of independent Bernoulli random variables.
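Before turning to the details, the following back-of-the-envelope computation may serve as a consistency check (this heuristic is our own aside; it is made rigorous in the steps below). Each of the $\ell$ chains behaves like a chain $T_m$ with $m$ of order $n$, so that $\log m=\log n+O(1)$, and the classical first-order chain asymptotics $\mathbb{E} f_0(T_m)=\frac 23\log m+O(1)$ and $\operatorname{Var} f_0(T_m)=\frac{10}{27}\log m+O(1)$ (recalled as \eqref{eq:expUnif} and \eqref{eq:varUnif} below) then suggest
\begin{align*}
\mathbb{E} f_0(P_n) \approx \sum_{i=1}^{\ell}\frac{2}{3}\log n = \frac{2\ell}{3}\log n
\qquad\text{and}\qquad
\operatorname{Var} f_0(P_n) \approx \sum_{i=1}^{\ell}\frac{10}{27}\log n = \frac{10\ell}{27}\log n,
\end{align*}
in accordance with the leading terms of \eqref{eq:expPolUnif} and \eqref{eq:varPolUnif}: to first order, the corners of $P$ contribute additively.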
We remark that \eqref{eq:CLTChain} is the main motivation behind Theorem \ref{thm:main}, and the highly non-trivial transition from \eqref{eq:CLTChain} to the Berry-Esseen bound for $f_0(P_n)$ in Theorem \ref{thm:main} is our main contribution. In particular, it involves a careful merge of the Berry-Esseen bounds of the individual convex chains based on Poissonization and conditioning arguments. A similar strategy will be applied in order to determine the asymptotic behaviour of the expectation and variance of $f_0(P_n)$ in Theorem \ref{thm:main}.

\section{Preliminaries}\label{sec:Prelim}

In this paper, $[A]$ stands for the convex hull of a set $A\subseteq\mathbb{R}^2$ and $\# A$ for its cardinality. In addition, if $A$ is a line segment, we denote its length by $|A|$. We write $B(x,r)$ for the closed disc centred at $x\in\mathbb{R}^2$ with radius $r>0$. Given a set $A\subset \mathbb{R}^2$ and a point $x\in\mathbb{R}^2$ we denote by $d(A,x):=\inf_{y\in A}\|y-x\|$ the distance between $A$ and $x$. For two functions $f$ and $g$ we write $f=O(g)$ if $\limsup\limits_{x\to\infty}|f(x)/g(x)|<\infty$ and $f=o(g)$ if $\lim\limits_{x\to\infty}|f(x)/g(x)|=0$. Given a line $L:=\{(x,y)\in\mathbb{R}^2\colon ux+vy=t\}$ we denote by
$$ L^+:=\{(x,y)\in\mathbb{R}^2\colon ux+vy\ge t\}\qquad \text{and} \qquad L^-:=\{(x,y)\in\mathbb{R}^2\colon ux+vy\leq t\} $$
the positive and negative half-planes into which $L$ divides $\mathbb{R}^2$, respectively.

Given a convex body $K\subset\mathbb{R}^2$ we consider the function $v:K\to\mathbb{R}$ defined as
$$ v(z):=\min\{\textup{area}(H\cap K)\colon H\text{ is a half-plane with }z\in H\}. $$
Then the floating body $K(v\ge \delta)$ with parameter $\delta>0$ is a level set of the function $v$, namely $K(v\ge \delta):=\{z\in K\colon v(z)\ge \delta\}$. The wet part is the set $K(v<\delta):=K\setminus K(v\ge \delta)$. In the case of a polygon $P$ with $\textup{area}(P)=1$ the following formula for the area of the wet part (and an analogous formula for the volume in arbitrary dimensions) was independently obtained in \cite{Schu91} and \cite{BB93}:
\begin{equation}\label{eq:wetpart}
\textup{area}(P(v<\delta))={\ell\over 4}\delta \log \frac 1{\delta}\, (1+o(1)).
\end{equation}
It was first observed by B\'ar\'any and Larman \cite{BarLar} that the random polygon $P_n$ is close to the floating body $P( v \geq n^{-1})$, with a similar result for the Poissonized random polytope. This connection was made precise in several aspects. B\'ar\'any and Dalla \cite[Theorem 1]{BD} proved the following fact, see also B\'ar\'any \cite[Theorem 7.4]{Barsurvey}. Note that it is possible to choose the same constant $b_0>0$ in the following two results, in which we denote by $\overline{\mathcal{A}}$ the complement of an event $\mathcal{A}$.

\begin{lemma}\label{le:floating-Pn}
Choose $n$ independent uniform random points $X_1, \dots, X_n$ in a container polygon $P$. Then there is a constant $b_0>0$ such that the event $\mathcal{A}_n := \{ P(v\ge b_0 n^{-1} \log n )\subset P_{n}\}$ satisfies
\begin{equation}
\mathbb{P}(\overline{\mathcal{A}_n} ) =O(n^{-6}).
\end{equation}
\end{lemma}

We also have to deal with the Poissonized random polytope in the following. To define this model properly, let $N$ be a Poisson random variable with mean $n$, and choose $N$ independent uniform random points $X_1, \dots, X_N$ in a polygon $P$, which are also independent of $N$. Then $X_1, \dots, X_N$ form a homogeneous Poisson point process $\eta$ in the polygon $P$ with $\mathbb{E}(\#\eta)=n$.
We denote its convex hull by $P_\eta=[\eta]$. The next lemma is a combination of Lemma 5.3 in \cite{BR10} and the Poissonized version of \cite[Theorem 1]{BD}.

\begin{lemma}\label{le:floating-Peta}
There are constants $b_0, c_0>0$ independent of $n$, such that the events
$$\mathcal{A}^\pi_n := \{P(v\ge b_0 n^{-1} \log n)\subset P_\eta\}\qquad\text{and}\qquad \mathcal{B}^\pi_n := \{\#(\eta \cap P(v< b_0 n^{-1} \log n ))\leq c_0 (\log n)^2\} $$
satisfy
$$ \mathbb{P}(\overline{\mathcal{A}^\pi_n}) \leq \mathbb{P}(\overline{\mathcal{A}^\pi_n \cap \mathcal{B}^\pi_n}) = O(n^{-6}). $$
\end{lemma}

During the proofs of this paper we switch several times between the Poisson model $P_\eta$ and the binomial model $P_n$ of the random polygons we consider. For this the following estimate will turn out to be helpful. It is a slight extension of a result of Vervaat \cite{Vervaat}.

\begin{lemma}\label{le:diff-Poisson-binom}
Let $n \in \mathbb{N}$ and $p\in(0,1)$. Put ${n \choose m}=0$ for $n<m$. Then
\begin{equation*}
\sum_{m=0}^\infty m^k \left|\frac{(np)^m}{m!} e^{-np} - {n \choose m} p^m (1-p)^{n-m} \right| \leq \begin{cases} 2p & k=0, \\ 2np^2 & k=1, \\ 2np^2 (1+ n p) & k=2. \end{cases}
\end{equation*}
\end{lemma}

\begin{proof}
The inequality for $k=0$ is due to Vervaat \cite{Vervaat}. The case $k=1$ follows from
\begin{align*}
\sum_{m=0}^\infty m \left|\frac{(np)^m}{m!} e^{-np} - {n \choose m} p^m (1-p)^{n-m} \right| & \leq np \sum_{m=0}^\infty \left|\frac{(np)^{m}}{m!} e^{-np} - {n-1 \choose m} p^{m} (1-p)^{n-m-1} \right| \\
&\leq 2np^2.
\end{align*}
The case $k=2$ is a combination of the case $k=1$ with
\begin{align*}
\sum_{m=0}^\infty m(m-1) & \left|\frac{(np)^m}{m!} e^{-np} - {n \choose m} p^m (1-p)^{n-m} \right| \\
\leq & (np)^2 \sum_{m=0}^\infty \left|\frac{(np)^{m}}{m!} e^{-np} - \frac{n-1}{n} {n-2 \choose m} p^{m} (1-p)^{n-m-2} \right| \\
\leq & (np)^2 \sum_{m=0}^\infty \left|\frac{(np)^{m}}{m!} e^{-np} - {n-2 \choose m} p^{m} (1-p)^{n-m-2} \right| \\
& + np^2 \sum_{m=0}^\infty \left| {n-2 \choose m} p^{m} (1-p)^{n-m-2} \right| \\
\leq & 2 p (np)^2 + np^2 .
\end{align*}
This completes the argument.
\end{proof}

\section{Proof of Theorem \ref{thm:main}}\label{sec:ProofSquare}

The strategy of the proof of Theorem \ref{thm:main} consists of the following main steps:
\begin{description}
\item[{Step 1:}] As a first step we randomize the model further by taking, instead of a fixed number $n$ of points distributed uniformly in $P$, a random number $N$ of points. More precisely, as the number of points we use a Poisson random variable with mean $n$, so that the collection of random points becomes a homogeneous Poisson point process. This construction is known as Poissonization and appears to be very helpful for our purposes. In fact, after Poissonization the numbers of points in disjoint regions become independent random variables.
\item[{Step 2:}] In order to prove the variance asymptotic and the Berry-Esseen bound for the Poisson model described in Step 1 we need to introduce an additional construction, which allows us to exclude some ``bad'' and ``unlikely'' events on which we do not have sufficient control over the geometric configuration.
\item[{Step 3:}] The third step is devoted to the proof of the variance asymptotic and the Berry-Esseen bound for the Poisson random convex chain. The corresponding results for the classical convex chain, proven in \cite{GT21}, are transformed using a so-called ``transfer lemma'', which has been used in similar situations before in the literature, see \cite{BR06,BR10, BV06,Reitz05,Vu06}.
\item[{Step 4:}] The fourth step is concerned with the proof of the variance asymptotic and the Berry-Esseen bound for the Poisson random polygon in $P$ under the conditions introduced in Step 2. As already explained in the introduction, the idea of the proof is to decompose the boundary of the random polygon into independent random convex chains, each of which corresponds to one corner of the container polygon. The result can then be obtained from the corresponding result for the Poissonized convex chain from Step 3.
\item[{Step 5:}] In this step we remove the constraints introduced in Step 2 and show that this does not change the quality of the variance asymptotic and the Berry-Esseen bound for the Poisson model.
\item[{Step 6:}] In the last step we deduce the variance asymptotic and the Berry-Esseen bound for the original model from the one for the Poisson model via de-Poissonization. The transition from the Poisson model to the binomial one is made using again the ``transfer lemma''.
\end{description}

In order to simplify some of our arguments we can and will from now on assume that the container polygon $P$ has area one. This is indeed possible, as the vertex number of $P_n$ does not change under rescaling of the container polygon $P$.

\subsection{Step 1: Introducing the Poisson model} \label{subsec1}

Let $\eta$ be a homogeneous Poisson point process in the container polygon $P$ with $\mathbb{E}(\#\eta) = n$. Formally, $\eta$ can be constructed as follows. Let $N$ be a Poisson random variable with mean $n$ and, independently of $N$, let $X_1,X_2,\ldots$ be a sequence of independent and uniformly distributed random points in $P$. Then $\eta$ can be defined as the random set of points $\{X_1,\ldots,X_N\}$, which is interpreted as the empty set if $N=0$, an event having probability $e^{-n}$. We will refer to $\eta$ as a homogeneous Poisson point process with intensity $n$ on $P$.

The well-known multivariate Mecke formula is a useful tool to compute expectations of Poisson functionals, see~\cite[Theorem 4.1]{LP}. It says that
\begin{equation}\label{eq:Mecke}
\mathbb{E} \sum_{(X_1,\ldots,X_k)\in \eta^k_{\neq}} f(X_1,\ldots,X_k,\eta) = n^k \, \mathbb{E} \int\limits_{P^k} f(x_1,\ldots,x_k,\eta\cup\{x_1,\ldots,x_k\})\,d x_1\ldots d x_k ,
\end{equation}
where $\eta^k_{\neq}$ denotes the set of all $k$-tuples of distinct points of $\eta$, and $f$ is a non-negative measurable function acting on a $k$-tuple of points and a locally finite point configuration in $\mathbb{R}^2$ to which these points belong.

We consider now the random polygon $P_{\eta}$, which (we recall) was defined as the convex hull of the Poisson point process $\eta$, that is, $P_{\eta}=[\eta]$. There is a clear connection between the random polygons $P_n$ and $P_{\eta}$, namely, $P_n$ has the same distribution as $P_\eta$, given that the number of points $N=\#\eta$ of $\eta$ is equal to $n$:
$$ P_n\stackrel{d}{=}(P_{\eta}\,|\,\#\eta = n). $$
The Poisson random polygon $P_{\eta}$ has been intensively studied. B{\'a}r{\'a}ny and Reitzner \cite{BR10} and Calka and Yukich \cite{CY17} showed that the number $f_0(P_\eta)$ of vertices of $P_\eta$ satisfies
\begin{align*}
\mathbb{E} f_0(P_{\eta})&={2 \ell \over 3} \log n (1+o(1)), \\
\operatorname{Var} f_0(P_{\eta})&= c_2 \ell \log n (1+o(1))
\end{align*}
for some constant $c_2>0$. In fact, more general results have been proven for arbitrary dimensions.
For the first formula see \cite[Theorem 1.2]{BR10}, where also a lower bound for the variance was obtained; an upper bound for the variance appears in \cite[Theorem 1.1]{BR10a}. For the precise asymptotics of the variance see \cite[Theorem 1.3]{CY17}. In addition, in \cite{BR10a} a Berry-Esseen bound for $f_0(P_\eta)$ is shown, which involves a double-logarithmic factor as in \eqref{eq:BaraReitz}. As explained above, we will give more precise estimates for the moments in Theorem \ref{thm:BerryEsseenPoisson}, remove the double-logarithmic factor in the central limit theorem for this Poisson random polygon model $P_\eta$, and eventually carry these results over to $P_n$.

\subsection{Step 2: Fixing the construction}

\begin{figure}[t]
\centering
\begin{tikzpicture}
\node at (0,0) {\includegraphics[width=0.8\textwidth]{polygon-triangle}};
\node at (-2.8,-0.9) {\small $Z_1$};
\node at (2.8,-1.5) {\small $Z_2$};
\node at (5,2.7) {\small $Z_3$};
\node at (-5.3,2.2) {\small $Z_\ell$};
\node at (0.4,-2.8) {\small $V_1$};
\node at (4.2,-0.7) {\small $V_2$};
\node at (-4.9,0.5) {\small $V_\ell$};
\node at (-5.8,-0.7) {\small $v_0=v_{\ell}$};
\node at (-2.7,-2.5) {\small $e_1$};
\node at (-0.2,-3.9) {\small $v_1$};
\node at (2.5,-2.5) {\small $e_2$};
\node at (5.1,-0.8) {\small $v_2$};
\node at (-1.2,-1.8) {\footnotesize $\delta_{1,1}$};
\node at (1.6,-2.1) {\footnotesize $\delta_{1,2}$};
\node at (-0.5,0) {\footnotesize $\Delta_1$};
\draw (-0.2,-1.4) -- (-0.6,-0.2);
\node at (3,1.1) {\small $\Delta_2$};
\draw (3,0.9) -- (3.6,0);
\node at (-3.5,1.5) {\small $\Delta_\ell$};
\draw (-3.7,1.3) -- (-4.4,1);
\end{tikzpicture}
\caption{Illustration of the construction in Step 2.}
\label{fig:Step2}
\end{figure}

Let $v_1,\ldots,v_\ell$ be the $\ell\geq 3$ consecutive vertices of the polygon $P$ and $e_1,\ldots,e_\ell$ be the $\ell$ consecutive edges of $P$, where $e_i =[v_{i-1}, v_i]$, with the convention that here and in what follows the index is taken modulo $\ell$, e.g. $v_0=v_\ell$. Further, denote by $\ell_i$ the length of the edge $e_i$, by $\alpha_i$ the angle of $P$ at the vertex $v_i$ and by $F_i$ the area of the triangle $[v_{i-1}, v_i, v_{i+1}]=[e_i, e_{i+1}]$, $1\leq i\leq \ell$.

For every edge $e_i$, $1\leq i\leq \ell$, with probability one there is exactly one point $Z_i:=(x_i,y_i)$ of the Poisson point process $\eta$ such that either $L^+(Z_i)\cap \eta=\{Z_i\}$ or $L^-(Z_i)\cap \eta=\{Z_i\}$, where $L(Z_i)$ denotes the line parallel to $e_i$ passing through $Z_i$, see Figure \ref{fig:Step2}. Without loss of generality we assume that $L(Z_i)$ is parametrized in such a way that $L^-(Z_i)\cap \eta=\{Z_i\}$ and thus $\eta \subset L^+(Z_i)$. It might happen that $\eta=\varnothing$ or that for some $1\leq i\leq \ell $ we have $Z_i=Z_{i+1}$, with $Z_{\ell +1}=Z_1$. Since by a conditioning argument we will exclude these situations in the next steps, we ignore these cases in the forthcoming discussion.

Denote the point of intersection of $L(Z_i)$ and $L(Z_{i+1})$ by $V_{i}:=L(Z_i)\cap L(Z_{i+1})$, see Figure \ref{fig:Step2}, and put $\Delta_i:=[Z_i,V_{i},Z_{i+1}]$, $1\leq i\leq \ell$, and $\Delta_i := Z_i$ if $Z_i=Z_{i+1}$. The lengths of the edges of the triangle $\Delta_i$ are denoted by $\delta_{i,i}=\|Z_i-V_i\| $ and $\delta_{i,i+1}= \| V_i - Z_{i+1} \| $.

In the next sections we will need the first moments and covariances of the logarithmic area of $\Delta_i$, and the probability that the triangle is not too small. We start with the following lemma.
\begin{lemma}\label{lm:smalldistanceedge}
There is a constant $c_P>0$ depending only on $P$, such that for $1\leq i\leq \ell$ we have
$$ \mathbb{P}( d(Z_{i}, e_i) \geq x ) \leq e^{- c_P n x}. $$
\end{lemma}

\begin{proof}
Recall that $\ell_i$ denotes the length of $e_i$. If $d(Z_{i}, e_i) \geq x $ then $L^-(Z_i) \cap P$ contains a triangle with base $\ell_i$ and height $x$, implying that
$$\textup{area}( L^-(Z_i) \cap P) \geq \frac{ \ell_i}2 x .$$
Because $\eta \subset L^+(Z_i)$, the interior of the set $L^-(Z_i) \cap P$ contains no point of $\eta$, and hence
$$ \mathbb{P}( d(Z_{i}, e_i) \geq x ) \leq e^{- n \, \frac{\ell_i}2 x}. $$
Thus, the result follows with $c_P:= \min_i \frac{\ell_i}2 $.
\end{proof}

From now on we assume that $d(Z_i, e_i) \leq n^{- \frac 34}$, $Z_i \neq Z_{i+1}$, and $\delta_{i,j} \geq n^{- \frac 14}$ for $1\leq i\leq \ell$ and $j=i, i+1$ (note that the latter condition already implies that $Z_i\neq Z_{i+1}$, which we have included only for convenience). We denote this event by $\mathcal{E}$. Observe that given $\mathcal{E}$ we have
\begin{equation}\label{eq:Delta>n-12}
\textup{area}(\Delta_i) \geq \frac 12 \min_{1\leq j\leq \ell} \sin \alpha_j \ n^{- \frac 12} .
\end{equation}
In a first step we show that $\mathcal{E}$ happens with high probability.

\begin{corollary}\label{cor:complcE}
There are constants $\underline c_P, \overline c_P>0$ only depending on $P$, such that
$$ \underline c_Pn^{- 1}\leq \mathbb{P}(\overline{\mathcal{E}})\leq \overline c_P n^{-\frac 14}. $$
\end{corollary}

\begin{proof}
The event that $d(Z_i, e_i) \geq n^{- \frac 34}$ for some $i=1,\ldots,\ell$ occurs according to Lemma \ref{lm:smalldistanceedge} with probability at most $\ell e^{- c_P n^{ \frac 14} }$. The probability of the event that $V_i=Z_i=Z_{i+1}$ for some $i=1,\ldots,\ell$ can, again according to Lemma \ref{lm:smalldistanceedge}, be estimated by
\begin{align*}
\sum_{i=1}^\ell\mathbb{P}(Z_i=Z_{i+1}) & \leq \sum_{i=1}^\ell \mathbb{P}(Z_i=Z_{i+1}, d(Z_i, e_i) \leq n^{- \frac 34}, d(Z_{i+1}, e_{i+1}) \leq n^{- \frac 34}) + \ell e^{- c_P n^{\frac 14}} \\
& \leq \sum_{i=1}^\ell \mathbb{P}\big(\#\big(\eta \cap B(v_i,(\sin(\alpha_i/2))^{-1}n^{- \frac 34})\big) \geq 1\big) + \ell e^{- c_P n^{\frac 14}} \\
& \leq \ell \big[1-\exp\big(-2 \pi\max_{1\leq i\leq \ell}(\sin(\alpha_i/2))^{-1} n^{- \frac 12} \big)\big] + \ell e^{- c_P n^{\frac 14}} = O(n^{- \frac 12}).
\end{align*}
Finally, we estimate $\mathbb{P}(\delta_{i,j} \leq n^{- \frac 14})$ for $i=1,\ldots,\ell$ and $j=i,i+1$ using the multivariate Mecke formula \eqref{eq:Mecke}. Denote by $L_i(x)$ the line through $x$ parallel to $e_i$ and by $L_i^+(x)$ the corresponding half-plane not containing the edge $e_i$. Taking for simplicity $i=1$, $j=2$, we have for $\delta_{1,2}=\delta_{1,2}(\eta)$ that
\begin{align*}
&\mathbb{P} \big( \delta_{1,2} \leq n^{- \frac 14}, \ d(Z_k, e_k) \leq n^{- \frac 34} \ \forall k=1,\ldots,\ell \big) \\
& = \mathbb{E} \sum_{(X_1, X_2,X_3) \in \eta_{\neq}^3} \mathbbm{1} (\delta_{1,2}(\eta)\leq n^{- \frac 14}) \mathbbm{1}\big(\eta \subset L_1^+(X_1) \cap L_2^+ (X_2)\cap L_3^+ (X_3), d(Z_k, e_k) \leq n^{- \frac 34} \ \forall k=1,\ldots,\ell \big) \\
&= n^3 \mathbb{E} \int\limits_{P}\int\limits_{P}\int\limits_{P} \mathbbm{1}(\delta_{1,2}(\eta\cup\{x_1,x_2,x_3\}) \leq n^{- \frac 14}) \\
&\hspace{1.5cm}\times\mathbbm{1} \big(\eta\cup\{x_1,x_2,x_3\} \subset L_1^+(x_1) \cap L_2^+ (x_2)\cap L_3^+ (x_3) , d(Z_k, e_k) \leq n^{- \frac 34} \ \forall k=1,\ldots,\ell \big) \,dx_1dx_2dx_3.
\end{align*}
Let us remark that the condition that $\eta\subset L_1^+(X_1) \cap L_2^+ (X_2)\cap L_3^+ (X_3)$ in the second line automatically implies that $Z_1=X_1$, $Z_2=X_2$ and $Z_3=X_3$. Also, the random variable $\delta_{1,2}(\eta)$ is determined by the random points $X_1$ and $X_2$ in the second line, and $\delta_{1,2}(\eta\cup\{x_1,x_2,x_3\})$ by the fixed points $x_1$ and $x_2$ in the third line. Now, we first integrate $x_2$ along the segment $L_2(x_2) \cap P$, where everything is fixed except the position of $x_2$ on $[V_1, V_2] \subset L_2(x_2)$; here $V_1$ and $V_2$ are points on $L_2(x_2)$ whose positions are determined by $x_1,x_2$ and $x_3$, see Figure \ref{fig:Step2} (in fact, the dependence of $V_2$ on the position of $x_3$ is the reason why we consider the three points $x_1,x_2,x_3$). Because $ \int_0^a \mathbbm{1}(x \leq n^{- \frac 14}) \, dx \leq n^{- \frac 14} a^{-1} \int_0^a dx $ and $\| V_1-V_2 \| = \ell_2(1+O(n^{- \frac 34}))$, we obtain
\begin{align*}
&\mathbb{P} \Big( \delta_{1,2} \leq n^{- \frac 14}, \ d(Z_k, e_k) \leq n^{- \frac 34} \ \forall k=1,\ldots,\ell \Big) \\
& \leq n^{- \frac 14} \ell_2^{-1} (1+O(n^{- \frac 34})) n^3 \mathbb{E} \int\limits_{P}\int\limits_{P}\int\limits_{P} \mathbbm{1} \big(\eta \subset L_1^+(x_1) \cap L_2^+ (x_2)\cap L_3^+ (x_3) , \ldots\\
&\hspace{5cm}\ldots d(Z_k, e_k) \leq n^{- \frac 34} \ \forall k=1,\ldots,\ell \big)\ dx_1dx_2dx_3 \\
& = n^{- \frac 14} \ell_2^{-1} (1+O(n^{- \frac 34})) \mathbb{P}\Big( d(Z_k, e_k) \leq n^{- \frac 34} \ \forall k=1,\ldots,\ell \Big),
\end{align*}
where in the last line we have applied the same argument as above, based on the multivariate Mecke formula, in reverse. Hence
\begin{align*}
&\mathbb{P} (\exists i\in\{1,\ldots,\ell\},j\in\{i,i+1\}:\delta_{i,j} \leq n^{- \frac 14})\\
& \leq \sum_{i=1}^{\ell}\sum_{j=i,i+1} \mathbb{P} \Big( \delta_{i,j} \leq n^{- \frac 14}, \ d(Z_k, e_k) \leq n^{- \frac 34} \ \forall k=1,\ldots,\ell \Big) + \ell e^{- c_P n^{\frac 14}} \\
&= O(n^{- \frac 14}).
\end{align*}
Combining the three estimates yields the right-hand side of the inequality.

On the other hand, there is some small $c_1>0$ depending on $P$ such that the probability that $B(v_1, n^{-1}) \cap P$ contains precisely one point of $\eta$ is
$$ \mathbb{P}\big(\#\big(\eta \cap B(v_1, n^{- 1})\big) = 1\big) = n\, \textup{area}(B(v_1, n^{- 1}) \cap P) e^{- n\, \textup{area}(B(v_1, n^{- 1}) \cap P) } = c_1 n^{- 1} e^{-c_1 n^{- 1}} $$
for $n$ sufficiently large. The area of the parallel strips along the edges $e_1$ and $e_{2}$ of width $n^{- 1}$, without the disc $B(v_1, n^{- 1})$, is upper bounded by $c_2 n^{-1}$ with some $c_2>0 $ depending on $P$. Hence, the probability that this region contains no points of $\eta$ is lower bounded by $ e^{- c_3} $ with $c_3>0$ depending on $P$. If these independent events occur, then the single point in $B(v_1, n^{- 1})$ is just $Z_1=Z_2$, which proves the left-hand side inequality, that is,
\begin{align*}
\mathbb{P}(\overline \mathcal{E} ) \geq \mathbb{P}(Z_1=Z_{2}) & \geq c_1 n^{- 1} e^{-c_1 n^{- 1} -c_3} \geq \underline{c}_P n^{- 1}.
\end{align*}
This completes the argument.
\end{proof}

In the second step we investigate the moments and mixed moments of $\log \delta_{i,i}$ and $\log \delta_{i,i+1}$ under the condition $\mathcal{E}$.
\begin{lemma}\label{lm:distancesmall}
The logarithmic moments satisfy, for $k=i, i+1$, $1\leq i\leq \ell$,
\begin{align}
\label{eq:E-log-delta}
\mathbb{E} (\log \delta_{i,k} |\mathcal{E} ) &=\log \ell_k -1 + O(n^{- \frac 34})
\intertext{and}
\label{eq:E-log-delta-sqr}
\mathbb{E} ((\log \delta_{i,k} )^2 |\mathcal{E}) &=(\log \ell_k-1)^2 + 1 +O(n^{- \frac 34}) .
\intertext{For the mixed logarithmic moments we obtain}
\label{eq:E-log-delta-neighbors}
\mathbb{E}[ (\log \delta_{i,i+1}) (\log \delta_{i+1,i+1})|\mathcal{E} ] &= (\log \ell_{i+1} -1)^2 + 1- \frac {\pi^2}6 + O(n^{- \frac 34})
\intertext{and, for $j \neq l$,}
\label{eq:E-log-delta-nonneighbors}
\mathbb{E}[ (\log \delta_{i,j}) (\log \delta_{k,l})|\mathcal{E} ]& = (\log \ell_{j} -1)(\log \ell_{l} -1) + O(n^{- \frac 34}).
\end{align}
\end{lemma}

\begin{proof}
We assume the event $\mathcal{E}$ and prove \eqref{eq:E-log-delta} e.g.\ for $\delta_{i,i+1}$ and $i=1$. Our argument will be similar to that in the proof of Corollary \ref{cor:complcE} and we will also use the same notation as introduced there. In particular, recall that $L_i(x)$ is the line through $x$ parallel to $e_i$ and $L_i^+(x)$ the corresponding half-plane not containing the edge $e_i$. The multivariate Mecke formula \eqref{eq:Mecke} yields
\begin{align*}
\mathbb{E} (\log \delta_{1,2} \mathbbm{1}(\mathcal{E})) & = \mathbb{E} \sum_{(X_1, X_2, X_3) \in \eta_{\neq}^3} \log \delta_{1,2}(\eta)\, \mathbbm{1}(\eta \subset L_1^+(X_1) \cap L_2^+ (X_2) \cap L_3^+ (X_3) , \mathcal{E}) \\
&= n^3 \mathbb{E} \int\limits_{P}\int\limits_{P}\int\limits_{P} \log \delta_{1,2}(\eta\cup\{x_1,x_2,x_3\}) \\
&\hspace{3cm}\times\mathbbm{1}(\eta\cup\{x_1,x_2,x_3\}\subset L_1^+(x_1) \cap L_2^+ (x_2) \cap L_3^+ (x_3) , \mathcal{E})\, d x_2 d x_3 d x_1 .
\end{align*}
We integrate $x_2$ on the line segment $[V_1,V_2]\subset L_2(x_2) \cap P$, where $V_1$ and $V_2$ are the points determined by $x_1,x_2,x_3$ as in Figure \ref{fig:Step2}. Because $ \int_0^a \log x\, dx = (\log a - 1) a $ and $\| V_1-V_2 \| = \ell_2(1+O(n^{- \frac 34}))$ under the event $\mathcal{E}$, we have that
\begin{eqnarray*}
\int\limits_{L_2(x_2) \cap P} \log \delta_{1,2}(\eta\cup\{x_1,x_2,x_3\}) \, d_{L_2(x_2)}x_2 &=& \int\limits_0^{\|V_1-V_2\|} \log x \, dx \\
&=& (\log \ell_2 - 1 +O(n^{- \frac 34})) \ell_2 \\
&=& (\log \ell_2 - 1 +O(n^{- \frac 34})) \int\limits_{L_2(x_2) \cap P} d_{L_2(x_2)}x_2 ,
\end{eqnarray*}
where $d_{L_2(x_2)}x_2 $ indicates that we are integrating with respect to the Lebesgue measure on $L_2(x_2)$. Thus,
$$ \mathbb{E} ( \log \delta_{1,2} \mathbbm{1}(\mathcal{E})) = (\log \ell_2 - 1 +O(n^{- \frac 34})) \mathbb{E} \mathbbm{1} (\mathcal{E}), $$
and
$$ \mathbb{E} ( \log \delta_{1,2} |\mathcal{E}) = \log \ell_2 - 1 +O(n^{- \frac 34}). $$
In this way we obtain Equation \eqref{eq:E-log-delta}. In the same way, this time using the identity $ \int_0^a (\log x)^2 dx = ((\log a)^2 - 2 \log a +2) a $, one proves \eqref{eq:E-log-delta-sqr}. Analogously, using the identity
\begin{eqnarray*}
\int\limits_0^a \log x \log(a-x) dx &=& a \left[(\log a )^2 - 2 \log a + 2- \frac {\pi^2}6 \right]
\end{eqnarray*}
we obtain for the expectation of the product of the logarithms of the two neighbouring distances
$$ \mathbb{E} ((\log \delta_{1,2})( \log \delta_{2,2} )| \mathcal{E} ) = (\log \ell_2 )^2 - 2 \log \ell_2 + 2- \frac {\pi^2}6 + O(n^{- \frac 34}) = (\log \ell_2 -1)^2 + 1- \frac {\pi^2}6 + O(n^{- \frac 34}). $$
Considering the product of two distances not on the same line $L_i(\,\cdot\,)$, e.g.
$$ \mathbb{E} (\log \delta_{1,1} \log \delta_{1,2} | \mathcal{E} ) \qquad\mbox{ or }\qquad \mathbb{E} (\log \delta_{1,2} \log \delta_{2,3} | \mathcal{E} ), $$
we rewrite this as a multiple integral using the multivariate Mecke formula once again, and integrate first with respect to $x_2$ on $L_2(x_2)$ to obtain
\begin{eqnarray*}
\mathbb{E} ((\log \delta_{1,1})( \log \delta_{1,2} )| \mathcal{E} ) &=& (\log \ell_2 - 1 +O(n^{- \frac 34})) \mathbb{E} (\log \delta_{1,1} | \mathcal{E} ) \\
&=& (\log \ell_2 - 1 ) (\log \ell_1 - 1 ) +O(n^{- \frac 34}),
\end{eqnarray*}
and similarly for all other cases. This shows that $\delta_{i,i+1}$ and $\delta_{i+1,i+1}$ are asymptotically uncorrelated with all other $\delta_{j,k}$ not on $L_{i+1}(\,\cdot\,)$. The proof is thus completed.
\end{proof}

The conditional second moments can be written in a more concise way in terms of the conditional covariances of the involved quantities.

\begin{corollary}
For $i,j,k,l\in\{1,\ldots,\ell\}$ it holds that
\begin{equation} \label{eq:cov-delta_i}
\operatorname{Cov} (\log \delta_{i,j} , \log \delta_{k,l}|\mathcal{E} ) = \mathbbm{1}(j=l) - \mathbbm{1}(j=l)\mathbbm{1}(|i-k|=1)\frac{\pi^2}6 + O(n^{- \frac 34}).
\end{equation}
\end{corollary}

For $i\in\{1,\ldots,\ell\}$, the logarithmic area of the triangle $\Delta_i$ equals
$$ \log \textup{area}(\Delta_i) = \log \frac{\sin \alpha_i}2 + \log \delta_{i,i} + \log \delta_{i,i+1} . $$
Because the edges of $\Delta_i$ of length $\delta_{i,i}$ and $\delta_{i,i+1}$ are parallel to $e_i$ and $e_{i+1}$, we see that
$$ F_i= \frac{\sin \alpha_i}2 \ell_i \ell_{i+1},\qquad 1\leq i\leq \ell, $$
is the area of the triangle $[v_{i-1}, v_i, v_{i+1}]=[e_i, e_{i+1}]$. Lemma \ref{lm:distancesmall} immediately yields the conditional expectation, variance and covariances of $\log\textup{area}(\Delta_i)$. For example,
$$ \operatorname{Cov} (\log \textup{area}(\Delta_i), \log \textup{area} (\Delta_k)|\mathcal{E}) = \sum_{j=i, i+1,\ l=k, k+1} \operatorname{Cov} (\log \delta_{i,j} , \log \delta_{k,l} |\mathcal{E}). $$
Combined with the formula for the conditional covariance \eqref{eq:cov-delta_i} this yields the following result.

\begin{corollary}\label{cor:moments-logarea}
For $i,k\in\{1,\ldots,\ell\}$ one has that
\begin{align}
\label{eq:E-log-area}\mathbb{E} (\log \textup{area} (\Delta_i )|\mathcal{E} ) &= \log F_i -2 + O(n^{- \frac 34}) ,
\\
\label{eq:Var-log-area} \operatorname{Var} (\log \textup{area} (\Delta_i) |\mathcal{E} ) &= 2+ O(n^{- \frac 34})
\intertext{and}
\label{eq:Covar-log-area} \operatorname{Cov} (\log \textup{area}( \Delta_i), \log \textup{area} (\Delta_k) |\mathcal{E} ) &= \mathbbm{1}(|i-k|=1) \left(1 - \frac{\pi^2}6 \right) + O(n^{- \frac 34}) .
\end{align}
\end{corollary}

\subsection{Step 3: Berry-Esseen bound for the Poisson random convex chain}

Let $T$ be the canonical triangle with vertices $(0,1)$, $(0,0)$ and $(1,0)$, and let $\chi$ be a homogeneous Poisson point process in $T$ with $\mathbb{E}(\#\chi)=M>1$. The convex hull of the two vertices $(0,1), (1,0)$ and the points of $\chi$ is denoted by
$$ T_{\chi}:= [\chi, (0,1),(1,0)] .$$
Denote by $f_0(T_\chi)$ the number of vertices of $T_\chi$. In order to obtain a Berry-Esseen bound for the Poissonized version of the random convex chain as in \eqref{eq:CLTChain} we will use the following transfer lemma taken from \cite[Lemma 3.2]{BR10}. The proof of this lemma can be found, for example, in \cite{BV06} (see also the remark after Lemma 3.2 in \cite{BR10}).
\begin{lemma}\label{lm:transference}
Let $\xi_n$ and $\xi'_n$ be two sequences of random variables with means $\mu_n\in\mathbb{R}$ and $\mu_n'\in\mathbb{R}$, and variances $0<\sigma_n^2<\infty$ and $0<\sigma_n^{'2}<\infty$, respectively. Assume that there are sequences $\varepsilon_1(n)$, $\varepsilon_2(n)$, $\varepsilon_3(n)$ and $\varepsilon_4(n)$, all tending to zero as $n\to \infty$, such that
\begin{itemize}
\item[(i)] $|\mu_n'-\mu_n|\leq \varepsilon_1(n)\sigma_n$,
\item[(ii)] $|\sigma_n^{'2}-\sigma_n^2|\leq \varepsilon_2(n)\sigma_n^{2}$,
\item[(iii)] for every $x\in\mathbb{R}$, $|\mathbb{P}(\xi_n'\leq x)-\mathbb{P}(\xi_n\leq x)|\leq \varepsilon_3(n)$,
\item[(iv)] for every $x\in\mathbb{R}$,
$ \Big|\mathbb{P}\Big({\xi_n'-\mu_n'\over \sigma_n'}\leq x\Big)-\Phi(x)\Big|\leq \varepsilon_4(n). $
\end{itemize}
Then there is a positive constant $C>0$ such that
$$ \sup_{x\in\mathbb{R}}\Big|\mathbb{P}\Big({\xi_n-\mu_n\over \sigma_n}\leq x\Big)-\Phi(x)\Big|\leq C\sum_{i=1}^4\varepsilon_i(n). $$
\end{lemma}

In order to verify conditions (i) and (ii) of the previous lemma for our model we derive the following asymptotic formulas for the expectation and the variance of $f_0(T_{\chi})$. For later purposes we also include the asymptotics for the second moment, and we formulate our result for general homogeneous Poisson point processes.

\begin{lemma}\label{lm:estimatesPoissonChain}
Consider a homogeneous Poisson point process $\chi$ in the canonical triangle $T$ with $\mathbb{E}(\# \chi)=M>1$. Then
\begin{align}
\mathbb{E} f_0(T_{\chi})&={2\over 3}\log M + \frac {2\gamma +7}3 +O(M^{-1/2}),\label{eq:poissonExp}\\
\operatorname{Var} f_0(T_{\chi})&= \frac {10}{27} \log M + \frac{10 \gamma + 2\pi^2 - 28 }{27} + O(M^{-1/2}) ,\label{eq:poissonVar}
\end{align}
as $M\to\infty$.
\end{lemma}

\begin{proof}
First of all, note that without loss of generality we may assume $M\in\mathbb{N}$, since $\log M= \log\lfloor M\rfloor + O(M^{-1})$. Denote by $H_n = \sum_{i=1}^{n} \frac 1i$ the harmonic sum and by $H^{(2)}_n = \sum_{i=1}^{n} \frac 1{i^2}$ the harmonic sum of second order. Set $H_0= H^{(2)}_0:=0$ for convenience. It is well known that
\begin{align}
H_n &= \log n +\gamma + O(n^{-1}),\label{eq:HarmNum1}\\
H^{(2)}_n &= \frac{\pi^2}6+O(n^{-1}),\label{eq:HarmNum2}
\end{align}
as $n \to \infty$, where $\gamma$ is the Euler-Mascheroni constant. Let $T_k$, $k\ge 1$, be the random convex chain built on a sample of $k$ independent random points $Y_1,\ldots, Y_k$, uniformly distributed inside the canonical triangle $T$. That is, $T_k:=[Y_1,\ldots, Y_k,(0,1),(1,0)]$. It is known from \cite[Corollary 1 and Corollary 2]{Buch12} that
\begin{align}
\label{eq:expUnif}
\mathbb{E} f_0(T_k)&= {2 \over 3} H_k + \frac 73, \\
\label{eq:varUnif}
\operatorname{Var} f_0(T_k)&={10\over 27} H_k + {4\over 9} H^{(2)}_k - {28 \over 27 } + {4 \over 9(k+1)} ;
\end{align}
note that the result in \cite{Buch12} is stated for the quantity $N_k= f_0(T_k)-2$. Let $Y$ be a Poisson random variable with mean $M$. Then
$$ \mathbb{E} H_Y = \sum_{k=0}^\infty (H_k-H_M) \mathbb{P}(Y=k) + H_M . $$
Because of the trivial estimate
$$ |H_Y-H_M| \leq \max \left\{ \frac {Y-M}{M}, \frac {M-Y}{Y+1} \right\} \leq \frac {|Y-M|}M + \frac {|M-Y|}{Y+1} $$
we have
\begin{align*}
|\mathbb{E} H_Y - H_M | \leq \mathbb{E} \, \frac {|Y-M|}M + \, \mathbb{E} \frac {|M-Y|}{Y+1} \leq \sqrt{\mathbb{E} \frac {(Y-M)^2}{M^2}} + \sqrt{\mathbb{E} \frac {(M-Y)^2}{(Y+1)^2}} .
\end{align*}
Since $\mathbb{E} Y=M$ and $\mathbb{E} Y^2 = M^2 + M $, we see that
$$ \mathbb{E} \, \frac{(Y-M)^2}{M^2} = M^{-2}\mathbb{E} Y^2 - 2 M^{-1} \mathbb{E} Y +1 = M^{-1} . $$
Analogously, since $ \mathbb{E} (Y+1)^{-1} = M^{-1} \left(1 - e^{-M} \right) \geq M^{-1} (1- M^{-1}) $ and
$$ \mathbb{E} (Y+1)^{-2} = \sum_{k=2}^\infty \frac {M^{k-2}} {k!} \frac {k} {k-1} e^{-M} \leq \sum_{k=2}^\infty \frac {M^{k-2}} {k!} \left(1+\frac {3} {k+1} \right) e^{-M} \leq M^{-2} (1+ 3 M^{-1}) , $$
we see that
\begin{align*}
\mathbb{E} \frac {(M-Y)^2}{(Y+1)^2} &= \mathbb{E} \frac {(M+1)^2}{(Y+1)^2} -2 \mathbb{E} \frac {(M+1)}{(Y+1)} + 1 \leq 17 M^{-1} .
\end{align*}
Hence, by \eqref{eq:HarmNum1},
$$ \mathbb{E} H_Y = H_M + O(M^{-\frac 12}) = \log M + \gamma + O(M^{-\frac 12}). $$
Together with \eqref{eq:expUnif} this proves \eqref{eq:poissonExp}.

Similarly, by the law of total variance we have
\begin{align*}
\operatorname{Var} f_0(T_{\chi}) &= \mathbb{E}_Y \operatorname{Var} (f_0(T_{k})|Y=k) + \operatorname{Var}_Y \mathbb{E} (f_0(T_k)|Y=k) \\
&= \mathbb{E} \left(\frac {10}{27} H_Y + {4\over 9} H^{(2)}_Y - {28 \over 27 } + {4 \over 9(Y+1)}\right) + \operatorname{Var} \left( \frac 23 H_Y \right),
\end{align*}
and by \eqref{eq:HarmNum2} we obtain
\begin{align*}
\operatorname{Var} f_0(T_{\chi}) &= \left(\frac {10}{27} \log M + \frac{10 \gamma + 2\pi^2 - 28 }{27} + O(M^{-\frac 12}) \right) + \frac 49 \operatorname{Var} H_Y .
\end{align*}
To prove \eqref{eq:poissonVar} it remains to show that the variance of $H_Y$ is of order $O(M^{-1})$. For this we use the Poincar\'e inequality for Poisson random variables, which says that
$$ \operatorname{Var} f(Y) \leq M \mathbb{E} (f(Y+1)-f(Y))^2 $$
for functions $f:\{0,1,2,\ldots\}\to\mathbb{R}$ for which $\operatorname{Var} f(Y)<\infty$. This inequality can be considered as a special case of the general Poincar\'e inequality \cite[Theorem 18.7]{LP} for functionals of Poisson random measures. Applying this to $f(Y)=H_Y$ we conclude that
$$ \operatorname{Var} H_Y \leq M \mathbb{E} \frac 1{(Y+1)^2} \leq \sum_{k=0}^\infty 2\, \frac{M^{k+1}}{(k+2)!} e^{-M} \leq 2 M^{-1}, $$
which completes the argument.
\end{proof}

Now we are prepared to prove a Berry-Esseen bound for the number of vertices of the Poisson random convex chain $T_\chi$ in the canonical triangle $T$.

\begin{lemma}\label{lm:BerryEssenPoissonChain}
Consider a homogeneous Poisson point process $\chi$ in the canonical triangle $T$ with $\mathbb{E}(\# \chi)=M>1$. Then
$$ \sup_{x\in\mathbb{R}}\Big|\mathbb{P}\Big({f_0(T_\chi)-\mathbb{E} f_0(T_\chi)\over \sqrt{\operatorname{Var} f_0(T_\chi)}}\leq x\Big)-\Phi(x)\Big|\leq {c\over\sqrt{\log M}} $$
for some absolute constant $c>0$.
\end{lemma}

\begin{proof}
As before, without loss of generality we assume $M\in\mathbb{N}$. Let $T_M$ denote the convex chain built on random points $X_1,\ldots, X_M$, independently and uniformly distributed in $T$. We apply Lemma \ref{lm:transference} with $\xi_M':=f_0(T_M)$ and $\xi_M:=f_0(T_{\chi})$. Condition (iv) with $\varepsilon_4(M)=c_4/\sqrt{\log M}$ for some constant $c_4>0$ independent of $M$ follows immediately from \eqref{eq:CLTChain}. Condition (i) with $\varepsilon_1(M)=c_1/\sqrt{\log M}$ for $c_1>0$ independent of $M$ can be verified by means of formulas \eqref{eq:poissonExp}, \eqref{eq:expUnif} and \eqref{eq:varUnif}. Analogously, condition (ii) with $\varepsilon_2(M)=c_2/\log M$ for $c_2>0$ independent of $M$ follows from \eqref{eq:poissonVar} and \eqref{eq:varUnif}.
For the verification of condition (iii) we use the convex floating body introduced in Section \ref{sec:Prelim} and follow an approach already used somewhat implicitly in \cite{Gro88} as well as in \cite{Reitz05}. Recall from Lemma \ref{le:floating-Pn} that on the event $\mathcal{A}_M$ the floating body $T(v\ge b_0 M^{-1}\log M)$ is contained in the convex hull of the $M$ random points; in particular, on $\mathcal{A}_M$ all vertices of the convex hull, and hence all vertices of the convex chain $T_M$, are contained in the wet part $T(v< b_0 M^{-1}\log M)$ of the triangle $T$. Analogously, on the event $\mathcal{A}^\pi_M$ all vertices of the Poisson convex hull, and thus all vertices of the Poisson convex chain $T_{\chi}$, belong to $T(v< b_0 M^{-1}\log M)$. By Lemma \ref{le:floating-Pn} and Lemma \ref{le:floating-Peta} we have
$$ \mathbb{P}(\overline{\mathcal{A}_M} ) = O(M^{-6})\qquad\text{and}\qquad \mathbb{P}(\overline{\mathcal{A}^\pi_M} ) = O(M^{-6}). $$

\begin{figure}[t]
\centering
\begin{tikzpicture}
\clip (-4.5,-4.5) rectangle (4.5,4.5);
\node at (0,0) {\includegraphics[width=0.44\textwidth]{floating}};
\node at (-1.3,-1) {\tiny $T(v\ge b_0 M^{-1}\log M)$};
\node at (1.2,0) {\small $P_M$};
\node at (-4.2,-2.2) {\small $T_M$};
\draw (0.75,0) -- (0.18,-0.43);
\draw (-4,-2) -- (-3.57,-0.7);
\end{tikzpicture}
\caption{Illustration of the construction used in the proof of Lemma \ref{lm:BerryEssenPoissonChain}. The convex hull $P_M$ (or $P_\chi$) is indicated by the dashed segments, while the convex chain $T_M$ (or $T_\chi$) is drawn with a solid line. The floating body of $T$ is drawn in grey.}
\label{fig:Step3}
\end{figure}

Combining these estimates yields, for any $x\in\mathbb{R}$,
\begin{equation}\label{eq:24.01.22}
\big|\mathbb{P}(f_0(T_M)\leq x)-\mathbb{P}(f_0(T_{\chi})\leq x)\big| \leq \big|\mathbb{P}(f_0(T_M)\leq x, \mathcal{A}_M)-\mathbb{P}(f_0(T_{\chi})\leq x , \mathcal{A}^\pi_{M})\big|+ O(M^{-6}) .
\end{equation}
For $T_\chi$ the number of points in the wet part, $\#\big(\chi \cap T(v< b_0 M^{-1} \log M)\big)$, is Poisson distributed with mean
$$ Mp := M\, {\textup{area}(T(v< b_0 M^{-1} \log M)) \over \textup{area}(T)} = O( (\log M)^2 ) $$
by \eqref{eq:wetpart}, while for $T_M$ the number of points in $T(v< b_0 M^{-1} \log M)$ is binomially distributed with parameters $M$ and $p$. Denote by $E_m$ the event that precisely $m$ points of the Poisson or binomial process are in $T(v< b_0 M^{-1} \log M)$. Coupling both processes in the canonical way and using Lemma \ref{le:diff-Poisson-binom} together with \eqref{eq:24.01.22} yields
\begin{align*}
\big|\mathbb{P}(f_0(T_M) & \leq x)- \mathbb{P}(f_0(T_{\chi})\leq x)\big| \\
& \leq \sum_{m=0}^\infty \mathbb{P}(f_0(T_M)\leq x, \mathcal{A}_M | E_m )\left| \frac{(Mp)^m}{m!} e^{- Mp} - {M \choose m} p^m (1-p)^{M-m} \right| + O(M^{-6}) \\
& \leq 2p + O(M^{-6})\\
& = O( M^{-1} (\log M)^2).
\end{align*}
Thus condition (iii) in Lemma \ref{lm:transference} holds with $\varepsilon_3(M)=c_3 M^{-1} (\log M)^2$ for some $c_3>0$ independent of $M$. An application of Lemma \ref{lm:transference} finishes the proof.
\end{proof}

\subsection{Step 4: Berry-Esseen bound for the Poisson model under the condition $\mathcal{E}$}

In the next step we consider the random variable $f_0(P_{\eta})$, conditioned on the event $\mathcal{E}$ we introduced in Step 2:
$$ \xi:=(f_0(P_{\eta})|\mathcal{E}). $$
Let us also condition on the positions of the points $Z_1,\dots,Z_\ell$ and introduce the random variable
$$ \xi':=(f_0(P_{\eta})|\mathcal{E}, Z_1,\ldots,Z_\ell). $$
It should be mentioned that under $\mathcal{E}$ all $\ell$ points $Z_1,\ldots, Z_\ell$ are well defined and in fact distinct.
Hence, due to the independence property of Poisson point processes, the random variable $\xi'$ can be decomposed into the sum of $\ell$ independent random variables $\xi'_i$, $1\leq i\leq \ell$, where each $\xi_i'$ is defined as the number of vertices of the random convex chain formed by the Poisson point process $\eta$ restricted to the triangle $\Delta_i$. More precisely, given an arbitrary triangle $\Delta\subset P$ with vertices $v_1,v_2,v_3$ and a Poisson point process $\eta$ we define $$ T_{\eta}(\Delta,v_1,v_2):=[(\eta\cap\Delta), v_1,v_2]. $$ Then we take $$ \xi'_i:= f_0(T_{\eta}(\Delta_i,Z_i,Z_{i+1}))-2, $$ where the $-2$ comes from the fact that we exclude the two endpoints $Z_i$ and $Z_{i+1}$ of the convex chain. Now, consider for each $1\leq i\leq \ell$ the affine transformation $A_i:\mathbb{R}^2\to\mathbb{R}^2$ which maps the triangle $\Delta_i$ with vertices $Z_i$, $V_i$ and $Z_{i+1}$ to the canonical triangle $T$ with vertices $(0,1)$, $(0,0)$ and $(1,0)$. Using the mapping property \cite[Theorem 5.1]{LP} and the restriction property \cite[Theorem 5.2]{LP} of Poisson point processes we conclude that $\eta_i:=A_i(\eta\cap\Delta_i)$ is a homogeneous Poisson point process on $T$ with intensity $2n\,\textup{area}(\Delta_i)$. Since the number of vertices is invariant under affine transformations we conclude that $$ \xi'_i\stackrel{d}{=} f_0(T_{\eta_i})-2. $$ In order to prove a Berry-Esseen bound for the random variable $$ \xi'=\sum_{i=1}^\ell \xi_i'+ \ell, $$ where the additional summand $+\ell$ comes from the fact that we excluded in the definition of $\xi_1',\ldots,\xi_\ell'$ the points $Z_1,\ldots,Z_\ell$, we will use the following lemma. \begin{lemma}\label{lm:GlueBerryEsseen} Let $X_1,\ldots,X_k$ be independent random variables with $\mu_i:=\mathbb{E} X_i<\infty$ and $\sigma_i:=\sqrt{\operatorname{Var} X_i}\in(0,\infty)$, $1\leq i\leq k$, and let $G_1,\ldots,G_k$ be independent standard Gaussian random variables. Let $\varepsilon_i>0$, $1\leq i\leq k$, be such that \begin{equation}\label{eq:conditions} \sup_{x\in\mathbb{R}}\Big|\mathbb{P}\Big({X_i-\mu_i\over \sigma_i}\leq x\Big)-\mathbb{P}(G_i\leq x)\Big|\leq \varepsilon_i,\qquad 1\leq i\leq k. \end{equation} Then for $X:=X_1+\ldots+X_k$ we have $$ \sup_{x\in\mathbb{R}}\Big|\mathbb{P}\Big({X-\mathbb{E} X\over \sqrt{\operatorname{Var} X}}\leq x\Big)-\Phi(x)\Big|\leq \sum_{i=1}^k\varepsilon_i. $$ \end{lemma} \begin{proof} Let $\mathcal{F}$ be the space of cumulative distribution functions, namely $$ \mathcal{F}:=\{F:\mathbb{R}\to[0,1]\colon F\text{ is right-continuous, monotonically increasing}, F(\infty)=1,F(-\infty)=0\}, $$ where $F(\pm\infty)$ has to be interpreted as the appropriate limit. First of all let us recall that the classical Kolmogorov (or uniform) metric $d:\mathcal{F}\times\mathcal{F}\to [0,\infty)$ on the space $\mathcal{F}$ is defined as $$ d(F_1,F_2):=\sup_{x\in\mathbb{R}}|F_1(x)-F_2(x)|. $$ Given two random variables $X,Y$ with cumulative distribution functions $F_X,F_Y$, respectively, we write $$ d(X,Y):=d(F_X,F_Y)=\sup_{x\in\mathbb{R}}|\mathbb{P}(X\leq x)-\mathbb{P}(Y\leq x)|. $$ Using this notation and the fact that $d(aX+b,aY+b)=d(X,Y)$ for any $a>0,b\in\mathbb{R}$, the conditions in \eqref{eq:conditions} can be written in the form \begin{equation}\label{eq:conditionsNew} d(X_i-\mu_i,\sigma_i G_i)\leq \varepsilon_i,\qquad 1\leq i\leq k.
\end{equation} Further, we apply the so-called semi-additivity property of the Kolmogorov metric, which says that for any independent $Y_1,\ldots,Y_k$ and any independent $Y'_1,\ldots,Y'_k$ one has that $$ d(Y_1+\ldots+Y_k,Y_1'+\ldots+Y_k')\leq \sum_{i=1}^k d(Y_i,Y_i'), $$ see \cite[Section 2.3, Equation (1.2)]{Zol76}. Recalling that $X=X_1+\ldots+X_k$, and taking $Y_i=X_i-\mu_i$ and $Y_i'=\sigma_i G_i$, we conclude by \eqref{eq:conditionsNew} that \begin{align*} d(X-\mathbb{E} X, \sigma_1 G_1+\ldots+\sigma_k G_k)&=d\Big({X-\mathbb{E} X\over \sqrt{\operatorname{Var} X}}, {\sigma_1 G_1+\ldots+\sigma_k G_k\over (\sigma_1^2+\ldots+\sigma_k^2)^{1/2}}\Big)\leq \sum_{i=1}^k\varepsilon_i. \end{align*} Finally, we need to observe that $(\sigma_1 G_1+\ldots+\sigma_k G_k)/(\sigma_1^2+\ldots+\sigma_k^2)^{1/2}$ has the standard Gaussian distribution. This completes the proof. \end{proof} We apply Lemma \ref{lm:GlueBerryEsseen} with $k=\ell$ to the random variables $X_1:=\xi_1',\ldots,X_\ell:=\xi_\ell'$. \begin{corollary}\label{cor:PoissonWithZ} There exists a constant $c>0$ such that for any $n\geq 2$ we have $$ \sup_{x\in\mathbb{R}}\Big|\mathbb{P}\Big({\xi'-\mathbb{E}\xi'\over \sqrt{\operatorname{Var} \xi'}}\leq x\Big)-\Phi(x)\Big|\leq {c\over \sqrt{\log n}}. $$ \end{corollary} \begin{proof} Lemma \ref{lm:BerryEssenPoissonChain} yields \eqref{eq:conditions} for the random variables $X_i=\xi'_i$ with $\varepsilon_i=C(\log n +\log\textup{area}(\Delta_i))^{-1/2}$, $1\leq i\leq \ell$, and some absolute constant $C>0$. It is clear that for $X := \xi'-\ell = \sum_{i=1}^\ell \xi'_i$ we have $X-\mathbb{E} X=\xi'-\mathbb{E}\xi'$ and $\operatorname{Var} X=\operatorname{Var}\xi'$. Moreover, according to \eqref{eq:Delta>n-12} we have $\textup{area}(\Delta_i) \geq c\, n^{- \frac 12}$ given $\mathcal{E}$, and thus $\varepsilon_i = O((\log n)^{-1/2})$ for all $1\leq i\leq \ell$, and the proof is complete. \end{proof} Note that the obtained bound is independent of the exact positions of the points $Z_1,\ldots, Z_\ell$ if we condition on the event $\mathcal{E}$. This already indicates that the same bound holds for the random variable $f_0(P_\eta)$ conditioned on $\mathcal{E}$ only. Our next result ensures that this is indeed the case. \begin{lemma}\label{lem:RemovePointsZ} There exists a constant $c>0$ such that for any $n\geq 2$ we have $$ \sup_{x\in\mathbb{R}}\Big|\mathbb{P}\Big({(f_0(P_{\eta})|\mathcal{E})-\mu \over\sigma}\leq x\Big)-\Phi(x)\Big|\leq {c\over\sqrt{\log n}} $$ with \begin{align*} \mu &=\mathbb{E} (f_0(P_{\eta})|\mathcal{E}) = {2 \ell \over 3}\log n + \frac 23 \sum_{i=1}^\ell \log F_i + \frac{2 \gamma \ell }3 + O(n^{- \frac 12}) \intertext{and} \sigma^2 &=\operatorname{Var} (f_0(P_{\eta})|\mathcal{E}) = {10 \ell \over 27}\log n +{10\over 27}\sum_{i=1}^\ell \log F_i + \frac{(10 \gamma -2 \pi^2 )\ell }{27} + O(n^{-\frac 12}). \end{align*} \end{lemma} \begin{proof} Let ${\bf Z}=(Z_1, \dots, Z_\ell)$ and recall that $\xi=(f_0(P_{\eta})|\mathcal{E})$ and $\xi'=(f_0(P_{\eta})|\mathcal{E}, {\bf Z})$. Moreover, we define $\mu:=\mathbb{E}\xi$, $\mu':=\mathbb{E}\xi'$ and $\sigma^2:=\operatorname{Var} \xi$, ${\sigma'}^2:=\operatorname{Var} \xi'$ (in this proof we suppress the dependence on the parameter $n$ in our notation).
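Throughout the proof we will use the laws of total expectation and total variance in the form $$ \mu=\mathbb{E}(\mu'\,|\,\mathcal{E}) \qquad\text{and}\qquad \sigma^2=\mathbb{E}\big({\sigma'}^2\,\big|\,\mathcal{E}\big)+\operatorname{Var}\big(\mu'\,\big|\,\mathcal{E}\big), $$ where on the right-hand sides $\mu'$ and ${\sigma'}^2$ are regarded as functions of the random vector ${\bf Z}$.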
Using the representation $\xi'=\sum_{i=1}^\ell \xi_i'+\ell $ together with Lemma \ref{lm:estimatesPoissonChain}, thanks to the condition $\mathcal{E}$, we obtain \begin{align*} \mu' &= {2 \ell \over 3}\log n +{2\over 3}\sum_{i=1}^\ell \log\textup{area}(\Delta_i)+\frac{(2 \gamma + 4)\ell }3 + O(n^{-\frac 12}), \\ \sigma'^2 &= {10 \ell \over 27}\log n +{10\over 27}\sum_{i=1}^\ell \log\textup{area}(\Delta_i) + \frac{(10 \gamma + 2\pi^2 - 28)\ell }{27} + O(n^{-\frac 12}) . \end{align*} Corollary \ref{cor:moments-logarea} shows that \begin{align*} \mu = \mathbb{E} (\mu' | \mathcal{E}) &= \mathbb{E} \left( {2 \ell \over 3}\log n +{2\over 3}\sum_{i=1}^\ell \log\textup{area}(\Delta_i)+\frac{(2 \gamma + 4)\ell }3 + O(n^{-\frac 12}) \ \Big|\mathcal{E} \right) \\ &= {2 \ell \over 3}\log n + \frac 23 \sum_{i=1}^\ell \log F_i + \frac{2 \gamma \ell }3 + O(n^{- \frac 12}), \end{align*} which already coincides with the expectation \eqref{eq:expPolUnif} by R\'enyi and Sulanke. Analogously, the expected conditional variance is given by \begin{align*} \mathbb{E} ({\sigma'}^2 | \mathcal{E}) &= \mathbb{E} \left( {10 \ell \over 27}\log n +{10\over 27}\sum_{i=1}^\ell \log\textup{area}(\Delta_i) + \frac{(10 \gamma + 2\pi^2 - 28)\ell }{27} + O(n^{-\frac 12}) \ \Big|\mathcal{E} \right) \\ &= {10 \ell \over 27}\log n +{10\over 27}\sum_{i=1}^\ell \log F_i + \frac{(10 \gamma + 2\pi^2 - 48)\ell }{27} + O(n^{-\frac 12}) . \end{align*} And for the variance of the conditional expectation we use Corollary \ref{cor:moments-logarea} again, together with the cyclic convention $\Delta_{\ell+1}:=\Delta_1$: \begin{align*} \operatorname{Var} (\mu' | \mathcal{E}) &= \frac 49 \operatorname{Var} \left( \sum_{i=1}^\ell \log \textup{area} (\Delta_i) +O(n^{-\frac 12})\ \Big|\mathcal{E} \right) \\ &= \frac 49 \sum_{i=1}^\ell \operatorname{Var} \left(\log \textup{area} (\Delta_i) \ \Big|\mathcal{E} \right) + \frac 89 \sum_{i=1}^\ell \operatorname{Cov} \left( \log \textup{area} (\Delta_i) , \log \textup{area} (\Delta_{i+1}) \ \Big|\mathcal{E} \right) + O(n^{- \frac 12 }) \\ &= \frac 89 \ell \left( 2- \frac{\pi^2}6 \right) + O(n^{-\frac 12}). \end{align*} Hence \begin{align*} \sigma^2 &= \mathbb{E} (\underbrace{\operatorname{Var} (f_0(P_{\eta})|\mathcal{E}, {\bf Z})}_{= {\sigma '}^2} | \mathcal{E}) + \operatorname{Var} (\underbrace{\mathbb{E} (f_0(P_{\eta})|\mathcal{E}, {\bf Z})}_{= \mu'} |\mathcal{E})\\ &= {10 \ell \over 27}\log n +{10\over 27}\sum_{i=1}^\ell \log F_i + \frac{(10 \gamma -2 \pi^2 )\ell }{27} + O(n^{-\frac 12}) . \end{align*} This also shows that \begin{align} \mu'-\mu& = {2\over 3}\sum_{i=1}^\ell \log\textup{area}(\Delta_i)+O(1), \label{eq:moments}\\ \sigma'^2-\sigma^2 & = {10\over 27}\sum_{i=1}^\ell \log\textup{area}(\Delta_i)+O(1), \label{eq:variance1}\\ \sigma+\sigma'& \geq \sigma = \Big({10 \ell \over 27}\log n\Big)^{\frac 12} +O(1)\label{eq:variance2}.
\end{align} Next, we observe that \begin{align}\label{eq:14.05.21_2} \nonumber &\sup_{x\in\mathbb{R}} \Big| \mathbb{P}\Big(\frac {\xi-\mu}{\sigma}\leq x\Big) - \Phi(x)\Big|\\ & \qquad\qquad= \nonumber \sup_{y\in\mathbb{R}} \Big| \mathbb{P}(\xi\leq y) - \Phi\Big(\frac {y-\mu}{\sigma}\Big)\Big| \\ & \qquad\qquad\leq \nonumber \sup_{y\in \mathbb{R}} \Big| \mathbb{E} \mathbb{P}(\xi'\leq y) - \mathbb{E} \Phi\Big(\frac {y-\mu'}{\sigma'}\Big)\Big| + \sup_{y\in\mathbb{R}}\Big |\mathbb{E} \Phi\Big(\frac {y-\mu'}{\sigma'}\Big) - \Phi\Big(\frac {y-\mu}{\sigma}\Big)\Big| \\ & \qquad\qquad\leq \mathbb{E} \sup_{y\in\mathbb{R}} \Big| \mathbb{P}(\xi'\leq y) - \Phi\Big(\frac {y-\mu'}{\sigma'}\Big)\Big| + \mathbb{E} \sup_{y\in\mathbb{R}} \Big|\Phi\Big(\frac {y-\mu'}{\sigma'}\Big) - \Phi\Big(\frac {y-\mu}{\sigma}\Big)\Big|, \end{align} where the expectation is taken with respect to the law of the random vector ${\bf Z}$. From Corollary \ref{cor:PoissonWithZ} we have \begin{equation}\label{eq:14.05.21_3} \mathbb{E} \sup_{y\in\mathbb{R}} \Big| \mathbb{P}(\xi'\leq y) - \Phi\Big(\frac {y-\mu'}{\sigma'}\Big)\Big|=\mathbb{E} \sup_{x\in\mathbb{R}} \Big| \mathbb{P}\Big(\frac {\xi'-\mu'}{\sigma'}\leq x\Big) - \Phi(x)\Big| = O\Big( {1\over\sqrt{\log n}}\Big). \end{equation} It remains to deal with the random variable $$ Y_{\bf Z}:=\sup_{y\in\mathbb{R}} \Big|\Phi\Big(\frac {y-\mu'}{\sigma'}\Big) - \Phi\Big(\frac {y-\mu}{\sigma}\Big)\Big|. $$ Since the difference of the two distribution functions is continuous and vanishes as $y\to\pm\infty$, the supremum is attained at some point $y_0$ at which the derivative of the difference vanishes (unless the difference vanishes identically, in which case $Y_{\bf Z}=0$). Then, with $\phi(t):={1\over \sqrt{2\pi}}e^{-t^2/2}$ the density of the standard normal distribution, \begin{equation}\label{eq:18-05-21a} Y_{\bf{Z}}=\Big|\Phi\Big(\frac {y_0-\mu'}{\sigma'}\Big) - \Phi\Big(\frac {y_0-\mu}{\sigma}\Big)\Big|\leq \sup_{t\in\mathbb{R}}|\phi(t)|\cdot \Big|\frac {y_0-\mu'}{\sigma'} - \frac {y_0-\mu}{\sigma}\Big|\leq\Big|\frac {y_0-\mu'}{\sigma'} - \frac {y_0-\mu}{\sigma}\Big|, \end{equation} where $y_0$ satisfies $$ {1\over \sigma'}\phi\Big(\frac {y_0-\mu'}{\sigma'}\Big) - {1\over \sigma}\phi\Big(\frac {y_0-\mu}{\sigma}\Big)=0. $$ By taking logarithms on both sides, we see that the last equation is equivalent to $$ y_0^2(\sigma^2-\sigma'^2)-2y_0(\mu'\sigma^2-\mu\sigma'^2)+\mu'^2\sigma^2-\mu^2\sigma'^2-2\sigma^2\sigma'^2\log\big({\sigma\over \sigma'}\big)=0. $$ This quadratic equation has the following solutions: $$ y^{\pm}_0={\mu'\sigma^2-\mu\sigma'^2\pm\sigma\sigma'\sqrt{(\mu'-\mu)^2+2|\log(\sigma/\sigma')|\,|\sigma'^2-\sigma^2|}\over \sigma^2-\sigma'^2}, $$ where we used that $(\sigma^2-\sigma'^2)\log(\sigma/\sigma')=|\sigma'^2-\sigma^2|\,|\log(\sigma/\sigma')|\geq 0$. Substituting this back into \eqref{eq:18-05-21a} leads to the bound $$ Y_{\bf{Z}} \leq {|\mu'-\mu|+\sqrt{(\mu'-\mu)^2+2|\log(\sigma/\sigma')|\,|\sigma'^2-\sigma^2|}\over \sigma+\sigma'} \leq 2{|\mu'-\mu|\over \sigma+\sigma'} + {\sqrt{2\,|\log(\sigma/ \sigma')|\,|\sigma'^2-\sigma^2|}\over \sigma+\sigma'}, $$ where we used the fact that $\sqrt{a+b} \leq \sqrt a + \sqrt b$ for all $a,b>0$. Observe that given $\mathcal{E}$ we have $\textup{area}(\Delta_i) \geq c\, n^{- \frac 12}$ and thus $\sigma'^2 \geq {5 \ell \over 27}\log n +O(1)$. Hence, there exist constants $c_1,c_2>0$ such that $c_1<\sigma/\sigma'<c_2$. Further, $\sqrt{|\sigma'^2-\sigma^2|} \leq |\sigma'^2-\sigma^2|+1$.
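Combining the last two observations, the square-root term in the above bound for $Y_{\bf Z}$ is controlled by $$ {\sqrt{2\,|\log(\sigma/\sigma')|\,|\sigma'^2-\sigma^2|}\over \sigma+\sigma'} \leq C\,{|\sigma'^2-\sigma^2|+1\over \sigma+\sigma'} $$ for some constant $C>0$ depending only on $c_1$ and $c_2$.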
Thus, using \eqref{eq:moments}, \eqref{eq:variance1}, and \eqref{eq:variance2} we conclude that, for some constant $C_1>0$, $$ Y_{\bf{Z}}\leq C_1\Big(\sum_{i=1}^\ell {|\log\textup{area}(\Delta_i)|\over \sqrt{\log n}}+{1\over \sqrt{\log n}}\Big), $$ and Corollary \ref{cor:moments-logarea} yields, for another constant $C_2>0$, $$ \mathbb{E} Y_{\bf{Z}}\leq C_2\Big({\mathbb{E}|\log\textup{area}(\Delta_1)|\over \sqrt{\log n}}+{1\over \sqrt{\log n}} \Big) = {O(1)\over \sqrt{\log n}} . $$ Together with \eqref{eq:14.05.21_2} and \eqref{eq:14.05.21_3} this completes the proof. \end{proof} \subsection{Step 5: Removing the condition $\mathcal{E}$} In order to remove the remaining condition $\mathcal{E}$ in Lemma \ref{lem:RemovePointsZ} we use Lemma \ref{lm:transference} again. We will apply this lemma to the random variables $\xi'_n=f_0(P_{\eta}|\mathcal{E})$ and $\xi_n=f_0(P_{\eta})$. Note that condition (iv) then follows from Lemma \ref{lem:RemovePointsZ}. Checking the other conditions requires a more careful analysis. \begin{lemma}\label{lm:condition} The random variables $\xi_n:=f_0(P_{\eta})$ and $\xi_n':=f_0(P_{\eta}|\mathcal{E})$ satisfy conditions (i)--(iii) of Lemma \ref{lm:transference} with $\varepsilon_1(n)=c_1(\log n)^{3/2}n^{-1/4}$, $\varepsilon_2(n)=c_2(\log n)^3n^{-1/4}$ and $\varepsilon_3(n)=c_3n^{-1/4}$, where $c_1,c_2,c_3>0$ are constants not depending on $n$. \end{lemma} \begin{proof} The proof of this lemma basically follows the lines of the proof of Lemma 8.2 in \cite{BR10}. We start by estimating $\mathbb{E}(f_0(P_{\eta})^k|\overline{\mathcal{E}})$ for $k=1,2$. On the event $\mathcal{A}^\pi_n \cap \mathcal{B}^\pi_n$, that is, when $$P(v\ge b_0 n^{-1} \log n )\subset P_{\eta}\qquad\text{and}\qquad \#(\eta \cap P(v< b_0 n^{-1} \log n ))\leq c_0 (\log n)^2 ,$$ Lemma \ref{le:floating-Peta} yields $ f_0(P_{\eta})^k \mathbbm{1} (\mathcal{A}^\pi_n \cap \mathcal{B}^\pi_n) \leq c_0^{k} (\log n)^{2k} . $ As a consequence, \begin{align*} \mathbb{E}(f_0(P_{\eta})^k|\overline{\mathcal{E}})&=\mathbb{E}(f_0(P_{\eta})^k \mathbbm{1} (\overline{\mathcal{A}^\pi_n \cap \mathcal{B}^\pi_n})|\overline{\mathcal{E}})+ \mathbb{E}(f_0(P_{\eta})^k\mathbbm{1} (\mathcal{A}^\pi_n \cap \mathcal{B}^\pi_n)|\overline{\mathcal{E}}) \\ &\leq \frac{\mathbb{E}((\# \eta )^k \mathbbm{1} (\overline{\mathcal{A}^\pi_n \cap \mathcal{B}^\pi_n}) \mathbbm{1} (\overline{\mathcal{E}}))}{\mathbb{P}(\overline{\mathcal{E}})}+ C\,(\log n)^{2k} \\ &\leq \frac{(\mathbb{E}(\# \eta )^{2k})^{\frac12} \mathbb{P} (\overline{\mathcal{A}^\pi_n \cap \mathcal{B}^\pi_n})^{\frac 12}}{\mathbb{P}(\overline{\mathcal{E}})}+ C\,(\log n)^{2k}, \end{align*} by H\"older's inequality, where $C>0$ is some constant. Because $\mathbb{E} N^{m} \leq m^m (n^m+1)$ for $m \geq 1$ and a Poisson random variable $N$ with mean $n$, $$ (\mathbb{E}(\# \eta )^{2k})^{\frac12} = O(n^{k}), $$ and from Corollary \ref{cor:complcE} and Lemma \ref{le:floating-Peta} we see that $$ {\mathbb{P}(\overline{\mathcal{A}^\pi_n \cap \mathcal{B}^\pi_n})^{\frac 12}\over \mathbb{P}(\overline{\mathcal{E}})} = O( n^{- \frac 52}). $$ Thus, for $k=1,2$ \begin{equation}\label{eq:3} \mathbb{E}(f_0(P_{\eta})^k|\overline{\mathcal{E}}) = O( (\log n)^{2k}).
\end{equation} In order to verify conditions (i)--(iii) in Lemma \ref{lm:transference} we will use the following simple inequality from \cite[Claim 8.3]{BR10}, which says that $$ |\mathbb{E}(\zeta)-\mathbb{E}(\zeta|A)|\leq (\mathbb{E}(\zeta|A)+\mathbb{E}(\zeta|\overline{A}))\mathbb{P}(\overline{A}), $$ for any non-negative random variable $\zeta$ and any event $A$. For condition (i) we take $\zeta = f_0(P_{\eta})$ and $A=\mathcal{E}$. By Lemma \ref{lem:RemovePointsZ} we get $ \mathbb{E} (f_0(P_{\eta})|\mathcal{E}) = O( \log n)$. Using this and \eqref{eq:3}, and the estimate in Corollary \ref{cor:complcE} for $\mathbb{P} (\overline{\mathcal{E}})$, we conclude that \begin{equation}\label{eq:4} |\mathbb{E}(f_0(P_{\eta}))-\mathbb{E}(f_0(P_{\eta})|\mathcal{E})| = O((\log n)^2n^{-1/4}), \end{equation} and, thus, \begin{equation}\label{eq:EE-Poiss} \mathbb{E} f_0(P_{\eta}) = {2 \ell \over 3}\log n + \frac 23 \sum_{i=1}^\ell \log F_i + \frac{2 \gamma \ell }3 + O((\log n)^2n^{-1/4}) . \end{equation} In the same way, for condition (ii) we take $\zeta = f_0(P_{\eta})^2$ and $A=\mathcal{E}$. Then \begin{align*} |\operatorname{Var}(f_0(P_{\eta}))-\operatorname{Var}(f_0(P_{\eta})|\mathcal{E})| &\leq |\mathbb{E} f_0(P_{\eta})^2-\mathbb{E}(f_0(P_{\eta})^2|\mathcal{E})|+|(\mathbb{E} f_0(P_{\eta}))^2-(\mathbb{E}(f_0(P_{\eta})|\mathcal{E}))^2|. \end{align*} For the first term we get from Lemma \ref{lem:RemovePointsZ} the bound $$ \mathbb{E}(f_0(P_{\eta})^2|\mathcal{E}) = \operatorname{Var} (f_0(P_{\eta})|\mathcal{E}) + (\mathbb{E} (f_0(P_{\eta})|\mathcal{E}))^2 = O( (\log n)^2), $$ and combine this with \eqref{eq:3} and Corollary \ref{cor:complcE} in order to obtain \begin{align*} |\mathbb{E} f_0(P_{\eta})^2-\mathbb{E}(f_0(P_{\eta})^2|\mathcal{E})|&\leq (\mathbb{E}(f_0(P_{\eta})^2|\mathcal{E})+\mathbb{E}(f_0(P_{\eta})^2|\overline{\mathcal{E}}))\mathbb{P}(\overline{\mathcal{E}})\\ & = O((\log n)^4n^{-1/4}). \end{align*} For the second term we note that again by Lemma \ref{lem:RemovePointsZ} and \eqref{eq:4} we get \begin{align*} |(\mathbb{E} f_0(P_{\eta}))^2-(\mathbb{E}(f_0(P_{\eta})|\mathcal{E}))^2| & = O( (\log n)^3n^{-1/4}). \end{align*} Putting these estimates together we conclude that $$ |\operatorname{Var}(f_0(P_{\eta}))-\operatorname{Var}(f_0(P_{\eta})|\mathcal{E})| = O( (\log n)^4n^{-1/4}), $$ implying that \begin{equation}\label{eq:Var-Poiss} \operatorname{Var}(f_0(P_{\eta})) = {10 \ell \over 27}\log n +{10\over 27}\sum_{i=1}^\ell \log F_i + \frac{(10 \gamma -2 \pi^2 )\ell }{27} + O( (\log n)^4n^{-1/4}) . \end{equation} This shows that condition (ii) holds with $\varepsilon_2(n)=c_2(\log n)^3n^{-1/4}$, where $c_2>0$ is some absolute constant. Analogously, by \eqref{eq:4} condition (i) holds with $\varepsilon_1(n)=c_1(\log n)^{3/2}n^{-1/4}$, where $c_1>0$ is some absolute constant. Finally, taking $\zeta = \mathbbm{1} (f_0(P_{\eta})\leq x)$ and $A=\mathcal{E}$ we obtain from Corollary \ref{cor:complcE} that $$ |\mathbb{P}(f_0(P_{\eta}|\mathcal{E})\leq x)-\mathbb{P}(f_0(P_{\eta})\leq x)|\leq 2\mathbb{P}(\overline{\mathcal{E}}) = O( n^{-1/4}). $$ This completes the argument. \end{proof} By Lemma \ref{lm:transference} and Lemma \ref{lm:condition} together with Lemma \ref{lem:RemovePointsZ} we conclude the required Berry-Esseen bound for the Poisson model of random polygons. Expectation and variance have been obtained in \eqref{eq:EE-Poiss} and \eqref{eq:Var-Poiss}.
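Let us also note that all three error terms just obtained are of strictly smaller order than the Berry-Esseen rate itself, since $$ \varepsilon_1(n)+\varepsilon_2(n)+\varepsilon_3(n)=O\big((\log n)^{3}\,n^{-1/4}\big)=o\Big({1\over\sqrt{\log n}}\Big), $$ so that the rate $O(1/\sqrt{\log n})$ from Lemma \ref{lem:RemovePointsZ} carries over unchanged.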
\begin{theorem}\label{thm:BerryEsseenPoisson} Consider the Poisson random polygon $P_\eta$ induced by a homogeneous Poisson point process $\eta$ in a polygon $P$ of unit area with $\mathbb{E}(\# \eta)=n$. Then, for any $n\geq 2$, $$ \sup_{x\in \mathbb{R}}\Big|\mathbb{P}\Big({f_0(P_{\eta})-\mathbb{E} f_0(P_{\eta})\over \sqrt{\operatorname{Var} f_0(P_{\eta})}}\leq x\Big)-\Phi(x)\Big|\leq {c\over \sqrt{\log n}} $$ for some constant $c>0$ independent of $n$, with \begin{align*} \mathbb{E} f_0(P_{\eta}) &= {2 \ell \over 3}\log n + \frac 23 \sum_{i=1}^\ell \log F_i + \frac{2 \gamma \ell }3 + O((\log n)^2n^{-1/4}) \intertext{and} \operatorname{Var}(f_0(P_{\eta})) &= {10 \ell \over 27}\log n +{10\over 27}\sum_{i=1}^\ell \log F_i + \frac{(10 \gamma -2 \pi^2 )\ell }{27} + O( (\log n)^4n^{-1/4}) . \end{align*} \end{theorem} \subsection{Step 6: Going back to the uniform model} This is the last step in the proof of Theorem \ref{thm:main}, in which we apply Lemma \ref{lm:transference} with $\xi'_n=f_0(P_{\eta})$ and $\xi_n=f_0(P_n)$. Condition (iv) there holds due to Theorem \ref{thm:BerryEsseenPoisson} with $\varepsilon_4(n)=c_4/\sqrt{\log n}$ for some $c_4>0$. Condition (i) with $\varepsilon_1(n)=c_1/\sqrt{\log n}$, $c_1>0$, follows from \eqref{eq:expPolUnif} and Theorem \ref{thm:BerryEsseenPoisson}. In order to check condition (ii) we need more precise asymptotics for the variance of $f_0(P_n)$ than given by \eqref{eq:varPolUnif}. In particular we need a result of the form $\operatorname{Var}(f_0(P_{n}))={10 \ell\over 27} \log n +O(1)$. First note that by the trivial estimate $f_0(P_\eta) \leq \# \eta$ and using Lemma \ref{le:floating-Peta} we have \begin{align} \label{eq:diff-expP_eta-cA} | \mathbb{E} (f_0(P_\eta)^k \mathbbm{1}(\mathcal{A}^\pi_n)) - \mathbb{E} f_0(P_\eta)^k | &= \nonumber | \mathbb{E} ( f_0(P_\eta)^k ( \mathbbm{1} (\mathcal{A}^\pi_n) -1 )) | \\ \nonumber &\leq (\mathbb{E} f_0(P_\eta)^{2k})^{\frac 12} (\mathbb{E} ( \mathbbm{1} (\mathcal{A}^\pi_n) -1 )^2 )^{\frac 12} \\ & = O( n^k \, \mathbb{P}( \overline{\mathcal{A}^\pi_n})^{\frac 12}) = O( n^{k-3}) \end{align} for $k=1,2$. The definition of the variance gives \begin{align*} | \operatorname{Var} (f_0(P_\eta) \mathbbm{1} (\mathcal{A}^\pi_n)) & - \operatorname{Var} f_0(P_\eta) | \\ & \leq | \mathbb{E} (f_0(P_\eta)^2 \mathbbm{1}(\mathcal{A}^\pi_n)) - \mathbb{E} f_0(P_\eta)^2 | + | (\mathbb{E} (f_0(P_\eta) \mathbbm{1}( \mathcal{A}^\pi_n)))^2 - (\mathbb{E} f_0(P_\eta))^2 | . \end{align*} We now use \eqref{eq:diff-expP_eta-cA} to bound the first term by $n^{-1}$. Theorem \ref{thm:BerryEsseenPoisson} and \eqref{eq:diff-expP_eta-cA} show that \begin{align*} | (\mathbb{E} (f_0(P_\eta) \mathbbm{1}(\mathcal{A}^\pi_n)))^2 - (\mathbb{E} f_0(P_\eta))^2 | = | \mathbb{E} (f_0(P_\eta) \mathbbm{1}( \mathcal{A}^\pi_n)) - \mathbb{E} f_0(P_\eta) | \, | \mathbb{E} (f_0(P_\eta) \mathbbm{1}(\mathcal{A}^\pi_n)) + \mathbb{E} f_0(P_\eta) | \end{align*} is bounded by $n^{-2} \log n = O( n^{-1})$. Hence \begin{align*} | \operatorname{Var} (f_0(P_\eta) \mathbbm{1}(\mathcal{A}^\pi_n)) - \operatorname{Var} f_0(P_\eta) | & = O( n^{-1}). \end{align*} The same result holds for $ \operatorname{Var} (f_0(P_n) \mathbbm{1}(\mathcal{A}_n)) - \operatorname{Var} f_0(P_n) $, with an even slightly simpler proof because $f_0(P_n) \leq n$, where $n$ is non-random.
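For completeness we record the binomial analogue explicitly: since $f_0(P_n)\leq n$ deterministically and $\mathbb{P}(\overline{\mathcal{A}_n})=O(n^{-6})$ by Lemma \ref{le:floating-Pn}, the same computation gives $$ \big| \operatorname{Var} \big(f_0(P_n)\,\mathbbm{1}(\mathcal{A}_n)\big) - \operatorname{Var} f_0(P_n) \big| = O(n^{-1}). $$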
These estimates show that \begin{align*} | \operatorname{Var} f_0(P_\eta) - \operatorname{Var} f_0(P_n) | &\leq | \operatorname{Var} (f_0(P_\eta) \mathbbm{1}(\mathcal{A}^\pi_n)) - \operatorname{Var} (f_0(P_n) \mathbbm{1}(\mathcal{A}_n)) | + O(n^{-1}) \\ & \leq | \mathbb{E} (f_0(P_\eta) \mathbbm{1}(\mathcal{A}^\pi_n))^2 - \mathbb{E} (f_0(P_n) \mathbbm{1}(\mathcal{A}_n))^2 | \\ &\qquad + | \mathbb{E} (f_0(P_\eta) \mathbbm{1}(\mathcal{A}^\pi_n)) - \mathbb{E} (f_0(P_n) \mathbbm{1}(\mathcal{A}_n)) |\, \left( \mathbb{E} f_0(P_\eta) + \mathbb{E} f_0(P_n) \right) + O(n^{-1}) . \end{align*} For $P_\eta$ we have that $\#\big(\eta \cap P(v< b_0 n^{-1} \log n)\big)$ is Poisson distributed with mean $$ np := n\, \textup{area}(P(v< b_0 n^{-1} \log n)) = O( (\log n)^2 ) $$ by \eqref{eq:wetpart}, while for $P_n$ the number of points in $P(v< b_0 n^{-1} \log n)$ is binomially distributed with parameters $n$ and $p$. Denote by $E_m$ the event that precisely $m$ points of the Poisson or binomial process lie in $P(v< b_0 n^{-1} \log n)$. Coupling both processes in the canonical way yields \begin{align*} | \mathbb{E} (f_0(P_\eta) \mathbbm{1}(\mathcal{A}^\pi_n))^k & - \mathbb{E} (f_0(P_n) \mathbbm{1}(\mathcal{A}_n))^k | \\ & \leq \sum_{m=0}^\infty \mathbb{E} \big( (f_0(P_n) \mathbbm{1}(\mathcal{A}_n))^k \,\big|\, E_m \big) \left|\frac {(np)^m}{m!} e^{-np} - {n \choose m} p^m (1-p)^{n-m} \right| \\ & \leq \sum_{m=0}^\infty m^k \left|\frac {(np)^m}{m!} e^{-np} - {n \choose m} p^m (1-p)^{n-m} \right| . \end{align*} Lemma \ref{le:diff-Poisson-binom} and the fact that the expected number of vertices in both models is of order $\log n$ imply \begin{align*} | \operatorname{Var} f_0(P_\eta) - \operatorname{Var} f_0(P_n) | & = O( n^{-1} (\log n)^5 ) . \end{align*} Thus, $$ \operatorname{Var} f_0(P_n) = {10 \ell \over 27}\log n +{10\over 27}\sum_{i=1}^\ell \log F_i + \frac{(10 \gamma -2 \pi^2 )\ell }{27} + O( (\log n)^4n^{-1/4}) , $$ which proves the variance expansion in Theorem \ref{thm:main} and shows that condition (ii) holds with $\varepsilon_2(n)=c_2 n^{-1} (\log n)^4$ for some $c_2>0$ independent of $n$. The slightly better estimate for the expectation in Theorem \ref{thm:main} (in comparison to \eqref{eq:expPolUnif}) follows analogously from Theorem \ref{thm:BerryEsseenPoisson} and the estimate \begin{align*} | \mathbb{E} f_0(P_\eta) - \mathbb{E} f_0(P_n) | &\leq | \mathbb{E} (f_0(P_\eta) \mathbbm{1}(\mathcal{A}^\pi_n)) - \mathbb{E} (f_0(P_n) \mathbbm{1}(\mathcal{A}_n)) | + O(n^{-1}) = O(n^{-1}(\log n)^4) . \end{align*} Condition (iii) can be verified following the same approach as already used in Lemma \ref{lm:BerryEssenPoissonChain}. By Lemma \ref{le:floating-Pn}, Lemma \ref{le:floating-Peta} and Lemma \ref{le:diff-Poisson-binom}, \begin{align*} \big|\mathbb{P}(f_0(P_n)\leq x)-\mathbb{P}(f_0(P_{\eta})\leq x)\big| & = \big|\mathbb{P}(f_0(P_n)\leq x, \mathcal{A}_n)-\mathbb{P}(f_0(P_{\eta})\leq x, \mathcal{A}_n^\pi)\big| + O(n^{-6}) \\ & \leq \sum_{m=0}^\infty \left| \frac{(np)^m}{m!} e^{- np} - {n \choose m} p^m (1-p)^{n-m} \right| + O(n^{-6}) \\ & \leq 2p + O(n^{-6})\\ & = O( n^{-1} (\log n)^2). \end{align*} Thus, (iii) holds with $\varepsilon_3(n)=c_3 n^{-1} (\log n)^2$ for some constant $c_3>0$ independent of $n$. Finally, by combining all estimates and using Lemma \ref{lm:transference} we complete the proof of Theorem \ref{thm:main}.\qed \subsection*{Acknowledgement} We would like to thank Sam Johnston (Bath) for an enlightening discussion on the content of Lemma \ref{lm:GlueBerryEsseen}.
This work was initiated during the virtual Hausdorff Trimester Program \textit{The Interplay between High Dimensional Geometry and Probability}. \\ CT was supported by the German Research Foundation (DFG) via SPP 2265 \textit{Random Geometric Systems}. AG was supported by the DFG under Germany's Excellence Strategy EXC 2044 -- 390685587, \textit{Mathematics M\"unster: Dynamics -- Geometry -- Structure}. \addcontentsline{toc}{section}{References} \bibliographystyle{acm}
{ "redpajama_set_name": "RedPajamaArXiv" }
7,851
{"url":"https:\/\/mathematica.stackexchange.com\/questions\/74246\/dsolve-with-absolute-value","text":"# DSolve with Absolute Value\n\nWhen playing around with DSolve, I enter the following:\n\n DSolve[{Derivative[1][y][x] == x + Abs[x]*y[x]}, y[x], x]\n\n\nThe output is:\n\n$$y(x)\\to e^{\\frac{1}{2} (x \\left| x\\right| -1)} \\int_1^x K[1] e^{\\frac{1}{2} (1-K[1] \\left| K[1]\\right| )} \\, dK[1]+c_1 e^{\\frac{1}{2} (x \\left| x\\right| -1)}$$\n\nI would have expected a conditional which accounts for $x \\le 0$ and $x \\gt 0$ or something along those lines.\n\nHow is one supposed to interpret the output that is being produced? Is it a bug or am I not reading it correctly?\n\nTesting the solution numerically shows the solution appears to be valid:\n\n{sol} = DSolve[{Derivative[1][y][x] == x + Abs[x]*y[x]}, y, x]\n\nSeedRandom[0];\nRandomReal[{-2, 2}, 10]\nTable[\nBlock[{C},\nC[1] = 1;\nDerivative[1][y][x] == x + Abs[x]*y[x] \/.\n(sol \/. Integrate -> NIntegrate) \/. Abs' -> Sign],\n{x, %}]\n(*\n{0.609871, 0.532281, 0.731252, 0.265407, 1.74081, 1.90475,\n-1.04619, 0.550249, -1.59561, 0.582099}\n\n{True, True, True, True, True, True, True, True, True, True}\n*)\n\n\nIn response to Amzoti's comment, I'm not so sure about there being a nice solution, if x is assumed to be complex instead of real, which is how it is treated in Mathematica. If we specify that x is real, we do get a nice solution (and much more quickly):\n\nAssuming[{x \u2208 Reals},\n{sol} = DSolve[{Derivative[1][y][x] == x + Abs[x]*y[x]}, y, x]\n];\ny[x] \/. sol \/\/ PiecewiseExpand\n\n\n\u2022 It may be valid, but it is also quite ugly. A nice closed form solution without integrals is possible and that is how one would do it by hand. (+1) \u2013\u00a0Amzoti Feb 17 '15 at 13:51\n\u2022 @Amzoti Thanks. See update. \u2013\u00a0Michael E2 Feb 17 '15 at 14:00\n\u2022 That is what I would have expected and I forgot to take into account reals only, so thanks! I am surprised Mathematica does not respond that way by default with a statement, if $x \\in \\mathbb{R}$ or something along those lines. \u2013\u00a0Amzoti Feb 17 '15 at 14:04","date":"2021-06-17 00:07:01","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.26759397983551025, \"perplexity\": 1226.87112496513}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-25\/segments\/1623487626122.27\/warc\/CC-MAIN-20210616220531-20210617010531-00021.warc.gz\"}"}
null
null
News · Politics City Manager Appoints New Director for the Coral Springs Museum of Art July 6, 2020 by Sharon Aron Baron No Comments Jill M. Brown By Sharon Aron Baron City Manager Frank Babinec announced on Monday the selection of Jill M. Brown to serve as Director of the Coral Springs Museum of Art (CSMoA). She replaces former director, Julia Andrews who retired in February. Brown's experience will elevate the CSMoA's reach through cultural development, new program initiative, and strategic planning. A dedicated and studious art enthusiast, Brown graduated from three universities. She obtained her bachelor's degree in Art from Ohio University, Teaching Certification in Visual Arts from Bowling Green State University, and her Master of Education in Art Education from the University of Toledo. Brown's expansive career in the Arts includes developing multifaceted cultural initiatives that encourage organizational growth, economic development, and tourism. Her focus encompasses developing the arts community supporters through education, exhibition, project management, art administration, marketing, special event implementation, and more. "I am excited to help transform the city's vision for the CSMoA. The Arts connect people and perspective by building welcoming communities that provide opportunities for enrichment and creative learning," said Brown. "I am looking forward to working with the museum's talented staff to implement new programming and new strategies that will further elevate the Coral Springs community." In her new position as Director of the CSMoA, Brown will be instrumental in introducing new operational approaches, enhancing marketing and social media strategies, and developing revenue-generating programs. Learn more about the CSMoA and its programming, visit www.coralspringsmuseum.org Send Your News to Coral Springs #1 News Site Here. Game Night Arcade All American Auto Glass South Florida Dentistry for Children, P.A. Screen Enclosures and Repair
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
6,418
import collections import operator import os import time import flask from oslo_config import cfg from oslo_log import log as logging import six from stackalytics.dashboard import decorators from stackalytics.dashboard import helpers from stackalytics.dashboard import kpi from stackalytics.dashboard import parameters from stackalytics.dashboard import reports from stackalytics.dashboard import vault from stackalytics.processor import config from stackalytics.processor import utils # Application objects --------- app = flask.Flask(__name__) app.config.from_object(__name__) app.config.from_envvar('DASHBOARD_CONF', silent=True) app.register_blueprint(reports.blueprint) app.register_blueprint(kpi.blueprint) LOG = logging.getLogger(__name__) conf = cfg.CONF conf.register_opts(config.CONNECTION_OPTS + config.DASHBOARD_OPTS) # Handlers --------- @app.route('/') @decorators.templated() def overview(): pass @app.route('/widget') def widget(): return flask.render_template('widget.html') # AJAX Handlers --------- def _get_aggregated_stats(records, metric_filter, keys, param_id, param_title=None, finalize_handler=None): param_title = param_title or param_id result = dict((c, {'metric': 0, 'id': c}) for c in keys) context = {'vault': vault.get_vault()} if metric_filter: for record in records: metric_filter(result, record, param_id, context) result[getattr(record, param_id)]['name'] = ( getattr(record, param_title)) else: for record in records: record_param_id = getattr(record, param_id) result[record_param_id]['metric'] += 1 result[record_param_id]['name'] = getattr(record, param_title) response = [r for r in result.values() if r['metric']] if finalize_handler: response = [item for item in map(finalize_handler, response) if item] response.sort(key=lambda x: x['metric'], reverse=True) utils.add_index(response, item_filter=lambda x: x['id'] != '*independent') return response @app.route('/api/1.0/new_companies') @decorators.exception_handler() @decorators.response() @decorators.jsonify('stats') @decorators.record_filter(ignore=['start_date']) def get_new_companies(records, **kwargs): days = int(flask.request.args.get('days') or reports.DEFAULT_DAYS_COUNT) start_date = int(time.time()) - days * 24 * 60 * 60 result = {} for record in records: company_name = record.company_name date = record.date if company_name not in result or result[company_name] > date: result[company_name] = date response = list(({'name': company_name, 'date': result[company_name], 'date_str': helpers.format_date(result[company_name])}) for company_name in result if result[company_name] >= start_date) response.sort(key=lambda x: x['date'], reverse=True) utils.add_index(response) return response @app.route('/api/1.0/stats/companies') @decorators.exception_handler() @decorators.response() @decorators.cached() @decorators.jsonify('stats') @decorators.record_filter() @decorators.aggregate_filter() def get_companies(records, metric_filter, finalize_handler, **kwargs): return _get_aggregated_stats(records, metric_filter, vault.get_memory_storage().get_companies(), 'company_name', finalize_handler=finalize_handler) @app.route('/api/1.0/stats/modules') @decorators.exception_handler() @decorators.response() @decorators.cached() @decorators.jsonify('stats') @decorators.record_filter() @decorators.aggregate_filter() def get_modules(records, metric_filter, finalize_handler, **kwargs): return _get_aggregated_stats(records, metric_filter, vault.get_memory_storage().get_modules(), 'module', finalize_handler=finalize_handler) def 
get_core_engineer_branch(user, modules): is_core = None for (module, branch) in (user.get('core') or []): if module in modules: is_core = branch if branch == 'master': # master is preferable, but stables are ok break return is_core @app.route('/api/1.0/stats/engineers') @decorators.exception_handler() @decorators.response() @decorators.cached() @decorators.jsonify('stats') @decorators.record_filter() @decorators.aggregate_filter() def get_engineers(records, metric_filter, finalize_handler, **kwargs): modules_names = parameters.get_parameter(kwargs, 'module') modules = set([m for m, r in vault.resolve_modules(modules_names, [''])]) def postprocessing(record): if finalize_handler: record = finalize_handler(record) user = vault.get_user_from_runtime_storage(record['id']) record['core'] = get_core_engineer_branch(user, modules) return record return _get_aggregated_stats(records, metric_filter, vault.get_memory_storage().get_user_ids(), 'user_id', 'author_name', finalize_handler=postprocessing) @app.route('/api/1.0/stats/engineers_extended') @decorators.exception_handler() @decorators.response() @decorators.cached(ignore=['metric']) @decorators.jsonify('stats') @decorators.record_filter(ignore=['metric']) def get_engineers_extended(records, **kwargs): modules_names = parameters.get_parameter(kwargs, 'module') modules = set([m for m, r in vault.resolve_modules(modules_names, [''])]) def postprocessing(record): record = decorators.mark_finalize(record) if not (record['mark'] or record['review'] or record['commit'] or record['email'] or record['patch']): return user = vault.get_user_from_runtime_storage(record['id']) record['company'] = user['companies'][-1]['company_name'] record['core'] = get_core_engineer_branch(user, modules) return record def record_processing(result, record, param_id): result_row = result[getattr(record, param_id)] record_type = record.record_type result_row[record_type] = result_row.get(record_type, 0) + 1 if record_type == 'mark': decorators.mark_filter(result, record, param_id, {}) result = {} for record in records: user_id = record.user_id if user_id not in result: result[user_id] = {'id': user_id, 'mark': 0, 'review': 0, 'commit': 0, 'email': 0, 'patch': 0, 'metric': 0} record_processing(result, record, 'user_id') result[user_id]['name'] = record.author_name response = result.values() response = [item for item in map(postprocessing, response) if item] response.sort(key=lambda x: x['metric'], reverse=True) utils.add_index(response) return response @app.route('/api/1.0/stats/distinct_engineers') @decorators.exception_handler() @decorators.response() @decorators.cached() @decorators.jsonify('stats') @decorators.record_filter() def get_distinct_engineers(records, **kwargs): result = {} for record in records: result[record.user_id] = { 'author_name': record.author_name, 'author_email': record.author_email, } return result @app.route('/api/1.0/activity') @decorators.exception_handler() @decorators.response() @decorators.jsonify('activity') @decorators.record_filter() def get_activity_json(records, **kwargs): start_record = int(flask.request.args.get('start_record') or 0) page_size = int(flask.request.args.get('page_size') or parameters.DEFAULT_RECORDS_LIMIT) query_message = flask.request.args.get('query_message') return helpers.get_activity(records, start_record, page_size, query_message) @app.route('/api/1.0/contribution') @decorators.exception_handler() @decorators.response() @decorators.cached(ignore=['metric']) @decorators.jsonify('contribution') 
@decorators.record_filter(ignore=['metric']) def get_contribution_json(records, **kwargs): return helpers.get_contribution_summary(records) @app.route('/api/1.0/companies') @decorators.exception_handler() @decorators.response() @decorators.cached(ignore=['company']) @decorators.jsonify() @decorators.record_filter(ignore=['company']) def get_companies_json(record_ids, **kwargs): memory_storage = vault.get_memory_storage() companies = set(company for company in memory_storage.get_index_keys_by_record_ids( 'company_name', record_ids)) if kwargs['_params']['company']: companies.add(memory_storage.get_original_company_name( kwargs['_params']['company'][0])) return [{'id': c.lower().replace('&', ''), 'text': c} for c in sorted(companies)] @app.route('/api/1.0/modules') @decorators.exception_handler() @decorators.response() @decorators.cached(ignore=['module']) @decorators.jsonify() @decorators.record_filter(ignore=['module']) def get_modules_json(record_ids, **kwargs): module_id_index = vault.get_vault()['module_id_index'] tags = parameters.get_parameter(kwargs, 'tag', plural_name='tags') # all modules mentioned in records module_ids = vault.get_memory_storage().get_index_keys_by_record_ids( 'module', record_ids) add_modules = set([]) for module in six.itervalues(module_id_index): if set(module['modules']) & module_ids: add_modules.add(module['id']) module_ids |= add_modules # keep only modules with specified tags if tags: module_ids = set(module_id for module_id in module_ids if ((module_id in module_id_index) and (module_id_index[module_id].get('tag') in tags))) result = [] for module_id in module_ids: module = module_id_index[module_id] result.append({'id': module['id'], 'text': module['module_group_name'], 'tag': module['tag']}) return sorted(result, key=operator.itemgetter('text')) @app.route('/api/1.0/companies/<company_name>') @decorators.response() @decorators.cached() @decorators.jsonify('company') def get_company(company_name, **kwargs): memory_storage_inst = vault.get_memory_storage() for company in memory_storage_inst.get_companies(): if company.lower() == company_name.lower(): return { 'id': company_name, 'text': memory_storage_inst.get_original_company_name( company_name) } flask.abort(404) @app.route('/api/1.0/modules/<module_id>') @decorators.response() @decorators.cached() @decorators.jsonify('module') def get_module(module_id, **kwargs): module = helpers.extend_module(module_id) if not module: flask.abort(404) return module @app.route('/api/1.0/members') @decorators.exception_handler() @decorators.response() @decorators.cached(ignore=['release', 'project_type', 'module']) @decorators.jsonify('members') @decorators.record_filter(ignore=['release', 'project_type', 'module']) def get_members(records, **kwargs): response = [] for record in records: record = vault.extend_record(record) nr = dict([(k, record[k]) for k in ['author_name', 'date', 'company_name', 'member_uri']]) nr['date_str'] = helpers.format_date(nr['date']) response.append(nr) response.sort(key=lambda x: x['date'], reverse=True) utils.add_index(response) return response @app.route('/api/1.0/stats/bp') @decorators.exception_handler() @decorators.response() @decorators.cached() @decorators.jsonify('stats') @decorators.record_filter() def get_bpd(records, **kwargs): result = [] for record in records: if record.record_type in ['bpd', 'bpc']: record = vault.extend_record(record) mention_date = record.get('mention_date') if mention_date: date = helpers.format_date(mention_date) else: date = 'never' result.append({ 'date': 
date, 'status': record['lifecycle_status'], 'metric': record.get('mention_count') or 0, 'id': record['name'], 'name': record['name'], 'link': helpers.make_blueprint_link(record['module'], record['name']) }) result.sort(key=lambda x: x['metric'], reverse=True) utils.add_index(result) return result @app.route('/api/1.0/users') @decorators.exception_handler() @decorators.response() @decorators.cached(ignore=['user_id']) @decorators.jsonify() @decorators.record_filter(ignore=['user_id']) def get_users_json(record_ids, **kwargs): core_in = parameters.get_single_parameter(kwargs, 'core_in') or None valid_modules = set() if core_in: core_in = set(core_in.split(',')) valid_modules = vault.resolve_project_types( kwargs['_params']['project_type']) valid_modules = set(m[0] for m in vault.resolve_modules( valid_modules, kwargs['_params']['release'])) user_ids = vault.get_memory_storage().get_index_keys_by_record_ids( 'user_id', record_ids) if kwargs['_params']['user_id']: user_ids.add(kwargs['_params']['user_id'][0]) result = [] for user_id in user_ids: user = vault.get_user_from_runtime_storage(user_id) r = {'id': user_id, 'text': user['user_name']} add_flag = not core_in if core_in and user.get('core'): core_modules = [module_branch[0] for module_branch in user['core'] if (module_branch[1] in core_in and module_branch[0] in valid_modules)] if core_modules: r['core'] = core_modules if user['companies']: r['company_name'] = user['companies'][-1]['company_name'] add_flag = True if add_flag: result.append(r) result.sort(key=lambda x: x['text']) return result @app.route('/api/1.0/users/<user_id>') @decorators.response() @decorators.jsonify('user') def get_user(user_id): user = vault.get_user_from_runtime_storage(user_id) if not user: flask.abort(404) user = helpers.extend_user(user) return user @app.route('/api/1.0/releases') @decorators.exception_handler() @decorators.response() @decorators.cached(ignore=parameters.FILTER_PARAMETERS) @decorators.jsonify(root=('data', 'default')) def get_releases_json(**kwargs): return ([{'id': r['release_name'], 'text': r['release_name'].capitalize()} for r in vault.get_release_options()], parameters.get_default('release')) @app.route('/api/1.0/metrics') @decorators.exception_handler() @decorators.response() @decorators.cached(ignore=parameters.FILTER_PARAMETERS) @decorators.jsonify(root=('data', 'default')) def get_metrics_json(**kwargs): return (sorted([{'id': m, 'text': t} for m, t in six.iteritems(parameters.METRIC_LABELS)], key=operator.itemgetter('text')), parameters.get_default('metric')) @app.route('/api/1.0/project_types') @decorators.response() @decorators.exception_handler() @decorators.cached(ignore=parameters.FILTER_PARAMETERS) @decorators.jsonify(root=('data', 'default')) def get_project_types_json(**kwargs): return ([{'id': pt['id'], 'text': pt['title'], 'child': pt.get('child', False)} for pt in vault.get_project_types()], parameters.get_default('project_type')) @app.route('/api/1.0/affiliation_changes') @decorators.exception_handler() @decorators.response() @decorators.jsonify('affiliation_changes') def get_company_changes(**kwargs): start_days = str(flask.request.args.get('start_days') or utils.timestamp_to_date(int(time.time()) - 365 * 24 * 60 * 60)) end_days = str(flask.request.args.get('end_days') or utils.timestamp_to_date(int(time.time()))) start_date = utils.date_to_timestamp_ext(start_days) end_date = utils.date_to_timestamp_ext(end_days) runtime_storage = vault.get_runtime_storage() result = [] for user in runtime_storage.get_all_users(): 
companies = user.get('companies') or [] if len(companies) < 2: continue companies_iter = iter(companies) company = companies_iter.next() old_company_name = company['company_name'] date = company['end_date'] for company in companies_iter: new_company_name = company['company_name'] if start_date <= date <= end_date: result.append({ 'user_id': user['user_id'], 'user_name': user['user_name'], 'old_company_name': old_company_name, 'new_company_name': new_company_name, 'date': date, }) old_company_name = new_company_name date = company['end_date'] return result def _get_week(kwargs, param_name): date_param = parameters.get_single_parameter(kwargs, param_name) if date_param: ts = utils.date_to_timestamp_ext(date_param) else: ts = vault.get_vault()[param_name] return utils.timestamp_to_week(ts) @app.route('/api/1.0/stats/timeline') @decorators.exception_handler() @decorators.response() @decorators.cached() @decorators.jsonify('timeline') @decorators.record_filter(ignore=['release', 'start_date']) def timeline(records, **kwargs): # find start and end dates metric = parameters.get_parameter(kwargs, 'metric') start_date = int(parameters.get_single_parameter(kwargs, 'start_date') or 0) release_name = parameters.get_single_parameter(kwargs, 'release') or 'all' releases = vault.get_vault()['releases'] if 'all' in release_name: start_week = release_start_week = _get_week(kwargs, 'start_date') end_week = release_end_week = _get_week(kwargs, 'end_date') else: release = releases[release_name] start_week = release_start_week = utils.timestamp_to_week( release['start_date']) end_week = release_end_week = utils.timestamp_to_week( release['end_date']) now = utils.timestamp_to_week(int(time.time())) + 1 # expand start-end to year if needed if release_end_week - release_start_week < 52: expansion = (52 - (release_end_week - release_start_week)) // 2 if release_end_week + expansion < now: end_week += expansion else: end_week = now start_week = end_week - 52 # empty stats for all weeks in range weeks = range(start_week, end_week) week_stat_loc = dict((c, 0) for c in weeks) week_stat_commits = dict((c, 0) for c in weeks) week_stat_commits_hl = dict((c, 0) for c in weeks) if ('commits' in metric) or ('loc' in metric): handler = lambda record: record.loc else: handler = lambda record: 0 # fill stats with the data if 'person-day' in metric: # special case for man-day effort metric release_stat = collections.defaultdict(set) all_stat = collections.defaultdict(set) for record in records: if start_week <= record.week < end_week: day = utils.timestamp_to_day(record.date) user_id = record.user_id if record.release == release_name: release_stat[day].add(user_id) all_stat[day].add(user_id) for day, users in six.iteritems(release_stat): week = utils.timestamp_to_week(day * 24 * 3600) week_stat_commits_hl[week] += len(users) for day, users in six.iteritems(all_stat): week = utils.timestamp_to_week(day * 24 * 3600) week_stat_commits[week] += len(users) else: for record in records: week = record.week if start_week <= week < end_week: week_stat_loc[week] += handler(record) week_stat_commits[week] += 1 if 'members' in metric: if record.date >= start_date: week_stat_commits_hl[week] += 1 else: if record.release == release_name: week_stat_commits_hl[week] += 1 if 'all' == release_name and 'members' not in metric: week_stat_commits_hl = week_stat_commits # form arrays in format acceptable to timeline plugin array_loc = [] array_commits = [] array_commits_hl = [] for week in weeks: week_str = utils.week_to_date(week) 
array_loc.append([week_str, week_stat_loc[week]]) array_commits.append([week_str, week_stat_commits[week]]) array_commits_hl.append([week_str, week_stat_commits_hl[week]]) return [array_commits, array_commits_hl, array_loc] @app.template_test() def too_old(timestamp): age = cfg.CONF.age_warn now = time.time() return timestamp + age < now def main(): logging.register_options(conf) logging.set_defaults() conf_file = os.getenv('STACKALYTICS_CONF') if conf_file and os.path.isfile(conf_file): conf(default_config_files=[conf_file]) app.config['DEBUG'] = cfg.CONF.debug LOG.info('Stackalytics.dashboard is configured via "%s"', conf_file) else: conf(project='stackalytics') logging.setup(conf, 'stackalytics.dashboard') app.run(cfg.CONF.listen_host, cfg.CONF.listen_port) if __name__ == '__main__': main()
{ "redpajama_set_name": "RedPajamaGithub" }
5,273
Nils Petter Petersson, född 18 oktober 1825 i Böda socken, Öland, död 28 april 1880 i Stockholm, var en svensk orgelbyggare. Pettersson lärde sig troligtvis bygga orglar av sin far, orgelbyggaren Sven Petter Pettersson. Han reparerade orglar på 1850-talet och 1860-talet. Biografi Pettersson föddes 18 oktober 1825 på Bocketorp 2 i Böda. Han var son till instrumentmakaren Sven Petter Pettersson och Ulrica Magdalena Hultin. 1844 flyttar de till Sjonhem på Gotland. Pettersson flyttade 1851 till Visby Pettersson flyttade 1854 till Prästgården i Vamlingbo, Gotland. 1856 flyttade familjen till Torslunda. 1857 flyttade familjen till Kalmar. 1858 flyttade familjen tillbaka till Prästgården i Vamlingbo, Gotland. 1859 flyttade familjen till Visby. 1873 flyttade familjen till Stockholm. Familj Pettersson gifte sig 1851 med Anna Helena Hägg (1826–1899). De fick tillsammans barnen Anna Charlotta Helena (1852–1853), Nils Adolf Hjalmar (född 1853), Per Edvard Alexander (född 1855), Hulda Alma Elisabeth (1856–1860), Emma Helena Augusta (1857–1938), Ida Charlotta Ulrika Victoria (1859–1938), Helena Elisabeth (1861–1928), Oscar Carl Christopher (född 1862), Eugenia Anna Mathilda (1864–1948), Sofia Cecilia Catharina (1866–1939), Gustaf Hägg (1867–1925) och Otto (född 1869). Lista över orglar Renoveringar Gesäller 1856–1862 - Carl Johan Hamberg (född 1825), var gesäll. 1860–1864 - Jonas Fredrik Olsson Sellander (född 1839), var gesäll. Litteratur och källor Sven Petter Pettersson Svenska orgelbyggare Män Födda 1825 Avlidna 1880 Svenska orgelbyggare under 1800-talet
{ "redpajama_set_name": "RedPajamaWikipedia" }
2,147
Radioactive-free vodka produced from crops in Chernobyl Scientists now want to produce the artisan vodka and give 75 per cent of profits back to the affected community School of the Environment Geography and Geosciences Faculty of Science and Health Sustainability and the Environment Democratic Citizenship A radioactive-free vodka produced from crops in Chernobyl's abandoned zone has been brewed by a team of scientists. Professor Jim Smith, at the University of Portsmouth, described the artisan vodka – branded ATOMIK – as possibly the most important bottle of spirits in the world. He and colleagues in Ukraine, where vodka is traditionally brewed, hope it will help the region recover economically. In a report released today, Professor Smith and colleagues in the UK and Ukraine present the results of a three-year research project into the transfer of radioactivity to crops grown in the Chernobyl Exclusion Zone. Professor Smith now wants to produce the artisan vodka made from grain grown near Chernobyl, and give 75 per cent of the profits back to the affected community. I think this is the most important bottle of spirits in the world because it could help the economic recovery of communities living in and around the abandoned areas. Professor Jim Smith, School of the Environment, Geography and Geosciences He said: "I think this is the most important bottle of spirits in the world because it could help the economic recovery of communities living in and around the abandoned areas. "Many thousands of people are still living in the Zone of Obligatory Resettlement where new investment and use of agricultural land is still forbidden." The team found some radioactivity in the grain: strontium-90 is slightly above the cautious Ukrainian limit of 20 Bq/kg. But, because distilling reduces any impurities in the original grain, the only radioactivity the researchers could detect in the alcohol is natural Carbon-14 at the same level you would expect in any spirit drink. They have diluted the distilled alcohol with mineral water from the deep aquifer in Chernobyl town, 10km south of the reactor, which has similar chemistry to groundwater in the Champagne region of France - and is also free from contamination. They are setting up a social enterprise "The Chernobyl Spirit Company" to begin to produce and sell "ATOMIK", a high quality home-made vodka or "moonshine". "We don't think the main Exclusion Zone should be extensively used for agriculture as it is now a wildlife reserve," said Professor Smith. "But there are other areas where people live, but agriculture is still banned." "33 years on, many abandoned areas could now be used to grow crops safely without the need for distillation. "We aim to make a high-value product to support economic development of areas outside the main Exclusion Zone where radiation isn't now a significant health risk." The report has been positively received by the State Agency of Ukraine for Exclusion Zone Management. Mr Oleg Nasvit, First Deputy Head, said: "We welcome this initiative to use abandoned lands to help local communities. It is important that we do everything we can to support the restoration of normal life in these areas whilst always putting safety first." Mr Nasvit added: "I'd call this a high quality moonshine - it isn't typical of a more highly purified vodka, but has the flavour of the grain from our original Ukrainian distillation methods – I like it." 
The artisanal vodka is one of the results of a project led by Professor Smith, a leading expert on Chernobyl. There are some legal issues to be completed first, but The Chernobyl Spirit Company is hoping to begin small scale experimental production of "ATOMIK" grain spirit sometime this year. Analytical tests of the water and distillate alcohol were conducted by the Ukrainian Hydrometeorological Institute, the University of Southampton GAU-Radioanalytical, the University of Portsmouth Geological and Environmental Laboratories and an independent wine and spirits testing laboratory. The artisanal vodka is one of the results of a project led by Professor Smith, a leading expert on Chernobyl, which was given funding to find out when and if it is safe to start using some of the abandoned land for growing crops. We welcome this initiative to use abandoned lands to help local communities. It is important that we do everything we can to support the restoration of normal life in these areas whilst always putting safety first. Mr Oleg Nasvit, First Deputy Head of the State Agency of Ukraine for Exclusion Zone Management He was awarded £100,000 by the Natural Environment Research Council (NERC) to work with the Ukraine government and other partners including the Ukrainian State Agency for Exclusion Zone Management, the Chernobyl ECOCENTRE, the Ukrainian Hydrometeorological Institute, the Ukrainian Institute for Agricultural Radiology and the Institute of Geological Sciences of Ukraine. UK partners are the University of Salford and the Centre for Ecology and Hydrology. The 4,200 square kilometre human exclusion zone around Chernobyl was put in place due to chronic radiation fall-out following the accident in 1986. Radiation was detected across Europe. About 300,000 residents were permanently evacuated from their homes after the accident. Earth and Environmental Sciences research Get to know the faculty Read more about the Faculty
Q: Error on type mismatch

I'm writing this function in SML. It is supposed to take a list of possible first-name variations (my name is Victoria, so V, Vic, Vicky, etc.) and create records of {altname1, middle, last}, {alt2, middle, last}. So here's my code:

fun similar_names (substits, name) =
    let
        val {first=n1, second=n2, third=n3} = name
        fun name_constructor (altnames : string list, acc) =
            case altnames of
                [] => acc
              | a::aa => {first=a, second=n2, third=n3}::acc
    in
        name_constructor (get_substitutions2 (substits, n1), name)
    end

get_substitutions2 will just give a list of all the possible variations of a first name (i.e. a string list), and it works. The error I'm getting is:

a02.sml:65.2-65.58 Error: operator and operand don't agree [tycon mismatch]
  operator domain: string list * {first:string, second:'Z, third:'Y} list
  operand:         string list * {first:string, second:'Z, third:'Y}
  in expression:
    name_constructor (get_substitutions2 (substits,n1),name)

I don't understand why it's complaining about a record list versus a record alone. Could you help?

A: name is only one record, but name_constructor expects acc to be a list (since you say ::acc). Try

name_constructor(get_substitutions2(substits, n1), [name])
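(Aside, not from the original thread: the same fix sketched in Python for illustration, since the key point is that the accumulator must start as a list containing the record, not the bare record. The names mirror the SML code; everything else is hypothetical.)

def similar_names(altnames, name):
    # name is a single record (dict); the accumulator starts as [name], a list.
    acc = [name]
    for a in altnames:
        acc = [{**name, "first": a}] + acc  # record with the alternative first name
    return acc

print(similar_names(["V", "Vic", "Vicky"],
                    {"first": "Victoria", "second": "M", "third": "Smith"}))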
{{Infobox government agency
| native_name_a = मंत्रालय नियोजनविभाग महाराष्ट्र शासन
| type = ministry
| name = Ministry of Planning, Government of Maharashtra
| seal = File:Seal of Maharashtra.png
| seal_width = 200
| seal_caption = Seal of the state of Maharashtra
| picture = Mantralay of Mumbai, Administrative Headquarters 03.jpg
| picture_caption = Building of the Administrative Headquarters, Mumbai
| jurisdiction = Maharashtra
| headquarters = Mantralay, Mumbai
| region_code = IN
| minister1_name = Devendra Fadnavis
| minister1_pfo = Deputy Chief Minister
| deputyminister1_name = Vacant, TBD since 29 June 2022
| deputyminister1_pfo = Minister of State
| chief1_name = (IAS)
| parent_department = Government of Maharashtra
}}

The Ministry of Planning is a ministry of the Government of Maharashtra. It is responsible for preparing annual plans for the development of Maharashtra state. The Ministry is headed by a cabinet-level Minister. Devendra Fadnavis is the current Deputy Chief Minister of Maharashtra and Minister of Planning.

Head office
List of Cabinet Ministers
List of Ministers of State
List of Principal Secretaries

Organizational structure
The Cabinet Minister is the head of the department and is assisted by the Minister of State. A Secretary from the Indian Administrative Service cadre is responsible for the execution of policies.

Directorate of Economics and Statistics
A special directorate is responsible for collecting statistics in Maharashtra for planning purposes.

Functions of the DES:
Collect official statistics
Data collection through surveys and censuses
Publish statistical publications regularly
Offer training to statistical personnel
Co-ordinate the work of statistical sections in different departments

The Directorate also acts as liaison between the Government of India and the Government of Maharashtra.

Maharashtra Remote Sensing Application Centre (MRSAC)
MRSAC is an autonomous body under the Ministry of Planning. MRSAC is the nodal agency for designing a Maharashtra Geo-Spatial Digital Database System (MGDDS). Farm yields are improved in Maharashtra using artificial intelligence and satellite images.

References
External links
Government ministries of Maharashtra
Economic planning in India
csharp-instagram-wrapper
========================

[Demo](http://devkod.com/InstagramCSharpSdk)

# C# SDK for Instagram API

It's a C# wrapper for the Instagram API. To install InstagramWrapper, run the following command in the Package Manager Console

<code>PM&gt; Install-Package InstagramWrapper</code>

or download the project and add a reference to InstagramWrapper.dll in your project.

# Dependencies

You must add a Json.NET (v6.0 or higher) reference to your project. To install Json.NET, run the following command in the Package Manager Console<br>
<pre>Install-Package Newtonsoft.Json</pre><br>
If you've already installed a version lower than 6.0, run this command instead.
<pre>Update-Package Newtonsoft.Json</pre>

# Create a new Application

Register your application on [Instagram Developers](http://instagram.com/developer/). While creating your app you must provide a redirect URL. During development you can host your application on localhost.

# Authentication

[Instagram Authentication Document](http://instagram.com/developer/authentication/)

If you are developing a web application, create a sign-in link:
<pre>
https://instagram.com/oauth/authorize/?client_id=CLIENT-ID&redirect_uri=REDIRECT-URI&response_type=token<br>
<a href="https://instagram.com/oauth/authorize/?client_id=CLIENT-ID&redirect_uri=REDIRECT-URI&response_type=token">Sign in with Instagram</a></pre>

For other project types (WinForms, Store apps or Windows Phone), use a web browser component and redirect it to the login URL.

# Permissions

<i>However, if you plan on asking for extended access such as liking, commenting, or managing friendships, you'll have to specify these scopes in your authorization request. Here are the scopes we currently support:</i>

+ `basic` - to read any and all data related to a user (e.g. following/followed-by lists, photos, etc.) (granted by default)
+ `comments` - to create or delete comments on a user's behalf
+ `relationships` - to follow and unfollow users on a user's behalf
+ `likes` - to like and unlike items on a user's behalf

<p>Add the scope parameter to your sign-in URL: scope=likes+comments</p>

# Receiving an Access Token

After the user has signed in to your application, you'll get a code on the redirect URL, which is exchanged for an access token.
<pre>http://your-redirect-uri?code=CODE</pre>

It's easy to get an access token via the C# SDK.
<pre>
InstagramAuth ia = new InstagramAuth();
InstaConfig ic = new InstaConfig();
ic.redirect_uri = "";  // your app redirect URL
ic.client_secret = ""; // your app secret
ic.client_id = "";     // your app client id
var user = ia.GetAccessToken(code, ic); // get the user who logged in, with an access_token
</pre>

Keep Development Alive

PayPal cagri058@hotmail.com
mersenneforum.org thread: "I take a known prime and prove it to be a composite (..or maybe need help?)"
(source: https://mersenneforum.org/showthread.php?s=01bbaa5b10cd39681b8953ac5c272476&t=19772&page=3)

2014-10-22, 01:02 #23 storflyt32:
Thanks for the link.

2014-10-22, 03:03 #24 ewmayer:
Quote (originally posted by storflyt32): "Right now, the largest known Fermat prime is 475856^524288+1 which is a quite bit smaller number than Mersenne48 (or M48)."
Hmm ... I was under the impression that 65537 is the largest known Fermat prime, but hey, what do I know?

2014-10-28, 20:36 #25 storflyt32:
@ewmayer: I guess you certainly know this already. Here is the general syntax when it comes to the Fermat numbers and their possible factorization.

2^0 + 1 = 2
2^1 + 1 = 3
2^2 + 1 = 5
2^4 + 1 = 17
2^8 + 1 = 257
2^16 + 1 = 65537

These numbers are the six first and only known Fermat factors which are known to be prime. They are designated F(0) through F(5), respectively. Try using for example F(6) in the input box at factordb.com (and not the web-browser address bar) and press the "Factorize!" button.

2^32 + 1, 2^64 + 1, 2^128 + 1 and 2^256 + 1, 2^512 + 1, 2^1024 + 1 and 2^2048 + 1 are known to be composite numbers as a whole, although they have now been completely factored. But when it comes to 2^4096 + 1, 2^8192 + 1, 2^16384 + 1, 2^32768 + 1 and so on, these numbers are only partially factored for now, meaning that there is a mix or combination of prime factors and a remaining composite part which has yet to be factorized. Finding the remaining factors of these numbers apparently is not a trivial matter.

First a factor somewhere between 54 digits (P54 = 568630647535356955169033410940867804839360742060818433) and roughly (2^4096+1)/25860116183332395113497853167940236083358054650286886725246241569916604094012679963198712829716480001 (which is the syntax for the C1133) has to be found, and it needs to divide both 2^4096+1 and the C1133 mentioned above in order to be a valid factor.

The question becomes: in which way is this supposed to be working? For now I really don't know the answer to this question, and I have tried it out a couple of times.

2014-10-28, 20:58 #26 storflyt32:
Here is another example: http://factordb.com/index.php?query=...79514579840739
The factors are already individually known, but apparently some 76.5 hours more to go in order to possibly factorize this number. Here are the factors for this number:
P46 = 7774289568841054467342907020273258993552032567
P171 = 393518691368286368255623097069712252457001299624045408491686917273145611784818640675586069862814412735711970194801864005631730155162613125109843290448727462456016513851317

2014-10-28, 21:13 #27 Brian-E:
Quote (storflyt32): "@ewmayer: I guess you certainly know this already."
Do you realise who you are talking to?
Quote (storflyt32): "Here is the general syntax when it comes to the Fermat numbers and their possible factorization. [...] These numbers are the six first and only known Fermat factors which are known to be prime. They are designated F(0) through F(5), respectively."
It really isn't a good idea to start inventing your own notation. You should use the standard terminology, and then everyone will know what you are talking about. The numbers in your list above, with the exception of the first one, are Fermat numbers (not Fermat factors). Remove the first one, which does not belong in the list; then the remaining numbers are $F_0$ through $F_4$. The Fermat number $F_n$ is $2^{2^n}+1$.

2014-10-28, 22:03 #28 storflyt32:
Sorry, should have skipped the 2. BTW: I was replying to ewmayer. Should be readily visible.

2014-10-28, 22:24 #29 storflyt32:
Anyway, I now have the output here for the mentioned 7333*2^138560+1. If this number was a prime number, it should definitely be showing up, but it does not do so. What if I update the factordb with the most recent result and let you know?
Edit: And this is something which I am apparently not able to do. Trust the numbers (meaning results).

2014-10-28, 22:51 #30 VBCurtis:
Quote (storflyt32): [the post above]
Output from what? Show up where? What recent result? Your comments are so general that it's hard to figure out what you are talking about, trying to do, or not understanding from the lengthy advice you have been given in this thread. The number you refer to is prime, and has been known to be prime for a while, so there is nothing to update in factordb or anywhere else. You have discovered nothing new about this number, but you have surely discovered which tools don't work for numbers this size.

2014-10-28, 23:25 #31 Xyzzy:
http://www.ams.org/journals/mcom/200...02-01479-5.pdf

2014-10-28, 23:36 #32 retina:
Quote (storflyt32): "@ewmayer: I guess you certainly know this already."
Yeah, exactly. I doubt ewmayer could tell the difference between Pépin's test and a pregnancy test. You tell him dude.

2014-10-28, 23:39 #33 storflyt32:
Thank you very much! Including http://factordb.com/index.php?query=...06725862850881
Reading the .pdf right now, by the way. You do not have to post or update this one if you wish not to do so, but this number: http://factordb.com/index.php?query=...92009040129237 has the two factors http://factordb.com/index.php?query=...56016513851317 and http://factordb.com/index.php?query=...43089817881761 respectively.
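(Aside, not part of the original thread: the Pépin test that retina alludes to is the standard primality test for Fermat numbers F_n = 2^(2^n) + 1. A minimal Python sketch, valid for n >= 1:)

def pepin(n):
    # F_n is prime iff 3^((F_n - 1) / 2) == -1 (mod F_n).
    f = 2 ** (2 ** n) + 1
    return pow(3, (f - 1) // 2, f) == f - 1

print([n for n in range(1, 10) if pepin(n)])  # [1, 2, 3, 4]: F_5 through F_9 are composite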
Q: What's actually wrong with an endpoint returning HTML rather than JSON data?

When I first started learning PHP (about 5 or 6 years ago) I learned about Ajax, and I went through "the phases":

* Your server returns HTML data and you put it inside a DOM's innerHTML
* You learn about data transfer formats such as XML (and say "oooh, so THAT'S what it's used for") and then JSON.
* You return JSON and build your UI using vanilla JavaScript code
* You move to jQuery
* You learn about APIs, headers, HTTP status codes, REST, CORS and Bootstrap
* You learn SPA, and frontend frameworks (React, Vue.js, and AngularJS) and the JSON API standard.
* You receive some enterprise legacy code and upon inspecting it, find that they do what's described in step 1.

As I worked with this legacy codebase, I didn't even consider that it could return HTML (I mean, we're professionals now, right?), so I had a hard time looking for the JSON endpoint that was returning the data that the Ajax calls populate. It was not until I asked "the programmer" that he told me it was returning HTML and being appended directly to the DOM with innerHTML.

Of course, this was hard to accept. I started thinking of ways to refactor this into JSON endpoints, thinking about unit testing the endpoints and so on. However, this codebase has no tests. Not a single one. And it's over 200k lines. Of course one of my tasks includes proposing approaches for testing the whole thing, but at the moment we're not tackling that yet.

So I'm left wondering: we have no tests whatsoever, and no particular reason to create a JSON endpoint (it's not "reusable": it literally returns data that only fits on that part of the application, though that was already implied, since it returns HTML data). What exactly is wrong with doing this?

A: There are (at least) 3 ways to build up a web page:

* Generate the entire page server side.
* Return a bare-bones page from the server plus code (JavaScript), and have the page fetch its data and render into HTML client side.
* Return a partial page plus code, and have the code fetch pre-rendered blocks of HTML that it can drop into the page.

The first one is fine. The second one is also fine. It's the last one that's the problem. The reason is simple: you have now divided the construction of the HTML page into completely disconnected parts.

The problem is one of maintenance. Now you have two separate entities in charge of managing the details of the UI, so you have to keep CSS and other similar details in sync between the two separate pieces. You changed the width of the side bar? Great. Now the HTML fragment that comes back causes horizontal scrolling because its assumptions about the side bar width no longer hold. You changed the background color for that block? Great, now your HTML fragment's font color clashes because it assumed a different background color and someone forgot to test that endpoint.

The point is that you have now split up knowledge that should be centralized in a single place (namely the presentation logic), and this makes it more difficult to make sure all the pieces fit together correctly. By using a JSON API, you can instead keep all that logic in just the front end, or you can keep it all in your server side templates if you render your data into HTML to begin with. It's about keeping the presentation knowledge/logic in a single place, so it can be managed consistently and as part of a single process.
HTML/CSS/JS is difficult enough to keep straight without breaking it up into a lot of tiny pieces. JSON APIs also have the additional benefit of making the data available completely independently from the presentation logic. This allows multiple, different presenters, such as both a mobile app and a web page, to consume the same data. In particular, it enables consuming the data without a browser (such as mobile apps or nightly cron jobs); these consumers may not even be able to parse HTML. (This of course necessarily relies on having a situation where the data is the same between the different consumers, or one can use a subset of the other.) Whether you need this ability depends on your particular application's requirements, though, while managing your presentation logic is necessary regardless. I will say that if you implement it up front, though, you'll be better prepared for future growth.

A: JSON and HTML fulfill two different semantic purposes. If you are populating a web page with data, use JSON. If you are building up a web page from portions of web pages, use HTML. They may kinda sound like they're the same thing, but they aren't, at all. For one thing, when you are building up a portion of a web page using HTML returned by the server, you are working server-side. When you are binding data to a web page, you are working client-side. Also, you have to be careful with HTML to not tightly bind to a specific page. The whole point of rendering partial pages in this way is for the partials to be reusable, and if you make the partial too specific, it won't compose onto other pages. JSON doesn't have this problem, because it's just data, not web page structure.

A: I would say there is nothing wrong with the server returning an HTML fragment and the UI assigning it to .innerHTML of some element. This is, in my opinion, the easiest way to develop asynchronous JavaScript code. The benefit is that as little as possible is done using JavaScript and as much as possible is done in a controlled back-end environment. Remember that JavaScript support in browsers varies, but your back-end always has the same version of the back-end components, meaning that doing as much as possible in the back-end means as few version incompatibilities as possible.

Now, sometimes you want more than just an HTML fragment. For example, a status code and an HTML fragment. Then you can use a JSON object that has two members, statusCode and HTML, of which the second can be assigned to .innerHTML of some element after checking the statusCode. So, using JSON and using innerHTML are by no means mutually exclusive approaches; they can be used together. By using JSON you can even have multiple HTML fragments in the same response, which get assigned to the .innerHTML of multiple elements. In summary: do use .innerHTML. It makes your code compatible with as many browser versions as possible. If you need more, use JSON and .innerHTML together. Avoid XML.

A: There is nothing wrong in principle. The question is: what do you want to achieve? JSON is perfect for transmitting data. If you send HTML instead and expect the client to extract the data from the HTML, that's rubbish. On the other hand, if you want to transmit HTML that is going to be rendered as HTML, then send it as HTML - instead of packing the HTML into a string, turning the string into JSON, transmitting JSON, decoding it on the other side, getting a string, and extracting the HTML from the string.
And just yesterday I ran into code that put two items into an array, turned the array into JSON, put the JSON into a string, put the string inside an array, turned the whole array into JSON, sent it to the client, which decoded the JSON, got an array containing a string, took the string, extracted the JSON from the string, decoded the JSON, and got an array with two items. Don't do that.

A: This all depends on the purpose of the API, but generally what you describe is a pretty strong violation of separation of concerns: in a modern application, API code should be responsible for data, and client code should be responsible for presentation. When your API returns HTML, you are tightly coupling your data and presentation. When the API returns HTML, the only thing you can do (easily) with that HTML is display it as some part of a larger page. From a different angle, the only thing the API is good for is supplying your page with HTML. Plus, you've spread your HTML across both the client and server code. This can make maintenance a headache. If your API returns JSON, or some other form of pure data, it becomes much more useful. The existing app can still consume that data, and present it appropriately. Now, though, other things can use the API to access the same data. Again, too, maintenance is easier - all the HTML resides in one place, so if you want to re-style the whole site, you have no need to change your API.

A: The main problem is that it tightly couples the server to the client, which must know the HTML structure. It also makes the endpoints more difficult to re-use in new ways or new applications. Returning plain data and letting the client render it decreases coupling and increases flexibility and testability - you can run unit tests on the client for mock data, and run unit tests on the server to test the desired data.

A: HTML is tied to a specific design and use. With HTML, if you want to change the page layout you need to change how the HTML is generated by the server call. Usually, that requires a back-end programmer. Now you have back-end programmers, who by definition aren't your best HTML writers, handling these updates. With JSON, if the page layout changes, the existing JSON server call doesn't necessarily have to change at all. Instead, your front-end developer, or even the designer, updates the template to produce the different HTML you want from the same basic data.

Additionally, the JSON can become the basis for other services. You might have different roles that need to see the same basic data in different ways. For example, you may have a customer web site that shows data about a product in an order page, and an inside sales page for reps that shows the same data in a very different layout, perhaps alongside some other information not available to general customers. With JSON, the same server call can be used in both views.

Finally, JSON can scale better. In recent years we've kind of gone overboard with client-side JavaScript frameworks. I think it's time to actually take a step back and start thinking about what JavaScript we're using, and how it affects browser performance... especially on mobile devices. That said, if you're running a site large enough to require a server farm or cluster, instead of a single server, JSON can scale better. Users will give you processing time in their browsers for free, and if you take advantage of that you can reduce the server load in a large deployment.
JSON also uses less bandwidth, so again, if you're big enough and use it appropriately, JSON is measurably cheaper. Of course, it can also scale worse, if you end up feeding 40KB libraries to parse 2KB of data into 7KB of HTML, so again: it pays to be aware of what you're doing. But the potential is there for JSON to improve performance and costs.

A: I think you have it a little backwards. You say:

we have no tests whatsoever, so we have no particular reason to create this JSON endpoint

A reason to use a proper endpoint would be so that you could test it. I'd say that having no tests is a very good reason to start writing some. That is, if there is any logic that would be suitable to test. 200k lines of code is a lot to refactor and is probably hard to maintain. Breaking out some endpoints and testing them might be a good place to start.

Another reason might be to separate the server from the client. If, in the distant future, the application design or layout changes, it's easier to work with a proper data format than HTML output. In an ideal world, you would only have to change the client and not touch the server at all.

A: What's actually wrong with an endpoint returning HTML rather than JSON data? Nothing, really. Each application has different requirements, and it may be that your application wasn't designed to be a SPA. It may be that these beautiful frameworks that you cited (Angular, Vue, React, etc.) weren't available at development time, or weren't "approved" enterprisey thingies to be used in your organization.

I'm gonna tell you this: an endpoint that returns HTML reduces your dependency on JavaScript libraries and reduces the load on the user's browser, since it won't need to interpret/execute JS code to create DOM objects - the HTML is already there, it's just a matter of parsing the elements and rendering them. Of course, this means we're talking about a reasonable amount of data. 10 megabytes of HTML data isn't reasonable.

But since there's nothing wrong with returning HTML, what you are losing by not using JSON/XML is basically the possibility of using your endpoint as an API. And here lies the biggest question: does it really need to be an API?

Related: Is it OK to return HTML from a JSON API?

A: There is nothing wrong with such an endpoint if it fulfills its requirements. If it is required to spit out HTML that a known consumer can parse effectively, sure, why not?

The problem is that, for the general case, you want your endpoints to spit out a payload that is well-formed and effectively parse-able by a standard parser. And by effectively parse-able, I mean parse-able declaratively. If your client is forced to read the payload and pry open information bits from it with loops and if-statements, then it is not effectively parse-able. And HTML, being the way it is, is very forgiving in not requiring itself to be well-formed. Now, if you make sure your HTML is XML-compliant, then you are gold.

With that said, I have a significant problem with this:

I'm gonna tell you this: an endpoint that returns HTML reduces your dependency on JavaScript libraries and reduces the load on the user browser since it won't need to interpret/execute JS code to create DOM objects - the HTML is already there, it's just a matter of parsing the elements and rendering them. Of course, this means we're talking about a reasonable amount of data. 10 megabytes of HTML data isn't reasonable.

That's a bad idea no matter how you cut it. Decades of collective industrial experience have shown us that, in general, it is a good idea to separate data (or model) from its display (or view). Here you are conflating the two for the purpose of speedy JS code execution. And that's a micro-optimization. I've never seen this as a good idea except in very trivial systems. My advice? Don't do it. HC SVNT DRACONES, YMMV, etc.

A: It's difficult to categorize a right or a wrong. IMO, the questions I'll ask are: "should it?", or "can it do with less?". Every endpoint should strive to communicate with as little content as possible. The signal-to-noise ratio is typically HTTP codes < JSON < XHTML. In most situations, it's good to choose the least noisy protocol.

I differ on the point about client browser load made by @machado, as with modern browsers this is a non-issue. Most of them are equipped to deal with HTTP codes and JSON responses pretty well. And although you don't have tests at the moment, long-term maintenance of a less noisy protocol would be cheaper than the one above it.

A: JSON is just a textual presentation of structured data. A client naturally needs a parser to process the data, but virtually all languages have JSON parser functions. It is much more efficient to use a JSON parser than an HTML parser, and it has a small footprint; not so with an HTML parser. In PHP, you just use json_encode($data) and it's up to the client on the other side to parse it. And when you fetch JSON data from a web service, you just use $data=json_decode($response) and you can decide how to use the data like you would do with variables.

Suppose you develop an app for a mobile device: why do you need the HTML format when mobile apps rarely use the web browser to parse data? Many mobile apps use JSON (the most common format) to exchange data. Considering that mobiles are often on metered plans, why do you want to use HTML, which takes much more bandwidth than JSON? Why use HTML when HTML is limited in its vocabulary and JSON can define data? {"person_name":"Jeff Doe"} says more about its data than HTML can, since HTML only defines structure for HTML parsers; it does not define the data.

JSON has nothing to do with HTTP. You can put JSON in a file. You can use it for configurations. Composer uses JSON. You can use it to save simple variables to files as well.
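For readers who want to see the contrast in code, here is a minimal sketch (not from any answer above; Python/Flask, with made-up routes and data) of the same data served both as a JSON endpoint and as a pre-rendered HTML fragment:

from flask import Flask, jsonify, render_template_string

app = Flask(__name__)
USERS = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Grace"}]  # stand-in data

@app.route("/api/users")
def users_json():
    # Pure data: a web page, a mobile app, or a cron job can all consume it.
    return jsonify({"users": USERS})

@app.route("/fragments/users")
def users_fragment():
    # Pre-rendered markup: only useful to a page that drops it into innerHTML,
    # and the <ul>/<li> presentation is now baked into the server.
    return render_template_string(
        "<ul>{% for u in users %}<li>{{ u.name }}</li>{% endfor %}</ul>",
        users=USERS)

The JSON route stays stable when the page design changes; the fragment route has to change whenever the surrounding markup does, which is exactly the coupling most answers above object to.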
/**
 * The types and classes of this package are responsible for binding a method call to calling another method.
 */
@NeverNull.ByDefault
package net.bytebuddy.implementation.bind;

import net.bytebuddy.utility.nullability.NeverNull;
/**
 *
 */
package me.yumin.sharding.client;

import java.util.Map;

/**
 * @author xuanyin
 */
@SuppressWarnings("rawtypes")
public interface IShardingPlugin {

    /**
     * Plugin execution entry point.
     *
     * @param actionName
     * @param result
     * @param shardingId
     * @param parameterMap
     * @return
     */
    boolean afterInvoke(String actionName, Object result, Object shardingId, Map parameterMap);
}
Prime number search
(source: https://math.stackexchange.com/questions/2110353/prime-number-search)

Let $$n = 2^8\times 3^5\times 5^3\times 7^3\times 11^2\times 13^2\times 17\times 19\times 23\times 29\times 31 \times 37 \times 41 \times 43 \times 47 \times 53 \times 59 \times 61 \times 67 \times 71 \times 73\times 79 \times 83 \times 89 \times 97 \times 101 \times 103\times 107 \times 109 \times 113 \times 127 \times 131 \times 137 \times 139 \times 149 \times 151 \times 157 \times 163 \times 167 \times 173 \times 179 \times 181 \times 191 \times 193 \times 199 \times 211\times 233 \times 239 \times 241 \times 251 \times 257 \times 263 \times 269 \times 271\times 277 \times 281 \times 283\times 293\times 307\times 311 \times 313 \times 331 \times 347 \times 359 \times 367 \times 373 \times 397 \times 409 \times 419 \times 431 \times 433\times 443\times 487 \times 491 \times 499\times 509 \times 577\times 593\times 619\times 641\times 653\times 659\times 683\times 719\times 743 \times 761\times 809 \times 911\times 953 \times 1013 \times 1019 \times 1031 \times 1049 \times 1103 \times 1223 \times 1229 \times 1301$$

$n$ has 97 prime factors and is around $2.6\times10^{229}$. Let $$A=\{3, 7, 9, 19, 31, 39, 49, 63, 79, 99, 127, 159, 199, 249, 319, 399, 499, 511, 639, 999, 1023\}$$ Out of the $42$ numbers of the form $10^n \pm k$, $k \in A$, which ones are prime?

My observation is that for all $x\in A$, $x+1$ is of the form $2^\alpha 5^\theta$. However, I can't see a pattern amongst the prime factors of $n$. I feel like Fermat's little theorem will help eliminate most of the numbers, but I have no idea how to prove any of them are actually prime.

For example, as $\phi(7) \mid n$, $10^n\equiv 1\pmod 7$. Thus, $10^n - 1023 \equiv 1-1023 \equiv 1022 \equiv 0$. The same argument works for 99 and 127 too.

Or in general, for any $x\in A\setminus \{3\}$, take an odd prime divisor $p$ of $x-1$; then $\phi(p) = p-1 \mid n$, thus $10^n - x$ is divisible by $p$.

• If any number of these $42$ is actually a prime, it will be very hard (or take very long) to prove it. So, assuming that this is a prepared-for-solving problem, I think that the answer is probably "none". – ajotatxe Jan 23 '17 at 15:37
• Could you eliminate some numbers with Fermat's little theorem? If yes, please report the current status! – Peter Jan 23 '17 at 17:24
• Added an example. – Emre Jan 23 '17 at 17:46

This is only a partial answer, just for some $10^n-k$.

For $k=(7,19,31,49,79,127,199,319,499,511)$ the last four digits of $10^n-k$ are $$(9993,9981,9969,9951,9921,9873,9801,9681,9501,9489)$$

As the sum of the digits in each of these is divisible by 3, as are all the preceding 9s, $10^n-k$ is divisible by 3, and hence composite, for $$k=(7,19,31,49,79,127,199,319,499,511)$$
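(A mechanical check of the answer's screening, not part of the original post; Python. Since $10 \equiv 1 \pmod 3$ and $n \geq 1$, we have $10^n \pm k \equiv 1 \pm k \pmod 3$, so no modular exponentiation is needed:)

A = [3, 7, 9, 19, 31, 39, 49, 63, 79, 99, 127, 159, 199, 249,
     319, 399, 499, 511, 639, 999, 1023]

for k in A:
    if (1 - k) % 3 == 0:
        print(f"10^n - {k} is divisible by 3, hence composite")
    if (1 + k) % 3 == 0:
        print(f"10^n + {k} is divisible by 3, hence composite")

Running it reproduces exactly the answer's list k = (7, 19, 31, 49, 79, 127, 199, 319, 499, 511) for the minus sign, and flags nothing for the plus sign.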
package org.apache.drill.exec.udfs;

import io.netty.buffer.DrillBuf;
import org.apache.drill.common.types.TypeProtos.DataMode;
import org.apache.drill.common.types.TypeProtos.MinorType;
import org.apache.drill.exec.expr.holders.VarCharHolder;
import org.apache.drill.exec.vector.complex.reader.FieldReader;
import org.apache.drill.exec.vector.complex.writer.BaseWriter;

import java.util.Iterator;

public class ComplexSchemaUtils {

  public static DrillBuf getFields(FieldReader reader, BaseWriter.ComplexWriter outWriter, DrillBuf buffer) {
    BaseWriter.MapWriter queryMapWriter = outWriter.rootAsMap();

    if (reader.getType().getMinorType() != MinorType.MAP) {
      // If the field is not a map, write an empty map and stop: iterating
      // over a non-map reader below would be meaningless.
      queryMapWriter.start();
      queryMapWriter.end();
      return buffer;
    }

    Iterator<String> fieldIterator = reader.iterator();
    queryMapWriter.start();

    while (fieldIterator.hasNext()) {
      String fieldName = fieldIterator.next();
      FieldReader fieldReader = reader.reader(fieldName);

      // Describe the field's type, prefixing repeated (array) fields with the mode.
      String dataType = fieldReader.getType().getMinorType().toString();
      DataMode dataMode = fieldReader.getType().getMode();
      if (dataMode == DataMode.REPEATED) {
        dataType = dataMode + "_" + dataType;
      }

      // Copy the type name into the working buffer and emit it as a VarChar
      // map entry keyed by the field's name.
      VarCharHolder rowHolder = new VarCharHolder();
      byte[] rowStringBytes = dataType.getBytes(java.nio.charset.StandardCharsets.UTF_8);
      buffer = buffer.reallocIfNeeded(rowStringBytes.length);
      buffer.setBytes(0, rowStringBytes);
      rowHolder.start = 0;
      rowHolder.end = rowStringBytes.length;
      rowHolder.buffer = buffer;
      queryMapWriter.varChar(fieldName).write(rowHolder);
    }
    queryMapWriter.end();
    return buffer;
  }
}
\section{Introduction and Related Work} The exponential growth in the number of medical images of different modalities used in clinical practice has led to an interest in developing automated methods for the analysis of medical images. Computed tomography (CT) is a noninvasive diagnostic imaging procedure that uses a combination of X-rays and computer algorithms to generate different internal views of the body (often called slices). A CT scan shows detailed images of internal organs and structures inside the body, including the bones, muscles and fat. CT scans of the abdomen and pelvis are used as a diagnostic imaging test to aid in the detection of diseases of the liver, biliary tract, small intestine and colon. CT scanning is painless, noninvasive and accurate, and compared to Magnetic Resonance Imaging (MRI), CT has wider availability and is fast. CT images are generated using X-ray beams, and the amount of X-rays absorbed by tissue at each location in the body is mapped to Hounsfield units (HU). The denser the tissue, the more the X-rays are attenuated, and the higher the number of HU. Water is always set to be 0 HU, while air is −1000 HU, and bones have values between several hundred and several thousand HU. Manual segmentation of organs and tumors from medical images is a tedious task and often introduces inter-rater variability. Convolutional neural networks (CNNs) \cite{cnn} have been applied to a wide variety of image classification \cite{alex,vgg,resnet} and semantic segmentation \cite{fcn, unet} tasks. In this paper, we focus on segmentation of the liver and its tumors in CT images covering the thoracic to pelvic region using CNNs. Our network architecture for the segmentation task is inspired by DenseNet \cite{densenet,tiramisu}. DenseNet connects each layer to every other layer in a feed-forward fashion by concatenation of all feature outputs. The output of the $l^{th}$ layer is defined as \begin{equation} \label{eq:dense} x_{l}=H_{l}([x_{l-1},x_{l-2}, \cdots, x_0]) \end{equation} where $x_{l}$ represents the feature maps at the $l^{th}$ layer and $[\cdots]$ represents the concatenation operation. In our case, $H_{l}$ is the layer comprising Batch Normalization (BN) \cite{bn}, followed by an Exponential Linear Unit (ELU) \cite{elu}, a convolution and dropout \cite{dropout}. This kind of connectivity pattern aids the reuse of features and allows implicit deep supervision during training, and thus substantially reduces the number of parameters while maintaining good performance, which is ideal in scenarios with limited data. Each layer outputs $k$ feature maps, where $k$ is the growth-rate parameter; the number of feature maps in DenseNet therefore grows linearly with depth. A Transition Down (TD) layer in DenseNet is introduced for reducing the spatial dimension of feature maps, which is accomplished by using a $1\times1$ convolution (depth preserving) followed by a $2\times 2$ max-pooling operation. A denseblock refers to the concatenation of new feature maps created at a given resolution. \section{Our Method} CT windowing is a technique frequently used in the evaluation of CT scans for the purpose of enhancing the contrast of the particular type of tissue or abnormality being examined. The abdominal CT images of a patient comprise various organs such as the liver, spleen, gallbladder, etc. Anatomically, the HU range of the liver is $60\pm6$. Our method is based on a 2-stage cascaded approach \cite{patrick} for segmentation of the liver and its tumors from HU-windowed CT volumes.
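Before detailing the two stages, the windowing-plus-normalization used throughout can be written down compactly (an illustrative NumPy sketch, ours, not the authors' code; it assumes the clipped volume attains both window bounds, and uses the window levels listed in the next subsection):

\begin{verbatim}
import numpy as np

def window_and_normalize(volume_hu, low, high):
    # Clip the CT volume (in HU) to [low, high], then min-max scale to [0, 1].
    v = np.clip(volume_hu.astype(np.float32), low, high)
    return (v - low) / (high - low)

def tumor_input(volume_hu):
    # Tumor model: three channels built from three HU windows.
    windows = [(0, 100), (-100, 200), (-100, 400)]
    return np.stack([window_and_normalize(volume_hu, lo, hi)
                     for lo, hi in windows], axis=0)
\end{verbatim}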
In this method we first train the liver model for the task of liver segmentation. Since the shape (contour) and texture of the liver are simple compared to those of its tumors, the CT images after windowing were down-sampled to half their original size of $512 \times 512$ and then used for training. This helped in reducing the computation required for training the liver model and also helps in faster segmentation of the liver. The tumor model took a 3-channel input and was trained independently on full-sized CT images of the liver windowed at $3$ different levels having different window widths. During prediction, the liver segmentation from the first stage in the cascade (liver model) aids the second stage (tumor model) by precisely localizing the liver regions in CT images to produce combined predictions of liver and tumor segmentations. \subsection{Dataset and Preprocessing} The models were trained and tested on the LiTS MICCAI-2017 challenge dataset, which comprised $200$ contrast-enhanced CT images taken at different phases (mostly the venous phase), with only a few cases showing anomalies like fatty liver, cirrhotic liver, calcification in the liver, etc. Out of the $200$ CT scans, radiologist hand-drawn ground truths were given for $130$ scans for training the model, and the remaining $70$ were used for testing by the challenge organizers. We divided the $130$ training volumes into train: $90$ volumes, validation: $26$ volumes, test: $14$ volumes. For the liver model, the following pre-processing steps were applied to the CT volumes in the order specified: \begin{enumerate} \item HU values are windowed to the range $[-100,300]$. \item $0$--$1$ min-max normalization on the entire volume. \item Down-sample the slices from $512 \times 512$ to $256 \times 256$. \end{enumerate} For the tumor model, the following pre-processing steps were applied in the order specified: \begin{enumerate} \item 3 different HU windowing ranges ($[0,100]$, $[-100,200]$, $[-100,400]$) were used to produce 3 images. \item $0$--$1$ min-max normalization on the entire volume. \end{enumerate} In most of the CT volumes in the challenge dataset, the liver and tumor slices comprised a small fraction of the total volume. So, in order to address the data imbalance, the liver model was trained only on liver slices plus an additional 10 slices above and below the liver. Similarly, the tumor model was trained only on tumor slices plus an additional 5 slices above and below the tumor. \subsection{Liver Model} \subsubsection{Network Architecture}\hspace*{\fill} \\ \begin{figure} \begin{center} \includegraphics[width=0.35\textheight]{liver_network} \end{center} \caption{Network Architecture of liver model.} \label{fig:arch_liver} \end{figure} Fig.~(\ref{fig:arch_liver}) illustrates the schematic diagram of the proposed network for liver segmentation. Our proposed network doesn't have skip connections. The up-sampling and down-sampling paths use fully convolutional denseblocks. Each denseblock comprises 4 layer blocks, and each layer in the denseblock is sequentially composed of BN $\rightarrow$ ELU $\rightarrow$ $3\times3$ convolution. The first dense block was prefixed with a layer comprising several convolution filters of size $3\times3$ applied to the input images. In the down-sampling path, the input to a dense block was concatenated with its output, leading to a linear growth of the number of feature maps.
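In code, one such dense block can be sketched as follows (an illustrative PyTorch snippet, ours, not the authors' implementation; channel counts are arbitrary):

\begin{verbatim}
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    # Each of the 4 layers is BN -> ELU -> 3x3 conv (-> dropout) and sees the
    # concatenation of the block input with all previously created maps;
    # every layer emits k (growth-rate) new feature maps, per Eq. (1).
    def __init__(self, in_ch, k=12, n_layers=4, p_drop=0.2):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Sequential(
                nn.BatchNorm2d(in_ch + i * k),
                nn.ELU(),
                nn.Conv2d(in_ch + i * k, k, kernel_size=3, padding=1),
                nn.Dropout2d(p_drop))
            for i in range(n_layers)])

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats[1:], dim=1)  # the new maps created at this scale
\end{verbatim}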
The Transition-Down (TD) block consists of BN $\rightarrow$ ELU $\rightarrow$ $1\times1$ convolution $\rightarrow$ $2\times2$ max-pooling layers. The last layer of the down-sampling path is referred to as the Bottleneck.\\ In the up-sampling path, the input of a dense block is not concatenated with its output. The Transition-Up (TU) block consists of a spatial bilinear up-sampler followed by BN $\rightarrow$ ELU $\rightarrow$ $3 \times3$ convolution layer. During training we found that using a spatial bilinear up-sampler followed by a convolution yielded better results than a transposed convolution (deconvolution) with a stride of $2$. Our thesis is that the network also learns a better way of up-sampling the outputs, which resulted in better predictions. To get the final segmentation map of size $512 \times 512$, a simple spatial bilinear up-sampling block (SBU) is added in the penultimate layer. The feature maps of the final up-sampling component were convolved with a $3\times3$ convolution layer followed by a soft-max layer to generate the final segmentation. To prevent over-fitting, a dropout rate of $0.2$ was applied after each convolution layer.\\ \begin{table} \parbox{.2\linewidth}{ \centering \caption*{\textbf{Layer}} \begin{tabular}{|c|ll} \cline{1-1} Batch Normalization & & \\ \cline{1-1} Exponential Linear Unit & & \\ \cline{1-1} $3 \times 3$ Convolution & & \\ \cline{1-1} Dropout $p = 0.2$ & & \\ \cline{1-1} \end{tabular} } \hfill \parbox{.2\linewidth}{ \centering \caption*{\textbf{TD}} \begin{tabular}{|c|ll} \cline{1-1} Batch Normalization & & \\ \cline{1-1} Exponential Linear Unit & & \\ \cline{1-1} $1 \times 1$ Convolution & & \\ \cline{1-1} Dropout $p = 0.2$ & & \\ \cline{1-1} $2 \times 2$ Max Pooling & & \\ \cline{1-1} \end{tabular} } \hfill \parbox{.2\linewidth}{ \centering \caption*{\textbf{TU}} \begin{tabular}{|c|ll} \cline{1-1} Spatial Bilinear\\ Up-sampler \\ $3 \times 3$ Convolution\\ Batch Normalization & & \\ \cline{1-1} \end{tabular} } \caption{Building blocks of the network architecture. From left to right: layer used in the model, Transition Down (TD) and Transition Up (TU).} \label{tab:bloc} \end{table} Table~(\ref{tab:bloc}) summarizes the individual blocks of the network architecture. It was observed that using Exponential Linear Units (ELUs) instead of Rectified Linear Units (ReLUs) led to faster convergence.\\ \subsubsection{Loss function}\hspace*{\fill} \\ The liver being the largest organ in the body, the class imbalance in the CT volume is minimal, but contouring the liver borders against its neighboring organs in CT images is generally not precise. Hence, in order to predict the edges precisely, weight maps were generated from the given ground-truth segmentation (see Figure \ref{fig:wmap}); edges of the liver were weighted higher relative to the interior regions of the liver. These weight maps were used during training for minimizing the spatially-weighted cross-entropy loss function. The use of weight maps adds a heavy penalty in the cost function for imprecise liver contours predicted during training.
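Such a boundary-emphasizing weight map can be generated along these lines (an illustrative sketch, ours, not the authors' implementation; the edge weight of 5 is an assumed value):

\begin{verbatim}
import numpy as np
from scipy.ndimage import binary_erosion

def boundary_weight_map(liver_mask, edge_weight=5.0):
    # Border pixels = mask minus its one-step morphological erosion.
    mask = liver_mask.astype(bool)
    border = mask ^ binary_erosion(mask)
    wmap = np.ones_like(liver_mask, dtype=np.float32)
    wmap[border] = edge_weight  # penalize mistakes on the liver contour more
    return wmap
\end{verbatim}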
\begin{figure}[h] \subfloat[Ground Truth Segmentation]{\includegraphics[width=1.8in]{gt_liver}} \subfloat[Weight Map]{\includegraphics[width=1.8in]{wmap_liver}} \caption{The figure shows the weight-map generation from the ground-truth label image} \label{fig:wmap} \end{figure} \begin{equation} \label{eq:loss_liver} \begin{split} total\_loss &= spatially\_weighted\_cross\_entropy\_loss + \\ & L2\_loss \end{split} \end{equation} In our experiments we define an epoch to be completed when a specified number of iterations has been executed on randomly sampled batches of data from the train and validation sets. In this experiment the numbers of iterations were $1000$ and $250$ for the train and validation datasets, respectively. The parameters of the network were optimized so as to minimize the $total\_loss$ of Equation~(\ref{eq:loss_liver}). The liver model was trained with a batch size of $4$ for $80$ epochs with a learning rate of $10^{-4}$ and L2 weight decay of $10^{-6}$ using the ADAM optimizer \cite{adam}.\\ \subsubsection{Post processing}\hspace*{\fill} \\ The outputs of the liver network were subjected to the following post-processing steps sequentially: \begin{enumerate} \item Morphological binary erosion \item Selection of the largest connected component \item Morphological binary dilation \end{enumerate} By applying these techniques, we were able to remove some false positives, like the spleen or heart, that might be very closely attached to the liver. \subsection{Tumor Model} \subsubsection{Network Architecture}\hspace*{\fill} \\ The network architecture for the tumor model was similar to the liver model, except that it accepted 3-channel inputs and there was no simple spatial bilinear up-sampling block (SBU) as in the liver model. The 3 channels were fed with the same slice of the CT volume, but each channel's HU windowing was at one of 3 different levels with different window widths. It was observed that by providing 3 differently HU-windowed channels, the model is able to learn the tumor and its boundary much better compared to a single-channel input. To prevent over-fitting, a dropout rate of $0.2$ was applied.\\ \subsubsection{Loss function}\hspace*{\fill} \\ Tumors are diffuse and inhomogeneous in shape and sparsely located in the CT volumes. This leads to class imbalance in the dataset, thereby making it hard for the network to learn the intricate features of the tumor. Hence, in order to reduce the class imbalance, we employ two strategies: \begin{itemize} \item Use of weight maps with the tumor portion weighted very high compared to the background. The weight maps are used for the calculation of the spatially-weighted cross-entropy loss. \item Use of a weighted combination of two loss functions: the spatially-weighted cross-entropy loss and a loss function based on the dice overlap coefficient. \end{itemize} \par The dice coefficient is an overlap metric used for assessing the quality of segmentation maps. The dice coefficient between two binary volumes can be written as: \begin{equation} \label{eq:dice} DICE =\frac{2\sum_{i}^{N}p_ig_i}{\sum_i^Np_i^2+\sum_i^Ng_i^2} \end{equation} where the sums run over the $N$ voxels of the predicted binary segmentation volume $p_i \in P$ and the ground-truth binary volume $g_i \in G$. The parameters of the network were optimized so as to minimize the $total\_loss$ of Equation~(\ref{eq:loss_tumor}).
\begin{equation} \label{eq:loss_tumor} \begin{split} total\_loss &= \lambda (spatially\_weighted\_cross\_entropy\_loss) + \\ & \gamma (1-dice\_coefficient) + L2\_loss \end{split} \end{equation} where $\lambda$ and $\gamma$ are empirically assigned weights for the individual losses. During training it was observed that the dice term allowed higher overlap scores than training with the cross-entropy-based loss function alone. In this work we set $\gamma = 0.5$ and $\lambda = 0.5$.\\ \subsubsection{Post processing}\hspace*{\fill} \\ The predictions of the tumor network were masked with the liver prediction, so the final tumor predictions lie only inside the liver. \section{Conclusion} With the pre-processing, network architecture and post-processing steps discussed in this paper, we achieved an average liver dice of 0.93 and an average tumor dice of 0.45 on the 14 held-out test volumes from the dataset split described above. Our liver predictions have a smoother surface (as 3D volumes) and more precise edges than liver predictions from a U-Net, whose predictions show ridges on the surface that arise from very fine slice-wise predictions caused by the skip connections. The true positives of the tumor predictions mostly have more than 50\% overlap with the ground truth, but the model predicts a lot of false positives, like gallbladder edges, the diaphragm, vessels, etc. The results of our proposed 2-stage cascaded model's predictions on the 70 CT test volumes are summarized in Tables \ref{tab:res1} \& \ref{tab:res2}. \begin{table}[!h] \centering \begin{tabular}{|c|c|c|} \hline {\ul \textbf{Metrics}} & \textbf{\begin{tabular}[c]{@{}c@{}}Computed LIVER \\ SEGMENTATION metrics\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Computed LESION \\ SEGMENTATION metrics\end{tabular}} \\ \hline \textbf{voe} & 0.150 & 0.411 \\ \hline \textbf{dice global} & 0.923 & 0.625 \\ \hline \textbf{dice} & 0.912 & 0.725 \\ \hline \textbf{rmsd} & 9.682 & 2.070 \\ \hline \textbf{rvd} & -0.008 & 19.705 \\ \hline \textbf{assd} & 6.465 & 1.441 \\ \hline \textbf{jaccard} & 0.850 & 0.589 \\ \hline \textbf{dice\_per\_case} & 0.912 & 0.492 \\ \hline \textbf{msd} & 45.928 & 7.515 \\ \hline \end{tabular} \caption{Results of segmentation metrics on the test set comprising 70 CT volumes} \label{tab:res1} \end{table} \begin{table}[!h] \centering \begin{tabular}{|c|c|} \hline \textbf{Computed LESION DETECTION metrics} & \textbf{Computed TUMOR BURDEN} \\ \hline recall: 0.348 & rmse: 0.044 \\ \hline precision\_greater\_zero: 0.211 & max: 0.194 \\ \hline precision: 0.117 & \\ \hline recall\_greater\_zero: 0.628 & \\ \hline \end{tabular} \caption{Results of lesion detection and tumor burden estimation metrics on the test set comprising 70 CT volumes} \label{tab:res2} \end{table} \appendices \section{Segmentation Results} \begin{figure}[h] \begin{center} \includegraphics[width=0.35\textheight]{prediction} \end{center} \caption{The figure shows the segmentation results of liver and tumor in a CT slice of the test dataset. The color red indicates liver and green indicates tumor. The proposed model sometimes misclassifies darker vessels as tumors.} \end{figure}
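As a final code-level illustration, the combined objective of Eq.~(\ref{eq:loss_tumor}) can be written as follows (an illustrative NumPy sketch, ours, not the authors' implementation; a small smoothing constant is added to avoid division by zero, and the L2 term is assumed to be handled by the optimizer's weight decay):

\begin{verbatim}
import numpy as np

def dice_coefficient(p, g, eps=1e-7):
    # DICE = 2*sum(p*g) / (sum(p^2) + sum(g^2)), summed over all voxels.
    return (2.0 * np.sum(p * g) + eps) / (np.sum(p * p) + np.sum(g * g) + eps)

def total_loss(p, g, weight_map, lam=0.5, gamma=0.5, eps=1e-7):
    # Spatially weighted binary cross-entropy plus (1 - DICE).
    ce = -np.mean(weight_map * (g * np.log(p + eps)
                                + (1.0 - g) * np.log(1.0 - p + eps)))
    return lam * ce + gamma * (1.0 - dice_coefficient(p, g))
\end{verbatim}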
Cempaka Putih Barat is an urban village (kelurahan) in the administrative district of Cempaka Putih, Central Jakarta (Jakarta Pusat), in the province of Jakarta, Indonesia. The village had 37,234 inhabitants at the 2010 census.

Kelurahan of Jakarta
Q: Is the solution to this complex power series correct?

Find all $z$ for which the series $\sum (1+i)^n z^n$ converges.

Using the ratio test I got $|z| < \frac{1}{\sqrt{2}}$.

A: Using the ratio test you can indeed deduce that it converges if $|z| < \frac{1}{\sqrt{2}}$. And also that it diverges if $|z| > \frac{1}{\sqrt{2}}$. Now, it remains to see what happens if $|z| = \frac{1}{\sqrt{2}}$. In that case, $$\left|(1+i)^n z^n\right| = 1$$ and therefore the series diverges then, too.
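(A quick numerical sanity check, not part of the original answer; Python, illustrative only. The modulus of $1+i$ is $\sqrt{2}$, so the radius of convergence is its reciprocal, and inside that radius the partial sums converge to the geometric-series limit:)

radius = 1 / abs(1 + 1j)        # 1/sqrt(2), about 0.7071
z = 0.9 * radius                # strictly inside the disc of convergence
partial = sum(((1 + 1j) * z) ** n for n in range(200))
print(abs(partial - 1 / (1 - (1 + 1j) * z)))  # tiny: on the order of 0.9**200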
Q: Hibernate Automatically Prevent Cross Joins without rewriting large query?

Suppose I have a large Hibernate-generated query that comes out with CROSS JOINs, which is hurting performance:

SELECT studyt0_.abbreviation AS col_0_0_,
  userst2_.user_name AS col_1_0_,
  recallst3_.id AS col_2_0_,
  recallst3_.recall_date AS col_3_0_,
  recallst3_.created_date AS col_4_0_,
  (SELECT lookupt8_.description
   FROM PUBLIC.answers_t answerst5_
   CROSS JOIN PUBLIC.activity_questions_t activityqu6_
   CROSS JOIN PUBLIC.activities_t activities7_
   CROSS JOIN PUBLIC.lookup_t lookupt8_
   WHERE lookupt8_.id = activities7_.activity_category_id
     AND activities7_.id = activityqu6_.activity_id
     AND activityqu6_.id = answerst5_.activity_question_id
     AND activityqu6_.question_id = 1
     AND answerst5_.event_id = eventst4_.id) AS col_5_0_,
  (SELECT activities11_.activity_title
   FROM PUBLIC.answers_t answerst9_
   CROSS JOIN PUBLIC.activity_questions_t activityqu10_
   CROSS JOIN PUBLIC.activities_t activities11_
   etc

As I understand it, this is due to implicit joins we have in our HQL.

(select l.description from AnswersT ans, ActivityQuestionsT aq, ActivitiesT a, LookupT l " +
"where l.id=a.lookupT.id and a.id=aq.activitiesT.id and aq.id=ans.activityQuestionsT.id and aq.questionsT.id=1 and ans.eventsT.id=e.id) as category, " +

Question: Is there a way to quickly get rid of the CROSS JOINs and replace them with INNER JOINs without rewriting the entire query, which is very large? Is there a config workaround of some kind?

A: I see two things here that can be optimized.

1) The approach (select A from ...) as nameA, (select B from ...) as nameB, ... leads to a huge cross join: every value in the first column will be combined with every value in the second column, etc. To me it looks like a bug. Normally there should be only one from clause, something like select A as nameA, B as nameB, ... from ....

2) Independently of this, check your where clause. The current condition

where l.id=a.lookupT.id and a.id=aq.activitiesT.id and aq.id=ans.activityQuestionsT.id and aq.questionsT.id=1 and ans.eventsT.id=e.id

can be replaced with a simpler one:

where ans.activityQuestionsT.questionsT.id = 1
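(An illustrative aside, not part of the original answer, and in a different stack: the same implicit-to-explicit join rewrite expressed with Python/SQLAlchemy. The mapped classes Lookup, Activity, ActivityQuestion and Answer are hypothetical stand-ins for the HQL entities above; the point is that spelling out the join path lets the ORM emit INNER JOINs instead of comma-separated FROM items.)

from sqlalchemy import select

stmt = (
    select(Lookup.description)
    .join(Activity, Activity.lookup_id == Lookup.id)
    .join(ActivityQuestion, ActivityQuestion.activity_id == Activity.id)
    .join(Answer, Answer.activity_question_id == ActivityQuestion.id)
    .where(ActivityQuestion.question_id == 1)
)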
{ "redpajama_set_name": "RedPajamaStackExchange" }
655
<?php namespace Mauchede\RancherApi\Exception; /** * ColumnNotFoundException is thrown when a column does not exist. * * @author Morgan Auchede <morgan.auchede@gmail.com> */ class ColumnNotFoundException extends \RuntimeException implements RancherApiException { }
{ "redpajama_set_name": "RedPajamaGithub" }
586
{"url":"https:\/\/chorasimilarity.wordpress.com\/2013\/07\/page\/2\/","text":"# Local FAN-IN eliminates GLOBAL FAN-OUT (II)\n\nAs I wrote in\u00a0\u00a0\u00a0Local FAN-IN eliminates global FAN-OUT\u00a0(I) , the introduction of the three moves (FAN-IN and the two DIST moves) eliminates global FAN-OUT from the lambda calculus sector of the graphic lambda calculus.\u00a0 In this post we shall see that we can safely eliminate other two moves, namely R1a, R1b, as well as improving the behaviour of the crossings from the $\\lambda$-TANGLE sector.\n\nThe equilibrium is thus established: three new moves instead of the three old moves. And there are some unexpected advantages.\n\n______________\n\nProposition.\n\nProof.\u00a0 (a) Done in the following picture.\n\nThe proof of (b) is here:\n\nFinally, here is the proof of (c):\n\n______________\n\nThe $\\lambda$-TANGLE sector of the graphic lambda calculus is obtained by using the lambda-crossing macros\n\nIn Theorem 6.3\u00a0\u00a0\u00a0arXiv:1305.5786 [cs.LO]\u00a0 I proved that all the oriented Reidemeister moves (with the crossings replaced by the respective macros), with the exception of the moves R2c, R2d, R3a and R3h, can be proved by using the graphic beta move and elimination of loops.\u00a0 We can improve the theorem in the following way.\n\nTheorem.\u00a0 By using the graphic beta move, elimination of loops, FAN-IN and CO-COMM, we can prove all the 16 oriented Reidemeister moves.\n\nProof. The missing moves R2c, R2d, R3a and R3h are all equivalent (by using the graphic beta move and elimination of loops, see this question\/answer at mathoverflow) with the following switching move, which we can prove with FAN-IN and CO-COMM:\n\nThe proof is done.\n\n# Local FAN-IN eliminates global FAN-OUT (I)\n\nFor being able to build\u00a0 a chemical concrete machine (see the posts \u00a0A chemical concrete machine for lambda\u00a0calculus\u00a0 and \u00a0Why build a chemical concrete machine, and\u00a0how?) we have to prove that\u00a0 universal computation can be attained with only local moves in graphic lambda calculus. Or, the lambda calculus sector of the graphic lambda calculus, which gives universality to graphic lambda calculus, uses the global FAN-OUT move (see theorem 3.1 (d)\u00a0 arXiv:1305.5786 [cs.LO]. Similarly, we see in proposition 3.2 (d), which describes the way combinatory logic appears in graphic lambda calculus, that again global FAN-OUT is used.\n\nI want to describe a way to eliminate the global FAN-OUT move from combinatory logic (as appears in graphic lambda calculus via\u00a0the algorithm described here ).\n\n________________\n\nThere are reasons to dislike global moves in relation to B-type neural networks (see the last post \u00a0 \u00a0Pair of synapses, one controlling the other (B-type NN part\u00a0III) ). Similar concerns can be found in the series of posts which has as the most recent one Dictionary from emergent algebra to graphic lambda calculus\u00a0(III) .\n\nIn this first post I shall introduce a local FAN-IN move and two distributivity moves and I shall prove that they eliminate the need for using global FAN-OUT in combinatory logic. In the next post I shall prove that we can eliminate two other moves (so that the total number of moves of graphic lambda calculus stays the same as before) and moreover we can recover from distributivity and local FAN-OUT moves the missing oriented Reidemeister moves from the $\\lambda$-TANGLE sector.\n\n________________\n\nDefinition. 
The local FAN-IN move is described in the next figure and it can be applied for any $\\varepsilon \\not = 1$.\n\n\u2022 as you see, in the move appears a dilation gate, what can this has to do with combinatory logic? As I explained previously, the properties of the gates are coming through the moves they are involved in, and not from their name. I could have introduced a new gate, with two inputs and one output, call this new gate \u201cfan-in gate\u201d and use it in the FAN-IN move. However, wait until the next post to see that there are other advantages, besides the economy of gates available, in using a dilation gate as a fan-in.\n\u2022 the FAN-IN move resembles to the packing arrows trick which is used extensively in the neural networks posts.\u00a0 This suggests to use as a\u00a0 fan-in gate\n\nthe green triangle gate and as fan-out gate the red triangle gate. This would eliminate the $\\Upsilon$ gate from the formalism, but is not clear to me how this replacement would interfere with the rest of the moves.\n\n\u2022 the FAN-IN move resembles with the dual of the graphic beta move, but is not the same (recall that until now I have not accepted the dual of the graphic beta move in the list of the moves of graphic lambda calculus, although there are strong reasons to do so):\n\nwhich is needed in the emergent algebra sector in order to make the dictionary to work (and related as well to the goal of non using global FAN-OUT in that sector).\u00a0 This latter move is in fact a distributivity move (see further), but we are free to choose different moves in different sectors of the graphic lambda calculus,\n\n\u2022 I know it is surprising that until now there was nothing about evaluation strategies in graphic lambda calculus, the reason being that because there are no variables then there is noting to evaluate. However, the situation is not so simple, at some point, for example in the chemical concrete machine or for neural networks, some criterion for choosing the order of moves will be needed. But it is an important point to notice that replacing global FAN-OUT (which could be seen as a remnant of having variables and evaluating them) by local FAN-IN has nothing to to with evaluation strategies.\n\n________________\n\nDefinition: The distributivity moves (related to the application and lambda abstraction gates) are the following:\n\n\u2022 the first distributivity move is straighforward, an application gate is just doubled and two fan-out moves establish the right connections. We see here why the \u201cmystery move\u201d can be seen as a kind of distributivity move,\n\u2022 the second distributivity move is where we need a fan-in gate (and where we use a dilation gate instead): because of th orientation of the arrows, after we double the lambda abstraction gates, we need to collect two arrows into one!\n\n________________\n\nCombinatory logic terms appear in graphic lambda calculus as\u00a0 trees made by application gates, with leaves one of the combinators S, K, I (seen as graphs in $GRAPH$.\u00a0 I want to show the following. 
[UPDATE: made some corrections.]\n\nTheorem.\u00a0\u00a0 We can replace the global FAN-OUT move with a sequence of local FAN-IN ,\u00a0 DIST, CO-COMM and local pruning moves, every time the global FAN-OUT move is applied to a term made by SKI combinators.\n\nProof.\u00a0 First, remark that a sequence of\u00a0 DIST moves for the application gate allows to reduce the problem of replacing global FAN-OUT moves for any combinator to the problem of replacing it for S, K, and I. This is because the DIST move for the application gate allows to do the FAN-OUT of trees of application gates:\n\nNow we have to prove that we can perform global FAN-OUT for I , K, S combinators.\u00a0 For the combinator I the proof is the following:\n\nFor the combinator K we shall also use a local pruning move:\n\nFinally, the proof for the combinator S is the following:\n\nNow we are going to use 3 DIST moves, followed by the switch of arrows explained in\u00a0\u00a0\u00a0Local FAN-IN eliminates GLOBAL FAN-OUT\u00a0(II) , which is applied in the dashed green ellipse from the next figure:\n\nAnd we are done.\n\nUPDATE: At close inspection, it turns out that we don\u2019t need to do switch (macro) moves. Instead, if we go back at the point we were before the last figure,\u00a0 we may use\u00a0 first CO-ASSOC and then perform the three FAN-IN moves .\n\n# new_analysis\n\nI remembered that I tried once before to keep a blog, in 2005. It still exists, but links are broken now. What a fool for not insisting with it! I reproduce further the one post, with the broken links retrieved from the Wayback Machine.\n\n### New analysis\n\nThe purpose of this blog is to report on the evolution of some very hot new ideas in mathematics and physics. The new emerging trend seems to be oriented toward a new analysis, in the same way as new geometries, other than euclidean, emerged 2 centuries ago.\n\nThis is a rather \u201ctechnical\u201d blog, intended for mathematicians and physicists. If you are looking for new ideas, hope this will be a nice place to find some of them. If you are the \u201cpublish rubbish or perish\u201d kind of person, this place is not of much use for you. But if you are the \u201cthink before you publish\u201d guy, and if you have a rather large horizont of mathematical interests, then you are welcome here.\n\nThe languages which I shall use are (mostly broken) English and sometimes Romanian. I reserve the right to express here also some of my personal opinions, remotely related to the main subject of the blog.\n\nHere it starts.\n\nFor the moment the most relevant page to look for is:\n\nhttp:\/\/irmi.epfl.ch\/cag\/buliga_sr.html\n\nbut use the mighty google to search for papers on the web, with the keywords: subriemannian , \u201csub-Riemannian\u201d , \u201cCarnot-Caratheodory\u201d, \u201ccurvature metric space\u201d\n\ncheck also the wonderful http:\/\/www.arxiv.org . My papers from there are available at\n\nhttp:\/\/www.arxiv.org\/find\/math\/1\/au:+Buliga_M\/0\/1\/0\/all\/0\/1\n\n# The weekend round table on UD\n\nThe comments from \u00a0 Is 3D Cube a predecessor of UD? \u00a0 were very useful as concerns the\u00a0 idea of making open source UD like programs. 
There are already two projects, created by Bauke and Jim respectively, see the page \u00a0\u00a0UD projects here for the relevant links.\n\nWe cannot possibly know if any of\u00a0 these two proposals is like Bruce Dell\u2019s UD, but it does not matter much because Bauke and Jim programs may well turn out to be as good as Dell\u2019s, or why not even better. (However,\u00a0 I still want to know how exactly the database of Saj\u2019s 3D Cube is done, because of the claims he made many years ago, which are almost identical with the ones made by Dell, see the link from the beginning of this post.)\n\nEnough chit-chat, the reason for this post \u00a0 is that I suppose new discussions will follow, for example about Bauke\u2019s still pending detailed explanations about his program. Also, any day now Jim might amaze us.\n\nSo, please start commenting here instead, if you feel the need to ask or answer questions about the two projects. Or, hey, you are welcome to announce yours!\n\n_________\n\nUPDATE:\u00a0\u00a0 I add here Bauke\u2019s explanations from this comment :\n\nMy algorithm consists of three steps:\n1. Preparing the quadtree. (This process is very similar to frustum culling, I\u2019m basically culling all the leaf nodes of the quadtree that are not in the frustum.) This step can be omitted at the cost of higher CPU usage.\n2. Computing the cubemap. (Or actually only the visible part, if you did step 1.) It uses the quadtree to do occlusion culling. The quadtree is basically an improved z-buffer, though since the octree is descended in front to back order, it is sufficient to only store \u2018rendered\u2019 or \u2018not rendered\u2019. It furthermore allows to do constant time depth checks for larger regions.\n\n3. Rendering the cubemap. This is just the common cubemap rendering method. I do nothing new here.\n\nMy description only explains step 2, as this is the core of my algorithm. Step 1 is an optimization and step 3 is so common that I expect the reader to already know how this step works.\n\nStep 2 does not use the display frustum at all. It does do the perspective. But does so by computing the nearest octree leaf (for the sake of simplicity I\u2019m ignoring the LOD\/mipmap\/anti-aliassing here) in the intersection of a cone and the octree model. This is shown in the following 2D images:\n\nI don\u2019t know what you mean with \u2018scaling of each pixel\u2019, but I believe that scaling of pixels only happens in step 3. In step 1 and 2 I completely ignore that this happens.\n\nI hope this answers your questions. If not, please tell which of the 3 steps you do not understand.\n\n_________\n\nUPDATE: You may like techniques used here\u00a0 [source]\n\n# Why build a chemical concrete machine, and how?\n\nWhat I have in mind can be split in two.\n\nThere is a first part concerning graphic lambda calculus seen as it\u2019s about real molecules interacting in space. 
For this part I would like to construct graphs which are like a Turing machine (or other computing devices) then, the important step is to eliminate everything which is due to our human 1d writing conventions (see details in the rant from the first part of this post) and thinking and simplify such \u201cmachines\u201d, in the same way, for example, like I saw it\u2019s possible\n\nA serious tool for doing this would be, for example, a program which allows to visualize and perform moves\u00a0 (automatically and from the keyboard) on graphs in $GRAPH$.\n\nThe second part, which goes in parallel, would be to try to find in the real world (here DNA origami looks like a possible tool) ways to construct chemically, physically, such machines, which are naturally adapted to the real world because they are geometrical (an object or a property is geometrical if it can be described, or it\u2019s invariant to an arbitrary change of parametrization, in our case, an arbitrary renaming).\u00a0 For this part I want to understand first how DNA origami works, to the level of being able to absorb information and understand some of the hurdles.\u00a0 This leads to applications, which are still vague in my head, but I was impressed by this video\n\nas well as by research of Jean-Pierre Banatre and Daniel Le Metayer.\n\nIn conclusion, I can\u2019t imagine what a syringe with 10^9 nano geometrical turing machines\u00a0 graphs representing lambda terms [see UPDATE]\u00a0 can\u2019t do.\n\n______________\n\nUPDATE:\u00a0 Corrections to this post are made in\u00a0\u00a0Chemical concrete machine not algorithmic self-assembly of DNA, nor Turing\u00a0machine\u00a0\u00a0 , where it is stressed that the \u201cchemical concrete machine\u201d, even if it has Turing universality property, it is not intended to be a Turing machine, (nor an automaton), as is wrongly suggested in this post.\n\n# Academic Spring and OA movement just a symptom, not cause of change\n\n\u2026 a reaction to profound changes which\u00a0 question the role of universities and scholars. It\u2019s a symptom of an adaptation attempt.\n\nThe OA movement, which advances so slowly because of the resistance of the scholars (voluntarily lulled by the propaganda machine of the association between legacy publishing industry and rulers of universities), is just an opening for asking more unsettling questions:\n\n\u2022 is the\u00a0 research article as we know it a viable vehicle of communication?\n\u2022 what is the difference between peer-reviewing articles and writing them?\n\u2022 should review be confined to scholars, or informed netizens (for example those detecting plagiarism) have their place in the review system?\n\u2022 is an article a definite piece of research, from the moment of publishing it (in whatever form, legacy or open), or it is forever an evolving project, due to contributions from a community of interested peoples, and if the latter is the case, then who is the author of it?\n\u2022 is it fair to publish an article inspired (in the moral sense, not the legal one) from information freely shared on the net, without acknowledging it, because is not in the form of an article?\n\u2022 is an article the goal of the research, as is the test the goal of studying?\n\nWhich is our place, as researchers? 
Are we like the scholars of medieval universities, becoming increasingly irrelevant, less and less creative, with our modern version of rhetoric and theological studies, called now problem solving and grant projects writing?\n\nIf you look at the timing of the end of the medieval universities and the flourishing of the early modern ones, there are some patterns.We see that (wiki source on early modern universities):\n\nAt the end of the Middle Ages, about 400 years after the first university was founded, there were twenty-nine universities spread throughout Europe. In the 15th century, twenty-eight new ones were created, with another eighteen added between 1500 and 1625.[33] This pace continued until by the end of the 18th century there were approximately 143 universities in Europe and Eastern Europe, with the highest concentrations in the German Empire (34), Italian countries (26), France (25), and Spain (23) \u2013 this was close to a 500% increase over the number of universities toward the end of the Middle Ages.\n\nCompare with the global spread of the printing press. Compare with the influence of the printing press on the Italian Renaissance (read about Demetrios Chalkokondyles).\n\nTraditionally held to have begun in 1543, when were first printed the books De humani corporis fabrica (On the Workings of the Human Body) by Andreas Vesalius, which gave a new confidence to the role of dissection, observation, and mechanistic view of anatomy,[59] and also De Revolutionibus, by Nicolaus Copernicus. [wiki quote]\n\nMeanwhile, medieval universities faced more and more problems, like [source]\n\nInternal strife within the universities themselves, such as student brawling and absentee professors, acted to destabilize these institutions as well. Universities were also reluctant to give up older curricula, and the continued reliance on the works of Aristotle defied contemporary advancements in science and the arts.[36] This era was also affected by the rise of the nation-state. As universities increasingly came under state control, or formed under the auspices of the state, the faculty governance model (begun by the University of Paris) became more and more prominent. Although the older student-controlled universities still existed, they slowly started to move toward this structural organization. Control of universities still tended to be independent, although university leadership was increasingly appointed by the state.[37]\n\nTo finish with a quote from the same wiki source:\n\nThe epistemological tensions between scientists and universities were also heightened by the economic realities of research during this time, as individual scientists, associations and universities were vying for limited resources. There was also competition from the formation of new colleges funded by private benefactors and designed to provide free education to the public, or established by local governments to provide a knowledge hungry populace with an alternative to traditional universities.[53] Even when universities supported new scientific endeavors, and the university provided foundational training and authority for the research and conclusions, they could not compete with the resources available through private benefactors.[54]\n\nSo, just a symptom.\n\n______________\n\nUPDATE:\u00a0 Robin Osborne\u2019s article is a perfect illustration\u00a0 of the confusion which reigns in academia. 
The opinions of the author, like the following one [boldfaced by me]\n\nWhen I propose to a research council or similar body that I will investigate a set of research questions in relation to a particular set of data, the research council decides whether those are good questions to apply to that dataset, and in the period during which I am funded by that research council, I investigate those questions, so that at the end of the research I can produce my answers.\n\nshow more than enough that today\u2019s university is medieval university reloaded.\u00a0 How can anybody decide a priori which questions will turn out to be good, a posteriori?\u00a0 Where is the independence of the researcher? How is it possible to think that a research council may have any other than a mediocre glimpse into the eventual value of a line of research, based on bureaucratic past evidence? And for a reason: because research is supposed to be an exploration, a creation of a new territory, it\u2019s not done yet at the moment of grant application. (Well, that\u2019s something everybody knows, but nevertheless we pretend it does not matter, isn\u2019t it sick?)\u00a0 Instead, conformity reigns.\u00a0 Mike Taylor spends a post on this article, exposing it\u2019s weakness\u00a0 as concerns OA.\n\n______________\n\nUPDATE 2: Christopher Lee takes the view somewhat opposite to the one from this post, here:\n\nIn cultured cities, they formed clubs for the same purpose; at club meetings, particularly juicy letters might be read out in their entirety. Everything was informal (bureaucracy to-science ratio around zero), individual (each person spoke only for themselves, and made up their own mind), and direct (when Pierre wrote to Johan, or Nikolai to Karl, no one yelled \u201cStop! It has not yet been blessed by a Journal!\u201d).\n\nTo use my nomenclature, it was a selected-papers network. And it worked brilliantly for hundreds of years, despite wars, plagues and severe network latency (ping times of 109 msec).\n\nEven work we consider \u201cmodern\u201d was conducted this way, almost to the twentieth century: for example, Darwin\u2019s work on evolution by natural selection was \u201cpublished\u201d in 1858, by his friends arranging a reading of it at a meeting of the Linnean Society. From this point of view, it\u2019s the current journal system that\u2019s a historical anomaly, and a very recent one at that.\n\nI am very curious about what Christopher Lee will tell us about solutions to\u00a0 escape\u00a0 wall-gardens and I wholeheartedly support the Selected Papers Net.\n\nBut in defense of my opinion that the main problem resides in the fact that actual academia is the medieval university reloaded, this\u00a0 quote (taken out out context?) is an example of the survivorship bias. I think that the historical anomaly is not the dissemination of knowledge by using the most efficient technology, but sticking to old ways when revolutionary new possibilities appear. (In the past it was the journal and at that time scholars cried \u201cStop! it is published before being blessed by our authority!\u201d, exactly alike scholars from today who cry against OA. 
Of course, we know almost nothing today about these medieval scholars which formed the majority at that time, proving again that history has a way to punish stupid choices.)\n\n# Pair of synapses, one controlling the other (B-type NN part III)\n\nIn\u00a0 Teaser (I) and Teaser (II) I discussed about the possibility to construct neural networks with controlled connections, resembling the B-type neural networks of Turing, in the following sense:\n\n\u2022 they are formed by \u201cneurons\u201d, which in Turing\u2019s description are boolean logic gates, and here they are graphs from graphic lambda calculus which correspond to lambda calculus terms. But there\u2019s no real restriction on that, graphic lambda calculus being more general and powerful than untyped lambda calculus, one may consider \u201cneurons\u201d which are outside the lambda calculus sector. The fact is that a \u201cneuron\u201d here should be interpreted as any graph in $GRAPH$ with at least one input but only one output. Later, when we shall speak about the order of performing moves in such neural networks, it will be seen that each \u201cneuron\u201d is a bag containing graphs which are modified (or reduced, as people say) independently one from another. Neurons are packing conventions from a formal viewpoint, but this viewpoint may not be the best one, a better viewpoint would be to see neurons as well as synapses, described further, as real constructs, like in the related project of the chemical concrete machine, In particular, neurons may do other things than computing in the lambda calculus sense (classical computation), they may do some more geometrical or physical or robot-like activities, not usually considered computations.\n\u2022 the connections between neurons are controlled in a B-type NN, by TRUE \u2013 FALSE control wires or inputs. Turing explains that controls themselves can be realized as constructions with boolean neurons (i.e. in his language B-type NNs are particular cases of his A-type NNs). In\u00a0Teaser (II)\u00a0\u00a0\u00a0 is explained how the connections between graphic lambda calculus neurons can be constructed by using a 2-zipper and a switch, such that a connection is controlled (by the same TRUE-FALSE mechanism, but this time TRUE and FALSE are literary the graphs of the corresponding terms in lambda calculus) by another connection.\n\nThere is an important difference, besides the power of the calculi used (boolean vs. graphic lambda), namely that there is no signal transmitted through the wires\u00a0 of the network. That is because graphic lambda calculus does not need variable names. I shall come back to this very important point (which is a big advantage for implementing such networks in reality) in a future post, but let me mention two simple facts:\n\n\u2022 an analogy: think about hardware and software, a difference between them is that software may be associated to the pattern of signals which circulates in the hardware. The hardware is the machine inhabited by the software, they are not in the same plane of physical reality, right? Well, in this implementation there is literary no difference between those, exactly because there are no signals (software) transmitted through the hardware.\n\u2022 this use of names, software, variable and signals is a pain for those trying to make sense how to implement in reality computing constructs. 
But, if you think, the need to name that stuff \u201cx\u201d and that other stuff \u201cy\u201d, to make alpha conversions in order to avoid name clashes, or even to describe computations in a linear string of symbols, all this are in no relation with physical reality, they are only cultural constraints of humans (and their artificial constructs). It all has to do with the fact that humans communicate science through writing, and of course, it has to do with the enumeration techniques of the cartesian method , which \u201cis designed as a technique for understanding performed by one mind in isolation, severely handicapped by the bad capacity of communication with other isolated minds\u201d. Molecules in a cell or in a solution do not align in a string according to the experimenter writing habits, from left to right, right to left, or vertical. They just don\u2019t wait for an external\u00a0 clock\u00a0 to rhythm their ballet, nor for the experimenter to draw it\u2019s familiar coordinate system. These are all limitations of the method, not of nature.\n\n_____________\n\nFurther, in this post, I shall use \u201csynapse\u201d instead \u201cconnection\u201d.\u00a0 Let\u2019s see, now that we know we can implement synapses by a TRUE-FALSE mechanism, let\u2019s see if we can do it simpler. Also, if it is possible to make the \u201csynapses control other synapses\u201d concept more symmetric.\n\nInstead of the little magenta triangles used in Teaser I, II posts (which are not the same as the magenta triangles used in the\u00a0chemical concrete machine post) , I shall use a more clear notation:\n\nThe design of a synapse proposed in\u00a0Teaser (II)\u00a0 involved two switches and a zipper, justified by the direct translation of TRUE-FALSE lambda calculus terms in the freedom sector of graphic lambda calculus.\u00a0 Think now that in fact we have there two synapses, one controlling the other. The zipper between them is only a form of binding them together in unambiguous way.\n\nA simplified form of this idea of a pair of synapses is the following:\n\nIn the upper part of the figure you see the pair of synapses, taking the a form like\u00a0 this: $><$. There are two synapses there, one is $>$, which is controlled by the other $<$.\u00a0 The connection between the blue 1 and the red 3\u2032 is controlled by the other synapse. In order to see how, let\u2019s perform a graphic beta move and obtain the graph from the lower part of the figure.\n\nThe red 3 can connect, in the sense explained by the switch mechanism from Teaser (II) with blue 1\u2032 or blue 2\u2032, all three of them belonging to the controlling synapse. Say red 3\u00a0 connects with blue 1\u2032 and look what happens:\n\nOK, so we obtain a connection between blue 1 and red 3\u2032. Suppose now that red 3 connects with blue 2\u2032, look:\n\nYes, blue 1 does not connect with red 3\u2032, they just exchanged a pair of termination gates. 
That\u2019s how the pair of synapses work.\n\nFinally, let me remark that the use of termination gates is a bit ugly, it breaks the symmetry,\u00a0 is not really needed.","date":"2019-07-19 06:45:30","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 11, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.5520297884941101, \"perplexity\": 1370.8532533598595}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-30\/segments\/1563195526064.11\/warc\/CC-MAIN-20190719053856-20190719075856-00255.warc.gz\"}"}
Contains support for storing and retrieving types from the filesystem.
{ "redpajama_set_name": "RedPajamaGithub" }
4,127
{"url":"https:\/\/newbedev.com\/symmetric-under-particle-exchange","text":"# Symmetric Under Particle Exchange?\n\nThe issue is easily resolved if you explicitly label your kets using particle numbers: $$\\vert\\psi_\\pm\\rangle=\\frac{1}{\\sqrt{2}}\\left(\\vert 0\\rangle_1\\vert 1\\rangle_2\\pm \\vert 1\\rangle_1\\vert 0\\rangle_2\\right)$$ so that the action of the permutation group is on the particle labels $1$ and $2$. Thus $$P_{12}\\vert\\psi_\\pm\\rangle = \\frac{1}{\\sqrt{2}}\\left(\\vert 0\\rangle_2\\vert 1\\rangle_1\\pm \\vert 1\\rangle_2\\vert 0\\rangle_1\\right) =\\frac{1}{\\sqrt{2}}\\left(\\vert 1\\rangle_1\\vert 0\\rangle_2\\pm \\vert 0\\rangle_1\\vert 1\\rangle_2\\right) =\\pm \\vert\\psi\\rangle$$\n\nIn this fashion writing $$\\frac{1}{\\sqrt{2}}\\left(\\vert 0\\rangle_1\\vert 0\\rangle_2-\\vert 1\\rangle_1\\vert 1\\rangle_2 \\right)$$ is clearly symmetric under interchange of $1$ and $2$.\n\n(Note there is another action which interchanges the states $0$ and $1$, but the symmetry character of the state is normally defined under permutation not of the states but of particle labels.)\n\nAlternatively one can understand $|\\Psi\\rangle=|ab\\rangle$ as a wavefunction statement to the effect of $\\Psi(x_1, x_2) = \\psi_a(x_1)\\psi_b(x_2).$ The particle-permutation operator can be written easily in the wavefunction picture as $P[\\Psi](x_1, x_2) = \\Psi(x_2, x_1)$ and therefore $\\hat P |ab\\rangle = |ba\\rangle.$\n\nIt is linear, so $\\hat P\\big( |00\\rangle - |11\\rangle\\big) = \\hat P |00\\rangle - \\hat P |11 \\rangle = |00\\rangle - |11\\rangle.$\n\nI think it should be symmetric. When we write $\\mid 10\\rangle$, we mean to say that this ket is a tensor product of two individual kets $|1\\rangle\\in\\mathcal{H}_1$ and $|0\\rangle\\in\\mathcal{H}_2$, so that $|10\\rangle=|1\\rangle\\otimes|0\\rangle$, which is an element of the composite Hilbert space $\\mathcal{H}=\\mathcal{H}_1\\otimes\\mathcal{H}_2$. When you're exchanging particles, you're essentially taking the first ket to the second Hilbert space and vice versa. So under an exchange of particles, we get $|10\\rangle\\to|01\\rangle$. 
Thus, $|00\\rangle\\to|00\\rangle$ and $|11\\rangle\\to|11\\rangle$, which means the state\n\n$\\mid\\psi\\rangle=\\frac{1}{\\sqrt{2}}(|00\\rangle-|11\\rangle)\\to\\frac{1}{\\sqrt{2}}(|00\\rangle-|11\\rangle)$\n\nis indeed symmetric under exchange of particles.","date":"2023-03-22 11:46:19","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9306027889251709, \"perplexity\": 204.23968428367726}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2023-14\/segments\/1679296943809.76\/warc\/CC-MAIN-20230322114226-20230322144226-00549.warc.gz\"}"}
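A quick numerical check of the claim (a sketch; basis ordering $|00\rangle, |01\rangle, |10\rangle, |11\rangle$ assumed):

```python
import numpy as np

# SWAP exchanges the two particle labels: |ab> -> |ba>,
# in the basis order |00>, |01>, |10>, |11>.
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])

psi_sym = np.array([1, 0, 0, -1]) / np.sqrt(2)    # (|00> - |11>)/sqrt(2)
psi_plus = np.array([0, 1, 1, 0]) / np.sqrt(2)    # (|01> + |10>)/sqrt(2)
psi_minus = np.array([0, 1, -1, 0]) / np.sqrt(2)  # (|01> - |10>)/sqrt(2)

print(np.allclose(SWAP @ psi_sym, psi_sym))       # True: symmetric
print(np.allclose(SWAP @ psi_plus, psi_plus))     # True: symmetric
print(np.allclose(SWAP @ psi_minus, -psi_minus))  # True: antisymmetric
```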
\section{Introduction} \label{sec1} The amount of energy that is stored in a system, and the portion of it that can be extracted, is a basic question of thermodynamics, and its utility can hardly be overestimated. A usual battery stores electrochemical energy and converts it to electrical energy when required. Such batteries are widely used in various devices. Due to the ever-increasing need for portability and flexibility of devices, batteries of smaller and smaller sizes are required. Hence, creating batteries of molecular size has become a topic of great interest. It may be envisaged that, at such small sizes, quantum mechanical effects will become important in these devices. In recent times, a lot of research has been done in this field, and such batteries have been called \enquote{quantum batteries} \cite{Entanglement boost for extractable work from ensembles of quantum batteries, Entanglement Generation is Not Necessary for Optimal Work Extraction, Quantacell: Powerful charging of quantum batteries, Correlation approach to work extraction from finite quantum systems, Enhancing the charging power of quantum batteries, Precision and Work Fluctuations in Gaussian Battery Charging, High-Power Collective Charging of a Solid-State Quantum Battery, Charger-mediated energy transfer in exactly-solvable models for quantum batteries, Spin-chain model of a many-body quantum battery, Quantum Batteries, Bounds on Capacity and Power of Quantum Batteries, Enhanced energy transfer in a Dicke quantum battery, Powerful harmonic charging in a quantum battery, Charger-mediated energy transfer for quantum batteries: an open system approach, Extractable work the role of correlations and asymptotic freedom in quantum batteries, Quantum versus classical many-body batteries, Stable adiabatic quantum batteries, Dissipative charging of a quantum battery, Many-body localized quantum batteries, A quantum open system model of molecular battery charged by excitons, Enhancement in performance of quantum battery by ordered and disordered interactions, Charging of quantum batteries with general harmonic power, Random Quantum Batteries, Fluctuations in stored work bound the charging power of quantum batteries, Stabilizing Open Quantum Batteries by Sequential Measurements, Non-Markovian effects on charging of quantum batteries}. The concept of a quantum battery was, as far as we know, introduced by Alicki and Fannes in 2013 \cite{Entanglement boost for extractable work from ensembles of quantum batteries}. Since then, different models have been considered as substrates for the device, such as short- and long-range XXZ quantum spin chains \cite{Spin-chain model of a many-body quantum battery}, spins in cavities modeled by the Dicke interaction \cite{Enhanced energy transfer in a Dicke quantum battery}, the ordered and disordered XYZ model \cite{Enhancement in performance of quantum battery by ordered and disordered interactions}, etc.
Different methods have been put forward to enhance the charging power of quantum batteries \cite{Quantacell: Powerful charging of quantum batteries, Enhancing the charging power of quantum batteries, Precision and Work Fluctuations in Gaussian Battery Charging, High-Power Collective Charging of a Solid-State Quantum Battery, Charger-mediated energy transfer in exactly-solvable models for quantum batteries, Powerful harmonic charging in a quantum battery, Charger-mediated energy transfer for quantum batteries: an open system approach, Dissipative charging of a quantum battery, Charging of quantum batteries with general harmonic power}. The relation between work (i.e., energy) extraction or charging power and the entanglement among the batteries, when working with more than one battery, has been an area of vigorous study \cite{Entanglement boost for extractable work from ensembles of quantum batteries, Correlation approach to work extraction from finite quantum systems, Enhanced energy transfer in a Dicke quantum battery, Quantacell: Powerful charging of quantum batteries, Enhancing the charging power of quantum batteries, High-Power Collective Charging of a Solid-State Quantum Battery, Entanglement Generation is Not Necessary for Optimal Work Extraction}. Formally, a quantum battery is a quantum mechanical system described by a state, say $\rho$, and a Hamiltonian, say $H$. One can charge the system by applying a time-dependent field; the system is henceforth assumed to store the energy; one can then extract work from it by using another time-dependent field. Let the time-dependent field used for extracting energy be applied from time $0$ to, say, $\tau$. Then the amount of extracted work is given by \begin{equation} W=\text{Tr} \left(\rho H\right)-\text{Tr} \left(U(\tau, \xi)\rho U(\tau, \xi)^\dagger H\right). \nonumber \end{equation} Here, $\xi$ represents the collection of all system parameters, contained, e.g., in the system's potential energy. The dependence of $U$ on $\tau$ and $\xi$ will henceforth be suppressed in the notation. The extracted work is maximized by a unitary operator for which the second term attains its minimum value. Hence, the maximum value of $W$ is given by \begin{equation} W_{max}=\text{Tr} \left(\rho H\right)-\min_U \text{Tr} \left(U\rho U^\dagger H\right). \nonumber \end{equation} A state from which work extraction is not possible is called a \enquote{passive state} \cite{passive1, passive2}. A passive state, say $\sigma$, of a system with a fixed Hamiltonian commutes with the Hamiltonian, and if the energy eigenvalues, $\epsilon_i$, satisfy $\epsilon_i<\epsilon_j$, then the corresponding eigenvalues, $p_i$, of the passive state satisfy $p_j\leq p_i$. The maximum amount of extractable work, in terms of the passive state, is \begin{equation} W_{max}=\text{Tr} \left(\rho H\right)-\text{Tr} \left(\sigma_\rho H\right), \label{eq1} \end{equation} where $\sigma_\rho$ is the passive state of the system with Hamiltonian $H$, having the same eigenvalues as those of the initial state $\rho$. Such a passive state is unique.
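To make Eq. \eqref{eq1} concrete, here is a minimal numerical sketch (ours, not the authors'; all names are illustrative) that builds the passive state $\sigma_\rho$ from the sorted spectra and evaluates $W_{max}$ for a state and Hamiltonian given as matrices:

```python
import numpy as np

def max_extractable_work(rho, H):
    """W_max = Tr(rho H) - Tr(sigma_rho H), where sigma_rho is the passive
    state: eigenvalues of rho in decreasing order paired with eigenvalues
    of H in increasing order."""
    p = np.sort(np.linalg.eigvalsh(rho))[::-1]  # populations, decreasing
    e, V = np.linalg.eigh(H)                    # energies, increasing
    sigma = V @ np.diag(p) @ V.conj().T         # the passive state
    return np.real(np.trace(rho @ H)) - p @ e, sigma
```

For a single qubit with $H=\mathrm{diag}(-1,1)$ prepared in its excited state, the sketch returns $W_{max}=2$, as expected.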
In this paper, we define a locally passive state as one from which no energy can be extracted by using local unitary operations. We provide a characterization of the same, and prove its uniqueness. We subsequently restrict attention to two-qubit batteries, and first uncover the relation between the globally extractable work from locally passive battery states and the entanglement content of the latter. We then consider the issue of global extraction of work from generic states, not necessarily locally passive. We identify that the difference between global and local extraction of work from a pure battery state with a given amount of entanglement is exactly equal to the optimal global work that is extractable from the corresponding locally passive battery state having the same entanglement. Furthermore, we find that the pure battery state for which the globally extractable work attains a maximum, among the set of all pure states with a fixed value of entanglement, also provides the maximum locally extractable work. We uncover the structure and properties of the locally passive state in Sec. \ref{sec2}. We present the maximum amount of global work extraction from these locally passive states as a function of their entanglement in Sec. \ref{sec3}. In Sec. \ref{sec4}, we discuss the maximum global work extraction from arbitrary states with fixed entanglement. We compare the maximum work extraction by global and local unitary operations in Sec. \ref{sec5}. We summarize our results in Sec. \ref{sec6}. \section{Construction of locally passive quantum battery states} \label{sec2} In this section, we discuss work extraction using \emph{local} unitary operations. If we consider a Hamiltonian, $H_{AB}$, and a system state, $\rho_{AB}$, on a Hilbert space $\mathcal{H}_A\otimes \mathcal{H}_B$, where $A$ and $B$ are two subsystems, then the initial energy of the system is Tr$\left(\rho_{AB} H_{AB}\right)$, and after a local unitary operation, $U_A\otimes U_B$, the energy is Tr$\left(U_A\otimes U_B \rho_{AB} U_A^\dagger\otimes U_B^\dagger H_{AB}\right)$. Hence, the maximum work extractable using local unitary operations is given by \begin{equation} W_{max}^l=\text{Tr} \left(\rho_{AB} H_{AB}\right)-\min_{U_A\otimes U_B}\text{Tr} \left(U_A\otimes U_B \rho_{AB} U_A^\dagger\otimes U_B^\dagger H_{AB}\right). \nonumber \end{equation} Now, a state $\sigma^l_{AB}$ will be referred to as \enquote{locally passive} if no work can be extracted from it locally, that is, if Tr$\left(U_A\otimes U_B \sigma_{AB}^l U_A^\dagger\otimes U_B^\dagger H_{AB}\right)\geq \text{Tr} \left(\sigma_{AB}^l H_{AB}\right)$ for all $U_A\otimes U_B$. Therefore, the maximum amount of locally extractable work is given by \begin{equation} W_{max}^l=\text{Tr} \left(\rho_{AB} H_{AB}\right)-\text{Tr} \left(\sigma^l_{\rho_{AB}} H_{AB}\right). \label{eqB} \end{equation} Here $\sigma^l_{\rho_{AB}}$ is the locally passive state of the system with Hamiltonian $H_{AB}$, having the same eigenvalues as $\rho_{AB}$. To uncover some properties of this state, we state the following two theorems. The symbol $I$, with a suitable suffix, denotes the identity operator on the corresponding Hilbert space.
\vspace{0.5cm} \\ \noindent \textbf{Theorem 1.} \textit{For self-adjoint operators $X_{AB}=X_A\otimes I_B+I_A\otimes X_B$ and $Y_{AB}$ on a finite-dimensional Hilbert space $\mathcal{H}_{AB}=\mathcal{H}_A\otimes\mathcal{H}_B$, the inequality} Tr\textit{$\left(U_A\otimes U_B Y_{AB} U_A^\dagger\otimes U_B^\dagger X_{AB}\right)\geq$} Tr\textit{$(Y_{AB} X_{AB})$ holds for all unitary operators $U_A$ and $U_B$ if and only if $X_{A/B}$ commutes with $Y_{A/B}$, where} $\text{Tr}_{A/B}\left(Y_{AB}\right)=Y_{B/A}$\textit{, and the eigenvalues of $X_{A/B}$ (say, $\epsilon_i^{A/B}$) and of $Y_{A/B}$ (say, $\alpha_i^{A/B}$) satisfy $\left(\alpha^{A/B}_j-\alpha^{A/B}_k\right)\left(\epsilon^{A/B}_j-\epsilon^{A/B}_k\right)\leq 0$ for all $j$ and $k$.} \\ \textbf{Remark:} After the statements and proofs of the two theorems, we will identify $X_{AB}$ as a local Hamiltonian $H_{AB}$, and $Y_{AB}$ as a locally passive state $\sigma^l_{AB}$. \\ \textbf{Proof.} \\ Let $X_{A}$ and $X_B$ respectively commute with $Y_{A}$ and $Y_B$, and have common eigenbases $|e_i\rangle$ and $|f_i\rangle$, respectively. Then, in these eigenbases, $Y_{AB}$ and $X_{AB}$ are given by \begin{eqnarray} Y_{AB}&=&\sum a^{kl}_{ij}|e_i\rangle\langle e_j|\otimes|f_k\rangle\langle f_l|, \label{eq2} \\ X_{AB}&=&\sum\left(\epsilon_m^A|e_m\rangle \langle e_m|\otimes |f_n\rangle \langle f_n|+\epsilon_n^B|e_m\rangle \langle e_m|\otimes |f_n\rangle \langle f_n| \right). \label{eq3} \end{eqnarray} In this paper, we use the convention that if the running variable is not mentioned below the summation symbol, then all running variables to its right are summed over. Here, the sums are over the whole eigenbases, and $\epsilon^A_m$ and $\epsilon^B_n$ are the eigenvalues of $X_A$ and $X_B$, respectively. Hence, using Eqs. \eqref{eq2} and \eqref{eq3}, we get \begin{eqnarray} &\text{Tr}&\left(U_A\otimes U_B Y_{AB} U_A^\dagger\otimes U_B^\dagger X_{AB}\right) \nonumber \\ &=&\sum(U_A)_{mi}(U_B)_{nk} a^{kl}_{ij}(U_A^\dagger)_{jm}(U_B^\dagger)_{ln}\epsilon_m^A \nonumber \\ &&+\sum(U_A)_{mi}(U_B)_{nk} a^{kl}_{ij}(U_A^\dagger)_{jm}(U_B^\dagger)_{ln}\epsilon_n^B, \nonumber \end{eqnarray} where the trace is performed in the product bi-orthonormal basis $\{|e_i\rangle \otimes |f_j\rangle\}$. Now, for any unitary operator $U$, we know that $\sum_\beta(U)_{\alpha\beta}(U^\dagger)_{\beta\gamma}=\delta_{\alpha\gamma}$. From Eq. \eqref{eq2}, we can write $Y_A=\sum a^{kk}_{ij}|e_i\rangle\langle e_j|$ and $Y_B=\sum a^{kl}_{ii}|f_k\rangle\langle f_l|$. Since $Y_A$ and $Y_B$ are represented in terms of their own eigenbases, we have $\sum_ka^{kk}_{ij}=\delta_{ij}\alpha_i^A$ and $\sum_ia^{kl}_{ii}=\delta_{kl}\alpha_k^B$. Using these relations in the above equation, we get \begin{eqnarray} &\text{Tr}&\left(U_A\otimes U_B Y_{AB} U_A^\dagger\otimes U_B^\dagger X_{AB}\right) \nonumber \\ &=&\sum \left(|\left(U_A\right)_{mi}|^2\epsilon_m^A\alpha_i^A+|\left(U_B\right)_{nk}|^2\epsilon_n^B\alpha_k^B\right). \nonumber \end{eqnarray} Now, $\left(|\left(U_A\right)_{mi}|^2\right)$ and $\left(|\left(U_B\right)_{nk}|^2\right)$ are doubly stochastic matrices, i.e., $\sum_p|\left(U_{A/B}\right)_{pq}|^2=\sum_q|\left(U_{A/B}\right)_{pq}|^2=1$ \cite{Birkhoff}. Therefore, using the Birkhoff theorem, we can write $\left(|\left(U_{A/B}\right)_{pq}|^2\right) = \sum_r \theta^{A/B}_r P^{A/B}_r$, where $\sum_r\theta_r=1$, $\theta_r\geq 0$ for all $r$, and the $P_r$ are permutation matrices of the same dimension.
Hence, we get \begin{eqnarray} &\text{Tr}&\left(U_A\otimes U_B Y_{AB} U_A^\dagger\otimes U_B^\dagger X_{AB}\right) \nonumber \\ &=& \sum_r \theta_r^A\sum_m \epsilon_m^A\alpha^A_{r(m)}+\sum_r \theta_r^B\sum_m \epsilon_m^B\alpha^B_{r(m)}. \nonumber \end{eqnarray} Here $r$ runs over the different permutations. We can see that if $\epsilon^{w}_m>\epsilon^{w}_i$ implies $\alpha^w_m\leq\alpha^w_i$ for both $w=A,B$, then the minimum value of the above expression is $\sum \epsilon_m^A\alpha^A_m+\sum \epsilon_n^B\alpha^B_n$, i.e., Tr$(X_{AB}Y_{AB})$. Hence, we get Tr$\left(U_A\otimes U_B Y_{AB} U_A^\dagger\otimes U_B^\dagger X_{AB}\right)\geq \text{Tr}\left(Y_{AB}X_{AB}\right)$. Thus one part of the theorem is proved. Let us next assume that Tr$\left(U_A\otimes U_B Y_{AB} U_A^\dagger\otimes U_B^\dagger X_{AB}\right)\geq \text{Tr}\left(Y_{AB}X_{AB}\right)$ holds for every $U_A\otimes U_B$. Now if we expand $U_A$ and $U_B$ as \begin{eqnarray} U_A=1+2M_A+2M_A^2+2M_A^3+\cdots, \nonumber \\ U_B=1+2M_B+2M_B^2+2M_B^3+\cdots, \nonumber \end{eqnarray} where $M^\dagger_{A/B}=-M_{A/B}$ and $||M_{A/B}||<1$, then using these and keeping terms up to first order in $M_{A/B}$, we get \begin{eqnarray} &\text{Tr}&\left(U_A\otimes U_B Y_{AB} U_A^\dagger\otimes U_B^\dagger X_{AB}\right) \nonumber \\ &=&\text{Tr}\left(X_{AB}Y_{AB}\right)+2\text{Tr}\left(\left[Y_A,X_A\right]M_A\right)+2\text{Tr}\left(\left[Y_B,X_B\right]M_B\right). \nonumber \end{eqnarray} From the above equation, we can see that the minimum value of the left-hand side can be attained at $U_A=I_A$ and $U_B=I_B$, where $I_{A/B}$ is the identity operator on the Hilbert space $\mathcal{H}_{A/B}$, only if $[Y_{A/B},X_{A/B}]=0$, since otherwise an anti-Hermitian $M_{A/B}$ can always be chosen to make the first-order correction negative. Now, let the unitaries $U_A$ and $U_B$, in the subspace of eigenvectors corresponding to any two eigenvalues $\epsilon^A_1$, $\epsilon^A_2$ of $X_A$ and $\epsilon^B_1$, $\epsilon^B_2$ of $X_B$, be given by \begin{eqnarray} U_A^s= \left[ \begin{matrix} \cos\phi_A&\sin\phi_A \\ -\sin\phi_A&\cos\phi_A \end{matrix} \right], \ \ U_B^s= \left[ \begin{matrix} \cos\phi_B&\sin\phi_B \\ -\sin\phi_B&\cos\phi_B \end{matrix} \right]. \nonumber \end{eqnarray} Suppose that in this subspace, $Y_{AB}$ is given by \begin{equation} Y_{AB}^s= \left[ \begin{matrix} a_1&a_2&a_3&a_4 \\ b_1&b_2&b_3&-a_3 \\ c_1&c_2&c_3&-a_2 \\ d_1&-c_1&-b_1&d_4 \end{matrix} \right]. \nonumber \end{equation} Hence, \begin{eqnarray} Y_A^s= \left[ \begin{matrix} a_1+b_2&0\\ 0&c_3+d_4 \end{matrix} \right], \ \ Y_B^s= \left[ \begin{matrix} a_1+c_3&0\\ 0&b_2+d_4 \end{matrix} \right]. \nonumber \end{eqnarray} These are diagonal matrices, with eigenvalues $a_1+b_2$ (say $\alpha_1^A$), $c_3+d_4$ (say $\alpha_2^A$) and $a_1+c_3$ (say $\alpha_1^B$), $b_2+d_4$ (say $\alpha_2^B$), as they should be, because we have written all the matrices in the eigenbases of $X_A$ and $X_B$. Using these matrices, we get \begin{eqnarray} &&\text{Tr}\left({U^s_A}\otimes {U^s_B} Y_{AB}^s {U^s_A}^\dagger\otimes {U^s_B}^\dagger X^s_{AB}\right) \nonumber \\ &=&\frac{1}{2}\left[\left(\alpha_1^A+ \alpha_2^A\right) \left(\epsilon_1^A + \epsilon_2^A + \epsilon_1^B + \epsilon_2^B\right)\right] \nonumber \\ &+&\frac{1}{2}\left[\left(\alpha_1^A-\alpha_2^A\right) \left(\epsilon_1^A - \epsilon_2^A\right) \cos(2\phi_A)\right] \nonumber \\ &+& \frac{1}{2}\left[\left(\alpha_1^B-\alpha_2^B\right) \left(\epsilon_1^B - \epsilon_2^B\right) \cos(2 \phi_B)\right], \nonumber \end{eqnarray} where $X^s_{AB}$ is $X_{AB}$ restricted to the corresponding subspace.
This is minimal at $\phi_A=0$ and $\phi_B=0$ only if $\left(\alpha_1^A -\alpha_2^A\right)\left(\epsilon_1^A - \epsilon_2^A\right)\leq0$ and $\left(\alpha_1^B -\alpha_2^B\right)\left(\epsilon_1^B - \epsilon_2^B\right)\leq 0$. \hfill \(\square\) \vspace{0.5cm} \\ \noindent\textbf{Theorem 2.} \textit{Every system has a unique locally passive state.} \\ \textbf{Proof.} \\ Let us begin by assuming the contrary, i.e., that a system $\rho_{AB}$ has two locally passive states, $\sigma^l_{\rho_{AB}}$ and $\sigma'^l_{\rho_{AB}}$, which are related to $\rho_{AB}$ through local unitary operations. Hence, $\sigma^l_{\rho_{AB}}$ and $\sigma'^l_{\rho_{AB}}$ are also related through a local unitary operation, say $U_A \otimes U_B$, where \begin{eqnarray} U_w^s=\left[ \begin{matrix} \cos\phi_w&\sin\phi_w \\ -\sin\phi_w&\cos\phi_w \end{matrix} \right]. \nonumber \end{eqnarray} Here, $w$ denotes either of the two subsystems, $A$ and $B$, and the superscript $s$ indicates that we are working in a two-dimensional subspace (see the proof of Theorem 1). Now, let \begin{eqnarray} \text{Tr}_B\left(\sigma^{ls}_{\rho_{AB}}\right)= \left[ \begin{matrix} p&0\\ 0&q \end{matrix} \right]. \nonumber \end{eqnarray} After applying $U_A$ to $ \text{Tr}_B\left(\sigma^{l}_{\rho_{AB}}\right)$, we get $ \text{Tr}_B\left(\sigma'^{l}_{\rho_{AB}}\right)$. Since $\text{Tr}_B\left(\sigma'^{l}_{\rho_{AB}}\right)$ is another passive state of the subsystem $A$, it must also be a diagonal matrix, in the same basis in which $\text{Tr}_B\left(\sigma^{l}_{\rho_{AB}}\right)$ is expressed above. Hence the off-diagonal term satisfies $(p-q)\sin\phi_A \cos\phi_A=0$, so that, for nondegenerate marginals ($p\neq q$), $\phi_A=0$, $\frac{\pi}{2}$, $\pi$, or $\frac{3\pi}{2}$. Similarly, $\phi_B=0$, $\frac{\pi}{2}$, $\pi$, or $\frac{3\pi}{2}$. For $\phi_{A/B}=0$, $\pi$, we get the same state back, and for the other two values of $\phi_{A/B}$ we get a state with the same eigenvalues but in the opposite order, which would not be a locally passive state. Hence a system always has a unique locally passive state. \hfill \(\square\) From the two theorems above, we conclude that if a system, in the state $\sigma_{AB}^l$ and governed by the Hamiltonian $H_{AB}=H_A\otimes I_B+I_A\otimes H_B$, satisfies the conditions \begin{enumerate} \item the subsystems of $\sigma_{AB}^l$, i.e., $\sigma^l_A$ and $\sigma^l_B$, commute with $H_A$ and $H_B$, and \label{con1} \item if the eigenvalues of $H_A$ and $H_B$ are set in increasing order, then, in the corresponding basis, the eigenvalues of $\sigma_{A}^l$ and $\sigma_B^l$ are in non-increasing order, \label{con2} \end{enumerate} then one cannot extract any work from this state by local unitary operations. Hence, these are the locally passive states. For any system $\rho_{AB}$ and Hamiltonian $H_{AB}=H_A\otimes I_B+I_A\otimes H_B$, we can obtain the corresponding locally passive state $\sigma_{\rho_{AB}}^l$ as \begin{equation} \sigma_{\rho_{AB}}^l=U_A\otimes U_B \rho_{AB} U_A^\dagger \otimes U_B^\dagger, \nonumber \end{equation} where $U_A$ and $U_B$ are the unitaries which diagonalize $\rho_A$ and $\rho_B$ in such a way that their eigenvalues are non-increasing with respect to the increasing eigenvalues of $H_A$ and $H_B$. This therefore forms a complete characterization of the locally passive states for local Hamiltonians, in arbitrary dimensions.
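The characterization above translates directly into a small numerical routine. The following sketch (an illustration under our naming conventions, not code from the paper) builds $\sigma^l_{\rho_{AB}}$ for given matrices $\rho_{AB}$, $H_A$ and $H_B$:

```python
import numpy as np

def locally_passive_state(rho, HA, HB):
    """Rotate rho by the local unitaries that diagonalize its marginals,
    pairing non-increasing local populations with increasing local
    energies, per the characterization at the end of Sec. II."""
    dA, dB = HA.shape[0], HB.shape[0]
    r = rho.reshape(dA, dB, dA, dB)
    rhoA = np.trace(r, axis1=1, axis2=3)  # Tr_B(rho)
    rhoB = np.trace(r, axis1=0, axis2=2)  # Tr_A(rho)

    def aligning_unitary(marginal, Hloc):
        e, W = np.linalg.eigh(Hloc)       # local energies, increasing
        p, V = np.linalg.eigh(marginal)   # local populations, increasing
        V = V[:, ::-1]                    # reorder: populations decreasing
        return W @ V.conj().T             # marginal eigenbasis -> H eigenbasis

    U = np.kron(aligning_unitary(rhoA, HA), aligning_unitary(rhoB, HB))
    return U @ rho @ U.conj().T
```

By construction, the marginals of the returned state are diagonal in the local energy eigenbases with anti-ordered populations, i.e., conditions \ref{con1} and \ref{con2} hold.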
\section{Global work extraction from locally passive quantum battery states with fixed entanglement} \label{sec3} In this section, we consider the problem of global work extraction for two-qubit states which are locally passive for local Hamiltonians, where the state has a pre-decided amount of shared entanglement. We begin our analysis with pure states, and then generalize to mixed states. We will henceforth do all calculations by considering the Hamiltonian, \begin{equation} H_{AB}=\epsilon^A\sigma_z\otimes I_2+\epsilon^B I_2\otimes\sigma_z, \label{eq4} \end{equation} of a two-qubit system, where $\sigma_z$ is the Pauli $z$ matrix and $I_2$ is the identity operator on the qubit Hilbert space. Note that this amounts to choosing a local basis at the outset for a two-qubit Hamiltonian of the form $H_A\otimes I_B +I_A\otimes H_B$, and hence does not lead to any loss of generality for our purposes. We assume that $\epsilon^A > \epsilon^B\geq 0$. Whenever we do a numerical calculation, we will, for specificity, take $\epsilon^A=2\epsilon$ and $\epsilon^B=\epsilon$, where $\epsilon$ has the units of energy. Now, in this basis, the energy eigenvalues are in decreasing order if chosen as per the sequence $\epsilon^A+\epsilon^B$, $\epsilon^A-\epsilon^B$, $-\epsilon^A+\epsilon^B$, $-\epsilon^A-\epsilon^B$. A general pure locally passive state is given by \begin{eqnarray} \sigma_{AB}^l= \left[ \begin{matrix} |c_0|^2&c_0c_1^*&c_0c_2^*&c_0c_3^* \\ c_1c_0^*&|c_1|^2&c_1c_2^*&c_1c_3^* \\ c_2c_0^*&c_2c_1^*&|c_2|^2&c_2c_3^* \\ c_3c_0^*&c_3c_1^*&c_3c_2^*&|c_3|^2 \end{matrix} \right], \label{eq5} \end{eqnarray} where the $c_i$'s satisfy the following conditions: \begin{enumerate} \item[(i)] $|c_i|\leq 1$ for $i=0,1,2,3$, \item[(ii)] $\sum_i |c_i|^2=1$, \item[(iii)] $c_0c_2^*=-c_1c_3^*$, \item[(iv)] $c_0c_1^*=-c_2c_3^*$, \item[(v)] $|c_0|^2+|c_1|^2\leq |c_2|^2+|c_3|^2$, \item[(vi)] $|c_0|^2+|c_2|^2\leq |c_1|^2+|c_3|^2$. \end{enumerate} Here, the first two conditions ensure that the state is a valid pure quantum state, while the next four ensure its local passivity. In this paper, we measure entanglement using the logarithmic negativity \cite{measure}. The amount of entanglement in $\sigma^l_{AB}$ is given by \begin{equation} E=\log_2\left(2|c_1c_2-c_0c_3|+1\right). \label{eq6} \end{equation} Since $\sigma_{AB}^l$ is a pure state, its eigenvalues are 1, 0, 0, 0. Hence, using Eq. \eqref{eq1}, the maximum work extractable from $\sigma^l_{AB}$ by using global unitary operations is given by \begin{eqnarray} W_{max}^p=\left(\epsilon^A+\epsilon^B\right)\left(|c_0|^2-|c_3|^2+1\right)+\left(\epsilon^A-\epsilon^B\right)\left(|c_1|^2-|c_2|^2\right). \label{eq7} \end{eqnarray} The superscript \enquote{$p$} indicates that the state from which the work extraction is being considered is locally passive. If we substitute the values of $c_0$ and $c_1$ from condition (iii) into condition (iv), we get the following results: \begin{enumerate} \item[(a)] If $c_3=0$ but $c_2\neq 0$, then $c_0=0$. Hence, $W^p_{max}=\left(\epsilon^A-\epsilon^B\right)\left(|c_1|^2-|c_2|^2\right)+\epsilon^A+\epsilon^B$. \label{con8} \item[(b)] If $c_2=0$ but $c_3\neq 0$, then $c_1=0$. \\ Hence, $W^p_{max}=\left(\epsilon^A+\epsilon^B\right)\left(|c_0|^2-|c_3|^2+1\right)$. \label{con9} \item[(c)] If $c_3\neq 0$ and $c_2\neq 0$, then $|c_1|^2=|c_2|^2$ and $|c_3|^2=|c_0|^2$.\\ Hence, $W^p_{max}=\epsilon^A+\epsilon^B$.
We can see that $W^p_{max}$ attains higher values in case (b) than in the other two. Hence, among all pure locally passive states, more work can be extracted from those states for which $c_1=c_2=0$. Putting $c_1=c_2=0$ in Eq. \eqref{eq6}, and expressing $c_0$ and $c_3$ in terms of $E$ using condition (ii), we get \begin{eqnarray} |c_0|^2&=&\frac{1}{2}\left(1\pm\sqrt{1-(2^E-1)^2}\right) \nonumber \\ \text{and } |c_3|^2&=&\frac{1}{2}\left(1\mp\sqrt{1-(2^E-1)^2}\right). \label{eq8} \end{eqnarray} But according to conditions (v) and (vi), $|c_0|\leq |c_3|$, and hence the acceptable solution is $|c_0|^2=\frac{1}{2}\left(1-\sqrt{1-(2^E-1)^2}\right)$. Thus, the maximum amount of extractable work from pure locally passive states with a fixed entanglement $E$ is \begin{equation} G_E^p=(\epsilon^A+\epsilon^B)\left(1-\sqrt{2^{E+1}-2^{2E}}\right). \label{A} \end{equation} We denote the state for which this maximum value is achievable by $\sigma^{l\text{ } max}_E$. \begin{figure}[h!] \includegraphics[scale=.8]{qb_pure_passive_global1.pdf} \caption{Global work extractable from pure locally passive quantum battery states. The maximum work extractable, using global unitary operations, from the set of pure locally passive states with fixed entanglement $E$ is denoted by $G^p_E$ and plotted in units of $\epsilon$ on the vertical axis, while $E$ is plotted on the horizontal one. The horizontal axis is in ebits, while the vertical one is dimensionless. The analytic form is given in Eq. \eqref{A}.} \label{fig1} \end{figure} \begin{figure}[h!] \includegraphics[scale=.8]{qb_mixed_passive_global1.pdf} \caption{Global work extractable from general two-qubit locally passive quantum battery states. The considerations are the same as in Fig. \ref{fig1}, except that the states can also be mixed, so that there are far more states available in the optimization procedure for a fixed value of $E$. The vertical axis now represents the quantity $\widetilde{G}^p_E$, in units of $\epsilon$. The plot is obtained via a numerical nonlinear optimization procedure.} \label{fig2} \end{figure} We plot $G_E^p$ vs $E$ in Fig. \ref{fig1}. From the plot, we can conclude that the globally extractable work increases with entanglement for states from which no work can be extracted locally. We also numerically analyze the maximum work extraction for locally passive states that may be mixed, via nonlinear optimization. The runs are performed so that the maximum value of the globally extractable work, which we now denote as $\widetilde{G}^p_E$, is correct up to the third decimal place. We present the graph in Fig. \ref{fig2}. Comparing Figs. \ref{fig1} and \ref{fig2}, we can see that the globally extractable work from a mixed locally passive state can be much higher than that from the pure locally passive state with the same entanglement. $G^p_E$ and $\widetilde{G}^p_E$ are both concave upward as functions of $E$, although the curvature is higher for pure states. For low values of entanglement, non-pure states provide far greater globally extractable work than pure states. \section{Global work extraction from general battery states with fixed entanglement} \label{sec4} We now move over from locally passive states to general quantum states of two qubits, while still remaining with local Hamiltonians, and analyze the amount of work that can be extracted globally from a state with a fixed value of entanglement.
The case of local work extraction for general two-qubit states, and its difference from the globally extractable work, is considered in the succeeding section. We begin the analysis by considering pure states, so that the Hamiltonian and the states are of the forms given in Eqs. \eqref{eq4} and \eqref{eq5}. In this case, the states under consideration are not necessarily locally passive, and hence the $c_i$'s satisfy only conditions (i) and (ii). The forms of $E$ and the globally extractable work remain the same as in Eqs. \eqref{eq6} and \eqref{eq7}. Since, in this section, we consider global work extraction from states that are not necessarily locally passive, we denote the maximum work extraction from a pure state by $W_{max}$, and the maximum work extraction from the set of pure states with fixed entanglement $E$ by $G_E$. Now, the coefficients of $\left(\epsilon^A+\epsilon^B\right)$ and $\left(\epsilon^A-\epsilon^B\right)$ have the same forms as in Eq. \eqref{eq7}, and both are subject to the same constraints, given in conditions (i) and (ii) and Eq. \eqref{eq6}, so that the maximum value that either coefficient can achieve is the same for both. Since $\left(\epsilon^A+\epsilon^B\right) \geq \left(\epsilon^A-\epsilon^B\right)$, if we keep increasing the coefficient of $\left(\epsilon^A+\epsilon^B\right)$ and keep decreasing the coefficient of $\left(\epsilon^A-\epsilon^B\right)$ in such a way that the constraints remain satisfied, we maximize $W_{max}$ while keeping the entanglement fixed. This maximum value of global work extraction is given by \begin{equation} G_E=\left(\epsilon^A+\epsilon^B\right)\left(|c_0|^2-|c_3|^2+1\right). \label{eq9} \end{equation} Using Eq. \eqref{eq6} and condition (ii), we get the same solutions for $|c_0|^2$ and $|c_3|^2$ as given in Eq. \eqref{eq8}. But in this case, $|c_0|$ need not be less than $|c_3|$, and we can see from Eq. \eqref{eq9} that we get a higher amount of work extraction for $|c_0|\geq |c_3|$ than for $|c_0|\leq |c_3|$. Hence, in this case, we choose $|c_0|^2=\frac{1}{2}\left(1+\sqrt{1-(2^E-1)^2}\right)$. Therefore, \begin{equation} G_E=(\epsilon^A+\epsilon^B)\left(1+\sqrt{2^{E+1}-2^{2E}}\right). \label{eq13} \end{equation} The state for which this maximum value is achieved is given by \begin{eqnarray} \rho^{max}_{E}=\left[ \begin{matrix} |c_0|^2&0&0&c_0c_3^* \\ 0&0&0&0 \\ 0&0&0&0 \\ c_3c_0^*&0&0&|c_3|^2 \end{matrix} \right], \label{eq10} \end{eqnarray} where \begin{eqnarray} |c_0|^2&=&\frac{1}{2}\left(1+\sqrt{1-(2^E-1)^2}\right) \nonumber \\ \text{and } |c_3|^2&=&\frac{1}{2}\left(1-\sqrt{1-(2^E-1)^2}\right). \label{eq12} \end{eqnarray} \begin{figure}[h!] \includegraphics[scale=.8]{qb_pure_global1.pdf} \caption{How much work for a given entanglement? We plot here the globally extractable work from an arbitrary pure quantum battery state with given entanglement. The globally extractable work is represented on the vertical axis and denoted by $G_E$ (in units of $\epsilon$), while the entanglement is represented on the horizontal axis and denoted by $E$. The vertical axis is dimensionless, while the horizontal one is in ebits. } \label{fig3} \end{figure} In Fig. \ref{fig3}, we plot this $G_E$ as a function of $E$. We can see that the two curves displayed in Figs. \ref{fig1} and \ref{fig3}, respectively, have rather opposite natures. While $G^p_E$ increases with entanglement, $G_E$ decreases; and whereas $G^p_E$ is convex in the entanglement, $G_E$ is concave.
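These closed forms are straightforward to evaluate numerically. The short NumPy snippet below (ours; we take $\epsilon=1$, so that $\epsilon^A=2$ and $\epsilon^B=1$) tabulates $G^p_E$ of Eq. \eqref{A} and $G_E$ of Eq. \eqref{eq13}, making their opposite monotonic behaviors explicit:
\begin{verbatim}
import numpy as np

eps_A, eps_B = 2.0, 1.0                     # epsilon^A = 2, epsilon^B = 1
E = np.linspace(0.0, 1.0, 6)                # entanglement in ebits
root = np.sqrt(2.0**(E + 1) - 2.0**(2 * E))
G_p = (eps_A + eps_B) * (1.0 - root)        # Eq. (A), increasing in E
G   = (eps_A + eps_B) * (1.0 + root)        # Eq. (13), decreasing in E
for row in zip(E, G_p, G):
    print("E = %.1f   G^p_E = %.3f   G_E = %.3f" % row)
\end{verbatim}
Note that, as an immediate consequence of the two closed forms, $G^p_E+G_E=2(\epsilon^A+\epsilon^B)$ for all $E$.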
To understand the differing nature of these two curves, we perform an analysis at the beginning of the succeeding section. We now move over to general states and find the maximum amount of work that can be extracted by global unitaries. The analysis is again performed using the numerical nonlinear optimization procedure. The convergence is checked up to the first decimal place. In Fig. \ref{fig5}, we plot this maximum value as a function of the entanglement in the battery state. We observe that the behavior of the plot in Fig. \ref{fig3}, where the battery state was restricted to be pure, is similar to that in Fig. \ref{fig5}, where there is no such restriction. \begin{figure}[h!] \includegraphics[scale=.8]{qb_mixed_global1.pdf} \caption{Globally extractable work from general quantum battery states. The considerations are exactly the same as in Fig. \ref{fig3}, except that the battery state for a given entanglement, $E$, can be non-pure. The globally extractable work is denoted here by $\widetilde{G}_E$.} \label{fig5} \end{figure} \section{Work deficit for local extraction from quantum battery} \label{sec5} In this section, we discuss the advantage provided by global operations over local operations in extracting work. We first determine the work deficit incurred by restricting to local work extraction for the states whose global work extraction is maximal among all pure states with fixed entanglement. Then we find the difference between the maximum amounts of extractable work using global and local operations, as a function of entanglement. We have seen that the optimal amount of extractable work, among the set of all pure states with fixed entanglement $E$, is achieved for the state $\rho^{max}_E$, expressed in Eq. \eqref{eq10}. The locally passive state corresponding to this pure state is given by \begin{eqnarray} \sigma_{\rho^{max}_E}^l= \left[ \begin{matrix} |c_3|^2&0&0&c_3c_0^* \\ 0&0&0&0 \\ 0&0&0&0 \\ c_0c_3^*&0&0&|c_0|^2 \end{matrix} \right]. \nonumber \end{eqnarray} Here, $c_0$ and $c_3$ satisfy Eq. \eqref{eq12}. Hence, using Eq. \eqref{eqB}, we get the locally extractable work from $\rho^{max}_E$ as \begin{equation} \overline{L}_E=2(\epsilon^A+\epsilon^B)\sqrt{2^{E+1}-2^{2E}}. \nonumber \end{equation} Therefore, the deficit in work extraction from $\rho_E^{max}$ due to restricting to local unitaries is \begin{equation} G_E-\overline{L}_E=\left(\epsilon^A+\epsilon^B\right)\left(1-\sqrt{2^{E+1}-2^{2E}}\right). \label{eq15} \end{equation} This is exactly equal to the optimal global work extractable from pure locally passive states with the same entanglement, i.e., $G^p_E$. We therefore have the following result. \vspace{0.5cm}\\ \noindent \textbf{Theorem 3.} \emph{The difference in work extraction between global and local unitaries, for the pure battery state providing maximal global work, is equal to the maximal global work available from the corresponding pure locally passive state having the same entanglement.} Therefore, the opposite features seen in Figs. \ref{fig1} and \ref{fig3} can be traced to the same fact: the work that local unitaries fail to extract from $\rho^{max}_E$ is precisely the global work available from the corresponding locally passive state $\sigma^{l\text{ }max}_E$, from which, by definition, no work can be extracted locally.
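Theorem 3 can also be verified directly from the closed forms; the following NumPy check (ours) confirms the identity $G_E-\overline{L}_E=G^p_E$ over the full range of $E$:
\begin{verbatim}
import numpy as np

eps = 3.0                                    # eps_A + eps_B, with epsilon = 1
E = np.linspace(0.0, 1.0, 101)
root = np.sqrt(2.0**(E + 1) - 2.0**(2 * E))
G_E, L_bar, G_p = eps*(1 + root), 2*eps*root, eps*(1 - root)
print(np.allclose(G_E - L_bar, G_p))         # True: Theorem 3
\end{verbatim}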
\begin{figure}[h!] \includegraphics[scale=.8]{qb_pure_local1.pdf} \caption{Interplay between global and local work extraction for quantum batteries. We plot the optimal amount of locally extractable work from pure states with fixed entanglement $E$, denoted $L_E$ (black squares), along the vertical axis against $E$ along the horizontal axis, together with the amount of work, $\overline{L}_E$ (red line), extractable using local operations from the pure state with entanglement $E$ for which the extractable work using global operations is maximal. Both $L_E$ and $\overline{L}_E$ are given in units of $\epsilon$, so that the vertical axis is dimensionless, while the horizontal axis is in ebits.} \label{fig6} \end{figure} We found the maximum work extractable by global operations for pure states in the preceding section, obtaining the result that the maximum extractable work, $G_E$ (given in Eq. \eqref{eq13}), is achieved for the state $\rho^{max}_E$ (given in Eq. \eqref{eq10}). Using the nonlinear numerical optimization procedure, we now find the maximum amount of work extractable by local operations from a pure state with entanglement $E$, and denote it by $L_E$. We denote the state for which this locally extractable work is maximal by $\rho^{l \text{ }max}_E$. Surprisingly, we find that \begin{eqnarray} \rho^{l \text{ }max}_E&=&\rho^{max}_E, \nonumber \\ \text{and hence, } {L}_E&=&\overline{L}_E. \label{eq14} \end{eqnarray} This can be seen from Fig. \ref{fig6}, where we plot $\overline{L}_E$ as a line and $L_E$ as points; all the points fall on the line. Hence, we can state the following result. \vspace{0.5cm}\\ \textbf{Proposition 4.} \textit{The pure state for which the globally extractable work is maximal, among the set of all pure states with a fixed value of entanglement, also provides the maximum locally extractable work.} Using Eqs. \eqref{eq15} and \eqref{eq14}, we get the difference between the maximum global and the maximum local work extraction, as a function of entanglement: \begin{equation} G_E-L_E=(\epsilon^A+\epsilon^B) \left(1-\sqrt{2^{E+1}-2^{2E}}\right). \nonumber \end{equation} \section{Conclusion} \label{sec6} We have analyzed the working of a shared quantum battery governed by local Hamiltonians. An important attribute of a battery is its passive state, that is, the state that disallows any energy extraction. A characterization of the globally passive states was already known. Here we have characterized the locally passive states for an arbitrary quantum battery with local Hamiltonians. We subsequently restricted our attention to two-qubit systems. We found the relation between the entanglement of a two-qubit locally passive battery state and the amount of energy that can be globally extracted from it. While the result is derived analytically for pure battery states, the general case is determined via a nonlinear numerical optimization procedure. We then considered the question of global extraction of energy from a general, not necessarily locally passive, two-qubit battery when the governing Hamiltonian is local.
We found that the difference between global and local work extraction from the pure battery state providing maximal global work at a given amount of entanglement is equal to the optimal global work extractable from the corresponding locally passive battery state having the same entanglement. We also showed that the pure battery state for which the globally extractable work attains its maximum, among the set of all pure states with a fixed value of entanglement, also provides the maximum locally extractable work.
{ "redpajama_set_name": "RedPajamaArXiv" }
856
{"url":"http:\/\/mathhelpforum.com\/calculus\/137424-area-rectangle-within-ellipse.html","text":"# Thread: Area of a rectangle within an ellipse\n\n1. ## Area of a rectangle within an ellipse\n\nFind the dimensions of the largest rectangle wish dies parallel to the axes that can be inscribed in the ellipse x^2 + 4 y^2 = 4.\n\nOkay, here's my work so far:\nA=4xy\nRearrange for x so A=4y(sqrt(4-4y^2))\nUse product rule, obtain derivative and set A' = 0\nIt eventually comes to A' = -32y^2\/2 (sqrt (4-4y^2)) + 4 (sqrt (4-4y^2))\nAnd I solve to get y = sqrt2\/2, which is incorrect.\nThe answer for the dimensions is 2sqrt2 by sqrt2. I know I have to sub to get x back after, but is there some algebraic process which is incorrect here? Any help would be super.\n\n2. Hello, cinematic!\n\nYour work is correct . . . up to the punchline.\n. . Did you forget to \"double\"?\n\nFind the dimensions of the largest rectangle with sides parallel to the axes\nthat can be inscribed in the ellipse: $x^2 + 4 y^2 \\:=\\: 4$\nCode:\n |\n* * *\n* | *\no - - - - - + - - - - - o\n*: | :*\n: | y:\n* : | : *\n- - * : - - - - - + - - - - - : * - -\n* : | x : *\n: | :\n*: | :*\no - - - - - + - - - - - o\n* | *\n* * *\n|\n\nThe upper half of the ellipse is: . $y \\:=\\:\\sqrt{1-\\frac{x^2}{4}}$\n\nThe area of the rectangle is: . $A \\;=\\;4xy$\n\nWe have: . $A \\;=\\;4x\\left(1 - \\tfrac{1}{4}x^2\\right)^{\\frac{1}{2}}$\n\nThen: . $A' \\;=\\;4x\\cdot\\tfrac{1}{2}\\left(1 - \\tfrac{1}{4}x^2\\right)^{\\text{-}\\frac{1}{2}}\\! \\left(\\text{-}\\tfrac{1}{2}x\\right) + 4\\left(1-\\tfrac{1}{4}x^2\\right)^{\\frac{1}{2}}$\n\n. . . $-\\frac{x^2}{\\sqrt{1-\\frac{x^2}{4}}} + 4\\sqrt{1 - \\tfrac{x^2}{4}} \\;=\\;0 \\quad\\Rightarrow\\quad \\frac{x^2}{\\sqrt{1-\\frac{x^2}{4}}} \\;=\\;4\\sqrt{1-\\tfrac{x^2}{4}}$\n\n. . . $x^2 \\:=\\:4\\left(1 - \\tfrac{x^2}{4}\\right) \\quad\\Rightarrow\\quad x^2 \\:=\\:4-x^2 \\quad\\Rightarrow\\quad 2x^2 \\:=\\:4 \\quad\\Rightarrow\\quad x^2 \\:=\\:2$\n\nHence: . $x \\:=\\:\\sqrt{2} \\quad\\Rightarrow\\quad y \\:=\\:\\frac{\\sqrt{2}}{2}$\n\nTherefore: . 
$\\begin{Bmatrix}\\text{width} &=& 2x &=& 2\\sqrt{2} \\\\ \\text{height} &=& 2y &=& \\sqrt{2} \\end{Bmatrix}$\n\n,\n\n,\n\n### find the dimensions of the largest rectangle that can be inscribed in the ellipse\n\nClick on a term to search for related topics.","date":"2016-09-27 00:19:38","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 9, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7677547931671143, \"perplexity\": 834.9777504756476}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2016-40\/segments\/1474738660916.37\/warc\/CC-MAIN-20160924173740-00065-ip-10-143-35-109.ec2.internal.warc.gz\"}"}
null
null
\section{Introduction} \label{sec:introduction} There is considerable interest in generating human-like virtual agents for AR and VR applications. These agents are used to generate immersive social experiences for games, training, entertainment, virtual space visitations, or social-phobia treatments. In order to maintain the sense of presence in virtual environments, it is \uline{essential} that these virtual agents look realistic and interact in a plausible manner~\cite{bailenson2005independent}. There has been a great deal of work on improving the realism of virtual humans in terms of rendering, body shapes, facial expressions, hair, locomotion, etc. A key issue is related to dressing three-dimensional virtual humans using garments and animating the cloth deformation corresponding to draping and wrinkles. This is \uline{crucial} because up to $80$\% of a human body can be covered by clothing. As virtual agents move, bend, or interact with the environment, the clothing folds, wrinkles, and stretches to conform to the virtual agents' poses. Moreover, the garments may stick to themselves and/or other pieces of clothing. This problem is also important in the context of efficient virtual try-on simulation. In these applications, the user can experience the fit of several garments on their 3D models or avatars. The goal is to experience real-life clothes on virtual models and evaluate the garment behavior when a user moves around or stands in different poses. Recently, many AR-based interfaces have been proposed for virtual try-on. All these applications must be able to simulate or animate the cloth and garments at interactive rates. Physics-based simulation methods directly model the non-linear behavior of clothing and contacts to generate realistic cloth simulations~\cite{baraff1998large,narain2012adaptive,cirio2014yarn}. However, high-resolution clothing meshes and \uline{computationally} expensive nonlinear solvers are frequently used, making it difficult to simulate clothing at interactive rates (i.e., 30 fps or more). Moreover, dressing bodies of different shapes requires a separate simulation or setup for each body shape. This makes it difficult to use physics-based methods for real-time VR or virtual try-on applications. \uline{Most data-driven clothing animation methods} synthesize the cloth deformation from pre-computed clothing deformation samples for different body poses~\cite{kim2013near,xu2014sensitivity}. In order to achieve real-time performance, these methods compromise the animation quality by simplifying the relationship between the deformed clothing and the underlying body poses, assuming linearity or locality. To consider the effect of body shape on cloth deformation, Guan et al.~\cite{guan2012drape} model the cloth deformation as a function of body motion and shape and resize the garment to fit the body. Recently, \cite{santesteban2019learning} presented a learning-based model to predict the deformation of a fixed-size garment under various poses and shapes. However, it uses an exhaustive database construction procedure that requires substantial computational resources \uline{offline}. \noindent {\bf Main Results:} In this paper, we present a novel real-time algorithm for cloth synthesis for VR and virtual try-on applications. We use a data-driven approach and present a new method to simulate plausible deformation effects at an interactive rate.
We factorize the clothing deformation into two independent components: the pose-dependent deformation and the shape-dependent deformation. We refer to the pose-dependent deformation as the {\em clothing pose model}, which is used to predict clothing deformations under various poses; we refer to the shape-dependent deformation as the {\em clothing shape model}, which can predict clothing deformations under various shapes. We also present a clothing synthesis scheme that combines these two components through Taylor expansion. Our method adopts a pose- and shape-dependent skinning scheme for clothing synthesis to meet the needs of real-time virtual try-on on a CPU for synthetic bodies of \uline{various shapes and poses} (as shown in Figure~\ref{fig:teaser}). The three novel components of our work are: {\textbf{Clothing shape model:}} \uline{We present a novel pose-independent clothing shape model.} Given a set of clothing instances simulated on bodies with a specific pose and various shapes, we first reduce the \uline{dimensionality} with Principal Component Analysis (PCA). We use the first $k$ principal component coefficients spanning the clothing deformation space \uline{(we experimentally set $k=5$, as described in Section \ref{subsec:clothingshapemodel})}. Next, we map the body shape parameters to the coefficients of the PCA basis of the clothing deformation space. Therefore, given a set of body shape parameters, we can predict the corresponding clothing deformation result. {\textbf{Taylor expansion to approximate clothing deformations:}} \uline{We present a real-time clothing synthesis method by considering both the pose-dependent deformation and the shape-dependent deformation using Taylor expansions.} We represent clothing deformation as a function of body shape parameters $\beta$ and body pose parameters $\theta$, $f(\beta,\theta)$. \uline{The clothing deformation in the neighborhood of the given anchoring point $(\beta_0,\theta_0)$ is approximated with a Taylor expansion. The partial derivatives of the clothing deformation} are calculated numerically using the clothing pose model and the clothing shape model to predict $f(\beta_0, \theta_0+\Delta \theta)$ and $f(\beta_0+\Delta \beta, \theta_0)$, respectively. Given the new parameters $(\beta=\beta_0+\Delta \beta,\theta=\theta_0+\Delta \theta)$, we synthesize the clothing deformation by blending the Taylor expansion results from \uline{the} nearby anchoring points. \uline{Accordingly, our approach can add the shape dimension to any clothing pose model; this is in contrast to \cite{xu2014sensitivity}, which can only deal with pose-dependent deformations.} Moreover, we use a sensitivity-based measurement to find nearby anchoring points and to calculate the blending weights. {\textbf{Sampling scheme:}} \uline{We present a pose space analysis method to generate a compact example database in parallel, which can significantly reduce the offline computational cost.} In order to generate plausible results, we cluster the pose space into a small number of clusters. We use the cluster centers to calculate the anchoring points and thereby build a compact database. In our implementation, we only sample data points along the $\theta$ axis, which spares us from having to train the input clothing pose model at multiple shapes.
This sampling scheme can also be used to generate the database of \uline{the sensitivity-optimized rigging (SOR) method \cite{xu2014sensitivity}, thereby significantly improving its results.} Our approach is general and provides a universal scheme to add the shape dimension to any clothing pose model with a compact database. We validate our method with SOR~\cite{xu2014sensitivity} and a sequence of simulated clothing instances. Our results show that, with a small number of anchor points (approximately 150), our method can generate realistic clothing deformations with an extra time consumption of 4ms per frame. On a commodity CPU, we obtain $56$ FPS for a long-sleeved shirt with $12$K vertices. We also perform a preliminary user study in an immersive VR setting and highlight the perceptual benefits of our approach compared to prior interactive methods based on linear blend skinning. \begin{comment} \uline{\sout{ In summary, the novel components of our work include:}} \begin{itemize} \item \uline{\sout{A novel pose-independent clothing shape model. Given the body shape parameter, it can accurately predict the corresponding clothing shape that fits the input body.}} \item \uline{\sout{A real-time clothing synthesis method to combine the pose-dependent deformation and the shape-dependent deformation through Taylor expansion. Accordingly, our approach can add the shape dimension to any clothing pose model; this is in contrast to \cite{xu2014sensitivity}, which can only deal with pose-dependent deformation.}} \item \uline{\sout{A pose space analysis method to generate a compact example database in parallel, which can significantly reduce the offline computational cost.}} \end{itemize} \end{comment} The rest of the paper is organized as follows: In Section \ref{sec:relatedworks}, we detail relevant related work in clothing animation. In Section \ref{sec:method}, we describe the Taylor expansion, clothing shape model, clothing pose model, runtime synthesis, and database construction of our method. Section \ref{sec:experiment} details the results of our method. The discussion and conclusion are given in Sections \ref{sec:discussion} and \ref{sec:conclusion}, respectively. \section{Related Work}\label{sec:relatedworks} {\textbf{Physics-based clothing simulation.}} In the past two decades, following the seminal work by Baraff et al.~\cite{baraff1998large}, physics-based clothing simulation has become an active topic in the computer graphics community. The following works focus on integration methods~\cite{hauth2003analysis,volino2005implicit,fierz2011element}, strain limiting~\cite{provot1995deformation,goldenthal2007efficient,thomaszewski2009continuum,wang2010multi,ma2016anisotropic}, and various clothing simulation models \cite{grinspun2003discrete,english2008animating,choi2005stable,volino2009simple,bridson2002robust,harmon2008robust,zheng2012energy,cirio2014yarn,guo2018material}. While these methods can produce highly realistic clothing deformations, they are typically quite time-consuming, especially with high-resolution cloth meshes.
Many acceleration methods have been developed, including the projective dynamics method \cite{liu2013fast,bouaziz2014projective}, where integration is interpreted as an optimization problem; the Chebyshev semi-iterative approach, which accelerates the projective dynamics method \cite{wang2015chebyshev,fratarcangeli2018parallel}; and position-based dynamics~\cite{muller2007position}, where internal forces are replaced by position-based constraints to achieve both efficiency and stability. \uline{Recently, parallel GPU-based methods have been developed that perform implicit integration and accurate collision handling (including self-collisions)\cite{tang2016cama,tang2018cloth,tang2013gpu,govindaraju2005quick,pcloth}. The underlying collision queries are performed using CCD tests that involve algebraic solvers and reliable computations~\cite{manocha1994algorithms,tang2014fast,manocha1998solving,manocha1992algebraic} for the elementary tests. The overall simulation algorithms exploit the parallelism of one or more GPUs and can accurately simulate clothing at $2-10$ fps on high-end desktop GPUs. The actual running time varies with the mesh resolution as well as the number of colliding triangle configurations. Most VR applications require 30 fps or more, and current physics-based methods cannot offer such performance. Furthermore, in many applications, we need to perform cloth simulation on a mobile device (e.g., a smartphone), and we need methods that have a lower computational overhead.} The time-consuming nature of collision processing has also prompted many researchers to focus on developing accelerated data structures such as bounding-volume hierarchies \cite{klosowski1998efficient}, distance fields \cite{fuhrmann2003distance}, shape approximation with simple primitives \cite{thiery2013sphere,wu2018variational}, and other spatial partitioning methods \cite{teschner2005collision}. Combined with position-based dynamics \cite{muller2007position}, these methods can achieve real-time clothing animation with plausible dynamics for a medium-resolution mesh. However, the resulting clothing animation lacks detailed wrinkles. {\textbf{Data-driven clothing animation. }} Data-driven clothing animation techniques have received considerable attention in recent years for real-time applications that require high fidelity. De Aguiar et al. \cite{de2010stable} reduced the dimension of cloth space and body space with PCA and then learned a conditional dynamical model of cloth in the low-dimensional linear cloth space. Their method is fast and stable but cannot generate clothing deformations with highly detailed wrinkles. Wang et al. \cite{wang2010example} regarded wrinkles as a function of local joint angles and augmented coarse simulations with detailed wrinkles from a pre-computed wrinkle database. Kim et al. \cite{kim2013near} exhaustively searched a motion graph to generate realistic secondary motion of clothing deformations. However, both the memory and the computational requirements for the database are prohibitively high. Hahn et al. \cite{hahn2014subspace} simulated clothes in low-dimensional linear subspaces learned from performing PCA on clothing instances in different poses. While they can reproduce detailed folding patterns with only a few bases, the method is still too costly to be used in real-time clothing animation. To balance speed and quality, Xu et al.
\cite{xu2014sensitivity} introduced real-time example-based clothing synthesis using sensitivity-optimized rigging to achieve physically plausible clothing deformation. Given a set of pre-computed example clothing deformations sampled at different poses, the method first rigs the example clothing shapes to their poses with the underlying body skeleton at each example pose in the offline stage. At runtime, the method synthesizes a clothing deformation of the input pose by blending skinned clothing deformations computed from nearby examples. \uline{Jin et al.\cite{jin2018pixel} represented clothing shapes as offsets from the underlying body surface, and used convolutional neural networks to learn pose-dependent deformations in image space. Other methods apply CNNs directly on triangle meshes, predicting high-resolution clothing deformations from low-resolution inputs~\cite{chentanez2020cloth}.} In addition to the significant advances achieved in pose-dependent clothing animation, variability in the human body shape has also been considered in clothing animation for virtual try-on applications. Inspired by SCAPE \cite{anguelov2005scape}, Guan et al. \cite{guan2012drape} introduced a model of clothing animation called DRAPE (DRessing Any PErson), which separated clothing deformations due to body shape from those due to pose variation. DRAPE can fit avatars of various poses and shapes with customized garments and change the clothing model according to the body shape. \uline{Recently, many techniques have been proposed for learning-based clothing simulation. Wang et al.\cite{wang2018learning} learned a shared shape space in which users are able to indicate desired fold patterns simply by sketching, and the system generates the corresponding draped garment and body shape parameters. However, their approach cannot generate clothing deformations corresponding to different poses.} Inspired by SMPL \cite{loper2015smpl}, Santesteban et al. \cite{santesteban2019learning} introduced a learning model of cloth drape and wrinkles. The key innovation of their method is that they added corrective displacements caused by body shape and pose variations to the template cloth mesh and then deformed the template mesh using a skinning function. In many ways, our approach is complementary to this method. A limitation of their method is that the training dataset is prohibitively large, which is similar to other learning-based methods\cite{lahner2018deepwrinkles,patel2020tailornet}. \uline{By treating clothing as an extra offset layer from the body, Ma et al.\cite{ma2020learning} trained a conditional Mesh-VAE-GAN to learn the clothing deformation from the SMPL body model. This work has the limitation that the level of geometric detail it can achieve is upper-bounded by the mesh resolution of SMPL.} \uline{Yang et al.\cite{yang2018analyzing} modeled the clothing layer as an offset from the body and performed PCA to reduce the self-redundancies. In contrast, we perform PCA directly on the coordinates of clothing deformations to compute our clothing shape model.} Many other works focused on cloth reconstruction from a single image or scan data\cite{saito2019pifu,natsume2019siclope,pons2017clothcap}, \uline{as well as cloth material recovery from video\cite{yang2017learning}}. {\textbf{Sensitivity analysis.
}} Sensitivity analysis was originally used to determine the impact of the input data on the output results in linear programming problems \cite{saltelli2004sensitivity}, and it has been widely used to solve optimization problems in the field of graphics, including \uline{shell} design \cite{kiendl2014isogeometric}, composite silicone \uline{rubber} design \cite{zehnder2017metasilicone}, and robotics design \cite{geilinger2018skaterbots, zimmermann2019puppetmaster}. Sensitivity analysis was first introduced to the clothing simulation community by Umetani et al. \cite{umetani2011sensitive} to build a mapping between variations of 2D patterns and 3D clothing deformations to achieve interactive clothing editing. After that, Xu et al. \cite{xu2014sensitivity} proposed a technique called sensitivity optimized rigging (SOR) to perform real-time clothing animation. They use sensitivity both to rig the clothing instances in example poses and to find the example poses nearest to the input pose. In this paper, we use sensitivity to find the anchoring points nearest to the input pose and shape. {\textbf{Taylor expansion. }} A complex function can be approximated by its first-order Taylor expansion in a neighborhood of the expansion point. Such a method has been applied in the field of simulation and animation to solve various approximation problems. To generate real-time facial animations, Barrielle et al. \cite{barrielle2019realtime} applied a first-order Taylor approximation to the computations of the Singular Value Decomposition, thereby significantly accelerating the simulation of volumetric forces. Shen et al. \cite{shen2015geometrically} adopted a finite Taylor series approximation of the potential energy to avoid numerical singularity and instability during the simulation of inextensible ribbons. In the cloth simulation community, Taylor expansion is generally used to make a first-order approximation of the internal forces. To solve the nonlinear equation involved in the implicit backward Euler method, Baraff et al. \cite{baraff1998large} applied a Taylor series expansion to the force acting on the cloth and made the first-order approximation, which leads to a linear system. As a result, their cloth simulation system can \uline{handle} large time steps in a stable manner. \uline{Chen et al.\cite{chen2010fully} proposed a fully geometric approach to simulate inextensible cloth that is subjected to a conservative force. They use Taylor expansion to linearize the constraint that preserves isometric deformations.} In this paper, we use Taylor expansion to factor the clothing deformation (which is a complex high-dimensional nonlinear function without an analytical expression) into two parts, introduced by body shape and pose variations, respectively. \section{Method} \label{sec:method} \subsection{Taylor Expansion For Clothing Deformation} \label{subsec:method_overview} \begin{figure*}[t] \setlength{\abovecaptionskip}{0.0cm} \centering \includegraphics[width=\textwidth]{figures/workflow/workflow.pdf} \caption{Overview of our clothing synthesis workflow. Given a query pose and shape, we first calculate the LBS synthesis results for the new shape and the original shape, and we refer to these as the ``shape dimension''. The ``pose dimension'' is obtained from an external clothing pose model with the query pose and the original shape. Then we blend the ``shape dimension'' and the ``pose dimension'' using Taylor expansion to synthesize an intermediate result.
Finally, we apply some refinement techniques such as penetration resolution to generate the final result.} \label{fig:workflow} \end{figure*} Fig. \ref{fig:workflow} illustrates the pipeline of our approach. Our system consists of two stages: the offline training stage and the run-time clothing synthesis stage. To model the nonlinear deformation of the clothing mesh under different poses and shapes, we precompute a pose training set of clothing meshes with a single template body in multiple poses, and a shape training set fitted to different body shapes in each pose. \uline{We use the pose training data to train a clothing pose model (e.g., SOR) to predict the pose-dependent clothing deformation for a template body; for each pose in the shape training data, we train a clothing shape model to predict the shape-dependent clothing deformation with the pose fixed}. At run-time, the method separates deformations induced by pose and shape (i.e., into a clothing pose model and a clothing shape model) through Taylor expansion. Given a query body pose and shape, we first find nearby clothing pose models and clothing shape models and then blend the Taylor expansion result of each nearby anchoring point to synthesize an intermediate mesh. We then resolve penetrations and add damping to get the final clothing mesh for the input body. Specifically, we represent the input body with shape parameters $\beta$ and pose parameters $\theta$. The clothing deformation $Y$ can be formulated as a high-dimensional function $Y = f(\beta, \theta)$ and approximated by the clothing on its nearby example body with shape parameters $\beta_0$ and pose parameters $\theta_0$ (we refer to $(\beta_0,\theta_0)$ as the anchoring point) using first-order Taylor expansion: \begin{equation} \label{eq:taylorexpansionoriginal} f(\beta, \theta)=f(\beta_0, \theta_0) + \Delta \beta f_\beta(\beta_0,\theta_0) + \Delta \theta f_\theta(\beta_0,\theta_0), \end{equation} where $\Delta \beta$ and $\Delta \theta$ are the shape and pose differences, respectively, between the input body and its nearby example body. We use forward differences to calculate the partial derivatives $f_\beta(\beta_0,\theta_0)$ and $f_\theta(\beta_0,\theta_0)$ in Eq. \ref{eq:taylorexpansionoriginal} as follows: \begin{equation}\label{eq:taylorexpansionderivative} \begin{split} f(\beta, \theta)=f(\beta_0, \theta_0) + \Delta \beta \cdot \frac{ f(\beta, \theta_0) - f(\beta_0, \theta_0)}{\Delta \beta}\ \\ + \Delta \theta \cdot \frac{ f(\beta_0, \theta) - f(\beta_0, \theta_0)}{\Delta \theta}\ \\ = f(\beta_0,\theta) + (f(\beta,\theta_0)-f(\beta_0,\theta_0)), \end{split} \end{equation} where $f(\beta_0,\theta)$ represents the clothing under a new pose with the body shape unchanged, which can be computed via a \textit{clothing pose model}. $f(\beta, \theta_0)$ denotes the clothing for a new body shape with the pose fixed, which can be calculated through a \textit{clothing shape model}. Correspondingly, $(f(\beta,\theta_0)-f(\beta_0,\theta_0))$ represents the shape-dependent cloth deformation, which is the cloth mesh deformation during the body shape change from $\beta_0$ to $\beta$ under the anchoring pose $\theta_0$. In this way, the cloth deformations induced by body shape and pose can be separately computed and then combined. However, such an approximation cannot accurately obtain the clothing deformation under $(\beta, \theta)$, since the shape-dependent cloth deformation term in Eq.
\ref{eq:taylorexpansionderivative} should be measured under the new pose $\theta$ as $(f(\beta,\theta)-f(\beta_0,\theta))$, rather than under the anchoring pose $\theta_0$. To improve the approximation accuracy, we apply the Linear Blend Skinning (LBS) method to predict the cloth deformation for the new pose $\theta$ from the cloth mesh under its nearby sample pose $\theta_0$. Therefore, the shape-and-pose dependent cloth mesh deformation $f(\beta, \theta)$ can be formulated as the Augmented Taylor Expansion: \begin{equation}\label{eq:taylorexpansionfinal} f(\beta, \theta) = f(\beta_0,\theta) + \big(LBS^{\theta}_{\theta_0}(f(\beta,\theta_0))-LBS^{\theta}_{\theta_0}(f(\beta_0,\theta_0))\big), \end{equation} where $LBS^{\theta}_{\theta_0}$ stands for the linear blend skinning from pose $\theta_0$ to $\theta$. Since Taylor expansion can only predict the function value at points that are not far away from the expansion point, we apply Taylor expansion at multiple points, and then blend these approximation results. Note that both training and running a clothing pose model are time-consuming in practice. Taking SOR as an example, it takes about 30 hours to construct the necessary database and more than 10ms to predict the clothing deformation under a new pose, which is too time-consuming in our scenario, since we need to run the clothing pose model once at each expansion point. To address this problem, we choose to apply Taylor expansion along the $\theta$ axis, i.e., the expansion points share the same $\beta$ value, e.g., $(\beta_0,\theta_1)$, $(\beta_0,\theta_2)$, $(\beta_0,\theta_3)$, etc. In this way, for each expansion point $(\beta_0, \theta_s)$, the only thing we need to do is to train a clothing shape model at $\theta_s$, and we use only one clothing pose model $f(\beta_0,\theta)$ trained at $\beta_0$, referred to as the original shape. In our implementation, we set the shape parameters of the SMPL body model\cite{loper2015smpl} to be all zeros to get a medium stature body. At run-time, given a query pose and shape, our method first finds the nearby clothing shape models through a sensitivity-based distance measure. For each nearby anchoring point, we compute the approximation result for the query pose and shape according to Eq. \ref{eq:taylorexpansionfinal}. After that, we blend the approximation results with blending weights computed from the distance measure. In summary, our model predicts the clothing under any given body shape parameter $\beta$ and pose parameter $\theta$ by \begin{equation}\label{eq:blending} \begin{split} f(\beta,\theta)=f(\beta_0, \theta) + \sum_{s=1}^{N_s}w^s LBS^{\theta}_{\theta_s}(f(\beta,\theta_s))\ \\ - \sum_{s=1}^{N_s}w^s LBS^{\theta}_{\theta_s}(f(\beta_0,\theta_s)), \end{split} \end{equation} where $N_s$ is the number of clothing shape models and $w^s$ is the blending weight of the $s$-th data point. Figure \ref{fig:workflow} illustrates the computation of this equation, where the three terms on the right-hand side of Eq. \ref{eq:blending} are referred to as the \textit{pose dimension}, the \textit{LBS synthesis for new shape}, and the \textit{LBS synthesis for original shape}. Details will be given in the following sections.
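As a structural illustration of Eq. \ref{eq:blending}, the following Python sketch (ours; the callables are placeholders for the components described in the remainder of this section, not an actual implementation) shows how the pose dimension and the shape dimension are combined at run-time:
\begin{verbatim}
import numpy as np

def synthesize(beta, theta, beta0, pose_model, shape_models, lbs, weights):
    # Sketch of Eq. (blending); all callables are assumed placeholders:
    #   pose_model(theta)     -> f(beta0, theta), the "pose dimension"
    #   shape_models[s](b)    -> f(b, theta_s), the s-th clothing shape model
    #   lbs(verts, s, theta)  -> linear blend skinning from theta_s to theta
    #   weights(theta)        -> w^s, nonzero only for nearby anchoring points
    y = pose_model(theta).copy()          # first term of Eq. (blending)
    w = weights(theta)
    for s in np.nonzero(w)[0]:
        shape_term = (lbs(shape_models[s](beta), s, theta)
                      - lbs(shape_models[s](beta0), s, theta))
        y += w[s] * shape_term            # shape-dependent correction
    return y
\end{verbatim}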
\begin{comment} {\textbf{System input.}} We regard the clothing deformation as a function of the body shape parameter $\beta$ and the body pose parameter $\theta$, denoted as $f(\beta,\theta)$. We employ a clothing pose model $f(\beta_0, \theta)$ such as SOR \cite{xu2014sensitivity} to predict the corresponding clothing deformation under the constant body shape $\beta_0$. Our goal is to add the shape dimension to a clothing pose model. {\textbf{Training stage.}} In the training stage, we first randomly choose some poses $\theta_1, \theta_2,\cdots , \theta_k$, and for each pose $\theta_s$, we train a clothing shape model $f(\beta,\theta_s)$, which can predict the corresponding clothing deformation under pose $\theta_s$ given a set of body shape parameters $\beta$. {\textbf{Runtime synthesis stage.}} As shown in Figure \ref{fig:workflow}, given the query pose and shape, our run-time synthesis stage has two main streams. First, with the query pose, we use an external CPM (Clothing Pose Model) to predict the clothing instance under the query pose and the original shape. We call it the pose dimension. Second, with the query shape, we first search its nearby CSMs (Clothing Shape Models) using the sensitivity-based distance. Then, for each CSM, we predict its clothing instance under \emph{the query shape}. After that, we rig and blend these clothing instances to obtain a coarse result of the clothing instance under the query pose and shape, which we call the LBS1 result. Similarly, for each CSM, we predict its clothing instance under \emph{the original shape}, and then we rig and blend these clothing instances to obtain a coarse result of the clothing instance under the query pose and the original shape, which we call the LBS0 result. LBS1 and LBS0 are, essentially, the shape dimension. Finally, through Taylor expansion, we synthesize the pose dimension and the shape dimension to get the clothing instance under the query pose and shape. Details are elaborated in Section \ref{sec:method}. \end{comment} \subsection{Clothing Shape Model}\label{subsec:clothingshapemodel} \begin{figure*}[t] \setlength{\abovecaptionskip}{0.0cm} \centering \includegraphics[width=\textwidth]{figures/clothing_shape_model/principal_components.pdf} \caption{Clothing shape model. Deviations from the mean shape: (a) the average coordinates of the training data, referred to as the mean shape; (b-d) mean shape deformed along the first three principal component directions ( $\pm$ 3 standard deviations). Note that the global displacement in the vertical direction of (b) is introduced by the body height.} \label{fig:principalcomponents} \end{figure*} The clothing shape model captures clothing deformation induced only by body shape. The model is learned from the clothing shape examples that are simulated under various body shapes with a fixed pose (each column in Figure~\ref{fig:clothingshapemodeltrainingdata}). Thus, the following procedure is run independently for all anchoring poses, each run generating a clothing shape model for a specific pose. We use the SMPL parametric human model to represent the variations in human body shape. In the offline database construction, we select 17 training body shapes in the same pose. For each of the first 4 principal components of the body shape parameters $\beta$ in the SMPL model, we generate 4 body shapes ($\beta_k=-2,-1,1,2$), keeping the remaining parameters in $\beta$ as 0. To these 16 body shapes, we add the nominal shape with $\beta=0$. For each generated body shape, we perform a one-second simulation to completely drape the clothing on the avatar.
All simulations are produced using a clothing model that combines the StVK membrane model\cite{volino2009simple} and the isometric bending model\cite{bergou2007tracks} to simulate the garment examples. It is worth noting that, in the clothing simulation, the patterns and characteristics of the garment mesh do not change across body shapes. For the $k$-th generated clothing instance, we concatenate the clothing coordinates of all vertices into a single column vector $\vec{c^k}$. Then all 17 clothing samples are collected into a matrix $S=[\vec{c^1},...,\vec{c^k},...,\vec{c^{17}}]$. Principal component analysis (PCA) is used to find a low-dimensional subspace so that $\vec{c^k}$ can be approximated by the following equation: \begin{equation} \label{eq:phipca} \vec{c^k}=U\vec{\phi}^k+\vec{u}, \end{equation} where $\vec{u}$ is the mean coordinate vector of the clothing meshes and $\vec{\phi}^k$ is the vector of clothing shape coefficients representing the clothing shape $\vec{c^k}$. The matrix $U$ represents the first few principal components of the shape deformation space. As shown in Figure \ref{fig:pcaconvergence}, the captured variance converges at around 5 principal components, which account for 98.6\% of the total variance. Moreover, we find that the remaining principal components have little effect on the final result. Therefore, we use the first 5 principal components in our clothing shape model to balance efficiency and accuracy. Figure \ref{fig:principalcomponents} illustrates the mean and first three principal components for a men's long-sleeved shirt. Given 17 body and clothing training pairs, we learn a linear mapping, $W$, between body shape parameters and clothing shape parameters using L2-regularized least squares, with the weight of the regularization term set to 0.2. For an input body shape $\beta$, the corresponding clothing shape coefficients $\vec{\phi}$ can be predicted by: \begin{equation} \label{eq:beta2phi} \vec{\phi} = W\cdot \left(\vec{\beta}, \vec{\beta}^2, 1 \right)^T. \end{equation} Thereafter, the clothing shape deformation under the body shape $\beta$ can be predicted through $\vec{\phi}$ using Eq. \ref{eq:phipca}. In practice, we find that there might be some minor interpenetrations between the predicted clothing mesh and the body mesh. We resolve this by pushing the offending clothing vertex \uline{in the direction of} the normal of its closest body vertex until the interpenetration is resolved. This should not be confused with the penetration handling step in Sec. \ref{subsubsec:penetrationhandling}.
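The following NumPy sketch (ours; the variable names are illustrative, and the plain SVD-based PCA and ridge regression are simplifications of the procedure just described) summarizes how a clothing shape model is fitted from the 17 training pairs and evaluated for a new $\beta$:
\begin{verbatim}
import numpy as np

def fit_clothing_shape_model(S, B, n_pc=5, lam=0.2):
    # S: (3V x 17) clothing coordinates, one column per training instance;
    # B: (17 x 4) corresponding SMPL shape parameters.
    u = S.mean(axis=1)                       # mean clothing shape
    X = S - u[:, None]
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    U = U[:, :n_pc]                          # first n_pc principal directions
    Phi = U.T @ X                            # shape coefficients (n_pc x 17)
    F = np.hstack([B, B**2, np.ones((B.shape[0], 1))])  # features of Eq. (6)
    # Ridge regression: W = Phi F (F^T F + lam I)^-1
    W = Phi @ F @ np.linalg.inv(F.T @ F + lam * np.eye(F.shape[1]))
    return u, U, W

def predict_clothing(u, U, W, beta):
    f = np.concatenate([beta, beta**2, [1.0]])
    return u + U @ (W @ f)                   # Eqs. (beta2phi) and (phipca)
\end{verbatim}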
\subsection{Clothing Pose Model}\label{subsec:clothingposemodel} The clothing pose model captures clothing deformation introduced only by pose changes. Many excellent clothing pose models have been presented over the last decade \cite{kim2013near,hahn2014subspace,xu2014sensitivity}. Our animation scheme does not limit the type of clothing pose model; in extreme cases, we can even apply a simulated sequence as the clothing pose model (see Sec. \ref{subsec:animationresult}). For real-time virtual try-on applications, we adopt the sensitivity-optimized rigging method proposed in \cite{xu2014sensitivity}, since it offers a good balance of accuracy and speed. The system consists of two phases: the offline rigging stage and the run-time clothing synthesis stage. Given a set of pre-computed example clothing deformations sampled at different poses, we first rig the example clothing shapes to their poses with the underlying body skeleton bones at each example pose in the offline stage. At run-time, a clothing deformation of the input pose is synthesized by blending skinned clothing deformations computed from its nearby examples. \begin{comment} Specifically, to compute the position of a clothing vertex $\bar{y}^J$ for an input pose $\theta$, the skinning scheme transforms the position of the same clothing vertex $y^J$ at its nearby example pose $p^{J}$ by \begin{equation}\label{eq:sorequation1} \bar{y}^J = \sum_{b=1}^{N_b} \big \{ w_b^J \big (R_b^{J,\theta}y^J+T_b^{J,\theta} \big ) + R_b^{J,\theta}\tau_b^JT_b^{J,\theta} \big\}, \end{equation} where superscript $J$ indicates values for the $J$-th example and $R_b^{J,\theta}$ and $T_b^{J,\theta}$ are the relative rotation and translation of bone $b$ from example pose $p^J$ to input pose $p^{\theta}$, respectively. $w_b^J$ is the bone rotation weight defined on each clothing vertex (for simplicity, we drop the vertex index), which is used to deform the example clothing deformations to an input pose according to bone rotations. $\tau_b^J$ is the bone translation weight, which transforms the bone center translation to its influence on the clothing deformation. For each example pose, these two parameters are computed for each cloth vertex through a Sensitivity-Optimized Rigging (SOR) scheme in the offline stage. For an input pose $\theta$, the system first searches the examples to find the nearby poses $p^J$ and their clothing shapes $y^J$ using a sensitivity-based distance measure. For each nearby pose, the clothing deformation for the input pose $\theta$ is predicted by the skinning scheme according to Eq. \ref{eq:sorequation1}. After that, we get the position of each clothing vertex $y^{\theta}$ at the input pose by blending the clothing shapes deformed from all nearby example poses by: \begin{equation}\label{eq:sorequation2} f(\beta_0, \theta) = y^{\theta}=\sum_{J=1}^{N_J}w^J(\theta)\bar{y}^J, \end{equation} where $\beta_0$ is the same as the one in Eq. \ref{eq:blending} and represents the original body shape, which is used to compute example clothing deformations, and $w^J(\theta)$ is the blending weight for the $J$-th nearby example pose, which can be computed using the sensitivity-based distance measure technique. \end{comment} \begin{figure*}[t] \setlength{\abovecaptionskip}{-0.0cm} \centering \includegraphics[width=\textwidth]{figures/workflow/workflow_lbs.pdf} \caption{The LBS synthesizing process. Given a query pose and shape, we first choose nearby clothing shape models. For each chosen clothing shape model, we change its shape to the query shape (the original shape), and deform the clothing mesh to the query pose using linear blend skinning. Finally, we blend these deformations to get what we call the \textit{LBS synthesis for the query shape (the original shape).}} \label{fig:lbssynthesis} \end{figure*} \subsection{Runtime Synthesis}\label{subsec:runtimeshythesis} Given an input body with shape parameters $\beta$ and pose parameters $\theta$, our method first finds nearby example poses through a sensitivity-based distance measure and then approximates the clothing deformation using Taylor expansion, as shown in Eq. \ref{eq:blending}. Specifically, the first term $f(\beta_0,\theta)$ computes the clothing under the input pose with the original body shape, which can be predicted through the clothing pose model described in Sec. \ref{subsec:clothingposemodel}.
The computation of the second term $\sum_{s=1}^{N_s}w^s LBS^{\theta}_{\theta_s}(f(\beta,\theta_s))$ consists of three steps: predicting new clothing instances $f(\beta,\theta_s)$ for each clothing shape model, applying LBS to $f(\beta,\theta_s)$, and blending the LBS results. For each clothing shape model, we first calculate the clothing shape coefficients through Eq. \ref{eq:beta2phi} using the input shape $\beta$. Then we compute the new clothing instances $f(\beta,\theta_s)$ by Eq. \ref{eq:phipca}. To apply LBS to the new clothing instances $f(\beta,\theta_s)$, we first find the closest body vertex for each clothing vertex and set the bone weights, $w_b^s$, of a clothing vertex to those of its closest body vertex. We refer to this step as the \textit{binding information updating step}. Then we deform each nearby clothing mesh $f(\beta, \theta_s)$ towards the query pose as $\bar{y}^s = \sum_{b=1}^{N_b} w_b^s \big (R_b^{\theta_s,\theta}y^s+T_b^{\theta_s,\theta} \big )$, where $R_b^{\theta_s,\theta}$ and $T_b^{\theta_s,\theta}$ are the relative rotation and translation of bone $b$ from example pose $\theta_s$ to input pose $\theta$, respectively, and $w_b^s$ is the bone weight defined on the clothing vertex $y^s$ of $f(\beta, \theta_s)$. We denote this equation as $LBS^{\theta}_{\theta_s}(f(\beta,\theta_s))$, as shown in Eq. \ref{eq:blending}. We use a sensitivity-based distance measurement both to find nearby clothing shape models and to compute the blending weights $w^s$. To reduce the number of required clothing shape models in the database, we divide the clothing mesh into several regions, as in \cite{xu2014sensitivity}. To this end, we manually partition the bones into $N_g=7$ regions (shown on the top left of Figure \ref{fig:lbssynthesis}). A region weight $w_{g,y}$ for a clothing vertex $y$ is computed by summing the bone weights $w_b^0$ over the bones of the current region. $w_b^0$ is computed in the T-pose $s=0$ for $\beta$ through the \textit{binding information updating step}. In this way, our method synthesizes the result for each region separately. For each region $g$, we compute the sensitivity-based distance $D_g^s(\theta)$ between the input pose $\theta$ and the pose of the $s$-th data point as the weighted sum of differences of joint angles: \begin{equation}\label{eq:weighteddiffonejoint} \begin{split} D_g^s(\theta) = \sum_{y\in Y} w_{g,y} \sum_{m=1}^{3N_L} ||\bar{s}_{y,m}\cdot \Theta_m(\theta,\theta_s)||^2 \\ =\sum_{m=1}^{3N_L} Q_{g,m} ||\Theta_m(\theta,\theta_s)||^2, \end{split} \end{equation} where superscript $s$ indicates values for the $s$-th clothing shape model, $N_L$ is the number of joints, and $\Theta_m(\theta,\theta_s)$ calculates the $m$-th joint angle difference. $\bar{s}_{y,m} = \frac{1}{17} \sum_{k=1}^{17} ||s_{y,m}^k||$ is the average sensitivity over the 17 training shapes, where $s_{y,m}^k$ indicates the three-dimensional coordinate difference of a clothing vertex $y$ under a small rotation of the $m$-th joint angle, calculated under the T-pose and the $k$-th training shape; it is computed in the database construction stage. $Q_{g,m}=\sum_{y\in Y} w_{g,y}||\bar{s}_{y,m}||^2$ reflects the influence of the $m$-th joint angle on region $g$. Each time we change the body shape, we update $Q_{g,m}$ for each region once, according to the new region weights of the clothing vertices. At run-time, we can compute the sensitivity-based distance $D_g^s(\theta)$ efficiently, since we only have to calculate the joint angle differences $\Theta_m(\theta,\theta_s)$. Given the distance $D_g^s(\theta)$, the weight for the region is calculated as $W_g^s(\theta)=1/(D_g^s(\theta) + \epsilon)^k$, where $\epsilon$ is a small number guarding against division by zero and $k$ regulates the influence of closer examples. A small $k$ tends to smooth the animation and lose fine wrinkles, while a large $k$ tends to preserve fine details but results in discontinuity. In our implementation, we set $k=3$. In practice, we use the first five nearby clothing shape models to synthesize the final result, i.e., we set $W_g^s(\theta)=0$ except for the top five largest ones. Finally, the blending weight for each clothing vertex in example $s$ is calculated as: \begin{equation}\label{eq:vertexblendingweight} w^s = \sum_{g=1}^{N_G}\big( w_g W_g^s(\theta)/\sum_{s=1}^{N_s}W_g^s(\theta) \big). \end{equation}
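The distance and weight computations above are straightforward to vectorize. The following NumPy sketch (ours, with assumed array shapes) evaluates Eq. \ref{eq:weighteddiffonejoint} and the normalized per-region weights that feed Eq. \ref{eq:vertexblendingweight}:
\begin{verbatim}
import numpy as np

def region_weights(theta, thetas, Q, k=3, eps=1e-6, top=5):
    # theta: query joint angles (3*N_L,); thetas: (N_s, 3*N_L) anchor poses;
    # Q: (N_g, 3*N_L) per-region joint influences Q_{g,m}.
    diff2 = (thetas - theta) ** 2            # squared joint angle differences
    D = Q @ diff2.T                          # (N_g, N_s) distances, Eq. above
    W = 1.0 / (D + eps) ** k
    for g in range(W.shape[0]):              # keep the five nearest anchors
        W[g, np.argsort(W[g])[:-top]] = 0.0
    return W / W.sum(axis=1, keepdims=True)  # normalized per region

# The per-vertex blending weight w^s then combines these normalized
# region weights with the region weights w_{g,y} of each clothing vertex,
# as in Eq. (vertexblendingweight).
\end{verbatim}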
At run-time, we can compute the sensitivity-based distance $D_g^s(\theta)$ efficiently, since we only have to calculate the joint angle differences $\Theta_m(\theta,\theta_s)$. Given the distance $D_g^s(\theta)$, the weight for the region is calculated as $W_g^s(\theta)=1/(D_g^s(\theta) + \epsilon)^k$, where $\epsilon$ is a small number that avoids division by zero and $k$ regulates the influence of closer examples. A small $k$ tends to smooth the animation and lose fine wrinkles, while a large $k$ tends to preserve fine details but results in discontinuities. In our implementation, we set $k=3$. In practice, we use the first five nearby clothing shape models to synthesize the final result, i.e., we set $W_g^s(\theta)=0$ for all but the five largest weights. Finally, the blending weight for each clothing vertex in example $s$ is calculated as:
\begin{equation}\label{eq:vertexblendingweight}
w^s = \sum_{g=1}^{N_g}\big( w_{g,y} W_g^s(\theta)/\sum_{s'=1}^{N_s}W_g^{s'}(\theta) \big).
\end{equation}
The computation pipeline of the third term $\sum_{s=1}^{N_s}w^s LBS^{\theta}_{\theta_s}(f(\beta_0,\theta_s))$ is essentially the same as that of the second term, except for the body shape. In other words, we first compute clothing instances under the original shape $\beta_0$ for each clothing shape model, denoted as $f(\beta_0,\theta_s)$. Then we blend the LBS results of $f(\beta_0, \theta_s)$ using the same weights as in the second term. Note that $f(\beta_0,\theta_s)$ and their binding information are computed only once for an application.
\subsection{Refinement}
\subsubsection{Decaying Effects}
\begin{figure}[t]
\setlength{\abovecaptionskip}{0.0cm}
\centering
\includegraphics[width=0.48\textwidth]{figures/choose_damping_ratio/choose_damping_ratio.pdf}
\caption{Relationship between the damping ratio and the norm of the residual. We choose the maximum damping ratio that keeps the norm of the residual low.}
\label{fig:choosedampingratio}
\end{figure}
In practice, we find that our synthesized clothing may change abruptly when the input pose changes suddenly. Similar to \cite{xu2014sensitivity}, we prevent this problem by blending the distance at the current time step $D_{g}^{s}$ with that of the previous time step $D_g^{\prime s}$:
\begin{equation}\label{eq:decaying}
D_g^s = \eta D_g^{\prime s} + (1-\eta)D_g^s,
\end{equation}
where $\eta$ is the damping ratio, ranging from 0 to 1. To determine the damping ratio, we first calculate the relationship between the damping ratio and the norm of the residual, computed under the original body shape. Then we choose the maximum value of the damping ratio for which the norm of the residual is still low (see Figure \ref{fig:choosedampingratio}). The rationale is that the higher the damping ratio, the stronger the decaying effect and the less flickering. This simple technique introduces slight hysteresis while maintaining a natural result.
\subsubsection{Penetration Handling}\label{subsubsec:penetrationhandling}
\begin{figure}[t]
\setlength{\abovecaptionskip}{0.0cm}
\centering
\includegraphics[width=0.48\textwidth]{figures/penetration_handling/penetration_handling.png}
\caption{The result before (left) and after (right) our penetration handling process.}
\label{fig:penetrationhandling}
\end{figure}
After we obtain the synthesized result through Eq. \ref{eq:blending}, there might be interpenetrations between the clothing mesh and the body mesh (as shown in Figure \ref{fig:penetrationhandling}).
Like SOR \cite{xu2014sensitivity}, we use a re-projection technique with three steps to resolve this problem. First, every time we change the shape of the avatar, for each clothing shape model we re-calculate the initial distance of each clothing vertex from its closest body vertex. We refer to this as the initial clearance. Second, in the run-time stage, if the clearance between a clothing vertex and its closest body vertex is less than the initial clearance, for each nearby clothing shape model we re-project the clothing vertex along the normal of its closest body vertex as:
\begin{equation}\label{eq:penetrationhandling}
\begin{cases}
\hat{y}^s = y + d^s \cdot \vec{n} \\
d^s = \max (0,\; d_0^s - h^s),
\end{cases}
\end{equation}
where $y$ is a clothing vertex synthesized by Eq. \ref{eq:blending}, $\vec{n}$ is the normal of the closest body vertex of $y$, and $h^s$ is the current clearance. We set $d_0^s=\min\{h_0^s,\epsilon_p\}$, where $h_0^s$ is the initial clearance and $\epsilon_p$ mimics the penetration depth margin in cloth simulation (in our implementation, we empirically set $\epsilon_p = 5\,$mm). Finally, we blend the re-projection results of all nearby examples as $\bar{y}=\sum_{s=1}^{N_s}w^s\hat{y}^s$.
\subsection{Database Construction}
\label{subsec:database}
\begin{figure}[t]
\setlength{\abovecaptionskip}{0.0cm}
\centering
\includegraphics[width=0.48\textwidth]{figures/csm_training/csm_training.pdf}
\caption{Part of our training data. Each column is the training data for a clothing shape model.}
\label{fig:clothingshapemodeltrainingdata}
\end{figure}
\begin{comment}
\subsection{Anchoring Points for Taylor Expansion}
We use the average penetration depth, $\bar{F}$, as the error to measure the quality of the predicted clothing deformation. Each time we sample, our goal is to find body pose parameters $\theta_s$ that locally maximize the total error of the clothing deformations synthesized by the existing examples under several randomly chosen body shape parameters $\beta_{r_1},\beta_{r_2},\cdots,\beta_{r_k}$, as shown in Eq. \ref{eq:totalresidual}:
\begin{equation}\label{eq:totalresidual}
F(\theta_s) = \sum_{k=1}^{N_r} \bar{F}(\theta_s,\beta_{r_k}),
\end{equation}
where $N_r$ is the number of shape parameters, $\bar{F}(\theta_s,\beta_{r_k})$ represents the average absolute value of the displacement of clothing vertices, synthesized under $(\theta_s, \beta_{r_k})$, in the penetration handling process, and $\beta_{r_1},\beta_{r_2},\cdots,\beta_{r_k}$ are the same for each sampling. When we obtain a new sample point, we train a clothing shape model at $\theta_s$. Similar to SOR \cite{xu2014sensitivity}, we use MCMC to sample data points. There are two key components in MCMC: the proposal distribution and the objective distribution.
{\textbf{The Proposal Distribution}} For a given set of motion sequences from the CMU motion capture library \cite{cmumotiondata}, we first use k-means to cluster these motions into a certain number of clusters. (In our implementation, we use xx20 sequences containing xx80,000 motions, and we set the cluster number to 1000.) We use quaternions to represent motions. After the clustering process, each motion belongs to a cluster, and each cluster center represents a state. Then, we build the \emph{state transition graph} as follows. For each motion sequence, a motion has transitions to both its previous motion and its following motion.
Hence, the state a motion belongs to has transitions to the states that its previous motion and its following motion respectively belong to. We accumulate the transition counts. After this process, a state may have multiple paths to other states, and for each path the transition count may be larger than 1. Finally, for each state, we eliminate the transitions to itself, since in our application we do not have to sample one particular pose more than once. Then, to get the probability of a state transiting to another, we divide the transition count of each path by the total transition count. Thus far, we have built the state transition graph of discrete poses, which will be used as the proposal distribution in the MCMC process.
{\textbf{The Objective Distribution}} The objective distribution of our sampling points is:
\begin{equation}\label{eq:objectivedistribution}
f(\theta) = \exp\big(\frac{\tau||F(\theta_s)||}{||\theta-\theta_0||+\epsilon}\big),
\end{equation}
where $\tau$ is a positive number larger than $1$, used to raise the probability of accepting a candidate with a small objective function value, $||\theta-\theta_0||$ penalizes extreme poses with respect to the rest pose $\theta_0$ (the pose in Figure \ref{fig:clothingshapemodel}), and $\epsilon$ is a small number preventing division by zero. After we obtain a certain number of data points, 20 in our implementation, we keep the size of the database fixed and use a technique we call adaptive sampling to optimize the existing database. When a new sample arrives, we substitute one of the existing data points with this sample and determine whether it improves the database; if it does, we record this replacement and its improvement. We iterate over every data point in the existing database to find the best replacement. This process continues until we obtain a satisfactory database. This technique can reduce about ?\% of the error of our result.
\end{comment}
To generate our example database, we first select 32 motion sequences from the CMU motion capture library \cite{cmumotiondata} and sample a pose every four frames. In total, we obtain 17k different poses representing the whole pose space. Then we use weighted K-means to classify these poses into a certain number of clusters, which are used to generate the example database of clothing shape models. In our implementation, it takes about one hour to classify these poses into a typical value of 150 clusters. The weight of a joint should reflect its importance in clothing animation. For instance, while the rotation of the knee joint has little, if any, influence on the animation result of a T-shirt, it plays a crucial role in the deformation of pants. To this end, we use the summed norm of the sensitivities of a joint, $s^L$, as its weight in the clustering process, calculated as
\begin{equation}\label{eq:kmeansweight}
s^L = \sum_{y\in Y}\sum_{m=1}^{3}||\bar{s}_{y,m}||,
\end{equation}
where $m$ runs over the three degrees of freedom of joint $L$ and $\bar{s}_{y,m}$ is the same as in Eq. \ref{eq:weighteddiffonejoint} (i.e., $s^L$ is computed over all the training shapes). The poses in each row of Figure \ref{fig:clothingshapemodeltrainingdata} are the clustering result for a male wearing a long-sleeved shirt. The cluster centers are essentially the anchoring points for the Taylor expansion in Eq. \ref{eq:blending}. For each anchoring point, we generate a clothing shape model, which we elaborate on in Sec. \ref{subsec:clothingshapemodel}.
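As a concrete illustration of the joint weighting, a minimal sketch under an assumed array layout (not our implementation) is:
\begin{verbatim}
import numpy as np

# Joint weights for weighted K-means (Eq. kmeansweight). s_bar has shape
# (V, 3*N_L): the average sensitivity of each clothing vertex to each
# joint-angle DOF over the 17 training shapes, with the three DOFs of a
# joint assumed to be stored contiguously.
def joint_weights(s_bar, num_joints):
    per_dof = np.abs(s_bar).sum(axis=0)                # sum over vertices
    return per_dof.reshape(num_joints, 3).sum(axis=1)  # pool 3 DOFs/joint
\end{verbatim}
These weights scale the joint-angle coordinates before clustering, so that joints with little influence on the garment (e.g., the knees for a T-shirt) contribute little to the pose distance.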
\section{Experiments} \label{sec:experiment}
We implemented our approach in C++ and report its performance on an off-the-shelf computer with an Intel Core i7-7700K CPU at 4.20 GHz and 16 GB of memory. The same material parameters are applied for all types of clothing: the bending stiffness is $10^{-5}\,$N/m, the stretching stiffness is $30\,$N/m, the area density is $0.1\,$kg/m$^2$, and the dynamic and static coefficients of friction are $0.3$.
\subsection{Database Construction and Run-time Performance} \label{subsec:databaseconstruction}
\begin{table}[htbp!]
\begin{center}
\resizebox{0.5\textwidth}{!}
{
\begin{tabular}{|c|c|c|c|}
\hline
Clothing & Long-sleeved shirt & T-shirt & Pants \\ \hline
number of vertices & 12.2k & 12.0k & 11.7k \\
number of triangles & 23.4k & 23.8k & 22.4k \\ \hline
number of data points & 150 / 150 & 150 / 150 & 150 / 170 \\
database size (MB) & 208.5 / 56.7 & 206.3 / 56.0 & 199.5 / 61.7 \\ \hline
construction time (hrs) & 15 / 7 & 10 / 6 & 14 / 8\\
runtime frame rate (FPS) & 56 & 55 & 71 \\ \hline
\end{tabular}
}
\end{center}
\caption{Statistics for three different clothing databases. The number to the left of ``/'' is the data for our method, while the number to the right of ``/'' is the data for the external clothing pose model we use, i.e., SOR. Note that we have discarded the translation term in SOR and replaced its sampling method with ours; these measures contribute to the high speed of the database generation process of SOR.}
\label{tbl:statistics}
\end{table}
We create databases for three clothing models: a long-sleeved shirt and pants for a male body and a T-shirt for a female body (see Figures \ref{fig:teaser} and \ref{fig:moreresult}). For each clothing model, we first calculate its joint weights using Eq. \ref{eq:kmeansweight}; then we use weighted K-means to obtain anchoring points; finally, we generate a clothing shape model for each anchoring point. Because the anchoring points are mutually independent, our data points can be generated in parallel. Taking the T-shirt as an example, we generate $150 \times 17 + 150$ (the data used for our method + the data for SOR) clothing instances to build our database. In contrast, \cite{santesteban2019learning} simulated $7117 \times 17$ clothing instances; that is, their database is more than 40 times larger than ours. Table \ref{tbl:statistics} shows the details of the databases constructed for these clothing models.
\subsection{Separability} \label{subsec:separability}
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{figures/separability_validation/separability_validation.pdf}
\caption{Validation of the local linear separability of pose-dependent deformation and shape-dependent deformation. Practically, we use Eq. \ref{eq:taylorexpansionfinal} to compute the synthesized clothing and, theoretically, we replace each term on the right-hand side of Eq. \ref{eq:taylorexpansionfinal} with the corresponding simulation result. As shown in the graph, both approximation errors converge to zero when the distance to the anchoring point approaches zero. The images on the right are the results for a randomly generated pose and shape with $r=0.5$. Both results (the middle two) realistically recover the folds and wrinkles (the right-most figure is the ground truth), while the theoretical result has a smaller error.}
\label{fig:separability}
\end{figure*}
In this section, we validate the local linear separability of pose-dependent deformation and shape-dependent deformation.
For an anchoring point, we uniformly sample $m$ neighboring data points (we set $m = 100$ in our experiment), all of which lie at a distance $r$ from the anchoring point. Specifically, we randomly generate $3N_L+4$ numbers (the number of joint angles plus 4 shape parameters), which are then concatenated into a vector $\vec{e}$. We normalize $\vec{e}$ to unit 2-norm and then multiply it by $r$. The first $3N_L$ components of the final $\vec{e}$ are the displacements of the joint angles from the anchoring point, while the last $4$ components are the displacements of the shape parameters. Then, for each neighboring point, we calculate the average Euclidean vertex distance between the simulation result and the synthesized result as the approximation error. The average approximation error over all neighboring points is regarded as the approximation error of the anchoring point at distance $r$. We run this procedure both practically and theoretically. Practically, we apply Eq. \ref{eq:taylorexpansionfinal} to compute the synthesized clothing; theoretically, we replace each term on the right-hand side of Eq. \ref{eq:taylorexpansionfinal} (i.e., $f(\beta_0,\theta)$, $f(\beta,\theta_0)$, and $f(\beta_0,\theta_0)$) with the corresponding simulation result. Hence, in the practical experiment, the error comes from three sources: the clothing pose model, the clothing shape model, and the linear approximation. In the theoretical experiment, the error is caused only by the linear approximation. As shown in Figure \ref{fig:separability}, for a randomly chosen anchoring point, both results can recover the folds and wrinkles with good fidelity; the error of the theoretical experiment is smaller than that of the practical experiment for the same distance $r$. Meanwhile, when $r$ is small, both approximation errors are relatively small, and they converge to zero as $r$ approaches zero. This demonstrates the approximate separability of pose-dependent deformation and shape-dependent deformation.
\subsection{Improvement on SOR by Applying Our Sampling Method} \label{subsec:improvementonsor}
\begin{figure}[t]
\setlength{\abovecaptionskip}{0.0cm}
\centering
\includegraphics[width=0.48\textwidth]{figures/sor_improvement/sor_improvement_visual.pdf}
\caption{Qualitative comparison between the original SOR and our method: (a) original SOR result; (b) error between the original SOR result and the simulation result; (c) our result; (d) error between our result and the simulation result. The errors are marked in green. We can see that our result is closer to the ground truth and looks more natural, especially in the sleeve area.}
\label{fig:sorimprovementvisual}
\end{figure}
\begin{table*}[htbp!]
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
$n_{dp}$ & 1 & 10 & 20 & 30 & 40 & 50 & 60 & 70 & 80 & 90 & 100 & 110 & 120 & 130 & 140 & 150\\ \hline
SOR & 45.6 & 28.0 & 25.7 & 28.1 & 29.0 & 29.8 & 28.9 & 26.0 & 25.8 & 25.2 & 23.7 & 23.3 & 23.1 & 22.6 & 22.3 & 22.2\\ \hline
Ours & 27.6 & 20.5 & 20.0 & 18.4 & 16.5 & 18.7 & 18.2 & 17.3 & 17.4 & 18.9 & 17.6 & 17.2 & 16.4 & 16.9 & 17.4 & 17.2\\ \hline
Imp & 39\% & 27\% & 22\% & 34\% & 43\% & 37\% & 37\% & 33\% & 33\% & 25\% & 26\% & 26\% & 29\% & 25\% & 22\% & 22\%\\ \hline
\end{tabular}
\end{center}
\caption{Quantitative comparison between the original SOR and our method. $n_{dp}$ stands for the number of data points and ``Imp'' is an abbreviation of ``improvement''.
We can see that our sampling scheme reduces the norm of the residual by more than 22\%, meaning that the synthesized clothing deformations of our method are closer to the equilibrium states.}
\label{tbl:sorimprovementresidual}
\end{table*}
Each step of MCMC sampling in SOR aims to find the pose that maximizes the norm of the residuals of the synthesized clothing, i.e., by sampling where the error is largest, it tries to minimize the maximum error. However, a pose with the largest error is sometimes not a natural human pose. Our sampling method, on the other hand, generates poses that are representative of real life. Therefore, we believe our sampling method is better than the one used in SOR. To demonstrate this, we replace the sampling scheme in SOR with our sampling method and regenerate the database for SOR. The weights of the joints in the clustering process are computed with the sensitivity under the original body shape from SOR. Qualitatively, our method generates more natural clothing deformations, as shown in Figure \ref{fig:sorimprovementvisual}. Quantitatively, as shown in Table \ref{tbl:sorimprovementresidual}, our method reduces the norm of the residual force by over 22\%, computed over the 32 motion sequences described in Sec. \ref{subsec:databaseconstruction}. Furthermore, the sampling points generated by our method are independent of each other, which enables the database to be generated in parallel.
\subsection{Clothing Shape Model}
\begin{figure}[t]
\setlength{\abovecaptionskip}{0.0cm}
\centering
\includegraphics[width=0.48\textwidth]{figures/clothing_shape_model/pca_convergence.png}
\caption{Convergence of the variance during PCA of the training data (left-most column in Figure \ref{fig:clothingshapemodeltrainingdata}). Our method applies PCA to the coordinates of clothing vertices, while DRAPE applies PCA to the deformation gradients of clothing triangles. We can see that our method converges faster and captures more variance with a given number of principal components.}
\label{fig:pcaconvergence}
\end{figure}
We compare our clothing shape model to that of DRAPE \cite{guan2012drape}. Instead of using the SCAPE body \cite{anguelov2005scape}, we map the first four shape parameters of SMPL \cite{loper2015smpl} to their clothing shape parameters and use our training data to train this mapping function. We take a men's long-sleeved shirt as the representative clothing type for the quantitative experiments on clothing shape models; the results are similar for other clothing types. To better evaluate the performance of the clothing shape model, we use the average Euclidean vertex distance to measure the prediction error and compute the average prediction error over 100 randomly generated body shapes. Figure \ref{fig:clothingshapemodelscomparison} (left) illustrates the average errors for 20 different clothing shape models (20 different poses). Compared to DRAPE, our method reduces the error of each clothing shape model by 60\%. This can also be seen in the prediction results of the second clothing shape model for a random body shape; please see the right part of Figure \ref{fig:clothingshapemodelscomparison}. As shown in Figure \ref{fig:pcaconvergence}, our method captures more variance than DRAPE given the same number of principal components (98.6\% vs. 87.2\% when using the first 5 principal components), which contributes to the higher accuracy of our clothing shape model. It is also worth noting that the square term $\beta^2$ reduces the prediction error by 3\%.
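For concreteness, the prediction path of a clothing shape model can be sketched as follows; the matrices come from the offline training stage, and all names and shapes are illustrative assumptions rather than our actual implementation:
\begin{verbatim}
import numpy as np

# Map body shape parameters to clothing PCA coefficients with a
# linear-plus-quadratic regressor (cf. Eq. beta2phi), then reconstruct
# vertices from the PCA basis (cf. Eq. phipca).
# beta: (4,) SMPL shape params; A: (n_pc, 8); b: (n_pc,);
# U: (3*V, n_pc); mean_shape: (3*V,)
def predict_clothing(beta, A, b, U, mean_shape):
    feats = np.concatenate([beta, beta ** 2])  # beta^2 cuts error by ~3%
    phi = A @ feats + b                        # clothing PCA coefficients
    return mean_shape + U @ phi                # flattened vertex positions
\end{verbatim}
Because only matrix-vector products and additions are involved, prediction is fast, consistent with the per-query timing reported below.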
\begin{figure*}[t]
\setlength{\abovecaptionskip}{0.0cm}
\centering
\includegraphics[width=0.98\textwidth]{figures/clothing_shape_model/clothing_shape_models_comparison.pdf}
\caption{Comparison of our clothing shape model and that of DRAPE. The bar graph on the left shows the errors for 20 clothing shape models. The prediction results of the second clothing shape model for a random body shape are shown on the right. The errors are marked in green.}
\label{fig:clothingshapemodelscomparison}
\end{figure*}
Furthermore, our mapping function from body shape to clothing deformation involves only multiplications and additions, and is thus more efficient to evaluate than that of DRAPE, where an optimization problem needs to be solved. In practice, it takes 22 ms for our clothing shape model to predict the new clothing information, compared to 300 ms for DRAPE.
\subsection{Animation Results}\label{subsec:animationresult}
\begin{figure*}[t]
\setlength{\abovecaptionskip}{0.0cm}
\centering
\includegraphics[width=0.92\textwidth]{figures/more_result/more_result.png}
\caption{Synthesis results. Clothing patterns are shown in blue.}
\label{fig:moreresult}
\end{figure*}
We use SMPL \cite{loper2015smpl} to generate template body meshes under different shapes and apply dual quaternion blending \cite{kavan2008geometric} to animate the body mesh. The skinning weights are calculated using Pinocchio \cite{baran2007automatic}, and the motion sequences are from the CMU motion capture library \cite{cmumotiondata}. At run-time, given a new input pose, we employ Eq. \ref{eq:blending} to obtain a coarse result and then add the decaying effect and resolve interpenetrations to get the final result. Given a new input shape, we re-compute the new clothing instance $f(\beta, \theta_s)$ and update the binding information for each clothing shape model (see Sec. \ref{subsec:runtimeshythesis}), which takes 0.022 s each time. In our implementation, all clothing shape models can be handled in parallel, and it takes 0.7 s to handle 150 data points for the men's long-sleeved shirt. In principle, our algorithm can add the shape dimension to any clothing pose model. To demonstrate this, we run our method with both SOR and a sequence of simulated clothing instances. First, we use SOR as our clothing pose model. We turn off the penetration handling process in SOR and leave it for our method to address. We also discard the translation term (the rightmost term in Eq. 1 of \cite{xu2014sensitivity}) in SOR, since we find it has little impact on the final result, especially when using our sampling method. These measures contribute to the high speed of our method. We use our sampling method to generate both the SOR database and our database. As shown in Figure~\ref{fig:moreresult} and Figure~\ref{fig:teaser}, our method can predict clothing instances with fine wrinkles. Please refer to Table \ref{tbl:statistics} for detailed statistics.
\begin{figure*}[t]
\setlength{\abovecaptionskip}{0.0cm}
\centering
\includegraphics[width=0.98\textwidth]{figures/animation_comparison/animation_comparison.pdf}
\caption{Comparison between the synthesis results achieved using SOR as the clothing pose model and those achieved using simulation as the clothing pose model. (a) Synthesis result (left) using SOR (right); (b) synthesis result (left) using simulation (right); (c) simulation result; (d) Euclidean vertex distance between the synthesis results and the simulated clothing. The errors are marked in green.
We can see that the synthesis result achieved using simulation (right) has a smaller error than the result achieved using SOR (left).}
\label{fig:animationcomparison}
\end{figure*}
Second, we use simulated clothing instances as our clothing pose model. We simulate the garment mesh under a randomly chosen motion sequence while keeping the same body shape parameters. Then we use the resulting clothing instances as the clothing pose model in Eq. \ref{eq:blending}. Figure \ref{fig:animationcomparison}(b) shows that our method can recover realistic fold patterns. We can see that the quality of our result relies on the external clothing pose model, i.e., if the clothing pose model is more accurate, our result is closer to the ground truth (Figure \ref{fig:animationcomparison}(c)). This can also be seen in Figure~\ref{fig:convergence}; we elaborate on this in the following section.
\subsection{Convergence}
\begin{figure}[t]
\setlength{\abovecaptionskip}{0.0cm}
\centering
\includegraphics[width=0.48\textwidth]{figures/convergence/convergence_error.png}
\caption{Convergence of the synthesis error with an increasing number of data points.}
\label{fig:convergence}
\end{figure}
We use the average Euclidean vertex distance between the synthesized clothing and the simulated clothing as the error to measure the physical accuracy of our result. The error is calculated over 16$\times$400 randomly chosen shape and pose pairs (16 shapes and 400 poses for each shape). As shown in Figure~\ref{fig:convergence}, the error of the \textit{results with SOR} drops to 1.06 cm at 150 data points, which we believe is acceptable in most scenarios. For a given number of data points, the error of the \textit{results with simulation} is much smaller than that of the \textit{results with SOR}, which demonstrates that our method can be improved if a more accurate clothing pose model is employed.
\subsection{VR Scenarios}
\begin{figure}[t]
\setlength{\abovecaptionskip}{0.0cm}
\centering
\includegraphics[width=0.475\textwidth]{figures/vr/vr_applications.pdf}
\caption{The avatar in a VR scenario. We provide the user with an immersive VR experience from a first-person perspective with an HTC Vive. The user can look around and observe an agent performing tasks.}
\label{fig:vrapplications}
\end{figure}
Our method can be applied to VR scenarios. As shown in Figure \ref{fig:vrapplications}, we provide the user with an immersive VR experience from a first-person perspective with an HTC Vive (leftmost). The clothing deformations of the agents are generated by our method.
\subsection{User Evaluation}
\label{subsecuserstudy}
\begin{figure*}[t]
\setlength{\abovecaptionskip}{0.0cm}
\centering
\includegraphics[width=1.0\textwidth]{figures/user_study/user_study_statistics_3scenarios.pdf}
\caption{Participant preferences in the user evaluation. The questions were ``Which clothing animation looks more realistic?'', ``Which wrinkles look more natural?'', and ``Which agent looks more like a real person?''. Scores are normalized such that 1 indicates a strong preference for our method and 7 indicates a strong preference for the LBS method.
For each scenario, participants rated our method favorably in terms of generating more realistic or plausible clothing animation ($2.44 \pm 1.042$, $2.28 \pm 0.895$, $2.22 \pm 1.060$), generating more natural-looking wrinkles ($2.17 \pm 1.150$, $2.06 \pm 1.110$, $2.50 \pm 1.043$), and enhancing the presence of the agent ($2.56 \pm 1.338$, $2.33 \pm 1.138$, $2.50 \pm 1.150$).}
\label{fig:userstudystatistics}
\end{figure*}
\begin{table*}[htbp!]
\begin{center}
\resizebox{1.0\textwidth}{!}
{
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|}
\hline
Question (Which...) & 1 & 2 & 3 & 4 & 5 & 6 & 7 & mean & SD\\ \hline
clothing animation looks more realistic? & 4 / 4 / 4 & 5 / 6 / 9 & 6 / 7 / 3 & 3 / 1 / 1 & 0 / 0 / 1 & 0 / 0 / 0 & 0 / 0 / 0 & 2.44 / 2.28 / 2.22 & $\pm 1.042$ / $\pm 0.895$ / $\pm 1.060$\\ \hline
wrinkles look more natural? & 6 / 4 / 1 & 7 / 12 / 11 & 1 / 1 / 4 & 4 / 0 / 0 & 0 / 0 / 2 & 0 / 1 / 0 & 0 / 0 / 0 & 2.17 / 2.06 / 2.50 & $\pm 1.150$ / $\pm 1.110$ / $\pm 1.043$\\ \hline
agent looks more like a real person? & 4 / 5 / 5 & 6 / 6 / 3 & 4 / 3 / 6 & 3 / 4 / 4 & 0 / 0 / 0 & 1 / 0 / 0 & 0 / 0 / 0 & 2.56 / 2.33 / 2.50 & $\pm 1.338$ / $\pm 1.138$ / $\pm 1.150$\\ \hline
\end{tabular}
}
\end{center}
\caption{The response frequencies of the subjects in the study described in Sec. \ref{subsecuserstudy}. Scores are normalized such that $1$ indicates a strong preference for our method and $7$ indicates a strong preference for the LBS method. For each scenario (separated by ``/''), participants prefer our method over the prior method on several dimensions.}
\label{tbl:userstudydata}
\end{table*}
We conducted a user study to demonstrate the perceptual benefit of our method compared to the prior technique in generating clothing deformations for agents in immersive settings.
\textbf{Experiment Goals \& Expectations:} We hypothesize that the clothing deformations generated by our method will exhibit more detailed wrinkles and more dynamics compared to prior methods, and that participants will strongly prefer our results to those of prior methods.
\textbf{Experimental Design:} The study was conducted based on a within-subjects, paired-comparison design. Participants explored two simulations with a fixed exposure time while wearing an HTC Vive headset. The clothing deformations in the two simulations were generated using our method and the LBS method, respectively. Both the order of scenes and the order of methods were counterbalanced. After the two simulations, participants answered a set of questions; our questionnaire design was inspired by prior studies~\cite{garau2005responses,slater2006analysis,narang2016pedvr}.
\textbf{Comparison Methods:} Previous methods generally adopt skinning methods to deform the clothing mesh of an agent in virtual environments. Therefore, we evaluated our method against the Linear Blend Skinning (LBS) method.
\textbf{Environments:} We use three scenarios for the user study. The first scenario featured a man sweeping a living room while wearing a long-sleeved shirt and pants. The second featured a man of a different body shape wiping windows, wearing the same clothes as in the first scenario. The last featured a woman wandering in a living room while wearing a T-shirt and shorts. Participants could walk around and observe the agents performing their tasks. Please refer to Figure \ref{fig:vrapplications} and the supplemental video for more details.
\textbf{Metrics:} Participants were asked to indicate their preference for a method using a 7-point Likert scale, with 1 indicating a strong preference for the method presented first, 7 indicating a strong preference for the method presented second, and 4 indicating no preference. In reporting the results, we normalized the participant responses so that 1 indicates a strong preference for our method.
\textbf{Results:} Our study was taken by 18 participants (9 male) with a mean age of $24.44\pm 1.95$ years. The participant responses clearly demonstrate the benefits of our algorithm. For each question, we performed a one-sample $t$-test comparing the mean of the question with a hypothetical mean of 4 (no preference or no impact). The question ``Which clothing animation looks more realistic?'' was shown to be significant for each scenario: $t$(17) = -6.336, $p < 0.001$; $t$(17) = -8.166, $p < 0.001$; $t$(17) = -7.114, $p < 0.001$. The question ``Which wrinkles look more natural?'' was also significant for each scenario: $t$(17) = -6.761, $p < 0.001$; $t$(17) = -7.432, $p < 0.001$; $t$(17) = -6.101, $p < 0.001$, as was ``Which agent looks more like a real person?'': $t$(17) = -4.579, $p < 0.001$; $t$(17) = -6.216, $p < 0.001$; $t$(17) = -5.532, $p < 0.001$. Figure \ref{fig:userstudystatistics} and Table \ref{tbl:userstudydata} provide further details on participant responses.
\section{Discussion}\label{sec:discussion}
The rationale of our sampling method is that a Taylor expansion locally approximates the values surrounding an anchoring point, while cluster centers are the best set of points to approximate all the points, i.e., they are the best set of anchoring points. Compared to the MCMC sampling process used in SOR \cite{xu2014sensitivity}, our sampling process samples more typical real-life poses, whereas their sampling step finds a pose that maximizes the error of the synthesized clothing, which might produce unnatural poses. Experimental results show that SOR results can be significantly improved when using our sampling method. Additionally, thanks to the mutual independence of our data points, our database can be constructed in parallel, while the MCMC process in SOR must run serially, since its sampling process depends on the previously constructed data. Please refer to Sec. \ref{sec:experiment} for details.
{\textbf{Limitations.}} Our method assumes that the clothing coordinate differences introduced by body shape variation are linear with respect to body pose, i.e., the clothing coordinate differences caused by body shape variation under a new pose can be predicted from the anchoring pose using LBS. This hinders the application of our approach to garments that fit loosely on the avatar. Furthermore, the final results of our method are highly dependent on the clothing pose model used: if the clothing pose model is not accurate, our result is not accurate either. In addition, the efficiency of the clothing pose model creates a bottleneck in our method. To overcome this difficulty, we plan to develop our own clothing pose model in the future. The wrinkles and folds generated in the animation suffer from sudden changes when body poses change too quickly. The reason is that our scheme is trained on clothing instances in static equilibrium, and the estimations from different poses are inconsistent. Although we have employed a decaying factor to smooth the animation, it cannot solve this problem completely.
Finally, our method cannot guarantee that all penetrations are resolved, especially when the clothing is very tight on the body. In this case, the LBS result for each anchoring pose may produce deep penetrations, and the blending result can make the penetrations even worse, which is beyond the ability of our penetration resolving method to address. When this happens, we believe the clothing is simply a bad fit for the body. In addition, when the clothing has too many folds, self-penetration might occur in our synthesis results.
{\textbf{Future works.}} We want to extend our method to other control parameters, such as clothing material and clothing patterns. For clothing material, for example, we first need to devise a clothing material model that, given a set of clothing material parameters, predicts the corresponding clothing deformation. Then we sample representative poses as the Taylor expansion anchoring points. Finally, we blend the local approximation results at run-time. We currently use the average sensitivity of 17 training shapes to calculate the distances to anchoring points. However, as the body shape changes, the sensitivity changes accordingly. In the future, we plan to train a sensitivity model, i.e., a model that can predict the corresponding sensitivity of the clothing given a new set of body shape parameters. We believe this will help us find better anchoring points at run-time.
\section{Conclusion}\label{sec:conclusion}
In this paper, we have presented a clothing synthesis scheme that uses Taylor expansion to combine two independent components: the pose-dependent deformation and the shape-dependent deformation. As a result, our method can add the shape dimension to various clothing pose models without modifying or retraining them. The core innovation is that we regard clothing deformation as a function of body shape parameters and body pose parameters and use Taylor expansion to locally approximate clothing deformations around anchoring points. Due to the high computational cost of higher-order derivatives, we use linear Taylor expansion in both the offline and online processes. Our clothing shape model can efficiently predict realistic clothing deformations under various body shapes and has the potential to be used as a stand-alone component. Using only a CPU, our method can generate realistic clothing deformations under various body poses and shapes in real time without resizing the cloth model. Furthermore, we believe that our method can be extended to add other parameters, such as clothing material and clothing pattern, to clothing pose models. The performance can be further improved with data-driven parameter estimation methods~\cite{wolinski2014parameter}.
\ifCLASSOPTIONcompsoc
\section*{Acknowledgments}
\else
\section*{Acknowledgment}
\fi
Xiaogang Jin was supported by the National Key R\&D Program of China (Grant No. 2017YFB1002600), the Ningbo Major Special Projects of the ``Science and Technology Innovation 2025'' (Grant No. 2020Z007), the National Natural Science Foundation of China (Grant Nos. 61732015, 61972344), and the Key Research and Development Program of Zhejiang Province (Grant No. 2020C03096). Qianwen Chao was supported by the National Natural Science Foundation of China (Grant No. 61702393). We thank Zhejiang Linctex Digital Technology Co.,
Ltd. for their help in designing the female T-shirt in Figure \ref{fig:moreresult}.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
Forced repair is a repair necessitated by damage to equipment, apparatus, or structures that causes functional failures in the production process chain.

Examples

For example, a forced repair of wells involves fixing broken or unscrewed sucker rods or a polished rod and repairing cable damage; a forced repair at a mine, open pit, or processing plant may be caused by damage to conveyors, electrical equipment, or the main and auxiliary process equipment.

See also

Repair
#ifndef _MSM_PCM_H
#define _MSM_PCM_H

#include <sound/apr_audio-v2.h>
#include <sound/q6asm-v2.h>

/* Support unconventional sample rates 12000, 24000 as well */
#define USE_RATE \
	(SNDRV_PCM_RATE_8000_48000 | SNDRV_PCM_RATE_KNOT)

extern int copy_count;

/* Generic audio data buffer descriptor. */
struct buffer {
	void *data;
	unsigned size;
	unsigned used;
	unsigned addr;
};

/* Buffer descriptor for the capture (read) path. */
struct buffer_rec {
	void *data;
	unsigned int size;
	unsigned int read;
	unsigned int addr;
};

/* Synchronization primitives shared by the PCM read/write paths. */
struct audio_locks {
	spinlock_t event_lock;
	wait_queue_head_t read_wait;
	wait_queue_head_t write_wait;
	wait_queue_head_t eos_wait;
	wait_queue_head_t enable_wait;
	wait_queue_head_t flush_wait;
};

/* Per-substream runtime state. */
struct msm_audio {
	struct snd_pcm_substream *substream;
	unsigned int pcm_size;
	unsigned int pcm_count;
	unsigned int pcm_irq_pos;	/* IRQ position */
	uint16_t source;		/* Encoding source bit mask */
	struct audio_client *audio_client;
	uint16_t session_id;
	uint32_t samp_rate;
	uint32_t channel_mode;
	uint32_t dsp_cnt;
	int abort;	/* set when error, like sample rate mismatch */
	bool reset_event;
	int enabled;
	int close_ack;
	int cmd_ack;
	/*
	 * cmd_ack doesn't tell if a particular command has been sent, so it
	 * can't determine if it needs to wait for completion.
	 * Use cmd_pending instead when checking whether a command has been
	 * sent or not.
	 */
	unsigned long cmd_pending;
	atomic_t start;
	atomic_t stop;
	atomic_t out_count;
	atomic_t in_count;
	atomic_t out_needed;
	atomic_t eos;
	int out_head;
	int periods;
	int mmap_flag;
	atomic_t pending_buffer;
	bool set_channel_map;
	char channel_map[8];
	int cmd_interrupt;
	bool meta_data_mode;
	uint32_t volume;
};

/* Per-frame metadata used in meta-data mode. */
struct output_meta_data_st {
	uint32_t meta_data_length;
	uint32_t frame_size;
	uint32_t timestamp_lsw;
	uint32_t timestamp_msw;
	uint32_t reserved[12];
};

/* Platform data: selected performance mode. */
struct msm_plat_data {
	int perf_mode;
};

#endif /*_MSM_PCM_H*/
Q: How to use the create() method in Laravel's DB class

I want to create a new row using the DB query builder instead of Eloquent. The code I am trying:

$create_transections = DB::table('package_plan_fees')
    ->create([
        'paid_amount' => $post_data['total_amount'],
        'enroll_able' => $post_data['enrollable'],
        'user_id' => $post_data['user_id'],
        'package_id' => $post_data['package_id'],
        'plan_id' => $post_data['plan_id'],
        'status' => $post_data['status']
    ]);

A: You need to use insert instead of create. You can check Insert Statements in the Laravel documentation. Your code needs to be like this:

$create_transections = DB::table('package_plan_fees')
    ->insert([
        'paid_amount' => $post_data['total_amount'],
        'enroll_able' => $post_data['enrollable'],
        'user_id' => $post_data['user_id'],
        'package_id' => $post_data['package_id'],
        'plan_id' => $post_data['plan_id'],
        'status' => $post_data['status']
    ]);

But because you said you want to get the inserted row's id, you can use the insertGetId method, which is described in the Auto-Incrementing IDs section of the Laravel documentation:

$inserted_rows_id = DB::table('package_plan_fees')
    ->insertGetId([
        'paid_amount' => $post_data['total_amount'],
        'enroll_able' => $post_data['enrollable'],
        'user_id' => $post_data['user_id'],
        'package_id' => $post_data['package_id'],
        'plan_id' => $post_data['plan_id'],
        'status' => $post_data['status']
    ]);

A: There is no create method on the query builder; you need to use insert. The relevant documentation page is here.

DB::table('package_plan_fees')
    ->insert([
        'paid_amount' => $post_data['total_amount'],
        'enroll_able' => $post_data['enrollable'],
        'user_id' => $post_data['user_id'],
        'package_id' => $post_data['package_id'],
        'plan_id' => $post_data['plan_id'],
        'status' => $post_data['status']
    ]);

A: $create_transections = DB::table('package_plan_fees')
    ->insertGetId([
        'paid_amount' => $post_data['total_amount'],
        'enroll_able' => $post_data['enrollable'],
        'user_id' => $post_data['user_id'],
        'package_id' => $post_data['package_id'],
        'plan_id' => $post_data['plan_id'],
        'status' => $post_data['status']
    ]);

In order to insert a row and get the inserted id back, you need to use the insertGetId() method.
This pair of his-and-hers chairs, rare vintage 1800s finds, has been reimagined especially for Wynwood Lab in true Victodern style: the mix of Victorian heritage and modern elements. Covered in deep plum faux leather and painted in rich high-gloss Caribbean teal lacquer, these chairs are double the fun and perfect for a sitting area, café table, bathroom, vanity, foyer, or as an accent. The price is for the pair; they are sold only as a pair.
Lakeside Estate is a former settlement in Benton County, Missouri, United States. Lakeside Estate was a resort community on the Osage River in southern Cole Township.

Categories: Former populated places in Benton County, Missouri; Ghost towns in Missouri