--- abstract: 'In atomic and molecular phase measurements using laser-induced fluorescence detection, optical cycling can enhance the effective photon detection efficiency and hence improve sensitivity. We show that detecting many photons per atom or molecule, while necessary, is not a sufficient condition to approach the quantum projection limit for detection of the phase in a two-level system. In particular, detecting the maximum number of photons from an imperfectly closed optical cycle reduces the signal-to-noise ratio (SNR) by a factor of $\sqrt{2}$, compared to the ideal case in which leakage from the optical cycle is sufficiently small. We derive a general result for the SNR in a system in terms of the photon detection efficiency, probability for leakage out of the optical cycle per scattered photon, and the product of the average photon scattering rate and total scattering time per atom or molecule.' author: - 'D. DeMille' bibliography: - 'cyclingPaperBibFinal.bib' title: 'Statistical sensitivity of phase measurements via laser-induced fluorescence with optical cycling detection' --- Atoms and molecules are powerful platforms to probe phenomena at quantum-projection-limited precision. In such measurements, the phase accumulated between internal states is commonly read out by driving an optical transition and detecting the resulting laser-induced fluorescence (LIF). Due to geometric constraints on optical collection and technological limitations of photodetectors, the majority of emitted photons are typically undetected, reducing the experimental signal. Optical cycling transitions can be exploited to overcome these limitations, by scattering many photons per particle. In the limit that many photons from each particle are detected, the signal-to-noise ratio (SNR) may be limited by the quantum projection (QP) noise (often referred to as atom or molecule shot noise). LIF detection with photon cycling is commonly used in ultra-precise atomic clock [@Wynands2005; @Zelevinsky2008] and atom interferometer [@Cronin2009] experiments to approach the QP limit. 
Molecules possess additional features, beyond those in atoms, that make them favorable probes of fundamental symmetry violation [@ACMECollaboration2014; @Collaboration2018; @Hudson2011; @Devlin2015; @Hunter2012; @Kozyryev2017] and fundamental constant variation [@Borkowski2018; @Beloy2011; @DeMille2008; @Zelevinsky2008; @Shelkovnikov2008; @Kozyryev2018], as well as promising platforms for quantum information and simulation [@DeMille2002; @Liu2018; @Micheli2006; @Sundar2018; @Wall2015]. Many molecular experiments that have been proposed, or which are now being actively pursued, will rely on optical cycling to enhance measurement sensitivity while using LIF detection [@Collaboration2018; @Hunter2012; @Kozyryev2018; @Kozyryev2017; @ACMECollaboration2014; @Devlin2015]. Due to the absence of selection rules governing vibrational decays, fully closed molecular optical cycling transitions cannot be obtained: each photon emission is associated with a non-zero probability of decaying to a “dark” state that is no longer driven to an excited state by any lasers. However, for some molecules many photons can be scattered using a single excitation laser, and up to $\sim10^{6}$ photons have been scattered using multiple repumping lasers to return population from vibrationally excited states into the optical cycle [@DiRosa2004; @Shuman2009]. This has enabled, for example, laser cooling and magneto-optical trapping of molecules [@Shuman2010; @Barry2014; @Hummon2013; @Collopy2018; @Zhelyazkova2014; @Truppe2017; @Chae2017; @Anderegg2017]. Furthermore, some precision measurements rely on atoms in which no simply closed optical cycle exists [@Regan2002; @Parker2015]; our discussion here will be equally applicable to such species. These considerations motivate a careful study of LIF detection for precision measurement under the constraint of imperfectly closed optical cycling. Related aspects of this problem have been considered previously [@Rocco2014]. 
However, the effect of the statistical nature of the cycling process on the optimal noise performance has not been previously explored. In particular, the number of photons scattered before a particle (an atom or molecule) decays to an unaddressed dark state, and therefore ceases to fluoresce, is governed by a statistical distribution rather than a fixed finite number. We show that due to the width of this distribution, a naive cycling scheme reduces the SNR to below the QP limit. In particular, we find that in addition to the intuitive requirement that many photons from every particle are detected, to approach the QP limit it is also necessary that the probability of each particle exiting the cycling transition (via decay to a dark state outside the cycle) is negligible during detection. If this second condition is not satisfied, so that each particle scatters enough photons that it is very likely to have been optically pumped into a dark state, then the SNR is decreased by a factor of $\sqrt{2}$ below the QP limit. Consider an ensemble of $N$ particles in an effective two-level system, in a state of the form $$|\psi\rangle=(e^{-i\phi}|\uparrow\rangle+e^{i\phi}|\downarrow\rangle)/\sqrt{2}.$$ The relative phase $\phi$ is the quantity of interest in this discussion. It can be measured, for example, by projecting the wavefunction onto an orthonormal basis $\{|X\rangle\propto|\uparrow\rangle+|\downarrow\rangle,\,|Y\rangle\propto|\uparrow\rangle-|\downarrow\rangle\}$ such that $|\langle X|\psi\rangle|^{2}=\cos^{2}(\phi)$ and $|\langle Y|\psi\rangle|^{2}=\sin^{2}(\phi)$. In the LIF technique, this can be achieved by driving state-selective transitions, each addressing either $|X\rangle$ or $|Y\rangle$, through an excited state that subsequently decays to a ground state and emits a fluorescence photon. This light is detected, and the resulting total signals, $S_{X}$ and $S_{Y}$, are associated with each state. 
(This protocol is equivalent to the more standard Ramsey method, in which each spin is reoriented for detection by a spin-flip pulse and the population of spin-up and spin-down particles is measured [@Ramsey1950].) The measured value of the phase, $\tilde{\phi},$ is computed from the observed values of $S_{X}$ and $S_{Y}$. In the absence of optical cycling, the statistical uncertainty of the phase measurement is $\sigma_{\tilde{\phi}}=\frac{1}{2\sqrt{N\epsilon}}$, where $\epsilon$ is the photon detection efficiency and $0<\epsilon\leq1$. Note that $N\epsilon$ is the average number of detected photons; hence, this result is often referred to as the “photon shot noise limit.” In the ideal case of $\epsilon=1$, the QP limit (a.k.a. the atom or molecule shot noise limit) $\sigma_{\tilde{\phi}}=\frac{1}{2\sqrt{N}}$ is obtained. To derive these results, we examine the measurement statistics in detail. We suppose that the phase is projected onto the $\{|X\rangle,\,|Y\rangle\}$ basis independently for each particle. Repeated over the ensemble of particles, the total number of particles $N_{X}$ projected along $|X\rangle$ is drawn from a binomial distribution, $N_{X}\sim B(N,\,\cos^{2}\phi)$, where $x\sim f(\alpha_{1},\cdots,\alpha_{k})$ denotes that the random variable $x$ is drawn from the probability distribution $f$ parametrized by $\alpha_{1},\cdots,\alpha_{k}$, and $B(\nu,\,\rho)$ is the binomial distribution for the total number of successes in a sequence of $\nu$ independent trials that each have a probability $\rho$ of success. Therefore, $\overline{N_{X}}=N\,\cos^{2}\phi$ and $\sigma_{N_{X}}^{2}=N\,\cos^{2}\phi\sin^{2}\phi$, where $\bar{x}$ is the expectation value of a random variable $x$ and $\sigma_{x}$ is its standard deviation over many repetitions of an experiment. 
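As a sanity check on these statistics, the following Monte Carlo sketch (hypothetical code, not from the paper) simulates the no-cycling case: each of $N$ particles is projected onto $|X\rangle$ with probability $\cos^{2}\phi$, scatters one photon, and that photon is detected with probability $\epsilon$. The estimator $\tilde{\phi}=\arctan\sqrt{S_{Y}/S_{X}}$ used below is one standard choice.

```python
import numpy as np

rng = np.random.default_rng(1)
N, eps, phi = 10_000, 0.1, np.pi / 4   # particles, detection efficiency, true phase
trials = 2_000                         # independent repetitions of the experiment

# Project each particle onto |X> with probability cos^2(phi) ...
NX = rng.binomial(N, np.cos(phi) ** 2, size=trials)
# ... then detect the single photon from each projected particle with probability eps.
SX = rng.binomial(NX, eps)
SY = rng.binomial(N - NX, eps)

phi_est = np.arctan(np.sqrt(SY / SX))
sigma_sim = phi_est.std()
sigma_theory = 1 / (2 * np.sqrt(N * eps))  # photon shot-noise limit from the text
print(sigma_sim, sigma_theory)             # both ~0.016
```

With these parameters the simulated spread reproduces $1/(2\sqrt{N\epsilon})\approx0.016$ to within a few percent; setting $\epsilon=1$ recovers the QP limit $1/(2\sqrt{N})$.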
We define the number of photons scattered from the $i$-th particle to be $n_{i}$, where a “photon scatter” denotes laser excitation followed by emission of one spontaneous decay photon, and define $\overline{n_{i}}=\bar{n}$ (the average number of photons scattered per particle) and $\sigma_{n_{i}}=\sigma_{n}$. Note that these quantities are assumed to be the same for all particles (i.e., independent of $i$). The probability of detecting any given photon (including both imperfect optical collection and detector quantum efficiency) is $\epsilon$, such that each photon is randomly either detected or not detected. We define $d_{ij}$ to be a binary variable indexing whether the $j$-th photon scattered from the $i$-th particle is detected. Therefore, $d_{ij}\sim B(1,\,\epsilon)$, and it follows that $\overline{d_{ij}}=\epsilon$ and $\sigma_{d_{ij}}^{2}=\epsilon(1-\epsilon)$. We define the signal of the measurement of a particular quadrature $|X\rangle$ or $|Y\rangle$ from the ensemble, when projecting onto that quadrature, to be the total number of photons detected. For example, the signal $S
--- abstract: 'In a classical optimal stopping problem the aim is to maximize the expected value of a functional of a diffusion evaluated at a stopping time. This note considers optimal stopping problems beyond this paradigm. We study problems in which the value associated to a stopping rule depends on the law of the stopped process. If this value is quasi-convex on the space of attainable laws then it is a well-known result that it is sufficient to restrict attention to the class of threshold strategies. However, if the objective function is not quasi-convex, this may not be the case. We show that it is then sufficient to use randomized threshold strategies.' author: - 'Vicky Henderson' - 'David Hobson' - 'Matthew Zeng' title: 'Optimal Stopping and the Sufficiency of Randomized Threshold Strategies[^1]' --- Introduction and main results ============================= Let $Y=(Y_t)_{t \geq 0}$ be a time-homogeneous, continuous strong-Markov process. Let ${\mathcal T}$ be the set of all stopping times, and let ${\mathcal T}_T$ be the set of all (one- and two-sided) threshold stopping times, i.e., stopping rules based on the first crossing of upper or lower thresholds. Let $V=V(\tau)$ denote a value associated with each stopping rule $\tau$. Consider the optimal stopping problem associated with $V$, i.e., the problem of finding $$\label{eq:osp} V_*( {\mathcal S}) = \sup_{\tau \in {\mathcal S}} V(\tau)$$ where ${\mathcal S}$ is some set of stopping times (for example ${\mathcal S}= {\mathcal T}$ or ${\mathcal S}= {\mathcal T}_T$), and especially the problem of finding an optimizer for (\[eq:osp\]). We say that $V=V(\tau)$ is law invariant if, whenever $\sigma,\tau$ are stopping times, ${\mathcal L}(Y_\sigma)= {\mathcal L}(Y_\tau)$ implies that $V(\sigma)=V(\tau)$, where ${\mathcal L}(Z)$ is the law of $Z$. It follows that $V(\tau)=H({\mathcal L}(Y_\tau))$ for some map $H$. The following result is well-known, but we include it as a contrast to our result on the sufficiency of randomized threshold rules. Suppose $H$ is quasi-convex and lower semi-continuous. 
Then $V_*({\mathcal T}_T) = V_*({\mathcal T})$. In the setting of Theorem \[thm:main2\], in solving the optimal stopping problem over the set of all stopping times it is sufficient to restrict attention to threshold rules. \[cor:A\] As the canonical example, consider expected utility, whence $V(\tau) = {\mathbb E}[u( Y_\tau) ]$, for a continuous, increasing function $u$. Then $V$ is law invariant. Indeed $V(\tau)= H({\mathcal L}(Y_\tau))$ where $H(\zeta) = \int u(z) \zeta(dz)$. $H$ is quasi-convex and lower semi-continuous. In this example it is well known that there is an optimal stopping rule which is of threshold form; see, for example, Dayanik and Karatzas [@DayanikKaratzas:03]. See also [@HeHuOblojZhou:17]. Recently there has been a surge of interest in problems which, whilst they have the law invariance property, do not satisfy the quasi-convex criterion. Two examples are optimal stopping under prospect theory (Xu and Zhou [@XuZhou:13]), and optimal stopping under cautious stochastic choice (Henderson et al [@HendersonHobsonZeng:17]). Introduce the set ${\mathcal T}_R$ of mixed or randomized threshold rules. Suppose law invariance holds for $V$, but not quasi-convexity for $H$. Then $V_*({\mathcal T}_T) \leq V_*({\mathcal T}_R) = V_*({\mathcal T})$. We will show by example that the first inequality may be strict. In the setting of Theorem \[thm:main1\], in solving the optimal stopping problem over the set of all stopping rules it is sufficient to restrict attention to randomized threshold rules, but it may not be sufficient to restrict attention to (pure) threshold rules. \[cor:B\] It should be noted that we do not include discounting in our analysis since a problem involving discounting does not satisfy the law invariance property. Nonetheless, as is well known, the conclusion of Corollary \[cor:A\] remains true for the problem of maximizing discounted expected utility of the stopped process $V(\tau) = {\mathbb E}[ e^{- \beta \tau} u(Y_\tau)]$. 
However, in problems which go beyond the expected utility paradigm, there are often modelling issues which militate against the inclusion of discounting. For example, it is often unclear how a behavioral decision maker should discount future payoffs. Finding the optimal stopping rule is often already challenging in these models. The significance of Corollary \[cor:B\] is as follows. In many classical models optimal stopping behavior involves stopping on first exit from an interval. If decision makers are observed to stop at levels which have already been visited by the process, then this behavior is inconsistent with the classical optimal stopping model. However, our result implies that the converse is not true: if decision makers are observed to stop only when the process is reaching new maxima or minima, then it does not necessarily mean that they are maximizers of expected payoffs. Instead the decision criteria may be more complicated, and they may be utilizing a randomized threshold rule. Problem specification and the problem in natural scale ====================================================== We work on a filtered probability space $(\Omega, {\mathcal F}, {\mathbb F}= \{ {\mathcal F}_t \}_{t \geq 0} , {\mathbb P})$. Let $Y= (Y_t)_{t \geq 0}$ be an $({\mathbb F}, {\mathbb P})$-stochastic process on this probability space with state space $I$ which is an interval. Let $\bar{I}$ be the closure of $I$. We suppose that $Y$ is a regular, time-homogeneous diffusion with initial value $Y_0=y$ such that $y$ lies in the interior of $I$. Let ${\mathcal T}$ be the class of all stopping times $\tau$ such that $\lim_{t \uparrow \infty} Y_{t \wedge \tau}$ exists (almost surely). We introduce two subclasses of stopping times - ${\mathcal T}_T$, the subclass of (pure) threshold stopping times; - ${\mathcal T}_R$, the subclass of randomised threshold stopping times. Note that ${\mathcal T}_T \subset {\mathcal T}_R \subset {\mathcal T}$. 
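To illustrate the distinction numerically (a hypothetical sketch; the process, thresholds, and mixing law below are illustrative choices, not taken from the paper), one can simulate a pure threshold rule $\tau_{a,b}$ and a randomized threshold rule, in which the interval is drawn at time 0, for a driftless Brownian motion started at $y=0$:

```python
import numpy as np

rng = np.random.default_rng(0)

def exit_value(a, b, dt=1e-3, max_steps=200_000):
    """Value of a discretized driftless Brownian motion (Y0 = 0) when it
    first leaves the interval (a, b); returns the threshold that was hit."""
    y = 0.0
    for _ in range(max_steps):
        y += np.sqrt(dt) * rng.standard_normal()
        if y <= a:
            return a
        if y >= b:
            return b
    raise RuntimeError("no exit within max_steps")

# Pure threshold rule: a fixed interval (-1, 2).
pure = np.array([exit_value(-1.0, 2.0) for _ in range(200)])

# Randomized threshold rule: the interval itself is drawn at time 0,
# here from a two-point mixing law.
mixed = np.array([exit_value(*((-1.0, 2.0) if rng.random() < 0.5 else (-2.0, 1.0)))
                  for _ in range(200)])

# Brownian motion started at 0 exits (a, b) at b with probability -a/(b - a),
# so the pure rule hits 2 with probability 1/3, while the randomized rule
# attains a four-point stopped law that no single pure threshold rule produces.
print(np.mean(pure == 2.0), sorted(set(mixed)))
```

The stopped law under the randomized rule is a mixture of the two pure-threshold laws, which is exactly the extra freedom that ${\mathcal T}_R$ adds over ${\mathcal T}_T$.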
The set of pure threshold stopping times includes stopping immediately and can be written as $${\mathcal T}_T = {\mathcal T}\cap \left(\cup_{\beta \leq y \leq \gamma; \; \beta, \gamma \in \bar{I}^Y} \{ \tau_{\beta,\gamma} \} \right), \label{eq:TTdef}$$ where $\tau_{a,b} = \inf_{u \geq 0} \{ u: Y_u \notin (a,b) \}$. Note that if $a = y$ or $b=y$ then $\tau_{a,b}=0$ almost surely, and that if $\sigma=\tau$ almost surely then we have $V(\sigma)=V(\tau)$. Hence we may suppose that $\tau \equiv 0$, the strategy of stopping immediately, lies in ${\mathcal T}_T$. In order to be able to define a sufficiently rich class of randomized stopping times we need to assume that ${\mathbb F}$ is larger than the filtration generated by $Y$. ${\mathcal F}_0$ is sufficiently rich as to include a continuous random variable, and the stochastic process $Y$ is independent of this random variable. \[ass:filtration\] It follows from the assumption that for any probability measure $\zeta$ on ${\mathcal D}= ([-\infty,y] \cap \bar{I}) \times ([y,\infty]\cap \bar{I})$ there exists an ${\mathcal F}_0$-measurable random variable $\Theta = \Theta_\zeta = (A_\zeta, B_\zeta)$ such that $(A_\zeta, B_\zeta)$ has law $\zeta$. For a set $\Gamma$ let ${\mathcal P}(\Gamma)$ be the set of probability measures on $\Gamma$. Then for any $\zeta \in {\mathcal P}({\mathcal D})$ we can define the randomised stopping time $\tau_\zeta$ as the first time $Y$ leaves a random interval, where the interval is chosen at time 0 with law $\zeta$. Then $\tau_\zeta = \tau_{A_\zeta, B_\zeta} = \inf \{ u : Y_u \notin (A_\zeta,B_\zeta)\}$. The set of randomized threshold rules ${\mathcal T}_R$ is given by $${\mathcal T}_R = {\mathcal T}\cap \left( \{ \tau_\zeta : \zeta \in {\mathcal P}({\mathcal D}) \} \right). \label{eq:
--- author: - 'E. O.' --- Many cosmological parameters are currently determined with a high precision that occasionally reaches fractions of a percent (Ade et al. 2014). One such parameter is the baryon-to-photon ratio $\eta \equiv n_{\rm b}/n_{\gamma}$, where $n_{\rm b}$ and $n_{\gamma}$ are the baryon and photon number densities in the Universe, respectively. In the standard cosmological model, the present value of $\eta$ is assumed to have been formed upon completion of electron-positron annihilation several seconds after the Big Bang and has not changed up to now. The value of $n_{\gamma}$ associated with the cosmic microwave background (CMB) photons is defined by the well-known relation $$n_{\gamma}=\frac{2\zeta(3)}{\pi^2}\left( \frac{kT}{\hbar c}\right)^3=410.73\left(\frac{T}{2.7255\,\text{K}}\right)^3\text{cm}^{-3},$$ where $\zeta(x)$ is the Riemann zeta function, $k$ is the Boltzmann constant, $\hbar$ is the reduced Planck constant, $c$ is the speed of light, and $T$ is the CMB temperature at the corresponding epoch. The CMB temperature is currently determined with a high accuracy and is $T_0 = 2.7255(6)\,$K at the present epoch (Fixsen 2009); for other epochs, it is expressed by the relation $T=T_0(1 + z)$, where $z$ is the cosmological redshift at the corresponding epoch. Thus, given $n_{\gamma}$, a relation between the parameter $\eta$ and $\Omega_{\rm b}$, the relative baryon density in the Universe, can be obtained (Steigman 2006): $$\eta = 273.9\times10^{-10}\Omega_{\rm b}h^2,$$ where $h = 0.673(12)$ is the dimensionless Hubble parameter at the present epoch (Ade et al. 2014). 
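Both numerical coefficients can be checked directly (a verification sketch using standard SI constants, not code from the paper):

```python
import numpy as np
from scipy.special import zeta

# Standard SI constants (CODATA values), not taken from the paper.
k = 1.380649e-23        # Boltzmann constant, J/K
hbar = 1.054571817e-34  # reduced Planck constant, J s
c = 2.99792458e8        # speed of light, m/s
G = 6.67430e-11         # Newton constant, m^3 kg^-1 s^-2
m_p = 1.67262192e-27    # proton mass, kg (stand-in for the mean baryon mass)

# Photon number density at T0 = 2.7255 K.
T0 = 2.7255
n_gamma = 2 * zeta(3) / np.pi**2 * (k * T0 / (hbar * c)) ** 3  # m^-3
print(n_gamma / 1e6)    # ~410.7 cm^-3, matching the 410.73 in the text

# Coefficient in eta = coeff * Omega_b h^2, from n_b = Omega_b rho_c / m_p.
H100 = 100e3 / 3.0857e22                 # Hubble rate for h = 1, in s^-1
rho_c1 = 3 * H100**2 / (8 * np.pi * G)   # critical density for h = 1, kg/m^3
coeff = rho_c1 / m_p / n_gamma
print(coeff * 1e10)     # ~274, cf. eta = 273.9e-10 * Omega_b h^2
```

The residual difference in the second coefficient (a few tenths of a percent) comes from using the proton mass in place of the mean mass per baryon.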
According to present views, the baryon density, which is the density of ordinary matter (atoms, molecules, planets and stars, interstellar and intergalactic gases), does not exceed 5% of the entire matter filling the Universe, while 95% of the density in the Universe is composed of unknown forms of matter/energy that manifest themselves (for the time being) gravitationally (see, e.g., Gorbunov and Rubakov 2008). At present, observations allow $\Omega_{\rm b}$ to be independently estimated for four cosmological epochs:\ (i) the epoch of Big Bang nucleosynthesis ($z_{\rm BBN}\sim10^9$; see, e.g., Steigman et al. 2007);\ (ii) the epoch of primordial recombination ($z_{\rm rec}\sim10^3$; see, e.g., Ade et al. 2014);\ (iii) the epoch associated with the Ly$\alpha$ forest ($z\sim2$–$3$; i.e., $\sim$10 Gyr ago; see, e.g., Rauch 1998; Hui et al. 2002);\ (iv) the present epoch ($z = 0$; see, e.g., Fukugita and Peebles 2004). For the processes at the epochs of Big Bang nucleosynthesis and primordial recombination, $\eta$ is one of the key parameters determining their physics. For these epochs, the methods of estimating $\eta$, (i) comparing the observational data on the relative abundances of the primordial light elements (D, $^4$He, $^7$Li) with the predictions of the Big Bang nucleosynthesis theory and (ii) analyzing the CMB anisotropy, give the most accurate estimates of $\eta$ to date that coincide, within the observational error limits: $\eta_{\rm BBN} = (6.0 \pm 0.4) \times 10^{-10}$ (Steigman 2007) and $\eta_{\rm CMB} = (6.05 \pm 0.07) \times 10^{-10}$ (Ade et al. 2014). This argues for the correctness of the adopted model of the Universe and for the validity of the standard physics used in theoretical calculations. However, it should be noted that at present, as the accuracy of observations increases, some discrepancy between the results of observations and the abundances of the primordial elements predicted in the Big Bang nucleosynthesis theory has become evident. The “lithium problem” is well known (see, e.g., Cyburt et al. 
2008); not all is ideal with helium and deuterium (for a detailed discussion of these problems, see Ivanchik et al. 2015). These inconsistencies can be related both to the systematic and statistical errors of experiments and to the manifestations of new physics (physics beyond the standard model). The determination of $\Omega_{\rm b}$ and the corresponding $\eta$ at epochs (iii) and (iv) has a considerably lower accuracy. The value of $\eta$ measured for the epoch associated with the Ly$\alpha$ forest agrees in order of magnitude with $\eta_{\rm BBN}$ and $\eta_{\rm CMB}$, but, at the same time, is also strongly model-dependent (e.g., Hui et al. 2002). The measured $\Omega_{\rm b}$ and $\eta$ at the present epoch are at best half those predicted by Big Bang nucleosynthesis calculations and CMB anisotropy analysis. This deficit, the so-called missing baryon problem, is associated with this discrepancy (Lang et al. 2008). It is hoped that further observations and new experiments will allow $\Omega_{\rm b}$ for different cosmological epochs and the corresponding $\eta$ to be determined with a higher accuracy. In turn, this can become a powerful tool for investigating the physics beyond the standard model, where the values of $\eta$ for different cosmological epochs can be different. Constraints on the deviation of $\eta$ allow various theoretical models admitting such a change to be selected. In this paper, we discuss the possibility of a change in $\eta$ on cosmological time scales attributable to the decays of dark matter particles. For example, supersymmetric particles (see, e.g., Jungman et al. 1996; Bertone et al. 2004; and references therein) can act as such particles; some of them can decay into the lightest stable supersymmetric particles and standard model particles (baryons, leptons, photons, etc. ; see, e.g., Cirelli et al. 2011): $${\rm X} \rightarrow \chi + ... 
\begin{cases} {\gamma + \gamma +...} \\ {\rm p + \bar{p} +...}, \end{cases}$$ where X and $\chi$ are unstable and stable dark matter particles, respectively. The currently available observational data suggest that the dark matter density in the Universe is approximately a factor of 5 larger than the baryon density: $\Omega_{\rm CDM}\simeq 5\Omega_{\rm b}$, i.e., the relation between the number density of dark matter particles and the number densities of baryons and photons in the Universe is $n_{\rm CDM}\simeq 5(m_{\rm b}/m_{\rm CDM})n_{\rm b} = 5(m_{\rm b}/m_{\rm CDM})n_{\gamma}\eta$. Assuming that the changes in the number densities of various types of particles in the decay reactions of dark matter particles are related as $\Delta n_{\rm CDM} \sim \Delta n_{\rm b}$ and $\Delta n_{\rm CDM} \sim \Delta n_{\gamma}$, it is easy to see that the parameter $\eta$ is most sensitive precisely to the change in baryon number density. In the decays of dark matter particles with masses $m_{\rm CDM} \sim 10$GeV$-$1TeV, the change in $\eta$ as a result of the change in baryon number density could reach $\Delta\eta/\eta \sim 0.01 - 1$ [^3]. The change in photon number density and the change in $\eta$ attributable to it will be approximately a billion times smaller. Therefore, in our paper we focused our attention on the possibility of a change in $\eta$ due to the decays of dark matter particles with the formation of a baryon component. Despite the negligible contribution to the change in $\eta$ from the photon component, a comparison of the predicted gamma-ray background (dark matter particle decay products) with the observed isotropic gamma-ray background in the Universe can serve as an additional source of constraints on the decay models of dark matter particles. Such decays produce, among other products, gamma-ray photons. 
The observational data on the isotropic gamma-ray background constrain their possible number in the Universe, which, in turn, narrows the range of admissible parameters of dark matter particles, determines the maximum possible number of baryons, the decay products of dark matter particles, and the corresponding change in the baryon-to-photon ratio in such decays. Thus, the observational data on the gamma-ray background, along with the cosmological experiments described above, serve as a source of constraints on the decay models of dark matter particles and on the possible change in $\eta$. Running ahead, we will say that at present the constraints from
--- abstract: 'In a two-flavor color superconductor, the $SU(3)_c$ gauge symmetry is spontaneously broken by diquark condensation. The Nambu-Goldstone excitations of the diquark condensate mix with the gluons associated with the broken generators of the original gauge group. We show that these mixing terms can be eliminated by a suitable choice of ’t Hooft gauge. We then explicitly compute the spectral density for transverse and longitudinal gluons of adjoint color 8. The Nambu-Goldstone excitations give rise to a singularity in the real part of the longitudinal gluon self-energy. This leads to a vanishing gluon spectral density for energies and momenta located on the dispersion branch of the Nambu-Goldstone excitations.' address: - | Institut für Theoretische Physik, Johann Wolfgang Goethe-Universität\ Robert-Mayer-Str. 8–10, D-60054 Frankfurt/Main, Germany\ E-mail: drischke@th.physik.uni-frankfurt.de - | School of Physics and Astronomy, University of Minnesota\ 116 Church Street S.E., Minneapolis, MN 55455, U.S.A.\ E-mail: shovkovy@physics.umn.edu author: - 'Dirk H. Rischke' - 'Igor A. Shovkovy[^1]' title: 'Longitudinal gluons and Nambu-Goldstone bosons in a two-flavor color superconductor' --- Introduction ============ Cold, dense quark matter is a color superconductor [@bailinlove]. For two massless quark flavors (say, up and down), Cooper pairs with total spin zero condense in the color-antitriplet, flavor-singlet channel. In this so-called two-flavor color superconductor, the $SU(3)_c$ gauge symmetry is spontaneously broken to $SU(2)_c$ [@arw]. If we choose to orient the (anti-) color charge of the Cooper pair along the (anti-) blue direction in color space, only red and green quarks form Cooper pairs, while blue quarks remain unpaired. Then, the three generators $T_1,\, T_2,$ and $T_3$ of the original $SU(3)_c$ gauge group form the generators of the residual $SU(2)_c$ symmetry. 
The remaining five generators $T_4, \ldots, T_8$ are broken. (More precisely, the last broken generator is a combination of $T_8$ and the generator ${\bf 1}$ of the global $U(1)$ symmetry of baryon number conservation; for details see Ref. [@sw2].) According to Goldstone’s theorem, this pattern of symmetry breaking gives rise to five massless bosons, the so-called Nambu-Goldstone bosons, corresponding to the five broken generators of $SU(3)_c$. Physically, these massless bosons correspond to fluctuations of the order parameter, in our case the diquark condensate, in directions in color-flavor space where the effective potential is flat. For gauge theories (where the local gauge symmetry cannot truly be spontaneously broken), these bosons are “eaten” by the gauge bosons corresponding to the broken generators of the original gauge group, [*i.e. *]{}, in our case the gluons with adjoint colors $a= 4, \ldots, 8$. They give rise to a longitudinal degree of freedom for these gauge bosons. The appearance of a longitudinal degree of freedom is commonly a sign that the gauge boson becomes massive. In a dense (or hot) medium, however, even [*without*]{} spontaneous breaking of the gauge symmetry the gauge bosons already have a longitudinal degree of freedom, the so-called [*plasmon*]{} mode [@LeBellac]. Its appearance is related to the presence of gapless charged quasiparticles. Both transverse and longitudinal modes exhibit a mass gap, [*i.e. *]{}, the gluon energy $p_0 \rightarrow m_g > 0$ for momenta $p \rightarrow 0$. In quark matter with $N_f$ massless quark flavors at zero temperature $T=0$, the gluon mass parameter (squared) is [@LeBellac] $$\label{gluonmass} m_g^2 = \frac{N_f}{6\, \pi^2} \, g^2 \, \mu^2\,\, ,$$ where $g$ is the QCD coupling constant and $\mu$ is the quark chemical potential. It is [*a priori*]{} unclear how the Nambu-Goldstone bosons interact with these longitudinal gluon modes. 
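For orientation, the size of this mass gap is easy to evaluate from Eq. (\[gluonmass\]); the following sketch uses illustrative values $g=3.5$ and $\mu=0.5$ GeV (our assumptions, not values from the text) for $N_f=2$:

```python
import math

def gluon_mass(N_f, g, mu):
    """Gluon mass parameter m_g = sqrt(N_f / (6 pi^2)) * g * mu,
    in the same units as mu; cf. Eq. (gluonmass) in the text."""
    return math.sqrt(N_f / (6 * math.pi ** 2)) * g * mu

# Illustrative (assumed) values: N_f = 2 flavors, g = 3.5, mu = 0.5 GeV.
m_g = gluon_mass(N_f=2, g=3.5, mu=0.5)
print(m_g)  # ~0.32 GeV: the plasmon gap is of the order of the chemical potential
```

This confirms the qualitative statement that the medium-induced gap is parametrically $\sim g\mu$, not small compared to typical gluon momenta.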
In particular, it is of interest to know whether coupling terms between these modes exist and, if yes, whether these terms can be eliminated by a suitable choice of (’t Hooft) gauge. The aim of the present work is to address these questions. We shall show that the answer to both questions is “yes”. We shall then demonstrate, by focussing on the gluon of adjoint color 8, how the Nambu-Goldstone mode affects the spectral density of the longitudinal gluon. Our work is partially based on and motivated by previous studies of gluons in a two-flavor color superconductor [@carterdiakonov; @dhr2f; @dhrselfenergy]. The gluon self-energy and the resulting spectral properties have been discussed in Ref. [@dhrselfenergy]. In that paper, however, the fluctuations of the diquark condensate have been neglected. Consequently, the longitudinal degrees of freedom of the gluons corresponding to the broken generators of $SU(3)_c$ have not been treated correctly. The gluon polarization tensor was no longer explicitly transverse (a transverse polarization tensor $\Pi^{\mu\nu}$ obeys $P_\mu \, \Pi^{\mu \nu} = \Pi^{\mu \nu}\, P_\nu = 0$), and it did not satisfy the Slavnov-Taylor identity. As a consequence, the plasmon mode exhibited a certain peculiar behavior in the low-momentum limit, which cannot be physical (cf. Figs. 4 and 5 of Ref. [@dhrselfenergy]). It was already realized in Ref. [@dhrselfenergy] that the reason for this unphysical behavior is the fact that the mixing of the gluon with the excitations of the condensate was neglected. It was moreover suggested in Ref. [@dhrselfenergy] that proper inclusion of this mixing would amend the shortcomings of the previous analysis. The aim of the present work is to follow this suggestion and thus to correct the results of Ref. [@dhrselfenergy] with respect to the longitudinal gluon. Note that in Ref. 
[@carterdiakonov] fluctuations of the color-superconducting condensate were taken into account in the calculation of the gluon polarization tensor. As a consequence, the latter is explicitly transverse. However, the analysis was done in the vacuum, at $\mu=0$, not at (asymptotically) large chemical potential. The remainder of this paper is organized as follows. In Section \[II\] we derive the transverse and longitudinal gluon propagators including fluctuations of the diquark condensate. In Section \[III\] we use the resulting expressions to compute the spectral density for the gluon of adjoint color 8. Section \[IV\] concludes this work with a summary of our results. Our units are $\hbar=c=k_B=1$. The metric tensor is $g^{\mu \nu}= {\rm diag}\,(+,-,-,-)$. We denote 4-vectors in energy-momentum space by capital letters, $K^{\mu} = (k_0,{\bf k})$. Absolute magnitudes of 3-vectors are denoted as $k \equiv |{\bf k}|$, and the unit vector in the direction of ${\bf k}$ is $\hat{\bf k} \equiv {\bf k}/k$. Derivation of the propagator for transverse and longitudinal gluons {#II} =================================================================== In this section, we derive the gluon propagator taking into account the fluctuations of the diquark condensate. A similar derivation has recently been presented in Ref. [@msw] \[see also the original Ref. [@gusyshov]\]. Nevertheless, for the sake of clarity and in order to make our presentation self-contained, we present this derivation once more in greater detail and in the notation of Ref. [@dhrselfenergy]. As this part is rather technical, the reader less interested in the details of the derivation should skip directly to our main result, Eqs. (\[transverse\]), (\[longitudinal\]), and (\[hatPi00aa\]). 
We start with the grand partition function of QCD, \[Z\] $$\label{ZQCD} {\cal Z} = \int {\cal D} A \; e^{ S_A } \;{\cal Z}_q[A]\,\, ,$$ where $${\cal Z}_q[A] = \int {\cal D} \bar{\psi} \, {\cal D} \psi\, \exp \left[ \int_x \bar{\psi} \left( i \gamma^\mu \partial_\mu + \mu \gamma_0 + g \gamma^\mu A_\mu^a T_a \right) \psi \right] \,. \label{Zquarks}$$ is the grand partition function for massless quarks in the presence of a gluon field $A^\mu_a$. In Eq. (\[Z\]), the space-time integration is defined as $\int_x \equiv \int_0^{1/T} d\tau \int_V d^3
--- abstract: 'The Lie group method provides an efficient tool to solve nonlinear partial differential equations. This paper suggests a fractional partner for fractional partial differential equations. A space-time fractional diffusion equation is used as an example to illustrate the effectiveness of the Lie group method.' author: - | Guo-cheng Wu[^1]\ Modern Textile Institute, Donghua University, 1882 Yan’an Xilu Road,\ [Shanghai 200051, China]{}\ \[6pt\] Received 20 May 2010; accepted 13 July 2010 title: A Fractional Lie Group Method For Anomalous Diffusion Equations --- \[theorem\][**[Definition]{}**]{} Lie group method; Anomalous diffusion equation; Fractional characteristic method Introduction ============ In the last three decades, researchers have found fractional differential equations (FDEs) useful in various fields: rheology, quantitative biology, electrochemistry, scattering theory, diffusion, transport theory, probability, potential theory and elasticity \[1\]; for details, see the monographs of Kilbas et al. \[2\], Kiryakova \[3\], Lakshmikantham and Vatsala \[4\], Miller and Ross \[5\], and Podlubny \[6\]. On the other hand, finding accurate and efficient methods for solving FDEs has been an active research undertaking. Since Sophus Lie’s work on group analysis, more than 100 years ago, Lie group theory has become more and more pervasive in its influence on other mathematical disciplines \[7, 8\]. Can the Lie group method be extended to fractional differential equations? Up to now, only a few works can be found in the literature. For example, Buckwar and Luchko derived scaling transformations \[9\] for the fractional diffusion equation in the Riemann-Liouville sense $$\frac{{\partial ^\alpha u(x,t)}}{{\partial t^\alpha }} = D\frac{{\partial ^2 u(x,t)}}{{\partial x^2 }},\;\;0 < \alpha ,\;0 < x{\rm{,}}\;0 < t,\;0 < D. \label{eq1} %(1)$$ Gazizov et al.
found symmetry properties of fractional diffusion equations with the Caputo derivative \[10\] $$\frac{{\partial ^\alpha u(x,t)}}{{\partial t^\alpha }} = \frac{{\partial (k(u)u_x (x,t))}}{{\partial x}},\;\;0 < \alpha ,\;0 < x{\rm{,}}\;0 < t,\;0 < k. \label{eq2} %(2)$$ Djordjevic and Atanackovic \[11\] obtained some similarity solutions for the time-fractional heat diffusion equation $$\frac{{\partial ^\alpha T(x,t)}}{{\partial t^\alpha }} = k\frac{{\partial^{2} (T(x,t))}}{{\partial x^{2}}},\;\;0 < \alpha ,\;0 < x{\rm{,}}\;0 < t. \label{eq3} %(3)$$ In this study, we investigate the anomalous diffusion equation \[12\] $$\frac{{\partial ^\alpha u(x,t)}}{{\partial t^\alpha }} = \frac{{\partial ^{2\beta } u(x,t)}}{{\partial x^{2\beta } }},\;\;0 < \alpha ,\;\beta \le 1,\;0 < x{\rm{,}}\;0 < t, \label{eq4} %(4)$$ with a fractional Lie group method, and derive a classification of its solutions. Here the fractional derivative is in the modified Riemann-Liouville sense \[13\] and $\frac{{\partial ^{2\beta } u(x,t)}}{{\partial x^{2\beta } }}$ is defined by $\frac{{\partial ^\beta }}{{\partial x^\beta }}(\frac{{\partial ^\beta u(x,t)}}{{\partial x^\beta }})$. Characteristic Method for Fractional Differential Equations =========================================================== Throughout this paper, we adopt the fractional derivative in the modified Riemann-Liouville sense \[13\]. Firstly, we introduce some properties of the fractional calculus that we will use in this study. \(I) Integration with respect to $(dx)^\alpha $ (Lemma **2.1** of \[14\]) $$_0 I_x^\alpha f(x) = \frac{1}{{\Gamma (\alpha )}}\int_0^x (x - \xi )^{\alpha - 1} f(\xi )d\xi = \frac{1}{{\Gamma (\alpha + 1)}}\int_0^x f(\xi )(d\xi )^\alpha ,\;\;0 < \alpha \le 1. \label{eq5} %(5)$$ \(II) Some other useful formulas $$f([x(t)])^{(\alpha )} = \frac{{df}}{{dx}}x^{(\alpha )} (t), ~\\ {}_0D_x^\alpha x^\beta = \frac{{\Gamma (1 + \beta )}}{{\Gamma (1 +\beta - \alpha )}}x^{\beta - \alpha } .
\\ \label{eq6} %(6)$$ The properties of Jumarie’s derivative were summarized in \[13\]. The extension of Jumarie’s fractional derivative and integral to a variational approach with several variables was given by Almeida et al. \[15\]. A fractional variational iteration method has also been proposed for fractional differential equations \[16\]. It is well known that the method of characteristics has played a very important role in mathematical physics. Previously, the method of characteristics was used to solve the initial value problem for general first-order linear partial differential equations. With the modified Riemann-Liouville derivative, Jumarie gave a Lagrange characteristic method \[17\]. We present a more generalized fractional method of characteristics and use it to solve linear fractional partial differential equations. Consider the linear first-order partial differential equation $$a(x,t)\frac{{\partial u(x,t)}}{{\partial x}} + b(x,t)\frac{{\partial u(x,t)}}{{\partial t}} = c(x,t). \label{eq7} %(7)$$ The goal of the method of characteristics is to change coordinates from ${\rm{(}}x,\;t{\rm{)}}$ to a new coordinate system ${\rm{(}}x_0 ,\;s{\rm{)}}$ in which the PDE becomes an ordinary differential equation along certain curves in the $x - t$ plane. The curves are called the characteristic curves. More generally, we consider extending this method to linear space-time fractional differential equations $$a(x,t)\frac{{\partial ^{^\beta } u(x,t)}}{{\partial x^\beta }} + b(x,t)\frac{{\partial ^\alpha u(x,t)}}{{\partial t^\alpha }} = c(x,t),\;\;0 < \alpha ,\beta \le 1. \label{eq8} %(8)$$ With the fractional Taylor’s series in two variables \[13\] $$du = \frac{{\partial ^{^\beta } u(x,t)}}{{\Gamma (1 + \beta )\partial x^\beta }}(dx)^{^\beta } + \frac{{\partial ^\alpha u(x,t)}}{{\Gamma (1 + \alpha )\partial t^\alpha }}(dt)^\alpha ,\;\;0 < \alpha ,\;\beta \le 1, \label{eq9} %(9)$$ we similarly derive the generalized characteristic curves $$\frac{{du}}{{ds}} = c(x,t),$$ $$\frac{{(dx)^{^\beta } }}{{\Gamma (1 + \beta )ds}} = a(x,t),\label{eq10} %(10)$$ $$\frac{{(dt)^\alpha }}{{\Gamma (1 + \alpha )ds}} = b(x,t). \\ \\$$ Eqs.
(10)-(12) can be reduced to Jumarie’s result \[17\] if $\alpha = \beta $. As an example, we consider the fractional equation $$\frac{{x^\beta }}{{\Gamma (1 + \beta )}}\frac{{\partial ^\beta u(x,t)}}{{\partial x^\beta }} + \frac{{2t^\alpha }}{{\Gamma (1 + \alpha )}}\frac{{\partial ^\alpha u(x,t)}}{{\partial t^\alpha }} = 0,\;\;0 < \alpha ,\;\beta \le 1. \label{eq11} %(11)$$ We obtain the fractional scaling transformation $$u = u(\frac{{x^{^{2\beta } } }}{{\Gamma ^2 (1 + \beta )}}/\frac{{2t^\alpha }}{{\Gamma (1 + \alpha )}}). \label{eq12} %(12)$$ Note that when $\alpha = \beta = 1{\rm{,}}$ as is well known, $\frac{{x^{^2 } }}{{2t}}$ is one invariant of the linear differential equation $$x\frac{{\partial u(x,t)}}{{\partial x}} + 2t\frac{{\partial u(x,t)}}{{\partial t}} = 0.$$
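As a quick numerical consistency check (an illustration added here, not part of the original paper; the helper name is ours), the coefficient in the Jumarie power rule of (II) can be compared with the classical half-derivative $D^{1/2}x = 2\sqrt{x/\pi}$:

```python
import math

def jumarie_power_rule_coeff(alpha, beta):
    """Coefficient C in D^alpha x^beta = C * x^(beta - alpha), Eq. (6)."""
    return math.gamma(1 + beta) / math.gamma(1 + beta - alpha)

# Half-derivative of f(x) = x: coefficient Gamma(2)/Gamma(3/2) = 2/sqrt(pi),
# matching the classical result D^{1/2} x = 2 sqrt(x/pi).
assert abs(jumarie_power_rule_coeff(0.5, 1.0) - 2 / math.sqrt(math.pi)) < 1e-12

# For beta = alpha the rule gives D^alpha x^alpha = Gamma(1 + alpha), a constant.
assert abs(jumarie_power_rule_coeff(0.7, 0.7) - math.gamma(1.7)) < 1e-12
```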
--- abstract: 'The off-axis location of the Advanced Camera for Surveys (ACS) is the chief (but not sole) cause of strong geometric distortion in all detectors: the Wide Field Camera (WFC), High Resolution Camera (HRC), and Solar Blind Camera (SBC). Dithered observations of rich star cluster fields are used to calibrate the distortion. We describe the observations obtained, the algorithms used to perform the calibrations and the accuracy achieved.' author: - 'G.R. Meurer$^1$, D. Lindler$^2$, J.P. Blakeslee$^1$, C. Cox$^3$, A.R. Tran$^1$' --- The tilted focal surface with respect to the chief ray is the primary source of distortion in all three ACS detectors. In addition, the HST Optical Telescope Assembly induces distortion, as do the ACS M2 and IM2 mirrors (which are designed to remove HST’s spherical aberration). Here we describe our method of calibrating the geometric distortion using dithered observations of star clusters. The distortion solutions we derived are given in the IDC tables delivered in Nov 2002, and currently implemented in the STScI CALACS pipeline. This was not presented in the workshop. An expanded description of our procedures is given by Meurer (2002). Methods ======= [**Observations**]{}. The ACS SMOV geometric distortion campaign consisted of two HST observing programs: 9028, which targeted the core of 47 Tucanae (NGC 104) with the WFC and HRC, and 9027, which consisted of SBC observations of NGC 6681. Observations from programs 9011, 9018, 9019, 9024 and 9443 provided additional data, used to check the results and to constrain the absolute pointing of the telescope. The CCD exposures of 47 Tucanae were designed to clearly detect stars at the main-sequence turn-off ($m_B = 17.5$) in each frame. This allows for a high density of stars with relatively short exposures.
The F475W filter (Sloan g’) was used for the CCD observations so as to minimize the number of saturated red giant branch stars in the field. For the HRC two 60s exposures were taken at each pointing, while for the WFC, which has a larger time overhead, only one such exposure was obtained per pointing. Simulated images made prior to launch, as well as archival WFPC2 images from Gilliland et al. (2000), were used to check that crowding would not be an issue. For calibrating the distortion in the SBC we used exposures of NGC 6681 (300s - 450s), which was chosen for the relatively high density of UV emitters (hot horizontal branch stars). For the WFC and HRC pointings, the dither pattern was designed so that the offsets between all pairs of images adequately, and non-redundantly, sample all spatial scales from about 5 pixels to 3/4 the detector size. For the SBC pointings, a more regular pattern of offsets was used, augmented by a series of $\sim$5 pixel offsets. [**Distortion model**]{}. The heart of the distortion model relates pixel position ($x,y$) to sky position using a polynomial transformation (Hack & Cox, 2000) given by: $$x_c = \sum_{m=0}^{k}\sum_{n=0}^{m} a_{m,n}(x - x_r)^n (y - y_r)^{m-n}\, , \hspace{0.5cm} y_c = \sum_{m=0}^{k}\sum_{n=0}^{m} b_{m,n}(x - x_r)^n (y - y_r)^{m-n}$$ Here $k$ is the order of the fit, $x_r,y_r$ is the reference pixel, taken to be the center of each detector, or WFC chip, and $x_c,y_c$ are undistorted image coordinates. The coefficients of the fits, $a_{m,n}$ and $b_{m,n}$, are free parameters. For the WFC, an offset is applied to get the two CCD chips on the same coordinate system: $$X' = x_c + \Delta{x}{\rm (chip\#)}\, , \hspace{0.5cm} Y' = y_c + \Delta{y}{\rm (chip\#)}.$$ $\Delta{x}{\rm (chip\#)},\Delta{y}{\rm (chip\#)}$ are 0,0 for WFC’s chip 1 (as indicated by the FITS CCDCHIP keyword) and correspond to the separation between chips 1 and 2 for chip 2. The chip 2 offsets are free parameters in our fit.
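For concreteness, the polynomial transformation above can be evaluated directly. The sketch below is illustrative only: the coefficient values are invented for the example (a pure first-order plate scale of 0.05 arcsec/pixel), not taken from the delivered IDC tables.

```python
def distortion_correct(x, y, a, b, x_r, y_r):
    """Evaluate x_c = sum_{m=0}^{k} sum_{n=0}^{m} a[m][n] (x-x_r)^n (y-y_r)^(m-n),
    and likewise y_c with coefficients b, following the polynomial model above."""
    dx, dy = x - x_r, y - y_r
    x_c = sum(a[m][n] * dx**n * dy**(m - n)
              for m in range(len(a)) for n in range(m + 1))
    y_c = sum(b[m][n] * dx**n * dy**(m - n)
              for m in range(len(b)) for n in range(m + 1))
    return x_c, y_c

# Toy first-order solution: x_c = 0.05*dx, y_c = 0.05*dy (arcsec/pixel).
a = [[0.0], [0.0, 0.05]]   # a[1][1] multiplies (x - x_r); a[1][0] multiplies (y - y_r)
b = [[0.0], [0.05, 0.0]]
x_c, y_c = distortion_correct(2148, 1024, a, b, 2048, 1024)
# 100 pixels from the reference pixel maps to ~5 arcsec on the tangential plane.
```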
$X',Y'$ correspond to tangential plane positions in arcseconds which we tie to the HST $V2, V3$ coordinate system. Next the positions are corrected for velocity aberration: $X = \gamma X'$, $Y = \gamma Y'$, where $$\gamma = \frac{1 + {\bf u} \cdot {\bf v} / c}{1 - (v/c)^2}.$$ Here [**u**]{} is the unit vector towards the target and [**v**]{} is the velocity vector of the telescope (heliocentric plus orbital). Neglect of the velocity aberration correction can result in misalignments on the order of a pixel for WFC images taken six months apart for targets near the ecliptic. Finally, we must transform all frames to the same coordinate grid on the sky $X_{\rm sky}, Y_{\rm sky}$: $$X_{\rm sky} = \cos \Delta \theta_i X - \sin \Delta \theta_i Y + \Delta X_i\, , \hspace{0.5cm} Y_{\rm sky} = \sin \Delta \theta_i X + \cos \Delta \theta_i Y + \Delta Y_i$$ where the free parameters $\Delta X_i, \Delta Y_i, \Delta \theta_i$ are the position and rotation offsets of frame $i$. [**Calibration algorithm**]{}. We use the positions of stars observed multiple times in the dithered star fields to iteratively solve for the free parameters in the distortion solution: fit coefficients $a_{m,n}, b_{m,n}$; chip 2 offsets $\Delta x{\rm (chip\, 2)}, \Delta y{\rm (chip\, 2)}$ (WFC only); frame offsets $\Delta X_i, \Delta Y_i, \Delta \theta_i$; and the tangential plane position $X_{\rm sky}, Y_{\rm sky}$ of each star used in the fit. The stars are selected by finding local maxima above a selected threshold. The centroid in a $7 \times 7$ box about the local maximum is compared to Gaussian fits to the $x, y$ profiles; if the two estimates of position differ by more than 0.25 pixels, the measurement is rejected as likely being affected by a cosmic ray hit or crowding. Further details can be found in Kuo et al. (2002). [**Low order terms**]{}. Originally only SMOV images taken with a single roll angle were used to define the distortion solutions.
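The size of the velocity aberration effect quoted above is easy to reproduce. This sketch is illustrative: the $\sim$30 km/s heliocentric speed and the 4096-pixel chip width are round numbers we assume for the example, not values from the text.

```python
C = 299_792.458  # speed of light in km/s

def aberration_scale(v_parallel, v_total):
    """gamma = (1 + u.v/c) / (1 - (v/c)^2); v_parallel is u.v in km/s."""
    return (1.0 + v_parallel / C) / (1.0 - (v_total / C) ** 2)

# Heliocentric speed ~30 km/s along the line of sight to an ecliptic target.
g_plus = aberration_scale(+30.0, 30.0)   # e.g. spring observation
g_minus = aberration_scale(-30.0, 30.0)  # six months later, velocity reversed
# Differential scale change across half a 4096-pixel WFC chip:
shift_px = (g_plus - g_minus) * 4096 / 2
# shift_px is ~0.4 pixel, i.e. "on the order of a pixel", as stated above.
```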
The solution using only these data is degenerate in the zeroth (absolute pointing) and linear terms (scale, skewness). So we used the largest commanded offsets with a given guide star pair to set the linear terms. However, comparison of corrected coordinates to astrometric positions showed that residual skewness in the solution remained. Hence, as of November 2002, the IDC tables for the WFC and SBC are based on data from multiple roll angles. The HRC table uses the same offsets. For the HRC, the linear scale is set by matching HRC and WFC coordinates, since the same field was used in the SMOV observations. The zeroth order terms (position of the ACS apertures in the HST $V2,V3$ frame) were determined from observations of an astrometric field. Results =======

  Camera   chip   pixel size \[arcsec\]   Filter   Pointings   $N$      rms(x) \[pixels\]   rms(y) \[pixels\]   Notes
  -------- ------ ----------------------- -------- ----------- -------- ------------------- ------------------- -------
  WFC      1      0.05                    F475W    25          142289   0.042               0.045
  WFC      2      0.05                    F475W    25          103453   0.035               0.037
  WFC      1
--- abstract: 'We examine the 3-10 keV EPIC spectra of Mrk 205 and Mrk 509 to investigate their Fe K features. The most significant feature in the spectra of both objects is an emission line at 6.4 keV. The spectra can be adequately modelled with a power law and a relatively narrow ($\sigma < 0.2$ keV) Fe K${\alpha}$ emission line. Better fits are obtained when an additional Gaussian emission line, relativistic accretion-disk line, or Compton reflection from cold material, is added to the spectral model. We obtain similar goodness of fit for any of these three models, but the model including Compton reflection from cold material offers the simplest, physically self-consistent solution, because it only requires one reprocessing region. Thus the Fe K spectral features in Mrk 205 and Mrk 509 do not present strong evidence for reprocessing in the inner, relativistic parts of accretion disks.' author: - | M.J. Page$^{1}$, S.W. Davis$^{2}$, N.J. Salvi$^{1}$\ $^{1}$Mullard Space Science Laboratory, University College London, Holmbury St Mary, Dorking, Surrey, RH5 6NT, UK\ $^{2}$Department of Physics, University of California, Santa Barbara, CA 93106, USA\ nocite: - '[@tanaka95]' - '[@nandra97]' - '[@reynolds97]' - '[@reeves01a]' - '[@blustin02]' - '[@reeves01b]' - '[@antonucci93]' - '[@pounds01]' - '[@reeves01b]' - '[@pounds01]' - '[@reeves01b]' - '[@pounds01]' - '[@pounds01]' - '[@dickey90]' - '[@reeves01b]' - '[@fabian89]' - '[@magdziarz95]' - '[@reeves01b]' - '[@anders89]' - '[@antonucci93]' - '[@matt91]' - '[@george91]' - '[@shields95]' - '[@leahy93]' - '[@yaqoob01]' - '[@davidson79]' - '[@goad98]' - '[@wandel99]' - '[@george91]' - '[@pounds01]' - '[@pounds01]' - '[@reeves01b]' title: The origin of the Fe K features in Markarian 205 and Markarian 509 --- accretion, accretion disks – black hole physics – galaxies: Seyfert – Introduction {#sec:introduction} ============ X-ray observations probe the central regions of AGN. 
In the standard paradigm, this corresponds to the inner parts of an accretion disk around a supermassive black hole. Above the disk, a hot corona Compton upscatters optical-EUV photons to X-ray energies; some of this X-ray radiation is reprocessed in the surrounding material including the disk, giving rise to prominent Fe K$\alpha$ lines. The broad, distorted velocity profile of Fe K$\alpha$ emission, suggesting an accretion disk around a supermassive black hole, was first observed with [[*ASCA*]{}]{} in MCG –6-30-15 (Tanaka [et al. ]{} 1995). Studies of other AGN with [[*ASCA*]{}]{} suggested that broad, low-ionisation Fe K$\alpha$ emission is common in AGN (Nandra [et al. ]{} 1997, Reynolds [et al. ]{} 1997), but it was not until the launch of [[*XMM-Newton*]{}]{} that the diversity of Fe line profiles could really be investigated. With the large increase in collecting area afforded by [[*XMM-Newton*]{}]{}, it was soon noticed that some luminous AGN showed different Fe line profiles to that of MCG –6-30-15 (e.g. Reeves [et al. ]{} 2001a, Blustin [et al. ]{} 2002). One particularly interesting example, the luminous Seyfert 1 galaxy Mrk 205, appeared to have a narrow, neutral Fe K$\alpha$ line at 6.4 keV, accompanied by a broad line from He-like Fe (Reeves [et al. ]{} 2001b). Reeves [et al. ]{} argued that the ionised Fe line originates in the inner parts of an accretion disk, while the neutral Fe K$\alpha$ line originates in a molecular torus, hypothesised to lie outside the broad line regions in AGN unification schemes (Antonucci 1993). A later observation of another Seyfert galaxy, Mrk 509, showed a very similar pair of narrow-neutral and broad-ionised Fe K$\alpha$ emission lines (Pounds [et al. ]{} 2001), demonstrating that this configuration of Fe K$\alpha$ profiles is not an isolated phenomenon in Mrk 205. In this paper we revisit the Fe K features in Mrk 205 and Mrk 509 seen by [[*XMM-Newton*]{}]{}.
By coadding the spectra from the three EPIC instruments we are able to maximise the signal to noise per bin while properly sampling the EPIC spectral resolution around Fe K. The paper is laid out as follows. In Section \[sec:observation\] we describe the observations and data reduction, and we describe the spectral fitting in Section \[sec:results\]. The results are discussed in Section \[sec:discussion\] and we present our conclusions in Section \[sec:conclusions\]. The Appendix contains a description of the method employed to coadd spectra from the different EPIC instruments. Observations and data reduction {#sec:observation} =============================== Mrk 205 was observed with [[*XMM-Newton*]{}]{} on the 7th May 2000, and these data were presented by Reeves [et al. ]{} (2001b). Several exposures were taken with both the MOS and PN cameras, in full frame and large-window modes. Spectra of the source were extracted from circular regions of radius $\sim 50''$ and background spectra were obtained from nearby source-free regions. Data from both the MOS and PN cameras were used. The spectra were combined using the procedure outlined in Appendix A. Mrk 509 has been observed twice with [[*XMM-Newton*]{}]{}. The first observation took place on 25th November 2000 and the data were presented by Pounds [et al. ]{} (2001). The second observation was performed on the 20th April 2001. In both observations, the MOS and PN cameras were operated in small window mode. Source spectra were taken from a circular region of $40'' - 50''$ radius, and background spectra were obtained from nearby regions free from bright sources. Single events were selected in MOS, and single and double events were selected in PN. The spectra were combined using the procedure outlined in the Appendix to produce one spectrum for each observation and one spectrum which is a combination of the two. Spectral fitting {#sec:results} ================ The spectral analysis was performed with [XSPEC]{}.
Only the rest-frame 3-10 keV energy range was used in the spectral fitting because we are primarily interested in the Fe K features. The broad emission lines reported by Reeves [et al. ]{} (2001b) and Pounds [et al. ]{} (2001) are only significant between 5 and 8 keV (see Fig. 4 of Pounds [et al. ]{} 2001), so the 3-10 keV energy range allows a good measurement of the continuum on either side of Fe K, while excluding the noisier and less-well calibrated data at higher energies and the complex spectrum found at lower energies. We included the small effect of absorption from the Galaxy as a component in all our spectral modelling (${N_{H}}= 2.9 \times 10^{20}$ cm$^{-2}$ towards Mrk 205 and ${N_{H}}= 4.1 \times 10^{20}$ cm$^{-2}$ towards Mrk 509, Dickey & Lockman 1990). The results of the spectral fits are summarized in Table \[tab:results\]. For Mrk 205, we began with a simple power law model. The counts spectrum is shown in Fig. \[fig:mrk205\_allspec\], along with the power law model convolved with the instrument response. Like Reeves [et al. ]{} (2001b) we find this model is a poor fit, and there are significant residuals around 6.4 keV. We therefore added a Gaussian emission line, and obtained an acceptable fit, with a resolved line consistent with neutral Fe K$\alpha$ at 6.4 keV. Although the fit was acceptable, residuals remained at $\sim 7$ keV, and so we tried further fits including one of three additional model components that might plausibly account for these residuals.
--- author: - | \ \ \ \ \ \ \ \ title: '\' --- =1 Introduction {#sec:introduction} ============ Among all four-dimensional quantum field theories lies a unique example singled out for its remarkable symmetry and mathematical structure as well as its key role in the AdS/CFT correspondence. This is maximally supersymmetric ($\mathcal{N}\!=\!4$) Yang-Mills theory (SYM) in the planar limit [@ArkaniHamed:2008gz]. It has been the subject of great interest over recent years, and the source of many remarkable discoveries that may extend to much more general quantum field theories. These features include a connection to Grassmannian geometry [@ArkaniHamed:2009dn; @ArkaniHamed:2009sx; @ArkaniHamed:2009dg; @ArkaniHamed:2012nw; @ArkaniHamed:book], extra simplicity for planar theories’ loop integrands [@ArkaniHamed:2010gh; @Bourjaily:2011hi; @Bourjaily:2015jna], the existence of all-loop recursion relations [@ArkaniHamed:2010kv], and the existence of unanticipated symmetries [@Drummond:2007cf; @Drummond:2008vq; @Brandhuber:2008pf; @Drummond:2009fd] and related dualities between observables in the theory [@Alday:2007hr; @Drummond:2007aua; @Brandhuber:2007yx; @Alday:2010zy; @Eden:2010zz; @Mason:2010yk; @CaronHuot:2010ek; @Eden:2011yp; @Adamo:2011dq; @Eden:2011ku]. Of these, the duality between scattering amplitudes and correlation functions will play a fundamental role throughout this work. Much of this progress has been fueled through concrete theoretical data: heroic efforts of computation are made to determine observables (with more states, and at higher orders of perturbation); and this data leads to the discovery of new patterns and structures that allow these efforts to be extended even further. This virtuous cycle—even when applied merely to the ‘simplest’ quantum field theory—has taught us a great deal about the structure of field theory in general, and represents an extremely fruitful way to improve our ability to make predictions for experiments.
In this work, we extend this body of theoretical data by determining the four-point correlation function of planar SYM through ten loops. This is made possible through the use of powerful new [*graphical*]{} rules described in this work. This correlation function is closely related to the four-particle scattering amplitude, as reviewed below. But the information contained in this single function is vastly more general: it contains information about all scattering amplitudes in the theory—including those involving more external states (at lower loop-orders). As such, our determination of the four-point correlator at ten loops immediately provides information about the five-point amplitude at nine loops, the six-point amplitude at eight loops, etc. Before we review this correspondence and describe the rules used to obtain the ten loop correlator, it is worth taking a moment to reflect on the history of our knowledge about it. Considered as an amplitude, it has been the subject of much interest for a long time. The tree-level amplitude was first expressed in supersymmetric form by Nair in . It was computed using unitarity to two loops in 1997 [@Bern:1997nh] (see also [@Anastasiou:2003kj]), to three loops in 2005 [@Bern:2005iz], to five loops in 2007—first at four loops [@Bern:2006ew], and five loops quickly thereafter [@Bern:2007ct]—and to six loops around 2009 [@Bern:2012di] (although published later). The extension to seven loops required significant new technology. This came from the discovery of the soft-collinear bootstrap in 2011 [@Bourjaily:2011hi]. Although not known at the time, the soft-collinear bootstrap method (as described in ), would have failed beyond seven loops; but luckily, the missing ingredient would be supplied by the duality between amplitudes and correlation functions discovered in [@Eden:2010zz; @Alday:2010zy] and elaborated in [@Eden:2010ce; @Eden:2011yp; @Eden:2011ku; @Adamo:2011dq]. The determination of the four-point correlator in planar SYM followed a somewhat less linear trajectory.
One and two loops were obtained soon after (and motivated by) the AdS/CFT correspondence between 1998 and 2000 [@GonzalezRey:1998tk; @Eden:1998hh; @Eden:1999kh; @Eden:2000mv; @Bianchi:2000hn]. But despite a great deal of effort by a number of groups, the three loop result had to wait over 10 years until 2011—at which time the four, five, and six loop results were found in quick succession [@Eden:2011we; @Eden:2012tu; @Ambrosio:2013pba; @Drummond:2013nda]; seven loops was reached in 2013 [@Ambrosio:2013pba]. The breakthrough for the correlator, enabling this rapid development, was the discovery of a hidden symmetry [@Eden:2011we; @Eden:2012tu]. On the amplitude side, the extension of the above methods to eight loops also required the exploitation of this symmetry via the duality between amplitudes and correlators. This hidden symmetry (reviewed below) greatly simplifies the work required to extend the soft-collinear bootstrap, making it possible to determine the eight loop functions in 2015 [@Bourjaily:2015bpz]. While the eight loop amplitude and correlator were determined (the ‘hard way’) using just the soft-collinear bootstrap and hidden symmetry, we had already started exploring alternative methods to find these functions which seemed quite promising. These were mentioned in the conclusions of —the details of which we describe in this note. This new approach, based not on algebraic relations but graphical ones, has allowed for a watershed of new theoretical data similar to that of 2007: within a few short months, we were able to fully determine both the nine and ten loop correlation functions. The reason for this great advance—the (computational) advantages of graphical rules—will be discussed at the end of this introduction. The outline of this paper is as follows. In we review the representation of amplitudes and correlation functions, and the duality between them.
This will include a summary of the notation and conventions used throughout this paper, and also a description of the way that the terms involved are represented both algebraically and graphically. We elaborate on how the plane embedding of the terms that contribute to the correlator (viewed as graphs) allows for the direct extraction of amplitudes at corresponding (and lower) loop-orders—including amplitudes involving more than four external states—in . The three graphical rules sufficient to fix all possible contributions (at least through ten loops) are described in . We will refer to these as the triangle, square, and pentagon rules. The triangle and the square rules relate terms at different loop orders, while the pentagon rule relates terms at a given loop-order. While the square rule is merely the graphical manifestation of the so-called ‘rung’ rule [@Bern:1997nh; @Eden:2012tu] (generalized by the hidden symmetry of the correlator), the triangle and pentagon rules are new. We provide illustrations of each and proofs of their validity in . These rules have varying levels of strength. While the square rule is well-known to be insufficient to determine the amplitude or correlator at all orders (and the same is true for the pentagon rule), we expect that the combination of the square and triangle rules [does]{} prove sufficient—but only after their consequences at higher loop-orders are also taken into account. In we describe the varying strengths of each of these rules, and summarize the expressions found for the correlation function and amplitude through ten loops in . The data can also be downloaded from <http://goo.gl/JH0yEc>. Details on how this data can be obtained and the functionality provided (as part of a bare-bones [Mathematica]{} package) are described in . Before we begin, however, it seems appropriate to first describe what accounts for the advance—from eight to ten loops—in such a short interval of time.
This turns out to be entirely a consequence of the computational power of working with graphical objects over algebraic expressions. Even so, obtaining the result required considerable computational resources. #### Why [Graphical]{} Rules? \ It is worth taking a moment to describe the incredible advantages of [*graphical*]{} methods over analytic or algebraic ones. The integrands of planar amplitudes or correlators can only meaningfully be defined if the labels of the internal loop momenta are fully symmetrized. Only then do they become well-defined,
--- author: - 'N.N. Avdeev [^1]' bibliography: - '../common/notmy.bib' - '../common/my.bib' title: ' On diameter bounds for planar integral point sets in semi-general position [^2] ' --- #### Abstract. A point set $M$ in the Euclidean plane is called a planar integral point set if all the distances between the elements of $M$ are integers, and $M$ is not situated on a straight line. A planar integral point set is said to be in semi-general position if it does not contain collinear triples. The existing lower bound for the minimum diameter of planar integral point sets is linear. We prove a new lower bound for the minimum diameter of planar integral point sets in semi-general position that is better than linear. Introduction ============ A planar integral point set is a set of points in the plane, not all on one line, with pairwise integral distances. Every integral point set consists of a finite number of points [@anning1945integral; @erdos1945integral]; thus, we denote the set of all planar integral point sets of $n$ points by $\mathfrak{M}(2,n)$ (using the notation of [@our-vmmsh-2018]) and define the diameter of $M\in\mathfrak{M}(2,n)$ in the following natural way: $$\operatorname{diam} M = \max_{A,B\in M} |AB| ,$$ where $|AB|$ denotes the Euclidean distance. The symbol $\# M$ will be used for the cardinality of $M$, that is, the number of points in $M$ in our case. Since every integral point set can obviously be dilated to a set of larger diameter, minimal possible diameters of sets of given cardinality are in the focus. To be precise, the following function was introduced [@kurz2008bounds; @kurz2008minimum]: $$d(2,n) = \min_{M\in\mathfrak{M}(2,n)} \operatorname{diam} M .$$ It turned out to be very easy to construct a planar integral point set of $n$ points with $n-1$ collinear ones and one point out of the line (so-called *facher* sets); the same holds for 2 points out of the line (we refer the reader to [@antonov2008maximal], where some of such sets are called *crabs*) and even for 4 points out of the line [@huff1948diophantine].
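To make the definitions concrete, here is a small illustrative sketch (ours, not from the paper) that checks integrality of all pairwise distances and evaluates $\operatorname{diam} M$, first for the 3-4-5 right triangle and then for a six-point facher-type set:

```python
from itertools import combinations
from math import hypot

def diam(points):
    """diam M = max over pairs A, B in M of the Euclidean distance |AB|."""
    return max(hypot(ax - bx, ay - by)
               for (ax, ay), (bx, by) in combinations(points, 2))

def is_integral_point_set(points):
    """All pairwise distances are integers and the points are not collinear."""
    all_int = all(hypot(ax - bx, ay - by).is_integer()
                  for (ax, ay), (bx, by) in combinations(points, 2))
    (x0, y0), (x1, y1) = points[:2]
    on_line = all((x1 - x0) * (y - y0) == (y1 - y0) * (x - x0)
                  for x, y in points)
    return all_int and not on_line

# Smallest example: the 3-4-5 right triangle, diameter 5.
T = [(0, 0), (3, 0), (0, 4)]
assert is_integral_point_set(T) and diam(T) == 5.0

# A facher-type set: five collinear points plus one apex; height 12 pairs
# with the Pythagorean triples (5,12,13), (9,12,15), (16,12,20), (35,12,37).
F = [(0, 12), (0, 0), (5, 0), (9, 0), (16, 0), (35, 0)]
assert is_integral_point_set(F) and diam(F) == 37.0
```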
For $9\leq n\leq 122$, the minimal possible diameter is achieved at a facher set [@kurz2008bounds]. A set $M\in\mathfrak{M}(2,n)$ is said to be in *semi-general position* if no three points of $M$ are collinear. The set of all planar integral point sets in semi-general position is denoted by $\overline{\mathfrak{M}}(2,n)$. Furthermore, constructions of integral point sets in semi-general position of arbitrary cardinality appeared [@harborth1993upper]; such sets are situated on a circle. Also, there is a sophisticated construction of a circular integral point set of arbitrary cardinality that gives the possible numbers of odd integral distances between points in the plane [@piepmeyer1996maximum]. A set $M\in\overline{\mathfrak{M}}(2,n)$ is said to be in *general position* if no four points of $M$ are concyclic. The set of all planar integral point sets in general position is denoted by $\dot{\mathfrak{M}}(2,n)$. It remains unknown if there are integral point sets in general position of arbitrary cardinality; however, some sets $M\in \dot{\mathfrak{M}}(2,7)$ are known [@kreisel2008heptagon; @kurz2013constructing]. The inequality $$d(2,n) \leq \overline{d}(2,n) \leq \dot{d}(2,n) ,$$ where $ \overline{d}(2,n) = \min_{M\in\overline{\mathfrak{M}}(2,n)} \operatorname{diam} M $ and $ \dot{d}(2,n) = \min_{M\in\dot{\mathfrak{M}}(2,n)} \operatorname{diam} M $, is obvious; however, a more interesting relation holds: $$c_1 n \leq d(2,n) \leq \overline{d}(2,n) \leq n^{c_2 \log \log n} .$$ The upper bound is presented in [@harborth1993upper]. The lower bound was first introduced in [@solymosi2003note]; the largest known value for $c_1$ is $5/11$ for $n\geq 4$ [@my-pps-linear-bound-2019]. There are some bounds for the minimal diameter of planar integral point sets in some special positions. Assuming that a planar integral point set contains many collinear points, the following result holds.
[@kurz2008minimum Theorem 4] For $\delta > 0$, $\varepsilon > 0$, and $P\in\mathfrak{M}(2,n)$ with at least $n^\delta$ collinear points, there exists an $n_0 (\varepsilon)$ such that for all $n \geq n_0 (\varepsilon)$ we have $$\operatorname{diam} P \geq n^{\frac{\delta}{4 \log 2(1+\varepsilon)}\log \log n} .$$ For diameter bounds for circular sets, we refer the reader to [@bat2018number]. Particular cases of planar integral point sets are also discussed in [@brass2006research §5.11], [@guy2013unsolved §D20], [@our-pmm-2018], [@our-ped-2018]. For generalizations to higher dimensions and the appropriate bounds, see [@kurz2005characteristic; @nozaki2013lower]. In the present paper we give a special bound for planar integral point sets in semi-general position. The condition of semi-general position is essential in the given proof. Preliminary results =================== In this section, we give some lemmas which will be used for the proof. [@solymosi2003note Observation 1] If a triangle $T$ has integer side-lengths $a \leq b \leq c$, then the minimal height $m$ of it is at least $\left(a - \frac{1}{4}\right)^{1/2}$. The part of a plane between two parallel straight lines with distance $\rho$ between the lines is called a strip of width $\rho$. [@smurov1998stripcoverings] If a triangle $T$ with minimal height $\rho$ is situated in a strip, then the width of the strip is at least $\rho$. \[cor:solymosi\_strip\] Let $T$ be a triangle with integer side-lengths $a \leq b \leq c$ situated in a strip of width $w$. Then $w \geq \left(a - \frac{1}{4}\right)^{1/2}$. [@our-vmmsh-2018 Lemma 4]; [@my-pps-linear-bound-2019 Lemma 2.4] \[lem:square\_container\] Let $M\in\mathfrak{M}(2,n)$, $\operatorname{diam} M = d$. Then $M$ is situated in a square of side length $d$. [@my-pps-linear-bound-2019 Definition 2.5] A *cross* for points $M_1$ and $M_2$, denoted by $cr(M_1,M_2)$, is the union of two straight lines: the line through $M_1$ and $M_2$, and the perpendicular bisector of line segment $M_1 M_2$.
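Observation 1 admits a quick exhaustive check for small side-lengths. Since the minimal height is $m = 2A/c$ (the height onto the longest side, $A$ being the area), the claim $m^2 \geq a - \frac{1}{4}$ is equivalent to the exact integer inequality $16A^2 \geq c^2(4a-1)$. The sketch below (ours) verifies it for every non-degenerate integer triangle with $c \leq 25$:

```python
from itertools import combinations_with_replacement

def heron16(a, b, c):
    # 16 * (area)^2 by Heron's formula, kept as an exact integer
    return (a + b + c) * (a + b - c) * (a - b + c) * (-a + b + c)

# minimal height m = 2*Area/c, so  m^2 >= a - 1/4  <=>  16*Area^2 >= c^2*(4a - 1)
for a, b, c in combinations_with_replacement(range(1, 26), 3):
    if a + b > c:  # non-degenerate triangle with a <= b <= c
        assert heron16(a, b, c) >= c * c * (4 * a - 1)
```

Equality is attained, e.g., for the triangles $(1,1,1)$ and $(2,2,3)$, which shows the constant $\frac{1}{4}$ cannot be improved in this form.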
[@my-pps-linear-bound-2019 Theorem 3.10] \[lem:no\_distance\_one\] Each set $M\in\mathfrak{M}(2,n)$ such that for some $M_1,M_2 \in M$ the equality $|M_1 M_2|=1$ holds, consists of $n-1$ points, including $M_1$ and $M_2$, on a straight line, and one point out of the line, on the perpendicular bisector of line segment $M_1 M_2$. \[lem:count\_of\_points\_on\_hyperbolas\] Let $\{M_1, M_2, M_3, M_4\} \subset M\in\overline{\mathfrak{M}}(2,n)$ (points $M_2$ and $M_3$ may coincide, other points may not), $n\geq 4$. Then $\# M \leq 4 \cdot |M_1 M_2| \cdot |M_3 M_4|$. Lemma 1 [@erdos1945integral]. Each point $N\in M$ satisfies one of the following conditions: a\) $N$ belongs to $cr(M_1,M_2)$
UDHEP-10-92\ IUHET-228\ [**NEUTRINO MIXING DUE TO A\ VIOLATION OF THE EQUIVALENCE PRINCIPLE**]{}\ \ Physics Department\ Indiana University, Bloomington, IN 47405\ and\ \ Department of Physics and Astronomy\ University of Delaware, Newark, DE 19716\ \ Massless neutrinos will mix if their couplings to gravity are flavor dependent, i.e., violate the principle of equivalence. Because the gravitational interaction grows with neutrino energy, the solar neutrino problem and the recent atmospheric neutrino data may be simultaneously explained by violations at the level of $10^{-14}$ to $10^{-17}$ or smaller. Neutrino mixing experiments thus probe the equivalence principle with a sensitivity far beyond that of conventional tests. Several years ago, Gasperini noted that if the gravitational couplings of neutrinos are flavor dependent, mixing will take place when neutrinos propagate through a gravitational field [@G]. Similar ideas were proposed independently by Halprin and Leung [@HL]. Consequently, experiments designed to search for neutrino mixing also probe the validity of the equivalence principle. In this Letter, we analyze the implications of present neutrino mixing experiments for the equivalence principle. We consider the effects on neutrinos when they propagate under the influence of a weak, static gravitational field. For simplicity, we shall assume two neutrino flavors and neglect any neutrino masses. Ignoring effects which involve a spin flip, the flavor evolution of a relativistic neutrino is quite simple (see [@HL; @HLP] for more rigorous derivations). In the rest frame of a massive object, a neutrino has the effective interaction energy $$H = - 2 | \phi (r)| E (1 + f) \label{H}$$ where $E$ is the neutrino energy, and $\phi (r) = - | \phi(r) |$ is the Newtonian gravitational potential of the object. $f$ is a small, traceless, $2 \times 2$ matrix which parametrizes the possibility of gravity coupling to neutrinos with a strength different from the universal coupling, i.e., violations of the equivalence principle.
$f$ will be diagonal in some basis which we denote as the gravitational interaction basis (G-basis). In that basis, $\delta \equiv f_{22} - f_{11}$ provides a measure of the degree of violation of the equivalence principle. In general, as occurs for neutrino masses, the flavor basis or the weak interaction basis (W-basis) will not coincide with the G-basis. If we denote the neutrino fields in the G-basis by $\nu_G = (\nu_1, \nu_2)$ and neutrinos in the W-basis by $\nu_W = (\nu_e, \nu_\mu)$, $\nu_G$ and $\nu_W$ are related by a unitary transformation, $U^\dagger$: $$\left( \begin{array}{c} \nu_1 \\ \nu_2 \end{array} \right) = \left[ \begin{array}{cc} \cos \Theta_G & - \sin \Theta_G \\ \sin \Theta_G & \ \ \cos \Theta_G \end{array} \right] \left( \begin{array}{c} \nu_e \\ \nu_\mu \end{array} \right), \label{mix}$$ where $\Theta_G$ is the mixing angle. Consequently, neutrino flavor mixing and oscillations may occur. The idea of using degenerate particles to study possible violations of the equivalence principle is not new. Similar effects have been considered in the neutral kaon system [@kaon] for over 30 years. Note, however, that a violation of the equivalence principle in the kaon system requires that gravity couples differently to particles and antiparticles, a violation of CPT symmetry. This requirement is not necessary for neutrinos. Here, gravity is coupling slightly differently to different fermion generations. Using Eq. (\[H\]), we may write down the flavor evolution equation for relativistic neutrinos propagating through a gravitational field (with no matter present). In the W-basis, it reads $$i \frac{d}{dt} \left( \begin{array}{c} \nu_e \\ \nu_\mu \end{array} \right) = E |\phi(r)| \delta \ \ U \left[ \begin{array}{cc} -1 & 0 \\ 0 & 1 \end{array} \right] U^\dagger \ \ \left( \begin{array}{c} \nu_e \\ \nu_\mu \end{array} \right), \label{osc}$$ where we have neglected the irrelevant term in the hamiltonian which leads to an unobservable phase.
For constant $\phi$, the survival probability for a $\nu_e$ after propagating a distance $L$ is $$P(\nu_e \rightarrow \nu_e) = 1 - \sin^2(2 \Theta_G) \sin^2[ {\pi L \over \lambda } ], \label{P}$$ where $$\lambda = 6.2 \ {\rm km} \left( {10^{-20} \over | \phi | \delta } \right) \left( {10\ {\rm GeV} \over E} \right) \label{lambda}$$ is the oscillation wavelength. Eq. (\[P\]) is quite similar to the survival probability for vacuum oscillations due to neutrino masses (see e.g. [@KP]). Note, however, the linear dependence of the oscillation phase on the neutrino energy; for oscillations due to a mass difference, the phase depends on $1/E$. Thus the two mechanisms can be distinguished through the energy dependence of the oscillations. When the neutrino propagates through matter, the mixing can be dramatically enhanced. A resonance occurs when $$\sqrt{2}\, G_F N_e = 2 E |\phi| \delta \cos ( 2 \Theta_G )$$ where $G_F$ is Fermi’s constant and $N_e$ is the electron density. This effect is completely analogous to the well studied situation in which the mixing is due to neutrino masses ([@MSW], for a review, see [@KP]). The survival probabilities for gravitationally induced mixing when there is a matter background can be obtained from those for masses by the transformation $${m_2^2-m_1^2 \over 4 E} \rightarrow {E | \phi| \delta} , \label{convert}$$ if $\phi$ is a constant. The local potential, $\phi$, enters through the phase of the oscillations, Eq. (\[P\]). However, the local value of $\phi$ is uncertain, because the estimates tend to increase as one looks at matter distributions at larger and larger scales. The potential at the Earth due to the Sun is $1\times 10^{-8}$, that due to the Virgo cluster of galaxies is about $1\times 10^{-6}$, while that due to our supercluster [@kaon] has recently been estimated to be about $3 \times 10^{-5}$ (which is larger than $\phi$ [*in*]{} the Sun due to the Sun). In what follows, we will quote values for the combined, dimensionless parameter $|\phi| \delta$.
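The benchmark in Eq. (\[lambda\]) follows directly from $\lambda = \pi/(E|\phi|\delta)$ in natural units, with $\hbar c \approx 1.973\times10^{-7}$ eV·m converting inverse energies to lengths. A short numerical sketch (the function names are ours, for illustration):

```python
import math

HBARC = 1.97327e-7  # eV*m, converts inverse energy (natural units) to length

def osc_wavelength_m(E_eV, phi_delta):
    """Oscillation wavelength lambda = pi / (E |phi| delta), in meters."""
    return math.pi / (E_eV * phi_delta) * HBARC

def survival(L_m, E_eV, phi_delta, theta_G):
    """P(nu_e -> nu_e) = 1 - sin^2(2 theta_G) sin^2(pi L / lambda)."""
    lam = osc_wavelength_m(E_eV, phi_delta)
    return 1.0 - math.sin(2 * theta_G) ** 2 * math.sin(math.pi * L_m / lam) ** 2

# benchmark of Eq. (lambda): E = 10 GeV, |phi| delta = 1e-20  ->  lambda ~ 6.2 km
lam = osc_wavelength_m(10e9, 1e-20)

# |phi| delta that makes half a wavelength equal the Sun-Earth distance
# for 10 MeV neutrinos (the long-wavelength solar scenario)
AU = 1.496e11  # m
phi_delta_solar = math.pi * HBARC / (10e6 * 2 * AU)   # ~2e-25
```

The first number reproduces the 6.2 km normalization of Eq. (\[lambda\]); the second reproduces the $|\phi|\delta \sim 2\times10^{-25}$ scale relevant for solar neutrinos.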
We now consider what the current data on neutrino mixing imply for the parameters $|\phi| \delta$ and $\Theta_G$. The searches for mixing can be divided into three broad categories: laboratory experiments, atmospheric neutrino observations and solar neutrino observations. The latter two have shown some evidence for neutrino mixing and will be considered first. Solar neutrinos have been observed in four experiments [@solar]; their results are summarized in Table 1. The observations are all well below the predictions of the standard solar model [@BU]–an indication of mixing. There are two mechanisms by which neutrino mixing can give large reductions in the solar neutrino flux, long-wavelength oscillations or resonant conversion. We investigate them separately. If the distance between the Sun and the Earth is half of an oscillation wavelength and the mixing angle is large, then Eq. (\[P\]) predicts a large reduction in the flux. This occurs for 10 MeV neutrinos when $|\phi|\delta = 2\times 10^{-25}$. Then the high energy neutrinos will be depleted but the lower energy neutrinos will be completely unaffected. However, the present data indicate mixing for the low energy solar neutrinos as well, see Table 1. A careful $\chi^2$ analysis finds that there is no long-wavelength, two-flavor explanation of the data–it is disfavored at the 3 standard deviation level. This is in contrast to the usual case of mixing induced by mass differences, for which the data are well described by vacuum oscillations of two flavors. The difference is due to the differing energy dependence of the two types of mixing. Next generation solar neutrino observations [@future] will further constrain this possibility by measuring the solar neutrino energy dependence (SNO or Super-Kamiokande) and by searching for seasonal variations (BOREXINO).
Resonant conversion as the neutrinos propagate through the interior matter of the Sun can also lead to large reductions in the flux. Figure 1 shows the favored regions from a $\chi^2$ fit to the flux reduction values in Table 1 (the Kamiokande-II energy bins are included explicitly and their overall systematic error is correctly accounted for [@solar]). An analytical expression was used to describe the resonant conversion survival probability [@KP; @HLP], based on only
--- abstract: 'We establish the gravity/fluid correspondence in the nonminimally coupled scalar-tensor theory of gravity. Imposing Petrov-like boundary conditions on the gravitational field, we find that, for a certain class of background metrics, the boundary fluctuations obey the standard Navier-Stokes equation for an incompressible fluid without any external force term in the leading order approximation under the near horizon expansion. That is to say, the scalar field fluctuations do not contribute in the leading order approximation regardless of what kind of boundary condition we impose on them.' author: - | Bin Wu and Liu Zhao\ School of Physics, Nankai University, Tianjin 300071, China\ [*email*]{}: <binfen.wu@gmail.com> and <lzhao@nankai.edu.cn> title: 'Holographic fluid from nonminimally coupled scalar-tensor theory of gravity' --- Introduction ============ The AdS/CFT correspondence [@Ma] is a successful idea which makes a connection between a quantum field theory on the boundary and a gravity theory in the bulk. It has been studied extensively for nearly two decades and has led to important applications in certain condensed matter problems such as superconductivity [@SC], etc. In the long wavelength limit, the dual theory on the boundary reduces to a hydrodynamic system [@PPS; @Ba2], and the transport coefficients of the dual relativistic fluid were calculated in [@Haack:2008cp]. In analogy to the AdS/CFT duality, the dual fluid usually lives on the AdS boundary at asymptotic infinity [@Eling:2009sj; @Ba3; @Ashok]. However, the choice of boundary at asymptotic infinity is not absolutely necessary [@Bredberg:2010ky; @Strominger2]. Refs. [@Cai2; @Ling:2013kua] attempted to place the boundary at a finite cutoff in asymptotically AdS spacetime to get the dual fluid. An algorithm was presented in [@Compere:2011dx] for systematically reconstructing the perturbative bulk metric to arbitrary order.
For spatially flat spacetimes, this method has been widely generalized, e.g. to topological gravitational Chern-Simons theory [@Cai:2012mg], Einstein-Maxwell gravity [@Niu:2011gu], Einstein-dilaton gravity [@Cai] and higher curvature gravity [@Eling; @Zou:2013ix]. For spatially curved spacetimes, imposing Petrov-like boundary conditions on a timelike cutoff hypersurface is a good way to realize the boundary fluid equations [@Strominger; @Wu:2013kqa; @Cai:2013uye], provided the background spacetime is non-rotating. In [@Bin], the present authors investigated the fluid dual of Einstein gravity with a perfect fluid source using the Petrov-like boundary condition. In most of the previously known example cases, the dual fluid equation contains an external force term when the bulk theory involves a matter source [@Bin; @Ba; @Ling; @Bai:2012ci]. In this paper, we proceed to study the fluid dual of a nonminimally coupled scalar-tensor theory of gravity. We find that the dual fluid equation arising from near horizon fluctuations around a certain class of static background configurations in this theory does not contain the external force term, because the contribution from the scalar fluctuations is of higher order in the near horizon expansion and hence does not enter the leading order approximation. Nonminimally coupled scalar-tensor theory ========================================= We begin by introducing the nonminimally coupled scalar-tensor theory of gravity in $(n+2)$-dimensions. The action is written as $$\begin{aligned} I[g,\phi ]=\int \mathrm{d}^{n+2}x\sqrt{-g}\left[ \frac{1}{2}(R-2\Lambda) -\frac{1}{2}(\nabla \phi )^{2}-\frac{1}{2}\xi R\phi^2 - V(\phi)\right] \;,\end{aligned}$$ where $\xi$ is a coupling constant. When $\xi = \frac{n}{4(n+1)}$, the theory becomes a conformally coupled scalar-tensor theory of gravity. We will not choose any specific value for $\xi$ in this paper because the construction works for any $\xi$. We set $8\pi G=1$ for convenience.
The equations of motion that follow from the action read $$\begin{aligned} &G_{\mu\nu} + g_{\mu\nu}\Lambda = T_{\mu\nu}, \label{eq1}\\ &\nabla_{\mu}\nabla^{\mu} \phi-\xi R\phi - \frac{d V}{d \phi}=0, \label{eq2}\end{aligned}$$ where $$\begin{aligned} T_{\mu\nu}&= \nabla_{\mu}\phi\nabla_{\nu}\phi - \frac{1}{2}g_{\mu\nu}(\nabla \phi)^2 + \xi [g_{\mu\nu} \Box -\nabla_{\mu}\nabla_{\nu} + G_{\mu\nu}]\phi^2 -g_{\mu\nu}V(\phi).\end{aligned}$$ In what follows, it is better to reformulate eq. (\[eq1\]) in the form $$\begin{aligned} G_{\mu\nu} &= \tilde{T}_{\mu \nu}, \label{eq1prime}\end{aligned}$$ in which we have introduced $$\begin{aligned} &\tilde{T}_{\mu \nu} = \frac{\nabla_{\mu}\phi\nabla_{\nu}\phi - \frac{1}{2} g_{\mu\nu}(\nabla \phi)^2 + \xi [g_{\mu\nu} \Box -\nabla_{\mu}\nabla_{\nu}]\phi^2 -g_{\mu\nu}(\Lambda + V(\phi))} {(1 - \xi \phi^2)}\;. \label{tp}\end{aligned}$$ To realize the fluid dual of the above theory, we will consider fluctuations around metrics of the form $$\begin{aligned} \mathrm{d} s^2 = -f(r)\mathrm{d}t^2 + \frac{\mathrm{d}r^2}{f(r)} + r^2\mathrm{d} \Omega_k^2\;,\end{aligned}$$ where $\mathrm{d}\Omega_k^2$ is the line element of an $n$-dimensional maximally symmetric Einstein space (with coordinates $x^i$), whose normalized constant sectional curvature is $k=0,\pm1$. Exact solutions of this form are not available in arbitrary dimensions. However, a number of example cases indicate that solutions of the above form indeed exist in some concrete dimensions [@MTZ; @Wei; @Nadalini], and, in this work, we do not need to make use of the explicit solution. Thus the spacetime dimension $d=n+2$, the metric function $f(r)$ and the scalar potential $V(\phi)$ are all kept unspecified.
In Eddington-Finkelstein (EF) coordinates, the metric can be expressed as $$\begin{aligned} \mathrm{d} s^2 = g_{\mu\nu}\mathrm{d}x^\mu \mathrm{d}x^\nu = -f(r) \mathrm{d}u^2 + 2 \mathrm{d}u \mathrm{d}r + r^2\mathrm{d} \Omega_k^2\;, \label{metric2}\end{aligned}$$ where $u$ is the light-like EF coordinate. In the following, whenever $g_{\mu\nu}$ appears, it is meant to be given by (\[metric2\]) in the coordinates $x^\mu=(u,r,x^i)$. Hypersurface projection and boundary condition ============================================== To construct the fluid dual of the above system, we need to introduce an appropriate hypersurface and project some geometric objects onto it. We also need to impose an appropriate boundary condition on the projection hypersurface. The formulation is basically parallel to previous works such as [@Strominger; @Bin]. Consider the timelike hypersurface $\Sigma_c$ defined via $r-r_c=0$ with constant $r_c$. The induced metric $h_{\mu\nu}$ on the hypersurface is related to the bulk metric $g_{\mu\nu}$ via $$\begin{aligned} & h_{\mu\nu}=g_{\mu\nu}-n_{\mu}n_{\nu}, \label{indmet}\end{aligned}$$ where $n_{\mu}$ is the unit normal vector of $\Sigma_c$. For the line element (\[metric2\]) $$\begin{aligned} n_{\mu}=(0,\frac{1}{\sqrt{f(r)}},0,\cdots,0), \qquad n^{\mu}=(\frac{1}{\sqrt{f(r)}}, \sqrt{f(r)}, 0, \cdots, 0).\end{aligned}$$ It is natural to introduce $x^a =(u, x^i)$ as an intrinsic coordinate system on the hypersurface. Note that we have adopted two indexing systems. Greek indices represent bulk tensors, while Latin indices represent tensors on the hypersurface. In terms of the coordinates $x^a$, it is convenient to think of the induced metric $h_{\mu\nu}$ on the hypersurface as a metric tensor $h_{ab}$ defined on the hypersurface — one just needs to
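The stated components of $n^\mu$, the normalization $n_\mu n^\mu = 1$, and the projector property $h_{\mu\nu}n^\nu = 0$ are easy to check numerically. The sketch below does so for the $n=2$, $k=0$ case at sample values of $f(r_c)$ and $r_c$ (the numbers are arbitrary test values, not taken from the paper):

```python
import numpy as np

def ef_metric(fval, rc, n=2):
    """Bulk metric g_{mu nu} of Eq. (metric2) in EF coordinates (u, r, x^i), k = 0."""
    d = n + 2
    g = np.zeros((d, d))
    g[0, 0] = -fval          # g_uu = -f
    g[0, 1] = g[1, 0] = 1.0  # g_ur = g_ru = 1
    for i in range(2, d):
        g[i, i] = rc ** 2    # flat section, r^2 delta_ij
    return g

fval, rc = 0.75, 2.0         # sample f(r_c) > 0 and cutoff radius
g = ef_metric(fval, rc)
ginv = np.linalg.inv(g)

n_lo = np.array([0.0, 1.0 / np.sqrt(fval), 0.0, 0.0])  # n_mu
n_up = ginv @ n_lo                                     # n^mu = g^{mu nu} n_nu

assert np.allclose(n_up, [1 / np.sqrt(fval), np.sqrt(fval), 0, 0])
assert np.isclose(n_lo @ ginv @ n_lo, 1.0)             # unit spacelike normal
h = g - np.outer(n_lo, n_lo)                           # induced metric h_{mu nu}
assert np.allclose(h @ n_up, 0.0)                      # h projects out the normal
```

The check reflects the block structure of (\[metric2\]): the $(u,r)$ block $\begin{pmatrix}-f&1\\1&0\end{pmatrix}$ inverts to $\begin{pmatrix}0&1\\1&f\end{pmatrix}$, which is what raises $n_r = 1/\sqrt{f}$ to $n^u = 1/\sqrt{f}$, $n^r = \sqrt{f}$.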
--- abstract: 'X-rays trace accretion onto compact objects in binaries with low mass companions at rates ranging up to near Eddington. Accretion at high rates onto neutron stars goes through cycles with time-scales of days to months. At lower average rates the sources are recurrent transients; after months to years of quiescence, during a few weeks some part of a disk dumps onto the neutron star. Quasiperiodic oscillations near 1 kHz in the persistent X-ray flux attest to circular motion close to the surface of the neutron star. The neutron stars are probably inside their innermost stable circular orbits and the X-ray oscillations reflect the structure of that region. The long term variations show us the phenomena for a range of accretion rates. For black hole compact objects in the binary, the disk flow tends to be in the transient regime. Again, at high rates of flow from the disk to the black hole there are quasiperiodic oscillations in the frequency range expected for the innermost part of an accretion disk. There are differences between the neutron star and black hole systems, such as two oscillation frequencies versus one. For both types of compact object there are strong oscillations below 100 Hz. Interpretations of the observed oscillations are complex for both types of compact object.' address: | Laboratory for High Energy Astrophysics\ NASA/GSFC Greenbelt, MD 20771 author: - 'Jean H.
Swank' title: 'X-Ray Observations of Low-Mass X-Ray Binaries: Accretion Instabilities on Long and Short Time-Scales' --- \#1[[A&A,]{} [\#1]{}]{} \#1[[Acta Astr.,]{} [\#1]{}]{} \#1[[A&AS,]{} [\#1]{}]{} \#1[[ARA&A,]{} [\#1]{}]{} \#1[[AJ,]{} [\#1]{}]{} \#1[[ApJ,]{} [\#1]{}]{} \#1[[ApJS,]{} [\#1]{}]{} \#1[[MNRAS,]{} [\#1]{}]{} \#1[[Nature,]{} [\#1]{}]{} \#1[[PASJ,]{} [\#1]{}]{} Introduction {#introduction .unnumbered} ============ Low-mass X-ray binaries (LMXB) are the binaries of a low-mass “normal” star and a compact star. The compact star could be a white dwarf, a neutron star, or a black hole. The Rossi X-Ray Timing Explorer ([*RXTE*]{}) has been observing since the beginning of 1996 and has obtained qualitatively new information about the neutron star and black hole systems. In this paper I review the new results briefly in the context of what we know about these sources. The brightest, Sco X–1, was one of the first non-solar X-ray sources detected, but only with [*RXTE*]{} have sensitive measurements with high time resolution been made that could detect dynamical time-scales in the region of strong gravity. [*RXTE*]{} also has a sky monitor with a time-scale of hours that keeps track of the long term instabilities and enables in depth observations targeted to particular states of the sources. The LMXB have a galactic bulge or Galactic Population II distribution. The mass donor generally fills its Roche lobe, is less than a solar mass, and is optically faint, in contrast to the early type companions of pulsars like Cen X–3 or the black hole candidate Cyg X–1. In many cases the optical emission is dominated by emission from the accretion disk, and that is dominated by reprocessing of the X-ray flux from the compact object [@vPM95]. The known orbital periods of these binaries range from 16 days (Cir X–1) to 11 minutes (4U 1820–30). 
The very short period systems ($< 1$ hr) are expected to have degenerate dwarf mass donors and probably the mass transfer is being driven by gravitational radiation. The different properties of the sources indicate several populations. There are about 50 persistent neutron star LMXB [@vP95]. Distances can be estimated in a variety of ways. The hydrogen column density indicated by the X-ray spectrum should include a minimum amount due to the interstellar medium. Many of the sources emit X-ray bursts associated with thermonuclear flashes that reach the hydrogen or helium Eddington limits. In some cases the optical source provides clues. The resulting luminosity distribution appears to range from several times the Eddington limit for a neutron star down below the luminosity of about $10^{35}$ ergs s$^{-1}$, corresponding to $\approx 10^{-11}$ $\msun$ yr$^{-1}$ [@CS97]. The lower limit has come from instrument sensitivity, but it may also reflect the luminosity below which the accretion flow is not steady, so that the source must be a transient. “X-Ray Novae” that are among the brightest X-ray sources for a month to a year are sufficiently frequent that they were seen in rocket flights in the beginning of X-ray astronomy. The X-ray missions that monitored parts of the sky during the last three decades found that on average there are 1–2 very bright transient sources each year (e.g. [@CSL97]) with durations of a month to a year. In 5 years of [*RXTE*]{} operations, we know of 20 transient neutron star sources and an equal number of transient black hole sources. If they have a 20 yr recurrence time we have seen only a quarter of them, and if we have only been watching a third of the region in the sky, the 20 observed sources imply that more than 240 sources exist. In reality there is a distribution of the recurrence times, some as short as months, others longer than 50 years, if optical records are good.
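The population estimate above is a simple duty-cycle correction, which can be made explicit (our arithmetic, restating the text's assumptions):

```python
observed = 20       # transient neutron-star sources seen in 5 yr of RXTE
duty = 5 / 20       # fraction of a 20-yr recurrence time actually sampled
sky = 1 / 3         # fraction of the sky effectively monitored

# each observed source represents 1/(duty*sky) real sources
total = observed / (duty * sky)   # = 240
```

With shorter recurrence times or better sky coverage the correction factor shrinks, which is why the text quotes "more than 240" only as a lower-end illustration.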
On the basis of such estimates, the number of potential black hole transients is estimated to be on the order of thousands [@TS96]. The separation of sources into persistent and transient sources is a very gross simplification. One of the discoveries of recent missions, and especially of the All Sky Monitor (ASM) [@Bradt00], has been that the persistent sources have cycles of variations with time-scales ranging from many months to days. If the transient outbursts originate in accretion instabilities, perhaps these variations are related. At radii close to the compact objects the dynamical time-scale gets shorter, down to the milliseconds characteristic of the neutron star or black hole. RXTE’s large area detectors detect oscillations on these time-scales, which must reflect the dynamics at the innermost stable circular orbit (ISCO) of these neutron stars and black holes. The neutron stars of this sample are expected to have magnetic dipole moments and surface fields about $10^8 - 10^9$ gauss. Of course the neutron stars have a surface such that matter falling from the accretion disk to the neutron star crashes into the surface and generates X-ray emission. In the case of the black holes matter could fall through the event horizon and disappear with no further emission of energy. Thus the X-rays produced and the dynamics that dominates in the two cases (neutron star versus black hole) could be different. However, a number of similarities appear in the signals we receive. Long Time-scale Variabilities {#long-time-scale-variabilities .unnumbered} ============================= High Accretion Rate - Persistent Sources {#high-accretion-rate---persistent-sources .unnumbered} ---------------------------------------- Among the persistent LMXB there are characteristic variations on time-scales of months in some sources and days in others[@Bradt00].
Quasiperiodic modulations were pointed out at 37 days for Sco X–1 (IAUC 6524), 24.7 days for GX 13+1 (IAUC 6508), 77.1 days for Cyg X–2 (IAUC 6452), 37 days for X 2127+119 in M15 (IAUC 6632). The obviously important, but not strictly periodic modulations in 4U 1820–30 and 4U 1705–44 at time-scales of 100–200 days are shown in Figure 1. For Sco X–1, the changes in activity level occur in a day and the activity time-scale is hours. These time-scales are less regular than the 34 day cycle time of Her X-1, and similar modulations in LMC X-4 and SMC X-1, which are thought to be due to the precession of a tilted accretion disk. The latter sources are high magnetic field pulsars in which the disk is larger than in the LMXB, and is truncated by the magnetosphere at a radius as large as $10^8$ cm. The LMXB spectral changes are also different from those of the pulsars. In the LMXB case the changes are thought to be real changes in the accretion onto the neutron star, at least the production of X-rays, rather than a change in an obscuration of the X-rays that we see. The spectral changes are captured in the color-color diagrams that give rise to the names “Z” and “Atoll” for subsets of the LMXB. These were identified with EXOSAT observations by Hasinger and van der Klis [@HvdK89]. Characteristics of the bursts from 4U 1636–53 depended on the place of the persistent flux in the atoll color-color diagram [@vdK90]. This implied that the real mass accretion rate was correlated with the position on the diagram (although other possibilities such as the distribution of accreted material on the surface of the neutron star may play
--- abstract: | Photoinduced IR absorption was measured in undoped (LaMn)$_{1-\delta }$O$_{3}$ and (NdMn)$_{1-\delta }$O$_{3}$. We observe broadening and a $\sim $44% increase of the midinfrared anti-Jahn-Teller polaron peak energy when La$^{3+}$ is replaced with smaller Nd$^{3+}$. The absence of any concurrent large frequency shifts of the observed PI phonon bleaching peaks and the Brillouin-zone-center internal perovskite phonon modes measured by Raman and infrared spectroscopy indicates that the polaron peak energy shift is mainly a consequence of an increase of the electron-phonon coupling constant with decreasing ionic radius $\left\langle r_{A}\right\rangle $ on the perovskite A site. This indicates that the dynamical lattice effects strongly contribute to the electronic band narrowing with decreasing $\left\langle r_{A}\right\rangle $ in doped giant magnetoresistance manganites. address: - '$^{1}$Jozef Stefan Institute, P.O.Box 3000, 1001 Ljubljana, Slovenia' - | $^{2}$University of Ljubljana, Faculty of Mathematics and Physics,\ Jadranska 19, 1000 Ljubljana, Slovenia author: - 'T. Mertelj$^{1,2}$, M. Hrovat$^{1}$, D. Kuščer$^{1}$ and D. Mihailovic$^{1}$' title: 'Direct measurement of polaron binding energy in AMnO$_{3}$ as a function of the A site ionic size by photoinduced IR absorption' --- The physical properties of manganites with the chemical formula (Re$_{1-x}$Ae$_{x}$)MnO$_{3}$ (Re and Ae are trivalent rare-earth and divalent alkaline-earth ions respectively) in which giant magnetoresistance (GMR) is observed[@SearleWang69; @KustersSingelton89; @HelmoltWecker93] show remarkable changes when the average ionic radius $\left\langle r_{A}\right\rangle $ on the perovskite A site is varied. [@ImadaFujimori98; @HwangCheong95] In the region of doping $x$, where GMR is observed, this is reflected in a decrease of the Curie temperature $T_{C}$ and an increase of the size of magnetoresistance with decreasing $\left\langle r_{A}\right\rangle $.
[@HwangCheong95] The decrease of $T_{C}$ has been attributed to a decrease of the hopping matrix element between neighbouring Mn sites $t$ as a result of changes of Mn-O-Mn bond angles with $\left\langle r_{A}\right\rangle $. [@HwangCheong95] Traditionally GMR has been explained in the double exchange picture[@Zener51] framework, where the hopping matrix element is one of the key parameters influencing directly the Curie temperature. However it has been shown experimentally[@ZhaoConder96; @ZhaoKeller98; @LoucaEgami97] and theoretically[@MillisShraiman96] that dynamic lattice effects, including Jahn-Teller (JT) polaron formation, are also crucial ingredients for the explanation of GMR in manganites[@MillisShraiman96]. In this picture $T_{C}$ also strongly depends on the electron-phonon (EP) coupling in addition to the hopping matrix element $t$, and any change in the EP coupling as a function of $\left\langle r_{A}\right\rangle $ contributes to changes of $T_{C}$ and other physical properties. Experimentally an increase of the EP coupling with decreasing $\left\langle r_{A}\right\rangle $ is suggested by the shift of the $1$-eV polaronic peak in optical conductivity of manganites to higher energy with decreasing $\left\langle r_{A}\right\rangle $[@MachidaMoritomo98; @QuijadaCerne98]. Unfortunately, the peak position of the 1-eV peak does not depend on the polaron binding energy alone[@QuijadaCerne98] and the magnitude of the shift cannot be directly linked to a change of the EP coupling constant $g$. Recently we observed a polaronic photoinduced (PI) absorption peak in antiferromagnetic (LaMn)$_{1-\delta }$O$_{3}$ (LMO) [@MerteljKuscer00]. Here we present photoinduced (PI) absorption measurements in (NdMn)$_{1-\delta }$O$_{3}$ (NMO) with $\delta \approx 0$. We observe a $\sim $44% increase of the small polaron energy when La$^{3+}$ is replaced by smaller Nd$^{3+}$.
The absence of any concurrent large frequency shifts of the observed PI phonon bleaching peaks and the Brillouin-zone-center internal perovskite phonon modes measured by Raman and infrared (IR) spectroscopy indicates that the polaron energy increase with decreasing $\left\langle r_{A}\right\rangle $ is mainly a consequence of an increase of the electron-phonon coupling constant. The method of preparation and characterization of the ceramic sample with nominal composition (LaMn)$_{1-\delta }$O$_{3}$ has been published elsewhere[@MerteljKuscer00; @HolcKuscer97]. The sample with nominal composition (NdMn)$_{1-\delta }$O$_{3}$ was prepared in a similar manner with equal final treatment at 900${}^{\circ}$C for 300 min in Ar flow[@HuangSantoro97] to decrease cation deficiency. The X-ray diffraction patterns of both samples taken before Ar treatment in the 2$\Theta$ range 20${}^{\circ}$–70${}^{\circ}$ showed that both samples are single phase. The samples showed no sign of a ferromagnetic transition in AC susceptibility measurements and we concluded that $\delta $ is sufficiently small that both are antiferromagnetic (AFM) and insulating below their respective Neel temperatures[@ImadaFujimori98; @HuangSantoro97; @UrushibaraMoritomo95]. PI spectra were measured at 25 K in samples dispersed in KBr pellets. CW Ar$^{+}$-ion-laser light with 514.5 nm wavelength ($h\nu =2.41$ eV) and optical fluence $\sim $500 mW/cm$^{2}$ was used for photoexcitation. Details of PI-transmittance spectra measurements were published elsewhere. [@MerteljKuscer00] Raman spectra were measured at room temperature in a standard backscattering configuration from ceramic powders using CW Kr$^{+}$-ion-laser light at 647.1 nm. The scattered light was analysed with a SPEX triple spectrometer and detected with a Princeton Instruments CCD array. The incident laser flux was kept below $\sim $400 W/cm$^{2}$ to avoid laser annealing.
The low-temperature ($T=25$ K) PI transmittance $\left(\frac{\Delta {\cal T}_{PI}}{{\cal T}}\right)$ spectra of both samples are shown in Fig. 1. In both samples a strong broad PI midinfrared (MIR) absorption (negative PI transmittance), centered at $\thicksim$5000 cm$^{-1}$ ($\thicksim 0.62$ eV) in LMO and at $\thicksim 7500$ cm$^{-1}$ ($\thicksim 0.93$ eV) in NMO, is observed. In the frequency range of the phonon bands (inset of Fig. 1) we observe PI phonon bleaching in the range of the 585-cm$^{-1}$ (576 cm$^{-1}$ in NMO) IR phonon band and a slight PI absorption below $\thicksim$580 cm$^{-1}$. The PI phonon bleaching in NMO is similar to that in LMO [@MerteljKuscer00], but shifted to higher frequency by $\thicksim$20 cm$^{-1}$; it consists of two peaks at 620 and 690 cm$^{-1}$ with a dip in between at 660 cm$^{-1}$. Similarly to LMO, these two PI transmission peaks are reproducible among different runs, while the structure of the PI absorption below $\thicksim$580 cm$^{-1}$ is not, and presumably arises due to increasing instrumental noise at the lower end of the spectral range. Despite the noise, a slight PI absorption below $\thicksim$580 cm$^{-1}$ can be deduced from the PI spectra. The Raman spectra shown in Fig. 2b are consistent with published data [@IlievAbrashev98]. In the 100-900-cm$^{-1}$ frequency range 5 phonon peaks are observed in LMO and 6 phonon peaks in NMO. The frequencies and assignments of the phonon peaks are shown in Table I. The only mode that shifts substantially is the $A_{g}$ mode that corresponds to the out-of-phase rotation of the MnO$_{3}$ octahedra. [@Ilie
--- abstract: 'The Wang–Landau (WL) algorithm has been widely used for simulations in many areas of physics. Our analysis of the WL algorithm explains its properties and shows that the difference of the largest eigenvalue of the transition matrix in the energy space from unity can be used to control the accuracy of estimating the density of states. Analytic expressions for the matrix elements are given in the case of the one-dimensional Ising model. The proposed method is further confirmed by numerical results for the one-dimensional and two-dimensional Ising models and also the two-dimensional Potts model.' author: - 'L. Yu. Barash$^{1,2,3}$' - 'M. A. Fadeeva$^{2,3}$' - 'L. N. Shchur$^{1,2,3}$' title: 'Control of accuracy in the Wang–Landau algorithm' --- Introduction ============ The Wang–Landau (WL) algorithm [@Wang-Landau; @Wang-Landau-PRE] has been shown to be a very powerful tool for directly determining the density of states (DOS) and is also quite widely applicable. It overcomes some difficulties existing in other Monte Carlo algorithms (such as critical slowing down) and allows calculating thermodynamic observables, including the free energy, over a wide temperature range in a single simulation. A number of papers have investigated statistical errors of the DOS estimation, and it was found in [@Yan2003] that the errors reach an asymptotic value beyond which additional calculations fail to improve the accuracy of the results. It was also established in [@Zhou2005; @Lee2006] that the statistical error scales as the square root of the logarithm of the modification factor, if the factor is kept constant. It follows from the results in [@Yan2003] that there is a systematic error of DOS estimation by the WL algorithm [^1]. It was also confirmed in the case of the two-dimensional Ising model that the deviation of the DOS obtained with the WL algorithm from the exact DOS does not tend to zero [@1overt; @1overt-a].
Several improvements of the behavior of the modification factor in the algorithm, which were shown to overcome the problem of systematic error in selected applications, have been suggested [@1overt; @1overt-a; @SAMC; @SAMC2; @Eisenbach2011]. There are about fifteen hundred papers that apply the WL algorithm and its improvements to particular problems (e.g., to the statistics of polymers [@Binder09; @Ivanov2016] and to diluted systems [@Malakis04; @Fytas2013], among many others). In this paper, we address the question of the accuracy of the DOS estimation. We report a method for obtaining information on both the convergence of simulations and the accuracy of the DOS estimation. We numerically apply our algorithm to the one-dimensional and the two-dimensional Ising models, where the exact DOS is known [@Beale], and to the two-dimensional 8-state Potts model, which undergoes a first-order phase transition. We also present analytic expressions for the transition matrix in the energy space for the one-dimensional Ising model. Our approach is based on introducing the transition matrix in the energy space (TMES), whose elements show the frequency of transitions between energy levels during the WL random walk in the energy space. Its elements are influenced by both the random process of choosing a new configurational state and the WL probability of accepting the new state. We consider a chain of random updates (e.g., flips of randomly chosen spins for the Ising model) of a system configuration. Each of the updates is accepted with unit probability. This random walk in the configurational space is a Markov chain. Its invariant distribution is uniform, i.e., the probabilities of all states of the physical system are equal to each other. For any pair $\Omega_A$ and $\Omega_B$ of configurations, the probability of an update from $\Omega_A$ to $\Omega_B$ is equal to the probability of an update from $\Omega_B$ to $\Omega_A$.
Hence, the detailed balance condition is satisfied. Let $P(E_k,E_m)$ denote the probability of choosing an update from a configuration with energy $E_m$ to a configuration with energy $E_k$, and let $g(E)$ be the true DOS. We introduce the notation $$T(E_k,E_m)=\min\left(1,\frac{g(E_k)}{g(E_m)}\right)P(E_k,E_m), \label{Texpr}$$ which represents the nondiagonal elements of the TMES of the WL random walk on the true DOS. Relation (\[simplebalance\]) can be rewritten as $T(E_k,E_m)=T(E_m,E_k)$. Therefore, the TMES of the WL random walk on the true DOS is a symmetric matrix. Because the matrix is both symmetric and right stochastic, it is also left stochastic. This means that the rates of visiting all energy levels are equal to each other. In simulations with a reasonable modification of the WL algorithm, the systematic error of determining the DOS can be made arbitrarily small. In this case, we find that the computed TMES approaches a stochastic matrix as the computed DOS approaches the true value. There are several interesting conclusions. First, this explains the criterion of histogram flatness, which is one of the main features of the original WL algorithm [@Wang-Landau]. Because the histogram elements are equal to sums of columns of the TMES, histogram flatness is related to the closeness of the TMES to a stochastic matrix. Second, it gives a criterion for the proximity of the simulated DOS to the true value. We introduce the difference of the largest eigenvalue of the calculated TMES from unity as a control parameter. We show that this parameter is closely connected with the deviation of the DOS from the true value. We confirm numerically that the deviation of the DOS from the true value decays in time in the same manner as our parameter decays. We are not aware of any other method for determining the accuracy of a WL simulation without knowing the exact value of the DOS. The paper is organized as follows. In Sec. \[AlgSec\] we describe the variants of the WL algorithm. In Sec. \[TMESSec\] we introduce the transition matrix in the energy space and present analytic expressions for its elements in the case of the one-dimensional Ising model.
In Sec. \[DiscussionSec\] we present our main results and discussion, including the properties of the TMES, a description of the method, and numerical results for the one-dimensional and two-dimensional Ising models and for the two-dimensional Potts model. The main idea of the WL algorithm is to organize a random walk in the energy space. We take a configuration of the system with the energy $E_k$, randomly choose an update to a new configuration with the energy $E_m$, and accept this configuration with the WL probability $\min\left(1,\tilde g(E_k)/\tilde g(E_m)\right)$, where $\tilde g(E)$ is the DOS approximation. The approximation is obtained recursively by multiplying $\tilde g(E_m)$ by a factor $f$ at each step of the random walk in the energy space [^2]. Each time the auxiliary histogram $H(E)$ becomes sufficiently flat, the parameter $f$ is modified by taking the square root, $f:=\sqrt{f}$. Each histogram value $H(E_m)$ contains the number of moves to the energy level $E_m$. The histogram is filled with zeros after each modification of the refinement parameter $f$. It is convenient to work with the logarithms of the values $S(E_k):=\ln\tilde g(E_k)$ and $F:=\ln f$ (to fit the large numbers into double precision variables) and to replace the multiplication $\tilde g(E_m):=f\cdot\tilde g(E_m)$ with the addition $S(E_m):=S(E_m)+F$. At the end of the simulation, the algorithm provides only a relative DOS. Either the total number of states or the number of ground states can be used to determine the normalized DOS. It is natural to ask the following three questions: 1. Which condition for the flatness check is optimal? 2. How does the histogram flatness influence the convergence of the DOS estimation? 3. Is the choice of the square root rule to modify the parameter $f$ optimal? 
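The random walk, the logarithmic bookkeeping with $S(E)$ and $F$, the flatness check, and the $f:=\sqrt f$ refinement described above can be sketched in a few lines of code. The following is a minimal illustration for a periodic one-dimensional Ising chain, the model for which the exact DOS is known; the parameter values (chain length, flatness level, final refinement) are illustrative choices, not those used in the paper:

```python
import math
import random

def wang_landau_ising_1d(n_spins=8, flatness=0.8, f_final=1e-4, seed=1):
    """Sketch of the WL random walk for a periodic 1D Ising chain.

    Works with S(E) = ln g~(E) and F = ln f, as in the text.
    The chain energy is E = -sum_i s_i s_{i+1}; allowed levels are -N + 4k.
    """
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(n_spins)]
    energy = -sum(spins[i] * spins[(i + 1) % n_spins] for i in range(n_spins))
    levels = [-n_spins + 4 * k for k in range(n_spins // 2 + 1)]
    S = {E: 0.0 for E in levels}   # S(E) = ln g~(E)
    F = 1.0                        # F = ln f
    while F > f_final:
        H = {E: 0 for E in levels} # histogram, reset after each change of F
        while True:
            i = rng.randrange(n_spins)  # random single-spin-flip update
            dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % n_spins])
            E_new = energy + dE
            # WL acceptance min(1, g~(E_old)/g~(E_new)) = exp(S_old - S_new)
            if rng.random() < math.exp(min(0.0, S[energy] - S[E_new])):
                spins[i] = -spins[i]
                energy = E_new
            S[energy] += F
            H[energy] += 1
            if min(H.values()) > flatness * sum(H.values()) / len(H):
                break                   # histogram is "flat enough"
        F /= 2.0                        # ln f := (ln f)/2, i.e. f := sqrt(f)
    # relative DOS, normalized so that the ground state has g = 2
    S0 = S[levels[0]]
    return {E: 2.0 * math.exp(S[E] - S0) for E in levels}
```

For the ring of $N$ spins the exact degeneracies are $g(E)=2\binom{N}{b}$ with $E=-N+2b$, where $b$ is the (even) number of unsatisfied bonds, so the returned estimates can be checked directly against $g(0)=2\binom{8}{4}=140$ for $N=8$.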
A practical answer to question Q1 was given in the original algorithm [@Wang-Landau]: keep the flatness within the accuracy of
--- abstract: 'Focusing on $ROSAT$ results for clusters in the $\sim 20-600$ Myr range, I first summarize our current understanding of the X–ray activity – rotation – age relationship. Then, I focus on binaries, showing that: [*1. *]{} most K and M–type binaries in wide systems are X–ray brighter than single stars; and [*2. *]{} binaries seem to fit into the same activity – rotation relationship as single stars. Points [*1. *]{} and [*2. *]{} suggest that the distributions of rotations of single and binary stars should also show a dichotomy, but the few available rotational data do not support the existence of such a dichotomy. Rotational periods for a larger sample of binary and single stars should be acquired before any conclusion is drawn. Finally, I address the controversial question whether the X–ray properties of a cluster at a given age can be considered as representative of all clusters at that age.' author: - Sofia Randich title: Coronal activity among open cluster stars --- Introduction ============ As an introductory remark, it is useful to recall that X–ray emission from solar–type and lower mass stars is thought to originate from a hot corona heated and confined by magnetic fields that are generated through a dynamo process. It is therefore expected on theoretical grounds that the level of X–ray emission, or coronal activity, should depend on at least the properties of the convective zone, on stellar rotation and, through the rotation–age dependence, on stellar age. X–ray surveys of stellar clusters offer a powerful tool to empirically probe and quantitatively constrain the dependence of coronal activity on these parameters and, possibly, on additional ones, thus providing feedback to the theory. $ROSAT$ PSPC and HRI observations have provided X–ray images for about 30 open clusters in the age range between $\sim 20 - 600$ Myr (see Table 1 in Jeffries 1999, for the most updated list).
Our understanding of coronal properties of solar–type and low mass stars in clusters is now considerably deeper than a decade ago, but, at the same time, new puzzles have been raised by $ROSAT$ results. The main results and questions that have emerged from $ROSAT$ observations of clusters have been discussed in several reviews in the last few years. The age – rotation – activity paradigm (or ARAP) has been discussed at length by Caillault (1996), Randich (1997), and Jeffries (1999). Other issues, such as time variability (Caillault 1996; Stern 1999; Jeffries 1999), insights from spectra (Caillault 1996), supersaturation (Randich 1998), and observational limits and analysis techniques (Micela 1996) were also addressed. I refer to those papers for a detailed discussion of the above topics. In the present paper I will first present a summary of the general picture of the ARAP that we gathered from $ROSAT$ data; second, I will address an issue that was only marginally discussed in previous reviews, namely binaries and their influence on cluster X–ray luminosity distribution functions (XLDF). Finally, I will focus on the exceptions to the ARAP and on the controversial question whether the X–ray properties of a cluster at a given age can be considered as representative of all clusters at that age. The following sources of X–ray data were used: [*Pleiades*]{}: Stauffer et al. (1994), Micela et al. (1996), Micela et al. (1999a); [*IC 2602*]{}: Randich et al. (1995); [*IC 2391*]{}: Patten & Simon (1996); [*Alpha Persei*]{}: Prosser et al. (1996); [*Hyades*]{}: Stern et al. (1995), Pye et al. (1994); [*IC 4665*]{}: Giampapa et al. (1998); [*NGC 2547*]{}: Jeffries & Tolley (1998); [*NGC 2516*]{}: Jeffries et al. (1997); [*Blanco 1*]{}: Micela et al. (1999b); [*NGC 6475*]{}: Prosser et al. (1995), James & Jeffries (1997); [*Coma Berenices*]{}: Randich et al. (1996b); [*Praesepe*]{}: Randich & Schmitt (1995).
A consistent picture: the ARAP ============================== The main results evidenced by $ROSAT$ observations of open clusters can be summarized as follows: - If we exclude “outliers" or exceptions which I will discuss in Sect. 4, the average level of X–ray activity decays with age. Whereas this was already well established from [*Einstein*]{} observations of the Hyades and the Pleiades (e.g., Micela et al. 1990), the larger number of clusters observed by $ROSAT$ and the finer age sampling have allowed deriving a more detailed activity vs. age relationship. The decay timescales appear to be different for different masses (the lower the mass the longer the timescale) and the L$_{\rm X}$ vs. age functional dependence is not simply described by the Skumanich power law (Skumanich 1972); - In all clusters the maximum X–ray luminosity (L$_{\rm X}$) decreases towards later spectral–types; at a given spectral–type, a significant scatter in L$_{\rm X}$ is observed; as a consequence, whereas the median L$_{\rm X}$ decreases with age, the XLDFs for clusters of different ages are not “parallel" one to another and some overlap is present. This means that X–ray activity cannot be unambiguously used as an age diagnostic; - The X–ray activity level does depend on rotation only up to a rotation threshold above which X–ray emission saturates; for stars rotating faster than this threshold the ratio of the X–ray luminosity over bolometric luminosity, L$_{\rm X}$/L$_{\rm bol}$, is about constant and equal to 10$^{-3}$. Note that a definitive explanation for saturation has not yet been offered. $ROSAT$ observations of clusters are complemented by determinations of rotational velocities and/or periods in a variety of clusters. Very briefly, it is now well established that stars arrive on the Zero Age Main Sequence (ZAMS) with a large spread in their rotation rates and then they slow down with mass–dependent timescales (e.g., Barnes 1999; Bouvier 1997 and references therein). 
The use of the so–called Rossby diagram allows incorporating the above points into a unique picture. Noyes et al. (1984) were the first to show that the use of the Rossby number ($R_0$), the ratio of the rotational period P over the convective turnover time $\tau_c$, which somehow allows formalizing the dependence of activity on the properties of the convection zone, improved the rotation–chromospheric activity relationship for field stars. Randich et al. (1996a) and Patten & Simon (1996) showed this to hold also for the X–ray activity of cluster stars. Taking advantage of the new available periods for several clusters, I produced an updated version of the diagram, which I show in Figure 1. X–ray data for field stars were taken from Schmitt (1997) and Hünsch et al. (1998, 1999); periods were taken from Hempelmann et al. (1996); I retrieved periods for most of the clusters from the Open Cluster Database [^1], complementing the ones for the Pleiades with the new measurements of Krishnamurthi et al. (1998) and adding periods for IC 2602 from Barnes et al. (1999). Convective turnover times $\tau_c$ were estimated following Noyes et al. (1984). I refer to the paper of Pizzolato et al. (1999) for a discussion of how different ways of estimating $\tau_c$ may affect the $\log \rm L_{\rm X}/\rm L_{\rm bol}$ vs. $\log R_0$ relationship. Various features can be noted in the diagram: first, saturation of X–ray activity is evident: it occurs at $\log R_0 \sim -0.8$. The points with a lower Rossby number cluster around $\log$ L$_{\rm X}$/L$_{\rm bol}=-3$ (but note the supersaturation at very low $\log R_0$ – see Randich 1998). Since the diagram includes stars from F down to M spectral–type, the uniformity of the threshold Rossby number below which X–ray emission is saturated implies that the rotation threshold depends on mass.
In other words, if $\log (R_0)_{\rm thr}=(\log$ P/$\tau_c$)$_{\rm thr} =const\sim -0.8$, then, P$_{\rm thr} \propto \tau_c$; since $\tau_c$ increases with decreasing mass (the convective envelope becomes deeper), the lower the mass, the longer is the threshold period (e.g., Stauffer et al. 1997a). Second, all cluster and field stars fit into a unique relationship. This on one hand means that field and cluster stars behave in a similar way as far as the rotation – convection – activity relation is concerned; whereas this is qualitatively expected –why should field and cluster stars behave differently?– it is good to empirically confirm the expectations. On the other hand, the fact that stars in all clusters lie on the same curve, irrespective of age
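The scaling P$_{\rm thr} \propto \tau_c$ at fixed $\log (R_0)_{\rm thr} \sim -0.8$ can be made concrete with a few lines of code; the turnover times used below are purely illustrative placeholders, not values from this paper (real $\tau_c$ estimates depend on the adopted prescription, as noted above):

```python
def threshold_period(tau_c_days, log_R0_thr=-0.8):
    """Saturation-threshold rotation period implied by log10(P/tau_c) = const.

    tau_c_days: convective turnover time in days (illustrative values only).
    """
    return 10.0 ** log_R0_thr * tau_c_days

# Illustrative turnover times (days) -- assumed values for demonstration:
for sp_type, tau_c in [("F8", 8.0), ("G2", 12.0), ("K5", 25.0), ("M2", 60.0)]:
    print(f"{sp_type}: P_thr ~ {threshold_period(tau_c):.1f} d")
```

With these assumed inputs the threshold period grows monotonically toward later spectral types, reproducing the trend stated in the text: the lower the mass, the longer the threshold period.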
--- abstract: 'We consider configurations consisting of a gravitating nonlinear spinor field $\psi$, with a nonlinearity of the type $\lambda\left(\bar\psi\psi\right)^2$, minimally coupled to Maxwell and Proca fields through the coupling constants $Q_M$ \[U(1) electric charge\] and $Q_P$, respectively. In order to ensure spherical symmetry of the configurations, we use two spin-$1/2$ fields having opposite spins. By means of numerical computations, we find families of equilibrium configurations with a positive Arnowitt-Deser-Misner (ADM) mass described by regular zero-node asymptotically flat solutions for static Maxwell and Proca fields and for stationary spinor fields. For the case of the Maxwell field, it is shown that, with increasing charge $Q_M$, the masses of the objects increase and diverge as the charge tends to a critical value. For negative values of the coupling constant $\lambda$, we demonstrate that, by choosing physically reasonable values of this constant, it is possible to obtain configurations with masses comparable to the Chandrasekhar mass and with effective radii of the order of kilometers. It enables us to speak of an astrophysical interpretation of such systems, regarding them as charged Dirac stars. In turn, for the system with the Proca field, it is shown that the mass of the configurations also grows with increasing both $|\lambda|$ and the coupling constant $Q_P$. Although in this case the numerical calculations do not allow us to make a definite conclusion about the possibility of obtaining masses comparable to the Chandrasekhar mass for physically reasonable values of $\lambda$, one may expect that such masses can be obtained for certain values of free parameters of the system under consideration.' --- Compact gravitating configurations can in principle be supported by various fundamental fields. The most popular line of investigation focuses on studying boson stars – objects supported by scalar (spin-0) fields.
Being in their own gravitational field, such fields can create configurations whose physical characteristics lie in a very wide range, from those which are typical for atoms up to the parameters comparable with characteristics of galaxies [@Schunck:2003kk; @Liebling:2012fv]. On the other hand, it is not impossible that there may exist gravitating objects supported by fundamental fields with nonzero spin. In particular, they may be massive vector (spin-1) fields described by the Proca equation [@Lawrie2002]. Being the generalization of Maxwell’s theory, Proca theory permits one both to take into account various effects related to the possible presence of the rest mass of a photon [@Tu:2005ge] and to describe the massive $Z^0$ and $W^\pm$ particles in the Standard Model of particle physics [@Lawrie2002]. Such fields are also discussed in the literature as applied to dark matter physics [@Pospelov:2008jd] and when considering compact strongly gravitating spherically symmetric starlike configurations [@Brito:2015pxa; @Herdeiro:2017fhv]. In turn, when the source of gravitation is spinor (spin-$1/2$) fields, the corresponding configurations are described by the Einstein-Dirac equations. These can be spherically symmetric systems consisting of both linear [@Finster:1998ws; @Herdeiro:2017fhv] and nonlinear spinor fields [@Krechet:2014nda; @Adanhounme:2012cm; @Dzhunushaliev:2018jhj]. Nonlinear spinor fields are also used in considering cylindrically symmetric solutions [@Bronnikov:2004uu], wormhole solutions [@Bronnikov:2009na], and various cosmological problems (see Refs. [@Ribas:2010zj; @Ribas:2016ulz; @Saha:2016cbu] and references therein). The aforementioned localized self-gravitating configurations with spinor fields are prevented from collapsing under their own gravitational fields due to the Heisenberg uncertainty principle.
If one adds to such systems an electric field, the presence of extra repulsive forces related to such a field results in new effects which can considerably alter the characteristics of the systems [@Finster:1998ux]. Consistent with this, the purpose of the present paper is to study the influence that the presence of massless (Maxwell) or massive (Proca) vector fields has on the properties of gravitating configurations consisting of a spinor field $\psi$ with a nonlinearity of the type $\lambda\left(\bar\psi\psi\right)^2$. Since the spin of a fermion has an intrinsic orientation in space, a system consisting of a single spinor particle cannot be spherically symmetric. For this reason, we take two fermions having opposite spins, i.e., consider two spinor fields, and this enables us to have spherically symmetric objects. In order to get configurations with masses of the order of the Chandrasekhar mass, we study in detail the limiting systems obtained in the case of large negative values of the dimensionless coupling constant $\bar \lambda$. Note that throughout the paper we deal with a classical spinor field. Following Ref. [@ArmendarizPicon:2003qk], by the latter we mean a set of four complex-valued spacetime functions that transform according to the spinor representation of the Lorentz group. But it is evident that realistic spin-$\frac{1}{2}$ particles must be described by [*quantum*]{} spinor fields. It is usually believed that there exists no classical limit for quantum spinor fields. However, classical spinors can be regarded as arising from some effective description of more complex quantum systems (for possible justifications of the existence of classical spinors, see Ref. [@ArmendarizPicon:2003qk]). The paper is organized as follows. In Sec. \[prob\_statem\] we formulate the problem and derive the corresponding general equations. These equations are solved numerically in Sec. \[num\_sol\] for the Maxwell field (Sec. \[Maxw\_field\]) and for the Proca field (Sec. 
\[Proca\_field\]) in two limiting cases when the coupling constant $\bar \lambda=0$ (linear spinor fields) and when $|\bar\lambda| \gg 1$. Finally, in Sec. \[concl\], we summarize and discuss the results obtained. Statement of the problem and general equations {#prob_statem} ============================================== We consider compact gravitating configurations consisting of a spinor field minimally coupled to Maxwell/Proca fields within the framework of general relativity. The corresponding total action for such a system can be represented in the form \[the metric signature is $(+,-,-,-)$\] $$\label{action_gen} S_{\text{tot}} = - \frac{c^3}{16\pi G}\int d^4 x \sqrt{-g} R +S_{\text{sp}}+S_{\text{v}},$$ where $G$ is the Newtonian gravitational constant; $R$ is the scalar curvature; and $S_{\text{sp}}$ and $S_{\text{v}}$ denote the actions of spinor and vector fields, respectively. The action $S_{\text{sp}}$ is obtained from the Lagrangian for the spinor field $\psi$ of the mass $\mu$, $$L_{\text{sp}} = \frac{i \hbar c}{2} \left( \bar \psi \gamma^\mu \psi_{; \mu} - \bar \psi_{; \mu} \gamma^\mu \psi \right) - \mu c^2 \bar \psi \psi - F(S), \label{lagr_sp}$$ where the semicolon denotes the covariant derivative defined as $ \psi_{; \mu} = [\partial_{ \mu} +1/8\, \omega_{a b \mu}\left( \gamma^a \gamma^b- \gamma^b \gamma^a\right)+i Q_{M,P}/(\hbar c) A_\mu]\psi $. Here $\gamma^a$ are the Dirac matrices in the standard representation in flat space \[see, e.g., Ref. [@Lawrie2002], Eq. (7.27)\]. In turn, the Dirac matrices in curved space, $\gamma^\mu = e_a^{\phantom{a} \mu} \gamma^a$, are obtained using the tetrad $ e_a^{\phantom{a} \mu}$, and $\omega_{a b \mu}$ is the spin connection \[for its definition, see Ref. [@Lawrie2002], Eq. (7.135)\]. The term $i Q_{M,P}/(\hbar c) A_\mu\psi$ describes the interaction between the spinor and Maxwell/Proca fields. 
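Varying the spinor action built from the Lagrangian above with respect to $\bar\psi$ yields the corresponding nonlinear Dirac equation. The following sketch assumes the nonlinearity $F=\lambda\left(\bar\psi\psi\right)^2$ quoted in the abstract (so that $S=\bar\psi\psi$) and is meant only to indicate the structure of the field equation, not to replace the full derivation: $$i\hbar c\, \gamma^\mu \psi_{;\mu} - \mu c^2 \psi - \frac{dF}{dS}\,\psi = 0, \qquad \frac{dF}{dS} = 2\lambda \left(\bar\psi\psi\right) \quad \text{for } F=\lambda\left(\bar\psi\psi\right)^2,$$ where the covariant derivative $\psi_{;\mu}$ defined above already contains the coupling to the vector potential $A_\mu$.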
The coupling constant $Q_{M}$ plays the role of a U(1) charge in Maxwell theory, and $Q_P$ is the coupling constant in Proca theory. This Lagrangian contains an arbitrary nonlinear term $F(S)$, where the invariant $S$ can depend on $ \left( \bar\psi \psi \right), \left( \bar\psi \gamma^\mu \psi \right) \left( \bar\psi \gamma_\mu \psi \right)$, or $\left( \bar\psi \gamma^5 \gamma^\mu \psi
--- abstract: 'These notes discuss, in a style intended for physicists, how to average data and fit it to some functional form. I try to make clear what is being calculated, what assumptions are being made, and to give a derivation of results rather than just quote them. The aim is to put a lot of useful pedagogical material together in a convenient place. This manuscript is a substantial enlargement of lecture notes I prepared for the Bad Honnef School on “Efficient Algorithms in Computational Physics”, September 10–14, 2012.' --- Suppose you have data from an experiment or simulation. Typically you will generate a set of values $x_i,\, y_i, \cdots,\, i = 1, \cdots N$, where $N$ is the number of measurements. The first thing you will want to do is to estimate various average values, and determine *error bars* on those estimates. This is easy for basic averages such as $\langle x \rangle$, but not quite so easy for more complicated averages such as fluctuations in a quantity, $\langle x^2 \rangle - \langle x \rangle^2$, or combinations of measured values such as $\langle y \rangle / \langle x \rangle^2$. Averaging of data is discussed in the first part of these notes, in Sec. \[sec:averages\]. Having obtained several good data points with error bars, you might want to fit this data to some model. Techniques for fitting data will be described in the second part of these notes, in Sec. \[sec:fit\]. I find that the books on these topics usually fall into one of two camps. At one extreme, the books for physicists don’t discuss all that is needed and rarely *prove* the results that they quote. At the other extreme, the books for mathematicians presumably prove everything but are written in a style of lemmas, proofs, $\epsilon$’s and $\delta$’s, and unfamiliar notation, which is intimidating to physicists. One exception, which finds a good middle ground, is Numerical Recipes [@press:92] and the discussion of fitting given here is certainly influenced by Chap. 15 of that book. 
In these notes I aim to be fairly complete and also to derive the results I use, while the style is that of a physicist writing for physicists. I also include scripts in python, perl, and gnuplot to perform certain tasks in data analysis and fitting. For these reasons, these notes are perhaps rather lengthy. Nonetheless, I hope that they will provide a useful reference. Suppose we have a sample of $N$ data points $x_i$. This data will have some random noise, so the $x_i$ are not all equal. Rather, they are drawn from some distribution $P(x)$, which we do *not* know. The distribution is normalized, $$\int_{-\infty}^\infty P(x) \, d x = 1,$$ and is usefully characterized by its moments, where the $n$-th moment is defined by $$\langle x^n \rangle = \int_{-\infty}^\infty x^n\, P(x) \, d x\, .$$ We will denote the average *over the exact distribution* by angular brackets. Of particular interest are the first and second moments from which one forms the mean $ \mu$ and variance $\sigma^2$, by $$\begin{aligned} \mu &\equiv \langle x \rangle \label{xavexact} \\ \sigma^2 &\equiv \langle \, \left(x - \langle x\rangle\right)^2 \, \rangle = \langle x^2 \rangle - \langle x \rangle^2 \, . \label{sigma}\end{aligned}$$ The term “standard deviation” is used for $\sigma$, the square root of the variance. In this section we will estimate the mean $\langle x \rangle$, and the uncertainty in our estimate, from the $N$ data points $x_i$. The determination of more complicated averages and resulting error bars will be discussed in Sec. \[sec:advanced\] In order to obtain error bars we need to assume that the data are uncorrelated with each other. This is a crucial assumption, without which it is very difficult to proceed. However, it is not always clear if the data points are truly independent of each other; some correlations may be present but not immediately obvious. Here, we take the usual approach of assuming that even if there are some correlations, they are sufficiently weak so as not to significantly perturb the results of the analysis. 
In Monte Carlo simulations, measurements which differ by a sufficiently large number of Monte Carlo sweeps will be uncorrelated. More precisely the difference in sweep numbers should be greater than a “relaxation time”. This is exploited in the “binning” method in which the data used in the analysis is not the individual measurements, but rather an average over measurements during a range of Monte Carlo sweeps, called a “bin”. If the bin size is greater than the relaxation time, results from adjacent bins will be (almost) uncorrelated. A pedagogical treatment of binning has been given by Ambegaokar and Troyer [@ambegaokar:09]. Alternatively, one can do independent Monte Carlo runs, re-equilibrating each time, and use, as individual data in the analysis, the average from each run. The information *from the data* is usefully encoded in two parameters, the sample mean $\overline{x}$ and the sample standard deviation $s$ which are defined by[^1] $$\begin{aligned} \overline{x} & = {1 \over N} \sum_{i=1}^N x_i \, , \label{meanfromdata} \\ s^2 & = {1 \over N - 1} \sum_{i=1}^N \left( x_i - \overline{x}\right)^2 \, . \label{sigmafromdata}\end{aligned}$$ Here, an average indicated by an over-bar, $\overline{\cdots}$, is an average over the *sample of $N$ data points*. This is to be distinguished from an exact average over the distribution $\langle \cdots \rangle$, as in Eqs. (\[xavexact\]) and (\[sigma\]). The latter is, however, just a theoretical construct since we *don’t know* the distribution $P(x)$, only the set of $N$ data points $x_i$ which have been sampled from it. Next we derive two simple results which will be useful later: 1. The mean of the sum of $N$ independent variables *with the same distribution* is $N$ times the mean of a single variable, and 2. The variance of the sum of $N$ independent variables *with the same distribution* is $N$ times the variance of a single variable. 
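Eqs. (\[meanfromdata\]) and (\[sigmafromdata\]), together with the binning procedure just described, are straightforward to code. Here is a minimal sketch (function names are mine, not from the notes; note the $1/(N-1)$, rather than $1/N$, in the variance estimator):

```python
import math

def sample_stats(x):
    """Sample mean and sample standard deviation of a list of data points,
    following Eqs. (meanfromdata) and (sigmafromdata) in the text."""
    N = len(x)
    mean = sum(x) / N
    s2 = sum((xi - mean) ** 2 for xi in x) / (N - 1)  # note N-1, not N
    return mean, math.sqrt(s2)

def bin_data(x, bin_size):
    """Average raw measurements over consecutive bins; if bin_size exceeds
    the relaxation time, the bin averages are (almost) uncorrelated."""
    n_bins = len(x) // bin_size
    return [sum(x[b * bin_size:(b + 1) * bin_size]) / bin_size
            for b in range(n_bins)]
```

The bin averages returned by `bin_data` can then be fed to `sample_stats` in place of the raw, correlated measurements.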
The result for the mean is obvious since, defining $X = \sum_{i=1}^N x_i$, $$\langle X \rangle = \sum_{i=1}^N \langle x_i \rangle = N \langle x_i \rangle \ \boxed{ = N \mu\, .} \label{X}$$ The result for the standard deviation needs a little more work: $$\begin{aligned} \sigma_X^2 & \equiv \langle X^2 \rangle - \langle X \rangle^2 \\ &= \sum_{i,j=1}^N \left( \langle x_i x_j\rangle - \langle x_i \rangle \langle x_j \rangle \right) \label{1} \\ & = \sum_{i=1}^N \left( \langle x_i^2 \rangle - \langle x_i \rangle^2 \right) \label{2} \\ & = N \left(\langle x^2 \rangle - \langle x \rangle^2 \right) \\ & \boxed{ = N \sigma^2 \, .} \label{dXsq}\end{aligned}$$ To get from Eq. (\[1\]) to Eq. (\[2\]) we note that, for $i \ne j$, $\langle x_i x_j\rangle = \langle x_i \rangle \langle x_j\rangle$ since $x_i$ and $x_j$ are assumed to be statistically independent. (This is where the statistical independence of the data is needed.) If the means and standard deviations are not all the same, then the above results generalize to $$\begin{aligned} \langle X \rangle &= \sum_{i=1}^N \mu_i \, , \\ \sigma_X^2 &= \sum_{i=1}^N \sigma_i^2 \, .\end{aligned}$$ Now we describe an important thought experiment. Let’s *suppose* that we could
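The two boxed results, $\langle X\rangle = N\mu$ and $\sigma_X^2 = N\sigma^2$, are easy to check numerically for independent draws. Here is a small Monte Carlo sketch (not from the notes) using uniform variates on $[0,1]$, for which $\mu=1/2$ and $\sigma^2=1/12$:

```python
import random

def sum_moments(n_terms, n_samples=20000, seed=7):
    """Monte Carlo estimate of the mean and variance of X = sum_{i=1}^N x_i
    for iid x_i uniform on [0, 1], to be compared with N*mu and N*sigma^2."""
    rng = random.Random(seed)
    sums = [sum(rng.random() for _ in range(n_terms))
            for _ in range(n_samples)]
    mean = sum(sums) / n_samples
    var = sum((s - mean) ** 2 for s in sums) / (n_samples - 1)
    return mean, var
```

For $N=10$ terms this gives a mean close to $10\mu=5$ and a variance close to $10\sigma^2=10/12$, as the derivation predicts.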
Precise knowledge of the spin susceptibility $\chi({\bf q}, \omega)$ of the cuprates is essential for understanding their unusual normal state properties. The imaginary part, $\chi^{\prime \prime}({\bf q},\omega)$, can be probed either by inelastic neutron scattering (INS) [@Kei; @Bou; @Hay; @Mook97], or in the low frequency limit by NMR measurements of the spin-lattice relaxation rate $1/T_1$ [@T1]. In contrast, one knows little about the real part of the susceptibility, $\chi^\prime({\bf q})$, since information can, so far, only be extracted from the NMR observation of the Gaussian component of the transverse relaxation time, $T_{\rm 2G}$, of planar Cu [@PS91; @Curro97]. In particular, the analysis of INS and NMR experiments has not yet led to a consensus on the shape of $\chi({\bf q},\omega)$ in momentum space and the temperature ($T$) dependence of the antiferromagnetic correlation length, $\xi$. In this communication we present new insight into this issue based on experiments by Bobroff [*et al. *]{} [@BAY97]. Our principal conclusions are that $\xi$ in YBa$_2$Cu$_3$O$_{6+\delta}$ is $T$-dependent and that the Lorentzian form of $\chi^\prime(\bf q)$ provides a completely consistent description of the data, whereas the Gaussian form can be ruled out. Bobroff [*et al. *]{} [@BAY97] recently presented a novel approach to the measurement of $\chi^\prime({\bf q})$ using Ni impurities in YBa$_2$(Cu$_{1-x}$Ni$_x$)$_3$O$_{6+\delta}$. These impurities induce a spin polarization at the planar Cu sites via $\chi^\prime({\bf q})$. The hyperfine coupling between Cu and O induces a spatially varying polarization and an additional broadening $$\Delta \nu_{\rm imp} = \Delta \nu-\Delta \nu_0=\alpha f(\xi)/T \label{dnu}$$ of the planar $^{17}$O NMR line, where $\Delta \nu$ and $\Delta \nu_0$ are the total and $x=0$ line widths, respectively.
In Eq. (\[dnu\]), $\alpha$ is the overall amplitude of $\chi^\prime({\bf q})$ and $f(\xi)$ characterizes the dependence of $\Delta \nu$ on $\xi$ ($\alpha=4\pi \chi^*$ in the notation of Refs. [@BAY97] and [@Morr97]). Finally, the factor $1/T$ is caused by the Curie behavior of the Ni impurities in YBa$_2$Cu$_3$O$_{6+\delta}$ [@Mah94; @Men94] with effective moment $p_{\rm eff}\approx 1.9 \mu_{\rm B}$ ($1.59 \mu_{\rm B}$) for $\delta=0.6$ ($\delta=1$). Bobroff [*et al. *]{} found that $T \Delta \nu(T)$ strongly depends on temperature and the Ni concentration $x$ in the sample. Furthermore, they observed a much stronger broadening in the underdoped, $\delta=0.6$, sample than in the overdoped one with $\delta=1$. Performing numerical simulations of the NMR line shape by assuming a Gaussian form for $\chi^\prime({\bf q})$, they found that $f(\xi)$ is basically constant for all physically reasonable values of $\xi$. Combining these results with $T_{\rm2G}$ data by Takigawa [@Tak94], they concluded that $\xi$ is $T$-independent for the underdoped samples. On the other hand, in every scenario of cuprate superconductors in which the anomalous low-energy behavior is driven by spin fluctuations one would expect the correlation length $\xi$ to be $T$-dependent (for recent reviews, see: [@Pines; @Scal]). Thus their result has important implications for the mechanism of superconductivity. We recently pointed out [@Morr97] that our simulations using a Lorentzian form of $\chi^\prime({\bf q})$ yield a different result and are actually compatible with a $T$-dependent $\xi$. Before going into the details of our calculations, it is important to notice that the fact that $\xi$ must be $T$-dependent can be deduced, even without a detailed model, from the very experimental data by Bobroff [*et al. *]{} [@BAY97] for $\Delta \nu(T)$ and Takigawa [@Tak94] for $T_{\rm 2G}$.
To show this, we need to recognize that we can always express $T_{\rm 2G}$ as a product of $\alpha$ and a function of $\xi$, namely $$T_{\rm 2G}^{-1} = \alpha g(\xi) \ . \label{t2g}$$ We can then eliminate $\alpha$ by forming the product $$T \Delta \nu_{\rm imp} T_{\rm 2G} = { f(\xi) \over g(\xi)} \label{product}$$ which depends solely on $\xi$. In Fig. \[prod\], we plot the product $T_{\rm 2G} T \Delta \nu_{\rm imp}$ as a function of $T$ [@T1corr]. We see that this product is strongly $T$-dependent, dropping by more than a factor of 2 between $100 \, K$ and $200 \, K$. Therefore, $\xi$ itself must be $T$-dependent. To gain more quantitative insight into the $T$-dependence of $T \Delta \nu(T)$ of Ref. [@BAY97], we must go into details. We present in the following a theoretical analysis of the $^{17}$O line shape, using a method first applied by Bobroff [*et al. *]{}, to simulate their experimental data. To simulate the $^{17}$O line shape numerically, we randomly distribute Ni impurities with concentration $\frac{3}{2}x$ at positions ${\bf r}_j$ on a two-dimensional $(100 \times 100)$ lattice [@com2]. We consider the Ni impurities as foreign atoms embedded in the pure material, which is characterized by a non-local spin-susceptibility $\chi'({\bf q})$. In the following, ${\bf s}_i$ characterizes the spin degrees of freedom of the pure material, and ${\bf S}_j$ the extra moment at the Ni site brought about by the Ni. These Ni spins polarize the spins ${\bf s}_i$ of the itinerant strongly correlated electrons. To calculate the induced moments we need to know how the Ni impurities couple to these spins. We assume an exchange coupling of the form $$H_{\rm int} = J \sum_j {\bf S}_j \cdot {\bf s}_j \ . \label{Hint}$$ The coupling constant $J$ is an unknown parameter of the theory and will be estimated below. Furthermore, like Bobroff [*et al. *]{}, we will assume that the additional broadening of the NMR line arises entirely from this induced spin polarization. For the NMR experiments we consider an external magnetic field $B_0$ along the $z$-direction.
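The simulation just described can be sketched numerically. The following is a minimal illustration, not the authors' code: Ni moments are placed at random, the induced Cu polarization is the lattice convolution of the impurity positions with a Lorentzian $\chi'$ peaked at ${\bf Q}=(\pi,\pi)$, each planar O couples to its two Cu neighbours, and the width of the resulting shift distribution plays the role of $\Delta \nu_{\rm imp}$. All prefactors ($J$, the Curie constant, $B_0/T$, hyperfine couplings) are set to unity and the parameter values are arbitrary:

```python
import numpy as np

def o17_linewidth(xi, x_ni=0.01, L=100, seed=1):
    """Extra 17O line width from Ni-induced Cu polarization (arbitrary units)."""
    rng = np.random.default_rng(seed)
    # Ni impurities with planar concentration 3x/2
    ni = (rng.random((L, L)) < 1.5 * x_ni).astype(float)

    # Lorentzian chi'(q) peaked at the commensurate Q = (pi, pi)
    q = 2 * np.pi * np.fft.fftfreq(L)
    qx, qy = np.meshgrid(q, q, indexing='ij')
    dqx = np.angle(np.exp(1j * (qx - np.pi)))   # periodic distance to Q
    dqy = np.angle(np.exp(1j * (qy - np.pi)))
    chi_q = xi**2 / (1 + (dqx**2 + dqy**2) * xi**2)

    # mean-field induced <s^z_i>: convolution of impurity positions with chi'(r)
    s_z = np.real(np.fft.ifft2(np.fft.fft2(ni) * chi_q))

    # each planar O sees the sum of its two neighbouring Cu polarizations
    shift = s_z + np.roll(s_z, 1, axis=0)
    return shift.std()

# the broadening grows with the correlation length for the Lorentzian form
print(o17_linewidth(1.0), o17_linewidth(4.0))
```

Note that the O form factor filters out the commensurate component exactly at ${\bf Q}$, so the broadening is controlled by the tails of $\chi'({\bf q})$, which is why its $\xi$-dependence discriminates between the Gaussian and Lorentzian forms.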
The Ni spins have a non-zero average value obeying $\langle S^z_j \rangle = C_{\rm Curie} B_0/T $ with Curie constant $C_{\rm Curie}=p_{\rm eff}/(2\sqrt{3} k_{\rm B})$ [@Mah94; @Men94]. Adopting a mean field picture, the induced polarization for the electron spins at the Cu sites ${\bf r}_i$ is given by $$\langle s^z_i \rangle = \frac{J}{(g\mu_{\rm B})^2} \sum_j \chi'({\bf r}_i-{\bf r}_j) \langle S^z_j \rangle\, . \label{Sind}$$ Here, $\chi'({\bf r})$ is the real space Fourier transform of $\chi'({\bf q})$. In the following we consider two different forms of the spin susceptibility [@mp92]. For the commensurate case, there is only one peak, whereas in the incommensurate case, one has to sum over four peaks. The Gaussian form of $\chi'({\bf q})$ is given by $$\chi_{\rm G}'({\bf q})=\alpha \xi^2 \exp\left(-({\bf q}-{\bf Q})^2 \xi^2\right) \label{chig}$$ and the Lorentzian form by $$\chi_{\rm L}'({\bf q})=\alpha \xi^2/(1+({\bf q}-{\bf Q})^2 \xi^2) \ . \label{chil}$$ Since the question whether there exist incommensurate peaks in YBa$_2$Cu$_3$O$_{6+\delta}$ has not been settled yet [@Mook97], we will consider below both cases, a commensurate wavevector ${\bf Q}=(\pm \pi, \pm \pi)$, and an incommensurate one with ${\bf Q}=\delta_{\rm i} (\pm \pi, \pm \pi)$. The calculation of the real space Fourier transform finally yields $$\begin{aligned} \chi_{\rm G}'({\bf r}) &=&\frac{\alpha}{4\pi} F({\bf
--- abstract: | Let $S$ be a K3 surface with primitive curve class $\beta$. We solve the relative Gromov-Witten theory of $S \times {\mathbb{P}}^1$ in classes $(\beta,1)$ and $(\beta,2)$. The generating series are quasi-Jacobi forms and equal to a corresponding series of genus $0$ Gromov-Witten invariants on the Hilbert scheme of points of $S$. This proves a special case of a conjecture of Pandharipande and the author. The new geometric input of the paper is a genus bound for hyperelliptic curves on K3 surfaces proven by Ciliberto and Knutsen. By exploiting various formal properties we find that a key generating series is determined by the very first few coefficients. Let $E$ be an elliptic curve. As sum the form. We also calculate several linear Hodge integrals on the moduli space of stable maps to a K3 surface and the Gromov-Witten invariants of an abelian threefold in classes of type $(1,1,d)$. address: : points. Consider the relative geometry $$\label{dfgsdg} ( S \times {\mathbb{P}}^1 ) \ / \ \{ S_0, S_1, S_\infty \}$$ where $S_z$ denotes the fiber over the point $z \in {\mathbb{P}}^1$. For every $\beta \in H_2(S,{{\mathbb{Z}}})$ and integer $d \geq 0$, the pair $(\beta,d)$ determines a class in $H_2(S \times {\mathbb{P}}^1,{{\mathbb{Z}}})$ by $$(\beta,d) = \iota_{S \ast}(\beta) + \iota_{{\mathbb{P}}^1 \ast}(d [{\mathbb{P}}^1])$$ where $\iota_{S}$ and $\iota_{{\mathbb{P}}^1}$ are inclusions of fibers of the projection to ${\mathbb{P}}^1$ and $S$ respectively. Let $\beta_h \in {\mathop{\rm Pic}\nolimits}(S) \subset H_2(S,{{\mathbb{Z}}})$ be a *primitive* non-zero curve class satisfying $$\langle \beta_h, \beta_h \rangle = 2h-2$$ with respect to the intersection pairing on $S$. In Figure 1. The in Class $S$. 2. For all fixed relative conditions, the generating series of Gromov-Witten invariants (summed over the genus and the classes $\beta_h$) is a quasi-Jacobi form[^1]. // The theory is governed by an explicit Fock space formalism. 
The Jacobi form property of the generating series (part 2) is especially striking since it implies various strong identities and constraints on the curve counting invariants. In case of the Hilbert scheme of points an explanation for these symmetries has been found in the invariance of Gromov-Witten invariants under the monodromies of $\operatorname{Hilb}^d(S)$ in the moduli space of hyperkähler manifolds. For $S \times {\mathbb{P}}^1$ the geometric origin of the Jacobi form property is less clear. Nevertheless, a first hint can be found in the following fact proven by Ciliberto and Knutsen: \[thm\_CK\] Let $\beta$ be a primitive curve class on a K3 surface $S$ such that every curve in $S$ of class $\beta$ is irreducible and reduced. Then the arithmetic genus $g = p_a(C)$ of every irreducible curve $C \subset S \times {\mathbb{P}}^1$ in class $(\beta,d)$ with $d>1$ satisfies $$h \geq g + \alpha \big(g - (d-1)(\alpha+1) \big) \label{CK_equation}$$ where $\langle \beta, \beta \rangle = 2h-2$ and $\alpha = \floor{\frac{g}{2d-2}}$. An elementary check shows that (\[CK\_equation\]) implies (and, in the case $d=2$, is equivalent to) the bound $$(g+d-1)^2 \leq 4 h (d-1) + (d-1)^2 \,.$$ On the other side the coefficient $c(h,r)$ in the Fourier expansion $\sum_{h, r} c(h,r) q^h y^r$ of a weak Jacobi form of index $d-1$ is non-zero only if $$r^2 \leq 4 h (d-1) + (d-1)^2 \,.$$ We find the genus bound by Ciliberto-Knutsen to match the coefficient bound for weak Jacobi forms under the index shift[^2] $r=1-g-d$. The appearance of Jacobi forms in the Gromov-Witten theory of $S \times {\mathbb{P}}^1$ is partly reflected in the fact that $d$-gonal curves on generic K3 surfaces have many singularities. One may ask if this constraint can be used to determine Gromov-Witten invariants of $S \times {\mathbb{P}}^1$.
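The elementary check is easily mechanized. A short script (the function names are ours) compares the minimal $h$ allowed by (\[CK\_equation\]) with the minimal $h$ allowed by the Fourier-coefficient bound, confirming equivalence for $d=2$ and the implication for higher $d$ over a finite range:

```python
def ck_min_h(g, d):
    """Minimal h allowed by the Ciliberto-Knutsen bound for genus g, degree d."""
    a = g // (2 * d - 2)                 # alpha = floor(g / (2d-2))
    return g + a * (g - (d - 1) * (a + 1))

def jacobi_min_h(g, d):
    """Smallest integer h with (g+d-1)^2 <= 4h(d-1) + (d-1)^2."""
    num = (g + d - 1) ** 2 - (d - 1) ** 2
    return -(-num // (4 * (d - 1)))      # integer ceiling of num / (4(d-1))

# equivalence for d = 2, implication (CK is at least as strong) for d > 2
assert all(ck_min_h(g, 2) == jacobi_min_h(g, 2) for g in range(200))
assert all(ck_min_h(g, d) >= jacobi_min_h(g, d)
           for d in range(3, 7) for g in range(200))
print("checks passed")
```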
The main technical result of the paper shows this is possible in case $d=2$: For a key choice of incidence condition, the Gromov-Witten invariants of $S \times {\mathbb{P}}^1$ in class $(\beta_h, 2)$ are completely determined by formal properties, the constraint and a few calculations in low genus. By standard techniques this leads to a full evaluation of the relative Gromov-Witten theory of $S \times {\mathbb{P}}^1$ in classes $(\beta_h,1)$ and $(\beta_h,2)$ in terms of quasi-Jacobi forms. Relative Gromov-Witten theory of $\text{K3} \times {\mathbb{P}}^1$ {#Section:Relative_Gromov_Witten_theory_of_P1K3} ------------------------------------------------------------------ ### Definition Let $z_1, \dots, z_k$ be distinct points on ${\mathbb{P}}^1$, and consider the relative geometry $$( S \times {\mathbb{P}}^1 )\ / \ \{ S_{z_1} , \ldots , S_{z_k} \}\,. \label{123}$$ Let $(\beta, d) \in H_2(S \times {\mathbb{P}}^1, {{\mathbb{Z}}})$ be a curve class, and let $\vec{\mu}^{(1)}, \dots, \vec{\mu}^{(k)}$ be ordered partitions of size $d$ with positive parts. The moduli space $$\mathbf{M}^{\bullet}_{g,n, (\beta,d), \mathbf{\mu}} = {{\overline M}}^{\bullet}_{g,n}\big( (S \times {\mathbb{P}}^1) / \{ S_{z_1}, \dots, S_{z_k} \}, (\beta,d), (\vec{\mu}^{(1)},\ldots, \vec{\mu}^{(k)}) \big)$$ parametrizes possibly disconnected[^3] $n$-pointed relative stable maps of genus $g$ and class $(\beta,d)$ with ordered ramification profile $\vec{\mu}^{(i)}$ along the divisors $S_{z_i}$ respectively. The relative evaluation maps $${\mathop{\rm ev}\nolimits}_j^{(i)} \colon \mathbf{M}_{g,n, (\beta,d), \mathbf{\mu}}^{\bullet} \, \to\, S_{z_i} \equiv S \,, \quad j=1,\dots, l(\mu_i), \quad i=1,\dots, k$$ send a relative stable map to the $j$-th intersection point with the divisor $S_{z_i}$. We let ${\mathop{\rm ev}\nolimits}_1, \ldots, {\mathop{\rm ev}\nolimits}_n$ denote the evaluation maps of the non-relative marked points. 
Relative Gromov-Witten invariants are defined using *unordered* relative conditions. Let $\gamma_1, \dots, \gamma_{24}$ be a fixed basis of $H^{\ast}(S, {{\mathbb{Q}}})$. A cohomology weighted partition $\nu$ is a multiset[^4] of pairs $$\Big\{ (\nu_1, \gamma_{s_1}) , \ldots, (\nu_{l(\nu)}, \gamma_{s_{l(\nu)}}) \Big\}$$ where $\sum_i \nu_i
--- bibliography: - 'paper.bib' --- > [*Several new methods have been recently proposed for performing valid inference after model selection. In this paper we revisit sample splitting combined with the bootstrap (or the Normal approximation). We show that this leads to a simple, assumption-free approach to inference and we establish results on the accuracy of the method. In fact, we find new bounds on the accuracy of the bootstrap and the Normal approximation for general nonlinear parameters with increasing dimension, which we then use to assess the accuracy of regression inference. We also define new parameters that measure variable importance and that can be inferred with greater accuracy than the usual regression coefficients. Finally, we elucidate an inference-prediction trade-off: splitting increases the accuracy and robustness of inference but can decrease the accuracy of the predictions. *]{} “Investigators who use \[regression\] are not paying adequate attention to the connection - if any - between the models and the phenomena they are studying. ... By the time the models are deployed, the scientific position is nearly hopeless. Reliance on models in such cases is Panglossian ...” —David Freedman Introduction ============ We consider the problem of carrying out assumption-free statistical inference after model selection for high-dimensional linear regression. This is now a large topic and a variety of approaches have been considered under different settings – an overview of a subset of these can be found in [@dezeure2015high]. We defer a detailed discussion of the literature and list of references until Section \[sec:related\]. In this paper, we will use linear models but we do not assume that the true regression function is linear. Our main conclusions are the following: 1. Inference based on sample splitting followed by the bootstrap (or Normal approximation) gives assumption-free, robust confidence intervals under very weak assumptions.
2. The usual regression parameters are not the best choice of parameter to estimate in the weak-assumption case. We propose new parameters, called LOCO (Leave-Out-COvariates) parameters, that are interpretable, general and can be estimated accurately. 3. There is an inference-prediction trade-off: splitting increases the accuracy and robustness of inference but can decrease the accuracy of the predictions. 4. We provide new bounds on the accuracy of the Normal approximation and the bootstrap to the distribution of the projection parameter (the best linear predictor) when the dimension increases and the model is wrong. We need these bounds since we will use Normal approximations or the bootstrap after choosing the model. In fact, we provide new general bounds on Normal approximations for nonlinear parameters with increasing dimension. In particular, the accuracy of the Normal approximation for the standard regression parameters is very poor, while the approximation is very good for LOCO parameters. In addition, the accuracy of the bootstrap can be improved by using an alternative version that we call the image bootstrap. However, this version is computationally expensive. 5. Software packages implementing our methods are described in the appendix. 6. We show that the law of the projection parameter cannot be consistently estimated without sample splitting. We want to emphasize that we do not claim that the LOCO parameter is optimal in any sense. We just aim to show that there exist alternatives to the usual parameters that, when the linear model is not true, (i) are more interpretable and (ii) can be inferred more accurately. ### Setup Let $Z = (X,Y)$ denote a covariate-response pair in $\mathbb{R}^{d+1}$. We make no assumptions on the regression function $x \in \mathbb{R}^d \mapsto \mu(x) = \mathbb{E}\left[ Y | X = x \right]$ describing the relationship between the vector of covariates and the expected value of the response variable. In particular, we do not require it to be linear.
We observe $\mathcal{D}_n = (Z_1,\ldots, Z_n)$, an i.i.d. sample of size $n$ from some $P \in \mathcal{Q}_n$, where $Z_i = (X_i,Y_i)$, for $i = 1,\ldots,n$. We apply to the data a procedure $w_n$, which returns both a subset of the coordinates and an estimator of the regression function over the selected coordinates. Formally, $$\mathcal{D}_n \mapsto w_n(\mathcal{D}_n) = \left(\widehat{S}, \widehat{\mu}_{\widehat{S}}\right),$$ where $\widehat{S}$, the selected model, is a random, nonempty subset of $\{1,\ldots,d\}$ and $\widehat{\mu}_{\widehat{S}}$ is an estimator of the regression function $x \in \mathbb{R}^d \mapsto \mathbb{E}\left[ Y | X_{\widehat{S}} = x_{\widehat{S}} \right]$ restricted to the selected covariates $\widehat{S}$, where for $x \in \mathbb{R}^d$, $x_{{\widehat{S}}} = (x_j, j \in {\widehat{S}})$ and $(X,Y) \sim P$, independent of $\mathcal{D}_n$. The model selection and estimation steps comprising the procedure $w_n$ need not be related to each other, and can each be accomplished by any appropriate method. The only assumption we impose on $w_n$ is that the size of the selected model be under our control; that is, $ 0 < |\widehat{S}| \leq k $, for a pre-defined positive integer $k \leq d$, where $k$ and $d$ can both increase with sample size. For example, $\widehat{S}$ may be defined as the set of $k$ covariates with the highest linear correlations with the response and $\hat{\mu}_{\widehat{S}}$ may be any non-parametric estimator of the regression function over the coordinates in $\widehat{S}$ with bounded range. Although our framework allows for arbitrary estimators of the regression function, we will be focussing on linear estimators: $\widehat{\mu}_{\widehat{S}}(x) = \widehat{\beta}_{\widehat{S}}^\top x_{\widehat{S}} $, where $\widehat{\beta}_{\widehat{S}}$ is any estimator of the linear regression coefficients for the selected variables – such as ordinary least squares on the variables in $\widehat{S}$.
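As a concrete illustration of this setup, the following sketch (with an invented, deliberately nonlinear data-generating process) selects the $k$ most correlated covariates on one half of the data, fits ordinary least squares on the other half, and forms Normal-approximation confidence intervals with model-free sandwich standard errors. It is meant only to fix ideas, not to reproduce the exact procedures studied below:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 2000, 50, 5

# invented truth: the regression function is not linear
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]**2 + rng.normal(size=n)

# split: select the model on the first half, infer on the second half
X1, y1, X2, y2 = X[:n//2], y[:n//2], X[n//2:], y[n//2:]

# w_n: keep the k covariates most correlated with the response
corr = np.abs([np.corrcoef(X1[:, j], y1)[0, 1] for j in range(d)])
S = np.sort(np.argsort(corr)[-k:])

# OLS on the held-out half estimates the projection parameter beta_S
XS = X2[:, S]
beta = np.linalg.lstsq(XS, y2, rcond=None)[0]

# model-free (sandwich) standard errors and Normal-approximation CIs
m = len(y2)
res = y2 - XS @ beta
G = np.linalg.inv(XS.T @ XS / m)
meat = (XS * res[:, None]).T @ (XS * res[:, None]) / m
se = np.sqrt(np.diag(G @ meat @ G) / m)
ci = np.stack([beta - 1.96 * se, beta + 1.96 * se], axis=1)
print(dict(zip(S.tolist(), np.round(ci, 2).tolist())))
```

Because selection and inference use disjoint halves, the randomness of $\widehat{S}$ does not invalidate the confidence intervals, even though the linear model is wrong.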
In particular, $\widehat{\beta}_{\widehat{S}}$ may arise from fitting a sparse linear model, such as the lasso or stepwise-forward regression, in which case estimation of the regression parameters and model selection can be accomplished simultaneously with one procedure. It is important to emphasize that, since we impose minimal assumptions on the class $\mathcal{Q}_n$ of data generating distributions and allow for arbitrary model selection and estimation procedures $w_n$, we will not assume anything about the quality of the output returned by the procedure $w_n$. In particular, the selected model $\widehat{S}$ need not be a good approximation of any optimal model, however optimality may be defined. Similarly, $\hat{\mu}_{\widehat{S}}$ may not be a consistent estimator of the regression function restricted to $\widehat{S}$. Instead, our concern is to provide statistical guarantees for various criteria of significance for the selected model $\widehat{S}$, uniformly over the choice of $w_n$ and over all the distributions $P \in \mathcal{Q}_n$. We will accomplish this goal by producing confidence sets for four [*random*]{} parameters in $\mathbb{R}^{ \widehat{S}}$, each providing a different assessment of the level of statistical significance of the variables in $\widehat{S}$ from a purely [*predictive*]{} standpoint. All of the random parameters under consideration are functions of the data generating distribution $P$, of the sample $\mathcal{D}_n$ and, therefore, of its size $n$ and, importantly, of the model selection and estimation procedure $w_n$. Below, $(X,Y)$ denotes a draw from $P$, independent of the sample $\mathcal{D}_n$. Thus the distribution of $(X,Y)$ is the same as their conditional distribution given $\mathcal{D}_n$. - [**The projection parameter $\beta_{\widehat{S}}$.
**]{} The linear projection parameter $\beta_{\widehat{S}}$ is defined to be the vector of coefficients of the best linear predictor of $Y$ using $X_{\widehat{S}}$: $$\beta_{\widehat{S}} = \operatorname*{argmin}_{\beta \in \mathbb{R}^{\widehat{S}}} \mathbb{E}_{X,Y} \left[ (Y- \beta^\top X_{\widehat{S}})^2 \right],$$ where $\mathbb{E}_{X,Y}$ denotes the expectation with respect to the distribution of $(X,Y)$. The terminology *projection parameter* refers to the fact that $X^\top \beta_{{\widehat{S}}}$ is the projection of $Y$ onto the linear space of all random variables that can be obtained as linear functions of $X_{{\widehat{S}}}$. For a
--- abstract: 'The thermodynamic entropy of an isolated system is given by its von Neumann entropy. Over the last few years, there has been intense activity to understand thermodynamic entropy from the principles of quantum mechanics. More specifically, is the (von Neumann) entropy of entanglement between a system and some (separate) environment related to the thermodynamic entropy? It is difficult to obtain this relation for many-body systems, hence most of the work in the literature has focused on systems with a small number of degrees of freedom. In this work, we consider black-holes — simple yet macroscopic systems — for which such a direct connection can be made. Within the adiabatic approximation, we explicitly show that the Hawking temperature is indeed given by the rate of change of the entropy of entanglement across a black hole’s horizon with respect to the system energy. This provides further numerical evidence toward understanding the key features of black hole thermodynamics from the viewpoint of quantum information theory.' author: - 'S. Santhosh Kumar' - 'S. Shankaranarayanan' - 'S. Krishnan' More importantly, it relates entropy, a phenomenological quantity in thermodynamics, to the volume of a certain region in phase space [@Wehrl1978-RMP]. The laws of thermodynamics are also equally applicable to quantum mechanical systems. Cold-atom experiments have begun to probe thermodynamic behavior in isolated quantum systems [@yukalov2007-LPL]. Furthermore, the availability of Feshbach resonances is shown to be useful to control the strength of interactions, to realize strongly correlated systems, and to drive these systems between different quantum phases in a controlled manner [@Osterloh2002; @wu2004; @rey2010-njp; @santos2010-njp]. These experiments have raised the possibility of understanding the emergence of thermodynamics from principles of quantum mechanics.
The fundamental questions that one hopes to answer from these investigations are: How do the macroscopic laws of thermodynamics emerge from the reversible quantum dynamics? How to understand the thermalization of closed quantum systems? What are the relations between information, thermodynamics and quantum mechanics [@2006-Lloyd-NPhys; @2008-Brandao; @horodecki-2008; @popescu97; @vedral98; @plenio98]? While answers to these questions for many-body systems are out of sight, some important progress has been made by considering simple lattice systems (See, for instance, Refs. [@1994-srednicki; @rigol2008-nature; @2012-srednicki; @rahul2015-ARCMP]). In this work, in an attempt to address some of the above questions, our focus is on another simple, yet, macroscopic system — black-holes. It is known that the entanglement entropy of quantum fields across a black-hole’s horizon scales as the area of the horizon [@shanki2013]. However, this has never been directly related to the Hawking temperature [@hawking75]. Here we show that: (i) Hawking temperature is given by the rate of change of the entropy of entanglement across a black hole’s horizon with respect to the system energy. (ii) The information lost across the horizon is related to black hole entropy and laws of black hole mechanics emerge from entanglement across the horizon. The model we consider is complementary to other models that investigate the emergence of thermodynamics [@2006-Lloyd-NPhys; @2008-Brandao; @horodecki-2008; @popescu97; @vedral98; @plenio98]: First, we evaluate the entanglement entropy for relativistic free scalar fields propagating in the black-hole background, while the simple lattice models that were considered are non-relativistic. Second, quantum entanglement can be unambiguously quantified only for bipartite systems [@horodecki2009; @eisert2010]. While the bipartite split is an approximation for applications to many-body systems, here, the event horizon provides a natural boundary. A relativistic free scalar field is, as always, the simplest model for which to evaluate the entanglement.
However, even for free fields it is difficult to obtain the entanglement entropy. The free fields are Gaussian and these states are entirely characterized by the covariance matrix. It is generally difficult to handle covariance matrices in an infinite dimensional Hilbert space [@eisert2010]. There are two ways to calculate entanglement entropy in the literature. One approach is to use the replica trick, which rests on evaluating the partition function on an n-fold cover of the background geometry where a cut is introduced throughout the exterior of the entangling surface [@eisert2010; @cardy2004]. The second approach involves discretizing the Hamiltonian of the field and evaluating the reduced density matrix directly in real space. We adopt this approach as entanglement entropy may have more symmetries than the Lagrangian of the system [@krishnand2014]. To remove the spurious effects due to the coordinate singularity at the horizon[^1], we consider the Lemaître coordinate which is explicitly time-dependent. One of the features that we exploit in our computation is that for a fixed Lemaître time coordinate, the Hamiltonian of the scalar field in Schwarzschild space-time reduces to the scalar field Hamiltonian in flat space-time [@shanki-review]. The procedure we adopt is the following: (i) We perturbatively evolve the Hamiltonian about the fixed Lemaître time. (ii) We evaluate the entanglement entropy of the evolved state at all times. We show that at all times, the entanglement entropy satisfies the area law, i.e. $S(\epsilon) = C(\epsilon) A$, where $S(\epsilon)$ is the entanglement entropy evaluated at a given Lemaître time $(\epsilon)$, $C(\epsilon)$ is the proportionality constant that depends on $\epsilon$, and $A$ is the area of the black hole horizon. In other words, the value of the entropy is different at different times. (iii) We calculate the change in entropy as a function of $\epsilon$, i.e., $\Delta S/\Delta \epsilon$. Similarly, we calculate the change in energy $E(\epsilon)$, i.e., $\Delta E/\Delta \epsilon$.
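For Gaussian states the reduced density matrix is fixed by the covariance matrix, so the entropy follows from symplectic eigenvalues. The following sketch applies this to a nearest-neighbour chain of coupled oscillators, a crude stand-in for the discretized scalar field; the chain length, mass term and half-chain bipartition are arbitrary illustrative choices, not the model used in this paper:

```python
import numpy as np

def entanglement_entropy(K, inside):
    """Ground-state entanglement entropy for H = p.p/2 + x.K.x/2 (hbar = 1).

    `inside` lists the traced-out oscillators; the entropy of the
    complementary block is returned (covariance-matrix method).
    """
    w, U = np.linalg.eigh(K)
    Omega = U @ np.diag(np.sqrt(w)) @ U.T        # K^(1/2)
    Xc = 0.5 * np.linalg.inv(Omega)              # <x x^T> in the ground state
    Pc = 0.5 * Omega                             # <p p^T>
    keep = np.setdiff1d(np.arange(len(K)), inside)
    XA, PA = Xc[np.ix_(keep, keep)], Pc[np.ix_(keep, keep)]
    nu = np.sqrt(np.linalg.eigvals(XA @ PA).real)  # symplectic eigenvalues
    nu = np.clip(nu, 0.5 + 1e-12, None)            # guard against roundoff
    return float(np.sum((nu + 0.5) * np.log(nu + 0.5)
                        - (nu - 0.5) * np.log(nu - 0.5)))

# nearest-neighbour chain with a small mass term (illustrative values)
N = 60
K = 2.1 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
print(entanglement_entropy(K, inside=np.arange(N // 2)))
```

Since the global state is pure, tracing out either half gives the same entropy, which is a useful consistency check on any such computation.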
For several black-hole metrics, we explicitly show that the ratio of the rate of change of energy and the rate of change of entropy is identical to the Hawking temperature. In Sec. (\[sec.1\]), we set up our model Hamiltonian to obtain the entanglement entropy in ($D+2$)-dimensional space-time. Also, we define the [*entanglement temperature*]{}, which has the same structure as in statistical mechanics, that is, the ratio of the change in total energy to the change in entanglement entropy. Also, by means of a statistical model (\[sec.2\]), we numerically show that for different black hole space-times, the divergence-free [*entanglement temperature*]{} matches approximately with the Hawking temperature obtained from the general theory of relativity and its Lovelock generalization. This provides strong evidence towards the interpretation of entanglement entropy as the Bekenstein-Hawking entropy. Finally, in Sec. (\[sec.3\]), we conclude with a discussion to connect our analysis with the eigenstate thermalization hypothesis for closed quantum systems [@2012-srednicki]. Throughout this work, the metric signature we adopt is $(+,-,-,-)$ and we set $\hbar=k_{B} =c=1$. Model and Setup {#sec.1} =============== Motivation ---------- Before we go on to evaluating the entanglement entropy (EE) of a quantum scalar field propagating in a black-hole background, we briefly discuss the motivation for studying the entanglement entropy of a scalar field. Consider the Einstein-Hilbert action with a cosmological constant $\Lambda$. Perturbing the metric about a background, $\bar{g}_{\mu\nu} = g_{\mu\nu} + h_{\mu\nu}$, the action up to second order describes a massive ($\Lambda$) spin-2 field ($h_{\mu\nu}$) propagating in the background metric $g_{\mu\nu}$ [@shanki-review]. Rewriting $h_{\mu\nu} = M_{_{\rm Pl}}^{-1} \epsilon_{\mu\nu} \Phi(x^{\mu})$ \[where $\epsilon_{\mu\nu}$ is the constant polarization tensor\], the action can be written as $$S_{_{\rm EH}} (g, h) = - \frac{1}{2} \int d^4x \, \sqrt{-g} \left( g^{\mu\nu} \partial_\mu \Phi \, \partial_\nu \Phi - \Lambda \Phi^2 \right),$$
which is the action for the massive scalar field propagating in the background metric $g_{\mu\nu}$. In this work, we consider a massless ($\Lambda = 0$, corresponding to asymptotically flat space-time) scalar field propagating in $(D + 2)-$dimensional spherically symmetric space-time. Model ----- The canonical action for the massless, real scalar field $\Phi(x^{\mu})$ propagating in $(D + 2)-$dimensional space-time is $$\label{equ21} \mathcal{S}=\frac{1}{2} \int d^{D+2}{\bf x} \, \sqrt{-g} \; g^{\mu\nu} \, \partial_{\mu}\Phi({\bf x}) \, \partial_{\nu}\Phi({\bf x})$$
--- author: - 'M. Csanád[^1]' bibliography: - '../../../master.bib' title: Time evolution of the sQGP with hydrodynamic models --- Introduction ============ The almost perfect fluidity of the experimentally created strongly interacting Quark-Gluon-Plasma at the Relativistic Heavy Ion Collider (RHIC) [@Adcox:2004mh] showed that relativistic hydrodynamic models can be applied in describing the space-time picture of heavy-ion collisions and in inferring the relation between experimental observables and the initial conditions. In this paper we investigate the relativistic, ellipsoidally symmetric model of Ref. [@Csorgo:2003ry]. Hadronic observables were calculated in Ref. [@Csanad:2009wc], while photonic observables in Ref. [@Csanad:2011jq]. We also show new solutions, which can be regarded as generalizations of the model of Ref. [@Csorgo:2003ry] to arbitrary, temperature dependent speed of sound, originally published in Ref. [@Csanad:2012hr]. All calculations are performed in the lab-frame. The metric tensor is $g_{\mu\nu}=diag{\left({1,-1,-1,-1}\right)}$. Coordinate proper-time is defined as $\tau=\sqrt{t^2-|{\mathbf{r}}|^2}$. The fluid four-velocity is $u^\mu=\gamma{\left({1,{\mathbf{v}}}\right)}$, with ${\mathbf{v}}$ being the three-velocity, and $\gamma=1/\sqrt{1-|{\mathbf{v}}|^2}$. An analytic hydrodynamical solution is a functional form for the pressure $p$, energy density $\varepsilon$, entropy density $\sigma$, temperature $T$, and (if the fluid consists of individual conserved particles, or if there is some conserved charge or number) the conserved number density $n$.
Then basic hydrodynamical equations are the continuity and energy-momentum-conservation equations: $$\begin{aligned} \partial_\mu{\left({n u^\mu}\right)} = 0\;\textnormal{ and }\;\partial_\nu T^{\mu \nu} = 0\label{e:em}.\end{aligned}$$ The energy-momentum tensor of a perfect fluid is $$\begin{aligned} T^{\mu\nu} ={\left({\varepsilon+p}\right)}u^\mu u^\nu-pg^{\mu \nu} .\end{aligned}$$ The energy-momentum conservation equation can be then transformed to (by projecting it orthogonal and parallel to $u^\mu$, respectively): $$\begin{aligned} {\left({\varepsilon+p}\right)}u^{\nu}\partial_{\nu}u^{\mu} & ={\left({g^{\mu\nu}-u^{\mu}u^{\nu}}\right)}\partial_{\nu}p,\label{e:euler} \\ {\left({\varepsilon+p}\right)}\partial_{\nu}u^{\nu}+u^{\nu}\partial_{\nu}\varepsilon & = 0\label{e:energy}.\end{aligned}$$ [Eq. (\[e:euler\])]{} is the relativistic Euler equation, while [Eq. (\[e:energy\])]{} is the relativistic form of the energy conservation equation. Note also that [Eq. (\[e:energy\])]{} is equivalent to the entropy conservation equation: $$\begin{aligned} \label{e:scont} \partial_\mu{\left({\sigma u^\mu}\right)}=0 .\end{aligned}$$ The Equation of State (EoS) closes the set of equations. We investigate the following EoS: $$\begin{aligned} \label{e:eos} \varepsilon = \kappa{\left({T}\right)} p ,\end{aligned}$$ while the speed of sound $c_s$ is calculated as $c_s = \sqrt{\partial p/\partial \varepsilon}$, i.e. for constant $\kappa$, the relation $c_s = 1/\sqrt{\kappa}$ holds. For the case when there is a conserved $n$ number density, we also use the well-known relation for ideal gases: $$\begin{aligned} \label{e:tdef} p=nT. \end{aligned}$$ For $\kappa{\left({T}\right)}=$ constant, an ellipsoidally symmetric solution of the hydrodynamical equations is presented in Ref.
[@Csorgo:2003ry]: $$\begin{aligned} \label{e:tsol0} u^\mu = \frac{x^\mu}{\tau},\quad n = n_0\frac{V_0}{V}\nu{\left({s}\right)},\quad T = T_0{\left({\frac{V_0}{V}}\right)}^{{\frac{1}{\kappa}}}{\frac{1}{\nu{\left({s}\right)}}} ,\quad V = \tau^3,\quad s = \frac{r_x^2}{X^2} + \frac{r_y^2}{Y^2} + \frac{r_z^2}{Z^2},\end{aligned}$$ where $n_0$ and $T_0$ correspond to the proper time when the arbitrarily chosen volume $V_0$ was reached (i.e. $\tau_0 = V_0^{1/3}$), and $\nu{\left({s}\right)}$ is an arbitrary function of $s$. The value of $s$ is constant along the trajectory of any given fluid element, i.e. $u^\mu\partial_\mu s=0$. We call $s$ a *scaling variable*, and $V$ the effective volume of a characteristic ellipsoid. Furthermore, $X$, $Y$, and $Z$ are the time (lab-frame time $t$) dependent principal axes of an expanding ellipsoid. Their time derivatives, $\dot{X}$, $\dot{Y}$, and $\dot{Z}$, are constants. Photon and hadron observables for constant EoS ============================================== From the above hydrodynamic solution with a constant EoS, source functions can be written up. For bosonic hadrons, it takes the following form [@Csanad:2009wc]: $$\begin{aligned} S(x,p)d^4x=\mathcal{N}\frac{p_{\mu}\,d^3\Sigma^{\mu}(x)H(\tau)d\tau}{n(x)\exp\left(p_{\mu}u^{\mu}(x)/T(x)\right)-1},\end{aligned}$$ where $\mathcal{N}=g/(2\pi)^3$ (with $g$ being the degeneracy factor), $H(\tau)$ is the proper-time probability distribution of the freeze-out. It is assumed to be a narrow distribution centered at the freeze-out proper time $\tau_0$. Furthermore, $\mu(x)/T(x)=\ln n(x)$ is the fugacity factor and $d^3 \Sigma_\mu(x)p^\mu$ is the Cooper-Frye factor (describing the flux of the particles), and $d^3 \Sigma_\mu(x)$ is the vector-measure of the freeze-out hyper-surface, pseudo-orthogonal to $u^\mu$. Here the source distribution is normalized such as $\int S(x,p) d^4 x d^3{\bf p}/E = N$, i.e. one gets the total number of particles $N$ (using $c$=1, $\hbar$=1 units). Note that one has to change variables from $\tau$ to $t$, and so a Jacobian of $d\tau/dt=t/\tau$ has to be taken into account. 
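As a quick consistency check (our addition, not part of the original text), the continuity equation can be verified symbolically for the spherically symmetric special case $\nu(s)=1$ of the solution above, where $n=n_0(\tau_0/\tau)^3$ and $u^\mu=x^\mu/\tau$:

```python
import sympy as sp

t, x, y, z, n0, tau0 = sp.symbols('t x y z n_0 tau_0', positive=True)
tau = sp.sqrt(t**2 - x**2 - y**2 - z**2)

n = n0 * (tau0 / tau)**3                    # V = tau^3 and nu(s) = 1
u = [t/tau, x/tau, y/tau, z/tau]            # u^mu = x^mu / tau (Hubble flow)

coords = [t, x, y, z]
divergence = sum(sp.diff(n*u[mu], coords[mu]) for mu in range(4))
print(sp.simplify(divergence))              # -> 0, i.e. d_mu(n u^mu) = 0
```

The same computation with a nontrivial $\nu(s)$ requires the explicit time dependence of $X$, $Y$, $Z$, but proceeds identically.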
For the source function of photon creation we have [@Csanad:2011jq]: $$\begin{aligned} \label{e:source} S(x,p)d^4x = \mathcal{N'}\frac{p_{\mu}\,d^3\Sigma^{\mu}(x)dt}{\exp\left(p_{\mu}u^{\mu}(x)/T(x)\right)-1} = \mathcal{N'}\frac{p_{\mu}u^{\mu}}{\exp\left(p_{\mu}u^{\mu}(x)/T(x)\right)-1}\,d^4x \end{aligned}$$ where $p_{\mu}d^3\Sigma^{\mu}$ is again the Cooper-Frye factor of the emission hyper-surfaces. Similarly to the previous case, we assume that the hyper-surfaces are pseudo-orthogonal to $u^\mu$, thus $d^3\Sigma^{\mu}(x) = u^{\mu}d^3x$. This then yields $p_{\mu}u^{\mu}$, which is the energy of the photon in the co-moving system. Photon creation is then assumed to happen from an initial time $t_i$ until a point sufficiently near the freeze-out. From these source functions, observables can be calculated, as detailed in Refs. [@Csanad:2009wc; @Csanad:
--- abstract: 'In this expository and survey paper, along one of main lines of bounding the ratio of two gamma functions, we look back and analyse some inequalities, several complete monotonicity of functions involving ratios of two gamma or $q$-gamma functions, and necessary and sufficient conditions for functions involving ratios of two gamma or $q$-gamma functions to be logarithmically completely monotonic.' --- The gamma and $q$-gamma functions --------------------------------- It is common knowledge that special functions $\Gamma(x)$, $\psi(x)$ and $\psi^{(k)}(x)$ for $k\in\mathbb{N}$ are fundamental and important and have much extensive applications in mathematical sciences. The $q$-analogues of $\Gamma$ and $\psi$ are defined [@andrews pp. 493–496] for $x>0$ by $$\begin{gathered} \label{q-gamma-dfn} \Gamma_q(x)=(1-q)^{1-x}\prod_{i=0}^\infty\frac{1-q^{i+1}}{1-q^{i+x}},\quad 0<q<1,\\ \label{q-gamma-dfn-q>1} \Gamma_q(x)=(q-1)^{1-x}q^{\binom{x}2}\prod_{i=0}^\infty\frac{1-q^{-(i+1)}}{1-q^{-(i+x)}}, \quad q>1,\end{gathered}$$ and $$\begin{aligned} \label{q-gamma-1.4} \psi_q(x)=\frac{\Gamma_q'(x)}{\Gamma_q(x)}&=-\ln(1-q)+\ln q \sum_{k=0}^\infty\frac{q^{k+x}}{1-q^{k+x}}\\ &=-\ln(1-q)-\int_0^\infty\frac{e^{-xt}}{1-e^{-t}}\operatorname{d\mspace{-2mu}}\gamma_q(t) \label{q-gamma-1.5}\end{aligned}$$ for $0<q<1$, where $\operatorname{d\mspace{-2mu}}\gamma_q(t)$ is a discrete measure with positive masses $-\ln q$ at the positive points $-k\ln q$ for $k\in\mathbb{N}$, more accurately, $$\gamma_q(t)= \begin{cases} -\ln q\sum\limits_{k=1}^\infty\delta(t+k\ln q),&0<q<1,\\ t,&q=1. \end{cases}$$ See [@Ismail-Muldoon-119 p. 311]. 
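As a numerical sketch (our illustration), $\Gamma_q(x)$ for $0<q<1$ can be evaluated by truncating the infinite product in its definition; the truncation depth $K$ is an accuracy parameter chosen by hand. The exact recurrence $\Gamma_q(x+1)=\frac{1-q^x}{1-q}\Gamma_q(x)$ and the limit $\Gamma_q\to\Gamma$ as $q\to1$ serve as sanity checks:

```python
import math

def q_gamma(x, q, K=20000):
    """Truncated product for Gamma_q(x), 0 < q < 1 (first K factors)."""
    prod = 1.0
    for i in range(K):
        prod *= (1.0 - q**(i + 1)) / (1.0 - q**(i + x))
    return (1.0 - q)**(1.0 - x) * prod

q, x = 0.9, 2.5
# exact recurrence: Gamma_q(x+1) = [x]_q * Gamma_q(x) with [x]_q = (1-q^x)/(1-q)
lhs = q_gamma(x + 1.0, q)
rhs = (1.0 - q**x) / (1.0 - q) * q_gamma(x, q)
print(abs(lhs - rhs))                 # -> essentially 0 (machine precision)

# as q -> 1^-, Gamma_q(x) approaches Gamma(x)
gq_limit = q_gamma(x, 0.999, K=200000)
print(gq_limit, math.gamma(x))        # both close to Gamma(2.5) = 1.3293...
```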
The $q$-gamma function $\Gamma_q(z)$ has the following basic properties: $$\lim_{q\to1^+}\Gamma_q(z)=\lim_{q\to1^-}\Gamma_q(z)=\Gamma(z)\quad \text{and}\quad \Gamma_q(x)=q^{\binom{x-1}2}\Gamma_{1/q}(x).$$ The definition and properties of completely monotonic functions --------------------------------------------------------------- A function $f$ is said to be completely monotonic on an interval $I$ if $f$ has derivatives of all orders on $I$ and $(-1)^{n}f^{(n)}(x)\ge0$ for $x \in I$ and $n \ge0$. The class of completely monotonic functions has the following basic properties. \[p.161-widder\] A necessary and sufficient condition that $f(x)$ should be completely monotonic for $0<x<\infty$ is that $$f(x)=\int_0^\infty e^{-xt}\operatorname{d\mspace{-2mu}}\alpha(t),$$ where $\alpha(t)$ is nondecreasing and the integral converges for $0<x<\infty$. \[p.83-bochner\] If $f(x)$ is completely monotonic on $I$, $g(x)\in I$, and $g'(x)$ is completely monotonic on $(0,\infty)$, then $f(g(x))$ is completely monotonic on $(0,\infty)$. The logarithmically completely monotonic functions -------------------------------------------------- A positive and $k$-times differentiable function $f(x)$ is said to be $k$-log-convex (or $k$-log-concave, respectively) with $k\ge2$ on an interval $I$ if and only if $[\ln f(x)]^{(k)}$ exists and $[\ln f(x)]^{(k)}\ge0$ (or $[\ln f(x)]^{(k)}\le0$, respectively) on $I$. A positive function $f(x)$ is said to be logarithmically completely monotonic on an interval $I\subseteq\mathbb{R}$ if it has derivatives of all orders on $I$ and its logarithm $\ln f(x)$ satisfies $(-1)^k[\ln f(x)]^{(k)}\ge0$ for $k\in\mathbb{N}$ on $I$. The notion “logarithmically completely monotonic function” was first put forward in [@Atanassov] without an explicit definition. This terminology was explicitly recovered in [@minus-one] whose revised and expanded version was formally published as [@minus-one.tex-rev]. 
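As a toy illustration of the definition (our example, not part of the original survey), complete monotonicity of $f(x)=1/(x+1)$, whose Widder representation has $\operatorname{d}\alpha(t)=e^{-t}\operatorname{d}t$, can be checked symbolically for the first few orders:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f = 1 / (x + 1)   # equals Integral(exp(-x*t)*exp(-t), (t, 0, oo)), so alpha'(t) = exp(-t) >= 0

# check (-1)^n f^(n)(x) >= 0 for the first few orders: here it equals n!/(x+1)^(n+1)
for n in range(6):
    expr = (-1)**n * sp.diff(f, x, n)
    assert sp.simplify(expr - sp.factorial(n) / (x + 1)**(n + 1)) == 0
print("1/(x+1) satisfies the completely monotonic inequalities for n = 0..5")
```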
It has been proved once and again in [@CBerg; @clark-ismail-NFAA.tex; @clark-ismail.tex; @compmon2; @absolute-mon.tex; @minus-one; @minus-one.tex-rev; @schur-complete] that a logarithmically completely monotonic function on an interval $I$ must also be completely monotonic on $I$. C. Berg points out in [@CBerg] that these functions are the same as those studied by Horn [@horn] under the name infinitely divisible completely monotonic functions. For more information, please refer to [@CBerg; @auscmrgmia] and related references therein. Outline of this paper --------------------- The history of bounding the ratio of two gamma functions has been longer than sixty years since the paper [@wendel] by J. G. Wendel was published in 1948. The motivations of bounding the ratio of two gamma functions are various, including establishment of asymptotic relations, refinements of Wallis’ formula, approximation to $\pi$, and needs in statistics and other mathematical sciences. In this expository and survey paper, along one of main lines of bounding the ratio of two gamma functions, we look back and analyse some inequalities such as Wendel’s double inequality, Kazarinoff’s refinement of Wallis’ formula, Watson’s monotonicity, Gautschi’s double inequality, and Kershaw’s first double inequality, the complete monotonicity of several functions involving ratios of two gamma or $q$-gamma functions by Bustoz, Ismail, Lorch and Muldoon, and necessary and sufficient conditions for functions involving ratios of two gamma or $q$-gamma functions to be logarithmically completely monotonic. Some open problems related to these functions are also discussed. Wendel’s double inequality -------------------------- Our starting point is a paper published in 1948 by J. G. Wendel, which is the earliest one we can search out to the best of our ability. 
In order to establish the classical asymptotic relation $$\label{wendel-approx} \lim_{x\to\infty}\frac{\Gamma(x+s)}{x^s\Gamma(x)}=1$$ for real $s$ and $x$, by using Hölder’s inequality for integrals, J. G. Wendel [@wendel] proved elegantly the double inequality $$\label{wendel-inequal} \biggl(\frac{x}{x+s}\biggr)^{1-s}\le\frac{\Gamma(x+s)}{x^s\Gamma(x)}\le1$$ for $0<s<1$ and $x>0$. \[rem-2.1.1\] The inequality (\[wendel-inequal\]) can be rewritten for $0<s<1$ and $x>0$ as $$\label{wendel-inequal-rew} (x+
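Wendel's double inequality can be verified numerically (our illustration; `math.lgamma` is used so that large arguments do not overflow):

```python
import math

def wendel_ratio(x, s):
    # Gamma(x+s) / (x^s * Gamma(x)), via lgamma to avoid overflow for large x
    return math.exp(math.lgamma(x + s) - s*math.log(x) - math.lgamma(x))

for x in [0.5, 1.0, 3.0, 10.0, 100.0]:
    for s in [0.1, 0.5, 0.9]:
        ratio = wendel_ratio(x, s)
        assert (x/(x + s))**(1.0 - s) <= ratio <= 1.0
# consistent with the asymptotic relation, the ratio approaches 1 as x grows
print(wendel_ratio(1e6, 0.5))
```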
--- abstract: 'It is known that multidimensional complex potentials obeying $\mathcal{PT}$-symmetry may possess all real spectra and continuous families of solitons. Recently it was shown that for multi-dimensional systems these features can persist when the parity symmetry condition is relaxed so that the potential is invariant under reflection in only a single spatial direction. We examine the existence, stability and dynamical properties of localized modes within the cubic nonlinear Schrödinger equation in such a scenario of partially $\mathcal{PT}$-symmetric potential.' author: - 'J. D’Ambroise' - 'P.G. Kevrekidis' --- Introduction ============ The study of $\mathcal{PT}$-symmetric Hamiltonians was initiated by Bender and co-workers [@Bender2]. Originally, it was proposed as an alternative to the standard quantum theory, where the Hamiltonian is postulated to be Hermitian. In these works, it was instead found that Hamiltonians invariant under $\mathcal{PT}$-symmetry, which are not necessarily Hermitian, may still give rise to completely real spectra. Thus, the proposal of Bender and co-authors was that these Hamiltonians are appropriate for the description of physical settings. In the important case of Schr[ö]{}dinger-type Hamiltonians, which include the usual kinetic-energy operator and the potential term, $% V(x) $, the $\mathcal{PT}$-invariance is consonant with complex potentials, subject to the constraint that $V^{\ast }(x)=V(-x)$. A decade later, it was realized (and since then it has led to a decade of particularly fruitful research efforts) that this idea can find fertile ground for its experimental realization although not in quantum mechanics where it was originally conceived. In this vein, numerous experimental realizations sprang up in the areas of linear and nonlinear optics [@Ruter; @Peng2014; @peng2014b; @RevPT; @Konotop], electronic circuits [@Schindler1; @Schindler2; @Factor], and mechanical systems [@Bender3], among others. Very recently, this now mature field of research has been summarized in two comprehensive reviews [@RevPT; @Konotop]. 
One of the particularly relevant playgrounds for the exploration of the implications of $\mathcal{PT}$-symmetry is that of nonlinear optics, especially because it can controllably involve the interplay of $\mathcal{PT}$-symmetry and nonlinearity. In this context, the propagation of light (in systems such as optical fibers or waveguides [@RevPT; @Konotop]) is modeled by the nonlinear Schrödinger equation of the form: $$\begin{aligned} \label{nls} i\Psi_z + \Psi_{xx} + \Psi_{yy} + U(x,y)\Psi + \sigma |\Psi|^2\Psi = 0.\end{aligned}$$ In the optics notation that we use here, the evolution direction is denoted by $z$, the propagation distance. Here, we restrict our considerations to two spatial dimensions and assume that the potential $U(x,y)$ is complex valued, representing gain and loss in the optical medium, depending on the sign of the imaginary part (negative for gain, positive for loss) of the potential. In this two-dimensional setting, the condition of full $\mathcal{PT}$-symmetry in two dimensions is that $U^*(x,y) = U(-x,-y)$. Potentials with full $\mathcal{PT}$ symmetry have been shown to support continuous families of soliton solutions [@OptSolPT; @WangWang; @LuZhang; @StabAnPT; @ricardo]. However, an important recent development was the fact that the condition of (full) $\mathcal{PT}$ symmetry can be relaxed. That is, either the condition $U^*(x,y)=U(-x,y)$ or $U^*(x,y)=U(x,-y)$ of, so-called, partial $\mathcal{PT}$ symmetry can be imposed, yet the system will still maintain all real spectra and continuous families of soliton solutions [@JYppt]. In the original contribution of [@JYppt], only the focusing nonlinearity case was considered for two select branches of solutions and the stability of these branches was presented for isolated parametric cases (of the frequency parameter of the solution). Our aim in the present work is to provide a considerably more “spherical” perspective of the problem. 
In particular, we examine the bifurcation of nonlinear modes from [*all three*]{} point spectrum eigenvalues of the underlying linear Schr[ö]{}dinger operator of the partially $\mathcal{PT}$-symmetric potential. Upon presenting the relevant model (section 2), we perform the relevant continuations (section 3) unveiling the existence of nonlinear branches [*both*]{} for the focusing and for the defocusing nonlinearity case. We also provide a systematic view towards the stability of the relevant modes (section 4), by characterizing their principal unstable eigenvalues as a function of the intrinsic frequency parameter of the solution. In section 5, we complement our existence and stability analysis by virtue of direct numerical simulations that manifest the result of the solutions’ dynamical instability when they are found to be unstable. Finally, in section 6, we summarize our findings and present our conclusions, as well as discuss some possibilities for future studies. Model, Theoretical Setup and Linear Limit ========================================= Motivated by the partially $\mathcal{PT}$-symmetric setting of [@JYppt], we consider the complex potential $U(x,y) = V(x,y) + iW(x,y)$ where $$\begin{aligned} V &=& \left( ae^{ - (y - y_0)^2} + be^{ - (y + y_0)^2}\right)\left(e^{-(x - x_0)^2 } + e^{-(x + x_0)^2 }\right)\nonumber \\ W &=& \beta\left(ce^{ - (y - y_0)^2} + de^{ - (y + y_0)^2}\right)\left(e^{-(x - x_0)^2 } - e^{-(x + x_0)^2 }\right).\label{VW}\end{aligned}$$ with real constants $\beta$, $a\neq b $ and $c\neq -d$. The potential is chosen with partial $\mathcal{PT}$-symmetry so that $U^*(x,y)=U(-x,y)$. That is, the real part is even in the $x$-direction with $V(x,y) = V(-x,y)$ and the imaginary part is odd in the $x$-direction with $-W(x,y) = W(-x,y)$. Note that, since $a\neq b$ and $c\neq -d$, the potential possesses no corresponding symmetry in the $y$-direction. 
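The partial $\mathcal{PT}$-symmetry of the potential (\[VW\]) can be checked numerically (our illustration; the parameter values are those used in the text, and the grid is an arbitrary choice):

```python
import numpy as np

# parameter values from the text: a=3, b=c=2, d=1, beta=0.1, x0=y0=1.5
def U(x, y, a=3.0, b=2.0, c=2.0, d=1.0, beta=0.1, x0=1.5, y0=1.5):
    V = (a*np.exp(-(y - y0)**2) + b*np.exp(-(y + y0)**2)) \
        * (np.exp(-(x - x0)**2) + np.exp(-(x + x0)**2))
    W = beta*(c*np.exp(-(y - y0)**2) + d*np.exp(-(y + y0)**2)) \
        * (np.exp(-(x - x0)**2) - np.exp(-(x + x0)**2))
    return V + 1j*W

xg, yg = np.meshgrid(np.linspace(-4, 4, 81), np.linspace(-4, 4, 81))
print(np.allclose(np.conj(U(xg, yg)), U(-xg, yg)))   # -> True: partial PT symmetry in x
print(np.allclose(np.conj(U(xg, yg)), U(xg, -yg)))   # -> False: no PT symmetry in y
```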
In [@JYppt] it is shown that the spectrum of the potential $U$ can be all real as long as $|\beta|$ is below a threshold value, after which a ($\mathcal{PT}$-) phase transition occurs; this is a standard property of $\mathcal{PT}$-symmetric potentials. We focus on the case $\beta=0.1$ and $a=3, b=c=2, d=1$ for which the spectrum is real, i.e., below the relevant transition threshold. Figure \[figVW\] shows plots of the potential $U$. The real part of the potential is shown on the left, while the imaginary part associated with gain-loss is on the right; the gain part of the potential corresponds to $W<0$ and occurs for $x<0$, while the loss part with $W>0$ occurs for $x>0$. Figure \[figVWeigs\] shows the spectrum of $U$, i.e., eigenvalues for the underlying linear Schr[ö]{}dinger problem $(\nabla^2 + U)\psi_0 = \mu_0\psi_0$. The figure also shows the corresponding eigenvectors for the three discrete real eigenvalues $\mu_0$. It is from these modes that we will seek bifurcations of nonlinear solutions in what follows. ![The plots show the spatial distribution of real ($V$, left panel) and imaginary ($W$, right panel) parts of the potential $U$ with $x_0=y_0=1.5$.[]{data-label="figVW"}] ![The top left plot shows the spectrum of the Schr[ö]{}dinger operator associated with the potential $U$ in the complex plane (see also the text). Plots of the magnitude of the normalized eigenvectors for the three discrete eigenvalues $\mu_0$ are shown in the other three plots.[]{data-label="figVWeigs"}](VWeigs.png){width="3.5in"} Existence: Nonlinear Modes Bifurcating from the Linear Limit ============================================================ As is customary, we focus on stationary soliton solutions of (\[nls\]) of the form $\Psi(x,y,z) = \psi(x,y)e^{i\mu z}$. Thus one obtains the following stationary equation for $\psi(x,y)$: $$\psi_{xx} + \psi_{yy} + U(x,y)\psi + \sigma|\psi|^2\psi = \mu \psi \label{stateq}$$ In [@JYppt] it is discussed that a continuous family
--- abstract: 'This paper presents a class of Dynamic Multi-Armed Bandit problems where the reward can be modeled as the noisy output of a time varying linear stochastic dynamic system that satisfies some boundedness constraints. The class allows many seemingly different problems with time varying option characteristics to be considered in a single framework. It also opens up the possibility of considering many new problems of practical importance. For instance it affords the simultaneous consideration of temporal option unavailabilities and the dependencies between options with time varying option characteristics in a seamless manner. We show that, for this class of problems, the combination of any Upper Confidence Bound type algorithm with any efficient reward estimator for the expected reward ensures the logarithmic bounding of the expected cumulative regret. We demonstrate the applicability of these results using an example of practical interest.' author: - 'T. W. U. Madhushani$^{1}$ and D. H. S. Maithripala$^2$ and N. E. Leonard$^{3}$ [^1] [^2] [^3]' bibliography: - 'DynamicBandit.bib' title: '**Asymptotic Allocation Rules for a Class of Dynamic Multi-armed Bandit Problems**' --- Introduction {#sect:Introduction} ============ In decision theory Multi-Armed Bandit problems serve as a model that captures the salient features of human decision making strategies. The elementary case of a *1-armed bandit* is a slot machine with one lever that results in a numerical reward after every execution of the action. The reward is assumed to satisfy a specific but unknown probability distribution. A slot machine with multiple levers is known as a *Multi-Armed Bandit* (MAB) [@Sutton; @Robbins]. The problem is analogous to a scenario where an agent is repeatedly faced with several different options and is expected to make suitable choices in such a way that the cumulative reward is maximized [@Gittins]. This is known to be equivalent to minimizing the expected cumulative regret [@LaiRobbins]. 
Over the decades, optimal strategies have been developed to realize the above stated objective. In the standard multi-armed bandit problem the reward distributions are stationary. Thus if the mean values of all the options are known to the agent, in order to maximize the cumulative reward, the agent only has to sample from the option with the maximum mean. In reality this information is not available and the agent should choose options to maximize the cumulative reward while gaining sufficient information to estimate the true mean values of the option reward distributions. This is called the exploration-exploitation dilemma. When the agent is faced with these choices over an infinite time horizon, exploration-exploitation sampling rules are guaranteed to converge to the optimal option. In [@LaiRobbins] the authors consider the class of uniformly good sampling rules. Specifically, they establish a logarithmic lower bound for the number of times a sub-optimal option needs to be sampled by an optimal sampling rule if the total number of times the sub-optimal arms are sampled satisfies a certain boundedness condition. The pioneering work by [@LaiRobbins] establishes a confidence bound and a sampling rule to achieve logarithmic cumulative regret. These results are further simplified in [@AgrawalSimpl] by establishing a confidence bound using a sample mean based method. Improving on these results, a family of Upper Confidence Bound (UCB) algorithms for achieving asymptotic and uniform logarithmic cumulative regret was proposed in [@Auer]. These algorithms are based on the notion that the desired goal of achieving logarithmic cumulative regret is realized by choosing an appropriate uncertainty model, which results in optimal trade-off between reward gain and information gain through uncertainty. 
What all these schemes have in common is a three step process: 1) a prediction step, that involves the estimation of the expected reward characteristics for each option based on the information of the obtained rewards, 2) an objective function that captures the tradeoff between estimated reward expectation and the uncertainty associated with it and 3) a decision making step that involves formulation of an action execution rule to realize a specified goal. For the standard MAB problem the reward associated with an option is considered as an iid stochastic process. Therefore in the frequentist setting the natural way of estimating the expectation of the reward is to consider the sample average [@LaiRobbins; @AgrawalSimpl; @Auer]. The papers [@Kauffman; @Reverdy] present how to incorporate prior knowledge about reward expectation in the estimation step by leveraging the theory of conditional expectation in the Bayesian setting. We highlight that all these estimators ensure certain asymptotic bounds on the tail probabilities of the estimate of the expected reward. We will call such an estimator an *efficient reward estimator*. Furthermore all these methods with the exception of [@LaiRobbins] rely on UCB type algorithms for the decision making process. An extension to the standard MAB problem is provided in [@Kleinberg2010] to include temporal option unavailabilities where they propose a UCB based algorithm that ensures that the expected regret is upper bounded by a function that grows as the square root of the number of time steps. In all of the previously discussed papers, the option characteristics are assumed to be static. However many real world problems can be modeled as multi-armed bandit problems with dynamic option characteristics [@dacosta2008adaptive; @Slivkins; @granmo2010solving; @Garivier2011; @srivastava2014surveillance; @schulz2015learning; @tekin2010online]. In these problems reward distributions can change deterministically or stochastically. 
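The three-step process above can be illustrated with the UCB1 index of [@Auer] combined with sample-mean estimation on a stationary two-armed bandit (a minimal sketch; the Gaussian reward model, arm means, and horizon are illustrative assumptions):

```python
import math
import random

def ucb1(means, horizon, sigma=0.1, seed=0):
    rng = random.Random(seed)
    counts = [0]*len(means)
    est = [0.0]*len(means)                    # step 1: sample-mean reward estimates
    for t in range(1, horizon + 1):
        if t <= len(means):
            arm = t - 1                       # initialization: play each arm once
        else:
            # step 2: objective = estimated reward + uncertainty bonus
            arm = max(range(len(means)),
                      key=lambda i: est[i] + math.sqrt(2.0*math.log(t)/counts[i]))
        reward = rng.gauss(means[arm], sigma) # noisy reward draw (illustrative model)
        counts[arm] += 1                      # step 3: execute the choice and update
        est[arm] += (reward - est[arm]) / counts[arm]
    return counts

counts = ucb1([0.3, 0.9], horizon=2000)
print(counts)   # the optimal arm (mean 0.9) receives the vast majority of the pulls
```

The uncertainty bonus shrinks as an arm is sampled, so sub-optimal arms are eventually pulled only logarithmically often.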
The work [@dacosta2008adaptive; @Garivier2011; @srivastava2014surveillance] presents allocation rules and associated regret bounds for a class of problems where the reward distributions change deterministically after an unknown number of time steps. The paper [@dacosta2008adaptive] presents a UCB1 based algorithm where they incorporate the Page-Hinkley change point detection method to identify the point at which the underlying option characteristics change. A discounted UCB or a sliding-window UCB algorithm is proposed in [@Garivier2011] to solve non-stationary MAB problems where the expectation of the reward switches to unknown constants at unknown time points. This work is extended in [@srivastava2014surveillance] by proposing a sliding window UCL (SW-UCL) algorithm with adaptive window sizes for correlated Gaussian reward distributions. They incorporate the Page-Hinkley change point detection method to adjust the window size by identifying abrupt changes in the reward mean. Similarly, they also propose a block SW-UCL algorithm to restrict the transitions among arms. A class of MAB problems with gradually changing reward distributions are considered in [@Slivkins; @granmo2010solving]. Specifically [@Slivkins] considered the case where the expectation of the reward follows a random walk while [@granmo2010solving] addresses the problem where, at each time step, the expectation of each reward is modified by an independent Gaussian perturbation of constant variance. In [@schulz2015learning] the expectation of the reward associated with an option is considered to depend on a linear static function of some known variables that characterize the option and propose to estimate the reward based on learning this function. A different class of dynamically and stochastically varying option characteristics is considered in [@tekin2010online] where the reward distribution of each option is modeled as a finite state irreducible, aperiodic, and reversible Markov chain. 
In this paper we consider a class of *Dynamic Multi-Armed Bandit* problems (DMAB) that will include most of the previously stated dynamic problems as special cases. Specifically we consider a class of DMAB problems where the reward of each option is the noisy output of a multivariate linear time varying stochastic dynamic system that satisfies some boundedness conditions. This formulation allows one to accommodate a wide class of real world problems such as the cases where the option characteristics vary periodically, aperiodically, or gradually in a stochastic way. Furthermore, it allows the dependencies between options to be treated alongside time varying option characteristics. To the best of our knowledge this is the first time that such a wide class of dynamic problems have been considered in one general setting. We also incorporate temporal option unavailabilities into our structure that helps broaden the applicability of this model in real world problems. To the best of our knowledge it is the first time that temporal option unavailabilities are incorporated in a setting where the reward distributions are non-stationary. One major advantage of this linear dynamic systems formulation is that it immediately allows us to use the vast body of linear dynamic systems theory including that of switched systems to the problem of classification and solution of different DMAB problems. In this paper we prove that if the system characteristics satisfy certain boundedness conditions and the number of times the optimal arm becomes unavailable is at most logarithmic, then the expected cumulative regret is logarithmically bounded from above when one combines any UCB type decision making algorithm with any efficient reward estimator. We demonstrate the effectiveness of the scheme using an example where an agent intends to maximize the information she gathers under the constraint of option unavailability and periodically varying option characteristics. The remainder of this paper is organized as follows. 
We show in section-\[Secn:AsymptoticAllocationRules\] that the combination of any UCB type allocation rule with an efficient estimator guarantees that the expected cumulative regret is bounded above by a logarithmic function of the number of time steps. In section-\[Secn:EfficientEstimators\] we explicitly show, using a Hoeffding type tail bound [@Garivier2011], that the sample mean estimator is an efficient estimator. Finally in section-\[
--- abstract: 'This paper presents a novel efficient receiver design for wireless communication systems that incorporate orthogonal frequency division multiplexing (OFDM) transmission. The proposed receiver does not require channel estimation or equalization to perform coherent data detection. Instead, channel estimation, equalization, and data detection are combined into a single operation, and hence, the detector is denoted as a direct data detector ($D^{3}$). The performance of the proposed system is thoroughly analyzed theoretically in terms of bit error rate (BER), and validated by Monte Carlo simulations. The obtained theoretical and simulation results demonstrate that the BER of the proposed $D^{3}$ is only $3$ dB away from coherent detectors with perfect knowledge of the channel state information (CSI) in flat fading channels, and similarly in frequency-selective channels for a wide range of signal-to-noise ratios (SNRs). If CSI is not known perfectly, then the $D^{3}$ outperforms the coherent detector substantially, particularly at high SNRs with linear interpolation. The computational complexity of the $D^{3}$ depends on the length of the sequence to be detected, nevertheless, a significant complexity reduction can be achieved using the Viterbi algorithm.' author: - 'A. Saci' --- Introduction ============ In OFDM transmission, the wideband frequency-selective channel is converted into a set of narrowband, approximately flat-fading subchannels. Consequently, a low-complexity single-tap equalizer can be utilized to eliminate the impact of the multipath fading channel. Under such circumstances, the OFDM demodulation process can be performed once the fading parameters at each subcarrier, commonly denoted as channel state information (CSI), are estimated. In general, channel estimation can be classified into blind [@One-Shot-CFO-2014]-[@blind-massive-mimo-acd], and pilot-aided techniques [@Robust-CE-OFDM-2015]-[@pilot-ce-pilot-freq-domain]. 
Blind channel estimation techniques are spectrally efficient because they do not require any overhead to estimate the CSI, nevertheless, such techniques have not yet been adopted in practical OFDM systems. Conversely, pilot-based CSI estimation is preferred for practical systems, because typically it is more robust and less complex. In pilot-based CSI estimation, the pilot symbols are embedded within the subcarriers of the transmitted OFDM signal in time and frequency domain; hence, the pilots form a two dimensional (2-D) grid [@LTE-A]. The channel response at the pilot symbols can be obtained using the least-squares (LS) frequency domain estimation, and the channel parameters at other subcarriers can be obtained using various interpolation techniques [@Rayleigh-Ricean-Interpolation-TCOM2008]. Optimal interpolation requires a 2-D Wiener filter that exploits the time and frequency correlation of the channel, however, it is substantially complex to implement [@Interpolation-TCOM-2010], [@Wiener]. The complexity can be reduced by decomposing the 2-D interpolation process into two cascaded 1-D processes, and then, using less computationally-involved interpolation schemes [@Adaptive-Equalization-IEEE-Broadcasting-2008], [@Comp-Pilot-VTC2007]. Low-complexity 1-D interpolation schemes are considered in [@Comp-Pilot-VTC2007]. It is also worth noting that most practical OFDM-based systems utilize a fixed grid pattern structure [@LTE-A]. Once the channel parameters are obtained for all subcarriers, the received samples at the output of the fast Fourier transform (FFT) are equalized to compensate for the channel fading. Fortunately, the equalization for OFDM is performed in the frequency domain using single-tap equalizers. The equalizer output samples, which are denoted as the decision variables, will be applied to a maximum likelihood detector (MLD) to regenerate the information symbols. 
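The pilot-based LS-estimation-plus-interpolation chain described above can be sketched as follows (our illustration: the two-tap channel, pilot spacing, noise level, and QPSK alphabet are assumed for the example, and a single 1-D linear interpolation stands in for the cascaded 1-D or 2-D Wiener filtering):

```python
import numpy as np

rng = np.random.default_rng(1)
N, spacing = 64, 8
taps = np.array([1.0, 0.25j])                # short, mild impulse response (assumed)
H = np.fft.fft(taps, N)                      # channel frequency response, smooth in k

qpsk = np.exp(1j*np.pi/4) * np.array([1, 1j, -1, -1j])
X = qpsk[rng.integers(0, 4, N)]
pilots = np.r_[np.arange(0, N, spacing), N - 1]
X[pilots] = qpsk[0]                          # known pilot symbols on a fixed grid
Y = H*X + 0.01*(rng.normal(size=N) + 1j*rng.normal(size=N))

# least-squares estimates at the pilots, then 1-D linear interpolation in between
H_ls = Y[pilots] / X[pilots]
k = np.arange(N)
H_hat = np.interp(k, pilots, H_ls.real) + 1j*np.interp(k, pilots, H_ls.imag)

# single-tap equalization followed by nearest-symbol (ML) decisions
Z = Y / H_hat
detected = qpsk[np.argmin(np.abs(Z[:, None] - qpsk[None, :]), axis=1)]
print(np.mean(detected == X))                # -> 1.0 for this mild channel
```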
In addition to the direct approach, several techniques have been proposed in the literature to estimate the CSI or detect the data symbols indirectly, by exploiting the correlation among the channel coefficients. For example, the per-survivor processing (PSP) approach has been widely used to approximate the maximum likelihood sequence estimator (MLSE) for coded and uncoded sequences [@PSP-Raheli], [@PSP-Zhu], [@Rev-1]. The PSP utilizes the Viterbi algorithm (VA) to recursively estimate the CSI without interpolation using the least mean squares (LMS) algorithm. Although the PSP provides superior performance when the channel is flat over the entire sequence, its performance degrades severely if this condition is not satisfied, even when the LMS step size is adaptive [@PSP-Zhu]. Multiple symbol differential detection (MSDD) can be also used for sequence estimation without explicit channel estimation. In such systems, the information is embedded in the phase difference between adjacent symbols, and hence, differential encoding is needed. Although differential detection is only $3$ dB worse than coherent detection in flat fading channels, its performance may deteriorate significantly in frequency-selective channels [@Divsalar], [@Diff-Xhang]. Consequently, Wu and Kam [@Wu; @2010] proposed a generalized likelihood ratio test (GLRT) receiver whose performance without CSI is comparable to the coherent detector. Although the GLRT receiver is more robust than differential detectors in frequency-selective channels, its performance is significantly worse than coherent detectors. The signal at the channel output is estimated with a minimum mean square error (MMSE) estimator from the knowledge of the received signal and the second order statistics of the channel and noise; this approach may provide a BER that is about $1$ dB from the ML coherent detector in flat fading channels but at the expense of a large number of pilots. 
Decision-directed techniques can also be used to avoid conventional channel estimation. For example, the authors in [@Saci-Tcom] proposed a hybrid frame structure that enables blind decision-directed channel estimation. Although the proposed system manages to offer reliable channel estimates and BER in various channel conditions, the system structure follows the typical coherent detector design where equalization and symbol detection are required.

Motivation and Key Contributions
--------------------------------

Unlike conventional OFDM detectors, this work presents a new detector that regenerates the information symbols directly from the received samples at the FFT output, which is denoted as the direct data detector ($D^{3}$). By using the $D^{3}$, there is no need to perform channel estimation, interpolation, equalization, or symbol decision operations. The $D^{3}$ exploits the fact that channel coefficients over adjacent subcarriers are highly correlated and approximately equal. Consequently, the $D^{3}$ is derived by minimizing the difference between the channel coefficients of adjacent subcarriers. The main limitation of the $D^{3}$ is that it suffers from a phase ambiguity problem, which can be solved using the pilot symbols that are already part of the transmission frame in most practical standards [@WiMax], [@LTE-A]. To the best of the authors’ knowledge, there is no work reported in the published literature that uses the proposed principle. The $D^{3}$ performance is evaluated in terms of complexity, computational power, and bit error rate (BER), where analytic expressions are derived for several channel models and system configurations. The $D^{3}$ BER is compared to that of other widely used detectors such as the maximum likelihood (ML) coherent detector [@Proakis-Book-2001] with perfect and imperfect CSI, the multiple symbol differential detector (MSDD) [@Divsalar], the ML sequence detector (MLSD) with no CSI [@Wu; @2010], and the per-survivor processing detector [@PSP-Raheli].
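A minimal sketch of the underlying idea (a simplified greedy variant of our own, not the paper's exact algorithm): since adjacent channel coefficients satisfy $H_{k}\approx H_{k-1}$, each symbol can be chosen so that the implied channel coefficient $Y_{k}/d_{k}$ changes as little as possible from one subcarrier to the next, with a single pilot resolving the phase ambiguity. All parameters are illustrative:

```python
import numpy as np

# Greedy direct data detection sketch: pick each QPSK symbol so that the
# implied channel y_k / d_k stays close to that of the previous subcarrier,
# with no explicit channel estimation or equalization.
rng = np.random.default_rng(1)
N = 64
const = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)  # QPSK

h = np.exp(1j * 0.05 * np.arange(N))       # slowly varying channel (assumed)
d = rng.choice(const, size=N)
d[0] = const[0]                            # pilot resolves the phase ambiguity
y = h * d + 0.02 * (rng.normal(size=N) + 1j * rng.normal(size=N))

d_hat = np.empty(N, complex)
d_hat[0] = const[0]                        # known pilot
for k in range(1, N):
    # minimize the change of the implied channel coefficient
    cost = np.abs(y[k] / const - y[k - 1] / d_hat[k - 1])
    d_hat[k] = const[np.argmin(cost)]
```

A full detector would search jointly over blocks of symbols rather than greedily, which is where the sequence-estimation machinery developed in the paper comes in.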
The obtained results show that the $D^{3}$ BER is comparable to that of the ML coherent detector over a wide range of SNRs. Moreover, the computational power comparison shows that the $D^{3}$ requires less than $35\%$ of the computational power required by the ML coherent detector. The rest of the paper is organized as follows. The OFDM system and channel models are described in Section \[sec:Signal-and-Channel\]. The proposed $D^{3}$ is presented in Section \[sec:Proposed-System-Model\], and the efficient implementation of the $D^{3}$ is explored in Section \[sec:Efficient-Implementation-of-D3\]. The system error probability performance analysis is presented in Section \[sec:System-Performance-Analysis\]. Complexity analyses of the conventional pilot-based OFDM and the $D^{3}$ are given in Section \[sec:Complexity-Analysis\]. Numerical results are discussed in Section \[sec:Numerical-Results\], and finally, the conclusion is drawn in Section \[sec:Conclusion\]. In what follows, unless otherwise specified, uppercase boldface and blackboard letters such as $\mathbf{H}$ and $\mathbb{H}$ will denote $N\times N$ matrices, whereas lowercase boldface letters such as $\mathbf{x}$ will denote row or column vectors with $N$ elements. Uppercase, lowercase, or bold letters with a tilde such as $\tilde{d}$ will denote trial values, and symbols with a hat, such as $\hat{\mathbf{x}}$, will denote the estimate of $\mathbf{x}$.
--- abstract: 'Let $L$ be a reductive subgroup of a reductive Lie group $G$. Let $G/H$ be a homogeneous space of reductive type. We provide a necessary condition for the properness of the action of $L$ on $G/H$. As an application, we obtain examples of semisimple symmetric spaces without standard compact Clifford-Klein forms.' --- Let $L$ be a Lie group acting on a manifold $M$. This action is called [***proper***]{} if for every compact subset $C \subset M$ the set $$L(C):=\{ g\in L \ | \ g\cdot C \cap C \neq \emptyset \}$$ is compact. In this paper, our main concern is the following question posed by T. Kobayashi [@kob3]: How “large” subgroups of $G$ can act properly on a homogeneous space $G/H$? **(Q1)** We restrict our attention to the case where $M=G/H$ is a homogeneous space of reductive type and always assume that $G$ is a linear connected reductive real Lie group with the Lie algebra $\mathfrak{g}.$ Let $H\subset G$ be a closed subgroup of $G$ with finitely many connected components and $\mathfrak{h}$ be the Lie algebra of $H.$ The subgroup $H$ is reductive in $G$ if $\mathfrak{h}$ is reductive in $\mathfrak{g},$ that is, there exists a Cartan involution $\theta $ for which $\theta (\mathfrak{h}) = \mathfrak{h}.$ In this case the space $G/H$ is called a homogeneous space of reductive type. \[def1\] It is natural to ask when a closed subgroup of $G$ acts properly on a space of reductive type $G/H.$ This problem was treated, inter alia, in [@ben], [@bt], [@kas], [@kob2], [@kob4], [@kob1], [@kul] and [@ok]. In [@kob2] one can find a very important criterion for a proper action of a subgroup $L$ reductive in $G.$ To state this criterion we need to introduce some additional notation. Let $\mathfrak{l}$ be the Lie algebra of $L.$ Take a Cartan involution $\theta$ of $\mathfrak{g}.$ We obtain the Cartan decomposition $$\mathfrak{g}=\mathfrak{k} + \mathfrak{p}.
\label{eq1}$$ Choose a maximal abelian subspace $\mathfrak{a}$ in $\mathfrak{p}.$ The subspace $\mathfrak{a}$ is called the ***maximally split abelian subspace*** of $\mathfrak{p}$ and $\text{rank}_{\mathbb{R}}(\mathfrak{g}) := \text{dim} (\mathfrak{a})$ is called the ***real rank*** of $\mathfrak{g}.$ It follows from Definition \[def1\] that $\mathfrak{h}$ and $\mathfrak{l}$ admit Cartan decompositions $$\mathfrak{h}=\mathfrak{k}_{1} + \mathfrak{p}_{1} \ \text{and} \ \mathfrak{l}=\mathfrak{k}_{2} + \mathfrak{p}_{2},$$ given by Cartan involutions $\theta_{1}, \ \theta_{2}$ of $\mathfrak{g}$ such that $\theta_{1} (\mathfrak{h})= \mathfrak{h}$ and $\theta_{2} (\mathfrak{l})= \mathfrak{l}.$ Let $\mathfrak{a}_{1} \subset \mathfrak{p}_{1}$ and $\mathfrak{a}_{2} \subset \mathfrak{p}_{2}$ be maximally split abelian subspaces of $\mathfrak{p}_{1}$ and $\mathfrak{p}_{2},$ respectively. One can show that there exist $a,b \in G$ such that $\mathfrak{a}_{\mathfrak{h}} := \text{\rm Ad}_{a}\mathfrak{a}_{1} \subset \mathfrak{a}$ and $\mathfrak{a}_{\mathfrak{l}} := \text{\rm Ad}_{b}\mathfrak{a}_{2} \subset \mathfrak{a}.$ Denote by $W_{\mathfrak{g}}$ the Weyl group of $\mathfrak{g}.$ In this setting the following holds. The following three conditions are equivalent:

1. $L$ acts on $G/H$ properly.

2. $H$ acts on $G/L$ properly.

3. For any $w \in W_{\mathfrak{g}},$ $w\cdot \mathfrak{a}_{\mathfrak{l}} \cap \mathfrak{a}_{\mathfrak{h}} =\{ 0 \}.$

\[twkob\] Note that criterion 3. in Theorem \[twkob\] depends on how $L$ and $H$ are embedded in $G$ up to inner automorphisms. Theorem \[twkob\] implies the following partial answer to (Q1).
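As a concrete illustration (our choice of subspaces, for illustration only, not taken from the paper), condition 3. of Theorem \[twkob\] can be checked by hand for $G=SL(3,\mathbb{R})$:

```latex
% Illustrative check of condition 3. in Theorem [twkob] for G = SL(3,R).
% The subspaces a_h and a_l below are chosen for illustration only.
Let $\mathfrak{g}=\mathfrak{sl}(3,\mathbb{R})$, so that
$\mathfrak{a}=\{\operatorname{diag}(t_{1},t_{2},t_{3}) \mid t_{1}+t_{2}+t_{3}=0\}$
and $W_{\mathfrak{g}}\cong S_{3}$ acts by permuting the entries $t_{i}$.
Suppose
$\mathfrak{a}_{\mathfrak{h}}=\mathbb{R}\cdot\operatorname{diag}(1,1,-2)$ and
$\mathfrak{a}_{\mathfrak{l}}=\mathbb{R}\cdot\operatorname{diag}(1,-1,0)$.
Every $w\in W_{\mathfrak{g}}$ maps $\mathfrak{a}_{\mathfrak{l}}$ to a line
spanned by a vector whose entries are $1,-1,0$ in some order, and no such
vector is proportional to $\operatorname{diag}(1,1,-2)$; hence
$w\cdot\mathfrak{a}_{\mathfrak{l}}\cap\mathfrak{a}_{\mathfrak{h}}=\{0\}$
for all $w\in W_{\mathfrak{g}}$, and condition 3. holds for this pair of
subspaces.
```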
If $L$ acts properly on $G/H$ then $$\text{\rm rank}_{\mathbb{R}}(\mathfrak{l}) + \text{\rm rank}_{\mathbb{R}}(\mathfrak{h}) \leq \text{\rm rank}_{\mathbb{R}} (\mathfrak{g}).$$ \[coko\] Hence the real rank of $L$ is bounded by a constant which depends on $G/H,$ no matter how $H$ and $L$ are embedded in $G.$ In this paper we find a similar but stronger restriction for Lie groups $G,H,L$ by means of a certain tool which we call the a-hyperbolic rank (see Section 2, Definition \[dd2\] and Table \[tab1\]). In more detail, we prove the following. If $L$ acts properly on $G/H$ then $$\mathop{\mathrm{rank}}\nolimits_{\text{\rm a-hyp}}(\mathfrak{l}) + \mathop{\mathrm{rank}}\nolimits_{\text{\rm a-hyp}}(\mathfrak{h}) \leq \mathop{\mathrm{rank}}\nolimits_{\text{\rm a-hyp}} (\mathfrak{g}).$$ \[twgl\] Recall that a homogeneous space $G/H$ of reductive type admits a ***compact Clifford-Klein form*** if there exists a discrete subgroup $\Gamma \subset G$ such that $\Gamma$ acts properly on $G/H$ and $\Gamma \backslash G/H$ is compact. The space $G/H$ admits a ***standard compact Clifford-Klein form*** in the sense of Kassel-Kobayashi [@kako] if there exists a subgroup $L$ reductive in $G$ such that $L$ acts properly on $G/H$ and $L \backslash G/H$ is compact. In the latter case, for any discrete cocompact subgroup $\Gamma ' \subset L,$ the space $\Gamma ' \backslash G/H$ is a compact Clifford-Klein form. Therefore it follows from Borel’s theorem (see [@bor]) that any homogeneous space of reductive type admitting a standard compact Clifford-Klein form also admits a compact Clifford-Klein form. It is not known if the converse statement holds, but all known reductive homogeneous spaces $G/H$ admitting compact Clifford-Klein forms also admit standard compact Clifford-Klein forms. As a corollary to Theorem \[twgl\], we get examples of semisimple symmetric spaces without standard compact Clifford-Klein forms.
\[co1\] Let us mention the following results, related to the above corollary. - T. Kobayashi proved in [@kobadm] that $SL(2k,\mathbb{R})/SO(k,k)$ for $k\geq 1$ and $SL(n,\mathbb{R})/Sp(l,\mathbb{R})$ for $0<2l \leq n-2$ do not admit compact Clifford-Klein forms. - Y. Benoist proved in [@ben] that $SL(2k+1,\mathbb{R})/SO(k,k+1)$ for $k\geq 1$ does not admit compact Clifford-Klein forms. Note that these works are devoted to the problem of existence of compact Clifford-Klein forms on a given homogeneous space (not only standard compact Clifford-Klein forms). The a-hyperbolic rank and antipodal hyperbolic orbits ===================================================== Let $\Sigma_{\mathfrak{g}}$ be a system of restricted roots for $\mathfrak{g}$ with respect to $\mathfrak{a}.$ Choose a system of positive roots $\Sigma
--- address: 'Department of Mathematics, Imperial College, 180 Queens Gate, London SW7 2BZ, United Kingdom' author: - 'Proshun Sinha-Ray and Henrik Jeldtoft Jensen' title: 'Forest-fire models as a bridge between different paradigms in Self-Organized Criticality' --- Several types of models of self-organised criticality (SOC) exist [@Bak:book; @Jensen:book]. The original cellular automaton models were defined by a deterministic and conservative updating algorithm, with thresholds (barriers to activity), and stochastic driving [@BTW:SOC1; @BTW:SOC]. The Olami-Feder-Christensen (OFC) model is completely deterministic except for a random initial configuration. In both types of model the threshold is assumed to play a crucial role as a local rigidity which allows for a separation of time scales and, equally important, produces a large number of metastable states. The dynamics take the system from one of these metastable states to another. It is believed that separation of time scales and metastability are essential for the existence of scale invariance in these models. A seemingly very different type of model was developed by Drossel and Schwabl (DS) [@DroSchff]. No threshold appears explicitly in this model, and the separation of time scales is put in by hand by tuning the rates of two stochastic processes which act as driving forces for the model. The DS forest-fire (FF) model is defined on a $d$-dimensional square lattice. Empty sites are turned into “trees” with a probability $p$ per site in every time step. A tree can catch fire stochastically when hit by “lightning”, with probability $f$ each time step, or deterministically when a neighbouring site is on fire. The model is found to be critical in the limit $p\rightarrow0$ together with $f/p\rightarrow0$. This model is a generalization of a model first suggested by Bak, Chen and Tang [@BCTff], which is identical to the DS model except that it does not contain the stochastic ignition by lightning.
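The DS update rules quoted above are compact enough to state as code. This is an illustrative sketch (synchronous update, periodic boundaries, small lattice, arbitrary parameter values), not the simulation setup of the letter:

```python
import numpy as np

def ds_step(grid, p, f, rng):
    """One synchronous update of the Drossel-Schwabl forest-fire model.

    States: 0 = empty, 1 = tree, 2 = burning. Periodic boundaries (assumed).
    """
    L = grid.shape[0]
    burning_nbr = np.zeros((L, L), dtype=bool)
    for ax in (0, 1):
        for s in (1, -1):
            burning_nbr |= np.roll(grid == 2, s, axis=ax)
    new = grid.copy()
    new[grid == 2] = 0                                  # fire burns down
    new[(grid == 0) & (rng.random((L, L)) < p)] = 1     # growth with prob. p
    ignite = (grid == 1) & (burning_nbr | (rng.random((L, L)) < f))
    new[ignite] = 2                                     # lightning or spreading
    return new

rng = np.random.default_rng(0)
grid = rng.choice([0, 1], size=(64, 64), p=[0.4, 0.6])
for _ in range(100):
    grid = ds_step(grid, p=1e-2, f=1e-4, rng=rng)
```

The critical limit corresponds to $p\rightarrow0$ and $f/p\rightarrow0$, i.e. much slower growth than burning and much slower lightning than growth.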
The BCT system is not critical [@BCTffnotSOC] (in less than three dimensions, see [@BCTffin3d]). A continuous variable, uniformly driven deterministic version [@detff] also shows regular behavior for low values of $p$ [@detffnotSOC]. Thus the introduction of the stochastic lightning mechanism appeared to be necessary, at least in two dimensions, for the model to behave critically. A review of SOC based on forest-fire models can be found in [@ffrev]. In the present letter we present a transformation of the forest-fire model into a completely deterministic system. This model is an extension of the recently introduced auto-ignition forest-fire, a simple variation on the DS model [@autoigff]. As in that model, we find that all macroscopic statistical measures of the system are preserved. Specifically, we show that the three models have the same exponent for the probability density describing clusters of trees, similar probability densities of tree ages and, probably most unexpected, almost the same power spectrum for the number of trees on the lattice as a function of time. It is surprising that the temporal fluctuation spectrum can be the same in the deterministic model as in the DS forest fire, since even a small stochastic element in an updating algorithm is known to be capable of altering the power spectrum in a significant way [@jens-anders-grin]. [*Definition of the models –* ]{} We first recall the auto-ignition model. This model is identical to the DS model, except that the spontaneous ignition probability $f$ is replaced by an auto-ignition mechanism by which trees ignite automatically when their age $T$ after inception reaches a value $T_{max}$. Choosing this value suitably with respect to $p$ gives a system with exactly the same behaviour and statistical properties as the DS model [@autoigff].
Thus one stochastic driving process has been removed and a threshold introduced, while maintaining the SOC state; this model also displays explicitly the relationship between threshold dynamics and the separation of time scales so necessary for the SOC state. The auto-ignition model can be turned into a completely deterministic critical model by eliminating the stochastic growth mechanism. In the deterministic model (which we shall call the regen FF) each cell is given an integer parameter $T$ which increases by one each time step. If $T>0$, the cell is said to be occupied, otherwise it is empty (or regenerating). The initial configuration is a random distribution of $T$-values and fires. Fires spread through nearest neighbours and the auto-ignition mechanism is again operative so that a tree catches fire when its $T=T_{max}$. However, in this model when a tree catches fire the result is a decrement of $T_{regen}$ from its $T$-value. Note that when $T_{regen}<T_{max}$, a cell may still be occupied after it has been ignited. The parameters $T_{max}$ and $T_{regen}$ can be thought of as having a qualitatively reciprocal relationship with $f$ and $p$ respectively (in terms of the average ‘waiting time’ for spontaneous ignition and tree regrowth), though this is less straightforward in the latter case because trees are not always burned down by fire. It is evident that $T_{regen}$ also sets, and allows direct control of, the degree of dissipation of the $T$-parameter in the system. [*Results –* ]{} We now turn to a comparison between the statistical properties of the stochastic DS FF and the entirely deterministic regen model, with reference to the partly deterministic auto-ignition model. First we consider the probability density $p(s)$ of the tree clusters sizes [@cluster] simulated for different parameters for the different models. 
It is well known that the correlation length in the DS model (as measured by the cut-off $s_c$ in $p(s)$) increases as the critical point is approached by decreasing $p$, $f$ and $f/p$ [@DroSchff]. There is a corresponding increase in the power law regime for the cluster distribution in the auto-ignition model as $p$ is decreased and $T_{max}$ is increased [@autoigff]. Fig. \[regenCl\] shows scaling plots for the regen model, and we see that here too the cut-off $s_c$ scales with increasing ratio, $t=T_{max}/T_{regen}$. We have approximately $\ln(s_c)\sim T_{max}$, though again the relation may be algebraic. One expects the power law observed in the cluster size distribution to be reflected in power laws for spatial correlation functions. It is particularly interesting to study the age-age correlation function: $$C(r) = \langle T({\bf r}+{\bf r}_0)T({\bf r}_0)\rangle - \langle T({\bf r_0})\rangle^2 \label{correlation}$$ This correlation function was never studied for the DS FF because the model does not consider the age $T({\bf r})$ explicitly. In Fig. \[T-Tfig\] we show the behavior of the age-age correlation function in the regen and DS models. As usual it is difficult to obtain a substantial power law region because of finite size limitations. Nevertheless it is clear that $C(r)$ does exhibit power law dependence on $r$ and we find $C(r)\sim r^{-\eta}$ with $\eta\simeq 0.32, 0.21$ and $0.23$ for the regen, auto-ignition and DS models respectively. Interestingly, the same correlation function for empty sites (which have negative $T$ in the regen model) is also a power law with $\eta\simeq 0.13$. Let us now turn to the temporal characteristics of the models. In Fig. \[agefig\] we show the distributions of tree ages for the three different models. All are broad and exponential in character. Since it is a microscopic property, it is not surprising that there is some variation between the models.
This variation may also be the reason for the different exponents in the age-age correlation functions mentioned above. It is remarkable that the DS FF exhibits a cut-off in the age distribution which is nearly as sharp as the cut-off in the two threshold models. This shows that the stochastic ignition process in the DS model, characterized by the lightning probability $f$, can be replaced in surprising detail by the deterministic age threshold. The collective temporal behaviour is represented by the power spectrum of the time variation of the total number of trees on the lattice. In Fig. \[nTfig\] these power spectra are shown for the DS and regen models (again, the power spectrum for the auto-ignition model is nearly identical). Our most surprising result is that the deterministic regeneration model has nearly the same power spectrum as the two other models, particularly
--- abstract: 'Ensembles of alkali or noble-gas atoms at room temperature and above are widely applied in quantum optics and metrology owing to their long-lived spins. Their collective spin states maintain nonclassical nonlocal correlations, despite the atomic thermal motion in the bulk and at the boundaries. Here we present a stochastic, fully-quantum description of the effect of atomic diffusion in these systems. We employ the Bloch-Heisenberg-Langevin formalism to account for the quantum noise originating from diffusion and from various boundary conditions corresponding to typical wall coatings, thus modeling the dynamics of nonclassical spin states with spatial inter-atomic correlations. As examples, we apply the model to calculate spin noise spectroscopy, temporal relaxation of squeezed spin states, and the coherent coupling between two spin species in a hybrid system.' --- At ambient conditions, alkali-metal vapors and odd isotopes of noble gases exhibit long spin-coherence times, ranging from milliseconds to hours [@Happer1972OPIntro; @Happer1977SERF; @katz2018storage1sec; @balabas2010minutecoating; @walker1997SEOPReview; @Walker2017He3review]. These spin ensembles, consisting of a macroscopic number of atoms, are beneficial for precision sensing, searches of new physics, and demonstrations of macroscopic quantum effects [@Happer2010book; @Brown2010RomalisCPTviolation; @Sheng2013RomalisSubFemtoTesla; @Budker2007OpticalMagnetometryI; @Budker2013OpticalMagnetometryII; @Crooker2004SNSmagnetometerNature; @ItayAxions2019arxiv].
In particular, manipulations of collective spin states allow for demonstrations of basic quantum phenomena, including entanglement, squeezing, and teleportation [@Julsgaard2001PolzikEntanglement; @Sherson2006PolzikTeleportationDemo; @Jensen2011PolzikSqueezingStorage; @Polzik2010ReviewRMP] as well as storage and generation of photons [@Eisaman2005LukinSinglePhoton; @Peyronel2012LukinInteractingSinglePhotons; @Gorshkov2011LukinRydbergBlockadePhotonInteractions; @Borregaard2016SinglePhotonsOnMotionallyAveragedMicrocellsNcomm]. Thermal atomic motion is an intrinsic property of the dynamics in gaseous systems. Gas-phase atoms, in low-pressure, room-temperature systems, move at hundreds of meters per second in ballistic trajectories, crossing the cell at sub-millisecond timescales and interacting with its boundaries. To suppress wall collisions, buffer gas is often introduced, which renders the atomic motion diffusive via velocity-changing collisions [@Kastler1957OPreview]. At the theory level, the effect of diffusion on the mean spin has been extensively addressed, essentially by describing the evolution of an impure (mixed) spin state in the cell using a mean-field approximation [@MasnouSeeuwsBouchiat1967diffusion; @WuHapper1988CoherentCellWallTheory; @Li2011RomalisMultipass; @Firstenberg2007DickeNarrowingPRA; @Firstenberg2010CoherentDiffusion; @XiaoNovikova2006DiffusionRamsey]. This common formalism treats the spatial dynamics of an average atom in any given position using a spatially-dependent density matrix. However, it cannot describe quantum correlations between different atoms. Non-classical phenomena involving collective spin states, such as transfer of quantum correlations between non-overlapping light beams by atomic motion [@Xiao2019MultiplexingSqueezedLightDiffusion; @Bao2019SpinSqueezing; @bao2016spinSqueezing], call for a quantum description of the thermal motion.
For spin-exchange collisions, which are an outcome of thermal motion, such a quantum description has received much recent attention [@Kong2018MitchellAlkaliSEEntanglement; @weakcollisions2019arxiv; @AlkaliNobleEntanglementKatz2020PRL; @Dellis2014SESpinNoisePRA; @Mouloudakis2019SEwavefunctionUnraveling; @Mouloudakis2020SEbipartiteEntanglement; @Vasilakis2011RomalisBackactionEvation]. However, the more direct consequences of thermal motion, namely the stochasticity of the spatial dynamics in the bulk and at the system’s boundaries, still lack a proper fully-quantum description. In this paper, we describe the effect of spatial diffusion on the quantum state of warm spin gases. Using the Bloch-Heisenberg-Langevin formalism, we identify the dissipation and noise associated with atomic thermal motion and with the scattering off the cell boundaries. Significant existing work in this field relies primarily on mean-field models, which address both wall coupling [@seltzer2009RomalishighTcoating] and diffusion in unconfined systems [@Lucivero2017RomalisDiffusionCorrelation]. Here we derive the quantum noise directly from Brownian motion considerations and provide a solution for confined geometries. Our model generalizes the mean-field results and enables the description of inter-atomic correlations and collective quantum states of the ensemble.
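The multimode relaxation underlying this description can be illustrated with a simple 1-D computation (a toy example of ours, in arbitrary units): for destructive walls (Dirichlet boundary), the diffusion modes $u_n(x)=\sin(n\pi x/L)$ decay at mode-specific rates $\Gamma_n = D(n\pi/L)^2$, so a localized spin excitation relaxes toward the slowest mode.

```python
import numpy as np

# 1-D illustration: expand a Gaussian-like spin excitation in diffusion
# modes u_n(x) = sin(n*pi*x/L), each decaying at Gamma_n = D*(n*pi/L)**2.
# D, L, and the initial width are arbitrary illustrative values.
D, L = 0.1, 1.0
x = np.linspace(0.0, L, 201)
a0 = np.exp(-((x - 0.5 * L) ** 2) / (2 * 0.05 ** 2))   # initial profile

n = np.arange(1, 40)
modes = np.sin(np.outer(n, x) * np.pi / L)             # u_n(x)
c0 = modes @ a0 * (x[1] - x[0]) * 2.0 / L              # overlap coefficients
gamma = D * (n * np.pi / L) ** 2                       # Gamma_n

def profile(t):
    # superposition of modes, each with its own exponential decay
    return (c0 * np.exp(-gamma * t)) @ modes

a_late = profile(2.0)   # by now only the fundamental mode survives
```

After a few multiples of $1/\Gamma_1$ the profile is proportional to $\sin(\pi x/L)$ regardless of the initial shape, which is the single-mode limit often assumed in mean-field treatments.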
We apply the model to highly-polarized spin vapor and analyze the effect of diffusion in various conditions, including spin noise spectroscopy [@Sinitsyn2016SpinNoiseSpectroscopySNSreview; @Katsoprinakis2007SpinNoiseRelaxation; @Crooker2004SNSmagnetometerNature; @Crooker2014PRLspectroscopy; @Lucivero2016MitchellSpinNoiseSpectrscopySqueezedLight; @Lucivero2017MitchellNoiseSpectroscopyFundumentals], spin squeezing [@Kong2018MitchellAlkaliSEEntanglement; @Julsgaard2001PolzikEntanglement], and coupling of alkali to noble-gas spins in the strong coupling regime [@weakcollisions2019arxiv; @AlkaliNobleEntanglementKatz2020PRL]. The paper is arranged as follows. In Sec. \[sec:Model\] we develop the Bloch-Heisenberg-Langevin model for the evolution of the collective spin operator due to atomic Brownian motion and cell boundaries. We focus on highly-polarized ensembles in Sec. \[sec:Polarized-ensemb\] and provide the model solutions. In Sec. \[sec:Applications\], we present several applications of our model. We discuss how it is employed to describe the temporal evolution, to calculate experimental results, to provide insight, and to optimize setups for specific tasks. Limits of our model and differences from existing models, as well as future prospects, are discussed in Sec. \[sec:Discussion\]. We provide appendices that elaborate on (\[sec:diffusion-noise\]) the quantum noise produced by thermal motion, (\[sec:wall-coupling-toy\]) a simplified model for analyzing the scattering off the cell walls, (\[sec:solution-of-diffusion-relaxation\]) means of solving the Bloch-Heisenberg-Langevin equation, and (\[sec:Faraday-rotation-measurement\]) the Faraday rotation scheme used herein. ![(a) Atomic spins in the gas phase, comprising a collective quantum spin $\mathbf{\hat{s}}(\mathbf{r},t)$ and undergoing thermal motion. (b) In the diffusive regime, the spins spatially redistribute via frequent velocity-changing collisions.
(c) Collisions (local interaction) with the walls of the gas cell may fully or partially depolarize the spin state. (d) Diffusion and wall collisions lead to a multimode evolution, here exemplified for a spin excitation $\hat{a}(\mathbf{r},t)\propto\hat{s}_{x}(\mathbf{r},t)-i\hat{s}_{y}(\mathbf{r},t)$ with an initial Gaussian-like spatial distribution $\langle\hat{a}(\mathbf{r},t)\rangle$ and for destructive wall collisions. In addition to the mode-specific decay $\Gamma_{n}$, each spatial mode accumulates mode-specific quantum noise $\mathcal{\hat{W}}_{n}(t)$.\[fig:diffusion-illustration\]](illustration7){width="1\columnwidth"} Model\[sec:Model\] ================== Consider a warm ensemble of $N_{\mathrm{a}}$ atomic spins confined in a cell, as illustrated in Fig. \[fig:diffusion-illustration\]a. Let $\mathbf{r}_{a}(t)$ be the classical location of the $a^{\text{th}}$ atom at time $t$ and define the single-body density function at some location $\mathbf{r}$ as $n_{a}(\mathbf{r})=\delta(\mathbf{r}-\mathbf{r}_{a}(t))$. We denote the spin operator of the $a^{\text{th}}$ atom by $\mathbf{\hat{s}}_{a}$ and define the space-dependent collective spin operator as $\mathbf{\hat{s}}(\mathbf{r},t)=\sum_{
--- abstract: 'In extracting predictions from theories that describe a multiverse, we face the difficulty that we must assess probability distributions over possible observations, prescribed not just by an underlying theory, but by a theory together with a conditionalization scheme that allows for (anthropic) selection effects. This means we usually need to compare distributions that are consistent with a broad range of possible observations, with actual experimental data. One controversial means of making this comparison is by invoking the ‘principle of mediocrity’: that is, the principle that we are typical of the reference class implicit in the conjunction of the theory and the conditionalization scheme. In this paper, I quantitatively assess the principle of mediocrity in a range of cosmological settings, employing ‘xerographic distributions’ to impose a variety of assumptions regarding typicality. I find that for a fixed theory, the assumption that we are typical gives rise to higher likelihoods for our observations. If, however, one allows both the underlying theory and the assumption of typicality to vary, then the assumption of typicality does not always provide the highest likelihoods. Interpreted from a Bayesian perspective, these results support the claim that when one has the freedom to consider different combinations of theories and xerographic distributions (or different ‘frameworks’), one should favor the framework that has the highest posterior probability; and then from this framework one can *infer*, in particular, how typical we are. In this way, the invocation of the principle of mediocrity is more questionable than has been recently claimed.' --- In theories that describe a multiverse, the constants of nature can vary from one spacetime domain to another. The predominant approach to characterizing this variability rests on theory-generated probability distributions that describe the statistics of constants associated with the standard models of particle physics and cosmology.
The hope remains that plausible descriptions of such multi-domain universes (henceforth ‘multiverses’), generated, for example, from inflationary cosmology [@vilenkin_83; @linde_83; @linde_86] or the string theory landscape [@bousso+polchinski_00; @kachru+al_03; @freivogel+al_06; @susskind_07], will yield prescriptions for calculating these distributions in unambiguous ways. Subsequent comparisons with our observations would allow us to ascertain which multiverse models are indeed favored. To be more precise, one expects theories that describe a multiverse to set down a likelihood for observations we might make, given both the theory under consideration, and conditions that restrict the vast array of domains to ones in which we might arise. This latter conditionalization is naturally couched in terms of conditions necessary for the existence of ‘us’, as defined by relevant features of the theory. The need for such ‘anthropic’ conditionalization, as captured, for example, in what has become known as Carter’s ‘Weak Anthropic Principle’ [@carter_74], is predicated on the presumption that most of the domains described by theories of the multiverse will not give rise to the specialized structures we see around us, nor indeed to complex biological life [@hartle_07]. Under this scenario any observation we might make conditionalized on theory alone, would prove to be unlikely; and one should therefore restrict one’s attention to relevant domains so as to secure relevant probabilities for possible observations. An appropriate conditionalization scheme might make our observations more likely: but how likely should they be, before we can count them as having been successfully predicted by the conjunction of a theory and a conditionalization scheme? 
One proposed solution to this problem is known as the ‘principle of mediocrity’ [@vilenkin_95]: in more current terminology, it assumes that we should reason as though we are typical members of a suitable reference class (see also [@gott_93; @page_96; @bostrom_02]). Under this assumption, for appropriately conditionalized distributions, as long as our observations are within some ‘typical’ range according to the distribution, we can count them as being successfully predicted. The principle of mediocrity, or the ‘assumption of typicality’—as it will also be referred to in this paper—is not without its critics [@weinstein_06; @smolin_07; @hartle+srednicki_07]. A key issue involves how one defines the reference class with respect to which we are typical [@garriga+vilenkin_08]. This problem is even more stark given our ignorance of who or what we are trying to characterize, and the precise physical constraints we need to implement in order to do so [@weinstein_06; @azhar_14]. Rather than assessing this principle from a primarily conceptual point of view, we propose to test it *quantitatively*. In particular, we investigate how well it does in terms of accounting for our data in comparison to other assumptions regarding typicality, in a restricted set of multiverse cosmological scenarios. We do this by extending the program of @srednicki+hartle_10 to explore a variety of assumptions regarding typicality, building these assumptions into likelihoods for possible observations through the use of ‘xerographic distributions’ (in the terminology of @srednicki+hartle_10). The goal then is to find the conjunction of a theory and a xerographic distribution (which they call a ‘framework’) that gives rise to the highest likelihoods for our data.
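The role a typicality assumption plays in such comparisons can be made concrete with a toy computation (the numbers and setup are ours, not a model from the literature): given per-location probabilities that our data occurs, a first-person likelihood is the average of those probabilities weighted by the assumed distribution over who we are, and different typicality assumptions can rank the same theory very differently.

```python
import numpy as np

# Toy illustration: N candidate locations; a theory T assigns each location A
# a probability p_match[A] that the data there matches our data D0. A
# xerographic-style weighting xi over locations gives a first-person
# likelihood  P1p(D0 | T, xi) = sum_A xi[A] * p_match[A].
def first_person_likelihood(p_match, xi):
    p_match = np.asarray(p_match, float)
    xi = np.asarray(xi, float)          # assumed typicality weights, sum to 1
    assert np.isclose(xi.sum(), 1.0)
    return float(p_match @ xi)

p_match = np.array([0.9, 0.1, 0.1, 0.1])    # data is likely only at location 0
uniform = np.full(4, 0.25)                  # "we are typical" weighting
atypical = np.array([1.0, 0.0, 0.0, 0.0])   # "we are the first instance"

L_typical = first_person_likelihood(p_match, uniform)
L_atypical = first_person_likelihood(p_match, atypical)
```

In this toy case the atypical weighting yields the higher likelihood, echoing the paper's point that typicality need not maximize the likelihood once the weighting is allowed to vary.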
I will show that (1) for a fixed theory, the assumption that we are typical gives rise to higher likelihoods for our observations; but (2) if one allows both the underlying theory and the assumption of typicality to vary, then the assumption of typicality *does not* always provide the highest likelihoods. Interpreted from a Bayesian perspective, these results provide support for the claim that one should try to identify the framework with the highest posterior probability; and then from this framework, one can *infer* how typical we are. The structure of this paper is as follows. In section \[SEC:Xerographic\_Distributions\], I outline the general formalism within which I will be investigating assumptions regarding typicality, including the introduction of a statement of the principle of mediocrity adapted to our specific purposes. Section \[SEC:Multiverse\_Model\] introduces the multiverse model we will analyze (which is a generalization of the cosmological model of @srednicki+hartle_10), derives the central equations for relevant likelihoods from which we will eventually test assumptions regarding typicality, and shows that these likelihoods reduce to the results of @srednicki+hartle_10 under the appropriate simplifying assumptions. Explicit tests of the principle of mediocrity are presented in section \[SEC:Results\], and we conclude in section \[SEC:Discussion\] with a discussion of the context in which one should interpret the results of these tests. So I turn first to a description of the general formalism within which I will be working. Xerographic Distributions {#SEC:Xerographic_Distributions} ========================= Generalities {#SEC:Generalities} ------------ I begin by outlining the formalism of @srednicki+hartle_10, recasting relevant parts of their discussion to suit our computations in the next section. In general multiverse scenarios, it is possible that any reference class of which we believe we are a member, may have multiple members. 
Indeed, it is plausible that our accumulated data $D_{0}$, which gives a detailed description of our physical surroundings, might be replicated at different spacetime locations in the multiverse. A theory $\mathcal{T}$ describing this multiverse scenario will, in principle, generate a likelihood for this data, which we will denote by $P(D_{0}|\mathcal{T})$. This corresponds to a ‘third-person’ likelihood in the terminology of @srednicki+hartle_10—that is, a likelihood that does not include any information about which member of our reference class we might be. The quantity that takes this *indexical* information into account is a ‘first-person’ likelihood and will be denoted by $P^{(1p)}(D_{0}|\mathcal{T}, \xi)$, in accordance with the notation of @srednicki+hartle_10. The added ingredient here is the *xerographic distribution* $\xi$, a probability distribution that we specify *by assumption*, which encodes our belief about which member of our reference class we happen to be. Its functional form is independent of a given theory $\mathcal{T}$, and together with such a theory it constitutes a ‘framework’ $(\mathcal{T}, \xi)$ (in the notation of [@srednicki+hartle_10]). Thus the transition from a third-person likelihood $P(D_{0}|\mathcal{T})$ to a first-person likelihood $P^{(1p)}(D_{0}|\mathcal{T}, \xi)$ is effected by two ideas: (i) the conditionalization scheme, which (as mentioned in section \[SEC:Introduction\]) specifies our reference class, and (ii) a probability distribution over members of our
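As a toy numerical illustration of how a xerographic distribution converts a third-person likelihood into a first-person one (the function names and numbers below are invented for illustration and are not the formalism of @srednicki+hartle_10):

```python
# Toy sketch: a "theory" assigns each candidate location i a third-person
# probability p[i] that our data D0 is realized there; a xerographic
# distribution xi[i] encodes our assumed belief about *which* instance we are.
# The first-person likelihood is then the xi-weighted chance that "we" see D0.
def first_person_likelihood(p, xi):
    assert abs(sum(xi) - 1.0) < 1e-12, "xerographic distribution must normalize"
    return sum(x * q for x, q in zip(xi, p))

# Typicality (uniform xi over 4 locations) vs. an atypical assumption that
# concentrates xi on location 0.  All numbers are made up for illustration.
p = [0.9, 0.1, 0.1, 0.1]
uniform = [0.25] * 4
atypical = [1.0, 0.0, 0.0, 0.0]
print(first_person_likelihood(p, uniform))
print(first_person_likelihood(p, atypical))
```

Note that the atypical distribution yields the higher likelihood here, which is the kind of comparison the paper carries out across frameworks.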
--- abstract: 'Sequential and Quantum Monte Carlo methods, as well as genetic type search algorithms, can be interpreted as mean field and interacting particle approximations of Feynman-Kac models in distribution spaces. The performance of these population Monte Carlo algorithms is strongly related to the stability properties of nonlinear Feynman-Kac semigroups. In this paper, we analyze these models in terms of Dobrushin ergodic coefficients of the reference Markov transitions and the oscillations of the potential functions. Sufficient conditions for uniform concentration inequalities w.r.t. time are expressed explicitly in terms of these two quantities. We provide an original perturbation analysis that applies to annealed and adaptive FK models, yielding what seem to be the first results of this kind for these types of models. Special attention is devoted to the particular case of sampling Boltzmann-Gibbs measures. In this context, we design an explicit way of tuning the number of Markov Chain Monte Carlo iterations with the temperature schedule. We also propose and analyze an alternative interacting particle method based on an adaptive strategy to define the temperature increments.' address: - 'INRIA Bordeaux Sud-Ouest, team ALEA, Domaine Universitaire, 351, cours de la Libération, 33405 Talence Cedex, France.' - 'CEA-CESTA, 33114 Le Barp, France.' author: - François Giraud - Pierre Del Moral title: 'Non-Asymptotic Analysis of Adaptive and Annealed Feynman-Kac Particle Models' --- Introduction {#introduction .unnumbered} ============ Feynman-Kac ([*abbreviate FK*]{}) particle methods, also called sequential, quantum or diffusion Monte Carlo methods, are stochastic algorithms used to sample from sequences of complex high-dimensional probability distributions. These stochastic simulation techniques are in current use in numerical physics [@Assaraf-overview; @Assaraf; @Hetherington] to compute ground state energies in molecular systems.
They are also used in statistics, signal processing and information sciences [@Cappe; @DM-filt; @DM-D-J; @DM-Guionnet-2] to compute posterior distributions of a partially observed signal or of unknown parameters. In the evolutionary computing literature, these Monte Carlo methods are used as natural population search algorithms for solving optimization problems. From the purely mathematical viewpoint, these advanced Monte Carlo methods are an interacting particle system ([*abbreviate IPS*]{}) interpretation of FK models. For a more thorough discussion of these models, we refer the reader to the monograph [@DM-FK] and the references therein. The principle (see also [@DM-D-J] and the references therein) is to approximate a sequence of target probability distributions $(\eta_n)_n$ by a large cloud of random samples termed particles or walkers. The algorithm starts with $N$ independent samples from $\eta_0$ and then alternates two types of steps: an acceptance-rejection scheme equipped with a selection type recycling mechanism, and a free exploration of the state space.\ In the recycling stage, the current cloud of particles is transformed by randomly duplicating and eliminating particles in a suitable way, similarly to a selection step in models of population genetics. In the Markov evolution step, particles move independently of each other (mutation step). This method is often used for solving sequential problems, such as filtering (see e.g., [@Cappe; @Doucet-F-G; @DM-filt]). In other interesting problems, these algorithms also turn out to be efficient for sampling from a single target measure $\eta$. In this context, the central idea is to find a judicious interpolating sequence of measures $(\eta_k)_{0\leq k\leq n}$ with increasing sampling complexity, starting from some initial distribution $\eta_0$, up to the terminal one $\eta_n=\eta$.
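The selection/mutation recursion just described can be sketched in a few lines. This is a hedged toy implementation with multinomial resampling; the helper names, the Gaussian-like potential and the random-walk mutation are our illustrative choices, not the paper's:

```python
import math
import random

def smc_step(particles, G, mutate, rng=random):
    # Selection: resample with probability proportional to the potential G,
    # duplicating high-potential particles and eliminating low-potential ones
    # (the genetic-type recycling mechanism described in the text).
    weights = [G(x) for x in particles]
    selected = rng.choices(particles, weights=weights, k=len(particles))
    # Mutation: each selected particle moves independently under the
    # Markov transition M_n, here a simple random walk.
    return [mutate(x, rng) for x in selected]

rng = random.Random(0)
particles = [rng.uniform(-5.0, 5.0) for _ in range(1000)]
for _ in range(20):
    particles = smc_step(
        particles,
        G=lambda x: math.exp(-x * x),
        mutate=lambda x, r: x + r.gauss(0.0, 0.3),
        rng=rng,
    )
mean = sum(particles) / len(particles)
print(mean)  # the cloud concentrates near the mode of the toy target at 0
```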
The sequential aspect of the approach is then an “artificial way” to introduce the difficulty of sampling gradually. In this vein, important examples are provided by annealed models. More generally, a crucial point is that large population sizes allow one to cover several modes simultaneously. This is an advantage compared to standard MCMC methods, which are more likely to be trapped in local modes. These sequential samplers have been used with success in several application domains, including rare events simulation (see [@Cerou-RE]), stochastic optimization and, more generally, Boltzmann-Gibbs measures sampling ([@DM-D-J]). Up to now, IPS algorithms have mostly been analyzed using asymptotic (i.e., as the number of particles $N$ tends to infinity) techniques, notably through fluctuation theorems and large deviation principles (see for instance [@DM-DA; @DM-Guionnet-3], [@DM-Guionnet-2; @DM-L; @DM-M], [@Kunsch], [@Chopin], [@DM-filt], [@Cappe] and [@DM-FK] for an overview).\ Some non-asymptotic theorems have recently been developed ([@Cerou-var; @DM-D-J-adapt]), but unfortunately none of them apply to the analysis of annealed and adaptive FK particle models. On the other hand, these types of nonhomogeneous IPS algorithms are in current use for solving concrete problems arising in numerical physics and engineering sciences (see for instance [@Bertrand; @Giraud-RB; @Neal], [@Clapp; @Deutscher; @Minvielle], [@Jasra; @Schafer]). In the absence of non-asymptotic estimates, these particle algorithms are used as natural heuristics.\ The main contribution of this article is to analyze these two classes of time nonhomogeneous IPS models. Our approach is based on semigroup techniques and on an original perturbation analysis to derive several uniform estimates w.r.t.
the time parameter.\ More precisely, in the case of annealed type models, we estimate explicitly the stability properties of FK semigroups in terms of the Dobrushin ergodic coefficient of the reference Markov chain and the oscillations of the potential functions. We combine these techniques with non-asymptotic theorems on $L^p$-mean error bounds ([@DM-M]) and some useful concentration inequalities ([@DM-Hu-Wu; @DM-Rio]). Then, we provide parameter tuning strategies that allow us to deduce some useful uniform concentration inequalities w.r.t. the time parameter. These results apply to nonhomogeneous FK models associated with cooling temperature parameters. In this situation, the sequence of measures $\eta_n$ is associated with a nonincreasing temperature parameter. We mention that other independent approaches, such as Whiteley’s ([@Whiteley]) or Schweizer’s ([@Schweizer]), are based on, e.g., drift conditions, hyper-boundedness, spectral gaps, or non-asymptotic bias and variance decompositions. These approaches lead to convergence results that may also apply to non-compact state spaces. To our knowledge, however, these techniques are restricted to non-asymptotic variance theorems and cannot be used to derive uniform and exponential concentration inequalities. It also seems difficult to extend these approaches to analyze the adaptive IPS model discussed in the present article. To solve these questions, we develop a perturbation technique for stochastic FK semigroups. In contrast to traditional FK semigroups, the adaptive particle scheme is now based on random potential functions that depend on a cooling schedule adapted to the variability and the adaptation of the random populations.\ \ The rest of the article is organized as follows. In a preliminary section, we recall a few essential notions related to Dobrushin coefficients and FK semigroups. We also provide some important non-asymptotic results used in the further development of the article.
Section \[section-analyse-generale-FK\] is concerned with the semigroup stability analysis of these models. We also provide a couple of uniform $L^p$-deviation and concentration estimates. In Section \[section-Gibbs\] we apply these results to Boltzmann-Gibbs models associated with a decreasing temperature schedule. In this context, the IPS algorithm can be interpreted as a sequence of interacting simulated annealing algorithms ([*abbreviate ISA*]{}). We propose an explicit way of tuning the number of MCMC iterations with the temperature schedule. Finally, in Section \[section-adapt\], we propose an alternative ISA method based on an original adaptive strategy to design on the fly the temperature decrements. We provide a non-asymptotic study, based on a perturbation analysis, and derive the corresponding concentration inequalities. Statement of Some Results {#statement-of-some-results .unnumbered} ========================= Feynman-Kac particle algorithms consist of evolving an interacting particle system $(\zeta_n)_n = \left( \zeta_n^1, \ldots, \zeta_n^N \right)_n$ of size $N$ on a given state space $E$. Their evolution is decomposed into two genetic type transitions: a selection step, associated with some positive potential function $G_n$; and a mutation step, where the selected particles evolve randomly according to a given Markov transition $M_n$ (a more detailed description of these IPS algorithms is provided in Section \[algo-classique\]). In this context, the occupation measures $\displaystyle{ \eta_n^N := \frac{1}{N}\sum_{1\leq i\leq N}\delta_{\zeta^i_n} }$ are $N$-approximations of a sequence
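A common way to implement adaptive temperature increments of the kind discussed above is to pick each increment so that the effective sample size (ESS) of the incremental importance weights stays at a fixed fraction of $N$. The sketch below is our illustrative version of that idea, not the paper's exact scheme; all names and the bisection criterion are ours:

```python
import math
import random

# Given particles aimed at a Boltzmann-Gibbs measure proportional to
# exp(-beta * V), choose the next inverse temperature beta' so that the ESS
# of the incremental weights w_i = exp(-(beta' - beta) * V(x_i)) equals a
# target fraction of the population size N.
def ess(weights):
    s = sum(weights)
    return s * s / sum(w * w for w in weights)

def next_beta(V_vals, beta, target_frac=0.5, beta_max=1.0):
    n_target = target_frac * len(V_vals)
    if ess([math.exp(-(beta_max - beta) * v) for v in V_vals]) >= n_target:
        return beta_max  # can jump straight to the final temperature
    lo, hi = beta, beta_max
    for _ in range(60):  # bisect on the ESS constraint (ESS decreases in beta')
        mid = 0.5 * (lo + hi)
        if ess([math.exp(-(mid - beta) * v) for v in V_vals]) > n_target:
            lo = mid
        else:
            hi = mid
    return lo

rng = random.Random(1)
V = [rng.uniform(0.0, 10.0) for _ in range(500)]
b1 = next_beta(V, 0.0)
print(b1)  # first adaptive increment, strictly between 0 and 1 here
```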
--- abstract: 'Based on our previous work [@paper2], we investigate here the effects on the wind and magnetospheric structures of weak-lined T Tauri stars due to a misalignment between the axis of rotation of the star and its magnetic dipole moment vector. In such a configuration, the system loses the axisymmetry presented in the aligned case, requiring a fully three-dimensional approach. We perform three-dimensional numerical magnetohydrodynamic simulations of stellar winds and study the effects caused by different model parameters, namely the misalignment angle $\theta_t$, the stellar period of rotation, the plasma-$\beta$, and the heating index $\gamma$. Our simulations take into account the interplay between the wind and the stellar magnetic field during the time evolution. The system reaches a periodic behavior with the same rotational period as the star. We show that the magnetic field lines present an oscillatory pattern. Furthermore, we find that the wind velocity increases with increasing $\theta_t$, especially in the case of a strong magnetic field and relatively rapid stellar rotation. Our three-dimensional, time-dependent wind models allow us to study the interaction of a magnetized wind with a magnetized extra-solar planet. Such an interaction gives rise to reconnection, generating electrons that propagate along the planet’s magnetic field lines and produce electron cyclotron radiation at radio wavelengths. The power released in the interaction depends on the planet’s magnetic field intensity, its orbital radius, and on the local characteristics of the stellar wind. We find that a close-in Jupiter-like planet orbiting at $0.05$ AU presents a radio power that is $\sim 5$ orders of magnitude larger than the one observed in Jupiter, which suggests that the stellar wind from a young star has the potential to generate strong planetary radio emission that could be detected in the near future with LOFAR.
This radio power varies according to the phase of rotation of the star. For three selected simulations, we find a variation of the radio power by a factor of $1.3$ to $3.7$, depending on $\theta_t$. Moreover, we extend the investigation done in @paper2 and analyze whether winds from misaligned stellar magnetospheres could cause a significant effect on planetary migration. Compared to the aligned case, we show that the time-scale $\tau_w$ for an appreciable radial motion of the planet is shorter for larger misalignment angles. While for the aligned case $\tau_w\simeq 100$ Myr, for a stellar magnetosphere tilted by $\theta_t = 30^{\rm o}$, $\tau_w$ ranges from $\sim 40$ to $70$ Myr for a planet located at a radius of $0.05$ AU. Further reduction of $\tau_w$ might occur for even larger misalignment angles and/or different wind parameters.' author: - 'A. A. Vidotto' - 'M. Opher' - 'V. Jatenco-Pereira' - 'T. I. Gombosi' title: 'Simulations of Winds of Weak-Lined T Tauri Stars: The Effects of a Tilted Magnetosphere and Planetary Interactions' --- INTRODUCTION ============ T Tauri stars are pre-main sequence low-mass stars ($0.5 \lesssim M/M_\odot \lesssim 2$), with a range of spectral types from F to M, and radius $\lesssim 3-4~R_\odot$. They are usually classified in two categories, depending on their evolutionary stage. In the first stage, while still accreting from their surrounding disks, they are known as classical T Tauri stars (CTTSs). In a later stage, with the dissipation of the accretion disk, they are known as weak-lined T Tauri stars (WTTSs). Thanks to spectropolarimetric measurements, the number of young low-mass stars with detected magnetic fields has significantly increased in the past decade. These detections have suggested that T Tauri stars present mean surface field strengths of the order of kG.
Surface magnetic maps, derived from spectropolarimetric data, indicate that the surface fields on T Tauri stars are more complex than that of a simple dipole and are often misaligned with the rotational axis of the star [@2007MNRAS.380.1297D; @2008MNRAS.386.1234D]. CTTSs such as BP Tau [@2008MNRAS.386.1234D], V2129 Oph [@2007MNRAS.380.1297D], CR Cha and CV Cha [@2009MNRAS.398..189H], and V2247 Oph [@2010MNRAS.402.1426D] present dipolar and octupolar components of the surface magnetic field moment that are asymmetric with respect to the rotational axis of the star. More recently, the first determination of surface magnetic maps for a WTTS, V410 Tau, has been acquired [@2010MNRAS.403..159S], showing that, similar to the less evolved CTTSs, V410 Tau also presents a non-axisymmetric poloidal field. Despite the existing knowledge of the surface magnetic fields in young stars, the global structure of the stellar magnetic field is unknown. Magnetic field extrapolations from surface magnetograms using the potential field source surface (PFSS) method have been used to help elucidate the geometry of the large-scale field around T Tauri stars [@2008MNRAS.386..688J; @2008MNRAS.389.1839G]. Such extrapolations, however, neglect the interaction of the field with the stellar wind and the temporal evolution of the system. Full magnetohydrodynamics (MHD) numerical simulations [@2003ApJ...595L..57R; @paper2] allow us to study the interplay between the stellar magnetic field and the wind. In this method, the dynamical interaction of the stellar wind and the magnetic field lines is a result of the action of magnetic, thermal, gravitational, and inertial forces. MHD simulations can be, however, computationally expensive and time consuming.
A comparison between PFSS and MHD models that used observed surface magnetic maps as a boundary condition can be found in @2006ApJ...653.1510R in the context of the solar wind, showing that PFSS models are able to reconstruct the large-scale structure of the solar corona when time-dependent changes in the photospheric flux can be neglected, although nonpotential effects can have a significant effect on the magnetic structure of the corona. The accurate determination of the wind properties and of the topology of the magnetic field of a star is necessary to solve a series of open questions. Furthermore, studies of the magnetic interaction between a CTTS and its disk require knowledge of the structure of the magnetic field of the star [@1990RvMA....3..234C; @1991ApJ...370L..39K; @2008MNRAS.386.1274L]. Determining realistic magnetic field topologies and wind dynamics is also key to understanding interactions between magnetized extra-solar planets and the star, such as interactions that lead to planetary migration [@2006ApJ...645L..73R; @2008MNRAS.389.1233L; @paper2], and interactions between the stellar magnetic field and the planetary magnetosphere and also with the planetary atmosphere. As a next step towards a more realistic wind and magnetic field modeling of WTTSs, in this work we extend the study performed in @paper2, where the stellar rotation and magnetic moment vectors were assumed to be parallel. We now consider cases where these vectors are not aligned. Some numerical and theoretical models exist considering the case of an oblique magnetic geometry, mainly applied to the study of pulsars, with a few applications to other astrophysical objects [e.g., @1998MNRAS.300..718L; @2003ApJ...595.1009R; @2004ApJ...610..920R; @2007MNRAS.382..139T]. As a consequence of the oblique magnetic geometry, the system loses the axisymmetry present in the aligned case [@paper2], thus requiring a fully three-dimensional (3D) approach.
We perform here 3D MHD numerical simulations of magnetized stellar winds of WTTSs, by considering at the base of the coronal wind a dipolar magnetic field that is tilted with respect to the rotational axis of the star. Complex, high-order multipole magnetic field configurations may exist at the surface, but a dipolar component should dominate at larger distances [e.g., @2007ApJ...664..975J; @2007MNRAS.380.1297D]. As the simulation evolves in time, the initial field configuration is modified by the interaction with the stellar wind, which in turn is also modified by the magnetic field geometry. The stellar wind of a host star is expected to directly influence an orbiting planet and its atmosphere. The interaction, for example, of a magnetized wind with a magnetized planet can give rise to reconnection of magnetic field lines. Reconnection processes appear in several places in the Solar System. For example, the magnetic field lines on the day-side of the Earth’s magnetosphere (i.e., the side of the Earth that is facing the Sun) are compressed due to the interaction with the solar wind, while on the opposite side of the Earth’s magnetosphere (the night-side), a teardrop-shaped tail is formed [e.g., @1930Natur.126..129C]. The solar wind interaction with the magnetic planets of the Solar System (Earth, Jupiter, Saturn, Uranus, and Neptune) accelerates electrons that propagate along the planets’ magnetic field lines, producing electron cyclotron radiation
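The dependence of the released power on the planetary field, orbital radius, and local wind conditions can be illustrated with a standard pressure-balance estimate. All numerical values below are illustrative assumptions, not outputs of the paper's simulations:

```python
import math

def standoff_radius(B_p, R_p, rho, u):
    # Pressure balance between the planetary dipole and the wind ram pressure:
    # B_p^2 (R_p/r)^6 / (8 pi) = rho u^2, solved for the standoff distance r.
    return R_p * (B_p**2 / (8.0 * math.pi * rho * u**2)) ** (1.0 / 6.0)

def intercepted_power(rho, u, r_M):
    # Wind kinetic energy flux times the magnetospheric cross-section;
    # an upper bound on the power available for dissipation (and radio emission).
    return rho * u**3 * math.pi * r_M**2

# Illustrative inputs (assumed); cgs units throughout.
R_jup = 7.1e9          # cm, Jupiter-like planetary radius
B_p = 4.0              # G, Jupiter-like surface dipole field
rho = 1e6 * 1.7e-24    # g cm^-3, for n ~ 1e6 protons cm^-3 at 0.05 AU
u = 3e7                # cm s^-1, assumed local wind speed

r_M = standoff_radius(B_p, R_jup, rho, u)
P = intercepted_power(rho, u, r_M)
print(r_M / R_jup, P)  # standoff in planetary radii; power bound in erg s^-1
```

For the dense, fast winds of young stars this intercepted power greatly exceeds the solar-wind power reaching Jupiter, which is the qualitative origin of the large radio powers quoted in the abstract.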
--- abstract: 'Thermal infrared photometry in the $L$- and $M''$-band and $L - M''$ colors of type-1 and type-2 active galactic nuclei (AGNs) are presented. After combining our observations with photometric data at similar wavelengths taken from the literature, we find that the excess of $L - M''$ colors of type-2 AGNs (37 sources, 50 data points) relative to type-1 AGNs (27 sources, 36 data points), due to dust extinction, is statistically detectable, but very small. We see no clear difference between the colors of moderately obscured and highly dust-obscured type-2 AGNs. In both cases, the $L - M''$ colors are similar to the intrinsic $L - M''$ color of unobscured AGNs, and the $L - M''$ color excess of the latter highly dust-obscured type-2 AGNs due to dust extinction is much smaller than that expected from the Galactic dust extinction curve. Contamination from starbursts and the time lag of flux variation are unlikely to explain this small $L - M''$ color excess, which is best explained if the dust extinction curve in the close vicinity of AGNs is fairly flat at 3–5 $\mu$m as a result of a size increase of the absorbing dust grains through coagulation.' author: R ([@ant93]). Estimation of the amount of dust along our line of sight in type-2 AGNs and comparison with the amount in type-1 AGNs is an important observational test of the unification paradigm. A direct estimate of dust extinction toward highly luminous type-2 AGNs is necessary to answer the question “how common are highly luminous and highly dust-obscured AGNs (so-called type-2 quasars)?” ([@hal99]). X-ray spectroscopic observations of type-2 AGNs imply higher X-ray absorption columns than do observations of type-1 AGNs, supporting the unification paradigm (e.g., [@nan94; @smi96]). However, X-ray absorption is caused both by dust and gas. Estimating the amount of dust along our line of sight ($A_{\rm V}$) from X-ray absorption ($N_{\rm H}$) is uncertain, since the $N_{\rm H}$/$A_{\rm V}$ ratios toward AGNs are found to vary by more than an order of magnitude ([@alo97]).
For several reasons, we expect studies in the thermal infrared (3–5 $\mu$m) wavelength range to be a powerful tool for estimating dust extinction toward AGNs. Firstly, flux attenuation in this band is caused purely by dust extinction, and the effects of dust extinction are wavelength dependent in the Galactic diffuse interstellar medium ([@rie85; @lut96]). Secondly, the absolute flux attenuation by dust extinction is smaller than at shorter wavelengths ([@rie85; @lut96]). Thirdly, extended stellar emission generally dominates over obscured AGN emission at $<$2 $\mu$m, whereas at $>$3 $\mu$m moderately luminous obscured AGNs show compact, AGN-related emission that dominates the observed fluxes (Alonso-Herrero et al. 1998, 2000, [@sim98a], but see Simpson, Ward, & Wall 2000). Finally, since the compact emission at 3–5 $\mu$m most likely originates in hot (600–1000 K) dust at a part of the dusty molecular torus very near to the AGN (close to the innermost dust sublimation region), the dust extinction toward the 3–5 $\mu$m emission region is almost the same as that toward the central engine itself. Hence, by comparing observed continuum fluxes at more than one wavelength between 3 and 5 $\mu$m, we can estimate dust extinction toward obscured AGNs directly, up to high magnitudes of obscuration, without serious uncertainties in the subtraction of stellar emission. Some attempts to estimate dust extinction toward obscured AGNs have been made based on near-infrared 1–5 $\mu$m colors ([@sim98a; @sim99; @sim00]), but, given that only upper limits are available at 3–5 $\mu$m in most cases, the estimate depends heavily on data at $<$3 $\mu$m in the rest-frame, where stellar emission dominates the observed fluxes. We have conducted $L$ (3.5$\pm$0.3 $\mu$m) and $M'$ (4.7$\pm$0.1 $\mu$m) band photometry of type-1 and type-2 AGNs.
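For instance, under a Galactic-type extinction curve with the commonly quoted Rieke & Lebofsky (1985) ratios $A_L \simeq 0.058\,A_V$ and $A_M \simeq 0.023\,A_V$ (taken here as assumed inputs, not values from this paper), an observed $L - M'$ color excess over the intrinsic unobscured-AGN color maps directly onto $A_V$:

```python
# Assumed Galactic extinction ratios (Rieke & Lebofsky 1985 style values):
A_L_over_AV = 0.058
A_M_over_AV = 0.023

def av_from_color_excess(observed_LM, intrinsic_LM):
    # E(L - M') = (A_L - A_M) = (0.058 - 0.023) * A_V for Galactic-type dust.
    excess = observed_LM - intrinsic_LM
    return excess / (A_L_over_AV - A_M_over_AV)

# Illustrative numbers only: a 0.35 mag redder L - M' color would imply
# A_V ~ 10 mag for Galactic-type dust; the paper's point is that the observed
# excesses of obscured AGNs are far smaller than such a curve predicts.
print(av_from_color_excess(observed_LM=1.55, intrinsic_LM=1.20))
```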
The main aim is to investigate the $L - M'$ colors of a large number of type-1 and type-2 AGNs and to examine whether $L - M'$ colors are a good measure of the dust extinction toward AGNs. Target Selection ================ The target sources were selected based on their proximity and high optical \[OIII\] emission line luminosities. The first and second criteria were adopted, respectively, to make detection at $M'$ feasible and to select reasonably luminous AGNs ([@sim98b]), for which contamination from extended star-formation-related emission (both stellar emission and dust emission powered by star-formation activity) is expected to be smaller than it would be for less luminous AGNs. Observation and Data Analysis ============================= $L$ (3.5$\pm$0.3 $\mu$m) and $M'$ (4.7$\pm$0.1 $\mu$m) band photometry was performed at the NASA Infrared Telescope Facility (IRTF) using NSFCAM ([@shu94]). Table 1 gives details of the observations. Sky conditions were photometric throughout the observing runs. The seeing sizes measured from standard stars were 0$\farcs$6–1$\farcs$2. The NSFCAM used a 256$\times$256 InSb array. For $M'$-band photometry, the smallest pixel scale (0$\farcs$06 pix$^{-1}$) was used during all the observing runs. For $L$-band photometry, the pixel scale of 0$\farcs$06 pix$^{-1}$ was used in November 1999, while that of 0$\farcs$15 pix$^{-1}$ was used in April and May 2000. The field of view is 14$''$ $\times$ 14$''$ and 38$''$ $\times$ 38$''$ in the case of the 0$\farcs$06 pix$^{-1}$ and 0$\farcs$15 pix$^{-1}$ pixel scales, respectively. Each exposure was 0.3–0.4 sec long at $L$ and 0.12–0.2 sec at $M'$. A dithering technique was utilized with an amplitude of 3–10$''$ to place sources at five different positions on the array. At each dithering position, 50–200 frames were coadded.
Offset guide stars were used whenever available to achieve high telescope tracking accuracy. Standard data analysis procedures were employed, using IRAF [^1]. Firstly, bad pixels were removed and the values of these pixels were replaced with interpolated values from the surrounding pixels. Secondly, the frames were dark-subtracted and then scaled to have the same median pixel value, so as to produce a flat frame. The dark-subtracted frames were then divided by a normalized flat frame to produce images at each dithering position. Standard stars and very bright AGNs were clearly seen in images at each dithering position, and so images that contained these were aligned to sub-pixel accuracy using these detected sources and then summed to produce the final images. However, for fainter AGNs, the sources were not always clearly recognizable in the individual images at each dithering position. In these cases the images were shifted based on the records of telescope offset, assuming that telescope pointing and tracking were accurate, and were then summed to produce final images. This procedure potentially broadens the effective point spread function in the final image, providing larger source full widths at half maximum (FWHMs) than the expected values. At 3–5 $\mu$m, and particularly at $M'$, thermal emission from a small amount of occasionally transiting cirrus can increase sky background signals and affect data quality, even though the sky may look clear. Thus, before summing the frames, we confirmed that their sky background levels agreed to within 1%, showing that the data were not seriously affected by this kind of cirrus. The images of all the observed AGNs were spatially compact, with no clear extended emission found at either band. 
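The reduction chain described above (dark subtraction, median-scaled flat, shift-and-add using the recorded offsets) can be mimicked on synthetic data; the following is our simplified numpy sketch of those steps, not the actual IRAF pipeline:

```python
import numpy as np

def reduce_stack(frames, dark, shifts):
    # Dark-subtract each dithered frame.
    subbed = [f - dark for f in frames]
    # Scale frames to a common median and median-combine into a flat
    # (the dithering moves the source, so the median rejects it).
    scaled = [f * (np.median(subbed[0]) / np.median(f)) for f in subbed]
    flat = np.median(scaled, axis=0)
    flat /= np.median(flat)  # normalized flat frame
    flattened = [f / flat for f in subbed]
    # Shift-and-add: register each frame by its (integer) dither offset.
    out = np.zeros_like(flattened[0])
    for f, (dy, dx) in zip(flattened, shifts):
        out += np.roll(np.roll(f, dy, axis=0), dx, axis=1)
    return out / len(flattened)

rng = np.random.default_rng(0)
dark = np.full((16, 16), 5.0)
sky = 100.0
shifts = [(0, 0), (0, 3), (3, 0)]
frames = []
for dy, dx in shifts:
    img = dark + sky + rng.normal(0.0, 1.0, (16, 16))
    img[8 - dy, 8 - dx] += 50.0  # point source at a dithered position
    frames.append(img)

final = reduce_stack(frames, dark, shifts)
peak = np.unravel_index(np.argmax(final), final.shape)
print(peak)  # the dithered source registers onto a single pixel
```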
The measured FWHMs of some AGNs in the final images were slightly larger than the FWHMs of standard stars, but we attribute these larger FWHMs mainly to the uncertainty introduced by shifting and adding frames containing faint sources, as discussed above. Photometry was
--- abstract: 'In this work, we study two-dimensional Galilean field theories with global translations and anisotropic scaling symmetries. We show that such theories have enhanced local symmetries, generated by the infinite dimensional spin-$\ell$ Galilean algebra with possible central extensions, under the assumption that the dilation operator is diagonalizable and has a discrete and non-negative spectrum. We study the Newton-Cartan geometry with anisotropic scaling, on which the field theories could be defined in a covariant way. With the well-defined Newton-Cartan geometry we establish the state-operator correspondence in anisotropic GCFT, determine the two-point functions of primary operators, and discuss the modular properties of the torus partition function, which allows us to derive Cardy-like formulae.' author: - 'Bin Chen$^{1,2,3}$, Peng-Xiang Hao$^1$ and Zhe-fei Yu$^1$' title: 2d Galilean Field Theories with Anisotropic Scaling --- [*$^{1}$Department of Physics and State Key Laboratory of Nuclear Physics and Technology,\ Peking University, 5 Yiheyuan Rd, Beijing 100871, P. R. China\ $^{2}$Collaborative Innovation Center of Quantum Matter, 5 Yiheyuan Rd, Beijing 100871, P. R. China\ $^{3}$Center for High Energy Physics, Peking University, 5 Yiheyuan Rd, Beijing 100871, P. R. China*]{} Introduction ============ In two-dimensional (2D) spacetime, the global symmetry of a quantum field theory can be enhanced to a local one. The well-known example studied by J. Polchinski in [@Polchinski:1987dy] shows that a 2D Poincaré invariant QFT with scale invariance is in fact conformally invariant, provided that the theory is unitary and the dilation spectrum is discrete and non-negative. More recently, A. Strominger and D. Hofman relaxed the requirement of Lorentz invariance and studied the enhanced symmetries of theories with chiral scaling [@Hofman:2011zj]. They found two kinds of minimal theories.
One kind is the two-dimensional conformal field theory (CFT$_2$)[@Belavin:1984vu], while the other kind is called the warped conformal field theory (WCFT). In a warped CFT, the global symmetry is $SL(2,R)\times U(1)$, and it is enhanced to an infinite-dimensional group generated by a Virasoro-Kac-Moody algebra. For studies of various aspects of 2D warped CFT, see [@Detournay:2012pc; @Hofman:2014loa; @Castro:2015csg; @Castro:2015uaa; @Song:2016gtd; @Song:2017czq; @Jensen:2017tnb; @Azeyanagi:2018har; @Apolo:2018eky; @Chaturvedi:2018uov; @Apolo:2018oqv; @Song:2019txa]. In this paper, we would like to investigate other types of two-dimensional field theories with enhanced symmetries. We focus on theories whose global symmetries include the translations along two directions, a boost symmetry and an anisotropic scaling symmetry. If the two directions are interpreted as temporal and spatial directions, the anisotropic scaling is of Lifshitz type $x\rightarrow\lambda x,\ t\rightarrow\lambda^z t$. Recall that the scaling behaviour in a warped conformal field theory is chiral $$x\rightarrow\lambda x,\ \ \ y\rightarrow y,$$ while the one in a Galilean conformal field theory (GCFT) is[^1] $$x\rightarrow\lambda x,\ \ \ y\rightarrow \lambda y.$$ In a Galilean CFT, the boost symmetry is of Galilean type rather than Lorentzian type $$y\rightarrow y+v x.$$ The Galilean CFT can be obtained by taking the non-relativistic limit of a conformal field theory. Thus the Lorentzian symmetry is broken in Galilean CFT. In this work, we consider the case of more general anisotropic scaling $$x\rightarrow\lambda^c x,\ \ \ y\rightarrow \lambda^d y,$$ and a Galilean boost symmetry. Our results are valid for both cases. The CFTs with anisotropic scaling could be related to strong-coupling systems in condensed matter physics and in some statistical systems[@Henkel:1997zz; @Henkel:2002vd; @Rutkevich:2010xs].
In particular, it is well-known that for the fermions at unitarity, which can be realized experimentally using trapped cold atoms at the Feshbach resonance [@Bartenstein:2004zza; @Regal:2004zza; @Zwierlein:2004zz], there is Schrödinger symmetry, and near quantum critical points [@Sachdev2011] there is Lifshitz-type symmetry. In order to study these non-relativistic strong coupling systems holographically, people have tried to establish their gravity duals[^2] [@Son:2005rv; @Balasubramanian:2008dm; @Kachru:2008yh]. One essential requirement is the geometric realization of the symmetry. For a 2D QFT with enhanced symmetry, its role in the holographic duality becomes subtler and more interesting. In this case, the dual gravity theory must be a 3D gravity. As is well-known, there is no local dynamical degree of freedom in 3D gravity, but there can be boundary global degrees of freedom. The AdS spacetime is not globally hyperbolic, and the boundary conditions at infinity play an important role. For AdS$_3$ gravity, under the Brown-Henneaux boundary conditions, the asymptotic symmetry group is generated by two copies of the Virasoro algebra [@Brown:1986nw], leading to the AdS$_3$/CFT$_2$ correspondence. However, this is not the only consistent choice of boundary conditions. In particular, under the Compère-Song-Strominger (CSS) boundary conditions, the asymptotic symmetry group is generated by the Virasoro-Kac-Moody U(1) algebra [@Compere:2013bya]. Therefore, under the CSS boundary conditions, AdS$_3$ gravity could be dual to a warped conformal field theory. This AdS$_3$/WCFT correspondence has been studied in [@Song:2016gtd; @Apolo:2018eky; @Castro:2017mfj; @Chen:2019xpb; @Lin:2019dji].
The study of consistent asymptotic boundary conditions and the corresponding asymptotic symmetry groups has played an important role in setting up other holographic correspondences beyond AdS/CFT, including chiral gravity[@Li:2008dq], WAdS/WCFT[@Anninos:2008fx; @Compere:2009zj], Kerr/CFT[@Guica:2008mu], BMS/GCA[@Bagchi:2010eg; @Bagchi:2012cy], BMS/CFT[@Barnich:2010eb; @deBoer:2003vf; @Ball:2019atb] and the non-relativistic limit of AdS/CFT[@Bagchi:2009my]. Recalling that both WCFT and GCA are special cases in our study, it is tempting to guess that the anisotropic GCFT could be the holographic dual of some gravity theory. In order to investigate this possibility, one needs to study the enhanced symmetry of the field theory and, in particular, the geometry on which the theory is defined. We first study the enhanced symmetries, following the approach developed in [@Polchinski:1987dy; @Hofman:2011zj]. We find that even with anisotropic scaling and Galilean boost symmetry there are still infinitely many conserved charges in the theory, furnishing an infinite-dimensional spin-$\ell=\frac{d}{c}$ Galilean algebra. This algebra is different from the chiral part of the $W_\ell$ algebra, even though the weights of the conserved currents are the same. The next question we address is on what kind of geometry such theories should be defined. Can the local Lorentz symmetry be consistent with the scaling symmetry, so that the theories are defined on a pseudo-Riemannian manifold? The answer is generally no. Since a Lorentz boost puts the two directions on an equal footing, only isotropic scaling can be consistent with Lorentz symmetry. Actually, as shown in [@Sibiryakov:2014qba], isotropic scaling may imply Lorentz invariance, under the assumption that the propagation speed of signals is finite, together with several other assumptions. The existence of isotropic scaling and Lorentz symmetry may thus lead to a 2D CFT defined on a Riemann surface. 
In 2D CFT, combinations of $L_0$ and $\bar{L}_0$ give the dilation and Lorentz boost generators. In contrast, although 2D GCFT has isotropic scaling, the propagation speed in it is infinite and the Lorentz invariance is broken as well. For theories without Lorentz invariance, the geometry cannot be pseudo-Riemannian. Considering the loss of local Lorentz symmetry, a natural alternative to pseudo-Riemannian geometry is Newton-Cartan geometry. In [@Hofman:2014loa], it was noted that with global translation and scaling symmetry, imposing Lorentz symmetry requires the theory to be conformally invariant, while imposing Galilean symmetry requires the theory to be a warped conformal field theory. Warped CFTs are defined on warped geometry, which is a kind of Newton-Cartan geometry with an additional scaling structure. For a Galilean invariant field theory[^3], it could be coupled to a Newton-Cartan geometry in a covariant way[@Son:2008ye; @Son:2013rqa; @Jensen
null
--- abstract: 'The effect of external electric field on electron-hole correlation in GaAs quantum dots is investigated. The electron-hole Schrödinger equation in the presence of external electric field is solved using the explicitly correlated full configuration interaction (XCFCI) method, and accurate exciton binding energies and electron-hole recombination probabilities are obtained. The effect of the electric field was included in the 1-particle single-component basis functions by performing a variational polaron transformation. The quality of the wavefunction at small inter-particle distances was improved by using a Gaussian-type geminal function that depends explicitly on the electron-hole separation distance. The parameters of the explicitly correlated function were determined variationally at each field strength. It was found that a 500 kV/cm change in electric field reduces the binding energy and recombination probability by a factor of 2.6 and 166, respectively. The results show that the eh-recombination probability is affected much more strongly by the electric field than the exciton binding energy. Analysis using the polaron-transformed basis indicates that the exciton binding should asymptotically vanish in the limit of large field strength.' author: - 'Christopher J. Blanton' - 'Christopher Brenon' - 'Arindam Chakraborty' bibliography: - 'ref.bib' title: ' Development of polaron-transformed explicitly correlated full configuration interaction method for investigation of quantum-confined Stark effect in GaAs quantum dots ' --- Introduction ============ The influence of external electric field on the optical properties of semiconductors has been studied extensively using both experimental and theoretical techniques. In bulk semiconductors, the shift in the optical absorption due to the external field is known as the Franz-Keldysh effect. 
[@seeger2004semiconductor] In quantum wells and quantum dots, application of an electric field has been shown to modify the optical properties of nanosystems; this is known as the quantum-confined Stark effect (QCSE). [@miller1984band; @miller1985electric] The application of the external field induces various modifications in the optical properties of the nanomaterial, including the absorption coefficient, the spectral weight of transitions, and the $\lambda_\mathrm{max}$ of the absorption spectrum. In certain cases, the applied field can lead to exciton ionisation. [@perebeinos2007exciton] The quantum-confined Stark effect has found application in the fields of electro-absorption modulators,[@bimberg2012quantum] solar cells[@yaacobi2012combining] and light-emitting devices. [@de2012quantum] Recent experiments by Weiss et al. on semiconductor quantum dots have shown that the QCSE can also be enhanced by the presence of heterojunctions. [@park2012single] In some cases, the QCSE can be induced chemically because of close proximity to ligands. [@yaacobi2012combining] The QCSE also plays a major role in electric-field-dependent photoconductivity in CdS nanowires and nanobelts. [@li2012electric] Electric field has emerged as one of the tools to control and customize quantum dots as novel light sources. In a recent study, an electric field was used in the generation and control of polarization-entangled photons using GaAs quantum dots. [@ghali2012generation] It has been shown that the coupling between stacked quantum dots can be modified using an electric field. [@talalaev2006tuning] The QCSE has been investigated using various theoretical techniques including perturbation theory,[@Jaziri1994171; @Kowalik2005; @Xie20091625; @He2010266; @Lu2011; @Chen2012786] variational techniques,[@Kuo200011051; @Dane2008278; @Barseghyan2009521; @Duque2010309; @Dane20101901; @Kirak2011; @Acosta20121936] and the configuration interaction method. 
[@Bester2005; @Szafran2008; @Reimer2008; @Korkusinski2009; @Kwaniowski2009821; @Pasek2012; @Luo2012; @Braskan2001775; @Braskan20007652; @Corni2003853141; @Lehtonen20084535] In the present work, the development of the explicitly correlated full configuration interaction (XCFCI) method is presented for investigating the effect of external electric field on quantum dots and wells. The XCFCI method is a variational method in which the conventional CI wavefunction is augmented by explicitly correlated Gaussian-type geminal functions. [@JoakimPersson19965915] The inclusion of the explicitly correlated function in the form of the wavefunction is important for the following two reasons. First, the addition of the geminal function accelerates the convergence of the FCI energy with respect to the size of the underlying 1-particle basis set. [@Prendergast20011626] Second, inclusion of the explicitly correlated function improves the form of the electron-hole wavefunction at small inter-particle distances, which is important for accurate calculation of the electron-hole recombination probability [@RefWorks:2334; @Prendergast20011626] and is directly related to accurate treatment of the Coulomb singularity in the Hamiltonian. [@Hattig20124; @Kong201275; @Prendergast20011626] Varganov et al. have demonstrated the applicability of a geminal-augmented multiconfiguration self-consistent field wavefunction for many-electron systems. [@varganov2010variational] Elward et al. have also performed variational calculations using explicitly correlated wavefunctions for treating electron-hole correlation in quantum dots. [@RefWorks:4030; @RefWorks:4031] One of the important features of the XCFCI method presented here is the inclusion of the external field in the ansatz of the wavefunction. 
This is achieved by defining a new set of field-dependent coordinates, which are generated by performing a variational polaron transformation[@harris1985variational] and recasting the original Hamiltonian in terms of the field-dependent coordinates. The variational polaron transformation was introduced by Harris and Silbey for studying quantum dissipation phenomena in the spin-boson system[@harris1985variational] and is used in the present work because of the mathematical similarity between the spin-boson and the field-dependent electron-hole Hamiltonian. The remainder of this article is organized as follows. The XCFCI method is described in Sec. \[sec:xcfci\], construction of the field-dependent basis functions is presented in Sec. \[sec:polaron\], the application of the XCFCI method using the field-dependent basis is presented in Sec. \[sec:results\], and the conclusions are provided in Sec. \[sec:conclusion\]. Theory ====== Explicitly correlated full configuration interaction {#sec:xcfci} ---------------------------------------------------- The field-dependent electron-hole Hamiltonian is defined as[@RefWorks:4060; @RefWorks:2174] $$\begin{aligned} \label{eq:ham} H &= -\frac{\hbar^2}{2m_{\mathrm{e}}}\nabla^2_{\mathrm{e}} -\frac{\hbar^2} {2m_{\mathrm{h}}}\nabla^2_{\mathrm{h}} + v^\mathrm{ext}_\mathrm{e} + v^\mathrm{ext}_\mathrm{h} \\ \notag &- \frac{1}{\epsilon \vert \mathbf{r}_{\mathrm{eh} } \vert} + \vert e\vert \mathbf{F} \cdot (\mathbf{r}_{\mathrm{e}}-\mathbf{r}_{\mathrm{h}})\end{aligned}$$ where $m_{\mathrm{e}}$ is the mass of the electron, $m_{\mathrm{h}}$ is the mass of the hole, $\epsilon$ is the dielectric constant, and $\mathbf{F}$ is the external electric field. The external potentials $v^\mathrm{ext}_\mathrm{e}$ and $v^\mathrm{ext}_\mathrm{h}$ represent the confining potential experienced by the quasi-particles. 
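The qualitative effect of the linear field term $\vert e\vert \mathbf{F} \cdot (\mathbf{r}_{\mathrm{e}}-\mathbf{r}_{\mathrm{h}})$ on confined levels can be illustrated with a minimal one-particle, one-dimensional finite-difference toy. This is not the XCFCI method of this work; the well width, grid size, and field value are arbitrary assumptions chosen only to show the sign of the Stark shift.

```python
import numpy as np

def ground_state_energy(field, width=1.0, n=400):
    """Ground-state energy of a particle confined to a 1D box of the given
    width, with a linear potential of slope `field` standing in for the
    |e|F term of the Hamiltonian above.  Finite differences, hbar = m = 1,
    hard-wall (Dirichlet) boundaries; all values are in arbitrary units."""
    x = np.linspace(0.0, width, n)
    h = x[1] - x[0]
    # Kinetic energy: -(1/2) d^2/dx^2 as a tridiagonal finite-difference matrix.
    H = (np.diag(np.full(n, 1.0 / h**2))
         + np.diag(np.full(n - 1, -0.5 / h**2), 1)
         + np.diag(np.full(n - 1, -0.5 / h**2), -1))
    # Linear (field) term, centered so its first-order shift vanishes by symmetry.
    H += np.diag(field * (x - width / 2.0))
    return np.linalg.eigvalsh(H)[0]

e0 = ground_state_energy(0.0)    # ~ pi^2/2 for a unit-width hard-wall box
e1 = ground_state_energy(50.0)   # lower: the field polarizes and red-shifts the level
```

For any nonzero field the ground level shifts downward, a discrete analogue of the quantum-confined Stark red shift discussed in the introduction.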
The operator $\hat{G}$ is known as the geminal operator; it is an explicit function of $r_\mathrm{eh}$ and is defined as $$\begin{aligned} \hat{G} = \sum_{i=1}^{N_\mathrm{e}} \sum_{j=1}^{N_\mathrm{h}} \sum_{k=1}^{N_\mathrm{g}} b_{k}e^{-\gamma_k r_{ij}^2},\end{aligned}$$ where $N_\mathrm{g}$ is the number of Gaussian functions included in the expansion, and $N_\mathrm{e}$ and $N_\mathrm{h}$ are the numbers of electrons and holes, respectively. The expansion parameters $b_k$ and $\gamma_k$ are determined variationally; their significance will be discussed in Sec. \[sec:polaron
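A minimal numerical sketch of the geminal sum for a single electron-hole pair follows. The coefficients $b_k$ and exponents $\gamma_k$ below are hypothetical placeholders; in the method described here they are optimized variationally at each field strength.

```python
import math

def geminal_factor(r_eh, b, gamma):
    """Evaluate the Gaussian-type geminal sum  sum_k b_k exp(-gamma_k r^2)
    for one electron-hole pair at separation r_eh.  The coefficients b and
    exponents gamma are placeholders; in the XCFCI method they are
    determined variationally at each field strength."""
    return sum(bk * math.exp(-gk * r_eh**2) for bk, gk in zip(b, gamma))

b = [0.5, 0.3, 0.2]        # hypothetical expansion coefficients (N_g = 3)
gamma = [0.1, 1.0, 10.0]   # hypothetical Gaussian exponents
values = [geminal_factor(r, b, gamma) for r in (0.0, 1.0, 5.0)]
```

The factor is largest at coalescence ($r_\mathrm{eh}=0$) and decays with separation, which is how the geminal improves the wavefunction at small inter-particle distances.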
null
--- abstract: 'We present here an overview of recent work in the subject of astrophysical manifestations of super-massive black hole (SMBH) mergers. This is a field that has been traditionally driven by theoretical work, but in recent years has also generated a great deal of interest and excitement in the observational astronomy community. In particular, the electromagnetic (EM) counterparts to SMBH mergers provide the means to detect and characterize these highly energetic events at cosmological distances, even in the absence of a space-based gravitational-wave observatory. In addition to providing a mechanism for observing SMBH mergers, EM counterparts also give important information about the environments in which these remarkable events take place, thus teaching us about the mechanisms through which galaxies form and evolve symbiotically with their central black holes.' --- While the field has traditionally been dominated by applications to the direct detection of gravitational waves (GWs), much of the recent focus of numerical simulations has been on predicting potentially observable electromagnetic (EM) signatures. Of course, the greatest science yield will come from coincident detection of both the GW and EM signature, giving a myriad of observables such as the black hole mass, spins, redshift, and host environment, all with high precision [@bloom:09]. Yet even in the absence of a direct GW detection (and this indeed is the likely state of affairs for at least the next decade), the EM signal alone may be sufficiently strong to detect with wide-field surveys, and also unique enough to identify unambiguously as a SMBH merger. In this article, we review the brief history and astrophysical principles that govern the observable signatures of SMBH mergers. 
To date, the field has largely been driven by theory, but we also provide a summary of the observational techniques and surveys that have been utilized, including recent claims of potential detections of both SMBH binaries and also post-merger recoiling black holes. While the first public use of the term “black hole” is generally attributed to John Wheeler in 1967, as early as 1964 Edwin Salpeter proposed that gas accretion onto super-massive black holes provided the tremendous energy source necessary to power the highly luminous quasi-stellar objects (quasars) seen in the centers of some galaxies [@saltpeter:64]. Even earlier than that, black holes were understood to be formal mathematical solutions to Einstein’s field equations [@schwarzschild:16], although considered by many to be simply mathematical oddities, as opposed to objects that might actually exist in nature (perhaps most famously, Eddington’s stubborn opposition to the possibility of astrophysical black holes probably delayed significant progress in their understanding for decades) [@thorne:94]. In 1969, Lynden-Bell outlined the foundations for black hole accretion as the basis for quasar power [@lynden_bell:69]. The steady-state thin disks of Shakura and Sunyaev [@shakura:73], along with the relativistic modifications given by Novikov and Thorne [@novikov:73], are still used as the standard models for accretion disks today. In the following decade, a combination of theoretical work and multi-wavelength observations led to a richer understanding of the wide variety of accretion phenomena in active galactic nuclei (AGN) [@rees:84]. In addition to the well-understood thermal disk emission predicted by [@shakura:73; @novikov:73], numerous non-thermal radiative processes such as synchrotron and inverse-Compton are also clearly present in a large fraction of AGN [@oda:71; @elvis:78]. 
Peters and Mathews [@peters:63] derived the leading-order gravitational wave emission from two point masses more than a decade before Thorne and Braginsky [@thorne:76] suggested that one of the most promising sources for such a GW signal would be the collapse and formation of a SMBH, or the (near head-on) collision of two such objects in the center of an active galaxy. In that same paper, Thorne and Braginsky built on earlier work by Estabrook and Wahlquist [@estabrook:75] and explored the prospects for a space-based method for direct detection of these GWs via Doppler tracking of inertial spacecraft. They also attempted to estimate event rates for these generic bursts, and arrived at quite a broad range of possibilities, from $\lesssim 0.01$ to $\gtrsim 50$ events per year, numbers that at least bracket our current best estimates for SMBH mergers [@sesana:07]. However, it is not apparent that Thorne and Braginsky considered the hierarchical merger of galaxies as the driving force behind these SMBH mergers, a concept that was only just emerging at the time [@ostriker:75; @ostriker:77]. Within the galactic merger context, the seminal paper by Begelman, Blandford, and Rees (BBR) [@begelman:80] outlines the major stages of the SMBH merger: first the nuclear star clusters merge via dynamical friction on the galactic dynamical time $t_{\rm gal} \sim 10^8$ yr; then the SMBHs sink to the center of the new stellar cluster on the stellar dynamical friction time scale $t_{\rm df} \sim 10^6$ yr; the two SMBHs form a binary that is initially only loosely bound, and hardens via scattering with the nuclear stars until the loss cone is depleted; further hardening is limited by the diffusive replenishing of the loss cone, until the binary becomes “hard,” i.e., the binary’s orbital velocity is comparable to the local stellar orbital velocity, at which point the evolutionary time scale is $t_{\rm hard} \sim N_{\rm inf} t_{\rm df}$, with $N_{\rm inf}$ stars within the influence radius. 
This is typically much longer than the Hubble time, effectively stalling the binary merger before it can reach the point where gravitational radiation begins to dominate the evolution. Since $r_{\rm hard} \sim 1$ pc, and gravitational waves don’t take over until $r_{\rm GW} \sim 0.01$ pc, this loss cone depletion has become known as the “final parsec problem” [@merritt:05]. BBR thus propose that there should be a large cosmological population of stalled SMBH binaries with separations of order a parsec, and orbital periods of years to centuries. Yet no such population has been observationally identified to date. In the decades since BBR, numerous astrophysical mechanisms have been suggested as the solution to the final parsec problem [@merritt:05]. Yet the very fact that so many different solutions have been proposed and continue to be proposed is indicative of the prevailing opinion that it is still a real impediment to the efficient merger of SMBHs following a galaxy merger. However, the incontrovertible evidence that galaxies regularly undergo minor and major mergers during their lifetimes, coupled with a distinct lack of binary SMBH candidates, strongly suggests that nature has found its own solution to the final parsec problem. Or, as Einstein put it, “God does not care about mathematical difficulties; He integrates empirically.” For incontrovertible evidence of a SMBH binary, nothing can compare with the direct detection of gravitational waves from space. The great irony of gravitational-wave astronomy is that, despite the fact that the peak GW luminosity generated by black hole mergers outshines the [*entire observable universe*]{}, the extremely weak coupling to matter makes both direct and indirect detection exceedingly difficult. For GWs with frequencies less than $\sim 1$ Hz, the leading instrumental concept for nearly 25 years now has been a long-baseline laser interferometer with three free-falling test masses housed in drag-free spacecraft [@faller:89]. 
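The scale separation behind the final parsec problem can be made concrete with the leading-order inspiral time of Peters and Mathews for a circular binary, $t_{\rm GW} = \tfrac{5}{256}\,c^5 a^4 / [G^3 m_1 m_2 (m_1+m_2)]$. The sketch below assumes equal $10^8\,M_\odot$ black holes, a choice made purely for illustration:

```python
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8      # speed of light, m/s
PC = 3.086e16    # parsec, m
MSUN = 1.989e30  # solar mass, kg
YR = 3.156e7     # year, s

def t_gw_years(a_pc, m1_msun, m2_msun):
    """Leading-order (Peters) gravitational-wave inspiral time, in years,
    for a circular binary with semimajor axis a_pc in parsecs:
    t = (5/256) c^5 a^4 / (G^3 m1 m2 (m1 + m2))."""
    a = a_pc * PC
    m1 = m1_msun * MSUN
    m2 = m2_msun * MSUN
    return (5.0 / 256.0) * C**5 * a**4 / (G**3 * m1 * m2 * (m1 + m2)) / YR

hubble_yr = 1.4e10                     # rough Hubble time in years
t_stall = t_gw_years(1.0, 1e8, 1e8)    # at r_hard ~ 1 pc
t_merge = t_gw_years(0.01, 1e8, 1e8)   # at r_GW ~ 0.01 pc
```

At $r_{\rm hard}\sim 1$ pc the inspiral time exceeds a Hubble time by orders of magnitude, while at $r_{\rm GW}\sim 0.01$ pc gravitational radiation completes the merger comfortably within it, which is exactly the gap some additional mechanism must bridge.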
Despite the flurry of recent political and budgetary constraints that have resulted in a number of alternative, less capable designs, we take as our fiducial detector the classic LISA (Laser Interferometer Space Antenna) baseline design [@yellowbook:11]. For SMBHs with masses of $10^6 M_\odot$ at a redshift of $z=1$, LISA should be able to identify the location of the source on the sky within $\sim 10$ deg$^2$ a month before merger, and better than $\sim 0.1$ deg$^2$ with the entire waveform, including merger and ringdown [@kocsis:06; @lang:06; @lang:08; @kocsis:08a; @lang:09; @thorpe:09; @mcwilliams:10]. This should almost certainly be sufficient to identify EM counterparts with wide-field surveys such as LSST [@lsst:09], WFIRST [@spergel:13], or WFXT [@wfxt:12]. Like the cosmological beacons of gamma-ray bursts and quasars, merging SMBHs can teach us about relativity, high-energy astrophysics, radiation hydrodynamics, dark energy, galaxy formation and evolution, and how they all interact. A large variety of potential EM signatures have recently been proposed, almost all of which require some significant amount of gas in the near vicinity of the merging black holes [@schnittman:11]. Thus we must begin with the question of whether or not
null
--- author: - '[^1] for the GRAND collaboration' bibliography: - 'biblio.bib' title: The GRAND project and GRANDProto300 experiment --- Introduction {#intro} ============ The Giant Radio Array for Neutrino Detection (GRAND) will be a network of 20 subarrays of $\sim$10000 radio antennas each, deployed in mountainous and radio-quiet sites around the world, totaling a combined area of 200000 km$^2$. It will form an observatory of unprecedented sensitivity for ultra-high-energy cosmic particles (neutrinos, cosmic rays and gamma rays). Here we first detail the GRAND detection concept, its science case and experimental challenges. In a second part we detail the GRANDProto300 experiment, a pathfinder for GRAND, but also an appealing scientific project on its own. The GRAND project {#GRAND} ================= Detection concept {#concept} ----------------- Principles of radio detection of air showers are detailed in [@Huege:2016veh; @Schroder:2016hrv], and the GRAND detection concept is presented in [@WP]. It is illustrated in figure \[principle\]. ![GRAND detection principle for cosmic rays or gammas (detection of the EAS induced by the direct interaction of the cosmic particles in the atmosphere) and neutrinos (underground interaction with subsequent decay of the tau lepton in the atmosphere)[]{data-label="principle"}](principle.pdf){width="10cm"} When it enters the Earth's atmosphere, a cosmic particle may interact with air particles to induce an extensive air shower (EAS), which in turn generates electromagnetic radiation, mainly through the deflection by the Earth's magnetic field of the charged particles composing the shower[@Kahn206]. This so-called [*geomagnetic effect*]{} is coherent in the tens-of-MHz frequency range, generating short (&lt;1$\mu$s) transient electromagnetic pulses, with amplitudes large enough to allow for the detection of the EAS[@Allan:1971; @Ardouin:2005qe; @Falcke:2005tc] if the primary particle's energy is above 10$^{17}$eV typically. 
Cosmic neutrinos however have a very small probability of being detected through this process because of their tiny interaction cross-section with air particles. Yet, a tau neutrino can produce a tau lepton under the Earth's surface through charged-current interactions. Thanks to its large range in rock and short lifetime, the tau may emerge in the Earth's atmosphere and eventually decay to induce a detectable EAS[@Fargion:2000iz]. The Earth's opacity to neutrinos of energies above 10$^{17}$eV however implies that only Earth-skimming trajectories allow for such a scenario. This peculiarity, which can at first be seen as a handicap for detection, turns out to be an asset for radiodetection: because of relativistic effects, the radio emission is strongly beamed forward in a cone whose opening angle is given by the Cerenkov angle $\theta_C\sim1^{\circ}$. For air-shower trajectories close to the zenith, this induces a radio footprint at ground of a few hundred meters in diameter, requiring a large density of antennas at ground for a good sampling of the signal. For very inclined trajectories however, the larger distance of the antennas to the emission zone and the projection effect of the signal on the ground combine to generate a much larger footprint[@Huege:2016veh]. Targeting air showers with very inclined trajectories —either up-going for Earth-skimming neutrinos, or down-going for cosmic rays and gammas— therefore makes it possible to detect them with a sparse array (typically one antenna per km$^2$), making this a very promising detector concept. Another driver in GRAND is to aim at mountainous areas with favorable topographies as deployment sites. An ideal topography consists of two opposing mountain ranges, separated by a few tens of kilometers. One range acts as a target for neutrino interactions, while the other acts as a screen on which the ensuing radio signal is projected. 
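The footprint geometry described above can be sketched with a toy estimate; the distances and angles below are illustrative assumptions, not GRAND design values:

```python
import math

def footprint_diameter_km(distance_km, zenith_deg, cherenkov_deg=1.0):
    """Toy estimate of the radio footprint on the ground.  The emission is
    beamed into a cone of half-angle ~theta_C ~ 1 degree, illuminating a
    disk of diameter 2*d*tan(theta_C) perpendicular to the shower axis;
    projecting that disk onto horizontal ground stretches it by a factor
    1/cos(zenith).  Purely geometric, for illustration only."""
    beam = 2.0 * distance_km * math.tan(math.radians(cherenkov_deg))
    return beam / math.cos(math.radians(zenith_deg))

near_vertical = footprint_diameter_km(10.0, 10.0)    # a few hundred meters across
very_inclined = footprint_diameter_km(100.0, 85.0)   # tens of kilometers across
```

The two cases illustrate why near-vertical showers demand a dense array while very inclined ones can be sampled with roughly one antenna per km$^2$.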
Simulations (see section \[simu\]) show that such configurations result in a detection efficiency improved by a factor $\sim$4 compared to a flat site. ### Detector efficiency {#simu} ![](HAgain.png "fig:"){width="7cm"} ![Left: NEC4 simulation of the HorizonAntenna gain as a function of direction. Right: One simulated neutrino event displayed over the ground topography of the simulated area. The large red circle shows the position of the tau production and the red star, its decay. The dotted line indicates the shower trajectory. Circles mark the positions of triggered antennas. The color code represents the peak-to-peak voltage amplitude of the antennas. The limits of the simulated detector are indicated with a black line. []{data-label="sim"}](exHS1.png "fig:"){width="7cm"} In order to estimate the potential of the GRAND detector for the detection of cosmic neutrinos, an end-to-end simulation chain was developed, composed mostly of computationally efficient tools developed to take into account the very large size of the detector and its complex topography. - The first element of the simulation chain is DANTON[@DANTON:note], a 3-D Monte-Carlo simulation of the neutrino propagation and interactions embedded in a realistic implementation of the ground topography. A back-tracking mode is also implemented in DANTON, reducing the computation time by several orders of magnitude for neutrino energies below 10$^{18}$eV. - The radio emission induced by each simulated tau decay is computed in our simulation chain through a semi-analytical treatment called [*radiomorphing*]{}. 
This method, detailed in [@Zilles:2018kwq], makes it possible to determine the radio signal induced by any shower at any location through analytical operations on simulated radio signals from one single reference shower. Radiomorphing allows a gain of two orders of magnitude in computation time compared to a standard simulation, for a relative difference of the signal amplitude below 10% on average. - A specific design was proposed for the GRAND antenna. This so-called [*HorizonAntenna*]{} is composed of 3 arms, allowing for a complete determination of the wave polarization. Placed 5m above ground, with a design optimized for the 50-200MHz frequency range, its sensitivity to horizontal signals is excellent. The HorizonAntenna response to EAS radio signals was simulated with the NEC4 code[@NEC4] (see figure \[sim\]) and integrated in the simulation chain. - The final step of the treatment is the trigger simulation: it requires that, for at least 5 units in one 9-antenna square cell, the peak-to-peak amplitude of the voltage signal at the output of the antennas is larger than 30$\mu$V (twice the expected stationary background noise) in an aggressive scenario, or 75$\mu$V (five times the expected stationary background noise) in a conservative one. This simulation chain was run over a 10000km$^2$ area, with 10000 antennas deployed along a square grid of 1km step size in an area of the TianShan mountain range, in the XinJiang Autonomous Province (China). This setup is displayed in figure \[sim\] together with one simulated event. The 3-year 90% C.L. sensitivity limit derived from this simulation is presented in figure \[limit\] and the implications on the science goals achievable by GRAND are detailed in section \[science\]. 
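The trigger criterion of the last step can be sketched as follows; the threshold values are those quoted above, while the amplitude list is an invented example:

```python
def cell_triggers(peak_to_peak_uv, threshold_uv, min_antennas=5):
    """Trigger decision for one 9-antenna square cell: at least
    `min_antennas` of the 9 units must record a peak-to-peak voltage
    above `threshold_uv` (30 uV in the aggressive scenario, 75 uV in
    the conservative one)."""
    assert len(peak_to_peak_uv) == 9
    return sum(v > threshold_uv for v in peak_to_peak_uv) >= min_antennas

cell = [40.0, 35.0, 80.0, 10.0, 33.0, 50.0, 5.0, 31.0, 12.0]  # invented amplitudes, uV
aggressive = cell_triggers(cell, 30.0)    # six antennas exceed 30 uV
conservative = cell_triggers(cell, 75.0)  # only one exceeds 75 uV
```

The same invented event passes the aggressive threshold but not the conservative one, showing how the choice of scenario changes the effective detection efficiency.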
### Reconstruction performance {#recons} Reconstruction of the direction of origin, energy and nature of the primary particle from radio data has now reached performances comparable to standard techniques [@Buitink:2016nkf; @Bezyazeekov:2015rpa; @Aab:2016eeq]. A key issue for GRAND will be to achieve similar results for nearly horizontal air showers detected with radio antennas only. Demonstrating this will be one of the goals of the GRANDProto300 experiment (see section \[GP300\]). Before that, simulation studies are used to evaluate and optimize the reconstruction performance of GRAND. We reconstructed in particular the direction of origin of neutrino-induced air showers simulated with the ZHAireS code[@Zhaires:2012] over a GRAND-like array deployed on a toy-model topography, corresponding to a plane detector area facing the shower with a constant slope of 10$^{\circ}$ w.r.t. the horizontal. Applying a basic plane-wave reconstruction to these data yields an average reconstruction error of a few fractions of a degree. The different antenna heights make it possible to achieve such a resolution even for horizontal trajectories. A hyperbolic wavefront hypothesis is presently being implemented, and may yield even better results, according to our understanding of the air-shower radio wavefront structure[@Corstanje:2014waa]. The method of [@Buitink:2014eqa] has been implemented in order to reconstruct the maximum of development of cosmic-ray-induced showers on a GRAND-like array. It yields resolutions on $X_{max}$ better than 40g$\cdot$cm$^{-2}$ provided that the shower
null
--- abstract: 'The notion of ${\mathbb}{A}^1$-degree provides an arithmetic refinement of the usual notion of degree in algebraic geometry. In this note, we compute ${\mathbb}{A}^1$-degrees of certain finite covers $f\colon {\mathbb}{A}^n\to {\mathbb}{A}^n$ induced by quotients under actions of Weyl groups. We use knowledge of the cohomology ring of partial flag varieties as a key input in our proofs.' --- Associated to a finite morphism $f\colon {\mathbb{A}}^n\to {\mathbb{A}}^n$ of $K$-varieties, we have the usual notion of its degree, denoted by $\deg f$ and defined to be the degree of the induced extension of function fields. Refining this, $\mathbb{A}^1$-enumerative geometry provides a notion of an ${\mathbb}{A}^1$-degree, denoted by $\deg^{{\mathbb{A}}^1}f$, which is an element of the Grothendieck-Witt ring ${\operatorname}{GW}(K)$. [^1] The Grothendieck-Witt ring is generated by symmetric bilinear forms on $K$-vector spaces up to isomorphism, and the usual degree $\deg f$ can be recovered by taking the rank of the bilinear form $\deg^{{\mathbb{A}}^1}f$. If $K$ is algebraically closed, then the rank homomorphism defines an isomorphism of rings ${\operatorname}{GW}(K) \overset{\sim}\longrightarrow {\mathbb}{Z}$, and $\deg^{{\mathbb{A}}^1}f$ contains no more information than $\deg f$. However, if $K={\mathbb}{R}$, then the rank homomorphism ${\operatorname}{GW}({\mathbb}{R})\to {\mathbb}{Z}$ has kernel isomorphic to $\mathbb{Z}$, reflecting the fact that $\deg^{{\mathbb{A}}^1}f$ also contains the data of the Brouwer degree of the underlying map of ${\mathbb}{R}$-manifolds. In general, $\deg^{{\mathbb{A}}^1}f$ can be viewed as an enrichment of $\deg f$ that contains interesting arithmetic data. In this paper, we compute ${\mathbb{A}}^1$-degrees of quotient maps induced by Weyl groups. 
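For the quotients by products of symmetric groups computed in this paper, the coefficients of $\deg^{\mathbb{A}^1}\pi = p\,\langle 1\rangle + q\,\langle -1\rangle$ reduce to elementary arithmetic. The sketch below evaluates them from the closed formula stated in the text ($\deg \pi = n!/\prod_i n_i!$, and $a = \lfloor n/2\rfloor!/\prod_i \lfloor n_i/2\rfloor!$ when at most one $n_i$ is odd, $a = 0$ otherwise):

```python
from math import factorial, prod

def a1_degree_coefficients(partition):
    """Return (p, q) with deg^{A^1} pi = p<1> + q<-1> for the quotient map
    A^n / (S_{n_1} x ... x S_{n_r}) -> A^n / S_n, where n = sum(partition),
    following the closed formula stated in the text."""
    n = sum(partition)
    deg = factorial(n) // prod(factorial(ni) for ni in partition)
    if sum(ni % 2 for ni in partition) <= 1:
        a = factorial(n // 2) // prod(factorial(ni // 2) for ni in partition)
    else:
        a = 0
    return ((deg + a) // 2, (deg - a) // 2)

coeffs = a1_degree_coefficients([2, 2])  # the A^4/(S_2 x S_2) -> A^4/S_4 example
```

The partition $[2,2]$ reproduces the example $4\cdot \langle 1\rangle + 2\cdot \langle -1\rangle$ quoted in the text, while any partition with more than one odd part gives a multiple of $\langle 1\rangle + \langle -1\rangle$.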
As a first example, one may consider the quotient map $\pi\colon {\mathbb{A}}^n\to {\mathbb{A}}^n/S_n\simeq {\mathbb{A}}^n$ of affine space by the action of the symmetric group on the coordinates. The usual degree of $\pi$ is $\deg \pi = n!$, and it turns out that $\deg^{{\mathbb{A}}^1} \pi=\frac{n! }{2} \cdot (\langle 1\rangle +\langle -1\rangle)$ for $n\geq 2$. This follows easily from the fact that $S_n$ contains a simple reflection, leading us to the following preliminary observation. \[trivialthm\] We can also compute ${\mathbb{A}}^1$-degrees in situations where this observation does not apply. For example, we will show that the ${\mathbb{A}}^1$-degree of the quotient map ${\mathbb{A}}^4/(S_2 \times S_2) \to {\mathbb{A}}^4/S_4$ is given by $4\cdot \langle 1\rangle + 2\cdot \langle -1\rangle$, so in particular, the ${\mathbb{A}}^1$-degree is no longer a multiple of $\langle 1\rangle +\langle -1\rangle$. Generalizing this example, we prove the following: \[partialquotient\] Let $n_1,\ldots,n_r$ be positive integers satisfying $n = \sum_{i = 1}^r n_i$. The ${\mathbb{A}}^1$-degree of the map $\pi \colon \mathbb{A}_K^n\big/\prod_{i=1}^{r}S_{n_i}\to \mathbb{A}_K^{n}/S_n$ is given by $$\begin{aligned} \deg^{{\mathbb{A}}^1} \pi & = \frac{\deg \pi - a}{2} \cdot (\langle 1\rangle +\langle -1\rangle) + a\cdot \langle 1 \rangle \\ & = \frac{1}{2}\left(\frac{n!}{\prod_{i=1}^{r}n_i! }+a\right)\cdot \langle 1\rangle + \frac{1}{2}\left(\frac{n!}{\prod_{i=1}^{r}n_i! }-a\right)\cdot \langle -1\rangle,\end{aligned}$$ where $a = \lfloor\frac{n}{2}\rfloor! \big/ \prod_{i=1}^{r}\lfloor\frac{n_i}{2}\rfloor!$ if at most one $n_i$ is odd and $a = 0$ otherwise. The proof of  involves applying the algorithm in [@KW19 Section 2] together with knowledge of the cohomology ring of partial flag varieties of type $A$. Motivated by this, we extend  to apply to Weyl groups of other types as follows: \[bigthm\] Let $K$ be a field of characteristic $0$. 
Let $G$ be a simple complex Lie group with root space $V/K$, and let $P \subset G$ be a parabolic subgroup. Let $W$ be the Weyl group of $G$, and let $W_P \subset W$ be the associated parabolic subgroup. Then the ${\mathbb{A}}^1$-degree of the map $\pi \colon {\operatorname{Spec}}\,K[V]^{W_P} \to {\operatorname{Spec}}\,K[V]^W$ is given by $$\deg^{{\mathbb{A}}^1} \pi = \frac{\deg \pi - a}{2} \cdot (\langle 1 \rangle + \langle -1 \rangle) + a \cdot \langle \alpha \rangle,$$ where $\alpha \in K^\times$, and $a$ is equal to the number of cosets $\omega \cdot W_P \in W/W_P$ for which $\omega^{-1}\omega_0\omega \in W_P$, with $\omega_0 \in W$ the longest element. The element $\alpha$ in the statement of Theorem \[bigthm\] depends on the choice of identifications of ${\operatorname{Spec}}(K[V]^W)$ and ${\operatorname{Spec}}(K[V]^{W_P})$ with ${\mathbb{A}}^{\dim(V)}$. Such identifications are equivalent to choosing generators of $K[V]^W$ and $K[V]^{W_P}$ as polynomial rings over $K$. In particular, scaling a generator of $K[V]^W$ by $\alpha'$ scales $\deg^{{\mathbb{A}}^1} \pi$ by $(\alpha')^{-1}$, so there is always a choice of generators making $\alpha$ in \[bigthm\] equal to $1$. In the type-$A$ case (i.e., \[partialquotient\]), we show that taking the obvious choice of generators via elementary symmetric functions yields $\alpha=1$. On the other hand, the number $a$ in the statement of \[bigthm\] can be computed explicitly in all cases, as we demonstrate in the following result: \[cor-0groups\] We have $a = 0$ in Theorem \[bigthm\] except in the following cases, tabulated according to the Dynkin diagrams of $G$ and of the Levi quotient $P/U(P)$:

  $G$          $P/U(P)$                                                                                    $a$
  ------------ ------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------
  $A_n$        $\amalg_{i = 1}^r A_{n_i}$ with $n = \sum_{i = 1}^r n_i$ and $\#\{\text{odd }n_i\} \leq 1$   $\lfloor\frac{n}{2}\rfloor! \big/ \prod_{i=1}^{r}\lfloor\frac{n_i}{2}\rfloor!$
  $D_{2n+1}$   $D_{2
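The multiplicities in \[partialquotient\] are easy to tabulate; the following sketch (a toy calculation whose only input is the theorem's formula) computes the coefficients of $\langle 1\rangle$ and $\langle -1\rangle$ for a given composition $(n_1,\ldots,n_r)$:

```python
from math import factorial, prod

def a1_degree(parts):
    """Multiplicities (m_plus, m_minus) of <1> and <-1> in the A^1-degree
    of A^n / (S_{n_1} x ... x S_{n_r}) -> A^n / S_n, following the
    formula of [partialquotient]."""
    n = sum(parts)
    deg = factorial(n) // prod(factorial(ni) for ni in parts)
    odd = sum(ni % 2 for ni in parts)
    # a = floor(n/2)! / prod floor(n_i/2)! when at most one n_i is odd
    a = factorial(n // 2) // prod(factorial(ni // 2) for ni in parts) if odd <= 1 else 0
    return (deg + a) // 2, (deg - a) // 2
```

For the example in the text, `a1_degree([2, 2])` returns `(4, 2)`, i.e. $4\cdot\langle 1\rangle + 2\cdot\langle -1\rangle$; for the full quotient ($n_i = 1$ for all $i$, $n\geq 2$) it returns $(n!/2,\,n!/2)$, matching $\frac{n!}{2}(\langle 1\rangle + \langle -1\rangle)$.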
null
--- abstract: 'Kaltofen has proposed a new approach in [@Kal92] for computing matrix determinants without divisions. The algorithm is based on a baby steps/giant steps construction of Krylov subspaces, and computes the determinant as the constant term of a characteristic polynomial. For matrices over an abstract ring, by the results of @BaSt82, the determinant algorithm, actually a straight-line program, leads to an algorithm with the same complexity for computing the adjoint of a matrix. However, the latter adjoint algorithm is obtained by the reverse mode of automatic differentiation, hence somehow is not “explicit”. We present an alternative (still closely related) algorithm for the adjoint that can be implemented directly, that is, without resorting to an automatic transformation. The algorithm is deduced by applying program differentiation techniques “by hand” to Kaltofen’s method, and is completely described. As a subproblem, we study the differentiation of programs that compute minimum polynomials of linearly generated sequences, and we use a lazy polynomial evaluation mechanism for reducing the cost of Strassen’s avoidance of divisions in our case.' address: | CNRS, Université de Lyon, INRIA\ Laboratoire LIP, ENSL, 46, Allée d’Italie, 69364 Lyon Cedex 07, France author: - Gilles Villard title: 'Kaltofen’s division-free determinant algorithm differentiated for matrix adjoint computation' --- [^1] matrix determinant, matrix adjoint, matrix inverse, characteristic polynomial, exact algorithm, division-free complexity, Wiedemann algorithm, automatic differentiation. Introduction ============ Kaltofen [@Kal92] has introduced a baby steps/giant steps approach for computing matrix determinants without divisions. This approach has brought breakthrough ideas for improving the complexity estimate for the problem of computing the determinant without divisions over an abstract ring (see [@Kal92; @KaVi04-2]). With these foundations, the algorithm of @KaVi04-2 computes the determinant in $O(n^{2.7})$ additions, subtractions, and multiplications.
The same ideas also lead to the currently best known bit complexity estimate of @KaVi04-2 for the problem of computing the characteristic polynomial. Using the reverse mode of automatic differentiation (see [@Lin70; @Lin76], and [@OWB71]), a straight-line program for computing the determinant of a matrix $A$ can be (automatically) transformed into a program for computing the adjoint matrix $A ^{*}$ of $A$. This principle, stated by @BaSt82 [Cor.5], is also applied by @Kal92 [Sec.1.2] for computing $A^*$. Since the adjoint program is derived by an automatic process, little is known about the way it computes the adjoint. The only available information seems to be the determinant program itself, and the knowledge we have on the differentiation process. The adjoint program can neither be described nor implemented without resorting to an automatic differentiation tool. In this paper, by studying the differentiation of Kaltofen’s determinant algorithm step by step, we produce an “explicit” adjoint algorithm. The determinant algorithm, which we first recall in Section \[sec:detK\] over an abstract field $\K$, uses a Krylov subspace construction, hence mainly reduces to vector times matrix, and matrix times matrix products. Another operation involved is computing the minimum polynomial of a linearly generated sequence. We apply the program differentiation mechanism, reviewed in Section \[sec:autodiff\], to the different steps of the determinant program in Section \[sec:differentiation\]. This leads us to the description of a corresponding new adjoint program over a field, in Section \[sec:adjointK\]. The algorithm we obtain somehow calls to mind the matrix factorization of @Ebe97 [(3.4)]. We note that our objectives are similar to Eberly’s, whose goal was to give an explicit inversion algorithm from the parallel determinant algorithm of @KaPa91.
Our motivation for studying the differentiation and resulting adjoint algorithm is the importance of the determinant approach of @Kal92, and @KaVi04-2, for various complexity estimates. Recent advances around the determinant of polynomial or integer matrices (see [@EGV00; @KaVi04-2; @Sto03; @Sto05]), and matrix inversion (see [@JeVi06], and [@Sto08]) also justify the study of the general adjoint problem. For computing the determinant without divisions over a ring $\R$, Kaltofen applies the avoidance of divisions of @Str73 to his determinant algorithm over a field. We apply the same strategy for the adjoint. From the algorithm of Section \[sec:adjointK\] over a field, we deduce an adjoint algorithm over an arbitrary ring $\R$ in Section \[sec:nodiv\]. The avoidance of divisions involves computations with truncated power series. For the determinant, this reduction implies no extra asymptotic cost. However, since we use the reverse mode of differentiation, the flow of computation is modified, and the benefit of the baby steps/giant steps is partly lost for the adjoint. This is reflected in the cost estimate. The division-free determinant algorithm of @Kal92 uses $\sO(n^{3.5})$ operations in $\R$. The adjoint algorithm we propose has essentially the same cost. Our study may be seen as a first step for the differentiation of the more efficient algorithm of @KaVi04-2. The latter would in particular require considering asymptotically fast matrix multiplication algorithms that are not discussed in what follows. Especially in our matrix context, we note that interpreting programs obtained by automatic differentiation may have connections with the interpretation of programs derived using the transposition principle. We refer for instance to the discussion of @Kal00-2 [Sec.6]. [**Cost functions. **]{} We let ${\sf M}(n)$ be such that two univariate polynomials of degree $n$ over an arbitrary ring $\R$ can be multiplied using ${\sf M}(n)$ operations in $\R$. The algorithm of @CaKa91 allows ${\sf M}(n)=O(n\log n \log\log n)$.
The function $O({\sf M}(n))$ also measures the cost of truncated power series arithmetic over $\R$. For bounding the cost of polynomial gcd-type computations over a commutative field $\K$ we define the function ${\sf G}$. Let ${\sf G}(n)$ be such that the extended gcd problem (see [@vzGG99 Chap.11]) can be solved with ${\sf G}(n)$ operations in $\K$ for polynomials of degree $2n$ in $\K[x]$. The recursive Knuth/Schönhage half-Gcd algorithm (see [@Knu70; @Sch71; @Moe73]) allows ${\sf G}(n)=O({\sf M}(n)\log n)$. The minimum polynomial of degree $n$, of a linearly generated sequence given by its first $2n$ terms, can be computed in ${\sf G}(n) +O(n)$ operations (see [@vzGG99 Algorithm 12.9]). We will often use the notation $\sO$ that indicates missing factors of the form $\alpha (\log n )^{\beta}$, for two positive real numbers $\alpha$ and $\beta$. Kaltofen’s determinant algorithm over a field {#sec:detK} ============================================= Kaltofen’s determinant algorithm extends the Krylov-based method of @Wie86. The latter approach is successful in various situations. We refer especially to the algorithms of @KaPa91 and @KaSa91 around exact linear system solution that has served as basis for subsequent works. We may also point out the various questions investigated by @CEKSTV01-2, and references therein. Let $\K$ be a commutative field. We consider $A \in \K ^{n \times n}$, $u \in \K ^{ 1 \times n}$, and $v \in \K ^{n \times 1}$. We introduce the Hankel matrix $H= \left(uA^{i+j-2}v\right) _{1\leq i,j \leq n} \in \K ^{n \times n}$, and let $h_k=uA^kv$ for $0\leq k \leq 2n-1$. 
We also assume that $H$ is non-singular: $$\label{eq:defH} \det H = \det \left[ \begin{array}{cccc} uv & uAv & \ldots & uA ^{n-1}v \\ uAv & uA ^{2}v & \ldots & uA ^{n}v \\ \vdots & \vdots & \ddots & \vdots\\ uA ^{n-1}v & \ldots & \ldots & uA ^{2n-2}v \end{array} \right] \neq 0.$$ In the applications, (\[eq:defH\]) is ensured either by construction of $A,u$, and $v$, as in [@Kal92; @KaVi04-2], or by randomization (see the above cited references around Wiedemann’s approach, and [@Kal92; @KaVi04-2
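The Krylov construction above is concrete enough to sketch directly. The toy implementation below works over a prime field $\mathbb{F}_p$ (an illustrative choice; the text works over an abstract field $\K$ and computes the minimum polynomial with gcd-type algorithms rather than the naive Hankel solve used here): it computes $h_k = uA^kv$, solves the Hankel system for the minimum polynomial $f$ of the sequence, and reads off $\det A = (-1)^n f(0)$ when $f$ equals the characteristic polynomial.

```python
import random

def krylov_det(A, p):
    """det(A) mod a prime p via the Krylov/Hankel construction: compute
    h_k = u A^k v, solve the Hankel system for the minimum polynomial f
    of the sequence, and return (-1)^n * f(0).  Assumes f equals the
    characteristic polynomial, i.e. that the analogue of (eq:defH) holds."""
    n = len(A)
    u = [random.randrange(1, p) for _ in range(n)]
    v = [random.randrange(1, p) for _ in range(n)]
    w, h = v[:], []
    for _ in range(2 * n):                      # h_k = u A^k v, 0 <= k < 2n
        h.append(sum(ui * wi for ui, wi in zip(u, w)) % p)
        w = [sum(A[i][j] * w[j] for j in range(n)) % p for i in range(n)]
    # Solve sum_j f_j h_{i+j} = -h_{n+i} (i = 0..n-1) for the low-order
    # coefficients f_0..f_{n-1} of the monic minimum polynomial.
    M = [[h[i + j] for j in range(n)] + [(-h[n + i]) % p] for i in range(n)]
    for c in range(n):                          # Gaussian elimination mod p
        piv = next(r for r in range(c, n) if M[r][c])
        M[c], M[piv] = M[piv], M[c]
        inv = pow(M[c][c], p - 2, p)            # Fermat inverse
        M[c] = [x * inv % p for x in M[c]]
        for r in range(n):
            if r != c and M[r][c]:
                M[r] = [(x - M[r][c] * y) % p for x, y in zip(M[r], M[c])]
    return (-1) ** n * M[0][n] % p              # det A = (-1)^n f(0)
```

For random $u,v$ over a large field and a matrix with distinct eigenvalues, the assumption holds with high probability; the paper's setting enforces (\[eq:defH\]) by construction or randomization.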
null
--- abstract: 'It has been known that epidemic outbreaks in the SIR model on networks are described by phase transitions. Despite the similarity with percolation transitions, whether an epidemic outbreak occurs or not cannot be predicted with probability one in the thermodynamic limit. We elucidate its mechanism by deriving a simple Langevin equation that captures an essential aspect of the phenomenon. We also calculate the probability of outbreaks near the transition point.' Can one predict in advance whether an epidemic outbreak will occur in a real society? Obviously, this is hard to answer, because an accurate model of epidemic spread in real societies, which involve complicated and heterogeneous human-to-human contact, cannot be constructed. Then, is it possible to predict the outbreak for a simple mathematical model? Even in this case, the manner of the early spread of disease may significantly influence states that manifest after a sufficiently long time. For example, it seems reasonable to conjecture that whether a single infected individual with a very high infection rate causes an outbreak may depend on the number of people infected by the individual, which is essentially stochastic. In the present paper, we attempt to formulate this conjecture. Specifically, we study the stochastic SIR model on networks (we refer to e.g. Ref. [@allen2008introduction] for an introduction to the stochastic SIR model; see also Refs. [@boccaletti2006complex; @RevModPhys.81.591] for related social dynamics on complex networks). The SIR model may be defined for well-mixed cases [@bailey1950simple; @bailey1953total; @metz1978epidemic; @martin1998final; @PhysRevE.76.010901; @PhysRevE.86.062103], homogeneous networks [@diekmann1998deterministic; @PhysRevE.64.050901; @PhysRevE.66.016128; @lanvcic2011phase; @bohman2012sir; @moreno2002epidemic], and scale-free networks [@moreno2002epidemic; @PhysRevLett.86.3200; @PhysRevE.64.066112; @gallos2003distribution].
A remarkable phenomenon is that when the infection rate $\lambda$ exceeds a critical value ${\lambda_{\rm c}}$, a disease spreads to macroscopic scales from a single infected individual, which corresponds to an epidemic outbreak. This was found in well-mixed cases and random graphs, but ${\lambda_{\rm c}}=0$ for scale-free networks. That is, epidemic outbreaks are described as phase transition phenomena. In addition to the interest in theoretical problems, recently, the SIR model on networks has been studied to identify influential spreaders [@kitsak2010identification] and to determine better immunization strategies [@PhysRevLett.91.247901; @PhysRevLett.101.058701]. Although the phase transition in the SIR model may be a sort of percolation transition, its properties differ from those of standard percolation models. In the SIR model exhibiting the phase transition, the order parameter characterizing it may be the fraction of the infected population, which is denoted by $\rho$. Indeed, $\rho=0$ in the non-outbreak phase ($\lambda<{\lambda_{\rm c}}$), whereas the expectation of $\rho$ grows continuously from zero when $\lambda > {\lambda_{\rm c}}$. This phenomenon is in accordance with the standard percolation transition. However, whereas the order parameter in the percolated phase, e.g. the fraction of the largest cluster, takes a definite value with probability one in the thermodynamic limit, the fraction of the infected population in the SIR model is not uniquely determined even in the thermodynamic limit. In fact, it has been reported that the distribution function of the order parameter in SIR models with finite sizes shows two peaks at $\rho=0$ and $\rho=\rho_*$ for well-mixed cases [@bailey1953total; @metz1978epidemic; @martin1998final; @PhysRevE.76.010901], homogeneous networks [@diekmann1998deterministic; @lanvcic2011phase; @PhysRevE.64.050901], and scale-free networks [@gallos2003distribution].
Mathematically, the probability density of $\rho$ in the thermodynamic limit may be expressed as $$P(\rho;\lambda)= (1-q(\lambda)) \delta(\rho) +q(\lambda) \delta(\rho-\rho_*), \label{goal}$$ where $q=0$ for $\lambda \le {\lambda_{\rm c}}$ and $q\not = 0$ for $\lambda > {\lambda_{\rm c}}$. This means that the value of the fraction of the infected population in the outbreak phase, which is either $0$ or $\rho_*(\lambda)$, cannot be predicted with certainty. We call this phenomenon the [*intrinsic unpredictability of epidemic outbreaks*]{}. In this paper, we clarify the meaning of Eq. (\[goal\]). We first observe the phenomenon in the SIR model defined on a random regular graph. By employing a mean field approximation, we describe the epidemic spread dynamics in terms of a master equation for two variables. Then, with a system size expansion, we approximate the solutions to the master equation by those to a Langevin equation. Now we can analyze this Langevin equation and work out the mechanism of the appearance of the two peaks. We also calculate $q(\lambda)$ near the transition point. Model ===== Let $G$ be a random regular graph with $N$ nodes, each of degree $k$. For each $x \in G$, the state $\sigma(x) \in \{ {{\rm S}}, {{\rm I}}, {{\rm R}}\} $ is defined, where ${{\rm S}}$, ${{\rm I}}$, and ${{\rm R}}$ represent Susceptible, Infective, and Recovered, respectively. The state of the whole system is given by $(\sigma_x)_{x \in G}$, which is denoted by ${{\boldsymbol \sigma}}$ collectively. The SIR model on networks is described by a continuous time Markov process with infection rate $\lambda$ and recovery rate $\mu$.
Concretely, the transition rate $W({{\boldsymbol \sigma}} \to {{\boldsymbol \sigma}}')$ of the Markov process is given as $$W({{\boldsymbol \sigma}} \to {{\boldsymbol \sigma}}') = \sum_{x \in G} w({{\boldsymbol \sigma}} \to {{\boldsymbol \sigma}}'|x),$$ with $$\begin{aligned} w({{\boldsymbol \sigma}} \to {{\boldsymbol \sigma}}'|x)& = & \lambda \left[\delta(\sigma_x,{{\rm S}})\delta(\sigma_x',{{\rm I}}) \sum_{y \in B(x)} \delta(\sigma_y,{{\rm I}}) \right] \nonumber \\ & & +\mu \delta(\sigma_x, {{\rm I}})\delta(\sigma_x', {{\rm R}}), \label{rate-netmodel}\end{aligned}$$ where $B(x)$ is the set of the $k$ nodes adjacent to $x \in G$. Hereinafter, without loss of generality, we use dimensionless time by setting $\mu=1$. For almost all time sequences, infective nodes vanish after a sufficiently long time, and then the system reaches a stationary state, which is called the [*final state*]{}. The ratio of the total number of recovered nodes to $N$ in the final state is equivalent to the fraction of the infected population $\rho$. This quantity measures the extent of the epidemic spread. At $t=0$, we assume that $\sigma_x={{\rm I}}$ for only one node selected randomly and that $\sigma_x={{\rm S}}$ for the other nodes. In Fig. \[fig-sir-pm3d\], as an example, we show the result of numerical simulations for the model with $k=3$ and $N=8192$. We measured the probability density $P(\rho;\lambda)$ of the fraction of the infected population $\rho$ for various values of $\lambda$. This figure suggests that the expectation of $\rho$ becomes non-zero when $\lambda$ exceeds a critical value. The important observation here is that $\log P $ in the outbreak phase has a sharp peak near $\rho=0$, too. Indeed, the inset in Fig. \[fig-sir-pm3d\] clearly shows the existence of the two peaks in $\log P$ with $\lambda=1.5$. Similar graphs were reported in Refs. [@bailey1953total; @martin1998final; @gallos2003distribution; @PhysRevE.76.010901; @PhysRevE.64.050901; @lanvcic2011phase]. The result is shown in Fig.
\[fig-sir-Nxxx-p0-N16\], where the probability that $\rho > 1/16$, which is denoted by $p(\rho >1/16)$, is plotted as a function of $\lambda$ for several values of $N$. Note that $\lim_{N \to \infty} p(\rho >1/16)=q(\lambda)$ when $\rho_*(\lambda) >1/16$. These results suggest the limiting density
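The bimodal shape of $P(\rho;\lambda)$ is easy to reproduce in a toy setting. The sketch below simulates the well-mixed stochastic SIR model (a simplification of the network model above; the parameter names are ours) and exhibits both classes of final states:

```python
import random

def sir_final_fraction(N, lam, mu=1.0, rng=random):
    """One run of the well-mixed stochastic SIR model (infection rate
    lam*S*I/N, recovery rate mu*I), started from a single infective.
    Only the embedded jump chain matters for the final state, so no
    waiting times are sampled.  Returns rho = R_final / N."""
    S, I, R = N - 1, 1, 0
    while I > 0:
        rate_inf = lam * S * I / N          # next event: infection ...
        rate_rec = mu * I                   # ... or recovery
        if rng.random() < rate_inf / (rate_inf + rate_rec):
            S, I = S - 1, I + 1
        else:
            I, R = I - 1, R + 1
    return R / N

# Histogramming many runs for lam > lambda_c = mu reproduces the two
# peaks: minor outbreaks near rho = 0 and major ones near rho_*.
```

For $\lambda/\mu = 5$ and a few hundred runs, some trajectories die out after infecting $O(1)$ individuals while the rest infect a finite fraction of the population, mirroring the two delta peaks in Eq. (\[goal\]).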
null
--- abstract: 'The many-body state of carriers confined in a quantum dot is controlled by the balance between their kinetic energy and their Coulomb correlation. In coupled quantum dots, both can be tuned by varying the inter-dot tunneling and interactions. Using a theoretical approach based on the diagonalization of the exact Hamiltonian, we show that transitions between different quantum phases can be induced through inter-dot coupling both for a system of few electrons (or holes) and for aggregates of electrons and holes. We discuss their manifestations in addition energy spectra (accessible through capacitance or transport experiments) and optical spectra.' Semiconductor quantum dots (QDs) are an ideal laboratory for studying strongly correlated carrier states [@book1; @book2; @book3]. Indeed, the number of electrons and holes in the QD can be controlled very accurately, and almost all relevant parameters influencing their strongly correlated states, like confinement potential and coupling with magnetic field and light, can be tailored in the experiments. Inter-dot coupling can be very useful in further applications. From the point of view of fundamental physics such coupling extends the analogy between quantum dots (“artificial atoms” [@artificialatoms]) and natural atoms, to artificial and natural molecules. The tunability of coupling among QDs allows one to explore all regimes between non-interacting dots and their merging into a single QD; many of those regimes are inaccessible to molecular physics. One of the peculiarities of QDs with respect to other solid state structures consists in the partial decoupling of a few degrees of freedom from all the others, which is due to the discrete nature of the spectrum [@book1; @book2; @book3]. Exploiting such a feature largely depends on the capability of integrating arrays of QDs, thus increasing the number of degrees of freedom that one can address with precision and coherent manipulation. This is precisely the strategy pursued by the semiconductor-based solid state implementations of quantum computation [@lloyd].
In general and basic terms, the tuning of inter-dot tunneling allows one to modify the relative position of the single-particle levels, thus inducing phase transitions in the many-body ground states and different degrees of spatial correlations among carriers. Manifestations of these phenomena in systems formed by carriers of only one type, whose ground and excited state properties are accessible through addition energy spectra, have been predicted. Here we point out that similar effects are expected to occur also for systems formed by both electrons and holes. We also show that, in spite of the obvious differences, strong similarities appear in the analysis of electrons and electron-hole systems, and a unified theoretical description is in order. Basically, a competition emerges between two trends. On one side stands an atomic [*Aufbau*]{} logic, in which carriers tend to occupy the lowest single-particle states available, thus minimizing the kinetic energy and the total spin, at the (energetic) cost of reducing spatial correlation among carriers. At the opposite extreme we find an enhanced degree of spatial correlation among carriers, which occurs through the occupation of orbitals other than the lowest. This implies an enhancement of the kinetic energy and a reduction of the repulsive (Coulomb) energy, and results in electron distributions maximizing the total spin (Hund’s rule). The balance between these two trends depends on the spacings of the single-particle levels involved, and these spacings are precisely what can be set by controlling the inter-dot tunneling. When carriers of opposite charge, different effective masses and tunneling parameters come into play, the competition between the two trends becomes even more delicate. Predictions of the actual ground and excited states of the many-body system thus require a careful theoretical treatment including all carrier-carrier interactions.
Since the number of carriers in the dot can be controlled and kept relatively small, we can proceed through direct diagonalization of the exact many-body Hamiltonian, with no need to make a priori assumptions on the interactions. Rather, the results are a useful benchmark for the validity of the most common approximations for these systems. We find that different quantum phases correspond to different regimes of inter-dot coupling both for a system of few electrons (or holes) and for aggregates of electrons and holes, with various possible spatial configurations and the formation of different possible “subsystems” of inter-correlated particles. Moreover, due to the negligible electron-hole exchange interaction in heterostructures such as GaAs, the two kinds of carriers can be treated as distinguishable particles. Therefore spatial correlation among electrons and holes does not arise from the Fermi statistics: it needs instead the entanglement between the orbital degrees of freedom associated with holes and electrons, and turns out to depend only indirectly on the spin quantum numbers $ S_{e} $ and $ S_{h} $. After a brief summary of the state of the art in theoretical and experimental work on coupled dots (Sect. \[Review\]), in the following we describe the general Hamiltonian and solution scheme (Sect. \[Method\]). We then come to the results for electron- (Sect. \[Electron\]) and electron-hole systems (Sect. \[Electron-hole\]). The trends leading to different quantum phases are discussed in detail, together with their nature in terms of spin and spatial correlation functions. A review of experimental results on coupled QDs can be found in [@review]. Here we consider [*artificial molecules*]{} [@leoscience], where carriers tunnel at appreciable rates between dots, and the wavefunction extends across the entire system.
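The kinetic-versus-correlation competition has a textbook caricature: two electrons on two tunnel-coupled sites with hopping $t$ and on-site repulsion $U$ (a Hubbard-type toy model, not the full Hamiltonian of Sect. \[Method\]), which can be diagonalized exactly:

```python
import math

def singlet_hamiltonian(t, U):
    """Two electrons on two tunnel-coupled sites in the total-singlet
    sector, basis {|20>, |02>, (|ud> + |du>)/sqrt(2)}: hopping t couples
    the doubly occupied states to the open-shell singlet."""
    s = math.sqrt(2.0) * t
    return [[U, 0.0, -s],
            [0.0, U, -s],
            [-s, -s, 0.0]]

def ground_energy(t, U):
    # Closed-form lowest eigenvalue of the matrix above.
    return 0.5 * (U - math.sqrt(U * U + 16.0 * t * t))

def det3(M):
    # 3x3 determinant, used to verify that ground_energy is an eigenvalue.
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
```

For $t \gg U$ the carriers delocalize (the *Aufbau*-like regime, $E_0 \to -2t$); for $U \gg t$ double occupancy is suppressed and the singlet-triplet splitting shrinks to $J \simeq 4t^2/U$, the strongly correlated regime. Tuning the ratio $t/U$, as inter-dot tunneling does in a QD molecule, interpolates between the two.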
The formation of a miniband structure in a one-dimensional array of tunnel-coupled dots was demonstrated more than a decade ago [@leocrystal]. After that, the first studies considered “planar” coupled dots defined by electrodes in a two dimensional electron gas. In these devices the typical charging energy was much larger than the average inter-level spacing, hence linear [@planarlinear] and non-linear [@planarnonlinear] Single Electron Tunneling Spectroscopy (SETS), obtained by transport measurements at different values of the inter-dot conductance, could be explained by model theories based on capacitance parameterizations [@planarth]. Early studies also considered simple model Hamiltonians (usually Hubbard-like) with matrix elements treated as parameters [@Hubbardsimple]. Blick and coworkers clearly showed the occurrence of coherent molecular states across the entire two-dot setup, analyzing transport data [@blickI] and the response to a coherent wave interferometer [@blickII]. The tuning of coherent states was also probed by microwave excitations [@PAT], and coupling with acoustic phonons of the environment was studied [@spontaneous]. Planar coupled dots were also used to cool electron degrees of freedom [@vaart], to measure the magnetization as a function of the magnetic field [@magnetization], and to study the phenomenon of “bunching” of addition energies in large quantum dots [@bunching]. The so-called “vertical” experimental geometry was introduced later: it consists of a cylindrical mesa incorporating a triple barrier structure that defines two dots. So far, evidence of single-particle coherent states in a AlAs/GaAs heterostructure has been reported [@schmidt], while in AlGaAs/InGaAs structures clear SETS spectra of few-particle states have been observed as a function of the magnetic field $B$ and of the inter-dot barrier thickness [@guy].
A relevant part of theoretical research has addressed the study of few-particle states in vertical geometries, within the framework of the envelope function approximation. The two-electron problem was solved, by means of exact diagonalization, in different geometries by Bryant [@bryant] and by Oh [*et al. *]{} [@oh]. Systems with a number of electrons $N>2$ at $B\approx 0$ in cylindrical geometry have been studied by several methods: Hartree-Fock [@tamura], exact diagonalization for $N\le 5$ [@tokura], numerical solution of a generalized Hubbard Model for $N\le 6$ [@ssc] and for $N>12$ with a “core” approximation suitable for the weak-coupling regime only [@asano], density functional theory [@bart]. Palacios and Hawrylak [@palacios] studied the energy spectrum in strong magnetic field and negligible inter-dot tunneling with various methods ($N\le 6$), and established a connection between the correlated ground states of the double-dot system and those of Fractional Quantum Hall Effect systems in double layers. In this perspective, Hu [*et al. *]{} [@dagotto] studied collective modes in mean-field theory, Imamura [*et al. *]{} [@aoki] exactly diagonalized the full Hamiltonian at strong $B$ and different values of tunneling ($N\le 4$), Martín-Moreno [*et al. *]{} [@tejedor] considered the
null
--- author: - | Ziyu Zhang[^1] Alexander G. Schwing Sanja Fidler Raquel Urtasun\ Department of Computer Science, University of Toronto\ [{zzhang, aschwing, fidler, urtasun}@cs.toronto.edu]{} bibliography: - 'egbib.bib' title: Monocular Object Instance Segmentation and Depth Ordering with CNNs --- [^1]: The first two authors contributed equally to this work.
null
--- abstract: | We propose a new model for naturally realizing light Dirac neutrinos and explaining the baryon asymmetry of the universe through neutrinogenesis. To achieve these, we present a minimal construction which extends the standard model with a real singlet scalar, a heavy singlet Dirac fermion and a heavy doublet scalar besides three right-handed neutrinos, respecting lepton number conservation and a $Z_2^{}$ symmetry. The neutrinos acquire small Dirac masses due to the suppression by the ratio of the weak scale over a heavy mass scale. As a key feature of our construction, once the heavy Dirac fermion and doublet scalar go out of equilibrium, their decays induce the CP asymmetry from the interference of tree-level processes with the *radiative vertex corrections* (rather than the self-energy corrections). Although there is no lepton number violation, an equal and opposite amount of CP asymmetry is generated in the left-handed and the right-handed neutrinos. The left-handed lepton asymmetry would then be converted to the baryon asymmetry in the presence of the sphalerons, while the right-handed lepton asymmetry remains unaffected.\ \[2mm\] author: - 'Pei-Hong Gu$^{1}_{}$' - 'Hong-Jian He$^{2}_{}$' - 'Utpal Sarkar$^{3}_{}$' title: Realistic Neutrinogenesis with Radiative Vertex Correction --- Strong evidence from neutrino oscillation experiments [@pdg2006] has so far pointed to tiny but nonzero masses for active neutrinos. The smallness of the neutrino masses can be elegantly understood via the seesaw mechanism [@minkowski1977] in various extensions of the standard model (SM). The origin of the observed baryon asymmetry [@pdg2006] in the universe poses a real challenge to the SM, but within the seesaw scenario, it can be naturally explained through leptogenesis [@fy1986; @luty1992; @fps1995; @ms1998; @di2002; @kl1984]. In the conventional leptogenesis scenario, the lepton number violation is essential as it is always associated with the mass-generation of Majorana neutrinos.
However, the Majorana or Dirac nature of the neutrinos is unknown a priori and awaits experimental determination. It is important to note [@ars1998; @dlrw1999] that even with lepton number conservation, it is possible to generate the observed baryon asymmetry in the universe. Since the sphaleron processes [@krs1985] have no direct effect on the right-handed fields, a nonzero lepton asymmetry stored in the left-handed fields, which is equal but opposite to that stored in the right-handed fields, can be partially converted to the baryon asymmetry. As long as the interactions between the left-handed and the right-handed lepton numbers are too weak to reach equilibrium before the electroweak phase transition, the sphalerons convert the lepton asymmetry in the left-handed fields, leaving the asymmetry in the right-handed fields unaffected [@dlrw1999; @mp2002; @ap2006; @gdu2006; @gh2006]. For all the SM species, the Yukawa interactions are sufficiently strong to rapidly cancel the stored left- and right-handed lepton asymmetry. However, the effective Yukawa interactions of the ultralight Dirac neutrinos are exceedingly weak [@rw1983; @rs1984] and thus will not reach equilibrium until the temperatures fall well below the weak scale. In some realistic models [@mp2002; @gdu2006; @gh2006], the effective Yukawa couplings of the Dirac neutrinos are naturally suppressed by the ratio of the weak scale over the heavy mass scale. Simultaneously, the heavy particles can decay with a CP asymmetry to generate the expected left-handed lepton asymmetry after they are out of equilibrium. This new type of leptogenesis mechanism is called neutrinogenesis [@dlrw1999]. In this paper, we propose a new model to generate the small Dirac neutrino masses and explain the origin of cosmological baryon asymmetry, by extending the SM with a real scalar, a heavy Dirac fermion singlet and a heavy doublet scalar besides three right-handed neutrinos.
In comparison with all previous realistic neutrinogenesis models [@mp2002; @gdu2006; @gh2006], the Dirac neutrino masses in our new model are also suppressed by the ratio of the weak scale over the heavy mass scale, but the crucial difference is that in the decays of the heavy particles, the *radiative vertex corrections* (instead of the self-energy corrections) interfere with the tree-level diagrams to generate the required CP asymmetry and naturally realize neutrinogenesis. Here $\psi_{L}^{}$, $\nu_{R}^{}$, $D_L^{}$ and $D_R^{}$ carry lepton number $1$ while $\phi$, $\eta$ and $\chi$ have zero lepton number. For simplicity, we have omitted the family indices as well as other SM fields, which carry even parity under the discrete symmetry $Z_{2}^{}$. It should be noted that the conventional dimension-4 Yukawa interactions among the left-handed lepton doublets, the SM Higgs doublet and the right-handed neutrinos are forbidden under the $Z_{2}^{}$ symmetry. Our model also exactly conserves the lepton number, so we can write down the relevant Lagrangian as below, $$\begin{aligned} \label{lagrangian1} -\mathcal{L} &\supset& \left\{f_{i}^{}\overline{\psi_{Li}^{}}\phi D_{R}^{} + g_{i}^{}\chi \overline{D_{L}^{}}\nu_{Ri}^{} +y_{ij}^{}\overline{\psi_{Li}^{}}\eta\nu_{Ri}^{} -\mu\chi\eta^{\dagger}_{}\phi\right.\nonumber\\ && \left.+M_{D}^{}\overline{D_{L}^{}}D_{R}^{} +\textrm{h.c}\right\}+M_{\eta}^{2}\eta^{\dagger}_{}\eta\,, \end{aligned}$$ where $f_i^{}$, $g_i^{}$ and $y_{ij}^{}$ are the Yukawa couplings, while the cubic scalar coupling $\mu$ has mass dimension equal to one. The parameters $M_D$ and $M_\eta$ in (\[lagrangian1\]) are the masses of the heavy singlet fermion $D$ and the heavy Higgs doublet $\eta$, respectively. Note that in the Higgs potential the scalar doublet $\eta$ has a positive mass-term as shown in the above Eq. (\[lagrangian1\]), while the Higgs doublet $\phi$ and singlet $\chi$ both have negative mass-terms[^1].
Lepton number conservation ensures that there are no Majorana mass terms for any of the fermions. As we will discuss below, the vacuum expectation value (*vev*) of $\eta$ turns out to be much smaller than the *vevs* of the other fields. Thus the first two terms generate mixings of the light Dirac neutrinos with the heavy Dirac fermion, while the third term gives the light Dirac neutrino mass term. The complete mass matrix can now be written in the basis $\left\{ \nu_L^{},~ D_L^{},~\nu_R^{},~D_R^{}\right\}$ as $$\begin{aligned} \label{eq:Mnu44} M = \left[ \begin{array}{cccc}0 & 0 & a & b \\ 0 & 0 & c & d \\ a^{\dagger}_{} & c^{\dagger}_{} & 0 & 0 \\ b^{\dagger}_{} & d^{\dagger}_{} & 0 & 0\end{array}\right] \,, \end{aligned}$$ where $a \equiv y\langle \eta \rangle$, $ b\equiv f \langle \phi\rangle$, $ c\equiv g \langle \chi \rangle$ and $d\equiv M_{D}^{}$. As will be shown below, $\,d\gg a,b,c\,$. Diagonalizing the mass matrix (\[eq:Mnu44\]) therefore generates light Dirac neutrino masses of order $a - bc/d$ and a heavy Dirac fermion mass of order $d$. As shown in Fig.\[massgeneration\], at low energy we can integrate out the heavy singlet fermion as well as the heavy
--- abstract: 'We describe a space-efficient algorithm for solving a generalization of the subset sum problem in a finite group $G$, using a Pollard-$\rho$ approach. Given an element $z$ and a sequence of elements $S$, our algorithm attempts to find a subsequence of $S$ whose product in $G$ is equal to $z$. For a random sequence $S$ of length $d\log_2 n$, where $n=\#G$ and $d {\geqslant}2$ is a constant, we find that its expected running time is $O(\sqrt{n}\log n)$ group operations (we give a rigorous proof for $d > 4$), and it only needs to store $O(1)$ group elements. We consider applications to class groups of imaginary quadratic fields, and to finding isogenies between elliptic curves over a finite field.' author: - 'Gaetan Bisson and Andrew V. Sutherland' title: 'A low-memory algorithm for finding short product representations in finite groups' --- Introduction ============ Let $S$ be a sequence of elements in a finite group $G$ of order $n$, written multiplicatively. We say that $S$ *represents* $G$ if every element of $G$ can be written as the product of a subsequence of $S$. Ideally, we want $S$ to be short, say $k=d\log_2 n$ for some constant $d$ known as the *density* of $S$. In order for $S$ to represent $G$, we clearly require $d{\geqslant}1$, and for sufficiently large $n$, any $d>1$ suffices. More precisely, Babai and Erdős [@babai-erdos] show that for all $$k {\geqslant}\log_2 n + \log_2 \log n + 2$$ there exists a sequence $S$ of length $k$ that represents $G$. Their proof is non-constructive, but, in the case that $G$ is abelian, Erdős and Rényi [@erdos-renyi] show that a randomly chosen sequence of length $$k = \log_2 n + \log_2 \log n + \omega_n$$ represents $G$ with probability approaching $1$ as $n\to\infty$, provided that $\omega_n\to\infty$. A corresponding result for nonabelian groups appears in [@white]. In related work, Impagliazzo and Naor prove that for a random sequence $S$ of density $d>1$, the distribution of subsequence products almost surely converges to the uniform distribution on $G$ as $n$ goes to infinity [@impagliazzo-naor Proposition 4.1]. 
This result allows us to bound the complexity of our algorithm for almost all $S$ with $d > 4$. Given a sequence $S$ that represents $G$ (or a large subset of $G$), we wish to find an explicit representation of a given group element $z$ as the product of a subsequence of $S$; we call this a *short product representation* of $z$. In the special case that $G$ is abelian and the elements of $S$ are distinct, this is the *subset sum problem* in a finite group. Variations of this problem and its decision version have long been of interest to many fields: complexity theory [@karp], cryptography [@merkle-hellman], additive number theory [@babai-erdos], Cayley graph theory [@alon-milman], and information theory [@alon-barak-manber], to name just a few. As a computational framework, we work with a generic group $G$ whose elements are uniquely identified, and assume that all group operations are performed by a black box that can also provide random group elements; see [@sutherland:thesis Chapter 1] for a formal model. Time complexity is measured by counting group operations (calls to the black box), and for space complexity we count the number of group elements that are simultaneously stored. Working in this model ensures that our algorithms apply to any finite group for which a suitable black box can be constructed. It also means that finding short product representations is provably hard. Indeed, the discrete logarithm problem in a cyclic group of prime order has a lower bound of $\Omega(\sqrt{n})$ in the generic group model [@shoup], and is easily reduced to finding short product representations. 
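To make the setting concrete, the following sketch works in the additive group $\mathbb{Z}/n\mathbb{Z}$ (a stand-in for the generic black box) and finds a short product (here: subset-sum) representation by the meet-in-the-middle baby-step giant-step strategy. The function names and the particular sequence $S$ of powers of two, which provably represents $\mathbb{Z}/n\mathbb{Z}$, are our own illustrative choices, not from the paper.

```python
from itertools import chain, combinations

def subsequences(seq):
    """All subsequences (as tuples) of seq, including the empty one."""
    return chain.from_iterable(combinations(seq, r) for r in range(len(seq) + 1))

def short_product_bsgs(S, z, n):
    """Meet-in-the-middle search for a subsequence of S summing to z mod n.
    Uses O(2^(k/2)) time and storage for a sequence of length k."""
    A, B = S[:len(S) // 2], S[len(S) // 2:]
    baby = {sum(x) % n: x for x in subsequences(A)}   # baby steps: table of products over P(A)
    for y in subsequences(B):                          # giant steps: scan products over P(B)
        t = (z - sum(y)) % n
        if t in baby:
            return baby[t] + y                         # concatenated subsequence representing z
    return None

# Powers of two up to 2^9 give subset sums 0..1023, covering all of Z/1009Z,
# so a solution is guaranteed to exist here.
n, z = 1009, 777
S = [2 ** i for i in range(10)]
sol = short_product_bsgs(S, z, n)
assert sol is not None and sum(sol) % n == z
```

Splitting $S$ into halves gives the $O(2^{k/2})$ time/space trade-off of the baby-step giant-step method; the Pollard-$\rho$ approach of the paper instead replaces the stored table by a low-memory collision-finding walk.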
In the particular group $G={\mathbb Z}/n{\mathbb Z}$, we note that finding short product representations is easier for non-generic algorithms: the problem can be lifted to $k$ subset sum problems in ${\mathbb Z}$, which for suitable inputs can be solved with a time and space complexity of $O(n^{0.3113})$ via [@howgravegraham-joux], beating the $\Omega(\sqrt{n})$ generic lower bound noted above. This is not so surprising, since working with integers is often easier than working in generic groups; for instance, the discrete logarithm problem in ${\mathbb Z}$ corresponds to integer division and can be solved in quasi-linear time. A standard technique for solving subset sum problems in generic groups uses a baby-step giant-step approach, which can also be used to find short product representations (Section \[sec:BSGS\]). This typically involves $O(2^{k/2})$ group operations and storage for $O(2^{k/2})$ group elements. The space bound can be improved to $O(2^{k/4})$ via a method of Schroeppel and Shamir [@schroeppel-shamir]. Here, we give a Pollard-$\rho$ type algorithm [@pollard] for finding short product representations in a finite group (Section \[sec:pollard\]). It only needs to store $O(1)$ group elements, and, assuming $S$ is a random sequence of density $d>4$, we prove that its expected running time is $O(\sqrt{n}\log{n})$ group operations; alternatively, by dedicating $O(n^\epsilon)$ space to precomputations, the time complexity can be reduced to $O(\sqrt{n})$ (Section \[sec:analysis\]). We also consider two applications: representing elements of the class group of an imaginary quadratic number field as short products of prime ideals with small norm (Section \[sec:relations\]), and finding an isogeny between two elliptic curves defined over a finite field (Section \[sec:isogenies\]). 
For the latter, our method combines the advantages of [@galbraith] and [@galbraith-hess-smart] in that it requires little memory and finds an isogeny that can subsequently be evaluated in polynomial time. In practice, our algorithm performs well so long as $d {\geqslant}2$, and its low space complexity allows it to feasibly handle much larger problem instances than other generic methods (Section \[sec:comput\]). Algorithms ========== Let $S$ be a sequence of length $k$ in a finite group $G$ of order $n$, let $z$ be an element of $G$, and let ${\mathcal P}(S)$ denote the set of all subsequences of $S$. Our goal is to find a preimage of $z$ under the product map ${\pi}:{\mathcal P}(S)\to G$ that sends a subsequence of $S$ to the (ordered) product of its elements. Baby-step giant-step {#sec:BSGS} -------------------- Let us first recall the baby-step giant-step method. We may express $S=AB$ as the concatenation of two subsequences of roughly equal length. For any sequence $y=(y_1,\ldots,y_m)$, let $\mu(y) = (y_m^{-1},\ldots,y_1^{-1})$, so that ${\pi}(y)$ and ${\pi}(\mu(y))$ are inverses in $G$. We then search for $x\in{\mathcal P}(A)$ (a baby step) and $y\in{\mathcal P}(B)$ (a giant step) which “collide” in the sense that ${\pi}(x) = {\pi}(z\mu(y))$, where $z\mu(y)$ denotes the sequence $(z,y_m^{-1},\ldots,y_1^{-1})$. > 1. Express $S$ in the form $S=AB$ with $\#A\approx \#B$. > 2. For each $x\in{\mathcal P}(A)$, store $({\pi}(x),x
--- abstract: 'Conventional graph-based dependency parsers guarantee a tree structure both during training and inference. Instead, we formalize dependency parsing as the problem of independently selecting the head of each word in a sentence. Our model which we call <span style="font-variant:small-caps;">DeNSe</span> (as shorthand for [**De**]{}pendency [**N**]{}eural [ **Se**]{}lection) produces a distribution over possible heads for each word using features obtained from a bidirectional recurrent neural network. Without enforcing structural constraints during training, <span style="font-variant:small-caps;">DeNSe</span> generates (at inference time) trees for the overwhelming majority of sentences, while non-tree outputs can be adjusted with a maximum spanning tree algorithm. We evaluate <span style="font-variant:small-caps;">DeNSe</span> on four languages (English, Chinese, Czech, and German) with varying degrees of non-projectivity. Despite the simplicity of the approach, our parsers are on par with the state of the art. [^1]' author: - 'Xingxing Zhang, Jianpeng Cheng' - | Mirella Lapata\ Institute for Language, Cognition and Computation\ School of Informatics, University of Edinburgh\ 10 Crichton Street, Edinburgh EH8 9AB\ [{x.zhang,jianpeng.cheng}@ed.ac.uk, mlap@inf.ed.ac.uk]{} bibliography: - 'eacl2017.bib' title: Dependency Parsing as Head Selection --- Introduction ============ Dependency parsing plays an important role in many natural language applications, such as relation extraction [@fundel2007relex], machine translation [@carreras2009non], language modeling [@chelba1997structure; @zhang-etal:2016] and ontology construction [@snow2004learning]. Dependency parsers represent syntactic information as a set of head-dependent relational arcs, typically constrained to form a tree. 
Practically all models proposed for dependency parsing in recent years can be described as graph-based [@mcdonald2005online] or transition-based [@yamada2003statistical; @nivre2006labeled]. Graph-based dependency parsers are typically arc-factored, where the score of a tree is defined as the sum of the scores of all its arcs. An arc is scored with a set of local features and a linear model, the parameters of which can be effectively learned with online algorithms [@crammer2001algorithmic; @crammer2003ultraconservative; @freund1999large; @collins2002discriminative]. In order to efficiently find the best scoring tree during training *and* decoding, various maximization algorithms have been developed [@eisner1996three; @eisner2000bilexical; @mcdonald2005non]. In general, graph-based methods are optimized globally, using features of single arcs in order to keep learning and inference tractable. Transition-based algorithms factorize a tree into a set of parsing actions. At each transition state, the parser scores a candidate action conditioned on the state of the transition system and the parsing history, and greedily selects the highest-scoring action to execute. Greedy search can be improved with beam search and richer non-local features [@zhang2011transition]. Regardless of model type, scores are traditionally computed over manually engineered features. Feature templates are typically designed by hand and aim at capturing head-dependent relationships, which are notoriously sparse and difficult to estimate. More recently, a few approaches [@chen2014fast; @pei2015effective; @DBLP:journals/corr/KiperwasserG16a] apply neural networks for learning dense feature representations. The learned features are subsequently used in a conventional graph- or transition-based parser, or in better designed variants [@dyer2015transition]. In this work, we propose a simple neural network-based model which learns to select the head for each word in a sentence without enforcing tree structured output. 
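As a toy illustration of this head-selection view, the sketch below lets each word independently take its highest-scoring candidate head, and then checks whether the resulting graph is a tree rooted at a dummy ROOT node. The function names are ours, and the hand-picked scores stand in for the ones a trained network would produce.

```python
def greedy_heads(scores):
    """scores[m][h]: score that token h (0 = ROOT) heads token m.
    Pick the argmax head for each word independently, as in head selection."""
    heads = [0]  # placeholder for ROOT, which takes no head
    for m in range(1, len(scores)):
        candidates = [h for h in range(len(scores)) if h != m]
        heads.append(max(candidates, key=lambda h: scores[m][h]))
    return heads

def is_tree(heads):
    """Check that following head pointers from every word reaches ROOT
    without revisiting a node (i.e. the chosen arcs form a tree)."""
    for start in range(1, len(heads)):
        seen, h = set(), start
        while h != 0:
            if h in seen:          # cycle detected: not a tree
                return False
            seen.add(h)
            h = heads[h]
    return True

# toy 3-word sentence: rows = dependents 1..3, columns = candidate heads 0..3
scores = [
    [0, 0, 0, 0],    # row 0 unused (ROOT has no head)
    [5, 0, 2, 1],    # word 1 prefers ROOT
    [1, 6, 0, 2],    # word 2 prefers word 1
    [0, 2, 7, 0],    # word 3 prefers word 2
]
heads = greedy_heads(scores)
assert heads == [0, 0, 1, 2] and is_tree(heads)
```

When `is_tree` fails, a maximum spanning tree algorithm over the same scores can repair the output, which is exactly the post-processing role it plays here.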
Our model which we call [DeNSe]{} (as shorthand for [**De**]{}pendency [**N**]{}eural [**Se**]{}lection) employs bidirectional recurrent neural networks to learn feature representations for words in a sentence. These features are subsequently used to predict the head of each word. Although there is nothing inherent in the model to enforce tree-structured output, when tested on an English dataset, it is able to generate trees for 95% of the sentences, 87% of which are projective. The remaining non-tree (or non-projective) outputs are post-processed with the Chu-Liu-Edmonds (or Eisner) algorithm. [DeNSe]{} uses the head selection procedure to estimate arc weights during training, without enforcing tree-structured output. We evaluate our model on benchmark dependency parsing corpora, representing four languages (English, Chinese, Czech, and German) with varying degrees of non-projectivity. Despite the simplicity of our approach, experiments show that the resulting parsers are on par with the state of the art. Related Work ============ #### Graph-based Parsing Graph-based parsers search for the highest scoring dependency graph of a sentence. The graphs are typically factored into their component arcs and the score of a tree is defined as the sum of its arcs. This factorization enables tractable search for the highest scoring graph structure, which is commonly formulated as the search for the maximum spanning tree (MST). The Chu-Liu-Edmonds algorithm [@chu1965shortest; @edmonds1967optimum; @mcdonald2005non] is often used to extract the MST in the case of non-projective trees, and the Eisner algorithm [@eisner1996three; @eisner2000bilexical] in the case of projective trees. During training, the weight parameters of the scoring function can be learned with margin-based algorithms [@crammer2001algorithmic; @crammer2003ultraconservative] or the structured perceptron [@freund1999large; @collins2002discriminative]. 
Beyond basic first-order models, the literature offers a few examples of higher-order models involving sibling and grandparent relations [@carreras2007experiments; @koo2010dual; @zhang2012generalized]. Although more expressive, such models render both training and inference more challenging. #### Transition-based Parsing As the term implies, transition-based parsers conceptualize the process of transforming a sentence into a dependency tree as a sequence of transitions. A transition system typically includes a stack for storing partially processed tokens, a buffer containing the remaining input, and a set of arcs containing all dependencies between tokens that have been added so far [@nivre2003efficient; @nivre2006labeled]. A dependency tree is constructed by manipulating the stack and buffer, and appending arcs with predetermined operations. Most popular parsers employ an *arc-standard* [@yamada2003statistical; @nivre2004incrementality] or *arc-eager* transition system [@nivre2008algorithms]. In an *arc-standard* system [@yamada2003statistical; @nivre2004incrementality], the transitions include a <span style="font-variant:small-caps;">Shift</span> operation which removes the first word in the buffer and pushes it onto the stack; a <span style="font-variant:small-caps;">Left-Arc</span> operation which adds an arc from the word at the beginning of the buffer to the word on top of the stack; and a <span style="font-variant:small-caps;">Right-Arc</span> operation which adds an arc from the word on top of the stack to the word at the beginning of the buffer. During parsing, the transition from one configuration to the next is greedily scored with a linear classifier whose features are defined according to the stack and buffer. 
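A minimal simulator of the arc-standard transitions just described may help; this is a sketch under one common formulation of the post-arc bookkeeping, and the function name and toy derivation are ours.

```python
def arc_standard(words, transitions):
    """Run an arc-standard derivation over token indices 0..len(words)-1.
    SH pushes the buffer front onto the stack; LA adds an arc from the
    buffer front to the stack top (popping the dependent); RA adds an arc
    from the stack top to the buffer front, the head replacing the front."""
    stack, buf, arcs = [], list(range(len(words))), []
    for t in transitions:
        if t == "SH":
            stack.append(buf.pop(0))
        elif t == "LA":                       # head = buffer front, dep = stack top
            arcs.append((buf[0], stack.pop()))
        elif t == "RA":                       # head = stack top, dep = buffer front
            head = stack.pop()
            arcs.append((head, buf.pop(0)))
            buf.insert(0, head)               # head becomes available again
    return arcs

# "He eats pizza": eats (1) heads both He (0) and pizza (2)
arcs = arc_standard(["He", "eats", "pizza"], ["SH", "LA", "SH", "RA"])
assert arcs == [(1, 0), (1, 2)]
```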
The above arc-standard system builds a projective dependency tree bottom up, under the assumption that an arc is only added once the dependent node has already found all of its own dependents. Extensions include the *arc-eager* system [@nivre2008algorithms], which always adds an arc at the earliest possible stage, a more elaborate (reduce) action space to handle non-projective parsing [@attardi2006experiments], and the use of non-local training methods to avoid greedy error propagation [@zhang2008tale; @huang2010dynamic; @zhang2011transition; @goldberg2012dynamic]. #### Neural Network-based Parsing Early work applied neural networks to dependency parsing within a transition-based framework [@titov-henderson:2007:ACLMain]. Recent work uses neural networks in lieu of the linear classifiers typically employed in conventional transition- or graph-based dependency parsers. For example, feed-forward neural networks have been used to learn features for a transition-based parser, and likewise for a graph-based parser. Tensor decomposition has also been applied to obtain word embeddings in
--- abstract: 'Being homologue to the new, Fe-based type of high-temperature superconductors, CeFePO exhibits magnetism, Kondo and heavy-fermion phenomena. We experimentally studied the electronic structure of CeFePO by means of angle-resolved photoemission spectroscopy. In particular, contributions of the Ce $4f$-derived states and their hybridization with the Fe $3d$ bands were explored using both symmetry selection rules for excitation and the variations of their photoionization cross-sections as a function of photon energy. It was experimentally found $-$ and later on confirmed by LDA as well as DMFT calculations $-$ that the Ce 4$f$ states hybridize with the Fe 3$d$ states of $d_{3z^2-r^2}$ symmetry near the Fermi level, which discloses their participation in the occurring electron-correlation phenomena and provides insight into the mechanism of superconductivity in oxopnictides.' author: - 'M.G. Holder' - 'A. Jesche' - 'P. Lombardo' - 'R. Hayn' - 'D. V. Vyalikh' - 'S. Danzenbächer' - 'K. Kummer' - 'C. Krellner' - 'C. Geibel' - 'Yu. Kucherenko' - 'T. Kim' - 'R. Follath' - 'S. L. Molodtsov' - 'C. Laubschat' --- While pure $R$FeAsO ($R$: rare-earth elements) compounds reveal metallic properties [@GFChen08], doping by F on O sites leads to superconductivity. The proximity of the superconducting state to spin-density wave formation gave rise to speculations that the underlying pairing mechanism is based on magnetic fluctuations [@Mazin2009]. Superconductivity without doping, although at reduced $T_c$ with respect to the arsenides, is found in the isoelectronic phosphides, except for $R$=Ce [@Kamihara2008; @Baumbach2009]. In CeFeAsO both Fe and Ce order antiferromagnetically, below Néel temperatures of 140K [@Zhao2008] and 3.7K [@Jesche2009], respectively. A gradual replacement of As by P leads first to the vanishing of the Fe magnetism, coupled with a change of the Ce order to ferromagnetism [@Luo2009]. 
For further P doping the Ce order is suppressed, resulting in a paramagnetic heavy-fermion compound [@Bruning2009]. This wide variation of properties is a consequence of a strong sensitivity of the valence-band (VB) structure to the lattice parameters and to the interaction with localized $f$ states. Close to the Fermi level ($E_F$) the electronic structure of $R$Fe$Pn$O ($Pn$: phosphorus or arsenic) materials is dominated by five energy bands that have predominantly Fe $3d$ character [@Vildosola2008; @Kuroki2009]. Small variations of the lattice parameters affect particularly two of these bands, namely those containing $d_{xy}$ and $d_{3z^2-r^2}$ orbitals. Increasing the distance of the pnictogen ions to the Fe plane shifts the $d_{xy}$-derived band towards lower and the $d_{3z^2-r^2}$-derived bands towards higher binding energies (BE), leading to a transition from 3D to 2D behavior of the Fermi surface (FS). As shown in Ref. \[\], superconductivity delicately depends on nesting conditions between the FS sheets generated by the above mentioned bands around the $\Gamma$ point and those located around the $M$ point in the Brillouin zone (BZ). The nesting conditions may be affected by variations of the lattice parameters or by interaction with 4$f$ states. The purpose of the present work is to study the electronic structure of CeFePO by means of angle-resolved photoemission (ARPES) in order to understand possible reasons for the quenching of superconductivity. We find that closely below $E_F$ both the position and the dispersion of the valence bands are strongly changed with respect to the ones in LaFePO, which is at least partly due to interactions with the Ce 4$f$ states. Hybridization of the Fe 3$d$-derived VBs with the Ce 4$f$ states leads, around the $\bar\Gamma$ point of the surface BZ, to a strong 4$f$ admixture to the valence bands, accompanied by a reconstruction of the Fermi surface and a shift of the 4$f$-derived quasiparticle band to lower binding energies. 
Experiments were performed at the “$1^3$-ARPES” setup at BESSY (Berlin) as described in Ref. \[\], at temperatures around 10K, on single crystals grown from a Sn flux as specified in Ref. \[\]. Due to the setup geometry, the vector potential $\bm{A}$ of the incident light is parallel to the sample surface at vertical polarization (VP) and possesses an additional perpendicular component at horizontal polarization (HP). The dipole matrix elements for the photoexcitation depend on the spatial extension of the orbital along the direction of $\bm{A}$. This means that in normal emission geometry states of $d_{3z^2-r^2}$ symmetry will contribute only at HP, while those of $d_{xz,yz}$ and $d_{x^2-y^2}$ ($d_{xy}$, depending on the orientation of the sample in the $(x,y)$ plane) symmetry will be detected at both VP and HP $-$ though with different relative intensities. ![(Color online) Experimental ARPES images recorded from CeFePO at *h*$\nu$=112eV and VP along the $\bar\Gamma$ - $\bar M$ (a) and $\bar\Gamma$ -$\bar X$ (b) directions in the surface BZ, and calculated energy bands for a slab containing 15 atomic layers, with a P terminated surface, treating 4$f$ states as quasi-core (c) and valence states (d). Size of the dots indicates the contribution of $d$ orbitals of the outermost Fe layer (solid dots) or of Ce 4$f$ states (4th layer, open dots). The labels indicate the orbitals with the strongest contribution to the bands. []{data-label="ARPES"}](ARPES){width="8.5cm"} Photoemission (PE) spectra of Ce systems reveal a well known double-peak structure, consisting of a component at about 2eV BE, roughly reflecting the 4$f^0$ final state expected from PE excitation of an unhybridized 4$f^1$ ground state, and a feature close to $E_F$ that is only due to hybridization and reproduces the ground-state configuration of mainly 4$f^1$ character. 
In our measurements we made use of the strong variations of the 4$f$ photoionization cross section around the 4$d\rightarrow$ 4$f$ absorption threshold due to a Fano resonance: the 4$f$ emission becomes resonantly enhanced (suppressed) at *h*$\nu$=121eV (112eV) photon energy [@Mol1997]. Valence-band maps taken at VP and a photon energy of 112eV are shown in Fig. \[ARPES\](a) and (b) for two high symmetry directions in the surface Brillouin zone. Along the $\bar\Gamma$ -$\bar X$ direction two energy bands cross $E_F$ at x$_1\approx$ 0.1 $\overline{\Gamma X}$ ($A_1$) and x$_2\approx$ 0.4$\overline{\Gamma X}$ ($A_2$), respectively. In LaFePO similar bands are observed, but the crossings occur closer to the $\bar X$ point, at x$_1\approx$ 0.2 $\overline{\Gamma X}$ and x$_2\approx$ 0.7 $\overline{\Gamma X}$ [@Lu2008]. In the following we focus on the bands \[Fig. \[ARPES\](a), dashed\] that merge in LaFePO, as calculated in Ref. \[\] on the basis of LDA bulk band-structure calculations, using internally relaxed parameters and rescaling the calculated band energies by a factor of two. According to these calculations, the Fermi level crossings x$_1$ and x$_2$ are caused by $d_{xz, yz}$- and $d_{3z^2-r^2}$-derived states, respectively. The latter should hardly be visible at VP, and hence, at least for the present measurement, a different character of the $A_2$ band has to be concluded. In order to take account of the surface sensitivity of ARPES and the fact that the band positions of surface and subsurface atomic layers may differ in BE from the bulk ones [@Vyalikh2009], slab calculations were performed by means of the linear-muffin-tin-orbital (LMTO) method [@And75]. It follows from the structural and cohesive properties that the CeFePO crystal can be cleaved mainly between the FeP and CeO stacks, so that
--- abstract: 'This paper is concerned with the modeling errors that appear in numerical methods for inverse medium scattering problems (IMSP). Optimization based iterative methods are widely employed to solve IMSP; they are computationally intensive because a series of Helmholtz equations needs to be solved numerically. Hence, rough approximations of the Helmholtz equations can significantly speed up the iterative procedure. However, rough approximations also lead to instability and inaccurate estimates. Using Bayesian inverse methods, we incorporate the modelling errors brought by the rough approximations. The modelling errors are assumed to be complex Gaussian mixture (CGM) random variables and, in addition, well-posedness of IMSP in the statistical sense is established by extending the general theory to CGM noise. We then generalize the real-valued expectation-maximization (EM) algorithm used in the machine learning community to our complex-valued case, in order to learn the parameters of the CGM distribution. Based on these preparations, we generalize the recursive linearization method (RLM) to a new iterative method, named the Gaussian mixture recursive linearization method (GMRLM), which takes modelling errors into account. Finally, we provide two numerical examples to illustrate the effectiveness of the proposed method.' 
address: - 'School of Mathematics and Statistics, Xi’an Jiaotong University, Xi’an 710049, China' - 'School of Mathematics and Statistics, Xi’an Jiaotong University, Xi’an, 710049, China' - 'School of Mathematics and Statistics, Xi’an Jiaotong University, Xi’an 710049, China' - 'School of Electronic and Information Engineering, Xi’an Jiaotong University, Xi’an 710049, China' author: - Junxiong Jia - Bangyu Wu - Jigen Peng - Jinghuai Gao bibliography: - 'references.bib' title: Recursive linearization method for inverse medium scattering problems with complex mixture Gaussian error learning --- [^1] Introduction ============ Scattering theory has played a central role in the field of mathematical physics; it is concerned with the effect that an inhomogeneous medium has on an incident particle or wave [@ColtonThirdBook]. Usually, the total field is viewed as the sum of an incident field and a scattered field. Inverse scattering problems then focus on determining the nature of the inhomogeneity from a knowledge of the scattered field [@Bleistein2001Book; @ColtonSIAMReview2000], and they play important roles in diverse scientific areas such as radar and sonar, geophysical exploration, medical imaging and nano-optics. Deterministic computational methods for inverse scattering problems can be classified into two categories: nonlinear optimization based iterative methods [@Bao2015TopicReview; @Metivier2016IP; @Natterer1995IP] and imaging based direct methods [@Cakoni2006Book; @Cheney2001IP]. Direct methods are called qualitative methods; they need no forward solvers and visualize the scatterer by highlighting its boundary with designed imaging functions. Iterative methods are usually called quantitative methods, which aim at providing a function that represents the scatterer. Because a sequence of direct and adjoint scattering problems needs to be solved, quantitative methods are computationally intensive. 
This paper is concerned with nonlinear optimization based iterative methods, with a particular focus on the recursive linearization method (RLM) for inverse medium scattering problems [@Bao2015TopicReview]. Although the computational obstacle can be handled in some circumstances, the accuracy of the forward solver remains a critical issue, particularly for applications in seismic exploration [@Fichtner2011Book] and medical imaging [@Koponen2014IEEE]. Many efficient forward solvers based on finite difference methods, finite element methods and spectral methods have been proposed [@Teresa2006IP; @Wang1997JASA]. Here, we do not propose a new forward solver to reduce the computational load, but instead reformulate the nonlinear optimization model in a Bayesian inverse framework that can incorporate the statistical properties of the model errors induced by rough forward solvers. By doing so, we can simplify the optimization procedure. In order to give a clear sketch of our idea, let us provide a concise review of the Bayesian inverse methods relevant to our purpose. Let $X$ be a separable Banach space; the forward problem is usually modeled as follows $$\begin{aligned} \label{forwardForm} d = \mathcal{F}(m) + \epsilon,\end{aligned}$$ where $d \in \mathbb{C}^{N_{d}}$ ($N_{d} \in \mathbb{N}^{+}$) stands for the measured data, $m \in X$ represents the parameter of interest and $\epsilon$ denotes noise. For inverse scattering problems, $m$ is just the scatterer, and $\mathcal{F}$ represents a Helmholtz equation combined with some measurement operator. Nonlinear optimization based iterative methods formulate the inverse problem as follows $$\begin{aligned} \label{optimiFormu} \min_{m \in X} \Bigg\{ \frac{1}{2}\big\|d - \mathcal{F}(m)\big\|_{2}^{2} + \mathcal{R}(m) \Bigg\},\end{aligned}$$ where $\mathcal{R}(\cdot)$ stands for some regularization operator and $\|\cdot\|_{2}$ represents the $\ell^{2}$-norm. Various choices of the regularization term are possible; see [@acta_numerica]. 
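As a toy illustration of the minimization problem (\[optimiFormu\]), the sketch below takes a *linear* forward operator and the Tikhonov choice $\mathcal{R}(m)=\frac{\alpha}{2}\|m\|_2^2$, solved by plain gradient descent. All names and sizes are illustrative assumptions; the real $\mathcal{F}$ is of course a nonlinear Helmholtz solve.

```python
import numpy as np

def solve_tikhonov(F, d, alpha, steps=2000):
    """Gradient descent on (1/2)||d - F m||^2 + (alpha/2)||m||^2,
    a linear, Tikhonov-regularized stand-in for the nonlinear problem."""
    lr = 1.0 / (np.linalg.norm(F, ord=2) ** 2 + alpha)  # safe step size 1/L
    m = np.zeros(F.shape[1])
    for _ in range(steps):
        grad = F.T @ (F @ m - d) + alpha * m            # gradient of the objective
        m -= lr * grad
    return m

rng = np.random.default_rng(0)
F = rng.standard_normal((20, 5))                        # toy "forward operator"
m_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
d = F @ m_true + 0.01 * rng.standard_normal(20)         # data with small noise
m_hat = solve_tikhonov(F, d, alpha=1e-3)
```

With small noise and a small regularization weight, the recovered `m_hat` stays close to `m_true`; in the paper's setting each gradient evaluation would instead require a direct and an adjoint Helmholtz solve, which is what makes rough forward approximations attractive.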
Bayesian inverse methods aim to provide complete posterior information; however, they can also offer point estimates. Up to now, two point estimators are frequently used: the maximum a posteriori (MAP) estimate and the conditional mean (CM) estimate [@Tenorio2006Book]. For problems defined on a finite dimensional space, the MAP estimate is just the solution of the minimization problem (\[optimiFormu\]), as illustrated rigorously in [@book_comp_bayeisn]. In contrast to the finite dimensional case, rigorous results on the relationship between MAP estimates and the minimization problem (\[optimiFormu\]) when $X$ is an infinite dimensional space were obtained only recently [@Burger2014IP; @MAPSmall2013; @Dunlop2016IP]. Simply speaking, if the minimization problem (\[optimiFormu\]) is used to solve our inverse problem, then one has implicitly assumed that the noise $\epsilon$ is sampled from some Gaussian distribution $\mathcal{N}(\bar{\epsilon},\Sigma_{\epsilon})$ with mean $\bar{\epsilon}$ and covariance operator $\Sigma_{\epsilon}$. In real world applications, we would like to use a fast forward solver (of limited accuracy) and still obtain an estimate that is as accurate as possible. Hence, the noise is usually brought not only by inaccurate measurements but is also induced by a rough forward solver and inaccurate physical assumptions [@Calvetti2017]. 
Let us denote by $\mathcal{F}_{a}(\cdot)$ the forward operator related to some rough forward solver; then (\[forwardForm\]) can be rewritten as follows, following the method used in [@Koponen2014IEEE], $$\begin{aligned} \label{forwardForm2} d = \mathcal{F}_{a}(m) + (\mathcal{F}(m) - \mathcal{F}_{a}(m)) + \epsilon.\end{aligned}$$ Denoting $\xi := \mathcal{F}(m) - \mathcal{F}_{a}(m)$, we obtain $$\begin{aligned} \label{forwardForm3} d = \mathcal{F}_{a}(m) + \xi + \epsilon.\end{aligned}$$ From the perspective of Bayesian methods, we can model $\xi$ as a random variable, which has the following two important features: 1. $\xi$ depends on the unknown function $m$; 2. $\xi$ may be distributed according to a complicated probability measure. Regarding feature (1), we can relax this difficult problem by assuming that $\xi$ is independent of $m$ but that the probability distribution of $\xi$ and the prior probability measure of $m$ are related to each other [@Lasanen2012IPI]. Regarding feature (2), to the best of our knowledge, the existing literature only provides a compromise, namely assuming that $\xi$ is sampled from some Gaussian probability distribution [@Junxiong2016; @Koponen2014IEEE]. Here, we attempt to provide a more realistic assumption on the probability measure of the random variable $\xi$. Bayes’ formula is also one of the fundamental tools of statistical machine learning [@PR2006Book], a field that attracts numerous researchers from computer science, statistics and mathematics. For problems such as background subtraction [@Yong2017IEEE], low-rank matrix factorization [@Zhao2015IEEE] and principal component analysis [@MENG2012487; @Zhao2014ICML], learning algorithms deduced from Bayes’ formula are useful, and the errors brought by inaccurate forward modeling also appear. 
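To indicate how the modelling error $\xi$ could be fitted beyond a single Gaussian, here is a minimal real-valued, one-dimensional EM fit of a two-component Gaussian mixture to synthetic error samples. This is only a toy stand-in for the complex-valued EM algorithm that the paper develops, and every name and number in it is an illustrative assumption.

```python
import numpy as np

def em_gmm_1d(x, iters=100):
    """EM for a 2-component, real-valued 1-D Gaussian mixture --
    a toy stand-in for the complex-valued EM used for modelling errors."""
    mu = np.array([x.min(), x.max()])   # deterministic, well-separated init
    var = np.full(2, x.var())
    pi = np.full(2, 0.5)
    for _ in range(iters):
        # E-step: responsibilities r[n, k] of each component for each sample
        r = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances from weighted moments
        Nk = r.sum(axis=0)
        pi = Nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / Nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / Nk
    return pi, mu, var

rng = np.random.default_rng(1)
# synthetic "modelling error" samples drawn from a known two-mode mixture
noise = np.concatenate([rng.normal(-3.0, 0.5, 400), rng.normal(2.0, 1.0, 600)])
pi, mu, var = em_gmm_1d(noise)
```

On this well-separated toy data the fitted means recover the two modes near $-3$ and $2$, and the weights approach the $0.4/0.6$ mixing proportions of the sample.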
To model the errors appearing in machine learning tasks, the Gaussian mixture model is widely used, since it can approximate any probability measure in an appropriate sense [@PR2006Book]. Gaussian mixture distributions usually have a density function of the form $$\begin{aligned} \sum_{k = 1}^{K}\pi_{k} \mathcal{N}(\cdot \,| \,\zeta_{k},\Sigma_{k}),\end{aligned}$$ where $\mathcal{N}(\cdot \,| \,\zeta_{k},\Sigma_{k})$ stands for
--- abstract: 'The geometric median of a planar region is the point that minimizes the average distance from itself to all points of the region. Here we derive a gradient system for computing the geometric median of a triangular region and formulate an intuitive characteristic property of this median: the three average distances from the geometric median to the three sides of the boundary of the triangular region are equal to each other. These results are then extended to other regions and to other median-like points.' author: - The geometric median is a natural spatial generalization of the statistical median of a one-dimensional sample, which, as is well known, minimizes the total distance to all elements of the sample. It is precisely this minimizing property that underlies the definition of the geometric median $m$ of a finite set of points $P_1,\dots,P_n$ in the plane: $$\label{eq:nmedian} m = \mathop{\mathrm{arg\,min}}\limits_{X\in\mathbb R^2} \sum_{i=1}^n{|P_i-X|}.$$ Since the beginning of the last century, the geometric median and its immediate generalizations have served as a useful tool in economics [@Weber1909]. In parallel, the mathematical properties of the discrete median continue to be studied, and efficient numerical methods for finding it have been developed [@Wesolowsky1993]. Toward the end of the century, interest shifted to the continuous case, with a growing body of work on geometric medians of curves and regions [@Fekete2005; @Zhang2014]. In this exposition we concentrate on the continuous case. We begin by deriving a gradient system for finding the geometric median of a triangular region (Theorems \[TriangleSystem\] and \[TriangleSystem’\]). These results are then generalized to other kinds of regions, as well as to other median-like points. 
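For the discrete median (\[eq:nmedian\]), Weiszfeld's classical fixed-point iteration is the standard numerical method. The sketch below uses illustrative data (the vertices of the unit square, whose geometric median is the center by symmetry):

```python
import numpy as np

# Weiszfeld's classical fixed-point iteration for the geometric median of a
# finite point set; the data points and starting point below are illustrative.
def geometric_median(P, X0, tol=1e-12, max_iter=10000):
    X = np.asarray(X0, dtype=float)
    for _ in range(max_iter):
        d = np.linalg.norm(P - X, axis=1)
        if np.any(d < 1e-14):        # iterate landed on a data point
            return X
        w = 1.0/d
        X_new = (P*w[:, None]).sum(axis=0)/w.sum()
        if np.linalg.norm(X_new - X) < tol:
            return X_new
        X = X_new
    return X

# Vertices of the unit square: by symmetry the geometric median is the center.
P = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
m = geometric_median(P, X0=[0.1, 0.2])   # converges to (0.5, 0.5)
```

Each step replaces $X$ by the weighted average of the $P_i$ with weights $1/|P_i-X|$, which is exactly the stationarity condition of the sum in (\[eq:nmedian\]).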
Recall that, by analogy with the discrete case (\[eq:nmedian\]), the geometric median $m$ of a region $\Omega\subset \mathbb R^2$ is defined as $$m = \mathop{\mathrm{arg\,min}}\limits_{X\in\mathbb R^2}\int_{P\in\Omega}{|P - X|}\,dP,$$ where $|P - X|$ is the usual Euclidean distance between the points $P$ and $X$. After introducing the notation $$\label{eq:SigmaOmega} \Sigma_\Omega(X) = \int_{P\in\Omega}|P-X|\,dP$$ the same definition can be written more compactly: $$m = \mathop{\mathrm{arg\,min}}\limits_{X\in\mathbb R^2} \Sigma_\Omega(X).$$ We will need one more piece of notation: for points $P_1$ and $P_2$ in the plane, let $$\Sigma_{P_1P_2}(X) = \int_{P\in P_1P_2}|P-X|\,dP,$$ where the integration is over the segment $P_1P_2$. Note that the average distance from a point $X$ to the points of the segment $P_1P_2$ is $\Sigma_{P_1P_2}(X)/|P_2-P_1|$. We can now state the first result. \[TriangleSystem\] A point $m$ is the geometric median of a triangular region $\Delta$ with vertices $P_1,P_2,P_3$ if and only if $$\label{eq:TriangleSystem} \frac{\Sigma_{P_1P_2}(m)}{|P_2 - P_1|}\,\overrightarrow{P_1P_2} + \frac{\Sigma_{P_2P_3}(m)}{|P_3 - P_2|}\,\overrightarrow{P_2P_3} + \frac{\Sigma_{P_3P_1}(m)}{|P_1 - P_3|}\,\overrightarrow{P_3P_1} = 0.$$ [*Proof*]{}. By definition, the median $m = m(\Delta)$ is a critical point of the function $\Sigma_\Delta$. Take a small vector $\vec \delta$, $|\vec \delta| = \delta$, and compute the increment of $\Sigma_\Delta$ when its argument is shifted by this vector: $$\delta\Sigma_\Delta(X)= \Sigma_\Delta(X + \vec \delta) - \Sigma_\Delta(X).$$ At the critical point $m$ this increment must be of order $o(\delta)$. Set $P'_i = P_i - \vec \delta$ and denote by $\Delta'$ the shifted triangle with the primed vertices (Fig.
--- abstract: | In many problems of classical analysis extremal configurations appear to exhibit complicated fractal structure. This makes it much harder to describe extremals and to attack such problems. Many such problems involve the multifractal spectrum of [*harmonic measure*]{}. We argue that, searching for extremals in such problems, one should work with random fractals rather than deterministic ones. We introduce a new class of fractals, [*random conformal snowflakes*]{}, and investigate its properties, developing tools to estimate spectra and showing that extremals can be found in this class. As an application we significantly improve the known estimates from below on the extremal behaviour of harmonic measure, showing how to construct a rather simple snowflake which has a spectrum quite close to the conjectured extremal value. author: - 'D. Beliaev' - 'S. Smirnov' In many problems of classical analysis, extremal configurations appear to be fractal in structure. This makes such problems more difficult to approach than similar ones where the extremal objects are smooth. As an example one can consider the coefficient problem for univalent functions. Bieberbach formulated his famous conjecture arguing that the Koebe function, which maps the unit disc to the plane with a straight slit, is extremal. The Bieberbach conjecture was ultimately proved by de Branges in 1985 [@deBranges], while the sharp growth asymptotics was obtained by Littlewood [@Littlewood25] in 1925 by a much easier argument. However, the coefficient growth problem for bounded functions remains wide open, largely due to the fact that the extremals must be of fractal nature (cf. [@CaJo]). This relates (see [@BeSmECM]) to the more general question of finding the [*universal multifractal spectrum*]{} of [*harmonic measure*]{} defined below, which subsumes many other problems, in particular conjectures of Brennan, Carleson and Jones, Kraetzer, Szegö, and Littlewood. In this paper we report on our search for extremal fractals among random ones, focusing on the multifractal analysis of harmonic measure. 
Multifractal analysis of harmonic measure ----------------------------------------- It became clear recently that the appropriate language for many problems in geometric function theory is given by the [*multifractal analysis*]{} of [*harmonic measure*]{}. The concept of the multifractal spectrum of a measure was introduced by Mandelbrot in the early 1970s, in two papers [@Mandelbrot72; @Mandelbrot74] devoted to the distribution of energy in a turbulent flow. We use the definitions that appeared in 1986 in a seminal physics paper [@HJKPS] by Halsey, Jensen, Kadanoff, Procaccia, and Shraiman, who tried to understand and describe scaling laws of physical measures on different fractals of physical nature (strange attractors, stochastic fractals like DLA, etc.). There are various notions of spectra and several ways to make a rigorous definition. Two standard spectra are the [*packing*]{} and [*dimension*]{} spectra. The packing spectrum of harmonic measure $\omega$ in a domain $\Omega$ with a compact boundary is defined as $$\pi_{\Omega}(t)= \sup\,\Big\{q:\ \forall\delta>0~\exists~\delta\text{-packing}~\{B\} ~\mathrm{with}~\sum \mathrm{diam}(B)^t\omega(B)^q\,\ge\,1\Big\}\ ,$$ where a $\delta$-packing is a collection of disjoint open sets whose diameters do not exceed $\delta$. The [*dimension spectrum*]{} is defined in terms of harmonic measure $\omega$ on the boundary of $\Omega$ (in the case of a simply connected domain $\Omega$, harmonic measure is the image under the Riemann map $\phi$ of the normalised length on the unit circle). The dimension spectrum gives the dimension of the set of points where harmonic measure satisfies a certain power law: $$f(\alpha)~:=~\mathrm{dim}\, \Big\{z:~\omega\br{B(z,\delta)}\,\approx\,\delta^\alpha\,,~\delta\to0\Big\},~\alpha\ge\frac12~.$$ Here $\mathrm{dim}$ stands for the Hausdorff or Minkowski dimension, leading to possibly different spectra. The restriction $\alpha \ge 1/2$ is due to Beurling’s inequality. 
Of course in general there will be many points where the measure behaves differently at different scales, so one has to add $\limsup$’s and $\liminf$’s to the definition above; consult [@Makarov] for details. In our context it is more convenient to work with a modification of the packing spectrum which is specific to harmonic measure on a two-dimensional simply connected domain $\Omega$. In this case we can define the [*integral means spectrum*]{} as $$\beta_\phi(t)~:=~\limsup_{r\to1+}\frac{\log \int_{0}^{2\pi}|\phi'(re^{i\theta})|^t d\theta}{|\log(r-1)|},~t\in{{\mathbb R}}~,$$ where $\phi$ is a Riemann map from the complement of the unit disc onto a simply connected domain $\Omega$. Connections between all these spectra for particular domains are not that simple, but the [*universal spectra*]{} $$\Pi(t)=\sup_\Omega \pi(t), \quad F(\alpha)=\sup_\Omega f(\alpha), \ \ \mathrm{and} \ \ B(t)=\sup_\Omega\beta(t)$$ are related by Legendre-type transforms: $$\begin{aligned} F(\alpha)&=&\inf_{0\le t \le 2} (\alpha \Pi(t)+t), \quad \alpha \ge 1~, \\ \Pi(t)&=&\sup_{\alpha\ge 1} \left(\frac{F(\alpha)-t}{\alpha}\right), \quad 0\le t\le 2~, \\ \Pi(t)&=&B(t)-t+1~.\end{aligned}$$ See Makarov’s survey [@Makarov] for details. Random fractals --------------- One of the main problems in the computation of the integral means spectrum (or other multifractal spectra) is the fact that the derivative of a Riemann map for a fractal domain depends on the argument in a very irregular way: $\phi'$ is a “fractal” object in itself. We propose to study random fractals to overcome this problem. 
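The Legendre-type relations between the universal spectra quoted above can be sanity-checked numerically. Here, purely as an illustration, we feed in the conjectured Kraetzer form $B(t)=t^2/4$ for $0\le t\le 2$ (a model input, not a computed spectrum) and recover the corresponding dimension spectrum:

```python
import numpy as np

# Sanity check of the Legendre-type relations, taking as model input the
# conjectured Kraetzer form B(t) = t^2/4 on 0 <= t <= 2 (an assumption used
# purely for illustration, not a computed spectrum).
t = np.linspace(0.0, 2.0, 20001)
B = t**2/4.0
Pi = B - t + 1.0                      # Pi(t) = B(t) - t + 1

def F(alpha):
    # F(alpha) = inf_{0 <= t <= 2} (alpha*Pi(t) + t), evaluated on the grid
    return float(np.min(alpha*Pi + t))

# For this B the infimum is attained at t* = 2 - 2/alpha, so F(alpha) = 2 - 1/alpha.
vals = [(alpha, F(alpha)) for alpha in (1.0, 1.5, 2.0, 4.0)]
```

For this input a short calculation gives $F(\alpha)=2-1/\alpha$, which the grid minimization reproduces to high accuracy.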
For a random function $\phi$ it is natural to consider the [*average integral means spectrum:*]{} $$\begin{aligned} \bar\beta(t)&=&\sup\brs{\beta: \int_1(r-1)^{\beta-1}\int_0^{2\pi} {{\mathbb E}}\brb{|f'(r e^{i\theta})|^t}d \theta d r=\infty} \\ &=&\inf\brs{\beta: \int_1(r-1)^{\beta-1}\int_0^{2\pi} {{\mathbb E}}\brb{|f'(r e^{i\theta})|^t}d \theta d r<\infty}.\end{aligned}$$ The average spectrum need not be related to the spectrum of a particular realization. We want to point out that even if $\phi$ has the same spectrum a.s., this does not guarantee that $\bar\beta(t)$ is equal to the a.s. value of $\beta(t)$; indeed, this is in general not the case. But one can see that $\bar\beta(t)$ is bounded by the universal spectrum $B(t)$. Indeed, suppose that there is a random $f$ with $\bar\beta(t)>B(t)+\epsilon$; then for any $r$ there are particular realizations of $f$ with $\int |f'(z)|^t d\theta>(r-1)^{-B(t)-\epsilon/2}$. Then by Makarov’s fractal approximation [@Makarov] there is a (deterministic) function $F$ with $\beta_F(t)>B(t)$, which is impossible by the definition of $B(t)$. For many classes of random fractals ${{\mathbb E}}|\phi'|^t$ (or its growth rate) does not depend on the argument. This allows us to drop the integration with respect to the argument and study the growth rate along any particular radius. Perhaps more importantly, ${{\mathbb E}}|\phi'|^t$ is no longer a “fractal” function. One might think that this is not a big advantage compared to the usual integral means spectrum: instead of averaging over different arguments we average over different realizations of a fractal. But most fractals are the result of some kind of iterative construction, which means that they are invariant under some (random) transformation. Thus ${{\mathbb E}}|\phi'|^t$ is a solution of some kind of equation. Solving this equation (or estimating its solutions) we can find $\bar\beta(t)$. In this paper we want to show how one can employ these ideas. 
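To illustrate how such growth rates are estimated in practice, here is a small numerical sketch for a deterministic example, the interior Koebe map $k(z)=z/(1-z)^2$, whose integral means spectrum is known to be $\beta_k(t)=3t-1$ for $t>1/3$ (grid sizes and radii below are arbitrary illustrative choices):

```python
import numpy as np

# Numerical estimate of the integral means spectrum of the (interior) Koebe
# map k(z) = z/(1-z)^2, for which beta(t) = 3t - 1 when t > 1/3.
def integral_means(t, r, n=400_000):
    theta = np.linspace(0.0, 2.0*np.pi, n, endpoint=False)
    z = r*np.exp(1j*theta)
    kprime = (1.0 + z)/(1.0 - z)**3
    return np.mean(np.abs(kprime)**t)*2.0*np.pi   # periodic trapezoid rule

t = 1.0
ks = np.arange(6, 12)
logI = [np.log(integral_means(t, 1.0 - 2.0**-k)) for k in ks]
# successive slopes of log I(r) versus log 1/(1-r); they approach beta(1) = 2
slopes = np.diff(logI)/np.log(2.0)
```

The slope fit converges quickly here because $|k'|$ has a single power-law singularity; for a genuinely fractal $\phi'$ no such clean fit exists, which is precisely the difficulty the averaging over realizations is meant to remove.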
In Section \[sec:def\] we introduce a new class of random fractals that
--- abstract: 'Using heavy quark effective theory a factorized form for the inclusive production rate of a heavy meson can be obtained, in which the nonperturbative effect related to the heavy meson is characterized by matrix elements defined in the heavy quark effective theory. Using this factorization, predictions for the full spin density matrix of a spin-1 and a spin-2 meson can be obtained; they are characterized by only one coefficient representing the nonperturbative effect. Predictions for spin-1 heavy mesons are compared with experiments performed at $e^+e^-$ colliders in the energy range from $\sqrt{s}=10.5$GeV to $\sqrt{s}=91$GeV, and complete agreement is found for the $D^*$- and $B^*$-meson. The spin-2 case is also discussed.' address: | Institute of Theoretical Physics,\ Academia Sinica,\ P.O.Box 2735, Beijing 100080, China\ e-mail: majp@itp.ac.cn author: - 'J.P. Ma' The production of heavy hadrons can be studied with heavy quark effective theory (HQET) [@Review], which allows such a study starting directly from QCD. HQET is widely used in studies of decays of heavy hadrons. In comparison, only a few works use it to study the production of heavy hadrons. In 1994 Falk and Peskin used HQET to predict the spin alignment of a heavy hadron in its inclusive production [@FP]. In this talk we will reexamine the subject and restrict ourselves to the case of a spin-1 meson. A spin-1 heavy meson $H^*$ is a bound state of a heavy quark $Q$ and a system of light degrees of freedom in QCD, like gluons and light quarks. In the work [@FP] the total angular momentum $j$ of the light system is taken as $1/2$. In the heavy quark limit, the orbital angular momentum of $Q$ can be neglected and only the spin of $Q$ contributes to the total spin of $H^*$. Once the heavy quark $Q$ is produced, it combines with the light system to form $H^*$. Because parity is conserved in QCD, the probabilities for the light system to have positive and negative helicity are the same. 
Therefore, one can predict the probabilities for the production of $H^*$ with a left-handed heavy quark $Q$ as: $$P(\bar B^*(\lambda=-1)):P(\bar B^*(\lambda=0)): P(\bar B^*(\lambda=1)) =\frac{1}{2} : \frac{1}{4} :0,$$ where $\lambda$ is the helicity of $H^*$. These results are easily derived; however, three questions or comments can be raised: (a). In general the spin information is contained in a spin density matrix, and the probabilities are the diagonal part of this matrix. The question is what happens with the non-diagonal part. It should be noted that this part is also measured in experiment. (b). It is possible that the light system has total angular momentum $j=3/2$ to form $H^*$; such a component is suppressed in the heavy quark limit. Is it possible to derive the full spin density matrix without the assumption of $j=1/2$? (c). How can we systematically add corrections to the approximation which leads to the results in Eq.(1)? To respond to these questions let us look at an inclusive production of $H^*$ in detail. In its inclusive production a heavy meson is formed from a heavy quark $Q$ and other light degrees of freedom; the light degrees of freedom can be a system of light quarks and gluons. Because of its large mass $m_Q$ the heavy quark is produced by interactions at short distance. Therefore the production can be studied with perturbative QCD. The heavy quark, once produced, combines with light degrees of freedom to form a hadron. The formation is a long-distance process, in which momentum transfers are small, hence the formed hadron carries most of the momentum of the heavy quark. The above discussion implies that the production rate can be factorized, in which the perturbative part is for the production of a heavy quark, while the nonperturbative part is for the formation. For the nonperturbative part an expansion in the inverse of $m_Q$ can be systematically performed in the framework of HQET. This type of factorization was first used for parton fragmentation into a heavy hadron [@Ma]. 
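The ratios in Eq. (1) are pure Clebsch-Gordan algebra: a left-handed heavy quark ($s_z=-1/2$) is combined with a $j=1/2$ light system whose two helicities are equally probable. A quick symbolic check (sympy is used here only to evaluate the CG coefficients):

```python
# Check of the helicity ratios in Eq. (1): heavy quark fixed at s_z = -1/2,
# light system with j = 1/2 and equally probable helicities +-1/2.
from sympy import S, Rational
from sympy.physics.quantum.cg import CG

half = S(1)/2

def prob(lam):
    # P(lambda) = sum over light helicities ml of (1/2)*|<1/2,-1/2; 1/2,ml | 1,lambda>|^2
    total = S(0)
    for ml in (-half, half):
        amp = CG(half, -half, half, ml, S(1), S(lam)).doit()
        total += Rational(1, 2)*amp**2
    return total

ratios = [prob(lam) for lam in (-1, 0, 1)]   # [1/2, 1/4, 0]
```

The $\lambda=+1$ probability vanishes because no light helicity can combine with $s_z=-1/2$ to give total $m=+1$.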
In this talk we will not discuss the factorization in detail; the details can be found in the work [@Ma1]. We directly give our results and make a comparison with experiment. It should be noted that the factorization can be performed for any inclusive production of $H^*$. Because most experiments that measure spin alignment are performed at $e^+e^-$ colliders, we present the results for the inclusive production at $e^+e^-$ colliders. We consider the process $$e^+({\bf p})+e^{-}(-{\bf p})\to H^*({\bf k}) +X,$$ where the three momenta are given in the brackets. In the process we assume that the initial beams are unpolarized. We denote the helicity of $H^*$ as $\lambda$, with $\lambda=-1,0,1$. All information about the polarization of $H^*$ is contained in a spin density matrix, which may be unnormalized or normalized; we call these the unnormalized and normalized spin density matrices, respectively. The unnormalized spin density matrix can be defined as $$R(\lambda, \lambda',{\bf p},{\bf k}) = \sum_X \langle H^*(\lambda)X\vert {\cal T}\vert e^+e^-\rangle \cdot \langle H^*(\lambda')X\vert {\cal T}\vert e^+e^-\rangle^*,$$ where the conservation of the total energy-momentum and the spin average of the initial state are implied. ${\cal T}$ is the transition operator. The cross-section with a given helicity $\lambda$ is given by: $$\sigma(\lambda) = \frac{1}{2s} \int \frac {d^3k}{(2\pi)^3} R(\lambda, \lambda,{\bf p},{\bf k}).$$ From Eq. (3) the normalized spin density matrix is defined by $$\rho_{\lambda\lambda'}({\bf p},{\bf k}) = \frac {R(\lambda, \lambda',{\bf p},{\bf k})} {\sum_\lambda R(\lambda, \lambda,{\bf p},{\bf k})}.$$ It should be noted that the normalized spin density matrix is what is measured in experiment. It is straightforward to perform the mentioned factorization for the unnormalized spin density matrix in the rest frame of $H^*$, which is related to the moving frame only by a Lorentz boost. 
In the rest frame we can define a creation operator for $H^*$: $$\vert H^*(\lambda)\rangle =a^\dagger(\lambda) \vert 0\rangle= \bfeps(\lambda)\cdot{\bf a}^\dagger \vert 0 \rangle .$$ where $\bfeps(\lambda)$ is the polarization vector. In the rest frame the field $h_v$ of the heavy quark $Q$ in HQET has two non-zero components. We denote them as: $$h_v(x)=\left(\begin{array}{c} \psi(x) \\ 0 \end{array}\right).$$ With these notations we define two operators: $$O(H^*) = \frac{1}{6}{\rm Tr} \psi a_i^\dagger a_i \psi^\dagger, \ \ O_s(H^*) = \frac{i}{12} {\rm Tr} \sigma_i \psi a^\dagger_j a_k \psi^\dagger \varepsilon_{ijk},$$ where $\varepsilon_{ijk}$ is the totally antisymmetric tensor and $\sigma_i(i=1,2,3)$ is the Pauli matrix. The results for the unnormalized spin density matrix read: $$\begin{aligned} R(\lambda, \lambda',{\bf p},{\bf k}) &=&\frac{1}{3} a({\bf p}, {\bf k}) \langle 0 \vert O(H^*) \vert 0 \rangle \bfeps^*(\lambda) \cdot\bfeps(\lambda') \nonumber\\ &+& \frac{i}{3} {\bf b}({\bf p},{\bf k})\cdot [\bfeps^*(\lambda)\times\bfeps(\lambda')] \cdot \langle 0 \vert O_s(H^*) \vert 0 \rangle +{\cal O}(m_Q^{-2}).\end{aligned}$$ The quantities $a({\bf p}, {\bf k})$ and ${\bf b}({\bf p},{\bf k})$ characterize the spin density matrix of the heavy quark $Q$ produced in the inclusive process: $$e^+({\bf p})+e^{-}(-{\bf p})\to Q({\bf k},{\bf s}) +X$$ where ${\bf s}$ is the spin vector of $Q$ in its rest frame and the rest frame is related to the moving frame only by a Lorentz boost. The unnormalized spin density matrix $R_Q({\bf s},{\bf p},{\bf k})$ of $Q$ can be defined by replacing $H^*(\lambda)$ with $
--- author: - | [**Anna Kovalenko$^1$, Razin Farhan Hussain$^1$, Omid Semiari$^2$, and Mohsen Amini Salehi$^1$**]{}\ $^1$[{aok8889, razinfarhan.hussain1, amini}@louisiana.edu]{} $^2$[osemiari@georgiasouthern.edu]{}\ $^1$High Performance Cloud Computing ([HPCC](http://hpcclab.org/)) Laboratory,\ School of Computing and Informatics, University of Louisiana at Lafayette, Lafayette, LA, USA\ $^2$Department of Electrical and Computer Engineering, Georgia Southern University, Statesboro, GA, USA\ bibliography: - 'references.bib' title: 'Robust Resource Allocation Using Edge Computing for Vehicle to Infrastructure (V2I) Networks' --- Acknowledgments {#acknowledgments .unnumbered} =============== Portions of this research were conducted with high performance computational resources provided by the Louisiana Optical Network Infrastructure [@LONI] and were supported by the Louisiana Board of Regents under grant number LEQSF(2016-19)-RD-A-25.
--- author: - 'C.R. Cowley[^1]' - 'S. Hubrig' The spectrum of HD 190073 has been described in detail by Catala et al. (2007, henceforth CAT). Pogodin, Franco & Lopes (2005, henceforth P05) give a detailed description of the spectrum along with a historical resume of investigations from the 1930s. We note that the nature of HD 190073 as a young, Herbig Ae star became widely recognized some three decades after Herbig's (1960) seminal paper. HD 190073 was included among the 24 young stars studied for abundances by Acke & Waelkens (2004, henceforth, AW). In this important paper, the authors made the bold assumption that abundances could be determined for stars of this nature using standard techniques: plane-parallel, one-dimensional models in hydrostatic equilibrium. The models were used to obtain abundances from absorption lines with equivalent widths less than 150 mÅ. These assumptions are open to question. Material is being accreted by these young stars, and the infall velocities are thought to be near free-fall, several hundred km s$^{-1}$. Might not such flows complicate a classical photospheric analysis? AW nevertheless proceeded. Although they did not state this explicitly, the justification for their assumptions is empirical, and may be found in their results. Basically, their approach yields entirely reasonable stellar parameters and abundances, including agreement from lines of neutral and ionized elements. Stated simply, their assumptions led them to self-consistent results. We proceed in the same spirit. While AW's studies were both competent and thorough, better observational material is currently available, making it possible to use systematically weaker lines, and to study more elements. We have also made use of the wings of the Balmer lines, not used by AW. 
The lower Balmer lines have central emissions. In the case of H$\alpha$, the emission dominates the feature. The Balmer lines, and especially H$\alpha$, have been extensively studied (e.g. P05). In the present paper we also study the weaker metallic emission lines, to provide information on the physical conditions where this emission occurs. This was discussed by CAT, who were primarily concerned with the magnetic field of HD 190073 (cf. Sect. \[sec:emiss\] and following). Hubrig et al. (2006, 2009) reported longitudinal magnetic fields of 84$\pm$30 Gauss up to 104$\pm$19 Gauss, while CAT found a longitudinal field of 74$\pm$10 Gauss. Observations {#sec:obs} ============ We downloaded 8 HARPS spectra from the ESO archive, all obtained on 11 November 2008 within 74 minutes of one another. These were averaged, binned to 0.02Å and mildly Fourier filtered. The resulting spectrum has a signal-to-noise ratio of 350 or more. The resolution of HARPS spectra, usually cited as over 100000, is not significantly degraded for our purposes by the averaging, as Fig. \[fig:line4481\] illustrates. ![The HARPS spectrum (ADP.HARPS.2008-11-10T23:43:14.386\_2\_SID\_A) and averaged spectra in the region of the Mg doublet $\lambda$4481Å. The HARPS (gray and red in online version) spectrum has been displaced slightly downward for display purposes.[]{data-label="fig:line4481"}](4481.ps){width="55mm"} Reduction {#sec:reduction} ========= ![The region from $\lambda\lambda$4000 to 4050 Å shows numerous measurable absorption features. Note the broad absorption at $\lambda$4026 Å, which is He . Fe   $\lambda$4045 Å shows strong emission as well as absorption. 
[]{data-label="fig:line4050"}](4050.ps){width="54mm" height="83mm"} The averaged HARPS spectrum was measured for 1796 lines. The UVES spectrum was also measured for line identifications in the region $\lambda\lambda$3054–3867 Å. We measured 760 absorption lines, which were often severely affected by emission. Many absorption lines, especially weak ones, were not significantly affected by emission, and were suitable for abundance determinations. Figure \[fig:line4050\] shows a typical region with many relatively unperturbed absorptions. Preliminary, automated identifications were made, and wavelength coincidence statistics (WCS, Cowley & Hensberge 1981) were performed. A few spectra not investigated by AW were analyzed: He , Na , Al , Si , S , S , Co , Mn , Mn , Ni , Zn , and Sr . We found no exotic elements, such as lanthanides, or unusual 4d or 5d elements. ![A comparison of equivalent width measurements by AW and the present study (UMICH).[]{data-label="fig:pltdif"}](pltdif.ps){width="54mm" height="83mm"} Lines were chosen for equivalent width measurement with the help of the automated identification list, which lists plausible identifications within 0.03 Å of any measured feature. Blends were rejected. Usually, we avoided lines with equivalent widths greater than 20 mÅ, but in order to compare our measurements with those of AW, we included a few stronger lines. A comparison of measurements is given in Fig. \[fig:pltdif\]. Generally, the measurements agree well with one another, and differences can usually be explained by judgments of where to draw the continuum when a line is partially in emission, or there is emission close by. The difference in the case of one of the solid circles is surely due to emission, as Ti  $\lambda$4398 Å falls between two strong emission lines. The other solid point is for O $\lambda$3947 Å. This is a misidentification. Note that Fig. \[fig:pltdif\] is not logarithmic. 
The model atmosphere and abundance methods {#sec:model} ========================================== The methods used to obtain abundances from the equivalent widths, including the model atmosphere construction, are explained in some detail in two previous papers (Cowley et al. 2000). Briefly, the $T(\tau_{5000})$ from Atlas 9 (Kurucz 1993) as implemented by Sbordone et al. (2004) was used with Michigan software to produce depth-dependent models. The effective temperature and gravity were selected from ionization and excitation equilibrium as well as fits to the wings of H$\beta$–H$\delta$. We have adopted a somewhat lower temperature than used by AW, 8750 K, and $\log g = 3.0$. The former used 9250 K, and $\log g=3.5$. We also used a lower microturbulence, 2 km s$^{-1}$, compared to AW's 3 km s$^{-1}$, but this is not important for most of our weaker lines. Oscillator strengths were taken from the modern literature when possible, or from compilations by NIST (Ralchenko 2010, preferred) or VALD (Kupka et al. 1999). Default damping constants were used as in the studies cited, but they are unimportant for weak lines. Abundances ==========

  Ion            $\log({\rm El}/\Sigma)$   sd     $N$   Sun     AW
  -------------- ------------------------- ------ ----- ------- -------
  He             –1.15                     0.38   2     –1.11
  C              –3.40                     0.23   36    –3.61   –3.55
  [**N** ]{}\*   –3.50                     0.38   9     –4.21   –3.40
  O              –3.29                     0.10   12    –3.35   –3.38
  [**Na** ]{}    –5.25                     0.24   5     –5.80
  Mg             –4.29                     0.23   3     –4.44   –4.52
  Mg             –4.54                     0.16   8     –4.44
  [**Al** ]{}\*  –6.07                            1     –5.59   –6.01
  Si             –4.43                     0.36   7     –4.53   –4.41
  Si             –4.61                     0.13   10    –4.53
  S              –4.62                     0.06   3     –4.92
  S              –4.40                     0.45   6
  Ca             –5.78                     0.11   2
--- abstract: 'The order-by-order renormalization of the self-consistent mean-field potential in many-body perturbation theory for normal Fermi systems is investigated in detail. Building on previous work mainly by Balian and de Dominicis, as a key result we derive a thermodynamic perturbation series that manifests the consistency of the adiabatic zero-temperature formalism with perturbative statistical mechanics—for both isotropic and anisotropic systems—and satisfies at each order and for all temperatures the thermodynamic relations associated with Fermi-liquid theory. These properties are proved to all orders.' author: - In general, for Fermi systems the correct ground state is not a normal state but involves Cooper pairs [@PhysRev.150.202; @PhysRevLett.15.524; @doi:10.1142/S0217979292001249; @RevModPhys.66.129; @Salmhofer:1999uq]. However, pairing effects can often be neglected in approximate calculations of thermodynamic properties close to zero temperature. For such calculations there are two formalisms: first, there is grand-canonical perturbation theory, and second, the zero-temperature formalism based on the adiabatic continuation of the ground state [@PhysRev.84.350; @Goldstone:1957zz; @Nozbook; @Runge; @negele; @Fetter; @Abrikosov]. In their time-dependent (i.e., frequency-space) formulations, these two formalisms give matching results if all quantities are derived from the exact Green’s functions, i.e., from the self-consistently renormalized propagators [@Luttinger:1960ua; @Fetter; @Abrikosov; @Feldman1999]. The renormalization of MBPT in frequency space can be generalized to vertex functions [@dommar2; @dommar1; @Hausmann; @Rossi:2015lda; @PhysRevB.93.161102; @DICKHOFF2004377; @VanHoucke:2011ux], and is essential to obtain a fully consistent framework for calculating transport properties [@Baym:1961zz; @Baym:1962sx; @stefanuc]. 
Nevertheless, the use of bare propagators has the benefit that the time integrals can then be performed analytically. With bare propagators, MBPT in its most basic form corresponds to a perturbative expansion in terms of the interaction Hamiltonian $V$ about the noninteracting system with Hamiltonian $H_0$, where $H=H_0+V$ is the full Hamiltonian. First-order self-energy effects can be included to all orders in bare MBPT by expanding instead about a reference Hamiltonian $H_\text{ref}=H_0+U_1$, where $U_1$ includes the first-order contribution to the (frequency-space) self-energy $\Sigma_{1,{{\textbf}{k}}}$ as a self-consistent single-particle potential (mean field). The renormalization of $H_\text{ref}$ in terms of $U_1$ has the effect that all two-particle reducible diagrams with first-order pieces (single-vertex loops) are canceled. At second order the self-energy becomes frequency dependent and complex, so the equivalence between the propagator renormalization in frequency space and the renormalization of the mean-field part of $H_\text{ref}$ in bare MBPT is restricted to the Hartree-Fock level. Zero-temperature MBPT calculations with bare propagators and a Hartree-Fock reference Hamiltonian ${H_\text{ref}=H_0+U_1}$ are common in quantum chemistry and nuclear physics. With a Hartree-Fock reference Hamiltonian (or, with ${H_\text{ref}=H_0}$), however, the adiabatic zero-temperature formalism is inconsistent with the zero-temperature limit (${T\rightarrow 0}$) of grand-canonical MBPT. 
The (main) fault however lies not with zero-temperature MBPT, but with the grand-canonical perturbation series: in the bare grand-canonical formalism (with $H_\text{ref}\in\{H_0,H_0+U_1\}$) there is a mismatch in the Fermi-Dirac distribution functions caused by using the *reference* spectrum $\varepsilon_{{\textbf}{k}}$ together with the *true* chemical potential $\mu$, and in general this leads to deficient results [@Fritsch:2002hp; @PhysRevC.89.064009; @Wellenhofer:2017qla]. The adiabatic formalism on the other hand uses the reference chemical potential, i.e., the reference Fermi energy ${\varepsilon}_{\text{F}}$. Related to this mismatch are the anomalous contributions that appear in the grand-canonical formalism. This issue is usually dealt with by modifying the grand-canonical perturbation series for the free energy in terms of an expansion about the chemical potential $\mu_\text{ref}\xrightarrow{T\rightarrow 0}{\varepsilon}_{\text{F}}$ of the reference system [@Kohn:1960zz; @brout2] (see also Sec. \[sec42\]). This expansion introduces additional anomalous contributions, and for isotropic systems these can be seen to cancel the old ones for ${T\rightarrow 0}$ [@Luttinger:1960ua]. Thus, the modified perturbation series for the free energy $F(T,\mu_\text{ref})$ reproduces the adiabatic series in the isotropic case. For anisotropic systems, however, the anomalous contributions persist at ${T=0}$ (for ${H_\text{ref}=H_0+U_1}$, at fourth order and beyond). Negele and Orland [@negele] interpret this feature as follows: there is nothing fundamentally wrong with the bare zero-temperature formalism, but for anisotropic systems the adiabatic continuation must be based on a better reference Hamiltonian $H_\text{ref}$. Since the convergence rate[^1] of MBPT depends on the choice of $H_\text{ref}$, this issue is relevant also for finite-temperature calculations, and for isotropic systems. 
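The mismatch can be made concrete with a toy single-particle spectrum (all numbers below are illustrative, not from any calculation): evaluating the Fermi-Dirac factors with the reference spectrum but the true chemical potential $\mu\neq{\varepsilon}_{\text{F}}$ fills the wrong states as ${T\rightarrow 0}$.

```python
import numpy as np

def fermi_dirac(eps, mu, T):
    # n(eps) = 1/(exp((eps - mu)/T) + 1)
    return 1.0/(np.exp((eps - mu)/T) + 1.0)

eps = np.linspace(0.0, 2.0, 201)   # toy single-particle spectrum (illustrative)
eps_F = 1.0                        # reference Fermi energy
mu = 1.2                           # "true" chemical potential of the interacting system

T = 0.01
n_adiabatic = fermi_dirac(eps, eps_F, T)   # distributions the adiabatic formalism uses
n_mismatched = fermi_dirac(eps, mu, T)     # reference spectrum with the true mu
# As T -> 0 both become step functions, but they disagree on the states with
# eps_F < eps < mu, which is the source of the deficiency discussed above.
```

For a state at, say, $\varepsilon_{{\textbf}{k}}=1.1$ the two prescriptions give occupations near 0 and near 1 respectively at low $T$, so the two series sum over different sets of occupied states.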
Recently, Holt and Kaiser [@PhysRevC.95.034326] have shown that including the real part of the bare second-order contribution to the (on-shell) self-energy, $\text{Re}\,[\Sigma_{2,{{\textbf}{k}}}(\varepsilon_{{\textbf}{k}})]$, as the second-order contribution to the self-consistent mean field has a significant effect in perturbative nuclear matter calculations with modern two- and three-nucleon potentials (see, e.g., Refs. [@RevModPhys.81.1773; @MACHLEIDT20111; @Bogner:2009bt]). However, a formal justification for the renormalization of $H_\text{ref}$ in terms of $\text{Re}\,[\Sigma_{2,{{\textbf}{k}}}(\varepsilon_{{\textbf}{k}})]$ was not included in Ref. [@PhysRevC.95.034326], and from that reference alone it is not clear whether the use of this second-order mean field should be considered an improvement, compared to calculations with a Hartree-Fock mean field. [^2] A general scheme where the reference Hamiltonian is renormalized at each order in grand-canonical MBPT was introduced by Balian, Bloch, and de Dominicis [@Balian1961529] (see also Refs. [@Balian1961529b; @Balianph; @BlochImperFerm; @boer; @1964mbpdedom]). This scheme however leads to a mean field whose functional form is given by $U[n_{\textbf}{k},T]$, where $n_{\textbf}{k}(T,\mu)$ is the Fermi-Dirac distribution and the explicit temperature dependence involves factors $\operatorname{\operatorname{e}}^{\pm({\varepsilon}_{\textbf}{k}-\mu)/T}$. Because of these factors, the resulting perturbation series is well-behaved only at sufficiently large temperatures, and its ${T\rightarrow 0}$ limit does not exist. [^3] A different renormalization scheme was outlined by Balian and de Dominicis (BdD) in Refs. [@statquasi3; @statquasi1] (see also Refs. [@1964mbpdedom; @boer]). At second order, this scheme leads to the mean field employed by Holt and Kaiser [@PhysRevC.95.034326]. The BdD scheme of Refs. [@statquasi3; @statquasi1] has two essential properties: 1. 
The functional form of the mean field is to all orders given by $U[n_{\textbf}{k}]$, i.e., there is no explicit temperature dependence (apart from the one given by the Fermi-Dirac distributions), so the ${T\rightarrow 0}$ limit exists. 2. The zero-temperature limit of the renormalized grand-canonical perturbation series for the free energy $F(T,\mu)$ reproduces the (correspondingly renormalized) adiabatic series for the ground-state energy $E^{(0)}({\varepsilon}_{\text{F}})$ to all orders; i.e., the reference spectrum ${\varepsilon
--- abstract: 'In this manuscript we provide a family of lower bounds on the indirect Coulomb energy for atomic and molecular systems in two dimensions in terms of a functional of the single particle density with gradient correction terms.' address: - '$^1$ Departamento de Física, P. Universidad Católica de Chile,' - '$^2$ Departamento de Física, P. Universidad Católica de Chile, ' author: - 'Rafael D. Benguria$^1$' - Matěj Tušek$^2$ title: '**Indirect Coulomb Energy for Two-Dimensional Atoms**' --- Introduction ============ Since the advent of quantum mechanics, the impossibility of solving exactly problems involving many particles has been clear. These problems are of interest in such areas as atomic and molecular physics, condensed matter physics, and nuclear physics. It was, therefore, necessary from the early beginnings to estimate various energy terms of a system of electrons as functionals of the single particle density $\rho_{\psi}(x)$, rather than as functionals of their wave function $\psi$. In quantum mechanics of many particle systems the main object of interest is the wavefunction $\psi \in \bigwedge^N L^2({{{\mathord{\mathbb R}}}}^3)$ (the antisymmetric tensor product of $L^2({{{\mathord{\mathbb R}}}}^3)$). More explicitly, for a system of $N$ fermions, $\psi(x_1, \dots , x_i, \dots, x_j, \dots, x_N)= - \psi(x_1, \dots , x_j, \dots, x_i, \dots, x_N)$, in view of Pauli’s Exclusion Principle, and $\int_{{{\mathord{\mathbb R}}}^{3N}} |\psi|^2 \, dx_1 \dots d x_N=1$. Here, $x_i \in {{\mathord{\mathbb R}}}^3$ denotes the coordinates of the $i$-th particle. 
From the wavefunction $\psi$ one can define the one–particle density (single particle density) as $$\rho_{\psi}(x) = N \int_{{{\mathord{\mathbb R}}}^{3(N-1)}} |\psi (x, x _2, \dots, x_N)|^2 \, dx_2 \dots dx_N, \label{density}$$ and from here it follows that $\int_{{{\mathord{\mathbb R}}}^3} \rho_{\psi} (x) \, dx = N$, the number of particles, and $\rho_{\psi}(x)$ is the density of particles at $x \in {{\mathord{\mathbb R}}}^3$. Notice that since $\psi$ is antisymmetric, $|\psi|^2$ is symmetric, and it is immaterial which variable is set equal to $x$ in (\[density\]). In Atomic and Molecular Physics, given that the expectation value of the Coulomb attraction of the electrons by the nuclei can be expressed in closed form in terms of $\rho_{\psi}(x)$, the interest focuses on estimating the expectation value of the kinetic energy of the system of electrons and the expectation value of the Coulomb repulsion between the electrons. Here, we will be concerned with the latter. The most natural approximation to the expectation value of the Coulomb repulsion between the electrons is given by $$D(\rho_{\psi},\rho_{\psi})=\frac{1}{2} \int \rho_{\psi} (x) \frac{1}{|x-y|} \rho_{\psi}(y) \, {\mathrm{d}}x \, {\mathrm{d}}y,$$ which is usually called the [*direct term*]{}. The remainder, i.e., the difference between the expectation value of the electronic repulsion and $D(\rho_{\psi},\rho_{\psi})$, say $E$, is called the [*indirect term*]{}. In 1930, Dirac [@Di30] gave the first approximation to the indirect Coulomb energy in terms of the single particle density. Using an argument with plane waves, he approximated $E$ by $$E \approx -c_D \int \rho_{\psi}^{4/3} \, dx, \label{eq:dirac}$$ where $c_D=(3/4)(3/\pi)^{1/3} \approx 0.7386$ (see, e.g., [@Mo06], p. 299). Here we use units in which the absolute value of the charge of the electron is one. The first rigorous lower bound for $E$ was obtained by E.H. Lieb in 1979 [@Li79], using the Hardy–Littlewood Maximal Function [@StWe71]. 
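As a quick numerical aside, the Dirac constant $c_D=(3/4)(3/\pi)^{1/3}$ quoted above, and the integral $\int\rho_{\psi}^{4/3}\,dx$ entering (\[eq:dirac\]), can be checked directly. The sketch below uses the normalized hydrogenic density $\rho(r)=e^{-2r}/\pi$ purely as an illustrative $N=1$ example (it is not a density discussed in the text):

```python
import math

# Dirac exchange constant c_D = (3/4)(3/pi)^(1/3) ~ 0.7386.
c_D = 0.75 * (3.0 / math.pi) ** (1.0 / 3.0)

def radial_integral(g, rmax=30.0, n=100000):
    """Midpoint-rule integral of 4*pi*r^2*g(r) over [0, rmax],
    i.e. the 3D integral of a spherically symmetric function g."""
    dr = rmax / n
    return sum(4.0 * math.pi * ((i + 0.5) * dr) ** 2 * g((i + 0.5) * dr)
               for i in range(n)) * dr

rho = lambda r: math.exp(-2.0 * r) / math.pi   # hydrogen ground state, N = 1
norm = radial_integral(rho)                    # should integrate to 1
I43 = radial_integral(lambda r: rho(r) ** (4.0 / 3.0))
E_dirac = -c_D * I43                           # Dirac estimate of the indirect term

print(round(c_D, 4), round(norm, 6), round(I43, 4))
```

For this density the exact value is $\int\rho^{4/3}\,dx = 27/(64\,\pi^{1/3})\approx 0.288$, which the quadrature reproduces.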
There he found that, $E \geq -8.52 \int \rho_{\psi}^{4/3} \, dx$. The constant $8.52$ was substantially improved by E.H. Lieb and S. Oxford in 1981 [@LiOx81], who proved the bound $$E \ge -C \int \rho_{\psi}^{4/3} \, dx, \label{eq:LO}$$ with $C = C_{LO}=1.68$. In their proof, Lieb and Oxford used Onsager’s electrostatic inequality [@On39], and a localization argument. The best value for $C$ is unknown, but Lieb and Oxford [@LiOx81] proved that it is at least $1.234$. The Lieb–Oxford value was later improved to $1.636$ by Chan and Handy, in 1999 [@ChHa99]. Since the work of Lieb and Oxford [@LiOx81], there has been a special interest in quantum chemistry in constructing corrections to the Lieb–Oxford term involving the gradient of the single particle density. Recently, Benguria, Bley, and Loss obtained an alternative to (\[eq:LO\]), which has a lower constant (close to $1.45$) at the expense of adding a gradient term (see Theorem 1.1 in [@BeBlLo12]), which we state below in a slightly modified way, \[BBL\] For any normalized wave function $\psi(x_1, \dots, x_N)$ and any $0 < \epsilon < 1/2$ we have the estimate $$E(\psi) \ge - 1.4508 \, (1+\epsilon) \int_{{{\mathord{\mathbb R}}}^3} \rho_{\psi}^{4/3} dx -\frac{3}{2 \epsilon} (\sqrt{\rho_{\psi}}, |p| \sqrt{\rho_{\psi}}) \label{exch}$$ where $$(\sqrt \rho, |p| \sqrt \rho) := \int_{{{\mathord{\mathbb R}}}^3} |\widehat{\sqrt \rho}(k)|^2 |2\pi k| d k = \frac{1}{2\pi^2} \int_{{{\mathord{\mathbb R}}}^3} \int_{{{\mathord{\mathbb R}}}^3} \frac{|\sqrt{\rho(x)} - \sqrt{\rho(y)}|^2 }{|x-y|^4} dx dy \ . \label{eq:KE}$$ Here, $\widehat f(k)$ denotes the Fourier-transform $$\widehat f (k) = \int_{{{\mathord{\mathbb R}}}^3} e^{-2\pi i k \cdot x} f(x) d x\ .$$ i\) For many physical states the contribution of the last two terms in (\[exch\]) is small compared with the contribution of the first term. 
See, e.g., the Appendix in [@BeBlLo12]; ii\) For the second equality in (\[eq:KE\]) see, e.g., [@LiLo01], Section 7.12, equation (4), p. 184; iii\) It was already noticed by Lieb and Oxford (see the remark after equation (26), p. 261 in [@LiOx81]) that for uniform densities the Lieb–Oxford constant should be $1.45$ rather than $1.68$; iv\) In the same vein, J. P. Perdew [@Pe91], by employing results for a uniform electron gas in its low density limit, showed that in the Lieb–Oxford bound one ought to have $C \ge 1.43$ (see also [@LePe93]). Recently, Benguria, Gallegos, and Tušek [@BeGaTu12] gave an alternative to the Lieb–Solovej–Yngvason bound [@LiSoYn95], with a constant much closer to the numerical values proposed in [@RaSeGo11] (see also the references therein).
--- author: - 'Yu. V. …' --- … of functions. It makes it possible to relate function-theoretic and geometric properties of sets. … [@ahlfors]. … [@ziemer]. Hesse [@hesse] extended this result to the $p$-capacity and $p$-modulus in the case where the plates of the condenser do not intersect the boundary of the domain. In the case of the Euclidean metric, the equality of capacity and modulus was proved under the most general assumptions by V. A. Shlyk [@shlyk2]; this proof was later somewhat simplified by M. Ohtsuka [@ohtsuka]. For a Riemannian metric the equality was proved in [@capmod]. Finsler spaces were introduced as a generalization of Riemannian manifolds to the case where the metric depends not only on the coordinates but also on the direction. … [@dymch2009]. Carnot-Carathéodory spaces and sub-Finsler spaces differ from Riemannian and Finsler spaces, respectively, in that the class of admissible paths is restricted. The basic questions of analysis on Carnot groups are treated, for example, in the book [@folland]. Capacities, moduli of condensers, and properties of various function classes on Carnot groups have recently been studied by the group of S. K. Vodopyanov (see, e.g., [@vod89; @vod98; @vod96]). … [@markina2003]. Sub-Finsler spaces have been studied, for example, in [@clelland2006628; @ber13; @donne; @ber14; @buk14]. We now fix notation. Proofs of many of the statements below can be found in [@folland]. A stratified homogeneous group (or Carnot group) is a connected, simply connected nilpotent Lie group $\G$ whose Lie algebra decomposes into a direct sum of vector spaces $V_1\oplus V_2\oplus\dots\oplus V_m$ such that $[V_1,V_k]=V_{k+1}$ for $k=1,2,\dots,m-1$ and $[V_1,V_m]=\{0\}$. Here $[X,Y]=XY-YX$ is the commutator of the elements $X$ and $Y$, and $[V_1,V_j]$ is the linear span of the elements $[X,Y]$ with $X\in V_1$, $Y\in V_j$, $j=1,2,\dots,m$. 
Let the left-invariant vector fields $X_{11}$, $X_{12}$, …, $X_{1n_1}$ form a basis of $V_1$. Define the subbundle $HT$ of the tangent bundle $T\G$ with fibers $HT_x$, $x\in \G$, given by the linear span of the vector fields $X_{11}(x)$, $X_{12}(x)$, …, $X_{1n_1}(x)$. We call $HT$ the horizontal tangent bundle, and its fibers $HT_x$ the horizontal tangent spaces at the point $x\in\G$. Extend the basis $X_{11}$, …, $X_{1n_1}$ to a basis $X_{ij}$, $j=1,2,\dots, n_i$, $i=1,2,\dots,m$, of the whole Lie algebra, where each $X_{ij}$ is an iterated commutator of certain vectors $X_{1j}$, $j=1,2,\dots,n_1$. In this way we obtain a basis of the Lie algebra of $\G$ adapted to the layers $V_i$, $i=1,2,\dots,m$. Any element $x\in \G$ can be represented uniquely in the form $x=\exp\left(\sum\limits_{i,j}x_{ij}X_{ij}\right)$. The numbers $\{x_{ij}\}$ are called the coordinates of the element $x$. The Lebesgue measure on $R^N$ induces a bi-invariant Haar measure on $\G$, which we denote by $dx$. Write $x_i=(x_{i1},x_{i2},\dots,x_{in_i})$, $i=1,2,\dots,m$. Define the dilations $\delta_\lambda x$, $\lambda>0$, by the formula $\delta_\lambda x=(\lambda x_1,\lambda^2x_2,\dots,\lambda^m x_m)$. We also have $d(\delta_\lambda x)=\lambda^Q dx$, where $Q=\sum\limits_iin_i$ is the homogeneous dimension of the group. Let $F(x,\xi)$ be a nonnegative function, defined for $x\in\G$, $\xi\in HT_x$, which depends smoothly
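The homogeneous dimension $Q=\sum_i i\,n_i$ and the layer-wise scaling of the dilations $\delta_\lambda$ can be illustrated in a few lines. The sketch below uses the Heisenberg group ($n_1=2$, $n_2=1$, hence $Q=4$) as the standard example; the coordinate layout is the flat tuple $(x_1, x_2)$ grouped layer by layer:

```python
def homogeneous_dimension(layer_dims):
    """Q = sum_i i * n_i, where layer_dims = (n_1, ..., n_m)."""
    return sum(i * n for i, n in enumerate(layer_dims, start=1))

def dilate(x, lam, layer_dims):
    """Dilation delta_lambda: coordinates in layer i scale as lam**i."""
    out, pos = [], 0
    for i, n in enumerate(layer_dims, start=1):
        out.extend(lam ** i * c for c in x[pos:pos + n])
        pos += n
    return out

Q = homogeneous_dimension((2, 1))  # Heisenberg group: 1*2 + 2*1
print(Q)  # → 4
print(dilate([1.0, 1.0, 1.0], 2.0, (2, 1)))  # → [2.0, 2.0, 4.0]
```

The Jacobian of `dilate` is $\lambda^Q$, matching $d(\delta_\lambda x)=\lambda^Q dx$ above: here each of the two horizontal coordinates scales by $\lambda$ and the central one by $\lambda^2$.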
--- author: - | Yonghui Ling$^\textup{a,}$[^1], Xiubin Xu$^\textup{b,}$[^2]\ $^\textup{a}$Department of Mathematics, Zhejiang University, Hangzhou 310027, China\ $^\textup{b}$Department of Mathematics, Zhejiang Normal University, Jinhua 321004, China title: 'Semilocal Convergence Behavior of Halley’s Method Using Kantorovich’s Majorants Principle ' --- **Abstract:** The present paper is concerned with the semilocal convergence of Halley’s method for solving nonlinear operator equations in Banach spaces. Under some so-called majorant conditions, a new semilocal convergence analysis for Halley’s method is presented. This analysis leads to a higher rate of convergence. Moreover, a new error estimate based on the second derivative of the majorizing function is also obtained. This analysis also allows us to obtain two important special cases of the convergence results, based on premises of Kantorovich and Smale type. **Mathematics Subject Classification:** 65H10. Introduction {#section:Introduction} ============ In this paper, we are concerned with the numerical approximation of a solution $x$ of the nonlinear equation $$\label{eq:NonlinearOperatorEquation} F(x) = 0,$$ where $F$ is a given nonlinear operator mapping some nonempty open convex subset $D$ of a Banach space $X$ into another Banach space $Y$. Newton’s method with initial point $x_0$ is defined by $$\label{iteration:NewtonMethod} x_{k+1} = x_k - F'(x_k)^{-1}F(x_k), \ \ \ k = 0,1,2,\ldots,$$ and is among the most efficient methods known for solving such equations. One of the famous results on Newton’s method (\[iteration:NewtonMethod\]) is the well-known Kantorovich theorem [@Kantorvich1982], which guarantees convergence of the method to a solution under semilocal conditions. It does not require a priori existence of a solution, proving instead the existence of the solution and its uniqueness in some region. 
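A scalar sketch of iteration (\[iteration:NewtonMethod\]) illustrates the Q-quadratic error decay that the Kantorovich theorem guarantees; the test function $x^2-2$ is an arbitrary illustrative choice:

```python
import math

def newton(f, fprime, x0, tol=1e-14, maxit=50):
    """Scalar Newton iteration x_{k+1} = x_k - f(x_k)/f'(x_k);
    returns the whole iterate history."""
    xs = [x0]
    for _ in range(maxit):
        x = xs[-1]
        x_new = x - f(x) / fprime(x)
        xs.append(x_new)
        if abs(x_new - x) < tol:
            break
    return xs

# Root of x^2 - 2 is sqrt(2); the error roughly squares at each step.
xs = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
errs = [abs(x - math.sqrt(2.0)) for x in xs]
print(errs)
```

Starting from $x_0=1$, the errors drop as $0.41,\ 0.086,\ 0.0025,\ 2\cdot10^{-6},\dots$, doubling the number of correct digits per step, which is the quadratic behavior quantified by the semilocal theory.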
Another important result concerning Newton’s method (\[iteration:NewtonMethod\]) is Smale’s point estimate theory [@Smale1986], which assumes that the nonlinear operator is analytic at the initial point. Since then, Kantorovich-type theorems have been the subject of much further research; see, for example, [@GraggTapia1974; @Deuflhard1979; @Ypma1982; @Gutierrez2000; @XuLi2007; @XuLi2008]. For Smale’s point estimate theory, Wang and Han in [@WangHan1997] discussed $\alpha$ criteria under some weak condition and generalized this theory. In particular, Wang in [@Wang1999] introduced weak Lipschitz conditions, called Lipschitz conditions with L-average, under which Kantorovich-type convergence criteria and Smale’s point estimate theory can be investigated together. Recently, Ferreira and Svaiter [@Ferreira2009a] presented a new convergence analysis for Kantorovich’s theorem which makes clear, with respect to Newton’s method (\[iteration:NewtonMethod\]), the relationship between the majorizing function $h$ and the nonlinear operator $F$ under consideration. Specifically, they studied the semilocal convergence of Newton’s method (\[iteration:NewtonMethod\]) under the following majorant conditions: $$\|F'(x_0)^{-1}[F'(y) - F'(x)]\| \leq h'(\|y - x\| + \|x - x_0\|) - h'(\|x - x_0\|), \ \ x,y \in {\bm{\mathrm{B}}}(x_0,R), R > 0,$$ where $\|y - x\| + \|x - x_0\| < R$ and $h:[0,R) \to \mathbb{R}$ is a continuously differentiable, convex and strictly increasing function that satisfies $h(0) > 0$, $h'(0) = - 1$, and has a zero in $(0,R)$. This convergence analysis relaxes the assumptions for guaranteeing Q-quadratic convergence (see Definition \[definition:Q-orderConvergence\]) of Newton’s method (\[iteration:NewtonMethod\]) and obtains a new estimate of the Q-quadratic convergence. A similar analysis was introduced in [@Ferreira2009b] to study the local convergence of Newton’s method. 
Halley’s method in Banach spaces, defined by $$\label{iteration:HalleyMethod} x_{k+1} = x_k - [{\bm{\mathrm{I}}}- L_F(x_k)]^{-1}F'(x_k)^{-1}F(x_k), \ \ k = 0,1,2,\ldots,$$ where $L_F(x)=\frac{1}{2}F'(x)^{-1}F''(x)F'(x)^{-1}F(x)$, is another famous iteration for solving the nonlinear equation (\[eq:NonlinearOperatorEquation\]). Convergence results for this method and its modifications have been studied under assumptions of Newton-Kantorovich type; see, for example, [@Candela1990a; @HanWang1997; @Argyros2004; @YeLi2006; @Ezquerro2005; @Argyros2012]. Besides, there is also work on Smale-type convergence for Halley’s method (\[iteration:HalleyMethod\]) when the nonlinear operator $F$ is analytic at the initial point; see, for example, [@WangHan1990; @Wang1997; @Han2001]. Motivated by the ideas of Ferreira and Svaiter in [@Ferreira2009a], in the rest of this paper we study the semilocal convergence of Halley’s method (\[iteration:HalleyMethod\]) under some so-called majorant conditions. Suppose that $F$ is a twice Fréchet differentiable operator and that there exists $x_0 \in D$ such that $F'(x_0)$ is nonsingular. In addition, let $R > 0$ and let $h : [0,R) \to \mathbb{R}$ be a twice continuously differentiable function. We say the operator $F''$ satisfies the majorant conditions if $$\label{condition:MajorantCondition} \|F'(x_0)^{-1}[F''(y) - F''(x)]\| \leq h''(\|y - x\| + \|x - x_0\|) - h''(\|x - x_0\|), \ \ x,y \in {\bm{\mathrm{B}}}(x_0,R),$$ where $\|y - x\| + \|x - x_0\| < R$ and the following assumptions hold: 1. $h(0) > 0, h''(0) > 0, h'(0) = -1.$ 2. $h''$ is convex and strictly increasing in $[0,R)$. 3. $h$ has zero(s) in $(0,R)$. Assume that $t^*$ is the smallest zero and $h'(t^*) < 0.$ Under the assumption that the second derivative of $F$ satisfies the majorant conditions, we establish a semilocal convergence result for Halley’s method (\[iteration:HalleyMethod\]). 
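In the scalar case, $L_F$ reduces to $L_f(x)=f''(x)f(x)/(2f'(x)^2)$, and iteration (\[iteration:HalleyMethod\]) can be sketched directly; the cubic test equation $x^3-2=0$ is an illustrative choice, not an example from the text:

```python
def halley(f, df, d2f, x0, tol=1e-14, maxit=30):
    """Scalar Halley iteration: x+ = x - [1 - L]^{-1} f(x)/f'(x),
    with L = f''(x) f(x) / (2 f'(x)^2), the scalar analogue of L_F."""
    x = x0
    for _ in range(maxit):
        fx, dfx = f(x), df(x)
        L = d2f(x) * fx / (2.0 * dfx ** 2)
        x_new = x - fx / (dfx * (1.0 - L))
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Root of x^3 - 2 is 2^(1/3); the error is roughly cubed at each step.
root = halley(lambda x: x ** 3 - 2.0,
              lambda x: 3.0 * x ** 2,
              lambda x: 6.0 * x, 1.0)
print(root)
```

Starting from $x_0=1$, the successive errors are about $0.26,\ 9.9\cdot10^{-3},\ 4\cdot10^{-7},\dots$, i.e., the number of correct digits roughly triples per step, which is the Q-cubic behavior the semilocal analysis below quantifies.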
In our convergence analysis, the assumptions for guaranteeing Q-cubic convergence of Halley’s method (\[iteration:HalleyMethod\]) are relaxed. In addition, we obtain a new error estimate based on the second derivative of the majorizing function. We drop the assumption of the existence of a second root of the majorizing function while still guaranteeing Q-cubic convergence. Moreover, the majorizing function need not even be defined beyond its first root. In particular, this convergence analysis allows us to obtain some important special cases, which include Kantorovich-type convergence results under Lipschitz conditions and Smale-type convergence results under the $\gamma$-condition (see Definition \[definition:GammaCondition\]). The rest of this paper is organized as follows. In Section 2, we introduce some preliminary notions and properties of the majorizing function. In Section 3, we study the majorizing function and the results regarding only the majorizing sequence. The main result is discussed in Section 4. In Section 5, we present two special cases of our main results. Finally, in Section 6, some remarks and a numerical example are offered. Preliminaries {#section:Preliminaries} ============= Let $X$ and $Y$ be Banach spaces. For $x \in X$ and a positive number $r$, throughout the whole paper, we use ${\bm{\mathrm{B}}}(x,r)$ to stand for the open ball with radius $r$ and center $x$, and let $\overline{{\bm{\mathrm{B}}}(x,r)}$ denote its closure. Throughout this paper, for a convergent sequence $\{x_n\}$ in $X$, we use the notion of Q-order of convergence.
--- abstract: 'Gamma-ray spectroscopy provides diagnostics of particle acceleration in solar flares, but care must be taken when interpreting the spectra due to effects of the angular distribution of the accelerated particles (such as relativistic beaming) and Compton reprocessing of the radiation in the solar atmosphere. In this paper, we use the GEANT4 Monte Carlo package to simulate the interactions of accelerated electrons and protons and study these effects on the gamma-rays resulting from electron bremsstrahlung and pion decay. We consider the ratio of the 511 keV annihilation-line flux to the continuum at 200 keV and in the energy band just above the nuclear de-excitation lines (8–15 MeV) as a diagnostic of the accelerated particles and a point of comparison with data from the X17 flare of 2003 October 28. We also find that pion secondaries from accelerated protons produce a positron annihilation line component at a depth of $\sim$ 10 g cm$^{-2}$, and that the subsequent Compton scattering of the 511 keV photons produces a continuum that can mimic the spectrum expected from the 3$\gamma$ decay of orthopositronium.' author: … --- Solar flares accelerate electrons and ions to high energies. When these particles interact with the ambient medium, they can produce photons with energies up to the gamma-ray range. The electrons produce continuum emission via the bremsstrahlung process, while the ions (protons and heavier nuclei) can produce excited and radioactive nuclei which, through de-excitation or decay, produce emission lines, mostly at energies $\lesssim$ 7 MeV. Ions with energy above $\sim$ 200 MeV can produce pions by interacting with ambient nuclei. These pions then produce gamma-ray continuum via $\pi^0\rightarrow 2\gamma$ or $\pi^{\pm} \rightarrow \mu^{\pm} \rightarrow e^{\pm}\rightarrow \gamma_{brem}$. There is also 511 keV line emission from annihilation of positrons created by the decay of $\beta^{+}$-emitting radioactive nuclei or $\pi^{+}$. 
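The $\pi^0\rightarrow 2\gamma$ channel illustrates why pion decay produces a broad continuum rather than a line: each photon carries $m_{\pi^0}c^2/2\simeq 67.5$ MeV in the pion rest frame, and the Doppler shift of a moving pion smears this into a flat distribution between the limits computed below. This is a kinematics-only sketch, independent of the Monte Carlo simulations in the text:

```python
import math

M_PI0 = 134.977  # neutral pion rest energy, MeV

def pi0_photon_energy_range(T_pi):
    """Minimum and maximum lab-frame photon energy (MeV) from
    pi0 -> 2 gamma for a pion of kinetic energy T_pi (MeV).
    The lab spectrum is flat between these limits."""
    E = T_pi + M_PI0
    gamma = E / M_PI0
    beta = math.sqrt(1.0 - 1.0 / gamma ** 2)
    half = 0.5 * M_PI0  # photon energy in the pion rest frame
    return half * gamma * (1.0 - beta), half * gamma * (1.0 + beta)

print(pi0_photon_energy_range(0.0))    # pion at rest: both ~67.5 MeV
print(pi0_photon_energy_range(200.0))  # moving pion: broad range
```

A useful check is that the geometric mean of the two limits is always $m_{\pi^0}c^2/2$, since $\gamma^2(1-\beta^2)=1$; summed over a pion population this produces the well-known continuum bump centered (logarithmically) near 67.5 MeV.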
Whether radioactive nuclei or $\pi^{+}$ contribute more to the positron population depends on the hardness of the injected ions [@murphy84]. Positrons may also come from the $e^{-}e^{+}$ pair production process of the gamma-ray continuum. These continua and lines provide information on particle acceleration in solar flares, but we can only observe those photons that reach us. Since the accelerated electrons are relativistic, the angular distribution of bremsstrahlung will tend to follow that of the original electrons, so electrons beamed downward along the magnetic field will put most of their radiation into the Sun. Photons created deep in the solar atmosphere by any process are less likely to escape than those created in the corona or chromosphere. These effects will change the relative luminosity of different spectral components, but the location and directionality of the photon production processes will also change the spectral shape of each as well. Bremsstrahlung intrinsically creates different spectra in different directions, with the hardest spectrum in the beam direction. Compton scattering can also affect the observed spectra [e.g., @kotoku07] by scattering photons to lower energy. The importance of this process depends on both the depth where the original photons are produced and their direction relative to the line of sight. Knowledge of the directionality of the accelerated electrons is therefore essential for interpreting the observed spectra [@kontar06]. The picture of electrons accelerated directly down field lines into the deep solar atmosphere is seldom found to agree with observations. Strong scattering, as from interactions with magnetohydrodynamic waves, can make the distribution function evolve into one that is more isotropic than when the electrons were injected into the magnetic loop. @bret09 compared different kinds of instabilities that may cause this anisotropy decrease. The evolution speed of the distribution function is sensitive to the loop magnetic field as described by @karlicky09. 
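Because the bremsstrahlung angular distribution follows the emitting electron, the characteristic beaming half-angle is of order $1/\gamma$. A one-line estimate for the electron energies relevant here (the 10 MeV value is an illustrative choice):

```python
import math

M_E = 0.511  # electron rest energy, MeV

def beaming_angle_deg(T_mev):
    """Characteristic bremsstrahlung beaming half-angle ~ 1/gamma,
    in degrees, for an electron of kinetic energy T_mev (MeV)."""
    gamma = 1.0 + T_mev / M_E
    return math.degrees(1.0 / gamma)

print(round(beaming_angle_deg(10.0), 1))  # → 2.8 (degrees)
```

For a 10 MeV electron the radiation is confined to a cone only a few degrees wide, which is why a downward-beamed electron population directs most of its bremsstrahlung into the Sun, as discussed above.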
Electrons can also be magnetically trapped near the loop top [@dermer86]. @krucker08 proposed a scenario in which the highest energy bremsstrahlung in flares is produced in the magnetic loop top, because they found that the coronal source is harder and becomes dominant above 500 keV in the 2005 January 20 flare. In this picture, the angular distribution of gamma-ray producing electrons is also isotropic because they are trapped by strong scattering. @kontar06 concluded that the angular distribution of electrons producing hard X-rays at flare footpoints is isotropic by including the Compton-scattered X-ray “albedo” surrounding the footpoints in their spectral analysis. In the present work, we perform Monte Carlo simulations with the toolkit GEANT4 [@agostinelli03] to illustrate the effects of beaming and reprocessing on observable gamma-ray components from flare-accelerated electrons and protons. We focus on electron bremsstrahlung and the secondary radiation from pion production by protons, since we believe that GEANT4 addresses these components more accurately than it does the nuclear de-excitation lines that dominate between these energy ranges. If the 8–15 MeV continuum is taken to be bremsstrahlung from flare-accelerated electrons, we show that there is a strong constraint on the angular distribution of the electrons, an effect discussed also by @kotoku07. The positron-annihilation line is always isotropic, because the positrons mostly slow down to thermal speed before they annihilate. Thus, a comparison of the annihilation line and the 8–15 MeV continuum gives us further information on the electron angular distribution, again under the assumption that both components are due to bremsstrahlung and its reprocessing. 
We will also discuss the other sources for these energy bands: for the line, $\beta^{+}$ decay and positrons from $\pi^{+}$, and for the continuum, bremsstrahlung from electrons and positrons from the decay of charged pions and reprocessing of gamma-rays from the decay of neutral pions in the solar atmosphere and in the instrument. Positrons can annihilate through the 3$\gamma$ orthopositronium channel. The resulting continuum, while it has a characteristic shape, can be mistaken for Comptonization of the 511 keV line [@share04] and can greatly affect the estimation of the ratio between the annihilation line and other spectral components. For an example both of the capabilities of these simulations and of instrumental effects, we use an observation of the large X-class flare of 2003 October 28 with [*RHESSI*]{} and compare the time-integrated spectrum with our simulations. The models and method are shown in detail in §\[sec:model\]. In §\[sec:simu\] we show the simulation results and compare them with the 2003 October 28 flare, and §\[sec:dis\] provides the summary and discussion. Model and method {#sec:model} ================ The GEANT4 tool kit {#sec:geant} ------------------- We used the Monte Carlo simulation package GEANT4 [@agostinelli03], which is widely used in experimental high-energy physics for simulating the passage of particles through matter. The physics processes offered cover all the electromagnetic and hadronic processes we are interested in. GEANT4 treats individual simulated particles one at a time rather than distributions of particles, and carries them through a mass model of the universe defined by geometrical boundaries between materials rather than a grid. 
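GEANT4's per-step competition among discrete processes, in which each process proposes a distance sampled from its exponential free-path distribution and the shortest proposal wins unless a material boundary is closer, can be sketched in miniature. The process names and mean free paths below are hypothetical placeholders, not GEANT4 API calls:

```python
import math
import random

def choose_step(mean_free_paths, dist_to_boundary, rng=None):
    """Sample a proposed step length for each discrete process from its
    exponential distribution and pick the shortest, unless the nearest
    material boundary is closer (a sketch of GEANT4-style stepping)."""
    rng = rng or random.Random(0)
    sampled = {name: -mfp * math.log(1.0 - rng.random())
               for name, mfp in mean_free_paths.items()}
    process, dist = min(sampled.items(), key=lambda kv: kv[1])
    if dist_to_boundary < dist:
        return "transport_to_boundary", dist_to_boundary
    return process, dist

# Hypothetical mean free paths (arbitrary units) for a photon.
proc, dist = choose_step({"compton": 1.0, "pair_production": 7.0}, 1e9)
print(proc, dist)
```

Processes with shorter mean free paths win the competition more often, which is exactly how relative interaction probabilities emerge from the sampling; the real toolkit additionally applies continuous energy loss and multiple scattering along the chosen step.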
When a particle is “injected”, GEANT4 calculates the mean free path of all the discrete physics processes implemented, calculates a random distance associated with each process, and chooses that with the shortest distance to be implemented (unless the distance to the nearest material boundary is closer, in which case the particle is taken to the boundary). It then determines all the physics properties of the particle after the chosen process (including its new position) taking account of the continual physics processes (such as energy loss by electrons) that happen within this step. As this goes on, it provides a “track” of the particle until it comes to rest, leaves the volume, or reaches a low-energy threshold. Daughter particles, when created, are tracked immediately, with the parent particle put aside to be followed after the daughter particle is finished. Cascades can thus be followed deeply, restricted only by computer memory. In our simulations, we inject electrons or protons into a model of the solar atmosphere and track their interactions with the ambient material, recording the angular and energy distribution of photons leaving the Sun. @kotoku07 used GEANT4 to simulate electron bremsstrahlung in solar flares, and included the effect of Compton scattering on the emerging bremsstrahlung continuum. In this work, we also quantify the positron-annihilation line resulting from bremsstrahlung photons pair-producing in the Sun, and simulate the gamma-ray emissions originating from accelerated protons as well, considering the continuum and annihilation-line photons resulting from pion creation. ### Electromagnetic processes {#sec:em} GEANT
--- abstract: 'We performed Self-Consistent Greens Function (SCGF) calculations for symmetric nuclear matter using realistic nucleon-nucleon (NN) interactions and effective low-momentum interactions ($V_{low-k}$), which are derived from such realistic NN interactions. We compare the spectral distributions resulting from such calculations. We also introduce a density-dependent effective low-momentum interaction which accounts for the dispersive effects in the single-particle propagator in the medium.' author: - 'P. Bożek[^1]' - 'D. J. Dean[^2]' - 'H. Müther[^3]' title: … --- Various models for the realistic NN interaction have been proposed [see, e.g., @n3lo]. A general feature of all these interaction models is strong short-range and tensor components, which lead to corresponding correlations in the nuclear many-body wave-function. Hartree-Fock mean-field theory, which represents the lowest-order many-body calculation one can perform with such realistic NN interactions, fails to produce bound nuclei [@reviewartur; @localint] precisely because Hartree-Fock does not fully incorporate many-body correlation effects. That correlations beyond the mean field are important is supported by experiments exploring the spectral distribution of the single-particle strength. One experimental fact found in all nuclei is the global depletion of the Fermi sea. A recent experiment from NIKHEF puts this depletion of the proton Fermi sea in ${}^{208}$Pb at a little less than 20% [@bat01], in accordance with earlier nuclear matter calculations [@vond1]. 
Another consequence of the presence of short-range and tensor correlations is the appearance of high-momentum components in the ground state wave-function to compensate for the depleted strength of the mean field. Recent JLab experiments [@rohe:04] indicate that the amount and location of this strength is consistent with earlier predictions for finite nuclei [@mudi:94] and calculations of infinite matter [@frmu:03]. These data and their analysis, however, are not sufficient to allow for a detailed comparison with the predictions derived from the various interaction models at high momenta. In this paper, we want to investigate a possibility to separate the predictions for correlations at low and medium momenta, which are constrained by the NN scattering matrix below pion threshold, from the high momentum components, which may strongly depend on the underlying model for the NN interaction. For that purpose we will perform nuclear many-body calculations within a model space that allows for the explicit evaluation of low-momentum correlations. The effective Hamiltonian for this model space will be constructed from a realistic interaction to account for correlations outside the model space. This concept of a model space and effective operators appropriately renormalized for this model space has a long history in nuclear many-body physics. As an example we mention the effort to evaluate effective operators to be used in Hamiltonian diagonalization calculations of finite nuclei. For more details see e.g. [@morten:04]. The concept of a model space for the study of infinite nuclear matter was used e.g. by Kuo et al. [@kumod1; @kumod2; @kumod3]. Also the Brueckner-Hartree-Fock (BHF) approximation can be considered as a model space approach. In this case one restricts the model space to just one Slater-determinant and determines the effective interaction through a calculation of the G-matrix, the solution of the Bethe-Goldstone equation. 
The effective Hamiltonians for such model space calculations have frequently been evaluated within Rayleigh-Schrödinger perturbation theory, leading to a non-hermitian and energy-dependent result. The energy-dependence can be removed by considering the so-called folded diagrams, as has been discussed e.g. by Brandow [@brandow:67] and Kuo [@kuo:71]. We note that the folded-diagram expansion yields effective interaction terms between three and more particles, even if one considers a realistic interaction with two-body terms only [@polls:83; @polls:85]. During the last years the folded-diagram technique has been applied to derive an effective low-momentum potential $V_{low-k}$ [@bogner:03] from a realistic NN interaction. By construction, $V_{low-k}$ potentials reproduce the deuteron binding energy, the low-energy phase shifts and the half-on-shell $T$ matrix calculated from the underlying realistic NN interaction up to the chosen cut-off parameter. The resulting $V_{low-k}$ turns out to be rather independent of the original NN interaction if this cut-off parameter for the relative momenta is below the value of the pion-production threshold in NN scattering. The off-shell characteristics of the $V_{low-k}$ effective interaction are not constrained by experimental data and can influence the many-body character of the interaction. For finite nuclei we find that one does indeed obtain different binding energies for $^{16}$O depending on the underlying NN interaction from which one derives the $V_{low-k}$ interaction. For example, using coupled-cluster techniques at the singles and doubles level (CCSD) [@dean04] we find binding energies for $^{16}$O at a lab-momentum cutoff of $\Lambda=2.0$ fm$^{-1}$ to be $-143.4\pm 0.4$ MeV and $-153.3\pm 0.4$ MeV for the N$^3$LO [@n3lo] and CD-Bonn two-body interactions, respectively. 
The CCSD calculations were carried out in model spaces of up to 7 major oscillator shells (with extrapolations to an infinite model space) using the intrinsic Hamiltonian defined as $H=T-T_{cm}+V_{low-k}$, where $T_{cm}$ is the center-of-mass kinetic energy. Attractive energies are obtained if such a $V_{low-k}$ interaction is used in a Hartree-Fock calculation of nuclear matter or finite nuclei [@corag:03; @kuck:03]. High-momentum correlations, which are required to obtain bound nuclear systems from a realistic NN interaction (see above), are taken into account in the renormalization procedure which leads to $V_{low-k}$. Supplementing these Hartree-Fock calculations with corrections up to third order in Goldstone perturbation theory leads to results for the ground-state properties of $^{16}$O and $^{40}$Ca which are in fair agreement with the empirical data [@corag:03]. (One should note that $T_{cm}$ was not included in these calculations.) Calculations in infinite matter demonstrate that $V_{low-k}$ is quite a good approximation for the evaluation of low-energy spectroscopic data. The pairing properties derived from the bare interaction are reproduced [@kuck:03], and the predicted pairing properties also agree with results obtained with phenomenological interactions like the Gogny force [@gogny; @sedrak:03]. The $V_{low-k}$ interaction also yields a good approximation for the calculated binding energy of nuclear matter at low densities. At high densities, however, BHF calculations using $V_{low-k}$ yield too much binding energy and do not reproduce the saturation feature of nuclear matter [@kuck:03]. This is due to the fact that $V_{low-k}$ does not account for the effects of the dispersive quenching of the two-particle propagator, as is done e.g. in the Brueckner $G$-matrix derived from a realistic NN interaction. This shortcoming of low-momentum interactions has also been discussed in [@bogner:05]. 
An alternative technique to determine an effective Hamiltonian for a model space calculation is based on a unitary transformation of the Hamiltonian. It has been developed by Suzuki [@suzuki:82] and leads to an energy-independent, hermitian effective interaction. The unitary-model-operator approach (UMOA) has also been used to evaluate the ground-state properties of finite nuclei [@suz13; @suz15; @fuji:04; @roth:05]. In the present study we are going to employ the unitary transformation technique to determine an effective interaction which corresponds to the $V_{low-k}$ discussed above. This effective interaction will then be used in self-consistent Green’s function (SCGF) calculations of infinite nuclear matter. Various groups have recently developed techniques to solve the corresponding equations and determine the energy- and momentum-distribution of the single-particle strength in a consistent way [@frmu:03; @bozek0; @bozek1; @bozek2; @dewulf:03; @rd; @frmu:05]. Therefore we can study the correlation effects originating from $V_{low-k}$ inside the model space and compare them to the correlations derived from the bare interaction. Furthermore we use the unitary transformation technique to determine an effective interaction which accounts for dispersive effects missing in the original $V_{low-k}$ (see discussion above). After this introduction we will present the method for evaluating the effective interaction in section 2 and briefly review the basic features of the SCGF approach in section 3. The results of our investigations are presented in section 4, which is followed by the conclusions.
--- abstract: 'Using a slightly weaker definition of cellular algebra, due to Goodman ([@G2] Definition 2.9), we prove that for a symmetric cellular algebra, the dual basis of a cellular basis is again cellular. Then a nilpotent ideal is constructed for a symmetric cellular algebra. The ideal connects the radicals of cell modules with the radical of the algebra. It also reveals some information on the dimensions of simple modules. As a by-product, we obtain some equivalent conditions for a finite dimensional symmetric cellular algebra to be semisimple.' title: Radicals of symmetric cellular algebras --- [^1] Yanbo Li\ Department of Information and Computing Sciences,\ Northeastern University at Qinhuangdao;\ Qinhuangdao, 066004, P.R. China\ School of Mathematical Sciences, Beijing Normal University;\ Beijing, 100875, P.R. China\ Cellular algebras were introduced by Graham and Lehrer in [@KL]. They were defined by a so-called cellular basis with some nice properties. The theory of cellular algebras provides a systematic framework for studying the representation theory of non-semisimple algebras which are deformations of semisimple ones. One can parameterize simple modules for a finite dimensional cellular algebra by methods in linear algebra. Many classes of algebras from mathematics and physics are found to be cellular, including Hecke algebras of finite type, Ariki-Koike algebras, $q$-Schur algebras, Brauer algebras, Temperley-Lieb algebras, cyclotomic Temperley-Lieb algebras, Jones algebras, partition algebras, Birman-Wenzl algebras and so on; we refer the reader to [@G; @GL; @RX; @Xi1; @Xi2] for details. An equivalent basis-free definition of cellular algebras was given by Koenig and Xi [@KX1], which is useful in dealing with structural problems. Using this definition, in [@KX5], Koenig and Xi made explicit an inductive construction of cellular algebras called inflation, which produces all cellular algebras. 
In [@KX7], Brauer algebras were shown to be iterated inflations of group algebras of symmetric groups, and then more information about these algebras was found. There are some generalizations of cellular algebras; we refer the reader to [@DR; @GRM; @GRM2; @WB] for details. Recently, Koenig and Xi [@KX8] introduced affine cellular algebras, which contain cellular algebras as special cases. Affine Hecke algebras of type A and infinite dimensional diagram algebras like the affine Temperley-Lieb algebras are affine cellular. It is an open problem to find explicit formulas for the dimensions of simple modules of a cellular algebra. By the theory of cellular algebras, this is equivalent to determining the dimensions of the radicals of the bilinear forms associated with cell modules. In [@LZ], for a quasi-hereditary cellular algebra, Lehrer and Zhang found that the radicals of the bilinear forms are related to the radical of the algebra. This leads us to study the radical of a cellular algebra. However, at present we do not know how to deal with general cellular algebras; in this paper we will discuss symmetric cellular algebras. Note that Hecke algebras of finite types, Ariki-Koike algebras over any ring containing inverses of the parameters, and Khovanov’s diagram algebras are all symmetric cellular algebras. The trivial extension of a cellular algebra is also a symmetric cellular algebra. For details, see [@BS], [@MM], [@XX]. Throughout this paper, we will adopt a slightly weaker definition of cellular algebra due to Goodman ([@G2] Definition 2.9). It is helpful to note that the results of [@GL] remain valid with this weaker axiom; moreover, in many important cases the weaker axiom is equivalent to the original one. We begin with recalling definitions and some well-known results on symmetric algebras and cellular algebras in Section 2. Then in Section 3, we prove that for a symmetric cellular algebra, the dual basis of a cellular basis is again cellular. In Section 4, a nilpotent ideal of a symmetric cellular algebra is constructed. 
This ideal connects the radicals of cell modules with the radical of the algebra and also reveals some information on the dimensions of simple modules. As a by-product, we obtain some equivalent conditions for a finite dimensional symmetric cellular algebra to be semisimple. **Preliminaries** {#xxsec2} ================= In this section, we start with the definitions of symmetric algebras and cellular algebras (a slightly weaker version due to Goodman) and then recall some well-known results about them. Let $R$ be a commutative ring with identity and $A$ an associative $R$-algebra. As an $R$-module, $A$ is finitely generated and free. Suppose that there exists an $R$-bilinear map $f:A\times A\rightarrow R$. We say that $f$ is non-degenerate if the determinant of the matrix $(f(a_{i},a_{j}))_{a_{i},a_{j}\in B}$ is a unit in $R$ for some $R$-basis $B$ of $A$. We say that $f$ is associative if $f(ab,c)=f(a,bc)$ for all $a,b,c\in A$. \[2.1\] An $R$-algebra $A$ is called symmetric if there is a non-degenerate associative symmetric bilinear form $f$ on $A$. Define $\tau$ as $\tau(a)=f(a,1)$. We call $\tau$ a symmetrizing trace. Let $A$ be a symmetric algebra with a basis $B=\{a_{i}\mid i=1,\ldots,n\}$ and $\tau$ a symmetrizing trace. Denote by $D=\{D_{i}\mid i=1,\ldots,n\}$ the basis determined by the requirement that $\tau(D_{j}a_{i})=\delta_{ij}$ for all $i, j=1,\ldots,n$. We will call $D$ the dual basis of $B$. For arbitrary $1\leq i,j \leq n$, write $a_{i}a_{j}=\sum\limits_{k}r_{ijk}a_{k}$, where $r_{ijk}\in R$. Fixing a symmetrizing trace $\tau$ for $A$, we have the following lemma. \[2.2\] Let $A$ be a symmetric $R$-algebra with a basis $B$ and the dual basis $D$. Then the following hold: $$a_{i}D_{j}=\sum_{k}r_{kij}D_{k};\,\,\,\,\,D_{i}a_{j}=\sum_{k}r_{jki}D_{k}.$$ We only prove the first equation; the second is shown similarly. Suppose that $a_{i}D_{j}=\sum\limits_{k}r_{k}D_{k}$, where $r_{k}\in R$ for $k=1,\cdots,n$. Left-multiplying both sides of the equation by $a_{k_{0}}$ and then applying $\tau$, we get $\tau(a_{k_{0}}a_{i}D_{j})=r_{k_{0}}$. Clearly, $\tau(a_{k_{0}}a_{i}D_{j})=r_{k_{0},i,j}$. 
This implies that $r_{k_{0}}=r_{k_{0},i,j}$. Given a symmetric algebra, it is natural to consider the relation between two dual bases determined by two different symmetrizing traces. For this we have the following lemma. \[2.3\] Suppose that $A$ is a symmetric $R$-algebra with a basis $B=\{a_{i}\mid i=1, \cdots, n\}$. Let $\tau, \tau'$ be two symmetrizing traces. Denote by $\{D_{i}\mid i=1, \cdots, n\}$ the dual basis of $B$ determined by $\tau$ and $\{D_{i}'\mid i=1, \cdots, n\}$ the dual basis determined by $\tau'$. Then for $1\leq i \leq n$, we have $$D_{i}'=\sum_{j=1}^{n}\tau(a_{j}D_{i}')D_{j}.$$ This is proved by a method similar to that of Lemma \[2.2\]. Graham and Lehrer introduced the so-called cellular algebras in [@GL]; Goodman then weakened the definition in [@G2]. We will adopt Goodman’s definition throughout this paper. [([@G2])]{}\[2.4\] Let $R$ be a commutative ring with identity. An associative unital $R$-algebra is called a cellular algebra with cell datum $(\Lambda, M, C, i)$ if the following conditions are satisfied: [(C1)]{} The finite set $\Lambda$ is a poset. Associated with each ${\lambda}\in\Lambda$, there is a finite set $M({\lambda})$. The algebra $A$ has an $R$-basis $\{C_{S,T}^{\lambda
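The first identity of Lemma \[2.2\] can be checked numerically on a concrete symmetric algebra. The sketch below is our own illustration (not part of the paper): it takes the group algebra of the symmetric group $S_3$ with the symmetrizing trace that reads off the coefficient of the identity element, computes the dual basis from the Gram matrix, and verifies $a_{i}D_{j}=\sum_{k}r_{kij}D_{k}$.

```python
import itertools
import numpy as np

# Group algebra of S_3: basis a_i = permutations of {0,1,2},
# with the symmetrizing trace tau(sum c_g g) = c_e (coefficient of identity).
perms = list(itertools.permutations(range(3)))
n = len(perms)
idx = {p: i for i, p in enumerate(perms)}

def compose(p, q):
    # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

# structure constants: a_i a_j = sum_k r[i,j,k] a_k
r = np.zeros((n, n, n))
for i, p in enumerate(perms):
    for j, q in enumerate(perms):
        r[i, j, idx[compose(p, q)]] = 1.0

tau = np.zeros(n)
tau[idx[(0, 1, 2)]] = 1.0          # identity permutation

G = np.einsum('mik,k->mi', r, tau)  # Gram matrix G[m,i] = tau(a_m a_i)
C = np.linalg.inv(G)                # dual basis: D_j = sum_m C[j,m] a_m

# Lemma 2.2 (first identity): a_i D_j = sum_k r[k,i,j] D_k
for i in range(n):
    for j in range(n):
        lhs = np.einsum('m,mp->p', C[j], r[i])       # coefficients of a_i D_j
        rhs = np.einsum('k,kp->p', r[:, i, j], C)    # coefficients of the sum
        assert np.allclose(lhs, rhs)
print("Lemma 2.2 verified for the group algebra of S_3")
```

The only input is the table of structure constants $r_{ijk}$ and the trace vector; the same check applies verbatim to any finite dimensional symmetric algebra given in this form.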
--- author: - 'by Bill Baritompa, Rainer Löwen, Burkard Polster, and Marty Ross' title: Mathematical Table Turning Revisited --- [Abstract]{} We investigate under which conditions a rectangular table can be placed with all four feet on a ground described by a function $\mathbb R^2\to \mathbb R$. We start by considering highly idealized tables that are just simple rectangles. We prove that given any rectangle, any continuous ground and any point on the ground, the rectangle can be positioned such that all its vertices are on the ground and its center is on the vertical through the distinguished point. This is a mathematical existence result and does not provide a practical way of actually finding a balancing position. An old, simple, beautiful, intuitive and applicable, but not very well known argument guarantees that a square table can be balanced on any ground that is not “too wild”, by turning it on the spot. In the main part of this paper we turn this intuitive argument into a mathematical theorem. More precisely, our theorem deals with rectangular tables each consisting of a solid rectangle as top and four line segments of equal length as legs. We prove that if the ground does not rise by more than $\arctan\left (\frac{1}{\sqrt 2}\right) \approx 35.26^\circ$ between any two of its points, and if the legs of the table are at least half as long as its diagonals, then the table can be balanced anywhere on the ground, without any part of it digging into the ground, by turning the table on the spot. In contrast to our first, purely existential result, the proof of this theorem translates into a practical balancing procedure. Finally, we give a summary of related earlier results, prove a number of related results for tables of shapes other than rectangles, and give some advice on using our results in real life. %&\$\#!!! ========= You sit down at a table and notice that it is wobbling, because it is standing on a surface that is not quite even. What to do? Curse, yes, of course. 
Apart from that, it seems that the only quick fix to this problem is to wedge something under one of the feet of the table to stabilise it. However, there is another simple approach to solving this annoying problem. Just turn the table on the spot! More often than not, you will find a position in which all four legs of the table are touching the ground. This is counterintuitive. So, why does this work? Balancing Mathematical Tables—a Matter of Existence =================================================== In the mathematical analysis of the problem, we will first assume that the ground is the graph of a function $g:\mathbb R^2\to \mathbb R$, and that a [*mathematical table*]{} consists of the four vertices of a rectangle of diameter 2 whose center is on the $z$-axis. What we are then interested in is determining for which choices of the function $g$ can a mathematical table be [*balanced locally*]{}: that is, when can a table be moved such that its center remains on the $z$-axis, and all its vertices end up on the ground. We first observe that it is not always possible to balance a mathematical table locally. Consider, for example, the reflectively symmetric function of the angle $\theta$ about the $z$-axis with $$g(\theta)= \left \{ \begin{array}{l} 2 \quad \mbox{ if } 0\leq \theta < \frac{\pi}{2} \mbox{ or } \pi \leq \theta < \frac{3\pi}{2}, \\ 1 \quad \mbox{ otherwise}. \end{array} \right.$$ So, the ground consists of four quadrants, two at height 1 and two at height 2; see Figure \[cliff\]. It is not hard to see that a square mathematical table cannot be balanced locally on such a clifflike piece of ground. On the other hand, as we show next, a mathematical table can always be balanced locally if the ground is continuous. 
This result is a seemingly undocumented corollary of a theorem by Livesay [@Livesay], which can be phrased as follows: [*For any continuous function $f$ defined on the unit sphere, we can position a given mathematical table with all its vertices on the sphere such that $f$ takes on the same value at all four vertices. *]{} Note that since our mathematical table has diagonals of length 2, its four vertices will be on the unit sphere iff the centers of the table and the sphere coincide. Choose as the continuous function the [*vertical distance*]{} from the ground, $$f:\mathbb S^2 \to \mathbb R:(x,y,z) \mapsto z-g(x,y).$$ Note that here and in everything that follows the vertical distance of a point in space from the ground is really a signed vertical distance; depending on whether the point is above, on or below the ground its vertical distance is positive, zero or negative, respectively. Now, we are guaranteed a position of our rectangle with center at the origin such that all its vertices are the same vertical distance from the ground. This means that we can balance our mathematical table locally by translating it this distance in the vertical direction. Balancing Real Tables...by Turning the Tables ============================================= So, one of our highly idealized tables can be balanced locally on any continuous ground. However, being an existence result, Theorem 1 is less applicable to our real-life balancing act than it appears at first glance. Here are two problems that seem worth pondering: 1. [*Mathematical vs. Real Tables. *]{} A real table consists of four legs and a table top; our theorem only tells us that we can balance the four endpoints of the legs of this real table. However, balancing the whole real table in this position may be physically impossible, since the table top or other parts of the legs may run into the ground. 
To deal with this complication, we define a [*real table*]{} to consist of a solid rectangle with diameters of length 2 as [*top*]{}, and four line segments of equal length as [*legs*]{}. These legs are attached to the top at right angles, as shown in Figure \[wobble\]. The end points of the legs of a real table form its [*associated mathematical table*]{}. We say that a real table is balanced locally if its associated mathematical table is balanced locally, and if no point of the real table is below the ground. 2. [*Balancing by Turning. *]{} A second problem with our analysis so far is that Theorem 1, while guaranteeing a balancing position, provides no practical method for finding it. After all, although we restrict the center of the table to the $z$-axis, there are still four degrees of freedom to play with when we are actually trying to find a balancing position. The following rough argument indicates how, by turning a table on the spot in a certain way, we should be able to locate a balancing position, as long as we are dealing with a square table and a ground that is not “too crazy”. Unlike most other real-world applications of the Intermediate Value Theorem, it seems that this neat argument is not as well-known as it deserves. We have not been able to pinpoint its origin, but from personal experience we know that the argument has been around for at least thirty-five years and that people keep rediscovering it. In terms of proper references in which variations of the argument explicitly appear, we are only aware of [@gardner], [@gardner1], [@gardner2] (Chapter 6, Problem 6), [@Hunzinker], [@Kraft], [@Martin], [@Polster], [@Polster1] and [@Polster2]; the earliest reference in this list, [@gardner], is Martin Gardner’s [*Mathematical Games*]{} column in the May 1973 issue of [*Scientific American*]{}. 
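For a smooth ground, the rough quarter-turn argument can be illustrated numerically. The sketch below is our own simplified illustration (it tracks only the vertex heights of a unit square and ignores the tilt of the table top): rotating the square by a quarter-turn swaps its diagonals, so the difference of the two diagonal height sums changes sign, and the Intermediate Value Theorem guarantees an angle at which the four vertex heights fit a single (tilted) plane. The ground function `g` is an arbitrary smooth example of our own choosing.

```python
import math

def g(x, y):
    # an arbitrary smooth "ground" (illustrative assumption, not from the paper)
    return 0.3 * math.sin(2 * x) + 0.2 * math.cos(3 * y) + 0.1 * x * y

def D(phi):
    # signed difference of the diagonal height sums of a unit square
    # with vertices at angles phi + k*pi/2 on the unit circle
    pts = [(math.cos(phi + k * math.pi / 2), math.sin(phi + k * math.pi / 2))
           for k in range(4)]
    h = [g(x, y) for x, y in pts]
    return (h[0] + h[2]) - (h[1] + h[3])

# a quarter-turn swaps the diagonals, so D(phi + pi/2) = -D(phi):
assert D(0.0) * D(math.pi / 2) <= 0

# bisection locates an angle with D = 0 (the IVT argument made concrete)
a, b = 0.0, math.pi / 2
for _ in range(60):
    m = 0.5 * (a + b)
    if D(a) * D(m) <= 0:
        b = m
    else:
        a = m
print("balancing angle:", a, " residual D:", D(a))
```

The sign flip under a quarter-turn holds for every continuous ground, which is exactly why the argument is robust; only the bisection step uses smoothness for fast convergence.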
Note that an essential ingredient of the argument is the simple fact that a quarter-turn around its centre takes a square into itself—to move the table from the initial position to the end position takes roughly a quarter-turn around the $z$-axis. Closely related well-documented quarter-turn arguments date back almost a century; see, for example, Emch’s proof that any oval contains the vertices of a square in [@Emch] or [@Mayerson], Section 4. Let us now take a closer look at the argument. At first glance, the above argument appears reasonable and, if true, would provide a foolproof method for balancing a square table locally by turning. However, for arbitrary continuous ground functions, it appears just about impossible to turn this intuitive argument into a rigorous proof. In particular, it seems very difficult to suitably model the rotating action, so that the vertical distance of the hovering vertices depends continuously upon the rotation angle, and such that we can always be sure to finish in the end position. As a second problem, it is easy to construct continuous grounds on which real tables cannot be balanced locally. For example, consider a real square table with short legs, together with a wedge-shaped ground made up of two steep half-planes meeting in a ridge along the $x$-axis. Then it is clear that the solid table top hitting the ground will prevent the table
--- abstract: 'We analyse the response of a spatially extended direction-dependent local quantum system, a detector, moving on the Rindler trajectory of uniform linear acceleration in Minkowski spacetime, and coupled linearly to a quantum scalar field. We consider two spatial profiles: (i) a profile defined in the Fermi-Walker frame of an arbitrarily-accelerating trajectory, generalising the isotropic Lorentz-function profile introduced by Schlicht to include directional dependence; and (ii) a profile defined only for a Rindler trajectory, utilising the associated frame, and confined to a Rindler wedge, but again allowing arbitrary directional dependence. For (i), we find that the transition rate on a Rindler trajectory is non-thermal, and dependent on the direction, but thermality is restored in the low and high frequency regimes, with a direction-dependent temperature, and also in the regime of high acceleration compared with the detector’s inverse size. For (ii), the transition rate is isotropic, and thermal in the usual Unruh temperature. We attribute the non-thermality and anisotropy found in (i) to the *leaking* of the Lorentz-function profile past the Rindler horizon.' --- By the Unruh effect, a pointlike detector on a worldline of uniform linear acceleration in Minkowski spacetime, coupled to a quantum field in its Minkowski vacuum, responds thermally, in the Unruh temperature $g/(2\pi)$, where $g$ is the proper acceleration. The acceleration singles out a distinguished direction in space, and an observer with direction-sensitive equipment will in general see a direction-dependent response; however, for the Lorentz-invariant notion of direction-sensitivity introduced in [@Takagi:1985tf], the associated temperature still turns out to be equal to $g/(2\pi)$, independently of the direction. For textbooks and reviews, see [@Birrell:1982ix; @Crispino:2007eb; @Fulling:2014wzx]. In this paper we address the response of uniformly linearly accelerated observers in Minkowski spacetime, operating direction-sensitive equipment of nonzero spatial size. We ask whether the temperature seen by these observers is still independent of the direction. 
The question is nontrivial: while a spatially pointlike detector with a monopole coupling is known to be a good approximation for the interaction between the quantum electromagnetic field and electrons on atomic orbitals in processes where the angular momentum interchange is insignificant [@MartinMartinez:2012th; @Alhambra:2013uja], finite size effects can be expected to have a significant role in more general situations [@DeBievre:2006pys; @Hummer:2015xaa; @Pozas-Kerstjens:2015gta; @Pozas-Kerstjens:2016rsh; @Pozas-Kerstjens:2017xjr; @Simidzija:2018ddw]. Also, the notion of a finite size accelerating body has significant subtlety: while a rigid body undergoing uniform linear acceleration in Minkowski spacetime can be defined in terms of the boost Killing vector, different points on the body have differing values of the proper acceleration, and the body as a whole does not have an unambiguous value of ‘acceleration’. It would be interesting to ask whether the resultant transition rate is thermal at all and, if yes, at what temperature, and how this temperature depends on the body’s size. A related issue is the following: An analysis of a direction dependent point-like detector re-affirms that the Unruh bath is isotropic even though there is a preferred direction in the Rindler frame, the spatial direction along the direction of acceleration [@Takagi:1985tf]. However, analysing drag forces on drifting particles in the Unruh bath reveals through the Fluctuation Dissipation Theorem that the quantum fluctuations in the Unruh bath are not isotropic [@Kolekar:2012sf]. These anisotropies could be relevant for direction dependent spatially extended detectors whose length scales are of the order or greater than the correlation scales associated with the quantum fluctuations. 
We consider the response of spatially extended direction-dependent detectors in uniform linear acceleration in two models of such a detector: (i) a spatial sensitivity profile that generalises the isotropic Lorentz-function considered by Schlicht [@schlicht] to include spatial anisotropy, and (ii) a spatial sensitivity profile defined in terms of the geometry of the Rindler wedge, and explicitly confined to this wedge, following De Bievre and Merkli [@DeBievre:2006pys]. We begin in Section \[schlichtsection\] by briefly reviewing a detector with an isotropic Lorentz-function spatial profile [@schlicht], highlighting the role of a spatial profile as the regulator of the quantum field’s Wightman function, and recalling how the Unruh effect thermality arises for this detector. In Section \[directiondetsection\] we generalise the Lorentz-function spatial profile to include spatial anisotropy, initially for an arbitrarily-accelerated worldline, relying on the Fermi-Walker frame along the trajectory. We then specialise to the Rindler trajectory of uniform linear acceleration. We find that the transition rate is non-thermal, and angle dependent. Thermality is however restored in the low and high frequency regimes, and also in the regime of high acceleration compared with the inverse of the detector’s spatial extent. In Section \[rindlerframesection\] we analyse a profile defined in the Rindler frame of a Rindler trajectory, and confined to the Rindler wedge, following De Bievre and Merkli [@DeBievre:2006pys]. We find that the transition rate is isotropic and thermal at the usual Unruh temperature. In Section \[discsection\] we discuss and resolve the discrepancy of these two outcomes. The key property responsible for the non-thermality and anisotropy for the Lorentz-function profile is that this profile leaks outside the Rindler wedge, past the Rindler horizon. 
The leaking is an unphysical side effect of a detector model with a noncompact spatial profile, and it is unlikely to have a counterpart in spatially extended detectors with a more fundamental microphysical description. We leave the development of such spatially extended detector models to future work. Spatially isotropic Lorentz-function profile\[schlichtsection\] =============================================================== In this section we briefly review Schlicht’s generalisation [@schlicht] of a two-level Unruh-DeWitt detector [@Unruh:1976db; @DeWitt:1979] to a nonzero spatial size. We consider a massless scalar field $\phi$ in four-dimensional Minkowski spacetime, and a two-level quantum system, a detector, localised around a timelike worldline $x(\tau)$, parametrised in terms of the proper time $\tau$. The interaction Hamiltonian reads $H_{int} = c \, m(\tau) \,\chi(\tau) \phi(\tau)$, where $c$ is a coupling constant, $m(\tau)$ is the detector’s monopole moment operator, $\chi(\tau)$ is the switching function that specifies how the interaction is turned on and off, and $\phi(\tau)$ is the spatially smeared field operator. The formula for $\phi(\tau)$ is $$\phi(\tau) = \int d^3\xi \; f_{\epsilon} ({\bm \xi}) \, \phi(x(\tau, {\bm \xi})) \ , \label{smearedoperator}$$ where ${\bm \xi} = (\xi^1, \xi^2, \xi^3)$ stands for the spatial coordinates associated with the local Fermi-Walker transported frame and $x(\tau, {\bm \xi})$ is a spacetime point written in terms of the Fermi-Walker coordinates. The smearing profile function $f_{\epsilon} ({\bm \xi})$ specifies the spatial size and shape of the detector in its instantaneous rest frame. 
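As a consistency check, Schlicht's isotropic Lorentz-function profile $f_{\epsilon}({\bm \xi})= \epsilon/[\pi^2(\xi^{2}+\epsilon^{2})^2]$ integrates to unity over $\mathbb{R}^3$ for every $\epsilon$, as a smearing function should. A minimal numerical sketch of this normalisation (the function name and the tangent substitution are our own illustrative choices):

```python
import numpy as np

# Radial check that the isotropic Lorentz-function profile
#   f_eps(xi) = (1/pi^2) * eps / (|xi|^2 + eps^2)^2
# integrates to 1 over R^3 for any size parameter eps > 0.
def profile_norm(eps, n=200_000):
    # substitute r = eps * tan(theta) to map [0, infinity) onto [0, pi/2)
    theta = np.linspace(0.0, np.pi / 2, n, endpoint=False)
    r = eps * np.tan(theta)
    f = eps / (np.pi**2 * (r**2 + eps**2) ** 2)
    y = 4.0 * np.pi * r**2 * f  # radial integrand in spherical coordinates
    # trapezoid rule on the non-uniform grid
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(r)))

print(profile_norm(0.1), profile_norm(3.0))  # both close to 1
```

The normalisation being independent of $\epsilon$ is what lets $\epsilon$ act purely as a size (and regulator) parameter in the transition rate.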
In linear order perturbation theory, the detector’s transition probability is then proportional to the response function, $${\cal F}(\omega) = \int_{-\infty}^{\infty} du \, \chi(u) \int_{-\infty}^{\infty} ds \, \chi(u -s) e^{- i \omega s} \, W(u,u-s) \ , \label{transprobability}$$ where $\omega$ is the transition energy, $W(\tau,\tau^\prime) = \langle \Psi | \phi(\tau) \phi(\tau^\prime) | \Psi \rangle$ and $|\Psi \rangle$ is the initial state of the scalar field. The choice for the smearing profile function $f_{\epsilon}$ made in [@schlicht] was the three-dimensional isotropic Lorentz-function, $$f_{\epsilon}({\bm \xi})= \frac{1}{\pi^2} \frac{\epsilon}{{(\xi^{2}+\epsilon^{2})}^2} \ , \label{schlichtprofile}$$ where the positive parameter $\epsilon$ of dimension length characterises the effective size. The selling point of the profile function is that it allows the switch-on and switch-off to be made instantaneous; for a strictly
--- abstract: 'In this paper, I address the oscillation probability of $O$(GeV) neutrinos of all active flavours produced inside the Sun and detected at the Earth. Flavours other than electron-type neutrinos may be produced, for example, by the annihilation of WIMPs which may be trapped inside the Sun. In the GeV energy regime, matter effects are important both for the “1–3” system and the “1–2” system, and for different neutrino mass hierarchies. A numerical scan of the multidimensional three-flavour parameter space is performed, “inspired” by the current experimental situation. One important result is that, in the three-flavour oscillation case, $P_{\alpha\beta}\neq P_{\beta\alpha}$ for a significant portion of the parameter space, even if there is no $CP$-violating phase in the MNS matrix. Furthermore, $P_{\mu\mu}$ has a significantly different behaviour from $P_{\tau\tau}$, which may affect expectations for the number of events detected at large neutrino telescopes.' --- In the Standard Model, neutrinos are assumed to be massless. Any evidence for neutrino masses would, therefore, imply physics beyond the Standard Model. Even though the direct experimental determination of a neutrino mass is (probably) far beyond the current experimental reach, experiments have been able to obtain indirect, and recently very strong, evidence for neutrino masses, via neutrino oscillations. The key evidence for neutrino oscillations comes from the angle-dependent flux of atmospheric muon-type neutrinos measured at SuperKamiokande [@atmospheric], combined with a large deviation of the muon-type to electron-type neutrino flux ratio from theoretical predictions. This “atmospheric neutrino puzzle” is best solved by assuming that $\nu_{\mu}$ oscillates into $\nu_{\tau}$ and that the $\nu_e$ does not oscillate. For a recent analysis of all the atmospheric neutrino data see [@atmos_analysis]. 
On the other hand, measurements of the solar neutrino flux [@Cl; @Kamiokande; @GALLEX; @SAGE; @Super-K] have always been plagued by a large suppression of the measured solar $\nu_e$ flux with respect to theoretical predictions [@SSM]. Again, this “solar neutrino puzzle” is best resolved by assuming that $\nu_e$ oscillates into a linear combination of the other flavour eigenstates [@bksreview; @rate_analysis] (for a more conservative analysis of the event rates and the inclusion of the “dark side” of the parameter space, see [@dark_side]). The most recent analysis of the solar neutrino data which includes the mixing of three active neutrino species can be found in [@solar_3]. Neutrino oscillations were first hypothesised by Bruno Pontecorvo in the 1950’s [@Pontecorvo]. The hypothesis of three flavour mixing was first raised by Maki, Nakagawa and Sakata [@MNS]. In light of the solar neutrino puzzle, Wolfenstein [@W] and Mikheyev and Smirnov [@MS] realized that neutrino–matter interactions could affect in very radical ways the survival probability of electron-type neutrinos which are produced in the solar core and detected at the Earth (MSW effect). Since then, significant effort has been devoted to understanding the oscillation probabilities of electron-type neutrinos produced in the Sun. For example, in [@KP_3] the survival probability of solar electron-type neutrinos was discussed in the context of three-neutrino mixing including matter effects, and solutions to the solar neutrino puzzle in this context were studied (for example, in [@KP_3; @MS_3; @solar_3]). In this paper, the understanding of solar neutrino oscillations is extended to the case of other active neutrino species ($\nu_{\mu}$, $\nu_{\tau}$, and antineutrinos) produced in the solar core. 
Even though only electron-type neutrinos are produced by the nuclear reactions which take place in the Sun’s innards, it is well known that, in a number of dark matter models, dark matter particles can be trapped gravitationally inside the Sun, and that the annihilation of these should yield a flux of high energy neutrinos ($E_{\nu}\gtrsim 1$ GeV) of all species which may be detectable at the Earth [@DM_review]. Indeed, this is one of the goals of very large “neutrino telescopes,” such as AMANDA [@Amanda] or BAIKAL [@Baikal]. It is important to understand how neutrino oscillations will affect the expected event rates at these experiments. [^1] The oscillation probability of all neutrino species has, of course, been studied in different contexts, such as in the case of neutrinos produced in the core of supernovae [@supernova] or in the case of neutrinos propagating in constant electron number densities [@barger_etal], including studies for neutrino factories [@nufact]. The case at hand (GeV solar neutrinos) differs significantly from those mentioned above in at least a few of the following respects: source–detector distance, average value and position dependence of the electron number density, and average value and spectrum of the neutrino energy. Neutrino factory studies, for example, are interested in $O$(1000) km base-lines, $O$(10) GeV electron-type and muon-type neutrinos produced via muon decay propagating in roughly constant, Earth-like (matter densities around 3 g/cm$^3$) electron number densities. The paper is organised as follows. In Sec. 2, the well known case of two-flavour oscillations is reviewed in some detail, with special attention paid to neutrinos produced inside the Sun. In Sec. 3 the same discussion is extended to the less familiar case of three-flavour oscillations. Again, special attention is paid to neutrinos produced in the Sun’s core. In Sec. 4 the results presented in Sec.
3 will be analysed numerically, and the three-neutrino multi-dimensional parameter space will be explored. Sec. 5 contains a summary of the results and the conclusions. It is important to comment at this point that one of the big challenges of studying three-flavour oscillations is the multi-dimensional parameter space, composed of three mixing angles, two mass-squared differences, and one complex phase, plus the neutrino energy. For this reason, the discussions presented here will take advantage of the current experimental situation to constrain the parameter space, and of the possibility of producing neutrinos of all species via dark matter annihilations to constrain the neutrino energies to the range from a few to tens of GeV. Two-Flavour Oscillations ======================== In this section, the well studied case of two-flavour oscillations will be reviewed [@general_review]. This is done in order to present the formalism which will be later extended to the case of three-flavour oscillations and describe general properties of neutrino oscillations and of neutrinos produced in the Sun’s core. Generalities ------------ Neutrino oscillations take place because, similar to what happens in the quark sector, neutrino weak eigenstates are different from neutrino mass eigenstates. The two sets are related by a unitary matrix, which is, in the case of two-flavour mixing, parametrised by one mixing angle $\vartheta$. [^2] $$\left(\matrix{\nu_{e} \cr \nu_{x} }\right)= \left(\matrix{U_{e1}&U_{e2}\cr U_{x1}&U_{x2}}\right) \left(\matrix{\nu_{1} \cr \nu_{2} }\right)= \left(\matrix{\cos\vartheta&\sin\vartheta\cr -\sin\vartheta&\cos\vartheta}\right) \left(\matrix{\nu_{1} \cr \nu_{2} }\right),$$ where $\nu_1$ and $\nu_2$ are neutrino mass eigenstates with masses $m_1$ and $m_2$, respectively, and $\nu_x$ is the flavour eigenstate orthogonal to $\nu_e$. 
All physically distinguishable situations can be obtained if $0\leq\vartheta\leq\pi/2$ and $m_1^2\leq m_2^2$ or $0\leq\vartheta\leq\pi/4$ and no constraint is imposed on the masses-squared. In the case of oscillations in vacuum, it is trivial to compute the probability that a neutrino produced in a flavour state $\alpha$ is detected as a neutrino of flavour $\beta$, assuming that the neutrinos are ultrarelativistic and propagate with energy $E_{\nu}$: $$P_{\alpha\beta}=|U_{\beta1}|^2|U_{\alpha1}|^2+|U_{\beta2}|^2|U_{\alpha2}|^2+2Re\left( U_{\beta1}^*U_{\beta2}U_{\alpha1}U_{\alpha2}^* e^{i\frac{\Delta m^2x}{2E_{\nu}}}\right).$$ Here $\Delta m^2\equiv m^2_{2}-m_1^2$ is the mass-squared difference between the two mass eigenstates and $x$ is the distance from the detector to the source. It is trivial to note that $P
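As a quick numerical check (ours, not part of the paper; the parameter values below are placeholders), the two-flavour vacuum expression above can be implemented term by term, verifying unitarity and the familiar $\sin^2 2\vartheta \, \sin^2\!\left(\Delta m^2 x/4E_{\nu}\right)$ form of the transition probability:

```python
import numpy as np

def p_osc(alpha, beta, theta, phase):
    """Two-flavour vacuum oscillation probability P_{alpha beta}.

    alpha, beta index the flavour states (0 = nu_e, 1 = nu_x);
    phase stands for Delta m^2 x / (2 E_nu) in natural units.
    """
    # Mixing matrix parametrised by the angle theta.
    U = np.array([[np.cos(theta), np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]])
    # Interference term of the formula, written exactly as in the text.
    cross = (np.conj(U[beta, 0]) * U[beta, 1]
             * U[alpha, 0] * np.conj(U[alpha, 1]) * np.exp(1j * phase))
    return (abs(U[beta, 0])**2 * abs(U[alpha, 0])**2
            + abs(U[beta, 1])**2 * abs(U[alpha, 1])**2
            + 2 * cross.real)

theta, phase = 0.6, 1.3   # placeholder values, not a fit to data
```

For real mixing matrices the probabilities for a fixed initial flavour sum to one, and $P_{ex}=P_{xe}$, consistent with the two-flavour discussion above.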
--- abstract: 'Oscillatory double-diffusive convection (ODDC, more traditionally called semiconvection) is a form of linear double-diffusive instability that occurs in fluids that are unstably stratified in temperature (Schwarzschild unstable), but stably stratified in chemical composition (Ledoux stable). This scenario is thought to be quite common in the interiors of stars and giant planets, and understanding the transport of heat and chemical species by ODDC is of great importance to stellar and planetary evolution models. Fluids unstable to ODDC have a tendency to form convective thermo-compositional layers which significantly enhance the fluxes of temperature and chemical composition compared with microscopic diffusion. Although a number of recent studies have focused on studying properties of both layered and non-layered ODDC, few have addressed how additional physical processes such as global rotation affect its dynamics. In this work we first study how rotation affects the linear stability properties of ODDC. Using direct numerical simulations we then analyze the effect of rotation on properties of layered and non-layered ODDC, and study how the angle of the rotation axis with respect to the direction of gravity affects layering. We find that rotating systems can be broadly grouped into two categories, based on the strength of rotation. Qualitative behavior in the more weakly rotating group is similar to non-rotating ODDC, but strongly rotating systems become dominated by vortices that are invariant in the direction of the rotation vector and strongly influence transport. We find that whenever layers form, rotation always acts to reduce thermal and compositional transport.' --- These two aspects of stratification (thermally unstable, compositionally stable) are common in stellar and planetary interiors. Fluids stratified in this way are, by definition, stable to the sort of overturning motion that occurs in standard convection.
However, @walin1964 and @kato1966 showed that, given the right conditions, infinitesimal perturbations can trigger an instability which takes the form of over-stable gravity waves. This instability, often known as semiconvection but more accurately described as oscillatory double-diffusive convection (ODDC) after @spiegel1969, can lead to significant augmentation of the turbulent transport of temperature and chemical species through a fluid, and is therefore an important process to consider in evolution models of stars and giant planets. Double-diffusive fluids with the kind of stratification described here were first discussed in the geophysical scientific community in the context of volcanic lakes [@Newman1976] and the polar ocean [@Timmermans2003; @toole2006]. There, they became well-known for their propensity to form density staircases consisting of convectively mixed layers separated by stably stratified interfaces. As a result, layered convection is usually studied in experiments where a layered configuration is imposed as an initial condition, rather than following naturally from the growth and non-linear saturation of ODDC. Thermo-compositional layering was first studied in laboratory experiments involving salt water [@Turner1965; @lindenshirtcliffe1978], or aqueous sugar/salt solutions [@shirtcliffe1973], that were initialized with layers. The results from these studies were then used to inform studies of double-diffusive fluids in stars [@langer1985; @merryfield1995] and giant planets [@stevenson1982; @leconte2012; @nettelmann2015]. However, recent studies have taken a different approach to characterize the dynamics of double-diffusive layering. Advances in high-performance computing have made it feasible to study ODDC using 3D numerical simulations. @rosenblum2011 discovered that layers may form spontaneously in a linearly unstable system, and proposed a mechanism to explain how layer formation occurs. 
This mechanism, known as the $\gamma-$instability, was originally put forward by @radko2003mlf to explain layer formation in fingering convection but was found to apply to ODDC as well. The simulations of @rosenblum2011 also demonstrated the existence of a non-layered phase of ODDC which had been neglected by nearly all previous studies except that of @langer1985, who proposed a model for mixing of chemical species by semiconvection that ignores layering entirely [see reviews by @merryfield1995; @Moll2016]. Next, @Mirouh2012 identified the parameter regimes in which layers do and do not form by the $\gamma-$instability in ODDC. @Wood2013 then studied the thermal and compositional fluxes through layered ODDC, and @Moll2016 studied the transport characteristics through non-layered ODDC. In each of these studies, a fairly simple model was used in which the only body force considered was gravity. It is natural to wonder how additional physical mechanisms may affect the long term dynamics of ODDC. Global rotation is one such mechanism that is particularly relevant to the gas giant planets in our own solar system due to their short rotation periods ($\sim 9.9$ hours for Jupiter and $\sim 10.7$ hours for Saturn). It may also be relevant to rapidly rotating stars. There have been some recent studies of rotating layered convection in double-diffusive fluids, but only for the geophysical parameter regime [@CarpTimm2014] in conditions that are not unstable to ODDC (or to the $\gamma-$instability). In this work we study the effect of global rotation on the linear stability properties and long-term dynamics associated with ODDC. In Section \[sec:mathMod\] we introduce our mathematical model and in Section \[sec:LinStab\] we study how rotation affects its linear stability properties. We analyze the impact of Coriolis forces on the formation of thermo-compositional layers in Section \[sec:thetaZero\] by studying a suite of simulations with parameter values selected to induce layer formation in non-rotating ODDC.
In Section \[sec:diffParams\] we show results from two other sets of simulations at different values of the diffusivities and of the background stratification and study how rotation affects the dynamics of the non-layered phase of ODDC. In Section \[sec:incSims\] we study the effect of colatitude on layer formation. Finally, in Section \[sec:conclusion\] we discuss our results and present preliminary conclusions. Mathematical Model {#sec:mathMod} ================== The basic model assumptions for rotating ODDC are similar to those made in previous studies of the non-rotating systems [@rosenblum2011; @Mirouh2012; @Wood2013; @Moll2016]. As in previous work, we consider a domain that is significantly smaller than a density scale height, and where flow speeds are significantly smaller than the sound speed of the medium. This allows us to use the Boussinesq approximation [@spiegelveronis1960] and to ignore the effects of curvature. We consider a 3D Cartesian domain centered at radius $r=r_0$, and oriented in such a way that the $z$-axis is in the radial direction, the $x$-axis is aligned with the azimuthal direction, and the $y$-axis is aligned with the meridional direction. We also assume constant background gradients of temperature, $T_{0z}$, and chemical composition, $\mu_{0z}$, over the vertical extent of the box, which are defined as follows: $$\begin{aligned} T_{0z} = \frac{\partial T}{\partial r} = \frac{T}{p} \frac{\partial p}{\partial r} \nabla \, , \nonumber \\ \mu_{0z} =\frac{\partial \mu}{\partial r} = \frac{\mu}{p} \frac{\partial p}{\partial r} \nabla_{\mu} \, ,\end{aligned}$$ where all the quantities are taken at $r =r_0$. 
Here, $p$ denotes pressure, $T$ is temperature, $\mu$ is the mean molecular weight, and $\nabla$ and $\nabla_{\mu}$ have their usual astrophysical definitions: $$\nabla = \frac{d \ln T}{d \ln p} \mbox{ , } \nabla_\mu = \frac{d \ln \mu}{d \ln p} \, \mbox{ at } r = r_0 \, .$$ We use a linearized equation of state in which perturbations to the background density profile, $\tilde{\rho}$, are given by $$\frac{\tilde{\rho}}{\rho_0} = -\alpha \tilde{T} + \beta \tilde{\mu} \, ,$$ where $\tilde{T}$, and $\tilde{\mu}$ are perturbations to the background profiles of temperature and chemical composition, respectively, and $\rho_0$ is the mean density of the domain. The coefficient of thermal expansion, $\alpha$, and of compositional contraction, $\beta$, are defined as $$\begin{aligned} \alpha &= &-\frac{1}{\rho_0} \left.\frac{\partial \rho}{\partial T}\right|_{p,\mu} \, , \nonumber \\ \beta &= &\frac{1}{\rho_0} \left.\frac{\partial \rho}{\partial \mu}\right|_{p,T} \, .\end{aligned}$$ We take the effect of rotation into account by assuming that the rotation vector is given by: $$\label{eq:RotAxis} \mathbf{\Omega} = \left| \mathbf{\Omega} \right| \left( 0,\sin{\theta},\cos{\theta} \right) \, ,$$
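To make these definitions concrete, here is a minimal numerical sketch (ours, not from the paper; all parameter values are placeholders) of the two ingredients just introduced: the linearized equation of state and the tilted rotation vector for a given colatitude $\theta$.

```python
import numpy as np

def rotation_vector(omega_mag, theta):
    """Rotation vector Omega = |Omega| (0, sin(theta), cos(theta)) for
    colatitude theta (theta = 0: rotation axis aligned with gravity)."""
    return omega_mag * np.array([0.0, np.sin(theta), np.cos(theta)])

def density_perturbation(alpha, beta, dT, dmu):
    """Linearized equation of state: rho~/rho0 = -alpha*T~ + beta*mu~."""
    return -alpha * dT + beta * dmu
```

Under this sign convention a warmer parcel ($\tilde{T}>0$, $\tilde{\mu}=0$) is lighter, and changing $\theta$ only tilts $\mathbf{\Omega}$ without changing its magnitude.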
--- abstract: 'In this paper, the dynamics of a modified Leslie-Gower predator-prey system with two delays and diffusion is considered. By calculating stability switching curves, the stability of positive equilibrium and the existence of Hopf bifurcation and double Hopf bifurcation are investigated on the parametric plane of two delays. Taking two time delays as bifurcation parameters, the normal form on the center manifold near the double Hopf bifurcation point is derived, and the unfoldings near the critical points are given. Finally, we obtain the complex dynamics near the double Hopf bifurcation point, including the existence of quasi-periodic solutions on a 2-torus, quasi-periodic solutions on a 3-torus, and strange attractors.' author: - 'Yanfei Du$^{1,2}$' - 'Ben Niu$^{2}$' - 'Junjie Wei$^{2}$' title: 'Two delays induce Hopf bifurcation and double Hopf bifurcation in a diffusive Leslie-Gower predator-prey system' --- **Diffusive predator-prey models with delays have been investigated widely, and the delay induced Hopf bifurcation analysis has been well studied. However, bifurcation analysis of predator-prey models with two simultaneously varying delays is not well established. Neither a Hopf bifurcation theorem with two parameters nor the derivation of the normal form for double Hopf bifurcation induced by two delays has been proposed in the literature. In this paper, we investigate a diffusive Leslie-Gower model with two delays, and carry out Hopf and double Hopf bifurcation analysis of the model. Applying the method of analyzing characteristic equations with two delays, we get the stability switching curves and the crossing direction, after which we give the Hopf bifurcation theorem in the two-parameter plane for the first time. Under certain conditions, the intersections of two stability switching curves are double Hopf bifurcation points.
To figure out the dynamics near the double Hopf bifurcation point, we calculate the normal form on the center manifold. The derivation process of normal form we use in this paper can be extended to other models with two delays, one delay, or without delay.** Here $u$ and $v$ denote the population densities of the prey and the predator, respectively. $r_1$ and $r_2$ are the intrinsic growth rates for prey and predator, respectively. $K$ is the environmental carrying capacity for prey population. $a$ is the per capita capturing rate of prey by a predator during unit time. $\frac{v}{\gamma u}$ is the Leslie-Gower term with carrying capacity of the predator $\gamma u$, which means that the carrying capacity is proportional to the population size of the prey, and $\gamma$ is referred to as a measure of the quality of the prey as food for the predator. Since then, various researches on this model and modified models have been carried out [@M.; @Aziz; @J.; @Collings; @P.; @Feng; @Y.; @Ma; @J.; @Zhou; @Yuan; @S.; @L.1; @Yuan; @S.; @L.2]. Refuges have important effects on the coexistence of predator and prey, reducing the chance of extinction due to predation. Chen et al. [@F.; @Chen] incorporated a refuge protecting $mu$ of the prey into the Leslie-Gower system, which means that the remaining $(1-m)u$ of the prey is available to the predator. They then studied the stability of the resulting system. Time delays are ubiquitous in predator-prey systems. It seems that time delays play an important role in the stability of species densities. Studies have been carried out to figure out the effect of delays on predator-prey systems. May [@R.M. ; @May] considered the feedback time delay in prey growth, and the term $r_1u(t)(1-\frac{u(t-\tau)}{K})$ leads to the well-known delayed logistic equation. Another type of time delay was introduced to the negative feedback of the predator’s density in the Leslie-Gower model in Refs. [@Yuan; @R.; @A.F. ; @Nindjin], which denotes the time taken for digestion of the prey. Liu et al.
[@Leslie] considered both delays mentioned above, and investigated a modified Leslie-Gower predator-prey system with two delays described by the following system: $$\label{odepredator} \left\lbrace \begin{array}{lll} \dot{u}(t) &=& r_1u(t)[ 1-\frac{u(t-\tau_1)}{K}] -a(1-m)u(t)v(t), \\ \dot{v}(t) &=&r_2v(t)[ 1-\frac{v(t-\tau_2)}{\gamma(1-m)u(t-\tau_2)}] , \\ \end{array} \right.$$ with the initial conditions $$\label{initial} \begin{array}{l} (\varphi_{1},\varphi_2)\in\textbf{C}([-\tau,0],\mathbb{R}_+^2),\varphi_i(0)>0,i=1,2, \end{array}$$ where $\tau_1$ is the feedback time delay in prey growth, $\tau_2$ is the feedback time delay in predator growth, and we define $\tau={\rm max}\{\tau_1,\tau_2\}$. For systems with two delays, the general approach is to fix one delay and vary the other, or to let $\tau_1+\tau_2=\tau$ [@Song]. Few works treat the two delays as varying simultaneously. To do this, Gu et al. [@Gu; @K] analyzed the characteristic quasipolynomial $$p(s)=p_0(s)+p_1(s)e^{-\tau_1s}+p_2(s)e^{-\tau_2s},$$ where $$p_l(s)=\sum_{k=0}^np_{lk}s^k,$$ and provided a detailed study of the stability crossing curves, on which the characteristic quasipolynomial has at least one imaginary zero, and of the crossing direction. Lin and Wang [@Lin; @X] considered the following characteristic functions $$D(\lambda;\tau_1,\tau_2)= P_{0}(\lambda)+P_{1}(\lambda)e^{-\lambda\tau_1}+P_{2}(\lambda)e^{-\lambda\tau_2}+P_{3}(\lambda)e^{-\lambda(\tau_1+\tau_2)}.$$ They derived an explicit expression for the stability switching curves in the $(\tau_1, \tau_2)$ plane, and gave a criterion to determine switching directions. Since the preys and predators are distributed inhomogeneously in different locations, diffusion should be taken into account in more realistic ecological models.
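To make the role of such quasipolynomials concrete, the following sketch (our toy single-delay instance with $p_2=0$, not the model of this paper) evaluates $p(s)$ numerically and verifies a purely imaginary zero at an analytically known crossing point.

```python
import numpy as np

def quasipoly(s, tau1, tau2, p0, p1, p2):
    """p(s) = p0(s) + p1(s) e^{-tau1 s} + p2(s) e^{-tau2 s},
    with each p_l given as numpy polynomial coefficients (highest first)."""
    return (np.polyval(p0, s)
            + np.polyval(p1, s) * np.exp(-tau1 * s)
            + np.polyval(p2, s) * np.exp(-tau2 * s))

# Toy example: p(s) = s + a + b e^{-tau s}. For b > a > 0, a root s = i*omega
# sits on the imaginary axis at omega = sqrt(b^2 - a^2),
# tau* = arccos(-a/b) / omega.
a, b = 1.0, 2.0
omega = np.sqrt(b**2 - a**2)
tau_star = np.arccos(-a / b) / omega
```

Scanning such zeros over $(\tau_1,\tau_2)$ is precisely what traces out the stability switching curves discussed above.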
To reveal new phenomena caused by the introduction of an inhomogeneous spatial environment, Du and Hsu [@Y.; @Du] considered a diffusive predator-prey model $$\label{diffusion du} \left\{ \begin{array}{ll} \dfrac{\partial u(x,t)} {\partial t}= d_1\Delta u(x,t)+\lambda u(x,t)-\alpha u(x,t)^2-\beta u(x,t)v(x,t),&x\in \Omega, t>0,\\ \dfrac{\partial v(x,t)}{\partial t }= d_2\Delta v(x,t)+\mu v(x,t)[1-\delta \frac{v(x,t)}{u(x,t)}],&x\in \Omega, ~t>0, \\ \dfrac{\partial u(x,t)} {\partial \nu}= 0,~~\dfrac{\partial v(x,t)} {\partial \nu}=0, & x\in \partial \Omega, t>0.\\ \end{array} \right.$$ The Neumann boundary condition, with $\nu$ the outward unit normal on $\partial\Omega$, means that no species can pass across the boundary of $\Omega$. Spatially inhomogeneous patterns can also be observed in such models. Motivated by the previous work, we consider the following modified Leslie-Gower predator-prey model with diffusion and Neumann boundary conditions $$\label{diffusion predator} \left\{ \begin{array}{l} \begin{array}{l} \dfrac{\partial u(x,t)} {\partial t}= d_1\Delta u
--- abstract: 'We initiate the development of a horizon-based initial (or rather final) value formalism to describe the geometry and physics of the near-horizon spacetime: data specified on the horizon and a future ingoing null boundary determine the near-horizon geometry. In this initial paper we restrict our attention to spherically symmetric spacetimes made dynamic by matter fields. We illustrate the formalism by considering a black hole interacting with a) inward-falling, null matter (with no outward flux) and b) a massless scalar field. In the first case the near-horizon geometry follows directly from the specified data. For the more involved case of the scalar field we analytically investigate the near slowly evolving horizon regime and propose a numerical integration for the general case.' author: - Sharmila Gunasekaran - Ivan Booth bibliography: - 'ivp.bib' title: Horizons as boundary conditions in spherical symmetry --- Introduction ============ This paper begins an investigation into what horizon dynamics can tell us about external black hole physics. At first thought this might seem obvious: if one watches a numerical simulation of a black hole merger and sees a post-merger horizon ringdown (see for example [@sxs]) then it is natural to think of that oscillation as a source of emitted gravitational waves. However this cannot be the case. Neither event nor apparent horizons can actually send signals to infinity: apparent horizons lie inside event horizons which in turn are the boundary for signals that can reach infinity [@Hawking:1973uf]. It is not horizons themselves that interact but rather the “near horizon” fields. This idea was (partially) formalized as a “stretched horizon” in the membrane paradigm [@Thorne:1986iy]. Then the best that we can hope for from horizons is that they act as a proxy for the near horizon fields, with horizon evolution reflecting some aspects of their dynamics. Even this proxy role has limits: Robinson-Trautman spacetimes (see for example [@Griffiths:2009dfa]) demonstrate that this correlation cannot be perfect.
In those spacetimes there can be outgoing gravitational (or other) radiation arbitrarily close to an isolated (equilibrium) horizon [@Ashtekar:2000sz]. Hence our goal is two-fold: both to understand the conditions under which a correlation will exist and to learn precisely what information it contains. It is worth recalling how the boundary of a black hole is defined. The classical definition of a black hole as the complement of the causal past of future null infinity [@Hawking:1973uf] is essentially global and so defines a black hole spacetime rather than a black hole *in* some spacetime. However there are also a range of geometrically defined black hole boundaries based on outer and/or marginally trapped surfaces that seek to localize black holes. These include apparent[@Hawking:1973uf], trapping[@Hayward:1993wb], isolated [@Ashtekar:1998sp; @Ashtekar:1999yj; @Ashtekar:2000sz; @PhysRevD.49.6467] and dynamical [@Ashtekar:2003hk] horizons as well as future holographic screens [@Bousso:2015qqa]. These quasilocal definitions of black holes have successfully localized black hole mechanics to the horizon[@Ashtekar:1998sp; @Ashtekar:1999yj; @Ashtekar:2003hk; @Booth:2003ji; @Bousso:2015qqa; @Hayward:1993wb] and been particularly useful in formalizing what it means for a (localized) black hole to evolve or be in equilibrium. They are used in numerical relativity not only as excision surfaces (see, for example the discussions in [@Baumgarte:2010ndz; @Thornburg:2006zb]) but also in interpreting physics (for example [@Dreyer:2002mx; @Cook:2007wr; @Chu:2010yu; @Jaramillo:2011re; @Jaramillo:2011rf; @Jaramillo:2012rr; @Rezzolla:2010df; @Lovelace:2014twa; @Gupta:2018znn; @Owen:2017yaj]). In this paper we work to quantitatively link horizon dynamics to observable black hole physics. To establish an initial framework and build intuition we for now restrict our attention to spherically symmetric marginally outer trapped tubes (MOTTs) in similarly symmetric spacetimes. Matter fields are included to drive the dynamics.
Our primary approach is to take horizon data as a (partial) final boundary condition that is used to determine the fields in a region of spacetime in its causal past. In particular these boundary conditions constrain the geometry and physics of the associated “near horizon” spacetime. The main application that we have in mind is interpreting the physics of evolving horizons that have been generated by either numerical simulations or theoretical considerations. Normally, data on a MOTT by itself is not sufficient to specify any region of the external spacetime. As shown in Fig. \[hd1\], even for a spacelike MOTT (a dynamical horizon) the region determined by a standard (3+1) initial value formulation would lie entirely within the event horizon. More information is needed to determine the near-horizon spacetime and hence in this paper we work with a characteristic initial value formulation [@MR1032984; @doi:10.1063/1.1724305; @Winicour:2012znc; @Winicour:2013gha; @Madler:2016xju] where extra data is specified on a null surface $\mathcal{N}$ that is transverse to the horizon (Fig. \[hd2\]). Intuitively the horizon records inward-moving information while $\mathcal{N}$ records the outward-moving information. Together they are sufficient to reconstruct the spacetime. There is an existing literature that studies spacetime near horizons; however, it does not exactly address this problem. Most works focus on isolated horizons. [@Li:2015wsa] and [@Li:2018knr] examine spacetime near an isolated extremal horizon as a Taylor series expansion off the horizon while [@Krishnan:2012bt] and [@Lewandowski:2018khe] study spacetime near more general isolated horizons but in a characteristic initial value formulation with the extra information specified on a transverse null surface. [@Booth:2012xm] studied both the isolated and dynamical case though again as a Taylor series expansion off the horizon.
In the case of the Taylor expansions, as one goes to higher and higher orders one needs to know higher and higher order derivatives of metric quantities at the horizon to continue the expansion. While the current paper instead investigates the problem as a final value problem, it otherwise closely follows the notation of and uses many results from [@Booth:2012xm]. It is organized as follows. We introduce the final value formulation of spherically symmetric general relativity in Sec.\[sec:formulation\]. We illustrate this for infalling null matter in \[sec:matter modelsI\] and then the much more interesting massless scalar field in \[sec:matter modelsII\]. We conclude with a discussion in Sec.\[sec:discussion\]. Formulation {#sec:formulation} =========== Coordinates and metric ---------------------- We work in a spherically symmetric spacetime (${\cal M}, g$) and a coordinate system whose non-angular coordinates are $\rho$ (an ingoing affine parameter) and $v$ (which labels the ingoing null hypersurfaces and increases into the future). Hence, $g_{\rho \rho} =0$ and the curves tangent to the future-oriented inward-pointing $$N = \frac{\partial}{\partial \rho} \label{E1}$$ are null. We then scale $v$ so that $\mathcal{V}= \frac{\partial}{\partial v}$ satisfies $$\mathcal{V} \cdot N = -1. \label{E2}$$ One coordinate freedom still remains: the scaling of the affine parameter on the individual null geodesics $$\tilde{\rho} = f(v) \rho \, . \label{gaugefree}$$ In subsection \[physconf\] we will fix this freedom by specifying how $N$ is to be scaled along the $\rho = 0$ surface $\Sigma$ (which we take to be a black hole horizon). ![Coordinate system.[]{data-label="fig:3p1"}](cdsys.pdf) Next we define the future-oriented outward-pointing null normal to the spherical surfaces $S_{(v,\rho)}$ as $\ell^a$ and scale so that $$\begin{aligned} \ell \cdot N = -1 \label{affine2} \, .
\end{aligned}$$ With this choice the four-metric $g_{ab}$ and induced two-metric $\tilde{q}_{ab}$ on the $S_{(v,\rho)}$ are related by $$g^{ab} = \tilde{q}^{ab} - \ell^a N^b - N^a \ell^b \, . \label{decomp}$$ Further for some function $C$ we can write $${\cal V} = \ell - C N \, . \label{crel1}$$ The coordinates and normal vectors are depicted in Fig.\[fig:3
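The normalization and decomposition above can be checked in a simple mock-up (ours, in flat space rather than the dynamical metric of the paper): two cross-normalized radial null vectors with $\ell \cdot N = -1$ recover a purely angular two-metric.

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric, signature (-,+,+,+)
ell = np.array([1.0, 1.0, 0.0, 0.0]) / np.sqrt(2.0)   # outgoing null vector
N = np.array([1.0, -1.0, 0.0, 0.0]) / np.sqrt(2.0)    # ingoing null vector

def dot(a_vec, b_vec):
    """Inner product g_{ab} a^a b^b."""
    return a_vec @ g @ b_vec

# Inverting g^{ab} = q^{ab} - l^a N^b - N^a l^b for the induced two-metric;
# here the inverse metric has the same components as g itself.
q_inv = np.linalg.inv(g) + np.outer(ell, N) + np.outer(N, ell)
```

The recovered $\tilde{q}^{ab}$ is $\mathrm{diag}(0,0,1,1)$: it annihilates the two null directions and acts as the identity on the transverse (angular) directions, as the decomposition requires.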
--- abstract: 'A classical recursive construction for mutually orthogonal latin squares (MOLS) is shown to hold more generally for a class of permutation codes of length $n$ and minimum distance $n-1$. When such codes of length $p+1$ are included as ingredients, we obtain a general lower bound $M(n,n-1) \ge n^{1.079}$ for large $n$, gaining a small improvement on the guarantee given from MOLS.' Throughout, let $n$ be a positive integer. The *Hamming distance* between two permutations $\sigma, \tau \in \mathcal{S}_n$ is the number of non-fixed points of $\sigma \tau^{-1}$, or, equivalently, the number of disagreements when $\sigma$ and $\tau$ are written as words in single-line notation. For example, $1234$ and $3241$ are at distance three. A *permutation code* PC$(n,d)$ is a subset $\Gamma$ of $\mathcal{S}_n$ such that the distance between any two distinct elements of $\Gamma$ is at least $d$. Language of classical coding theory is often used: elements of $\Gamma$ are *words*, $n$ is the *length* of the code, and the parameter $d$ is the *minimum distance*, although for our purposes it is not important whether distance $d$ is ever achieved. Permutation codes are also called *permutation arrays* by some authors, where the words are written as rows of a $|\Gamma| \times n$ array. Permutation codes were studied as early as the 1970s; see [@FD]. After a decade or so of inactivity on the topic, permutation codes enjoyed a resurgence due to various applications. See [@CCD; @H; @SM] for surveys of construction methods and for more on the coding applications. For positive integers $n \ge d$, we let $M(n,d)$ denote the maximum size of a PC$(n,d)$. It is easy to see that $M(n,1)=M(n,2)=n!$, and that $M(n,n)=n$. The Johnson bound $M(n,d) \le n!/(d-1)!$ holds. The alternating group, which is sharply $(n-2)$-transitive, shows $M(n,3)=n!/2$. More generally, a sharply $k$-transitive subgroup of $\mathcal{S}_n$ furnishes a permutation code of (maximum possible) size $n!/(n-k)!$.
For instance, the Mathieu groups $M_{11}$ and $M_{12}$ are maximum PC$(11,7)$ and PC$(12,7)$, respectively. On the other hand, determination of $M(n,d)$ absent any algebraic structure appears to be a difficult problem. As an example, it is only presently known that $78 \le M(7,5) \le 134$; see [@BD; @JLOS] for details. A table of bounds on $M(n,d)$ can be found in [@SM]. In [@CKL], it was shown that the existence of $r$ mutually orthogonal latin squares (MOLS) of order $n$ yields a permutation code PC$(n,n-1)$ of size $rn$. Although construction of MOLS is challenging in general, the problem is at least well studied. Lower bounds on MOLS can be applied to the permutation code setting, though it seems for small $n$ not a prime power that the code sizes can be much larger than the MOLS guarantee. For example, $M(6,5) =18$ despite the nonexistence of orthogonal latin squares of order six, and $M(10,9) \ge 49$ [@JS], when no triple of MOLS of order 10 is known. On the other hand, it is straightforward to see [@CKL] that $M(n,n-1)=n(n-1)$ implies existence of a full set of MOLS (equivalently a projective plane) of order $n$, so any nontrivial upper bound on permutation codes would have major impact on design theory and finite geometry. This connection is explored in more detail in [@BM]. Permutation codes are used in [@JS2] for some recent MOLS constructions. Let $N(n)$ denote the maximum number of MOLS of order $n$. Chowla, Erdős and Strauss showed in [@CES] that $N(n)$ goes to infinity. Wilson [@WilsonMOLS] found a construction strong enough to prove $N(n) \ge n^{1/17}$ for sufficiently large $n$. Subsequently, Beth [@Beth] tightened some number theory in the argument to lift the exponent to $1/14.8$. In terms of permutation codes, then, one has $M(n,n-1) \ge n^{1+1/14.8}$ for sufficiently large $n$. Our main result in this note gives a small improvement to the exponent. \[main\] $M(n,n-1) \ge n^{1+1/12.533}$ for sufficiently large $n$.
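The MOLS-based construction of [@CKL] is easiest to see when $n$ is prime: the $n-1$ classical cyclic MOLS have as rows the affine permutations $x \mapsto ax+b \pmod n$. The sketch below (our illustration, not from the note) builds the resulting PC$(n,n-1)$ of size $n(n-1)$ directly.

```python
def affine_code(n):
    """For prime n: the affine permutations x -> a*x + b (mod n), a != 0.
    Two such maps with a1 != a2 agree in exactly one position (the solution
    of (a1 - a2)x = b2 - b1), so the code has minimum distance n-1;
    maps with a1 == a2 and b1 != b2 disagree everywhere."""
    return [tuple((a * x + b) % n for x in range(n))
            for a in range(1, n) for b in range(n)]

def hamming(s, t):
    """Hamming distance between two permutations in single-line notation."""
    return sum(si != ti for si, ti in zip(s, t))
```

For $n=5$ this yields $20$ distinct words at pairwise distance at least $4$, matching the $rn$ count with $r=n-1$.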
The proof is essentially constructive, although it requires, as does [@Beth; @WilsonMOLS], the selection of a ‘small’ integer avoiding several arithmetic progressions. This is guaranteed by the Buchstab sieve; see [@Ivt]. Apart from this number theory, our construction method generalizes a standard design-theoretic construction for MOLS to permutation codes possessing a small amount of additional structure. Some setup for our methodology is given in the next two sections, and the proof of Theorem \[main\] is given in Section \[proof\] as a consequence of the somewhat stronger Theorem \[idem-bound\]. We conclude with a discussion of some possible next directions for this work. Idempotent permutation codes and latin squares ============================================== Let $[n]:=\{1,2,\dots,n\}$. Recall that a *fixed point* of a permutation $\pi:[n]\rightarrow [n]$ is an element $i \in [n]$ such that $\pi(i)=i$. In single-line notation, this says symbol $i$ is in position $i$. Of course, for the identity permutation $\iota$, every element is a fixed point. Let us say that a permutation code is *idempotent* if each of its words has exactly one fixed point. As some justification for the definition, recall that a latin square $L$ of order $n$ is idempotent if the $(i,i)$-entry of $L$ equals $i$ for each $i \in [n]$. So, a maximum PC$(n,n)$ is idempotent if and only if the ‘corresponding’ latin square is idempotent. We are particularly interested in idempotent PC$(n,n-1)$ in which every symbol is a fixed point of the same number, say $r$, of words; these we call $r$-*regular* and denote by $r$-IPC$(n,n-1)$. Permutation codes with extra ‘distributional’ properties have been investigated before. For example, ‘$k$-uniform’ permutation arrays are introduced in [@DV], while ‘$r$-balanced’ and ‘$r$-separable’ permutation arrays are considered in [@DFKW]. However, our definition is seemingly new, or at least not obviously related to these other conditions.
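These definitions are easy to instantiate when $n$ is prime (our illustration, not from the note): restricting the affine permutations $x \mapsto ax+b \pmod n$ to $a \notin \{0,1\}$ leaves each word with the single fixed point $x = b/(1-a)$, and each symbol fixed by exactly $n-2$ words, so they form an $(n-2)$-regular IPC$(n,n-1)$.

```python
def affine_ipc(n):
    """For prime n: words x -> a*x + b (mod n) with a not in {0, 1}.
    Each word fixes only x = b/(1-a) (unique since a != 1 and n is prime),
    and each symbol i is fixed by one word per choice of a, i.e. n-2 words,
    giving an (n-2)-regular IPC(n, n-1) of size n(n-2)."""
    return [tuple((a * x + b) % n for x in range(n))
            for a in range(2, n) for b in range(n)]

def fixed_points(word):
    """Indices i with word[i] == i (single-line notation)."""
    return [i for i, wi in enumerate(word) if wi == i]
```

Words sharing the same $a$ disagree everywhere (distance $n$), while words with distinct $a$ agree exactly once (distance $n-1$), so the minimum distance is $n-1$ as required.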
If there exists an $r$-IPC$(n,n-1)$, say $\Delta$, then $\Delta \cup \{\iota\}$ is also a PC$(n,n-1)$. Consequently, $M(n,n-1) \ge rn+1$. Since $M(n,n-1) \le n(n-1)$, it follows that $r \le n-2$. On the other hand, if $\Gamma$ is a PC$(n,n-1)$ containing $\iota$, then the words of $\Gamma$ at distance exactly $n-1$ from $\iota$ form an idempotent PC$(n,n-1)$: agreement with $\iota$ in exactly one position amounts to having exactly one fixed point. Concerning the $r$-regular condition, whether $\iota \in \Gamma$ or not, we may find an $r$-IPC$(n,n-1)$ with $$\label{r-formula} r=\max_{\sigma \in \Gamma} \min_{i \in [n]} |\{\tau \in \Gamma \setminus \{\sigma\}: \tau(i)=\sigma(i)\}|.$$ In more detail, if $\sigma$ achieves the maximum in (\[r-formula\]), then for each $i=1,\dots,n$ we choose exactly $r$ elements $\tau \in \Gamma$ which agree with $\sigma$ in position $i$. After relabelling each occurrence of $\sigma(i)$ to $i$, we have the desired $r$-idempotent PC$(n,n-1)$. A question in its own right is whether there exists an $r$-IPC$(n,n-1)$ for $r =
--- address: - | Mathematical Institute\ University of Oslo\ P. O. Box 57, Oslo --- Let $P=\CC\PP^2$. An interpretation of the Donaldson invariant $q_{4n-3}$ in an algebro-geometric context is the following. Let $M_n$ denote the Gieseker-Maruyama moduli space of semistable coherent sheaves on $P$ with rank 2 and Chern classes $c_1=0$ and $c_2=n$. For such a sheaf $F$, the Grauert-Mülich theorem implies that the restriction of $F$ to a general line $L\sub P$ splits as $F_L \iso \OO_L\dsum\OO_L$, and that the exceptional lines, where the splitting is different, form a curve $J(F)$ of degree $n$ in the dual projective plane $P\v$. Sending $[F]$ to $[J(F)]$ defines a rational map $f_n\: M_n \to P_n$. Here $P_n=\PP^{n(n+3)/2}$ is the linear system parameterizing all curves of degree $n$ in $P\v$. Let $H\in\Pic(P_n)$ be the hyperplane class and let $\alpha = f_n^*H$. The interpretation of the Donaldson invariant is: $$q_{4n-3} = \int_{M_n} \alpha^{4n-3}.$$ Thus $q_{4n-3}$ is the degree of $f_n$ times the degree of its image. From [@Bart-2] it follows that $f_n$ is generically finite for all $n\ge2$, that $f_2$ is an isomorphism and $q_5=1$, and that $f_3$ is of degree 3 and $q_9=3$. Le Potier [@LePo] proved that $f_4$ is birational onto its image and that $q_{13}=54$. The value of $q_{13}$ has also been computed independently by Tikhomirov and Tyurin [@Tyur-1 prop. 4.1] and by Li and Qin [@Li-Qin thm. 1]. The main result in the present note is the following. \[thm1\] $q_{17}=2540$ and $q_{21}=233208$. The proof consists of two parts. The first part, treated in this note, is to express $q_{4n-3}$ in terms of certain classes on the Hilbert scheme of length-$(n+1)$ subschemes of $P$. This is done in theorems \[thm2\] and \[thm3\] below. The second part is to evaluate these classes numerically. This has been carried out in [@Elli-Stro-5 prop. 4.2]. Let $H_{n+1}=\Hilb^{n+1}_P$ denote the Hilbert scheme parameterizing closed subschemes of $P$ of length $n+1$. There is a universal closed subscheme $\Z\sub H_{n+1}\x P$.
Consider the vector bundles $$\E = R^1{p_1}_* (\I_{\Z}\*{p_2}^* \OO_{P}(-1))\text{ and } \G = R^1{p_1}_* \I_{\Z}$$ on $H_{n+1}$ of ranks $n+1$ and $n$, respectively, and the line bundle $$\L = \det(\G) \* \det(\E)\i.$$ \[thm2\] Let the notation be as above. Then $$q_{17} = \int_{H_6} s_{12}(\E\*\L) \quad\text{and}\quad q_{21} = \dfrac25 \int_{H_7} s_{14}(\E\*\L).$$ This result was obtained both by Tikhomirov and Tyurin [@Tyur-Tikh], using the method of “geometric approximation procedure”, and by Le Potier [@LePo-3], using “coherent systems”. We present in this note what we believe is a considerably simplified proof, which is strongly hinted at on the last few pages of [@Tyur-Tikh]. The formula for $q_{17}$ is a special case of the following formula: \[thm3\] For $2\le n\le 5$, we have $$q_{4n-3} = \dfrac1{2^{5-n}}\int_{H_{n+1}} c_1(\L)^{5-n} s_{3n-3}(\E\*\L).$$ With this it is also easy to recompute $q_5$, $q_9$, and $q_{13}$ using similar techniques as in [@Elli-Stro-5]. We let $h$, $h\v$, and $H$ be the hyperplane classes in $P$, $P\v$, and $P_n$, respectively. In general, if $\omega$ is a divisor class, we denote by $\OO(\omega)$ the corresponding line bundle and its natural pullbacks. This work is heavily inspired by conversations with A. Tyurin, and we thank him for generously sharing his ideas. We are funded by The Thomas Foundation. Hulsbergen sheaves ================== Barth [@Bart-2] used the term Hulsbergen bundle to denote a stable rank-2 vector bundle $F$ on $P$ with $c_1(F)=0$ and $H^0(P,F(1))\ne0$. We modify this definition a little as follows: A *Hulsbergen sheaf* is a coherent sheaf $F$ on $P$ which admits a non-split short exact sequence (*Hulsbergen sequence*) $$\label{Hulsbergen} 0 \to \OO_P \to F(1) \to \I_Z(2) \to 0,$$ where $Z\sub P$ is a closed subscheme of finite length (equal to $c_2(F)+1$). Note that a Hulsbergen sheaf is not necessarily semistable or locally free. However, we do have the following. Let $F$ be a Hulsbergen sheaf with $c_2(F)=n>0$.
Then the set $J(F)\sub P\v$ of exceptional lines for $F$ is a curve of degree $n$, defined by the determinant of the bundle map $$m\: H^1(P,F(-2))\*\OO_{P\v}(-1) \to H^1(P,F(-1))\*\OO_{P\v}$$ induced by multiplication with a variable linear form. First note from the Hulsbergen sequence that the two cohomology groups have dimension $n$. It is easy to see that any Hulsbergen sheaf is slope semistable, in the sense that it does not contain any rank-1 subsheaf with positive first Chern class. Thus the Grauert-Mülich theorem applies, and $F_L \iso \OO_L\dsum\OO_L$ for a general line $L$. On the other hand, it is clear that a line $L$ is exceptional if and only if $m$ is not an isomorphism at the point $[L]\in P\v$. It is straightforward to construct a moduli space for Hulsbergen sequences. For any length-$(n+1)$ subscheme $Z\sub P$, the isomorphism classes of extensions are parameterized by $\PP(\Ext^1_P(\I_Z(2),\OO_P)\v)$. By Serre duality, $$\Ext^1_P(\I_Z(2),\OO_P)\v \iso H^1(P,\I_Z(-1)).$$ For varying $Z$, these vector spaces glue together to form the vector bundle $\E$ over $H_{n+1}$, hence $D_n=\PP(\E)$ is the natural parameter space for Hulsbergen sequences. Let $\OO(\tau)$ be the associated tautological quotient line bundle. For later use, note that for any divisor class $\omega$ on $H_{n+1}$, we have $\pi_*(\tau+\pi^*\omega)^{k+n} =
--- abstract: 'We report the first detection of X-ray emission from a brown dwarf in the Pleiades, the M7-type Roque 14, obtained using the EPIC detectors on [[*XMM-Newton*]{}]{}. This is the first X-ray detection of a brown dwarf intermediate in age between $\approx 12$ and $\approx 320$ Myr. The emission appears persistent, although we cannot rule out flare-like behaviour with a decay time-scale $> 4$ ks. The time-averaged X-ray luminosity of $\approx 3.3 \pm 0.8 \times 10^{27}$ erg s$^{-1}$, and its ratios with the bolometric ([$L_{\rm X}/L_{\rm bol}$]{} $\approx 10^{-3.05}$) and H$\alpha$ ([$L_{\rm X}/L_{{\rm H}\alpha}$]{} $\approx 4.0$) luminosities suggest magnetic activity similar to that of active main-sequence M dwarfs, such as the M7 old-disc star VB 8, though the suspected binary nature of Roque 14 merits further attention. No emission is detected from four proposed later-type Pleiades brown dwarfs, with upper limits to [$L_{\rm X}$]{} in the range 2.1–3.8 $\times 10^{27}$ erg s$^{-1}$ and to [$\log (L_{\rm X}/L_{\rm bol})$]{} in the range $-3.10$ to $-2.91$.' author: - 'K.R. Briggs$^{1}$[^1]' title: 'X-ray emission from a brown dwarf in the Pleiades' --- X-ray and H$\alpha$ emission are well-established tracers of magnetic activity in late-type main-sequence (MS) stars (spectral types $\approx$F5–M7). Studies of these diagnostic emissions have found consistency in the character of magnetic activity throughout this range, despite the expected change in dynamo mechanism demanded by the absence of a radiative interior in stars of spectral types $\approx$ M3 and later. However, recent studies suggest the magnetic activity of ‘ultracool’ objects, with spectral types $\approx$ M8 and later, is quite different.
The observed persistent (‘non-flaring’) levels of X-ray ([$L_{\rm X}/L_{\rm bol}$]{}) and H$\alpha$ ([$L_{{\rm H}\alpha}/L_{\rm bol}$]{}) emission from MS late-type stars increase with decreasing Rossby number, $Ro = P / \tau_{\rm C}$, where $P$ is the rotation period and $\tau_{\rm C}$ is the convective turnover time, until reaching respective ‘saturation’ plateaus of [$L_{\rm X}/L_{\rm bol}$]{} $\sim 10^{-3}$ and [$L_{{\rm H}\alpha}/L_{\rm bol}$]{} $\sim 10^{-3.5}$ (e.g. Delfosse [et al. ]{}1998 for M dwarfs). The fraction of field stars showing persistent chromospheric emission levels close to saturation increases toward later spectral types, peaking around M6–7 (Gizis [et al. ]{}2000). However, around spectral type M9 persistent H$\alpha$ emission levels begin to plummet dramatically (Gizis [et al. ]{}2000). Among L-type dwarfs no rotation–activity connection is found: [$L_{{\rm H}\alpha}/L_{\rm bol}$]{} continues to fall steeply toward later spectral types despite most L-dwarfs being fast rotators (Mohanty & Basri 2003). A proposed explanation is that magnetic fields diffuse with increasing efficiency in the increasingly neutral atmospheres of cooler dwarfs (Meyer & Meyer-Hofmeister 1999; Mohanty [et al. ]{}2002), overwhelming the importance of a rotation-driven dynamo efficiency in chromospheric heating. Magnetic activity is still observed, however, in the forms of H$\alpha$ flaring on some L dwarfs (e.g. Liebert [et al. ]{}2003), and flaring and apparently-persistent radio emission from several ultracool dwarfs (Berger [et al. ]{}2001; Berger 2002). Detections of X-ray emission from ultracool field dwarfs are scarce. The M7 old-disc star VB 8 shows persistent and flaring X-ray emission levels of [$L_{\rm X}/L_{\rm bol}$]{}$\approx 10^{-4.1}$–$10^{-2.8}$, similar to those of active M dwarfs (Fleming [et al. ]{}1993; Schmitt, Fleming & Giampapa 1995; Fleming, Giampapa & Garza 2003).
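As a schematic illustration of the saturated rotation-activity relation just described (the functional form and all parameter values here are invented for illustration; only the $\sim 10^{-3}$ plateau level follows the text):

```python
import math

def rossby(period_days, tau_c_days):
    """Rossby number Ro = P / tau_C."""
    return period_days / tau_c_days

def log_lx_lbol(ro, ro_sat=0.1, sat_level=-3.0, slope=-2.0):
    """Toy saturated rotation-activity law: flat at the saturation
    plateau below Ro_sat, a power-law decline above it."""
    if ro <= ro_sat:
        return sat_level
    return sat_level + slope * math.log10(ro / ro_sat)

# a fast rotator sits on the saturation plateau;
# a slow rotator falls below it
assert log_lx_lbol(rossby(0.5, 70.0)) == -3.0
assert log_lx_lbol(rossby(30.0, 15.0)) < -3.0
```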
However, the persistent levels of X-ray emission from the M8 old-disc star VB 10 and the M9 $\sim$320 Myr-old brown dwarf LP 944-20 are at least an order of magnitude lower – [$L_{\rm X}/L_{\rm bol}$]{}$\approx 10^{-5.0}$ (Fleming [et al. ]{}2003) and [$L_{\rm X}/L_{\rm bol}$]{}$< 10^{-5.7}$ (Rutledge [et al. ]{}2000), respectively – despite the latter being a fast rotator ([$v \sin i$]{}$= 30$ km s$^{-1}$). Yet transient strong magnetic activity is evidenced by the flaring X-ray emission, with peak [$L_{\rm X}/L_{\rm bol}$]{}$\approx 10^{-3.7}$–$10^{-1.0}$, that has been observed on both VB 10 (Fleming, Giampapa & Schmitt 2000) and LP 944-20, and on the M9 field dwarfs LHS 2065 (Schmitt & Liefke 2002) and 1RXS J115928.5-524717 (Hambaryan [et al. ]{}2004). Interestingly the temperature of the dominant X-ray-emitting plasma appears to be low, $T \approx 10^{6.5}$ K, whether it is measured in the persistent (VB 10) or flaring (LP 944-20 and 1RXS J115928.5-524717) emission state. While such low temperatures are typical for the persistent coronae of inactive stars – M dwarfs (Giampapa [et al. ]{}1996) and the Sun (Orlando, Peres & Reale 2000) alike – the temperatures of flaring plasma are significantly higher, with $T > 10^{7.0}$ K (Güdel [et al. ]{}2004; Reale, Peres & Orlando 2001). As very young substellar objects ($t \la 5$ Myr) may have photospheres as warm as MS M5–6 dwarfs, an individual brown dwarf may experience a transition from ‘stellar-like’ to ‘ultracool’ magnetic activity as it cools. Brown dwarfs in star-forming regions are routinely observed to emit X-rays at high levels, [$L_{\rm X}/L_{\rm bol}$]{}$\ga 10^{-3.5}$, arising from plasma at $T \ga 10^{7.0}$ K, similar to those of dMe stars and higher-mass young stars (e.g. Neuhäuser & Comerón 1998; Imanishi, Tsujimoto & Koyama 2001; Mokler & Stelzer 2002; Preibisch & Zinnecker 2002; Feigelson [et al. ]{}2002).
The $\approx 12$ Myr-old, low-mass brown dwarf TWA 5B, of ultracool spectral type M8.5–9, exhibits apparently persistent X-ray emission at [$L_{\rm X}/L_{\rm bol}$]{} $\approx 10^{-3.4}$, like younger brown dwarfs, but with $T \approx 10^{6.5}$ K (Tsuboi [et al. ]{}2003), like field ultracool dwarfs and LP 944-20. At around spectral type M8–9, between the ages of $\sim 10$ and $\sim 300$ Myr for brown dwarfs, persistent X-ray emission levels appear to fall by a factor $\sim 100$ and coronal temperatures appear constrained to $T \la 10^{6.5}$ K, even during flares and at high emission levels. The population of brown dwarfs in the Pleiades cluster, $135$ pc away (Pan, Shao & Kulkarni 2004), of age $\approx 125$ Myr (Stauffer, Schultz & Fitzpatrick 1998) and spanning spectral types M6.5–early-L (Mart[í]{}n [et al. ]{}1998), is therefore crucial to understanding the evolution of substellar magnetic activity and the conflict of a rotationally-driven magnetic dynamo against atmospheric neutrality. [[*ROSAT*]{}]{} observations of brown dwarfs in the Pleiades detected no X-ray emission at the level of [$L_{\rm X}/L_{\rm bol}$]{}$\ga 10^{-2.5}$ (Neuhäuser [et al. ]{}1999, and see Section 4.4). We present a deeper X-ray (0.3–4.5 keV) observation
--- abstract: 'The luminous $z=0.286$ quasar [HE0450–2958]{} is interacting with a companion galaxy at 6.5 kpc distance and the whole system radiates in the infrared at the level of an ultraluminous infrared galaxy (ULIRG). A so far undetected host galaxy triggered the hypothesis of a mostly “naked” black hole (BH) ejected from the companion by three-body interaction. We present new HST/NICMOS 1.6$\mu$m imaging data at 0\farcs1 resolution and VLT/VISIR 11.3$\mu$m images at 0\farcs35 resolution that resolve the system in the near- and mid-infrared for the first time. We combine these data with existing optical HST and CO maps. (i) At 1.6$\mu$m we find an extension N-E of the quasar nucleus that is likely a part of the host galaxy, though not its main body. If true, a combination with upper limits on a main body co-centered with the quasar brackets the host galaxy luminosity to within a factor of $\sim$4 and places [HE0450–2958]{} directly onto the $M_\mathrm{BH}-M_\mathrm{bulge}$-relation for nearby galaxies. (ii) A dust-free line of sight to the quasar suggests a low dust obscuration of the host galaxy, but the formal upper limit for star formation lies at 60 M$_\odot$/yr. [HE0450–2958]{} is consistent with lying at the high-luminosity end of Narrow-Line Seyfert 1 Galaxies, and more exotic explanations like a “naked quasar” are unlikely. (iii) All 11.3$\mu$m radiation in the system is emitted by the quasar nucleus. It has warm ULIRG-strength IR emission powered by black hole accretion and is radiating at a super-Eddington rate, $L/L_\mathrm{Edd}=6.2^{+3.8}_{-1.8}$, or 12 $M_\odot$/year. (iv) The companion galaxy is covered in optically thick dust and is not a collisional ring galaxy. It emits in the far infrared at ULIRG strength, powered by Arp220-like star formation, with a strong starburst-like infrared spectrum.
(v) At its current black hole accretion rate, [HE0450–2958]{} does not form enough new stars to maintain its position on the $M_\mathrm{BH}-M_\mathrm{bulge}$-relation, and star formation and black hole accretion are spatially disjoint. The relation can only be maintained by averaging over a longer timescale ($\la$500 Myr) and/or by growing the bulge through redistribution of preexisting stars. (vi) Systems similar to [HE0450–2958]{} with spatially disjoint ULIRG-strength star formation and quasar activity might be common at high redshifts, but at $z<0.43$ we only find $<$4% (3/77) candidates for a similar configuration.' --- The masses of galactic bulges and their central black holes (BHs) in the local Universe follow a tight relation [e.g. @haer04] with only 0.3 dex scatter. Currently it is not clear how this relation comes about and if and how it evolved over the last 13 Gyrs, but basically all semi-analytic models now include feedback from active galactic nuclei (AGN) as a key ingredient to achieve agreement with observations [e.g. @hopk06c; @some08]. In these models it is assumed that black hole growth by accretion and energetic re-emission from the ignited AGN back into the galaxy can form a self-regulating feedback loop. This feedback loop can potentially regulate or possibly also truncate star formation and in this process create and maintain the red/blue color–magnitude bimodality of galaxies. In this light, any galaxy with an abnormal deviation from the $M_\mathrm{BH}$–$M_\mathrm{bulge}$-relation will be an important laboratory for understanding the coupling mechanisms of black hole and bulge growth. Since the early work by @bahc94 [@bahc95b] on QSO host galaxies with the [*Hubble Space Telescope (HST)*]{} and the subsequently resolved dispute about putatively “naked” QSOs [@mcle95a], no cases for QSOs without surrounding host galaxies were found – when detection limits were correctly interpreted.
Only recently the QSO [HE0450–2958]{} renewed the discussion, when @maga05 argued that the upper limit on the host galaxy of [HE0450–2958]{} is 6$\times$ too faint with respect to the $M_\mathrm{BH}$–$M_\mathrm{bulge}$-relation. In light of a number of competing explanations for this, the nature of the [HE0450–2958]{} system needs to be settled. The QSO [HE0450–2958]{} (a.k.a. IRAS 04505–2958) at a redshift of $z=0.286$ was discovered by @low88 as a warm IRAS source. [HE0450–2958]{} is a radio-quiet quasar, with a distorted companion galaxy at 1\farcs5 (=6.5 kpc) distance at the same redshift, likely in direct interaction with the QSO [@cana01]. The combined system shows an infrared luminosity of an ultraluminous infrared galaxy (ULIRG, $L_\mathrm{IR}>10^{12}$ L$_\odot$). [HE0450–2958]{} was observed with the [Hubble Space Telescope (HST)]{} and its WFPC2 camera [@boyc96] in F702W (=$R$ band) and ACS camera [@maga05] in F606W (=$V$ band); neither observation allowed the detection of a host galaxy centered on the quasar position within its limits (Figure \[fig:allwave\], left column). @maga05 estimated the host galaxy brightness expected if [HE0450–2958]{} were a normal QSO system obeying the $M_\mathrm{BH}$–$M_\mathrm{bulge}$-relation in the local Universe, given a BH mass estimate derived from the luminosity of the QSO. They concluded that the ACS F606W detection limits were six times fainter than the expected value for the host galaxy, which would make [HE0450–2958]{} very unusual. ![image](fig1_all.eps){width="\textwidth"} @maga05 sparked a flurry of subsequent papers to explain the undetected host galaxy. One proposed explanation is an over-massive black hole: [HE0450–2958]{} is a normal QSO nucleus, but with a massive black hole residing in an under-massive host galaxy. The system is lying substantially off the local $M_\mathrm{BH}$–$M_\mathrm{bulge}$-relation; the host galaxy possibly hides just below the F606W detection limit [@maga05].
Another proposed explanation [@maga05] is that the host galaxy is actually absent and [HE0450–2958]{} is a truly “naked” QSO, by means of a black hole ejection event in a gravitational three-body interaction or gravitational recoil following the merger of [HE0450–2958]{} with the companion galaxy [@hoff06; @haeh06; @bonn07]. A third possibility is that the original black hole mass estimate was too high [@merr06; @kim07; @leta07] and is in fact $\sim$10 times lower. With comparatively narrow ($\sim$1500 km/s FWHM) broad QSO emission lines the QSO could be the high-luminosity analog of the class of narrow-line Seyfert 1 galaxies (NLSy1). The host galaxy could then be normal for the black hole mass and entirely consistent with the ACS upper limits. In this article we present new data initially motivated by the still undetected host galaxy and by the possibility that the host galaxy might be obscured by substantial amounts of dust. We want to investigate the overall cool and warm dust properties of the system, using new near-infrared (NIR) and mid-infrared (MIR) images. The F606W ACS band is strongly susceptible to dust attenuation, and dust could have prevented the detection of the host galaxy in the optical. With new NIR data we look at a substantially more transparent wavelength. At the same time the new infrared data are meant to localize the source(s) of the ULIRG emission. Three components are candidates for this: the AGN nucleus, the host galaxy, and the companion galaxy. Our NIR data allow us to trace star formation and the MIR image traces the hot dust in the system. We present
--- abstract: 'It is demonstrated that radiative corrections increase the tunneling probability of a charged particle.' author: - 'V.V. Flambaum$^{1}$' - 'V.G. Zelevinsky$^{2}$[^1]' title: 'Quantum Münchhausen effect in tunneling' --- The famous Baron von Münchhausen saved himself from a swamp by pulling his own hair with his own hands [@Mun]. According to classical physics, such a feat seems to be impossible. However, we live in a quantum world. In the tunneling of a charged particle, the head of the particle wave function can send a photon to the tail, which absorbs this photon and penetrates the barrier with enhanced probability. Such photon feedback should clearly work in two-body tunneling, where the first particle, while continuing to be accelerated by the potential after the tunneling, can emit a (virtual) photon that increases the energy of the second particle and its tunneling probability. The Münchhausen mechanism may be helpful in the tunneling of a composite system. It is related to phonon-assisted tunneling but does not require any special device, being always provided by the interaction of a charged particle with the radiation field. The interaction of a tunneling object with other degrees of freedom of the system, and the influence of this interaction on the tunneling probability, has long been a topic of intensive studies initiated by Caldeira and Leggett [@cal]. Their general conclusion, in agreement with intuitive arguments, was that any friction-type interaction suppresses the tunneling. At the same time, it was realized that such an interaction leads to distortions of the barrier which can be helpful in enhancing the tunneling.
This is important for the probabilities of subbarrier nuclear reactions, as pointed out by Esbensen [@esb]. Below we discuss the interaction of a charged tunneling object with the electromagnetic field which always accompanies motion of the object. Formally speaking, we are looking for the effects of radiative corrections on the single-particle tunneling. These effects can be described by the Schrödinger equation with the self-energy operator: $$\label{H} \hat{H}\Psi({\bf r}) + \int \Sigma({\bf r},{\bf r}';E) \Psi({\bf r}') d^3r'= E \Psi({\bf r})$$ where $\hat{H}$ is the unperturbed particle hamiltonian, which includes a barrier potential, and $\Sigma=M-i\Gamma/2$ is the complex nonlocal and energy-dependent operator determined by the coupling to virtual photons and the possibility of real photon emission. The “photon hand" here connects two points ${\bf r}$ and ${\bf r}'$ of the same wave function. In the one-photon approximation the self-energy due to the interaction with the transverse radiation field can be written as $$\Sigma({\bf r},{\bf r}';E)=\sum_{{\bf k},\lambda}|g_{{\bf k}}|^{2} \sum_{n}\frac{\langle {\bf r}|(\hat{{\bf p}}\cdot {\bf e}_{{\bf k}\lambda}) e^{i{\bf k\hat{r}}} |n\rangle\langle n|(\hat{{\bf p}}\cdot{\bf e}^{\ast}_{{\bf k}\lambda}) e^{-i{\bf k\hat{r}}}|{\bf r}'\rangle}{E-E_{n}-\omega_{{\bf k}}-i0}. \label{1}$$ Here we sum over unperturbed stationary states $|n\rangle$; $\hat{{\bf r}}$ and $\hat{{\bf p}}$ are the position and momentum operators, respectively; the photons are characterized by the momentum ${\bf k}$, frequency $\omega_{{\bf k}}$ and polarization $\lambda$; the polarization vectors ${\bf e}_{{\bf k}\lambda}$ are perpendicular to ${\bf k}$ so that the momentum operators commute with the exponents. The normalization factors are included into $g_{{\bf k}}\propto \omega_{{\bf k}}^{-1/2}$. The calculation is straightforward.
The hermitian part $M$ of the self-energy operator is given by the principal value integral over photon frequencies in (\[1\]). The integral does not contain any energy levels. It also contains the mass renormalization for a free particle, which should be subtracted. Our problem is different from the energy shift calculation for bound states since we are interested in the change of the wave function of the tunneling particle. However, we can use some features of the conventional approach. As is well known from the Lamb shift calculations, one can use different approximations in the two regions of integration over the photon frequency $\omega$. In the nonrelativistic low-frequency region, $\omega<\beta m$, where the parameter $\beta<1$ is chosen in such a way that typical excitation energies of a particle in the well $\delta E$ are smaller than $\beta m$ (in the hydrogen Lamb shift problem the fine structure constant $\alpha$ can play the role of the borderline scale parameter), it is possible to neglect the exponential factors in (\[1\]). The high-frequency contribution to $M$, where the potential can be considered as a perturbation to free motion, has been calculated, e.g., in Ref. [@Akhieser]. The two contributions match smoothly at $\omega =\beta m$. It is easy to estimate the mass operator $M$ with logarithmic accuracy. After summation over polarizations and standard regularization [@Akhieser], the low frequency part of the operator $M$ can be written as $$\label{sigma} \hat{M}(E)=\frac{2 Z^2 \alpha}{3 \pi m^{2}} \int d\omega \sum_n \hat{{\bf p}}|n\rangle \frac{E - E_n}{E - E_n -\omega}\langle n|\hat{{\bf p}}$$ where $Ze$ is the particle charge, and $m$ is the mass of the particle (reduced mass in the alpha-decay case). We use the units $\hbar=c=1$.
Substituting the logarithm arising from the frequency integration by its average value $L= \ln(\beta m/\omega_{min}) $, we can use the closure relations and obtain a simple expression $$\begin{aligned} \hat{M}(E)&=&\frac{2Z^{2}\alpha}{3\pi m^{2}}L\hat{{\bf p}}(\hat{H} -E)\hat{{\bf p}} \\ &=&\frac{Z^{2}\alpha}{3\pi m^{2}}L\left\{\nabla^{2}\hat{U} +[(\hat{H}-E),\hat{{\bf p}}^{2}]_{+} \right\}. \label{2}\end{aligned}$$ The mean value of the term with the anticommutator $[...,...]_{+}$ in eq. (\[2\]) is equal to zero since $(\hat{H}-E)\Psi_0=0$ where $\Psi_0$ is the unperturbed wave function. A correction to the wave function due to this term can be calculated by using perturbation theory and the unperturbed Schrödinger equation, $$\delta\Psi=\frac{2Z^{2}\alpha}{3\pi m}L[U-\langle 0|U|0\rangle]\Psi_{0}. \label{2a}$$ This correction is not essential since it does not influence the exponent in the tunneling amplitude. Combining the remaining term in eq. (\[2\]) with the high-frequency contribution which contains $L= \ln(m/\beta m)$, see Ref. [@Akhieser], the result can be presented as an effective local operator proportional to the Laplacian $\nabla^{2}U({\bf r})$, $$\label{sigmaloc} M({\bf r}, {\bf r}'; E) \simeq \nabla^{2}U({\bf r}) \delta({\bf r}- {\bf r}')\frac{ Z^2 \alpha}{3 \pi m^2} \ln\frac{m}{U_0} \equiv \delta U({\bf r})\delta({\bf r}-{\bf r}').$$ Here we used the barrier height $U_0$ as a lower cut-off $\omega_{min}$ of the integration over frequencies (below we give a semiclassical estimate which leads also to a more accurate evaluation of the logarithmic factor). For the tunneling of an extended object, the mass $m$ in the argument of the logarithm should be replaced by the inverse size of the particle $1/r_0$ which comes from the upper frequency cut-off given in this case by the charge formfactor. The obtained result is physically equivalent to the averaging over the position fluctuations due to the coupling to virtual photons. 
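To get a feeling for the size of this correction, the following back-of-envelope Python estimate evaluates the relative barrier shift $|\delta U|/U_0 \sim (Z^2\alpha/3\pi)\ln(m/U_0)/(mR)^2$ implied by eq. (\[sigmaloc\]), using the rough scaling $\nabla^2 U \sim U_0/R^2$ for a barrier of width $R$. The numerical inputs ($R \sim 10$ fm, $U_0 \sim 20$ MeV for alpha decay) are our own illustrative choices, not values from the text:

```python
import math

HBARC = 197.327  # MeV * fm (converts m*R to a dimensionless number)

def delta_u_ratio(Z, m_mev, U0_mev, R_fm):
    """Relative barrier correction |delta_U| / U0 from the local
    radiative term, estimating nabla^2 U ~ U0 / R^2 (our assumption)."""
    alpha = 1.0 / 137.036
    mR = m_mev * R_fm / HBARC  # dimensionless m*R in natural units
    return (Z**2 * alpha / (3 * math.pi)) * math.log(m_mev / U0_mev) / mR**2

# alpha particle (Z=2, m ~ 3727 MeV) under a ~20 MeV, ~10 fm barrier
ratio = delta_u_ratio(Z=2, m_mev=3727.0, U0_mev=20.0, R_fm=10.0)
assert 0 < ratio < 1e-5  # the radiative correction is tiny
```

For these inputs the ratio comes out of order $10^{-7}$ to $10^{-6}$, consistent with radiative corrections being a small effect on the barrier.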
Thus, in the logarithmic approximation the mass operator is reduced to a local correction $\delta U({\bf r})$ to the potential $U({\bf r})$. The Laplacian of the potential energy $\nabla^{2} U({\bf r})$ near the maximum of the barrier is negative (correspondingly, near the bottom of the potential well it is positive). Therefore, we obtained the negative correction $\delta
--- abstract: 'Many real world problems can now be effectively solved using supervised machine learning. A major roadblock is often the lack of an adequate quantity of labeled data for training. A possible solution is to assign the task of labeling data to a crowd, and then infer the true label using aggregation methods. A well-known approach for aggregation is the Dawid-Skene (DS) algorithm, which is based on the principle of Expectation-Maximization (EM). We propose a new simple, yet effective, EM-based algorithm, which can be interpreted as a ‘hard’ version of DS, that allows much faster convergence while maintaining similar accuracy in aggregation. We show the use of this algorithm as a quick and effective technique for online, real-time sentiment annotation. We also prove that our algorithm converges to the estimated labels at a linear rate. Our experiments on standard datasets show a significant speedup in time taken for aggregation, up to $\sim$8x over Dawid-Skene and $\sim$6x over other fast EM methods, at competitive accuracy performance. The code for the implementation of the algorithms can be found at <https://github.com/GoodDeeds/Fast-Dawid-Skene>.' --- The success of supervised learning in recent years has been premised on the availability of large amounts of data to effectively train models. Obtaining a large labeled dataset is time-consuming, expensive, and sometimes infeasible; and this has often been the bottleneck in translating the success of machine learning models to newer problems. An approach that has been used to solve this problem is to crowdsource the annotation of data, and then aggregate the crowdsourced labels to obtain ground truths. Online platforms such as Amazon Mechanical Turk and CrowdFlower provide a friendly interface where data can be uploaded, and workers can annotate labels in return for a small payment.
With the ever-growing need for large labeled datasets and the prohibitive costs of seeking experts to label large datasets, crowdsourcing has been used as a viable option for a variety of tasks, including sentiment scoring [@CSsentimentscoring], opinion mining [@CScommodityreview], general text processing [@Snow:2008:CFG:1613715.1613751], taxonomy creation [@Bragg2013CrowdsourcingMC], or domain-specific problems, such as in the biomedical field [@DBLP:journals/corr/GuanGDH17; @Albarqouni2016AggNetDL], among many others. In recent times, there is a growing need for a fast and real-time solution for judging the sentiment of various kinds of data, such as speech, text articles, and social media posts. Given the ubiquitous use of the internet and social media today, and the wide reach of any information disseminated on these platforms, it is critical to have an efficient vetting process to prevent the use of these platforms for anti-social and malicious activities. Sentiment is one such signal that could be used to identify potentially harmful content. A very useful source for identifying harmful content is other users of these internet services, who report such content to the service administrators. Often, these services are set up such that on receiving such a flag, they ask other users interacting with the same content to classify whether the content is harmful or not. Then, based on these votes, a final decision can be made, without the need for any human intervention.
Some such works include: crowdsourcing the sentiment associated with words [@CSsentimenttoword], crowdsourcing sentiment scoring for online media [@CSsentimentscoring], crowdsourcing the classification of words to be used as part of a lexicon for sentiment analysis [@CSlexicon], crowdsourcing sentiment judgment for video review [@CSvideoreview], crowdsourcing for commodity review [@CScommodityreview], and crowdsourcing for the production of word level annotation for opinion mining tasks [@CSsyntacticrelatedness]. However, with millions of users creating and adding new content every second, it is necessary that this decision be quick, so as to keep up with and effectively address all flags being raised. This indicates a need for fast vote aggregation schemes that can provide results for a stream of data in real time. The use of crowdsourced annotations requires a check on the reliability of the workers and the accuracy of the annotations. While the platforms provide basic quality checks, it is still possible for workers to provide incorrect labels due to misunderstanding, ambiguity in the data, carelessness, lack of domain knowledge, or malicious intent. This can be countered by obtaining labels for the same question from a large number of annotators, and then aggregating their responses using an appropriate scheme. A simple approach is to use majority voting, where the answer which the majority of annotators choose is taken to be the true label; this is often effective. However, there is no guarantee that the majority answer is correct, and a number of more sophisticated aggregation methods have been proposed (see Section \[related\]). Despite the various recent methods proposed, one of the most popular, robust and oft-used methods to date for aggregating annotations is the Dawid-Skene algorithm, proposed by [@dawid1979maximum], based on the Expectation Maximization (EM) algorithm.
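A minimal sketch of majority-vote aggregation (our own illustrative code, not from the paper):

```python
from collections import Counter

def majority_vote(votes):
    """Return the most common label among annotator votes
    (ties broken by first-seen order)."""
    return Counter(votes).most_common(1)[0][0]

assert majority_vote(["pos", "neg", "pos"]) == "pos"
assert majority_vote([1, 1, 2, 2, 1]) == 1
```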
This method uses the M-step to compute error rates, which are the probabilities of a worker providing an incorrect class label to a question with a given true label, and the class marginals, which are the probabilities of a randomly selected question having a particular true label. These are then used to update the proposed set of true labels in the E-step, and the process continues until the algorithm converges on a proposed set of true labels (further described in Section \[dawidskenealgo\]). In this work, we propose a new simple, yet effective, EM-based algorithm for aggregation of crowdsourced responses. Although formulated differently, the proposed algorithm can be interpreted as a ‘hard’ version of Dawid-Skene (DS) [@dawid1979maximum], similar to Classification EM [@celeux1992classification] being a hard version of the original EM. The proposed method converges up to 7.84x faster than DS, while maintaining similar accuracy. We also propose a hybrid approach, a combination of our algorithm with the Dawid-Skene algorithm, that combines the high rate of convergence of our algorithm with the better likelihood estimation of the Dawid-Skene algorithm. Related Work {#related} ============ The Expectation-Maximization algorithm for maximizing likelihood was first formalized by [@10.2307/2984875]. Soon after, Dawid and Skene [@dawid1979maximum] proposed an EM-based algorithm for maximum likelihood estimation of observer error rates, which became very popular for crowdsourced aggregation and is still considered by many as a baseline for performance. Many researchers, to this day, have worked on analyzing and extending the Dawid-Skene methodology (henceforth, DS), of which we summarize the more recent efforts below. Work on crowdsourced data aggregation has not been confined to sentiment analysis or opinion mining tasks; most of the methods are generic and can easily be used for sentiment analysis and related tasks.
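A hard-EM aggregator of this general shape can be sketched as follows; this is our own minimal illustration under stated assumptions (the function name, majority-vote initialization, and smoothing constants are ours, not the paper's implementation). The M-step estimates per-worker confusion matrices and class marginals from the current hard labels; the hard E-step then reassigns each question its maximum-likelihood label, until the assignment stops changing.

```python
import numpy as np

def hard_em_aggregate(labels, n_classes, n_iter=50):
    """Aggregate crowdsourced labels with a hard Dawid-Skene-style EM.

    labels: (n_questions, n_workers) int array of worker responses.
    Returns one estimated true label per question.
    """
    n_q, n_w = labels.shape
    # Initialize with majority voting (hard assignments).
    truth = np.array([np.bincount(row, minlength=n_classes).argmax()
                      for row in labels])
    for _ in range(n_iter):
        # M-step: per-worker confusion matrices (error rates) and class
        # marginals, computed from the current hard assignments.
        conf = np.full((n_w, n_classes, n_classes), 1e-6)  # small smoothing
        for q in range(n_q):
            for w in range(n_w):
                conf[w, truth[q], labels[q, w]] += 1
        conf /= conf.sum(axis=2, keepdims=True)
        marg = np.bincount(truth, minlength=n_classes) / n_q
        # Hard E-step: assign each question its maximum-likelihood class.
        new_truth = truth.copy()
        for q in range(n_q):
            ll = np.log(marg + 1e-12)
            for w in range(n_w):
                ll = ll + np.log(conf[w, :, labels[q, w]])
            new_truth[q] = ll.argmax()
        if np.array_equal(new_truth, truth):
            break  # converged: hard assignments are stable
        truth = new_truth
    return truth
```

On a toy instance with two reliable workers and one random one, the hard assignments agree with the majority vote but are reached after estimating each worker's confusion matrix.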
null
--- abstract: 'We have estimated the ages of a sample of A–type Vega–like stars by using Strömgren *uvby$ \beta $* photometric data and theoretical evolutionary tracks. We find that 13 percent of these A stars have been reported as Vega–like stars in the literature and that the ages of this subset run the gamut from very young (50 Myr) to old (1 Gyr), with no obvious age difference compared to those of field A stars.' author: - 'Inseok Song,' - 'J.-P. Caillault,' - 'David Barrado y Navascués,' - 'John R. Stauffer' title: 'Ages of A–type Vega–like stars from *uvby$ \beta $* Photometry' --- Introduction ============ There are several unusual sub–groups among the A–type stars, such as the metallic–line stars (Am), the peculiar A stars (Ap), $ \lambda $ Bootis type stars, and shell stars [@AbtMorrell95]. Another class of stars with many members amongst the A dwarfs is that of the Vega–like stars. Vega–like stars show excess IR emission attributable to an optically thin dust disk around them. These disks are believed to have very little or no gas [@LBA99]. It is very important to know the ages and, hence, the evolutionary stages of these stars, since they are believed to be signposts of exo–planetary systems or of on–going planet formation. However, determining the ages of individual A–type stars is a very difficult task. Some indirect age dating methods for A–type stars include the use of late–type companions if any exist (HR 4796A and Fomalhaut; see @Stauffer95 [@Barrado97; @myPhD]) or the use of stellar kinematic groups (Fomalhaut, Vega and $ \beta $ Pictoris; see @David98 and @BSSC). 
The use of Strömgren *uvby$ \beta $* photometry [@ATF97], however, provides a more direct and general determination of the ages of A–type stars. The photometric *uvby*$ \beta $ system as defined by @Stromgren63 and @CM66 allows for reasonably accurate determination of stellar parameters like effective temperature $ T_{eff} $, surface gravity $ g $, and metallicity for B, A, and F stars [@crawford79; @NSW93 and references therein]. The $ T_{eff} $ and $ g $ values can then be used to estimate directly the ages of stars when they are coupled with theoretical evolutionary tracks (though for individual stars these estimates have relatively large error bars). In this letter, we describe our application of this technique to a volume limited sample of 200 A stars. Method ====== $ T_{eff}\protect $ and $ \log g\protect $ determination -------------------------------------------------------- Extensive catalogues of *uvby*$ \beta $ data have been published by @HM80, @Olsen83, and @OP84. We have used these catalogues and the WEBDA[^1] database to find *uvby$ \beta $* photometry data for our sample of A–type stars. Numerous calibration methods of effective temperature and surface gravity using *uvby$ \beta $* photometry have been published. @MD85, in particular, demonstrate that their calibration yields $ T_{eff} $ and $ \log g $ to a 1 $ \sigma $ accuracy of $ 260 $ K and $ 0.10 $ dex, respectively. However, as pointed out by @NSW93, $ \log g $ from @MD85’s calibration depends on the $ T_{eff} $ value, while the most desirable calibration method should not. Therefore, we used the @MD85 grids with Napiwotzki et al.’s gravity modification to eliminate the $ \log g $ dependence on $ T_{eff} $ for early–type stars. 
The resulting temperature calibration agrees with the integrated–flux temperatures $ \left( T_{eff}=(\pi F/\sigma )^{1/4}\right) $ from @Code, @Beeckmans, and @Malagnini at the 1% level, and the accuracy of $ \log g $ ranges from $ \approx 0.10 $ dex for early A stars to $ \approx 0.25 $ dex for hot B stars [@NSW93]. A rapidly rotating star has a surface gravity smaller at the equator than at the poles, and both the local effective temperature and surface brightness are therefore lower at the equator than at the poles. Thus, in comparing a rotating star with a non–rotating star of the same mass, the former is always cooler. But the apparent luminosity change of a rotating star depends on the inclination angle ($ i $) such that a pole–on $ \left( i=0^{\circ }\right) $ star is brighter and an edge–on $ \left( i=90^{\circ }\right) $ star is dimmer than a non–rotating star [@Kraft]. In all cases, the combination of the luminosity and temperature changes results in an older inferred age compared to the non–rotating case. This effect is prominent in spectral types B and A, in which most stars are rapidly rotating ($ v\sin i\geq 100\,km/sec $). Recently, @FB98 simulated the effect of stellar rotation on the Strömgren *uvby$ \beta $* photometric indices. They concluded that the effect of stellar rotation is to enhance the stellar main sequence age by an average of $ 40\% $. Therefore, we included the stellar rotation correction suggested by @FB98. However, their rotation correction schemes are available only for stars with spectral type between approximately B7–A4. We extended the range of the rotation correction such that for stars earlier than B7, we used the correction scheme for B7 stars, and for stars later than A4, we used the correction scheme for A4 stars. Therefore, stars outside Figueras & Blasi’s (1998) range will have more uncertain ages. Large uncertainties in estimated ages are mainly due to the large error in $ \log g $. 
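As a quick numerical illustration of the integrated–flux relation $ T_{eff}=(\pi F/\sigma )^{1/4} $ quoted above (the helper name and the solar round-trip check below are our own, purely illustrative):

```python
import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def integrated_flux_temperature(flux):
    """Effective temperature from the integrated emergent flux F,
    via T_eff = (pi * F / sigma)^(1/4)."""
    return (math.pi * flux / SIGMA) ** 0.25

# Round trip with a solar-like effective temperature of 5772 K:
f_sun = SIGMA * 5772.0 ** 4 / math.pi   # emergent flux implied by T_eff
t_check = integrated_flux_temperature(f_sun)  # recovers 5772 K
```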
However, using a rotation correction scheme based on the projected stellar rotational velocities $ \left( v\sin i\right) $ rather than a scheme based on the true stellar rotational velocities $ \left( v\right) $ may also introduce uncertainties. Stellar rotation decreases the effective temperature by an amount that depends on the inclination angle (a small change of $ T_{eff} $ for $ i\approx 0^{\circ } $ but a large change of $ T_{eff} $ for $ i\approx 90^{\circ } $), but the current rotation correction scheme cannot distinguish between the case of large $ v $ with small $ i $ and the case of small $ v $ with large $ i $. Thus, a rotation correction using $ v\sin i $ instead of $ v $ may cause uncertainty in stellar ages. Ages of Open Clusters --------------------- The theoretical evolutionary grids of @Schaller92 were used to estimate ages of stars from $ T_{eff} $ and $ \log g $. To verify that our age dating method is working, we applied the method to a few open clusters with ages determined by other methods – $ \alpha $ Perseus (80 Myr), Pleiades (125 Myr), NGC 6475 (220 Myr), M34 (225 Myr), and Hyades (660 Myr). The ages of $ \alpha $ Perseus and the Pleiades are from the lithium depletion boundary method (LDBM). The ages for the other clusters are from upper main sequence isochrone fitting (UMSIF), and are taken from @JonesProsser or @Lynga. The age scales based on the two different methods (LDBM and UMSIF) are not yet consistent with each other, and both have possible systematic errors. The current best UMS isochrone ages for $ \alpha $ Perseus and the Pleiades are in the range 50–80 Myr and 80–150 Myr. In Figure \[OpenCluster\], one can see that the isochrones of these open clusters are fairly well reproduced. However, there are some deviations from the expected values. Stars that are younger than or close to 100 Myr, like stars in $ \alpha $ Perseus, tend to lie below the theoretical 100 Myr isochrone. We therefore assigned an age of 50 Myr to the stars below the 100 Myr isochrone. 
At intermediate ages, the open cluster data provide a mixed message – the M34 Strömgren age appears to be younger than the UMSIF age, whereas the NGC 6475 Strömgren age seems older than the UMSIF age. This could be indicative of the inhomogeneous nature of the ages (some from the LDBM, some from relatively old UMS models, some from newer models) to which we are comparing the Strömgren ages. If $ v $ data could be used instead of $ v\sin i $, and a rotation correction scheme constructed from the $ v $ values, then the new correction scheme would bring more of a given cluster’s stars onto a common isochrone locus.
null
--- abstract: 'We produce a family of Calabi-Yau hypersurfaces $X_{n}$ of $({\mathbb P}^{1})^{n+1}$ in higher dimension whose inertia group contains non-commutative free groups. This is completely different from Takahashi’s result [@ta98] for Calabi-Yau hypersurfaces $M_{n}$ of ${\mathbb P}^{n+1}$.' address: - ' (Masakatsu Hayashi) Department of Mathematics, Graduate School of Science, Osaka University, Machikaneyamacho 1-1, Toyonaka, Osaka 560-0043, Japan ' - ' (Taro Hayashi) Department of Mathematics, Graduate School of Science, Osaka University, Machikaneyamacho 1-1, Toyonaka, Osaka 560-0043, Japan ' author: - Masakatsu Hayashi and Taro Hayashi title: 'Calabi-Yau hypersurfaces in the direct product of ${\mathbb P}^{1}$ and inertia groups' --- Introduction ============ Throughout this paper, we work over ${\mathbb C}$. Given an algebraic variety $X$, it is natural to consider its birational automorphisms $\varphi {\colon}X \dashrightarrow X$. The set of these birational automorphisms forms a group ${\operatorname{Bir}}(X)$ with respect to composition. When $X$ is a projective space ${\mathbb P}^{n}$ or, equivalently, an $n$-dimensional rational variety, this group is called the Cremona group. In the higher-dimensional case ($n \geq 3$), though many elements of the Cremona group have been described, little is known about its whole structure. Let $V$ be an $(n+1)$-dimensional smooth projective rational manifold. For a subvariety $X \subset V$, the inertia group was introduced in [@gi94]. It consists of those elements of the Cremona group that act on $X$ as the identity. In Section \[cyn\], we mention the result (Theorem \[tak\]) of Takahashi [@ta98] about the smooth Calabi-Yau hypersurfaces $M_{n}$ of ${\mathbb P}^{n+1}$ of degree $n+2$ (that is, $M_{n}$ is a hypersurface such that it is simply connected, there is no holomorphic $k$-form on $M_{n}$ for $0<k<n$, and there is a nowhere vanishing holomorphic $n$-form $\omega_{M_{n}}$). It turns out that the inertia group of $M_{n}$ is trivial (Theorem \[intro2\]). 
Takahashi’s result (Theorem \[tak\]) is proved by using the “Noether-Fano inequality". This is a useful result that tells us when two Mori fiber spaces are isomorphic; Theorem \[intro2\] below is deduced from it. In Section \[cy1n\], we consider Calabi-Yau hypersurfaces $$X_{n} = (2, 2, \ldots , 2) \subset ({\mathbb P}^{1})^{n+1}.$$ Let $${\operatorname{UC}}(N) {\coloneqq}\overbrace{{\mathbb Z}/2{\mathbb Z}* {\mathbb Z}/2{\mathbb Z}* \cdots * {\mathbb Z}/2{\mathbb Z}}^{N} = \operatorname*{\raisebox{-0.8ex}{\scalebox{2.5}{$\ast$}}}_{i=1}^{N}\langle t_{i}\rangle$$ be the *universal Coxeter group* of rank $N$, where ${\mathbb Z}/2{\mathbb Z}$ is the cyclic group of order 2. There is no non-trivial relation between its $N$ natural generators $t_{i}$. Let $$p_{i} {\colon}X_{n} \to ({\mathbb P}^{1})^{n}\ \ \ (i=1, \ldots , n+1)$$ be the natural projections which are obtained by forgetting the $i$-th factor of $({\mathbb P}^{1})^{n+1}$. Then, the $n+1$ projections $p_{i}$ are generically finite morphisms of degree 2. Thus, for each index $i$, there is a birational transformation $$\iota_{i} {\colon}X_{n} \dashrightarrow X_{n}$$ that permutes the two points of general fibers of $p_{i}$, and this provides a group homomorphism $$\Phi {\colon}{\operatorname{UC}}(n+1) \to {\operatorname{Bir}}(X_{n}).$$ From now on, we set $P(n+1) {\coloneqq}({\mathbb P}^{1})^{n+1}$. Cantat and Oguiso studied ${\operatorname{Bir}}(X_{n})$ for $X_{n} \subset P(n+1)$ and proved the following [@co11]: let $n \geq 3$ and let $X_{n}$ be generic. Then the morphism $\Phi$ that maps each generator $t_{j}$ of ${\operatorname{UC}}(n+1)$ to the involution $\iota_{j}$ of $X_{n}$ is an isomorphism from ${\operatorname{UC}}(n+1)$ to ${\operatorname{Bir}}(X_{n})$. Here “generic” means $X_{n}$ belongs to the complement of some countable union of proper closed subvarieties of the complete linear system $\big| (2, 2, \ldots , 2)\big|$. Let $X \subset V$ be a projective variety. 
The *decomposition group* of $X$ is the group $$\begin{aligned} {\operatorname{Dec}}(V, X) {\coloneqq}\{f \in {\operatorname{Bir}}(V)\ |\ f(X) =X \text{ and } f|_{X} \in {\operatorname{Bir}}(X) \}.\end{aligned}$$ The *inertia group* of $X$ is the group $$\begin{aligned} \label{inertia} {\operatorname{Ine}}(V, X) {\coloneqq}\{f \in {\operatorname{Dec}}(V, X)\ |\ f|_{X} = {\operatorname{id}}_{X}\}.\end{aligned}$$ Then it is natural to consider the following question: \[qu\] Is the sequence $$\begin{aligned} \label{se} 1 \longrightarrow {\operatorname{Ine}}(V, X) \longrightarrow {\operatorname{Dec}}(V, X) \overset{\gamma}{\longrightarrow} {\operatorname{Bir}}(X) \longrightarrow 1\end{aligned}$$ exact, i.e., is $\gamma$ surjective? Note that, in general, this sequence is not exact, i.e., $\gamma$ is not surjective (see Remark \[k3\]). When the sequence is exact, the group ${\operatorname{Ine}}(V, X)$ measures how many ways one can extend ${\operatorname{Bir}}(X)$ to the birational automorphisms of the ambient space $V$. Our main result is the following theorem, answering a question asked by Ludmil Katzarkov: \[intro\] Let $X_{n} \subset P(n+1)$ be a smooth hypersurface of multidegree $(2, 2, \ldots, 2)$ and $n \geq 3$. Then: - The sequence (\[se\]) is exact for the pair $(P(n+1), X_{n})$. - If, in addition, $X_{n}$ is generic, there are $n+1$ elements $\rho_{i}$ $(1 \leq i \leq n+1)$ of ${\operatorname{Ine}}(P(n+1), X_{n})$ such that $$\langle \rho_{1}, \rho_{2}, \ldots , \rho_{n+1} \rangle \simeq \underbrace{{\mathbb Z}* {\mathbb Z}* \cdots * {\mathbb Z}}_{n+1} \subset {\operatorname{Ine}}(P(n+1), X_{n}).$$ In particular, ${\operatorname{Ine}}(P(n+1), X_{n})$ is an infinite non-commutative group. Our proof of Theorem \[intro\] is based on an explicit computation of elementary flavour. We also consider another type of Calabi-Yau manifolds, namely smooth hypersurfaces of degree $n+2$ in ${\mathbb P}^{n+1}$, and obtain the following result: \[intro2\] Suppose $n \geq 3$. 
Let $M_{n} = (n+2) \subset {\mathbb P}^{n+1}$ be a smooth hypersurface of degree $n+2$. Then both the decomposition and inertia groups of $M_{n}$ are as small as possible. More precisely: - ${\operatorname{Dec}}({\mathbb P}^{n+1}, M_{n}) = \{ f \in {\operatorname{PGL}}(n+2, {\mathbb C}) = {\operatorname{Aut}}({\mathbb P}^{n+1})\ |\ f(M_{n}) = M_{n}\}$. - ${\operatorname{Ine}}({\mathbb P}^{n+1}, M_{n}) = \{{\operatorname{id}}_{{\mathbb P}^{n+1}}\}$, and $\gamma {\colon}{\operatorname{Dec}}({\mathbb P}^{n+1}, M_{n}) \overset{\simeq}{\longrightarrow} {\operatorname{Bir}}(M_{n})$ is an isomorphism.
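The free-product structure underlying the universal Coxeter group ${\operatorname{UC}}(N)$ can be made concrete: every element has a unique reduced word with no two equal adjacent generators, so the involutions $t_{i}$ generate an infinite non-commutative group. A small computational sketch of this normal form (our own illustration, not part of the paper):

```python
def reduce_word(word):
    """Normal form in UC(N): generators t_i are involutions (t_i^2 = 1),
    so repeatedly cancel any two equal adjacent letters."""
    out = []
    for g in word:
        if out and out[-1] == g:
            out.pop()  # t_i * t_i = identity
        else:
            out.append(g)
    return tuple(out)

def multiply(w1, w2):
    """Product of two group elements given as words in the generators."""
    return reduce_word(list(w1) + list(w2))

# t_1 * t_2 has infinite order: (t_1 t_2)^k is a reduced word of length 2k,
word = ()
for _ in range(5):
    word = multiply(word, (1, 2))
# and UC(N) is non-commutative: t_1 t_2 != t_2 t_1.
```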
null
--- abstract: 'The Particle Swarm Optimization (PSO) algorithm is developed for solving the Schaffer F6 function in fewer than $4000$ function evaluations on a total of $30$ runs. Four variations of the Full Model of Particle Swarm Optimization (PSO) algorithms are presented, which consist of combinations of Ring and Star topologies with Synchronous and Asynchronous particle updates. The relative performance of these variations is explored.' --- There are four learning models for PSO. The Full Model learns from itself and others: $\phi_{1} > 0$, $\phi_{2} > 0$. The Cognition Model learns from itself: $\phi_{1} > 0$, $\phi_{2} = 0$. The Social Model learns from others: $\phi_{1} = 0$, $\phi_{2} > 0$. The Selfless Model learns from others, $\phi_{1} = 0$, $\phi_{2} > 0$, except for the best particle in the swarm, which learns from changing itself randomly ($g \neq i$) [@b4]. There are two types of PSO topologies: Ring and Star. The star topology is dynamic, but the ring topology is not. For the star neighborhood topology, the social component of the particle velocity update reflects information obtained from all the particles in the swarm [@b1]. There are two types of particle update methods: asynchronous and synchronous. The asynchronous method updates the particles one at a time, while the synchronous method updates the particles all at once. The asynchronous update method is similar to the Steady-State Genetic Algorithm update method, while the synchronous update method is similar to the Generational Genetic Algorithm update method. The Asynchronous Particle Update Method allows newly discovered solutions to be used more quickly [@b4]. Synchronous updates of the best positions are done separately from the particle position updates. Asynchronous updates calculate the new best positions after each particle position update and have the advantage of being given immediate feedback about the best regions of the search space. Feedback with synchronous updates is only given once per iteration. 
Carlisle and Dozier reason that asynchronous updates are more important for *lbest* PSO, where immediate feedback will be more beneficial in loosely connected swarms, while synchronous updates are more appropriate for *gbest* PSO [@b1]. Having the algorithm terminate when a maximum number of iterations, or function evaluations, has been exceeded is useful when the objective is to evaluate the best solution found in a restricted time period [@b1]. Methodology =========== In PSO, the vectors are $\textbf{x} = <x_{k0},x_{k1},...,x_{kn-1}>$, $\textbf{p} = <p_{k0},p_{k1},...,p_{kn-1}>$, and $\textbf{v} = <v_{k0},v_{k1},...,v_{kn-1}>$, where $k$ represents the particle and $n$ represents the dimension. The x-vector represents the particle’s current location in the search space. The p-vector represents the location of the best solution found so far by the particle. The v-vector represents the gradient (direction) that the particle will travel in if undisturbed [@b4]. The fitness values are $x_{fitness}(i)$ and $p_{fitness}(i)$. The x-fitness records the fitness of the x-vector, and the p-fitness records the fitness of the p-vector [@b4]. Ring Topology with Synchronous Particle Update PSO -------------------------------------------------- Ring Topology with Synchronous Particle Update PSO (RS PSO) is used for sparsely connected populations so as to speed up convergence. In this case the particles have a predefined neighborhood based on their location in the topological space. The connection between the particles increases the convergence speed, which causes the swarm to focus the search on local optima by exploiting the information of solutions found in the neighborhood. The synchronous update provides feedback about the best region of the search space once every iteration, when all the particles have moved at least once from their previous position. 
Ring Topology with Asynchronous Particle Update PSO --------------------------------------------------- In the Ring Topology with Asynchronous Particle Update PSO (RA PSO), information moves at a slower rate through the social network, so convergence is slower, but larger parts of the search space are covered compared to the star structure. Asynchronous updates provide immediate feedback about the best regions of the search space, while synchronous updates only provide feedback once per iteration. Star Topology with Synchronous Particle Update PSO -------------------------------------------------- The Star Topology with Synchronous Particle Update PSO (SS PSO) uses a global neighborhood with the star topology, so every particle’s best-known position is visible to the entire swarm. The synchronous update only provides feedback once each cycle, so all the particles in the swarm will update their positions before more feedback is provided, instead of checking to see if one of the recently updated particles has a better fit than the particle deemed best fit at the beginning of the cycle. Star Topology with Asynchronous Particle Update PSO --------------------------------------------------- The Star Topology with Asynchronous Particle Update PSO (SA PSO) updates particles one at a time, which allows newly discovered solutions to be used more quickly. The Star Topology uses a global neighborhood, meaning that the entire swarm can communicate with one another and each particle bases its search on the global best particle known to the swarm, indexed by $g$. Experiment ========== The experiment consists of four instances of a Full Model PSO with a cognition learning rate, $\phi_{1}$, and a social learning rate, $\phi_{2}$, equal to 2.05. 
To regulate the velocity and improve the performance of the PSO, the constriction coefficient is implemented to ensure convergence. The inertia weight, $\omega$, is also implemented to control the exploration and exploitation abilities of the swarm. Both topologies in this experiment use an $\omega$ value of 1.0, in order to facilitate exploration and increase diversity. Each topology is paired with both particle update methods. Asynchronous Particle Update is a method that updates particles one at a time and allows newly discovered solutions to be used more quickly, while Synchronous Particle Update is a method that updates all the particles at once. The four instances of the PSO are the variations of the two particle update methods and the two topologies described above. With these four instances of the PSO, a population of 30 particles is evolved and each particle’s fitness is evaluated; this is done 30 times for each PSO. The number of function evaluations is observed after each population of 30 is evolved, and these 30 best function evaluation values for the 30 runs are used to perform ANOVA tests and T-tests to determine the equivalence classes of the four instances of the PSO. Ring Topology with Synchronous Particle Update PSO -------------------------------------------------- The RS PSO updates synchronously at the end of every iteration. It uses the ring topology to compare and select the best solution within a neighborhood of three. Ring Topology with Asynchronous Particle Update PSO --------------------------------------------------- The RA PSO updates asynchronously, which allows for quick updates, and uses the ring topology to compare solutions within a neighborhood of three. Star Topology with Synchronous Particle Update PSO -------------------------------------------------- The SS PSO updates synchronously, which only allows for one update per iteration, and uses the star topology to compare solutions within a global neighborhood. 
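One of the four variants can be sketched as follows: star topology with synchronous global-best update, Full Model with $\phi_{1}=\phi_{2}=2.05$, and Clerc's constriction coefficient, applied to the Schaffer F6 function. This is our own illustrative implementation under stated assumptions (function names, bounds, and particle count are ours), not the authors' code.

```python
import math
import random

def schaffer_f6(x, y):
    """Schaffer F6 benchmark; global minimum 0 at the origin."""
    s = x * x + y * y
    return 0.5 + (math.sin(math.sqrt(s)) ** 2 - 0.5) / (1 + 0.001 * s) ** 2

def pso_star_sync(n_particles=30, iters=200, phi1=2.05, phi2=2.05, seed=1):
    random.seed(seed)
    phi = phi1 + phi2  # 4.1 > 4, so the constriction factor is well defined
    chi = 2.0 / abs(2 - phi - math.sqrt(phi * phi - 4 * phi))  # ~0.7298
    pos = [[random.uniform(-100, 100) for _ in range(2)]
           for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pfit = [schaffer_f6(*p) for p in pos]
    g = min(range(n_particles), key=lambda i: pfit[i])  # global best index
    for _ in range(iters):
        for i in range(n_particles):  # gbest held fixed within the sweep
            for d in range(2):
                vel[i][d] = chi * (vel[i][d]
                    + phi1 * random.random() * (pbest[i][d] - pos[i][d])
                    + phi2 * random.random() * (pbest[g][d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = schaffer_f6(*pos[i])
            if f < pfit[i]:
                pfit[i], pbest[i] = f, pos[i][:]
        # Synchronous update: gbest refreshed once per iteration.
        g = min(range(n_particles), key=lambda i: pfit[i])
    return pbest[g], pfit[g]
```

An asynchronous variant would instead refresh `g` after each individual particle move, and a ring variant would replace `pbest[g]` with the best position among each particle's two topological neighbors.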
Star Topology with Asynchronous Particle Update PSO --------------------------------------------------- The SA PSO updates asynchronously, which allows for quicker updates on newly discovered solutions. The star topology uses a global neighborhood to compare solutions, which allows for quicker convergence. Results =======

  **Run**   ***RS***   ***RA***   ***SS***   ***SA***
  --------- ---------- ---------- ---------- ----------
  1         4000       77         129        75
  2         4000       71         57         72
  3         82         82         82         65
  4         62         60         4000       71
  5         4000       72         49         56
  6         72         4000       48         4000
  7         95         4000       83         189
  8         45         4000       4000       4000
  9         71         54         4000       4000
  10        61         68         91         4000
  11        4000       66         38         89
  12        50         4000       71         4000
  13        4000       4000       4000       4000
  14        4000       72         4000       4000
  15        4000       65         4000       4000
  16
null
--- abstract: 'This report reviews the recent experimental results from the CLAS collaboration (Hall B of Jefferson Lab, or JLab) on Deeply Virtual Compton Scattering (DVCS) and Deeply Virtual Meson Production (DVMP) and discusses their interpretation in the framework of Generalized Parton Distributions (GPDs). Initial results obtained from JLab 6 GeV data indicate that DVCS might already be interpretable in this framework, while GPD models fail to describe the exclusive meson production (DVMP) data with the GPD parameterizations presently used. An exception is $\phi$ meson production, for which the GPD mechanism appears to apply. The recent global analyses aiming to extract GPDs by fitting CLAS and world DVCS data are discussed. The GPD experimental program at CLAS12, planned with the upcoming 12 GeV upgrade of JLab, is briefly presented.' author: - | Hyon-Suk Jo\ \ Institut de Physique Nucléaire d’Orsay, 91406 Orsay, France title: '[**Deeply Virtual Compton Scattering and Meson Production at JLab/CLAS**]{}' --- Introduction {#introduction .unnumbered} ============ Generalized Parton Distributions take the description of the complex internal structure of the nucleon to a new level by providing access to, among other things, the correlations between the (transverse) position and (longitudinal) momentum distributions of the partons in the nucleon. They also give access to the orbital momentum contribution of partons to the spin of the nucleon. GPDs can be accessed via Deeply Virtual Compton Scattering and exclusive meson electroproduction, processes where an electron interacts with a parton from the nucleon by the exchange of a virtual photon and that parton radiates a real photon (in the case of DVCS) or hadronizes into a meson (in the case of DVMP). 
The amplitude of the studied process can be factorized into a hard-scattering part, exactly calculable in pQCD or QED, and a non-perturbative part, representing the soft structure of the nucleon, parametrized by the GPDs. At leading-twist and leading-order approximation, there are four independent quark helicity conserving GPDs for the nucleon: $H$, $E$, $\tilde{H}$ and $\tilde{E}$. These GPDs are functions of three variables $x$, $\xi$ and $t$, among which only $\xi$ and $t$ are experimentally accessible. The variable $x$ is the average longitudinal momentum fraction carried by the struck parton. The variable $\xi$ is linked to the Bjorken variable $x_{B}$ through the asymptotic formula: $\xi=\frac{x_{B}}{2-x_{B}}$. The variable $t$ is the squared momentum transfer between the initial and final nucleon. Since the variable $x$ is not experimentally accessible, only Compton Form Factors, or CFFs (${\cal H}$, ${\cal E}$, $\tilde{{\cal H}}$ and $\tilde{{\cal E}}$), whose real parts are weighted integrals of GPDs over $x$ and whose imaginary parts are combinations of GPDs at the lines $x=\pm\xi$, can be extracted. The reader is referred to Ref. [@gpd1] for the formalism. Deeply Virtual Compton Scattering {#dvcs .unnumbered} ============ ![Handbag diagram for DVCS (left) and diagrams for Bethe-Heitler (right), the two processes contributing to the amplitude of the $eN \to eN\gamma$ reaction. []{data-label="fig:diagrams"}](dvcs_bh.png){height="0.11\textheight"} DVCS is considered the cleanest process with which to access GPDs. The DVCS amplitude interferes with the amplitude of the Bethe-Heitler (BH) process, which leads to the exact same final state. In the BH process, the real photon is emitted by either the incoming or the scattered electron, while in the case of DVCS, it is emitted by the target nucleon (see Figure \[fig:diagrams\]). Although these two processes are experimentally indistinguishable, the BH is well known and exactly calculable in QED. 
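For concreteness, the asymptotic relation between the skewness and the Bjorken variable can be evaluated directly (a trivial helper of our own, purely for illustration):

```python
def xi_from_xb(x_bjorken):
    """Skewness xi from Bjorken x_B in the asymptotic limit:
    xi = x_B / (2 - x_B)."""
    return x_bjorken / (2.0 - x_bjorken)

# e.g. a valence-region kinematic point x_B = 0.3 gives xi = 0.3/1.7 ~ 0.176
```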
At current JLab energies (6 GeV), the BH process is highly dominant (in most of the phase space), but the DVCS process can be accessed via the interference term arising from the two processes. With a polarized beam or/and a polarized target, different types of asymmetries can be extracted: beam-spin asymmetries ($A_{LU}$), longitudinally polarized target-spin asymmetries ($A_{UL}$), transversely polarized target-spin asymmetries ($A_{UT}$), and double-spin asymmetries ($A_{LL}$, $A_{LT}$). Each of these asymmetries is sensitive to a different combination of the CFFs. ![DVCS beam-spin asymmetries as a function of $-t$, for different values of $Q^{2}$ and $x_{B}$. The data are from the e1-DVCS experiment [@bsa3]; earlier results appeared in Ref. [@bsa1]. The black dashed curves represent Regge calculations [@jml]. The other curves are the VGG GPD model [@vgg1] at twist-2 (solid) and twist-3 (dashed) levels, with the contribution of the GPD $H$ only. []{data-label="fig:bsa"}](bsa.png){height="0.34\textheight"} The first results on DVCS beam-spin asymmetries published by the CLAS collaboration were extracted using data from non-dedicated experiments [@bsa1; @bsa2]. Also using non-dedicated data, CLAS published DVCS longitudinally polarized target-spin asymmetries in 2006 [@tsa]. In 2005, the first part of the e1-DVCS experiment was carried out in Hall B of JLab using the CLAS spectrometer [@clas] and an additional electromagnetic calorimeter, made of 424 lead-tungstate scintillating crystals read out via avalanche photodiodes, specially designed and built for the experiment. This additional calorimeter was located at forward angles, where the DVCS/BH photons are mostly emitted, as the standard CLAS configuration does not allow detection at those forward angles. This first CLAS experiment dedicated to DVCS measurements, with this upgraded setup allowing a fully exclusive measurement, ran using a 5.766 GeV polarized electron beam and a liquid-hydrogen target. 
From the data of this experiment, CLAS published in 2008 the largest set of DVCS beam-spin asymmetries ever extracted in the valence quark region [@bsa3]. Figure \[fig:bsa\] shows the corresponding results as a function of $-t$ for different bins in ($Q^{2}$, $x_{B}$). The predictions using the GPD model of VGG (Vanderhaeghen, Guichon, Guidal) [@vgg1; @vgg2] overestimate the asymmetries at low $|t|$, especially for small values of $Q^{2}$, which can be expected since the GPD mechanism is supposed to be valid at high $Q^{2}$. Regge calculations [@jml] are in fair agreement with the results at low $Q^{2}$ but fail to describe them at high $Q^{2}$, as expected. Further CLAS asymmetry results have also been reported [@hsj]. Having both the beam-spin asymmetries and the longitudinally polarized target-spin asymmetries, a largely model-independent GPD analysis at leading twist was performed, fitting simultaneously the values for $A_{LU}$ and $A_{UL}$ obtained with CLAS at three values of $t$ and fixed $x_{B}$, to extract numerical constraints on the imaginary parts of the Compton Form Factors (CFFs) ${\cal H}$ and $\tilde{{\cal H}}$, with average uncertainties of the order of 30% [@guidal_clas]. Before that, the same analysis was performed fitting the DVCS unpolarized and polarized cross sections published by the JLab Hall A collaboration [@halla] to extract numerical constraints on the real and imaginary parts of the CFF ${\cal H}$ [@guidal_fitter_code]. Another GPD analysis at leading twist, assuming the dominance of the GPD $H$ (the contributions of $\tilde{H}$, $E$ and $\tilde{E}$ being neglected) and using the CLAS $A_{LU}$ data as well as the DVCS JLab Hall A data, was performed to extract constraints on the real and imaginary parts of the CFF ${\cal H}$ [@moutarde]. Similar analyses were performed using results published by the HERMES collaboration [@guidal_mout].
--- abstract: 'We report the observation of simultaneous quantum degeneracy in a dilute gaseous Bose-Fermi mixture of metastable atoms. Sympathetic cooling of helium-3 (fermion) by helium-4 (boson), both in the lowest triplet state, allows us to produce ensembles containing more than $10^{6}$ atoms of each isotope at temperatures below 1 $\mu$K, and achieve a fermionic degeneracy parameter of $T/T_{F}=0.45$. Due to their high internal energy, the detection of individual metastable atoms with sub-nanosecond time resolution is possible, permitting the study of bosonic and fermionic quantum gases with unprecedented precision. This may lead to metastable helium becoming the mainstay of quantum atom optics.' author: - 'J. M. McNamara' - 'T. Jeltes' - 'A. S. K.' - 'W. Hogervorst' - 'W. Vassen' [@wilkj90]. More recently, the advent of laser cooling and trapping techniques heralded the production of Bose-Einstein condensates (BECs) [@ande95; @davi95] and the observation of Fermi degeneracy [@dema99; @trus01] in weakly interacting atomic gases. To date nine different atomic species have been Bose condensed, each exhibiting its own unique features besides many generic phenomena of importance to an increasing number of disciplines. We can expect that studies of degenerate fermions will have a similar impact, and indeed they have been the object of much study in recent years, culminating in the detection of Bardeen-Cooper-Schrieffer (BCS) pairs and their superfluidity [@zwie05]. However, only two fermionic species have so far been brought to degeneracy in the dilute gaseous phase: $^{40}$K [@dema99] and $^6$Li [@trus01].
Degenerate atomic Fermi gases have been difficult to realize for two reasons: firstly, evaporative cooling [@hess86] relies upon elastic rethermalizing collisions, which at the temperatures of interest ($<$ 1 mK) are primarily s-wave in nature and are forbidden for identical fermions; and secondly, the number of fermionic isotopes suitable for laser cooling and trapping is small. Sympathetic cooling [@lars86; @myat97] overcomes the limit to evaporative cooling by introducing a second component (spin-state, isotope or element) to the gas; thermalization between the two components then allows the mixture as a whole to be cooled. In 2001 a BEC of helium atoms in the metastable $2\;^{3}\textup{S}_{1}$ state (He\*) was realized [@robe01; @pere01]; more recently we reported the production of a He\* BEC containing a large ($>$ $10^7$) number of atoms [@tych06]. A quantum degenerate gas of He\* is unique in that the internal energy of the atoms (19.8 eV) is many orders of magnitude larger than their thermal energy ($10^{-10}$ eV per atom at 1 $\mu$K), allowing efficient single atom detection with a high temporal and spatial resolution in the plane of a microchannel plate (MCP) detector [@sche05]. In an unpolarized sample (as is the case in a magneto-optical trap (MOT)) the internal energy leads to large loss rates due to Penning ionization (PI) and associative ionization (AI) [@stas06]: $$\textup{He*}+\textup{He*}\rightarrow \textup{He}+\textup{He}^++e^- \quad ( \mbox{or}\quad \text{He}_{2}^{+}+e^-).$$ These losses are forbidden by angular momentum conservation in a spin-polarized He\* gas, in which all atoms have been transferred into a fully stretched magnetic substate. 
Spin-polarization suppresses ionizing losses by four orders of magnitude in the case of $^4$He\* [@shly94; @pere01]; it is only this suppression of the loss rate constant to an acceptable value of $\approx$ $10^{-14} \text{cm}^{3}/\text{s}$ [@tych06; @pere01] that has allowed the production of a BEC in this system [@robe01; @pere01; @tych06] (see also Ref. [@seid04]). In $^3$He\* the hyperfine interaction splits the $2\;^{3}\textup{S}_{1}$ state into an inverted doublet ($F$=3/2 and $F$=1/2, where $F$ is the total angular momentum quantum number) separated by 6.7 GHz (Fig. \[fig1\]a). Whether or not interactions would enhance spin-flip processes and the associated loss rate was unknown, and the prospect of a similarly acceptable level of suppression in the case of $^3$He\* (and indeed between $^3$He\* and $^4$He\*) was an open question before this work. Having realized the capability of producing large BECs of $^4$He\* [@tych06], and therefore large clouds of ultracold $^4$He\*, we use $^4$He\* to sympathetically cool a cloud of $^3$He\* into the quantum degenerate regime. In the manner demonstrated previously [@stas04], we have adapted our setup [@tych06] to allow the magneto-optical trapping of both He\* isotopes simultaneously. The present configuration traps a mixture of $N_{^3\text{He*}}=7\times 10^8$ and $N_{^4\text{He*}}=1.5\times 10^9$ atoms simultaneously at a temperature of $\approx$ 1 mK; a complication here is the need for a repumper exciting the $^3$He\* C2 transition, due to the near (-811 MHz) coincidence of the $^4$He\* laser cooling and $^3$He\* C9 transitions (Fig. \[fig1\]a) [@stas04]. Since we cannot cool such a large number of $^3$He\* atoms [@carr04], we reduce the number of $^3$He\* atoms in the two-isotope MOT (TIMOT) to $\approx$ $10^7$ by either altering the ratio $^3$He:$^4$He in our helium reservoir or, more simply, by loading the TIMOT with $^3$He\* for a shorter period.
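The degeneracy parameter quoted in the abstract can be cross-checked with a short sketch. The harmonic-trap relation $k_{B}T_{F}=\hbar\bar{\omega}(6N)^{1/3}$ is standard; the trap frequencies and atom number are the values quoted in this text, and the function name is ours:

```python
import math

HBAR = 1.054571817e-34  # J s
KB = 1.380649e-23       # J / K

def fermi_temperature(n_atoms, nu_r_hz, nu_a_hz):
    """Fermi temperature of N spin-polarized fermions in a harmonic trap,
    k_B T_F = hbar * omega_bar * (6 N)**(1/3), with geometric-mean
    trap frequency omega_bar = 2*pi*(nu_r**2 * nu_a)**(1/3)."""
    omega_bar = 2.0 * math.pi * (nu_r_hz**2 * nu_a_hz) ** (1.0 / 3.0)
    return HBAR * omega_bar * (6.0 * n_atoms) ** (1.0 / 3.0) / KB

# 3He* trap frequencies quoted in the text, N ~ 1e6 from the abstract
t_f = fermi_temperature(1.0e6, 273.0, 54.0)   # ~1.4e-6 K
# a cloud at T ~ 0.6 uK then gives T/T_F ~ 0.43, close to the quoted 0.45
```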
Spin-polarization of the mixture to the $^3$He\* $|3/2,+3/2\rangle$ and $^4$He\* $|1,+1\rangle$ states prior to magnetic trapping not only suppresses PI and AI, but also enhances the transfer efficiency of the mixture into the magnetic trap. A 1D-Doppler cooling stage (Fig. \[fig1\]b) reduces the sample temperature to $T=0.13$ mK without loss of atoms, increasing the $^4$He\* phase space density by a factor of 600 to $\approx$ $10^{-4}$ and greatly improving the initial conditions for evaporative cooling. We note at this point that the application of 1D-Doppler cooling to the $^4$He\* component alone already leads to sympathetic cooling of $^3$He\*; however, the process appears to be more efficient if we actively cool both components simultaneously. During these experiments the lifetime of a pure sample of either $^3$He\* or $^4$He\* in the magnetic trap was limited by the background pressure in our ultra-high vacuum chamber to $\approx$ $110\;\mbox{s}$, whilst the lifetime of the mixture was only slightly shorter at $\approx$ $100\;\mbox{s}$, indicating that the suppression of PI and AI during $^3$He\*-$^3$He\* and $^3$He\*-$^4$He\* collisions works very well. In order to further increase the collision rate in our cloud, we adiabatically compress it during 200 ms by increasing the trap frequencies to their final radial and axial values: $\nu_{r}=273$ Hz and $\nu_{a}=54$ Hz for $^3$He\*, and $\nu_{r}=237$ Hz and $\
--- abstract: 'A method is presented for solving the discrete-time finite-horizon Linear Quadratic Regulator (LQR) problem subject to auxiliary linear equality constraints, such as fixed end-point constraints. The method explicitly determines an affine relationship between the control and state variables, as in standard Riccati recursion, giving rise to feedback control policies that account for constraints. Since the linearly-constrained LQR problem arises commonly in robotic trajectory optimization, having a method that can efficiently compute these solutions is important. We demonstrate some of the useful properties and interpretations of said control policies, and we compare the computation time of our method against existing methods.' In both continuous and discrete-time settings, the LQR problem is that of finding an infinite or finite-length control sequence for a linear dynamical system that is optimal with respect to a quadratic cost function. Either as a stand-alone means for computing trajectories and controllers for linear systems, or as a method for computing successive approximate trajectories for nonlinear systems, it shows up in one way or another in the computation of nearly all finite-length trajectory optimization problems. Because of the importance of trajectory optimization in controlling robotic systems, and because of the prevalence of the LQR problem in those optimizations, developing highly efficient methods capable of solving LQR-type problems is an important endeavor. The focus of this paper is on a particular instance of the discrete-time, finite-horizon variant of the LQR problem, namely that which is subject to linear constraints. These constraints are useful in a variety of situations. As an example, suppose we want to plan a trajectory that minimizes the amount of energy needed to get a robot to some desired configuration.
If the dynamics of the robot can be modeled as a linear system, this problem takes the form of linearly-constrained LQR. We can also imagine constraints appearing at multiple stages in the trajectory and having varying dimensions. Perhaps we require that the center of mass of the robot not move in the first half of the trajectory. Of course, many robots have non-linear dynamics. But even when planning constrained trajectories for non-linear systems, iterative solution methods such as Sequential Quadratic Programming make successive local approximations of the trajectory optimization problem, which result in a series of constrained LQR problems to be solved. Understanding that the linearly-constrained LQR problem is common, we provide some context surrounding methods for solving these types of problems. The property that any trajectory must satisfy linear dynamics can be thought of as a sequence of linear constraints on successive states in the trajectory. And since all auxiliary constraints we consider are also linear, these problems result in quadratic programs (QPs), just as unconstrained LQR problems are QPs [@boyd2004convex]. Under standard assumptions, the constrained problems are also strictly convex and have a unique solution. Unlike unconstrained LQR, however, the presence of additional constraints causes some computational difficulties. From a pure optimization standpoint, any of the approaches to solving convex QPs can be applied to the constrained LQR problem. However, using general methods in a naive way fails to exploit the unique structure of the optimal control problem, and suffers a computational complexity which grows cubically with the time horizon being considered in the control problem (trajectory length).
Due to the sparsity of the problem data in the time domain, the KKT conditions of optimality for optimal control problems have a banded nature, and linear algebra packages designed for such systems can be used to solve the problem with linear complexity with respect to the trajectory length [@wright1996applying]. However, these approaches result in what we will call *open-loop* trajectories, producing only numerical values of the state and control vectors making up the trajectory. It is well known that the unconstrained LQR problem admits a solution based on dynamic programming, sometimes referred to as the discrete-time Riccati recursion. This method can solve unconstrained LQR problems in linear time complexity while *also* providing an affine relationship between the state and control variables. It is because we would like to derive such policies for the constrained case that the aforementioned computational difficulties show up. The presence of auxiliary constraints has meant that, up until now, a method for the constrained LQR problem analogous to Riccati recursion has not been developed. This is due to the fact that linear constraints of dimension exceeding that of the control cannot always be thought of as time-separable. This means that the choice of control at a particular time-point may not always be able to satisfy a constraint appearing at that time-point (for arbitrary values of the corresponding state at that time). We will see that this complication requires reasoning about future constraints yet to come when computing the control in the present. This is the very reason why, as we will see, existing methods either make restrictive assumptions on the dimension of constraints, or require a higher order of computational complexity to compute solutions.
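For reference, the unconstrained discrete-time Riccati recursion mentioned above can be sketched in a few lines. This is a generic textbook implementation, not the constrained method of this paper; the names are ours:

```python
import numpy as np

def riccati_gains(A, B, Q, R, Qf, T):
    """Backward Riccati recursion for the unconstrained, discrete-time,
    finite-horizon LQR problem with dynamics x_{t+1} = A x_t + B u_t and
    stage cost x'Qx + u'Ru (terminal cost x'Qf x).  Returns feedback
    gains K_0 .. K_{T-1} such that the optimal control is u_t = -K_t x_t."""
    P = Qf
    gains = []
    for _ in range(T):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]  # reverse: the recursion runs backward in time

# scalar example: the gains approach the stationary value (sqrt(5)-1)/2
gains = riccati_gains(np.eye(1), np.eye(1), np.eye(1), np.eye(1), np.eye(1), 60)
```

Each step costs a fixed amount of dense linear algebra, so the total cost is linear in the horizon $T$; it is precisely this structure that the constrained case breaks.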
Because of this, if the problem does not satisfy the restrictive assumptions used by existing methods, solution approaches are currently limited to QP solvers, which only offer open-loop trajectories, or suffer cubic time-complexity with respect to the trajectory length if control policies are desired. Given this context, we can now state the contribution of this work: We present a method for computing constraint-aware feedback control policies for discrete-time, time-varying, linear-dynamical systems which are optimal with respect to a quadratic cost function and subject to auxiliary linear equality constraints. We make no assumptions about the dimension of the constraints. In section \[sec:priorwork\] we discuss in more detail existing methods which have addressed the same problem and the limitations of those works. In section \[sec:method\] we formally define the problem and present our method. In section \[sec:analysis\] we discuss computational complexity, and present an alternative approach to solving the problem. We also demonstrate some of the advantages of the control policies derived from our method when compared to the open-loop solutions, and discuss applicability to SQP methods. PRIOR WORK {#sec:priorwork} ========== Consideration of the constrained linear-quadratic optimal control problem extends back to the early days of the field of control. Many authors have presented methods for constraining control systems to a time-invariant linear subspace. The author of [@johnson1973stabilization] studied this issue for continuous systems under the name subspace stabilization. In the works [@hemami1979modeling] and [@yu1996design] the same problem is addressed by designing pole-assignment controllers. More recently, [@posa2016optimization] utilize a very similar method to generate a time-varying controller for tracking existing trajectories. This method is also derived in continuous-time, and hence requires the constraint dimension to be constant.
The authors of [@ko2007optimal] developed a more comprehensive method for computing optimal control policies for discrete-time, time-varying objective functions, but only consider a single time-invariant constraint of constant dimension. In [@park2008lq] a method is presented for solving continuous- and discrete-time LQR problems with fixed terminal states. This method is able to reason about a constraint appearing at only a portion of the trajectory, namely the end, but does not account for additional constraints appearing at other times. Perhaps the most general method for computing linearly constrained LQR control policies was presented in [@sideris2011riccati]. However, that method suffers a computational complexity which scales cubically in the worst case, i.e. when no controls are present. As a part of the method presented in [@xie2017differential], a technique for satisfying linear constraints at arbitrary times in the trajectory is presented, but that method assumes that the constraint dimension does not exceed that of the control. Most recently, [@giftthaler2017projection] present a method for solving problems with time-varying constraints, but still require that the relative degree of these constraints does not exceed 1. This is a slightly less restrictive condition than requiring the dimension of the constraints to be less than that of the control, but still limits the applicability of that method in that it cannot handle full-state constraints when the control dimension is less than the state dimension. As mentioned above, the problem can also be solved using numerical linear algebra techniques, as discussed for example in [@wright1999numerical] and, particularly for the optimal control problem, [@wright1996applying]. Again, these methods are very general and efficient but fail to produce the desired feedback control policies. The method we present combines the desirable properties of all these methods into one.
The contribution of this method is that it is capable of generating optimal feedback control policies for general, discrete-time, linearly-constrained LQR problems while maintaining a linear computational complexity with respect to control horizon. To the best of our knowledge, the
--- abstract: 'We show that a tetragonal lattice of weakly interacting cavities with uniaxial electromagnetic response is the photonic counterpart of topological crystalline insulators, a new topological phase of atomic band insulators. Namely, the frequency band structure stemming from the interaction of resonant modes of the individual cavities exhibits an omnidirectional band gap within which gapless surface states emerge for finite slabs of the lattice. Due to the equivalence of a topological crystalline insulator with its photonic-crystal analog, the frequency band structure of the latter can be characterized by a $Z_{2}$ topological invariant. Such a topological photonic crystal can be realized in the microwave regime as a three-dimensional lattice of dielectric particles embedded within a continuous network of thin metallic wires.' Recently, a new analogy between electron and photon states in periodic structures has been proposed by Raghu and Haldane, [@haldane] namely the one-way chiral edge states in two-dimensional (2D) photonic-crystal slabs, which are similar to the corresponding edge states in the quantum Hall effect. [@one_way] The photonic chiral edge states are a result of time-reversal (TR) symmetry breaking which comes about with the inclusion of gyroelectric/gyromagnetic material components; these states are robust to disorder and structural imperfections as long as the corresponding topological invariant (the Chern number in this case) remains constant. In certain atomic solids, TR symmetry breaking is not a prerequisite for the appearance of topological electron states, as is the case in the quantum Hall effect. Namely, when spin-orbit interactions are included in a TR symmetric graphene sheet, a bulk excitation gap and spin-filtered edge states emerge [@mele_2005] without the presence of an external magnetic field, a phenomenon which is known in the literature as the quantum spin Hall effect.
Its generalization to three-dimensional (3D) atomic solids led to a new class of solids, namely, topological insulators. [@ti_papers] The latter possess a spin-orbit-induced energy gap and gapless surface states, exhibiting insulating behavior in the bulk and metallic behavior at the surface. Apart from topological insulators, where the spin-orbit band structure with TR symmetry defines the topological class of the corresponding electron states, other topological phases have been proposed, such as topological superconductors (band structure with particle-hole symmetry), [@ts_papers] magnetic insulators (band structure with magnetic translation symmetry), [@mi_papers] and, very recently, topological crystalline insulators. [@fu_prl] In this work, we propose a photonic analog of a topological crystalline insulator. Our model photonic system is a 3D crystal of weakly interacting resonators respecting TR symmetry and the point-symmetry group associated with a given crystal surface. As a result, the system possesses an omnidirectional band gap within which gapless surface states of the EM field are supported. It is shown that the corresponding photonic band structure is equivalent to the energy band structure of an atomic topological crystalline insulator and, as such, the corresponding states are topological states of the EM field classified by a $Z_{2}$ topological invariant. The frequency band structure of photonic crystals whose (periodically repeated) constituent scattering elements interact weakly with each other can be calculated by a method similar to the tight-binding method employed for atomic insulators and semiconductors.
Photonic bands amenable to a tight-binding-like description are, e.g., the bands stemming from the whispering-gallery modes of a lattice of high-index scatterers, [@lido] the defect bands of a sublattice of point defects within a photonic crystal with an absolute band gap, [@bayindir] the plasmonic bands of a lattice of metallic spheres [@quinten] or of a lattice of dielectric cavities within a metallic host. [@stefanou_ssc] In the latter case, the frequency band structure stems from the weak interaction of the surface plasmons of each individual cavity, [@stefanou_ssc] wherein light propagates within the crystal volume by a hopping mechanism. Such a lattice constitutes the photonic analog of a topological crystalline insulator presented in this work, whose frequency band structure will be revealed based on a photonic tight-binding treatment within the framework of the coupled-dipole method. [@cde] The latter is an exact means of solving Maxwell’s equations in the presence of nonmagnetic scatterers. Tight-binding description of dielectric cavities in a plasmonic host ==================================================================== We consider a lattice of dielectric cavities within a lossless metallic host. The $i$-th cavity is represented by a dipole of moment ${\bf P}_{i}=(P_{i;x},P_{i;y},P_{i;z})$ which stems from an incident electric field ${\bf E}^{inc}$ and the field which is scattered by all the other cavities of the lattice. This way the dipole moments of all the cavities are coupled to each other and to the external field, leading to the coupled-dipole equation $${\bf P}_{i}= \boldsymbol\alpha_{i}(\omega) [{\bf E}^{inc} + \sum_{i' \neq i} {\bf G}_{i i'}(\omega) {\bf P}_{i'}]. \label{eq:cde}$$ ${\bf G}_{i i'}(\omega)$ is the electric part of the free-space Green’s tensor and ${\bf \boldsymbol\alpha}_{i}(\omega)$ is the $3 \times 3$ polarizability tensor of the $i$-th cavity.
We assume that the cavities exhibit a uniaxial EM response, i.e., the corresponding polarizability tensor is diagonal with $\alpha_{x}=\alpha_{y}=\alpha_{\parallel}$ and $\alpha_{z}=\alpha_{\perp}$. For strong anisotropy, the cavity resonances within the $xy$-plane and along the $z$-axis can be spectrally distinct; thus, around the region of, e.g., the cavity resonance $\omega_{\parallel}$ within the $xy$-plane, $\alpha_{\perp} \ll \alpha_{\parallel}$ (see appendix). In this case, one can separate the EM response within the $xy$-plane from that along the $z$-axis, and Eq. (\[eq:cde\]) reduces to $${\bf P}_{i}= \alpha_{\parallel}(\omega) \sum_{i' \neq i} {\bf G}_{i i'}(\omega) {\bf P}_{i'}. \label{eq:cde_no_field}$$ where we have set ${\bf E}^{inc}={\bf 0}$ since we are seeking the eigenmodes of the system of cavities. Also, now, ${\bf P}_{i}=(P_{i;x},P_{i;y})$. For a particle/cavity of electric permittivity $\epsilon_{\parallel}$ embedded within a material host of permittivity $\epsilon_{h}$, the polarizability $\alpha_{\parallel}$ is given by the Clausius-Mossotti formula $$\alpha_{\parallel}=\frac{3 V}{4 \pi} \frac{\epsilon_{\parallel}-\epsilon_{h}}{\epsilon_{\parallel}+ 2\epsilon_{h}} \label{eq:cm}$$ where $V$ is the volume of the particle/cavity. For a lossless plasmonic (metallic) host, in which case the electric permittivity can be taken to be of Drude type, i.e., $\epsilon_{h}=1-\omega_{p}^{2} / \omega^{2}$ (where $\omega_{p}$ is the bulk plasma frequency), the polarizability $\alpha_{\parallel}$ exhibits a pole at $\omega_{\parallel}=\omega_{p} \sqrt{2/ (\epsilon_{\parallel} +2)}$ (surface plasmon resonance). By making a Laurent expansion of $\alpha_{\parallel}$ around $\omega_{\parallel}$ and keeping the leading term, we may write $$\alpha_{\parallel}= \frac{F} {\omega - \omega_{\parallel}} \equiv \frac{1} {\Omega} \label{eq:a_laurent}$$ where $F=(\omega_{\parallel}/2) (\epsilon_{\parallel} - \epsilon_{h})/ (\epsilon_{\parallel}+2)$.
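A small numerical sketch of the surface-plasmon pole condition $\epsilon_{\parallel}+2\epsilon_{h}(\omega)=0$ behind the Clausius-Mossotti formula above (illustrative only; the function name and the value $\epsilon_{\parallel}=12$ are ours):

```python
import math

def surface_plasmon_pole(eps_par, omega_p):
    """Frequency at which the Clausius-Mossotti polarizability diverges
    for a cavity of permittivity eps_par in a Drude host
    eps_h(w) = 1 - (omega_p / w)**2: the condition eps_par + 2*eps_h = 0
    gives omega_par = omega_p * sqrt(2 / (eps_par + 2))."""
    return omega_p * math.sqrt(2.0 / (eps_par + 2.0))

w_par = surface_plasmon_pole(12.0, 1.0)  # in units of omega_p
# consistency check: at w_par the host permittivity equals -eps_par/2
eps_h = 1.0 - 1.0 / w_par**2
```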
For a sufficiently high value of the permittivity of the dielectric cavity, i.e., $\epsilon_{\parallel} > 10$, the electric field of the surface plasmon is strongly localized at the surface of the cavity. As a result, in a periodic lattice of cavities, the interaction of neighboring surface plasmons is very weak, leading to very narrow frequency bands. By treating such a lattice in a tight-binding-like framework, we may assume that the Green’s tensor ${\bf G}_{i i'}(\omega)$ does not vary much with frequency and therefore ${\bf G}_{i i'}(\omega) \simeq {\bf G}_{i i'}(\omega_{\parallel})$. In this case, Eq. (\[eq:cde\_no\_field\]) becomes an eigenvalue problem $$\sum_{i' \neq i} {\bf G}_{i i'}(\omega_{\parallel})
--- abstract: 'NO$\nu$A is an accelerator-based neutrino oscillation experiment which has great potential to measure the last unknown mixing angle $\theta_{13}$, the neutrino mass hierarchy, and the CP-violation phase in the lepton sector, with 1) a 700 kW beam, 2) a detector sited 14 mrad off the beam axis, 3) an 810 km long baseline. The Near Detector on the Surface is fully functioning and taking both NuMI and Booster beam data. The far detector building achieved beneficial occupancy on April 13. This proceeding will focus on the DAQ software system.' author: - 'X. C. Tian, on behalf of the NO$\nu$A Collaboration' title: 'NO$\nu$A Data Acquisition Software System' --- Introduction ============ The next generation of long-baseline neutrino experiments [@NOvA; @T2K; @LBNE] aims to measure the third mixing angle $\theta_{13}$, determine whether CP is violated in the lepton sector, and resolve the neutrino mass hierarchy. The NuMI Off-axis electron-neutrino ($\nu_e$) Appearance (NO$\nu$A) experiment is the flagship experiment of the US domestic particle physics program, with the potential to address most of the fundamental questions in neutrino physics raised by the Particle Physics Project Prioritization Panel (P5). NO$\nu$A has two functionally identical detectors (Fig. \[detector\]), a 222 ton near detector located underground at Fermilab and a 14 kiloton far detector located in Ash River, Minnesota, with a baseline of 810 km. The detectors are composed of extruded PVC cells loaded with titanium dioxide to enhance reflectivity. There are 16,416 and 356,352 cells for the near and far detector, respectively. Each cell measures 3.93 cm transverse to the beam direction and 6.12 cm along it, and is filled with liquid scintillator (mineral oil plus 5% pseudocumene). The corresponding radiation length is 0.15 $X_0$ and the Moliere radius is 10 cm, ideal for the identification of electron-type neutrino events.
The “Neutrinos at the Main Injector” (NuMI) beamline will provide a neutrino beam 14 mrad off-axis, which reduces neutral-current backgrounds and peaks at 2 GeV, corresponding to the first oscillation maximum for this detector distance. The accelerator and NuMI upgrades will double the number of protons per year delivered to the detector, to $6\times 10^{20}$ protons per year. For further details on the current status of the NO$\nu$A experiment, please see Ref. [@Gavin]. ![The NO$\nu$A detectors. The far (near) detector has 29 (6) blocks and each block is made of 32 scintillator PVC planes, 928 (192) planes in total. The near detector also has a muon catcher which is composed of 13 scintillator PVC planes and 10 steel planes.[]{data-label="detector"}](detector){width="100.00000%"} Charged particles from neutrino interactions or cosmic ray muons emit scintillation light in the scintillator. The scintillation light is collected by a loop of wavelength shifting fiber (WLS), and a 32-pixel Avalanche Photo-diode (APD) attached to the fiber converts the light pulse into electrical signals. The Data Acquisition (DAQ) System, as shown in Fig. \[daq-all\], concentrates the data from those APDs into a single stream that can be analyzed and archived. The DAQ can buffer the data and wait for a trigger decision on whether the data should be recorded or rejected. Online trigger processors will be used to analyze the data stream, to correlate data with similar time stamps, and to look for clusters of hits indicating an interesting event. Additional information can be found in Ref. [@NOvA-TDR]. The event types that the NO$\nu$A DAQ will record include beam neutrino events, cosmic ray muons, and other physics events (supernova neutrinos, high energy neutrinos, [*etc.*]{}). Every 2.2 s (to be reduced to 1.3 s with the accelerator and NuMI upgrades), a 10 $\mu$s beam spill is generated and time stamped by a GPS-based timing system at Fermilab.
The event rates are 30 neutrino events per spill for the near detector and 1,400 $\nu_e$ beam events per year for the far detector. The randomly selected cosmic ray muons used for calibration and monitoring are taken to give 100 times the number of beam neutrino events. The cosmic ray muon rate is 50 Hz (200 kHz) for the near (far) detector. Other interesting physics processes, such as a supernova explosion at 10 kpc, will result in thousands of neutrinos within 10 seconds in the far detector. For the near detector, the data rates are 75 TB per year through the DAQ system and 1 TB per year written to disk. For the far detector, the data rates are 12,000 TB per year through the DAQ system and 25 TB per year written to disk. ![A schematic overview of the NO$\nu$A DAQ system. The data stream flows from left to right.[]{data-label="daq-all"}](daq.png){width="\textwidth"} NO$\nu$A Data Acquisition System ================================ The primary task of the DAQ is to record the data from the APDs for further processing. The data flows through Front End Boards (FEBs), Data Concentrator Modules (DCMs), Buffer Nodes (BNs), and the DataLogger (DL), and is then archived on disk or tape as shown in Fig. \[daq-all\]. Each APD is digitized by a FEB continuously without dead time. The data from a group of up to 64 FEBs are consolidated by the DCM into 5 ms time slices which are routed to downstream Buffer Nodes. The data is buffered in the Buffer Nodes for a minimum of 20 s while waiting for the spill trigger. A spill signal is required to arrive within the buffering time so that the spill time can be correlated with the time-stamped data to determine whether the hits occurred in or out of spill. The triggered data from the Buffer Nodes are merged to form an event in the DataLogger, and the event is written to file for storage or to shared memory for monitoring. The power distribution system (PDS) provides power to the FEBs, APDs, ThermoElectric Coolers (TECs) [^1], DCMs and Timing Distribution Units (TDUs).
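The buffering-and-trigger logic described above can be caricatured in a few lines; this is purely illustrative (names, units, and data layout are ours, not the NO$\nu$A DAQ code):

```python
SLICE_MS = 5.0  # each DCM time slice spans 5 ms of detector data

def slices_for_spill(buffered_slices, spill_t0_ms, spill_len_ms=0.010):
    """Select the buffered time slices that overlap a beam spill.

    `buffered_slices` is a list of (start_ms, payload) pairs, standing in
    for the data held by the buffer nodes; a real spill lasts 10 us
    (0.010 ms).  Returns the overlapping slices in time order."""
    spill_t1 = spill_t0_ms + spill_len_ms
    return [(t0, payload) for (t0, payload) in buffered_slices
            if t0 < spill_t1 and t0 + SLICE_MS > spill_t0_ms]

# a spill at t = 6.0 ms falls entirely inside the slice starting at 5 ms
hits = slices_for_spill([(0.0, "a"), (5.0, "b"), (10.0, "c")], 6.0)
```

The 20 s minimum buffering depth gives the GPS-time-stamped spill message ample time to arrive before the overlapping slices would be discarded.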
The Run Control provides the overall control of the DAQ system. Front End Boards (FEBs) ----------------------- The front end electronics (Fig. \[feb\]) is responsible for amplifying and integrating the signals from the APD arrays, determining the amplitude of the signals and their arrival time, and presenting that information to the DAQ. The FEBs are operated in trigger-less, continuous readout mode with no dead time, and the data is zero suppressed based on Digital Signal Processing (DSP) algorithms. The zero-suppression threshold is programmable at the channel level, to allow different thresholds to be set depending on the particular characteristics of a given channel. Data above that threshold will be time-stamped and compared to a NuMI timing signal in the DAQ system to determine if the event was in or out of spill. Major components of the FEB are the carrier board connector at the left, which brings the APD signals to the NO$\nu$A ASIC, which performs integration, shaping, and multiplexing. The chip immediately to the right is the ADC, which digitizes the signals, and an FPGA handles control, signal processing, and communication. The ASIC is customized to maximize the sensitivity of the detector to small signals from long fibers in the far detector. The average photoelectron (PE) yield at the far end of an extrusion module is 30, and the noise is 4 PEs. The FPGA on the FEB uses a Digital Signal Processing algorithm to extract the time and amplitude of signals from the APD. Each FEB reads out 32 channels corresponding to the 32 pixels of one APD. Higher detector activity during beam spills at the near detector requires higher time resolution; therefore FEBs sample APD pixels at 8 MHz at the near detector and 2 MHz at the far detector. The FEBs are capable of limited waveform digitization and waveform readout. ![Schematic of the FEB and its components.
[]{data-label="feb"}](feb-apd.png){width="100.00000%"} Data Concentrator Modules (DCMs) -------------------------------- The DCM (Fig. \[dcm\]) is a custom component of the DAQ. Each DCM is responsible for consolidating the data received from up to 64 FEBs into 5 ms time slices in internal data buffers, and for transferring the data out to the Event Builder Buffer Farm nodes over Gigabit Ethernet. The DCMs also pass timing and control information from the timing system to the FEBs. The DCM mainly consists of an FPGA, an embedded PowerPC processor, and connectors, as shown in Fig. \[dcm\]. The data from each FEB consists of a header and hit information for a given time slice. The mid-sized FPGA on the DCM concatenates and combines the hit information from all 64 FEBs into time slices on the order of 50 $\mu$s.
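The per-channel zero suppression performed on the FEB can be illustrated as follows. The threshold-per-channel idea, the 32-channel readout and the time-stamp bookkeeping follow the text; the function itself is a simplified stand-in for the actual DSP algorithm, which extracts time and amplitude with dedicated filtering rather than raw thresholding.

```python
def zero_suppress(waveforms, thresholds, t0_ns, dt_ns):
    """Keep only samples above the channel-specific threshold.

    waveforms  : list of per-channel ADC sample lists (32 channels per FEB)
    thresholds : per-channel thresholds, settable individually
    t0_ns      : timestamp of the first sample
    dt_ns      : sampling period (500 ns at 2 MHz, 125 ns at 8 MHz)
    Returns (channel, timestamp_ns, adc) tuples for the surviving samples.
    """
    hits = []
    for ch, wave in enumerate(waveforms):
        for i, adc in enumerate(wave):
            if adc > thresholds[ch]:
                hits.append((ch, t0_ns + i * dt_ns, adc))
    return hits
```

The returned timestamps are what would later be compared against the NuMI timing signal to classify hits as in or out of spill.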
--- abstract: 'In global seismology, the Earth’s properties are of a fractal nature. Zygmund classes appear as the most appropriate and systematic way to measure this local fractality. For the purpose of seismic wave propagation, we model the Earth’s properties as Colombeau generalized functions. In one spatial dimension, we have a precise characterization of Zygmund regularity in Colombeau algebras. This characterization can also be formulated in terms of wavelets.' author: - | Günther Hörmann and Maarten V. de Hoop\ *Department of Mathematical and Computer Sciences*,\ *Colorado School of Mines, Golden CO 80401* title: 'Geophysical modelling with Colombeau functions: Microlocal properties and Zygmund regularity' --- Introduction ============ *Wave propagation in highly irregular media*. In global seismology, one encounters (hyperbolic) partial differential equations the coefficients of which have to be considered generalized functions; in addition, the source mechanisms in such applications are highly singular in nature. The coefficients model the (elastic) properties of the Earth, and their singularity structure arises from geological and physical processes. These processes are believed to reflect themselves in a multi-fractal behavior of the Earth’s properties. Zygmund classes appear as the most appropriate and systematic way to measure this local fractality (cf. , p. 4). *The modelling process and Colombeau algebras*. In the seismic transmission problem, the diagonalization of the first order system of partial differential equations and the transformation to the second order wave equation require differentiation of the coefficients. Therefore, highly discontinuous coefficients will appear naturally even though the original model medium varies continuously. However, embedding the fractal coefficient first into the Colombeau algebra ensures the equivalence after transformation and yields unique solvability if the regularization scaling $\ga$ is chosen appropriately (cf. [@LO:91; @O:89; @HdH:01]).
We use the framework and notation (in particular, $\G$ for the algebra and $\A_N$ for the mollifier sets) of Colombeau algebras as presented in [@O:92]. An interesting aspect of the use of Colombeau theory in wave propagation is that it leads to a natural control over and understanding of ‘scale’. In this paper, we focus on this modelling process. Basic definitions and constructions =================================== Review of Zygmund spaces ------------------------ We briefly review homogeneous and inhomogeneous Zygmund spaces, ${\ensuremath{\dot{C}_*}}^s(\R^m)$ and ${\ensuremath{C_*}}^s(\R^m)$, via a characterization in pseudodifferential operator style which follows essentially the presentation in [@Hoermander:97], Sect. 8.6. Alternatively, for practical and implementation issues one may prefer the characterization via growth properties of the discrete wavelet transform using orthonormal wavelets (cf. [@Meyer:92]). Classically, the Zygmund spaces were defined as extension of Hölder spaces by boundedness properties of difference quotients. Within the systematic and unified approach of Triebel (cf. [@Triebel:I; @Triebel:II]) we can simply identify the Zygmund spaces in a scale of inhomogeneous and homogeneous (quasi) Banach spaces, $B^s_{p q}$ and $\dot{B}^s_{p q}$ ($s\in\R$, $0 < p, q \leq \infty$), by ${\ensuremath{C_*}}^s(\R^m) = B^s_{\infty \infty}(\R^m)$ and ${\ensuremath{\dot{C}_*}}^s(\R^m) = \dot{B}^s_{\infty \infty}$. Both ${\ensuremath{C_*}}^s(\R^m)$ and ${\ensuremath{\dot{C}_*}}^s(\R^m)$ are Banach spaces. To be precise, let $0 < a < b$ and choose $\vphi_0\in\D(\R)$, $\vphi_0$ symmetric and positive, $\vphi_0(t) = 1$ if $|t| < a$, $\vphi_0(t) = 0$ if $|t| > b$, and $\vphi_0$ strictly decreasing in the interval $(a,b)$. Putting $\vphi(\xi) = \vphi_0(|\xi|)$ for $\xi\in\R^m$ then defines a function $\vphi\in\D(\R^m)$.
Finally we set $$\psi(\xi) = - \inp{\xi}{\grad\vphi(\xi)}$$ and note that if $a < |\xi| < b$ then $\psi(\xi) = - \vphi_0'(|\xi|) |\xi| > 0$. We denote by ${\cal M}(\R^m)$ the set of all pairs $(\vphi,\psi)\in\D(\R^m)^2$ that are constructed as above (we usually suppress the dependence of ${\cal M}$ on $a$ and $b$ in the notation). We are now in a position to state the characterization theorem for the inhomogeneous Zygmund spaces as subspaces of $\S'(\R^m)$. It follows from [@Triebel:88], Secs. 2–3 or, alternatively, from [@Hoermander:97], Sec. 8.6. Note that all appearing pseudodifferential operators in the following have $x$-independent symbols and are thus given simply by convolutions. \[inh\_Z\] Let $(\vphi,\psi)\in{\cal M}(\R^m)$ be arbitrary and let $s\in\R$. Then $u\in\S'(\R^m)$ belongs to the inhomogeneous Zygmund space ${\ensuremath{C_*}}^s(\R^m)$ of order $s$ if and only if $${\ensuremath{|u|_{{\ensuremath{C_*}}^{s}}}} := \linf{\vphi(D)u} + \sup\limits_{0 < t < 1}\Big( t^{-s} \linf{\psi(tD)u}\Big) < \infty .$$ (Note that we made use of the modification for $q=\infty$ in [@Triebel:88], equ. (82).) The norm ${\ensuremath{|u|_{{\ensuremath{C_*}}^{s}}}}$ defines an equivalent norm on ${\ensuremath{C_*}}^s$. In fact, that all norms defined as above by some $(\vphi,\psi)\in{\cal M}(\R^m)$ are equivalent can be seen as in [@Hoermander:97], Lemma 8.6.5. If $s\in \R_+ \setminus \N$ then $C^s_*(\R^m)$ is the classical Hölder space of regularity $s$. Denoting by ${\ensuremath{\lfloor s \rfloor}}$ the greatest integer less than $s$ it consists of all ${\ensuremath{\lfloor s \rfloor}}$ times continuously differentiable functions $f$ such that $\d^\al f$ is bounded when $|\al| \leq {\ensuremath{\lfloor s \rfloor}}$ and globally Hölder continuous with exponent $s-{\ensuremath{\lfloor s \rfloor}}$ if $|\al| = {\ensuremath{\lfloor s \rfloor}}$.
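As a concrete illustration of why the Hölder description fails at integer order (a standard example, not taken from the text): for integer $s$ the Zygmund space is strictly larger than the corresponding Lipschitz space.

```latex
% Standard example: f(x) = x \log|x| belongs to C_*^1(\R)
% although it is not Lipschitz. The symmetric second difference
% satisfies
\[
  \sup_{x}\,\bigl| f(x+h) - 2 f(x) + f(x-h) \bigr| \;\le\; C\,|h| ,
\]
% while the first difference behaves like |h|\,\bigl|\log|h|\bigr|,
% so the Lipschitz seminorm diverges as h \to 0. Hence the classical
% Hölder characterization of C_*^s stated above is restricted to
% s \in \R_+ \setminus \N.
```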
Due to the term $\linf{\vphi(D)u}$ the norm ${\ensuremath{|u|_{{\ensuremath{C_*}}^{s}}}}$ is not homogeneous with respect to a scale change in the argument of $u$. If $u\in\L^\infty(\R^m)$ then (cf. [@Hoermander:97], Sect. 8.6) $$u(x) = \vphi(D)u(x) + \int\limits_1^\infty \psi(D/t)u(x) \frac{dt}{t} \qquad \text{ for almost all } x .$$ Using $\vphi(\xi) = \int_0^1 \psi(\xi/t)/t \,dt$ this can be rewritten in the form $u(x) = \int_0^\infty \psi(D/t)u(x)/t \,dt$, which resembles Calderon’s classical identity in terms of a continuous wavelet transform (cf. [@Meyer:92], Ch. 1, (5.9) and (5.10)). In a similar way one can characterize the homogeneous Zygmund spaces as subspaces of $\S'(\R^m)$ modulo the polynomials ${\cal P}$. A proof can be found in [@Triebel:82], Sec. XII 1. We may identify $\S'/{\cal P}$ with the dual space $\S_0'(\R^m)$ of $\S_0(\R^m) = \{ f\in\S(\R^m) \mid \d^\al \FT{f}(0) = 0 \, \forall \al\in\N_0^m \}$, the Schwartz functions with vanishing moments, by mapping the class $u+{\cal P}$ with representative $u\in\S'$ to the restriction $u\mid_{\S_0}$.
--- abstract: 'We investigate scalar perturbations from inflation in braneworld cosmologies with extra dimensions. The background metric is determined self-consistently by the (arbitrary) bulk scalar field potential, supplemented by the boundary conditions at both orbifold branes. Assuming that the inflating branes are stabilized (by the brane scalar field potentials), we estimate the lowest eigenvalue of the scalar fluctuations – the radion mass. In the limit of flat branes, we reproduce well known estimates of the positive radion mass for stabilized branes. Surprisingly, however, we found that for de Sitter (inflating) branes the square of the radion mass is typically negative, which leads to a strong tachyonic instability. Thus, parameters of stabilized inflating braneworlds must be constrained to avoid this tachyonic instability. Instability of “stabilized” de Sitter branes is confirmed by the [ BraneCode]{} numerical calculations in the accompanying paper [@branecode]. If the model’s parameters are such that the radion mass is smaller than the Hubble parameter, we encounter a new mechanism of generation of primordial scalar fluctuations, which have a scale free spectrum and acceptable amplitude.' --- Introduction ============ One of the most interesting recent developments in high energy physics has been the picture of braneworlds. Higher dimensional formulations of braneworld models in superstring/M theory, supergravity and phenomenological models of the mass hierarchy have the most obvious relevance to cosmology. In application to the very early universe this leads to braneworld cosmology, where our 3+1 dimensional universe is a 3d curved brane embedded in a higher-dimensional bulk [@review]. Such braneworlds can be described by the five-dimensional warped metric $$ds^2 = a^2(w)\left(dw^2 + ds^2_4\right), \label{warp}$$ where $ds^2_4$ is the four-dimensional metric on the slices.
The conformal warp factor $a(w)$ is determined self-consistently by the five-dimensional Einstein equations, supplemented by the boundary conditions at two orbifold branes. We assume the presence of a single bulk scalar field $\varphi$ with the potential $V(\varphi)$ and self-interaction potentials $U_\pm(\varphi)$ at the branes. The potentials can be essentially arbitrary as long as the phenomenology of the braneworld is acceptable. The class of metrics (\[warp\]) with bulk scalars and two orbifold branes covers many interesting braneworld scenarios including the Hořava-Witten theory [@HW; @Lukas], the Randall-Sundrum model [@RS1; @RS2] with phenomenological stabilization of branes [@GW; @Dewolfe], supergravity with domain walls, and others [@FTW; @FFK]. We will consider models where, by the choice of the bulk/brane potentials, the inter-brane separation (the so-called radion) can be fixed, i.e. models in which branes could in principle be stabilized. The theory of scalar fluctuations around flat stabilized branes, involving bulk scalar field fluctuations $\delta\varphi$, scalar 5d metric fluctuations and brane displacements, is well understood [@Tanaka:2000er]. Similar to Kaluza-Klein (KK) theories, the extra-dimensional dependence can be separated out, and the problem is reduced to finding the eigenvalues of a second-order differential equation for the extra-dimensional ($w$-dependent) part of the fluctuation eigenfunctions subject to the boundary conditions at the branes. The lowest eigenvalue corresponds to the radion mass, which is positive $m^2>0$ and exceeds the TeV scale or so [@Csaki:1999mp]. Tensor fluctuations around flat stabilized branes are likewise stable. Brane inflation, like all inflationary models, generates long wavelength cosmological perturbations from the vacuum fluctuations of all light (i.e. with mass less than the Hubble parameter $H$) degrees of freedom.
The theory of metric fluctuations around the background geometry (\[warp\]) with inflating (de Sitter) branes is more complicated than that for the flat branes. For tensor fluctuations (gravitational waves), the lowest eigenvalue of the extra dimensional part of the tensor eigenfunction is zero, $m=0$, which corresponds to the usual 4d graviton. As was shown in [@LMW; @gw], massive KK gravitons have a gap in the spectrum; the universal lower bound on the mass is $m \ge \sqrt{3 \over 2}\, H$. Hence the massive KK gravitons are practically not generated during inflation. Massless scalar and vector projections of the bulk gravitons are absent, so only the massless 4d tensor mode is generated. Scalar cosmological fluctuations from inflation in the braneworld setting (\[warp\]) have been considered in many important works [@Mukohyama:2000ui; @Kodama:2000fa; @Langlois:2000ia; @vandeBruck:2000ju; @Koyama:2000cc; @Deruelle:2000yj; @Gen:2000nu; @Mukohyama:2001ks]. The theory of scalar perturbations in braneworld inflation with bulk scalars is even more complicated than for tensor perturbations. This is because one has to consider 5d scalar metric fluctuations and brane displacements induced not only by the bulk scalar field fluctuations $\delta\varphi$, but also by the fluctuations $\delta \chi$ of the inflaton scalar field $\chi$ living at the brane. In fact, most papers on scalar perturbations from brane inflation concentrated mainly on the inflaton fluctuations $\delta \chi$, while the bulk scalar fluctuations were not included. This was partly because in the earlier papers on brane inflation people considered a single brane embedded in an AdS background without a bulk scalar field, and partly because for braneworlds with two stabilized branes there was an expectation that the fluctuations of the bulk scalar would be massive and thus would not be excited during inflation.
In this letter we focus on the bulk scalar field fluctuations, assuming for the sake of simplicity that the inflaton fluctuations $\delta \chi$ are subdominant. We consider a relatively simple problem of scalar fluctuations around curved (de Sitter) branes, involving only bulk scalar field fluctuations $\delta\varphi$. We find the extra-dimensional eigenvalues of the scalar fluctuations subject to boundary conditions at the branes, focusing especially on the radion mass $m^2$ for the inflating branes. In particular, we investigate the presence or absence of a gap in the KK spectrum of scalar fluctuations in view of the tensor mode result. Our results are a generalization of the known results for flat stabilized branes [@Tanaka:2000er], which we reproduce in the limit where the branes are flattening $H \to 0$. Bulk Equations ============== The five-dimensional braneworld models with a scalar field in the bulk are described by the action $$\begin{aligned} \label{eq:action} S &=& M_5^3 \int \sqrt{-g}\, d^5 x\, \left\{R - (\nabla\varphi)^2 - 2V(\varphi)\right\} \nonumber\\ && -2 M_5^3 \sum \int \sqrt{-q}\, d^4 x\, \left\{ [{{\cal K}}] + U(\varphi)\right\},\end{aligned}$$ where the first term corresponds to the bulk and the sum contains contributions from each brane. The jump of the extrinsic curvature $[{\cal K}]$ provides the junction conditions across the branes (see equation (\[eq:jc\]) below). Variation of this action gives the bulk Einstein $G_{AB}=T_{AB}(\varphi)$ and scalar field $\Box\varphi=V_{,\varphi}$ equations. 
For the (stationary) warped geometry (\[warp\]) they are \[eq:bg\] $$\begin{aligned} &\displaystyle \varphi'' + 3\frac{a'}{a} \varphi' - a^2 V' = 0,&\label{eq:bg:phi}\\ &\displaystyle \frac{a''}{a} = 2\, \frac{a'^2}{a^2} - H^2 - \frac{\varphi'^2}{3},&\label{eq:bg:a}\\ &\displaystyle 6\left(\frac{a'^2}{a^2} - H^2\right) = \frac{\varphi'^2}{2} - a^2 V,&\label{eq:bg:c}\end{aligned}$$ where the prime denotes the derivative with respect to the extra dimension coordinate $w$. The first two equations are dynamical, and the last is a constraint. The solutions of equations (\[eq:bg\]) were investigated in detail in [@FFK]. Now we consider scalar fluctuations around the background (\[warp\]). The perturbed metric can be written in the longitudinal gauge as $$\label{eq:metric:pert} ds^2 = a(w)^2 \left[(1+2\Phi) dw^2 + (1+2\Psi)ds_4^2\right].$$
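The background equations (\[eq:bg\]) are straightforward to integrate numerically; this is essentially what the [BraneCode]{} package cited in the abstract does in full generality. The sketch below, with an illustrative potential and initial data not taken from the paper, integrates the two dynamical equations and uses the constraint (\[eq:bg:c\]) only to fix $a'(0)$; since the constraint is preserved by the evolution, its residual stays at the integration-error level.

```python
import numpy as np
from scipy.integrate import solve_ivp

H = 0.1                                   # brane Hubble rate (illustrative)
V = lambda phi: -6.0 + 0.5 * phi**2       # bulk potential (illustrative)
dV = lambda phi: phi                      # V'(phi)

def rhs(w, y):
    # y = (a, a', phi, phi'); the two dynamical equations of (bg)
    a, ap, phi, phip = y
    app = a * (2.0 * ap**2 / a**2 - H**2 - phip**2 / 3.0)
    phipp = -3.0 * (ap / a) * phip + a**2 * dV(phi)
    return [ap, app, phip, phipp]

def constraint(y):
    # residual of the constraint: 6(a'^2/a^2 - H^2) - phi'^2/2 + a^2 V
    a, ap, phi, phip = y
    return 6.0 * (ap**2 / a**2 - H**2) - 0.5 * phip**2 + a**2 * V(phi)

# free initial data (a, phi, phi'); a'(0) is fixed by the constraint
a0, phi0, phip0 = 1.0, 0.5, 0.3
ap0 = a0 * np.sqrt(H**2 + (0.5 * phip0**2 - a0**2 * V(phi0)) / 6.0)

sol = solve_ivp(rhs, (0.0, 0.5), [a0, ap0, phi0, phip0],
                rtol=1e-10, atol=1e-12)
residual = constraint(sol.y[:, -1])       # stays ~0 along the evolution
```

In the full problem the integration range and the initial data are not free: they must be matched to the junction conditions $[{\cal K}] + U(\varphi)$ at the two branes.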
--- abstract: 'In order to estimate the luminosity of the LHC in absolute terms, certain beam parameters have to be measured very accurately: in particular the total beam current and the relative distribution of the charges around the ring, the transverse size of the beams at the interaction points, and the relative position of the beams at the interaction point. The experiments can themselves measure several of these parameters very accurately thanks to the versatility of their detectors; other parameters, however, need to be measured using the monitors installed on the machine. The beam instrumentation is usually built for the purpose of aiding the operation team in setting up and optimizing the beams; often this only requires precise relative measurements, and therefore the absolute scale is usually not very precisely calibrated. The luminosity calibration requires several machine-side instruments to be pushed beyond their initial scope.' author: 'E. Bravin, CERN, Geneva, Switzerland' title: 'Instrumentation2: Other instruments, ghost/satellite bunch monitoring, halo, emittance, new developments[^1]' --- Colliding and non colliding charges =================================== In general in colliders the particles circulating in opposite directions are kept separated and only allowed to encounter each other at the designated interaction points. This is even more true for the LHC, where the particles travel in different vacuum tubes for most of the accelerator length. Particles colliding outside of the experiments would provide no useful information and would only contribute to the background and reduce the lifetime of the beams. In order to estimate the luminosity it is therefore important to quantify the number of particles that can potentially collide in a given interaction point, rather than just the total current stored in the machine. The distribution of particles around the ring can be rather complicated.
In theory there should be only a well known number of equal bunches spaced by well known amounts of time, and in this situation it would be easy to calculate the colliding charges from the total current. In reality the bunches all have different currents and there can be charges also outside of these bunches. In the LHC the radio frequency (RF) system has a frequency of 400.8 MHz and at most every 10th bucket is filled. This means that there are plenty of *wrong* RF buckets that can store particles in a stable way. It can happen that capture problems (also upstream in the injectors) create unwanted small intensity bunches near the main ones. These, named satellite bunches, have typically intensities of up to 1% of the main bunch and are only a few RF buckets away from the main bunch (usually a multiple of the RF period of one of the preceding accelerators). Other effects can lead to particles escaping from the main buckets and becoming un-captured; these particles are no longer synchronous and will just diffuse around the ring, where they can remain for a very long time. In case some RF gymnastics is performed (like inserting dips in the accelerating voltage in order to improve injection efficiency) it can happen that some un-captured beam is recaptured, forming a very large number of very faint bunches. These are called ghost bunches and have typically currents below the per mille of the main bunches. In the LHC ghost bunches have been observed, in particular during the heavy ion run, due to the special RF manipulations used when injecting ions. It is worth mentioning that un-captured particles will be lost if the energy of the machine is changed (e.g. during the ramp), since they cannot be properly accelerated while not being synchronous with the RF. Measuring the colliding charge ============================== Usually fast current transformers should be sufficient to measure the relative current variations from bunch to bunch.
The dynamic range and speed of these detectors are however not sufficient to detect the satellites and the ghost bunches. Moreover in the LHC the fast current transformers integrate the beam current over 25 ns (10 RF buckets) bins and it is not possible to know if and which satellites are included in the integration. Detectors with better time resolution and higher dynamic range are required. Candidates are:

- wall current monitors
- strip line pick-ups
- fast light detectors sampling the synchrotron light vs. time
- precise time stamping and counting of synchrotron light photons

Wall current monitor -------------------- The wall current monitors can probably be used to estimate the satellites. This requires however averaging over many turns and correcting for quirks in the frequency response of the detector and the cables. It is in particular important to verify that reflections, noise or other effects are not limiting the potential of the averaging. For the moment the amount of charge in satellites is calculated by studying the frequency spectra of the acquired signals: as satellites are out of the nominal bunching pattern, it is possible to compare the expected spectra with the measured one and estimate the amount of charge producing the distortion. With the distortion coefficients known, one can also estimate how much of this charge is in satellites; it is also possible that part of it is in ghosts. At the moment a continuous analysis of the spectrum of the wall current monitor is performed by the front-end software and provides an estimate of the amount of charge outside of the correct buckets, which is stored in the database. Figure \[WCM\] shows the signal from a wall current monitor acquired with a 10 GSample scope. A long tail after the bunch can be observed; this arises from the frequency response of the detector and is corrected for in the analysis. ![Signal from a wall current monitor. The top graph shows a zoom into a single bunch while the bottom graph shows the entire ring.
[]{data-label="WCM"}](wall_current_monitor){width="1.0\linewidth"} Strip line pick-ups ------------------- The strip line pick-ups provide signals comparable to the ones of the wall current monitor, with the drawback of a perfectly reflected pulse shortly after the main pulse, with a delay that depends on the strip length (30 cm for the devices installed in the LHC), intrinsic to the principle of the device (see Fig. \[strip\_line\]). This reflection complicates the treatment of the signal, making it impossible to use this instrument for the identification of ghosts and satellites. ![Signal from a strip line pick-up. []{data-label="strip_line"}](strip_line){width="1.0\linewidth"} Synchrotron light detection --------------------------- There are two possibilities for using the synchrotron light for longitudinal measurements. One consists in simply using a fast optical detector connected to a fast sampler and recording the intensity of synchrotron light as a function of time. The principle is simple and photo-diodes with bandwidths of the order of 50 GHz are commercially available; there are however a few difficulties associated with this technique. As for the WCM, the transport of the high frequency signals is not simple and the cable response will modify the pulses, requiring frequency domain corrections. Another problem is introduced by the need for fast digitizers, implying a reduced dynamic range (typically 8 bits only), noise etc. On the other hand the response of the detector itself should be much more linear than the one of the WCM and can in principle extend down to DC. It is surely worth trying this possibility; however it will be very difficult to measure the ghost bunches in this way. The other alternative is to count single SR photons with precise time stamping of the arrival time. Detectors suitable for the task exist (avalanche photo diodes, APDs) and time to digital converters with resolutions of a few tens of ps also exist.
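The spectrum-comparison idea used for the wall current monitor can be demonstrated with a toy filling pattern (all numbers are illustrative, and the real analysis must also deconvolve the detector and cable response): a perfectly regular bunch train has spectral lines only at harmonics of the bunch frequency, so any power on the in-between lines signals charge outside the nominal buckets.

```python
import numpy as np

N_BUCKETS = 1000          # toy ring, one sample per RF bucket
SPACING = 10              # main bunches in every 10th bucket

main = np.zeros(N_BUCKETS)
main[::SPACING] = 1.0                 # nominal filling pattern

dirty = main.copy()
dirty[4] += 0.01                      # one 1% satellite, 4 buckets after a bunch

def off_harmonic_power(pattern):
    """Largest spectral amplitude away from the bunch-frequency harmonics."""
    spec = np.abs(np.fft.rfft(pattern))
    harmonics = np.arange(0, N_BUCKETS // 2 + 1, N_BUCKETS // SPACING)
    mask = np.ones(spec.size, dtype=bool)
    mask[harmonics] = False
    return spec[mask].max()

# the clean pattern has (numerically) zero off-harmonic power,
# while the satellite shows up at its 1% charge level
```

Fitting the measured off-harmonic amplitudes then yields an estimate of the out-of-bucket charge, as done by the front-end analysis described above.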
The only drawback of this technique is that the counting rate is limited and the light has to be attenuated such that the probability of detecting a photon during a bunch passage is less than about 60%. Such a detector has been operated during the last part of the 2010 run (mainly during the ion period) and has given very promising results; it is known as the longitudinal density monitor or LDM (see Fig. \[LDM\]). Longitudinal density monitor LDM -------------------------------- The LDM is based on avalanche photodiodes from either ID Quantique or Micro Photon Devices connected to a fast TDC from Agilent (formerly Acqiris). The detector can resolve single photons with a time resolution of the order of 50 ps, and the TDC has a resolution of 50 ps as well. At the moment the temporal resolution of the system is limited to about 75 ps (300 ps pk-pk) due to the reference timing used (turn clock from the BST receiver, BOBR); in the future this limitation will be removed by using a dedicated RF timing signal [@ldm]. The avalanche photo diodes present a short dead-time used to quench the avalanche (tens of ns) and there is also a small probability that at the end of this dead-time trapped electrons or holes will trigger a new avalanche (the probability of this type of event is of the order of 3%). These effects, together with the dark count rate, although small, must be corrected for; a rather simple statistical algorithm is sufficient. The probability of an SL photon triggering an avalanche per bunch-crossing must be maintained below a certain level (60-70%), otherwise the error on these corrections becomes too large. This has an impact on the maximum counting rate and thus on the integration time required for acquiring a profile with sufficient resolution.
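The statistical correction mentioned above can be illustrated for its pile-up part (leaving aside the quenching dead time and afterpulsing): if the APD registers at most one count per bunch crossing, the measured firing fraction relates to the true mean photon number per crossing through Poisson statistics. The function below is an illustrative sketch under these assumptions, not the actual LDM algorithm.

```python
import math

def mean_photons(p_fired, p_dark=0.0):
    """Invert the one-count-per-crossing response of the APD.
    A crossing gives no count only if there is no dark count AND no
    detected photon, so for Poisson light with mean mu per crossing:
        1 - p_fired = (1 - p_dark) * exp(-mu)
    which can be solved for mu from the raw firing fraction."""
    if not 0.0 <= p_fired < 1.0:
        raise ValueError("firing probability must lie in [0, 1)")
    return math.log((1.0 - p_dark) / (1.0 - p_fired))
```

At the quoted ~60% occupancy the correction is already sizeable, `mean_photons(0.6)` ≈ 0.92 photons per crossing, roughly 50% more than the raw count fraction, which is one reason the occupancy must be kept below the 60-70% level.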
In fact the integration time required depends on what is being observed: if the aim is just to measure the so called core parameters of a bunch (mainly the bunch length), a few seconds are sufficient; on the other hand, if the population of ghosts and satellites has to be measured, an integration of several minutes may be required. ![Example of an LDM measurement. []{data-label="LDM"}](ldm2){width="0.2\linewidth"}
--- abstract: 'The International Linear Collider (ILC) is the next large scale project in accelerator particle physics. Colliding electrons with positrons at energies from 0.3 TeV up to about 1 TeV, the ILC is expected to provide the accuracy needed to complement the LHC data and extend the sensitivity to new phenomena at the high energy frontier and answer some of the fundamental questions in particle physics and in its relation to Cosmology. This paper reviews some highlights of the ILC physics program and of the major challenges for the accelerator and detector design.' address: | Department of Physics, University of California at Berkeley and\ Lawrence Berkeley National Laboratory\ Berkeley, CA 94720, USA\ MBattaglia@lbl.gov author: - Marco Battaglia --- The International Linear Collider ================================= Introduction {#sec0} ------------ Accelerator particle physics is completing a successful cycle of precision tests of the Standard Model of electro-weak interactions (SM). After the discovery of the $W$ and $Z$ bosons at the $Sp\bar{p}S$ hadron collider at CERN, the concurrent operation of hadron and $e^+e^-$ colliders has provided a large set of precision data and new observations. Two $e^+e^-$ colliders, the SLAC Linear Collider (SLC) at the Stanford Linear Accelerator Center (SLAC) and the Large Electron Positron (LEP) collider at the European Organization for Nuclear Research (CERN), operated throughout the 1990’s and enabled the study of the properties of the $Z$ boson in great detail. Operation at LEP up to 209 GeV, the highest collision energy ever achieved in electron-positron collisions, provided detailed information on the properties of $W$ bosons and the strongest lower bounds on the mass of the Higgs boson and of several supersymmetric particles. 
The collision of point-like, elementary particles at a well-defined and tunable energy offers advantages for precision measurements, as those conducted at LEP and SLC, over proton colliders. On the other hand experiments at hadron machines, such as the Tevatron $p \bar p$ collider at Fermilab, have enjoyed higher constituent energies. The CDF and D0 experiments eventually observed the direct production of top quarks, whose mass had been predicted on the basis of precision data obtained at LEP and SLC. While we await the commissioning and operation of the LHC $pp$ collider at CERN, the next stage in experimentation at lepton colliders is actively under study. For more than two decades, studies for a high-luminosity accelerator, able to collide electrons with positrons at energies of the order of 1 TeV, are being carried out world-wide. The path towards the ILC {#sec1} ------------------------ The concept of an $e^+e^-$ linear collider dates back to a paper by Maury Tigner [@Tigner:1965] published in 1965, when the physics potential of $e^+e^-$ collisions had not yet been appreciated in full. This seminal paper envisaged collisions at 3-4 GeV with a luminosity competitive with that of the SPEAR ring at SLAC, i.e. $3 \times 10^{30}$ cm$^{-2}$ s$^{-1}$. [*A possible scheme to obtain $e^-e^-$ and $e^+e^-$ collisions at energies of hundreds of GeV*]{} is the title of a paper [@Amaldi:1976] by Ugo Amaldi published a decade later in 1976, which sketches the linear collider concept with a design close to that now developed for the ILC. The parameters for a linear collider, clearly recognised as the successors of $e^+e^-$ storage rings on the way to high energies, were discussed by Burt Richter at the IEEE conference in San Francisco in 1979 [@Richter:1979cq] and soon after came the proposal for the [*Single Pass Collider Project*]{} which would become SLC at SLAC. 
From 1985, the CERN Long Range Planning Committee considered an $e^+e^-$ linear collider, based on the CLIC [@Schnell:1986ig] design, able to deliver collisions at 2 TeV with $10^{33}$ cm$^{-2}$ s$^{-1}$ luminosity, [*vis-a-vis*]{} a hadron collider, with proton-proton collisions at 16 TeV and luminosity of $1.4 \times 10^{33}$ cm$^{-2}$ s$^{-1}$, as a candidate for the new CERN project after LEP. That review process eventually led to the decision to build the LHC, but it marked an important step to establish the potential of a high energy $e^+e^-$ collider. It is important to note that it was through the contributions of several theorists, including John Ellis, Michael Peskin, Gordon Kane and others, that the requirements in terms of energy and luminosity for a linear collider became clearer in the mid 1980’s [@Ahn:1988vj]. After a decade marked by important progress in the R&D of the basic components and the setup of advanced test facilities, designs of four different concepts emerged: TESLA, based on superconducting RF cavities; the NLC/JLC-X, based on high frequency (11.4 GHz) room-temperature copper cavities; JLC-C, based on lower frequency (5.7 GHz) conventional cavities; and CLIC, a multi-TeV collider based on a different beam acceleration technique, the two-beam scheme with transfer structures operating at 30 GHz. Accelerator R&D had reached the maturity to assess the technical feasibility of a linear collider project and take an informed choice of the most advantageous RF technology. The designs were considered by the International Linear Collider Technical Review Committee (ILC-TRC), originally formed in 1994 and re-convened by the International Committee for Future Accelerators (ICFA) in 2001 under the chairmanship of Greg A. Loew. The ILC-TRC assessed their status using common criteria, identified outstanding items needing R&D effort and suggested areas of collaboration.
The TRC report was released in February 2003 [@trc] and the committee found that there were [*no insurmountable show-stoppers to build TESLA, NLC/JLC-X or JLC-C in the next few years and CLIC in a more distant future, given enough resources*]{}. Nonetheless, significant R&D remained to be done. At this stage, it became clear that, to make further progress, the international effort towards a linear collider should be focused on a single design. ICFA gave mandate to an International Technology Recommendation Panel (ITRP), chaired by Barry Barish, to make a definite recommendation for a RF technology that would be the basis of a global project. In August 2004 the ITRP made the recommendation in favour of superconducting RF cavities [@itrp]. The technology choice, which was promptly accepted by all laboratories and groups involved in the R&D process, is regarded as a major step towards the realization of the linear collider project. Soon after it, a truly world-wide, centrally managed design effort, the Global Design Effort (GDE) [@gde], a team of more than 60 persons, started, with the aim to produce an ILC Reference Design Report by beginning of 2007 and an ILC Technical Design Report by end of 2008. The GDE responsibility now covers the detailed design concept, performance assessments, reliable international costing, industrialization plan, siting analysis, as well as detector concepts and scope. A further important step has been achieved with release of the Reference Design Report in February 2007 [@rdr]. This report includes a preliminary value estimate of the cost for the ILC in its present design and at the present level of engineering and industrialisation. The value estimate is structured in three parts: 1.78 Billion ILC Value Units for site-related costs, such as those of tunneling in a specific region, 4.87 Billion ILC Value Units for the value of the high technology and conventional components and 13,000 person-years for the required supporting manpower. 
For this estimate the conversion factor is 1 ILC Value Unit = 1 US Dollar = 0.83 Euro = 117 Yen. This estimate, which is comparable to the LHC cost, when the pre-existing facilities, such as the LEP tunnel, are included, provides guidance for optimisation of both the design and the R&D to be done during the engineering phase, due to start in Fall 2007. Technical progress was paralleled by increasing support for the ILC in the scientific community. At the 2001 APS workshop [*The Future of Physics*]{} held in Snowmass, CO, a consensus emerged for the ILC as the right project for the next large scale facility in particle physics. This consensus has spread world-wide. The ILC role in the future of scientific research was recognised by the OECD Consultative Group on High Energy Physics [@oecd], while the DOE Office of Science ranked the ILC as its top mid-term project. More recently the EPP 2010 panel of the US National Academy of Sciences, in a report titled [*Elementary Particle Physics in the 21$^{st}$ Century*]{} has endorsed the ILC as the next major experimental facility to
--- abstract: | The evolution of number density, size and intrinsic colour is determined for a volume-limited sample of visually classified early-type galaxies selected from the HST/ACS images of the GOODS North and South fields (version 2). The sample comprises $457$ galaxies over $320$ arcmin$^2$ with stellar masses above $3\cdot 10^{10}M_\odot$ in the redshift range 0.4$<$z$<$1.2. Our data allow a simultaneous study of number density, intrinsic colour distribution and size. We find that the most massive systems (${\lower.5ex\hbox{$\; \buildrel > \over \sim \;$}}3\cdot 10^{11}M_\odot$) do not show any appreciable change in comoving number density or size in our data. Furthermore, when including the results from 2dFGRS, we find that the number density of massive early-type galaxies is consistent with no evolution between z=1.2 and 0, i.e. over an epoch spanning more than half of the current age of the Universe. Massive galaxies show very homogeneous [*intrinsic*]{} colour distributions, featuring red cores with small scatter. The distribution of half-light radii – when compared to z$\sim$0 and z$>$1 samples – is compatible with the predictions of semi-analytic models relating size evolution to the amount of dissipation during major mergers. However, in a more speculative fashion, the observations can also be interpreted as weak or even no evolution in comoving number density [*and size*]{} between 0.4$<$z$<$1.2, thus pushing major mergers of the most massive galaxies towards lower redshifts. author: - | Ignacio Ferreras$^{1}$[^1], Thorsten Lisker$^2$, Anna Pasquali$^3$, Sadegh Khochfar$^4$, Sugata Kaviraj$^{1,5}$\ $^1$ Mullard Space Science Laboratory, University College London, Holmbury St Mary, Dorking, Surrey RH5 6NT\ $^2$ Astronomisches Rechen-Institut, Zentrum für Astronomie, Universität Heidelberg, Mönchhofstr.
12-14, D-69120 Heidelberg, Germany\ $^3$ Max-Planck-Institut für Astronomie, Koenigstuhl 17, D-69117 Heidelberg, Germany\ $^4$ Max-Planck-Institut für Extraterrestrische Physik, Giessenbachstrasse, D-85748 Garching, Germany\ $^5$ Astrophysics subdepartment, The Denys Wilkinson Building, Keble Road, Oxford OX1 3RH date: 'January 20, 2009: To be published in MNRAS' title: 'On the formation of massive galaxies: A simultaneous study of number density, size and intrinsic colour evolution in GOODS' --- \[firstpage\] galaxies: evolution — galaxies: formation — galaxies: luminosity function, mass function — galaxies: high redshift Introduction {#sec:intro} ============ During the past decades the field of extragalactic astrophysics has undergone an impressive development, from simple models that were compared with small, relatively nearby samples to current surveys extending over millions of Mpc$^3$ at redshifts beyond z$\sim$1, along with numerical models that can probe cosmological volumes with the aid of large supercomputers. However, in the same period of time, our knowledge of the 'baryon physics' relating the dark and luminous matter components has progressed much more slowly, mainly due to the highly non-linear processes that complicate any ab initio approach to this complex problem. The evolution of the most massive galaxies constitutes one of the best constraints one can impose on the modelling of galaxy formation. Within the current paradigm of galaxy growth in a $\Lambda$CDM cosmology, massive galaxies evolve through successive mergers of smaller structures. The most massive galaxies are early-type in morphology and are dominated by old stellar populations, with a tight mass-metallicity relation and abundance ratios suggesting a quick build-up of the stellar component [see e.g. refs 14, 17]. On the other hand, semi-analytic models of galaxy formation predict a more extended assembly history (if not star formation) from major mergers.
By carefully adjusting these models, it has been possible to generate realizations that are compatible with the observed stellar populations in these galaxies [e.g. @kav06; @giav04]. Our data set complements recent work exploring the issue of size and stellar mass evolution [e.g. @Bun05; @McIn05; @fran06; @fon06; @Borch06; @brown07; @Truj07; @vdk08]. The coverage (320 arcmin$^2$), depth ($1\sigma$ surface brightness limit per pixel of $24.7$ AB mag/arcsec$^2$ in the $i$ band) and high resolution (FWHM$\sim 0.12$ arcsec) of these images allow us to perform a consistent analysis of the redshift evolution of the comoving number density, size and intrinsic colour of these galaxies. ![image](f1.eps){width="5in"} This work is a continuation of @fer05, which was restricted to the CDFS field. However, notice that our sample does [*not*]{} apply the selection based on the Kormendy relation, i.e. the only constraint in this sample is visual classification. The analysis of the complete sample is presented in @egds09. Over the $320$ arcmin$^2$ field of view of the North and South GOODS/ACS fields, the total sample comprises $910$ galaxies down to $i_{\rm AB}=24$ mag (of which 533/377 are in HDFN/CDFS). The available photometric data – both space and ground-based – were combined with spectroscopic or photometric redshifts in order to determine the stellar mass content. Spectroscopic redshifts are available for 66% of the galaxies used in this paper. The photometric redshifts have an estimated accuracy of $\Delta (z)/(1+z)\sim 0.002\pm 0.09$ [@egds09]. Stellar masses are obtained by convolving the synthetic populations of @bc03 with a grid of exponentially decaying star formation histories [see appendix B of @egds09 for details]. A single galaxy population can be assumed. Even though the intrinsic properties of a stellar population (i.e.
its age and metallicity distribution) cannot be accurately constrained with broadband photometry, the stellar mass content can be reliably determined to within $0.2-0.3$ dex provided the adopted IMF gives an accurate representation of the true initial mass function [see e.g. @fsb08]. The sizes are computed using a non-parametric approach that measures the total flux within an ellipse with semimajor axis $a_{\rm TOT}<1.5a_{\rm Petro}$. The half-light radius is defined from the light distribution as R$_{50}\equiv\sqrt{a_{50}\times b_{50}}$, where $a_{50}$ and $b_{50}$ are respectively the semimajor and semiminor axes of the ellipse that engulfs 50% of the total flux. Those values need to be corrected for the loss of flux caused by the use of a finite aperture. We used a synthetic catalogue of galaxies with Sersic profiles and the same noise and sampling properties as the original GOODS/ACS images to build fitting functions for the corrections in flux and size. The corrections depend mostly on R$_{50}$ and, to second order, on the Sersic index. Most of this correction is related to the ratio between the size of the object and the size of the Point Spread Function of the observations. The dependence on Sersic index (or, in general, surface brightness slope) is milder, and for this correction the concentration [as defined in @ber00] was used as a proxy. We compared our photometry with the GOODS-MUSIC data [@graz06] in the CDFS. Our sample has 351 galaxies in common with that catalogue, and the difference between our total+corrected $i$-band magnitudes and the total magnitudes from GOODS-MUSIC is $\Delta i\equiv i_{\rm ours}- i_{\rm MUSIC}=-0.17\pm 0.16$ mag. This discrepancy is mostly due to our corrections of the total flux. A bootstrap method using synthetic images shows that our corrections are accurate with respect to the true total flux to within 0.05 mag, and to within 9% in half-light radius [see appendix A of @egds09].
Our estimates of size were also compared with the GALFIT-based parametric approach of @gems on the GEMS survey. Out of 133 galaxies in common, the median of the difference defined as $($R$_{50}^{\rm ours}-$
--- abstract: 'Natural selection and random drift are competing phenomena for explaining the evolution of populations. Combining a highly fit mutant with a population structure that improves the odds that the mutant spreads through the whole population tips the balance in favor of natural selection. The probability that the spread occurs, known as the fixation probability, depends heavily on how the population is structured. Certain topologies induce fixation. We introduce a randomized mechanism for network growth that is loosely inspired in some of these topologies’ key properties and demonstrate, through simulations, that it is capable of giving rise to structured populations for which the fixation probability significantly surpasses that of an unstructured population. This suggests a mechanism by which such selection-amplifying structures may arise naturally.' Many natural and artificial systems are conveniently modeled as networks of interacting agents [@sf07]. Typically, the dynamics of such interactions involves the propagation of information through the network as the agents contend to spread their influence and alter the states of other agents. In this letter, we focus on the dynamics of evolving populations, particularly on how network structure relates to the ability of a mutation to take over the entire network by spreading from its node of origin. In evolutionary dynamics, the probability that a mutation occurring at one of a population’s individuals eventually spreads through the entire population is known as the mutation’s fixation probability, $\rho$. In an otherwise homogeneous population, the value of $\rho$ depends on the ratio $r$ of the mutant’s fitness to that of the other individuals, and it is the interplay between $\rho$ and $r$ that determines the effectiveness of natural selection on the evolution of the population, given its size. In essence, highly correlated $\rho$ and $r$ lead to a prominent role of natural selection in driving evolution; random drift takes primacy, otherwise [@n06].
Let $P$ be a population of $n$ individuals and, for individual $i$, let $P_i$ be any nonempty subset of $P$ that excludes $i$. We consider the evolution of $P$ according to a sequence of steps, each of which first selects $i\in P$ randomly in proportion to $i$’s fitness, then selects $j\in P_i$ randomly in proportion to some weighting function on $P_i$, and finally replaces $j$ by an offspring of $i$ having the same fitness as $i$. When $P$ is a homogeneous population of fitness $1$ (except for a randomly chosen mutant, whose fitness is initially set to $r\neq 1$), $P_i=P\setminus\{i\}$ [^1], and moreover the weighting function on every $P_i$ is a constant (thus choosing $j\in P_i$ occurs uniformly at random), this sequence of steps is known as the Moran process [@m58]. In this setting, evolution can be modeled by a simple discrete-time Markov chain, of states $0,1,\ldots,n$, in which state $s$ indicates the existence of $s$ individuals of fitness $r$, the others $n-s$ having fitness $1$. In this chain, states $0$ and $n$ are absorbing and all others are transient. If $s$ is a transient state, then it is possible either to move from $s$ to $s+1$ or $s-1$, with probabilities $p$ and $q$, respectively, such that $p/q=r$, or to remain at state $s$ with probability $1-p-q$. When $r>1$ (an advantageous mutation), the evolution of the system has a forward bias; when $r<1$ (a disadvantageous mutation), there is a backward bias. And given that the initial state is $1$, the probability that the system eventually reaches state $n$ is precisely the fixation probability, in this case denoted by $\rho_1$ and given by $$\rho_1=\frac{1-1/r}{1-1/r^n}$$ (cf. [@n06]). The probability that the mutation eventually becomes extinct (i.e., that the system eventually reaches state $0$) is $1-\rho_1$. Because $\rho_1<1$, extinction is a possibility even for advantageous mutations. Similarly, it is possible for disadvantageous mutations to spread through the entirety of $P$. 
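The birth-death process just described is easy to simulate. As an illustrative sketch (not from the letter), the following pure-Python Monte Carlo estimate for the homogeneous case, with $P_i=P\setminus\{i\}$ and uniform choice of the replaced individual, can be checked against the closed form $\rho_1$; note that the up/down transition probabilities satisfy $p/q=r$ exactly as in the text.

```python
import random

def rho1(n, r):
    """Closed-form Moran fixation probability rho_1 = (1 - 1/r) / (1 - 1/r^n)."""
    return (1 - 1 / r) / (1 - 1 / r ** n)

def simulate_fixation(n, r, trials=5000, seed=1):
    """Monte Carlo estimate of the fixation probability of a single mutant of
    fitness r in a homogeneous population of size n (complete graph)."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(trials):
        s = 1  # current number of mutants
        while 0 < s < n:
            # Reproducing individual i is chosen in proportion to fitness.
            mutant_reproduces = rng.random() < s * r / (s * r + (n - s))
            # Replaced individual j is chosen uniformly among the other n-1.
            if mutant_reproduces:
                if rng.random() < (n - s) / (n - 1):
                    s += 1  # a wild-type individual is replaced by a mutant
            else:
                if rng.random() < s / (n - 1):
                    s -= 1  # a mutant is replaced by a wild-type offspring
        fixed += (s == n)
    return fixed / trials
```

For $n=10$ and $r=2$ the estimate agrees with $\rho_1\approx 0.50$ to within sampling error, and extinction occurs in roughly half the runs even though the mutation is strongly advantageous, illustrating the remark above.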
In order to consider more complex possibilities for $P_i$, we introduce the directed graph $D$ of node set $P$ and edge set containing every ordered pair $(i,j)$ such that $j\in P_i$. The case of a completely connected $D$ (in which every node connects out to every other node) corresponds to the Moran process. But in the general case, even though it continues to make sense to set up a discrete-time Markov chain with $0$ and $n$ the only absorbing states, analysis becomes infeasible nearly always and $\rho$ must be calculated by computer simulation of the evolutionary steps. The founding work on this graph-theoretic perspective for the study of $\rho$ is [@lhn05], where it is shown that we continue to have $\rho=\rho_1$ for a much wider class of graphs. Specifically, the necessary and sufficient condition for $\rho=\rho_1$ to hold is that the weighting function be such that, for all nodes, the probabilities that result from the incoming weights sum up to $1$ (note that this already holds for the outgoing probabilities, thus characterizing a doubly stochastic process for out-neighbor selection). In particular, if the weighting function is a constant for all nodes and a node’s in-degree (number of in-neighbors) and out-degree (the cardinality of $P_i$ for node $i$, its number of out-neighbors) are equal to each other and the same for all nodes, as in the Moran case, then $\rho=\rho_1$. Other interesting structures, such as scale-free graphs [@ba99], are also handled in [@lhn05], but the following two observations are especially important to the present study. The first one is that, if $D$ is not strongly connected (i.e., not all nodes are reachable from all others through directed paths), then $\rho>0$ if and only if all nodes are reachable from exactly one of $D$’s strongly connected components. 
Furthermore, when this is the case random drift may be a more important player than natural selection, since fixation depends crucially on whether the mutation arises in that one strongly connected component. If $D$ is strongly connected, then $\rho>0$ necessarily. The second important observation is that there do exist structures that suppress random drift in favor of natural selection. One of them is the $D$ that in [@lhn05] is called a $K$-funnel for $K\ge 2$ an integer. If $n$ is sufficiently large, the value of $\rho$ for the $K$-funnel, denoted by $\rho_K$, is $$\rho_K=\frac{1-1/r^K}{1-1/r^{Kn}}.$$ Thus, the $K$-funnel can be regarded as functionally equivalent to the Moran graph with $r^K$ substituting for the fitness $r$. Therefore, the fixation probability can be arbitrarily amplified by choosing $K$ appropriately, provided $r>1$. Noteworthy are also the related studies of [@sar08]. In these works, analytical characterizations are obtained for the fixation probability on undirected scale-free graphs, both under the dynamics we have described (in which $j$ inherits $i$’s fitness) and the converse dynamics (in which it is $i$ that inherits $j$’s fitness). The main finding is that the fixation probability is, respectively for each dynamics, inversely or directly proportional to the degree of the node where the advantageous mutation appears. In this letter, we depart from all previous studies of the fixation probability by considering the question of whether a mechanism exists for $D$ to be grown from some simple initial structure in such a way that, upon reaching a sufficiently large size, a value of $\rho$ can be attained that substantially surpasses the Moran value $\rho_1$ for an advantageous mutation. Such a $D$ might lack the sharp amplifying behavior of structures like the $K$-funnel, but being less artificial might also relate more closely to naturally occurring processes.
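Plugging numbers into the closed form for $\rho_K$ makes the amplification concrete; a minimal sketch, with the parameter values $n=100$, $r=1.1$, $K=3$ chosen purely for illustration:

```python
def rho_K(n, r, K):
    """Fixation probability of the K-funnel; K = 1 recovers the Moran value rho_1."""
    return (1 - 1 / r ** K) / (1 - 1 / r ** (K * n))

# For a weakly advantageous mutant, the K-funnel markedly amplifies selection:
n, r = 100, 1.1
moran = rho_K(n, r, 1)   # the Moran value rho_1, about 0.091
funnel = rho_K(n, r, 3)  # rho_3, about 0.249: roughly a 2.7x amplification
```

As the formula shows, the funnel behaves like a Moran population in which the effective fitness is $r^K$, so the gain grows rapidly with $K$ whenever $r>1$.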
We respond affirmatively to the question, inspired by the observation discussed above on the strong connectedness of $D$, and using the $K$-funnel as a sieving mechanism to help in looking for promising structures. It should be noted, however, that since other amplifiers exist with capabilities similar to those of the $K$-funnel (e.g., the $K$-superstar [@lhn05]), alternatives to the strategy we introduce that are based on them may also be possible. In a $K$-funnel, nodes are organized into $K$ layers, of which layer $k$ contains $b^
--- abstract: 'We present a new concept for a multi-stage Zeeman decelerator that is optimized particularly for applications in molecular beam scattering experiments. The decelerator consists of a series of alternating hexapoles and solenoids, that effectively decouple the transverse focusing and longitudinal deceleration properties of the decelerator. It can be operated in a deceleration and acceleration mode, as well as in a hybrid mode that makes it possible to guide a particle beam through the decelerator at constant speed. The deceleration features phase stability, with a relatively large six-dimensional phase-space acceptance. The separated focusing and deceleration elements result in an unequal partitioning of this acceptance between the longitudinal and transverse directions. This is ideal in scattering experiments, which typically benefit from a large longitudinal acceptance combined with narrow transverse distributions. We demonstrate the successful experimental implementation of this concept using a Zeeman decelerator consisting of an array of 25 hexapoles and 24 solenoids. The performance of the decelerator in acceleration, deceleration and guiding modes is characterized using beams of metastable Helium ($^3S$) atoms. Up to 60% of the kinetic energy was removed for He atoms that have an initial velocity of 520 m/s. The hexapoles consist of permanent magnets, whereas the solenoids are produced from a single hollow copper capillary through which cooling liquid is passed. The solenoid design allows for excellent thermal properties, and enables the use of readily available and cheap electronics components to pulse high currents through the solenoids. The Zeeman decelerator demonstrated here is mechanically easy to build, can be operated with cost-effective electronics, and can run at repetition rates up to 10 Hz.' author: - Theo Cremers - Simon Chefdeville - Niek Janssen - Edwin Sweers - Sven Koot - Peter Claus - 'Sebastiaan Y.T. van der Land' beam. 
Using methods that are inspired by concepts from charged particle accelerator physics, complete control over the velocity of molecules in a beam can be achieved. In particular, Stark and Zeeman decelerators have been developed to control the motion of molecules that possess an electric and magnetic dipole moment using time-varying electric and magnetic fields, respectively. Since the first experimental demonstration of Stark deceleration in 1998 [@Bethlem:PRL83:1558], several decelerators ranging in size and complexity have been constructed [@Meerakker:CR112:4828; @Narevicius:ChemRev112:4879; @Hogan:PCCP13:18705]. Applications of these controlled molecular beams are found in high-resolution spectroscopy, the trapping of molecules at low temperature, and advanced scattering experiments that exploit the unprecedented state-purity and/or velocity control of the packets of molecules emerging from the decelerator [@Carr:NJP11:055049; @Bell:MolPhys107:99; @Jankunas:ARPC66:241; @Stuhl:ARPC65:501; @Brouard:CSR43:7279; @Krems:ColdMolecules]. Essential in any experiment that uses a Stark or Zeeman decelerator is a high particle density of the decelerated packet. For this, it is imperative that the molecules are decelerated with minimal losses, i.e., molecules within a certain volume in six-dimensional (6D) phase-space should be kept together throughout the deceleration process [@Bethlem:PRL84:5744]. It is a formidable challenge, however, to engineer decelerators that exhibit this so-called phase stability. The problem lies in the intrinsic field geometries that are used to manipulate the beam. In a multi-stage Zeeman (Stark) decelerator a series of solenoids (high-voltage electrodes) yields the deceleration force as well as the transverse focusing force. 
This can result in a strong coupling between the longitudinal (forward) and transverse oscillatory motions; parametric amplification of the molecular trajectories can occur, leading to losses of particle density [@Meerakker:PRA73:023401; @Sawyer:EPJD48:197]. For Stark decelerators, the occurrence of instabilities can be avoided without changing the electrode design. By operating the decelerator in the so-called $s=3$ mode [@Meerakker:PRA71:053409], in which only one third of the electrode pairs are used for deceleration while the remaining pairs are used for transverse focusing, instabilities are effectively eliminated [@Meerakker:PRA73:023401; @Scharfenberg:PRA79:023410]. The high particle densities afforded by this method have recently enabled a number of high-resolution crossed beam scattering experiments, for instance [@Gilijamse:Science313:1617; @Kirste:Sience338:1060; @Zastrow:NatChem6:216; @Vogels:SCIENCE350:787]. Several strategies have likewise been explored to reduce losses in multi-stage Zeeman decelerators. Wiederkehr *et al.* extensively investigated phase stability in a Zeeman decelerator, particularly including the role of the nonzero rise and fall times of the current pulses, as well as the influence of the operation phase angle [@Wiederkehr:JCP135:214202; @Wiederkehr:PRA82:043428]. Evolutionary algorithms were developed to optimize the switching pulse sequence, significantly increasing the number of particles that exit from the decelerator. Furthermore, inspired by the $s=3$ mode of a Stark decelerator, alternative strategies for solenoid arrangements were investigated numerically [@Wiederkehr:PRA82:043428]. Dulitz *et al. * developed a model for the overall 6D phase-space acceptance of a Zeeman decelerator, from which optimal parameter sets can be derived to operate the decelerator at minimum loss [@Dulitz:PRA91:013409]. Dulitz *et al.
* also proposed and implemented schemes to improve the transverse focusing properties of a Zeeman decelerator by applying reversed current pulses to selected solenoids [@Dulitz:JCP140:104201]. Yet, despite the substantial improvements these methods can offer, the phase-stable operation of a multi-stage Zeeman decelerator over a large range of velocities remains challenging. Recently, a very elegant approach emerged that can be used to overcome these intrinsic limitations of multi-stage decelerators. So-called traveling wave decelerators employ spatially moving electrostatic or magnetic traps to confine part of the molecular beam in one or multiple wells that start traveling at the speed of the molecular beam pulse and are subsequently gradually slowed down. In this approach the molecules are confined in genuine potential wells, and stay confined in these wells until the final velocity is reached. Consequently, these decelerators are inherently phase stable, and no losses occur due to couplings of motions during the deceleration process. The acceptances are almost equal in both the longitudinal and transverse directions, which appears to be particularly advantageous for experiments that are designed to spatially trap the molecules at the end of the decelerator. Both traveling wave Stark [@Osterwalder:PRA81:051401; @vandenBerg:JMS300:201422] and Zeeman [@Trimeche:EPJD65:263; @Lavert-Ofir:NJP13:103030; @Lavert-Ofir:PCCP13:18948; @Akerman:NJP17:065015] decelerators have been successfully demonstrated. Recently, first experiments in which the decelerated molecules are subsequently loaded into static traps have been conducted [@Quintero:PRL110:133003; @Jansen:PRA88:043424]. These traveling wave decelerators typically feature a large overall 6D acceptance. This acceptance is almost equally partitioned between the longitudinal and both transverse directions. 
For high-resolution scattering experiments, however, there are rather different requirements for the beam than for trapping. Certainly, phase-stable operation of the decelerator—and the resulting production of molecular packets with high number densities—is essential. In addition, tunability over a wide range of final velocities is important, but the ability to reach very low final velocities approaching zero meters per second is often inconsequential. More important is how the phase-space acceptance is partitioned between the longitudinal and transverse directions. Ideally, for scattering experiments the longitudinal acceptance of the decelerator should be relatively large, whereas it should be small in the transverse directions. A broad longitudinal distribution—in the order of a few tens of mm spatially and 10–20 m/s in velocity—is typically required to yield sufficiently long interaction times with the target beam or sample, and to ensure the capture of a significant part of the molecular beam pulse that is available for scattering. In addition, a large longitudinal velocity acceptance allows for the application of advanced phase-space manipulation techniques such as bunch compression and longitudinal cooling to further improve the resolution of the experiment [@Crompvoets:PRL89:093004]. By contrast, much narrower distributions are desired in the transverse directions. Here, the spatial diameter of the beam should be matched to the size of the target beam and the detection volume; typically a diameter of several mm is sufficient. Finally, the transverse velocity distribution should be narrow to
--- author: - 'Y. X.' - 'W. Yu' - 'Z. Yan' - 'Huang' - 'D. Yang' - 'L. Sun' - 'T. P. Li' title: 'On the Relation of Hard X-ray Peak Flux and Outburst Waiting Time in the Black Hole Transient GX 339-4' --- [In this work we re-investigated the empirical relation between the hard X-ray peak flux and the outburst waiting time found previously in the black hole transient GX 339-4. ]{} [We included Swift/BAT data obtained in the past four years, together with the earlier BATSE and RXTE/HEXTE monitoring data, which cover the 2007 outburst. ]{} [The observation of the 2007 outburst confirms the empirical relation discovered before. We also show that faint flares with peak fluxes smaller than about 0.12 crab do not affect the empirical relation. We predict that the hard X-ray peak flux of the next outburst should be larger than 0.65 crab, which will make it at least the second brightest in the hard X-ray since 1991. ]{} INTRODUCTION ============ GX 339-4 is a black hole transient discovered more than 30 years ago. It has a mass function of $5.8~M_{\odot}$, a low mass companion star and a distance of $\gtrsim 7$ kpc [@Mar73; @Hyn03; @Sha01; @Zdz04]. It is one of the black hole transients with the most frequent outbursts [@Kon02; @Zdz04]. @Yu07 studied the outbursts of GX 339-4 observed between 1991 and 2005. They found a nearly linear relation between the peak flux of the low/hard (LH) spectral state that occurs at the beginning of an outburst and the outburst waiting time defined based on the hard X-ray flux peaks. The empirical relation indicates a link between the brightest LH state that the source can reach and the mass stored in the accretion disk before an outburst starts. Since then the source has undergone an outburst in 2007. The 2007 outburst and any future outbursts can be used to test and refine the empirical relation. Here we show that the hard X-ray peak flux of the 2007 outburst falls right on the empirical relation obtained by @Yu07, proving that the empirical relation indeed holds.
By including the most recent monitoring observations with the Swift/BAT in the past four years, we re-examine the empirical relation and make a prediction for the hard X-ray peak flux of the next bright outburst for a given waiting time. We also clarify issues related to faint flares that have been seen in the recent past. OBSERVATION AND DATA ANALYSIS ============================= We made use of observations performed with BATSE (20–160 keV) covering from May 31, 1991 to May 25, 2000, HEXTE (20–250 keV) covering from January 6, 1996 to January 2, 2006, as in @Yu07, and recent monitoring results of Swift/BAT that are publicly available (15–50 keV) covering from February 13, 2005 to August 31, 2009. The BATSE data were obtained in crab units. The fluxes of the Crab were 305 counts s$^{-1}$ and 0.228 counts s$^{-1}$ cm$^{-2}$ for HEXTE and BAT respectively. These values were used to convert the source fluxes into units of crab. Following the previous study [@Yu07], the light curves were rebinned to a time resolution of 10 days. It is worth noting that the X-ray fluxes quoted below all correspond to 10-day averages, including those obtained in the empirical relation and the predicted fluxes. The combined BATSE, HEXTE and BAT light curves are shown in Fig \[fig\_pkwt\]. The triangles marked with 1–8 indicate the initial hard X-ray peaks during the rising phases of outbursts 1–8, and those with $\rm 5_e$–$\rm 8_e$ indicate the ending hard X-ray peaks during the decay phases of outbursts 5–8. Outbursts 1–7 were studied in @Yu07. Outburst 8 is the 2007 outburst that occurred after the empirical relation was obtained. The waiting time of outburst 8 is determined in the same way as in the previous study, i.e., as the time separation between the peaks $\rm 7_e$ and 8, where the peak $\rm 7_e$ is the hard X-ray peak associated with the HS-to-LH transition.
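The unit conversion described above amounts to dividing an instrumental count rate by the corresponding Crab rate. A small sketch using the Crab rates quoted in the text (the sample rates passed to the function below are invented for illustration):

```python
# Crab count rates quoted in the text:
#   305 counts/s           for HEXTE (20-250 keV)
#   0.228 counts/s/cm^2    for Swift/BAT (15-50 keV)
CRAB_RATE = {"HEXTE": 305.0, "BAT": 0.228}

def to_crab(rate, instrument):
    """Convert an instrumental count rate into crab units by dividing by
    the Crab count rate measured with the same instrument."""
    return rate / CRAB_RATE[instrument]
```

For example, a 10-day-averaged HEXTE rate of 61 counts/s corresponds to 0.2 crab, and a BAT rate of 0.0456 counts/s/cm$^2$ likewise corresponds to 0.2 crab.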
In order to show how the peaks are chosen, we also plotted the soft X-ray light curves obtained with the RXTE/ASM and the hardness ratios between the ASM and the BATSE or HEXTE or BAT fluxes in Fig \[fig\_hr\]. This explicitly shows that the hard X-ray peaks at the end of outbursts correspond to the HS-to-LH state transitions. The initial hard X-ray peak, on the other hand, is normally the first prominent one during the initial LH state. Due to the hysteresis effect of spectral state transitions [@Miy95], the source would have very low luminosity after the HS-to-LH transition during the outburst decay. We took the hard X-ray peak corresponding to the HS-to-LH state transition, such as peak $\rm 7_e$, as the end of the previous outburst, i.e., the starting time to calculate the waiting time of the following outburst (see the definition of waiting time in @Yu07). Due to the relatively low sensitivity of BATSE, flares with 10-day averaged peak flux at or below about 0.1 crab could not be identified as individual outbursts. It is therefore worth noting that the current empirical relation is determined based on outbursts with hard X-ray peak fluxes above about 0.2 crab. In recent years with more sensitive observations of Swift/BAT, we have observed several faint flares in this source. These flares would not have been clearly seen in the BATSE 10-day averaged light curve and would not have been taken as single outbursts if BATSE had operated. Therefore we ignored these flares although they were clearly seen with Swift/BAT. We also analyzed the data for outburst 8, which is shown in Table 1. We found that the data point of outburst 8 follows the empirical relation reported in @Yu07, as shown in the inset panel of Fig \[fig\_pkwt\]. The deviation from the empirical relation is only -0.034 crab. The data points of all the outbursts follow a linear relation between $\rm F_p$ and $\rm T_w$.
A linear fit to this relation gives $\rm F_p=(9.25\pm0.06)\times 10^{-4}{\rm T_w}-(0.039\pm0.005)$, where $\rm F_p$ is in units of crab and $\rm T_w$ in units of days. This updated relation is almost identical to the one reported in @Yu07. The intrinsic scatter of the data is 0.014 crab, which defines a $\pm$0.014 crab bound of the linear relation. The intercept of the best-fitting linear model on the waiting time axis is $\rm T_w=42$ days when $\rm F_p=0$ crab. Considering the intrinsic scatter and the model uncertainty, we obtained an intercept $\rm T_w= 42\pm 20$ days. This means that the hard X-ray peak of any outburst should be at least $42\pm 20$ days after the end of the previous outburst, which is determined as the hard X-ray peak corresponding to the HS-to-LH transition. The refined empirical relation enables us to approximately estimate the hard X-ray peak flux (10-day average) for the next bright outburst in GX 339-4. The updated relation gives the peak flux of the next bright outburst as $\rm F_{p,n}=9.25\times10^{-4}~({\rm Day_{09}}+{\rm T_{rise}})+0.44$ crab, where $\rm Day_{09}$ is the number of days in 2009 when a future outburst starts and ${\rm T_{rise}}$ is the rise time in units of days for the next outburst to reach its initial hard X-ray peak. The hard X-ray peak flux can be predicted almost as soon as the next outburst occurs because the rise time is nearly a small constant compared with the waiting time. The source has remained inactive for about 750 days since the end of the 2007 outburst. This implies that the hard X-ray peak flux of the next outburst should be at least 0.65
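The linear fit above can be wrapped in a few lines of code; a minimal sketch using only the best-fit coefficients quoted in the text (uncertainties ignored for simplicity):

```python
SLOPE = 9.25e-4    # crab per day
INTERCEPT = -0.039 # crab

def predicted_peak_flux(t_wait):
    """10-day-averaged hard X-ray peak flux (in crab) expected for an
    outburst with waiting time t_wait (in days):
    F_p = 9.25e-4 * T_w - 0.039."""
    return SLOPE * t_wait + INTERCEPT

# Waiting-time axis intercept: F_p = 0 at T_w = 0.039 / 9.25e-4, about 42 days.
# After roughly 750 days of quiescence, the relation gives F_p of about
# 0.65 crab, consistent with the prediction quoted in the text.
```

This reproduces both numbers stated above: the 42-day intercept on the waiting-time axis and the ~0.65 crab minimum peak flux after ~750 days of inactivity.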
--- abstract: 'In this work an iterative algorithm based on unsupervised learning is presented, specifically on a Restricted Boltzmann Machine (RBM), to solve a perfect matching problem on a bipartite weighted graph. The algorithm iteratively computes the weights $w_{ij}$ and the bias parameters $\theta = (a_i, b_j)$ that maximize the energy function and assign element $i$ to element $j$.' author: --- Numerous resolution methods and algorithms have been proposed in recent times and many have provided important results; among them, for example, we find constructive heuristics, meta-heuristics, approximation algorithms, hyper-heuristics, and other methods. Combinatorial optimization deals with finding the optimal solution within a finite collection of possibilities. The heart of the problem of finding solutions in combinatorial optimization lies in efficient algorithms, i.e. algorithms whose computation time is polynomial in the input size. Therefore, when dealing with certain combinatorial optimization problems, one must ask how fast the optimal solution of the problem can be found and, if no method of this type exists, which approximate methods can be used in polynomial computational time that lead to stable solutions. Solving this kind of problem in polynomial time had long been the focus of research in this area until Edmonds \[1\] developed one of the most efficient methods. Over time other algorithms have been developed; among the fastest of them are the Micali and Vazirani algorithm \[2\], Blum \[3\], and Gabow and Tarjan \[4\]. The first of these methods is an improvement on that of Edmonds; the other algorithms use different logics, but all of them run in $O(m \sqrt n)$ time. 
The problem is fundamentally the following: imagine a situation in which, on the basis of characteristics measured for a given phenomenon, elements of two sets must be assigned to each other, as in one of the best-known problems, that of tasks and the workers to be assigned to them. A classical maximum-weight matching algorithm takes the maximum-weight edge and assigns it; in a decision support system, mediated by a domain expert, this could be acceptable, but in a fully automatic system, such as an artificial-intelligence system that pairs elements on the basis of some characteristics, this approach would not be very reliable, since it completely removes the user's control. Another problem related to this kind of situation is that of features. Take as an example the classic flight-gate assignment problem at an airport: on the basis of the history we might have information about the flight, the gates, the time, the flight number, and perhaps the airline. This is little information that, even with the best feature engineering, would lead to a machine learning model (specifically, a classifier) very poor in information. Treating the same problem with classical optimization, as done so far, would lead to solving it as a maximum-weight perfect matching, and we would be back at the beginning. Matching is the key to classical optimization. In this set of notes, we focus on the case when the underlying graph is bipartite. We start by introducing some basic graph terminology. A graph $G = (V, E)$ consists of a set $V = A \cup B$ of vertices and a set $E$ of pairs of vertices called edges. For an edge $e = (u, v)$, we say that the endpoints of $e$ are $u$ and $v$; we also say that $e$ is incident to $u$ and $v$. A graph $G = (V, E)$ is bipartite if the vertex set $V$ can be partitioned into two sets $A$ and $B$ (the bipartition) such that no edge in $E$ has both endpoints in the same set of the bipartition. 
A matching $M$ is a collection of edges such that every vertex of $V$ is incident to at most one edge of $M$. If a vertex $v$ has no edge of $M$ incident to it then $v$ is said to be exposed (or unmatched). A matching is perfect if no vertex is exposed; in other words, a matching is perfect if its cardinality is equal to $|A| = |B|$. In the literature several real-world examples have been treated, such as the assignment of children to schools \[5\], donors to patients \[6\], and workers to companies \[7\]. The weighted bipartite matching problem asks for the feasible matching with the maximum total weight. This problem has been developed in several areas, such as in the work of \[8\] on protein structure alignment, within computer vision as documented in the work of \[9\], or as in the paper by \[10\] in which the similarity of texts is estimated. Other works have addressed this problem in classification \[11\], \[12\] and \[13\], but not for many-to-one correspondence. The mathematical formulation can be stated as a linear program. Each edge $(i,j)$, where $i$ is in $A$ and $j$ is in $B$, has a weight $w_{ij}$. For each edge $(i,j)$ we have a decision variable $$x_{ij} =\begin{cases} 1 & \mbox{if the edge is contained in the matching} \\ 0 & \mbox{otherwise} \end{cases}$$ and we have the following integer LP: $$\begin{aligned} & \underset{x_{ij}}{\text{max}} \sum _{(i,j)\in A\times B}w_{ij}x_{ij} \end{aligned}$$ $$\begin{aligned} \sum _{j\in B}x_{ij}=1{\text{ for }}i\in A \end{aligned}$$ $$\begin{aligned} \sum _{i\in A}x_{ij}=1{\text{ for }}j\in B \end{aligned}$$ $$\begin{aligned} 0 \leq x_{ij} \leq 1{\text{ for }}i,j \in A,B \end{aligned}$$ $$\begin{aligned} x_{ij}\in \mathbb {Z} {\text{ for }}i,j\in A,B \end{aligned}$$ ![A bipartite graph with vertex sets $A$ and $B$.] One of the most popular solution methods is the Hungarian algorithm \[16\]. 
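To make the LP concrete, here is a minimal brute-force sketch (ours, for illustration only; in practice one would use the Hungarian algorithm \[16\] or an LP solver). Enumerating all permutations corresponds to enumerating all feasible 0/1 assignments $x_{ij}$ satisfying the constraints above:

```python
from itertools import permutations

def max_weight_matching(w):
    """Enumerate all feasible assignments of an n x n weight matrix w
    and return (best total weight, matching as (i, j) pairs).
    Exponential time: for illustration of the LP only."""
    n = len(w)
    best_val, best_perm = float("-inf"), None
    for perm in permutations(range(n)):  # perm[i] = the j matched to i
        val = sum(w[i][perm[i]] for i in range(n))
        if val > best_val:
            best_val, best_perm = val, perm
    return best_val, list(enumerate(best_perm))

w = [[3, 1, 2],
     [1, 4, 6],
     [5, 2, 1]]
print(max_weight_matching(w))  # (12, [(0, 1), (1, 2), (2, 0)])
```

Each permutation is one vertex of the assignment polytope, so this search is exactly the integer program above, solved by exhaustion.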
The assignment rule, therefore, in many real cases could be misleading and limiting, as well as unrealistic as a solution. Machine learning (ML) algorithms, both supervised and unsupervised, are increasingly gaining ground in applied sciences such as engineering, biology, and medicine. The matching problem in this case can be seen as a set of inputs $x_1, ..., x_k$ (in our case the nodes of the set $A$) and a set of outputs $y_1, ..., y_k$ (the respective nodes of the set $B$), weighted by a series of weights $w_{11}, ..., w_{ij}$, which inevitably recalls the structure of a classic neural network. The problem is that in this case there would be a number of classes (in the case of assignment) equal to the number of inputs. Considering it as a classic machine learning problem, the difficulty would lie on the one hand in the features and their engineering, while on the other hand the number of classes to predict (assign) would be very large. For example, if we think about matching applicants and jobs, and we only had the name of a candidate for the job, we would have very little information to build a robust machine learning model, and even good feature engineering would not lead to much; but if other information about the candidate were available, it could be extracted and used as "weights" to build a neural network. Even in this case, however, the constraints of a classic optimization model would not be maintained when solved with ML techniques; let us say we would "force" things a little. What we want to present in this work, instead, is the resolution of a classical matching (assignment) problem through the application of an ML model, in this case a neural network, which as already said maintains the mathematical structure of node (input) and arc (weight) but instead of considering the output set $B$ as a classification label (assignment)
--- abstract: 'The odd parity gravitational Quasi-Normal Mode spectrum of black holes with non-trivial scalar hair in Horndeski gravity is investigated. We study ‘almost’ Schwarzschild black holes such that any modifications to the spacetime geometry (including the scalar field profile) are treated perturbatively. A modified Regge-Wheeler style equation for the odd parity gravitational degree of freedom is presented to quadratic order in the scalar hair and spacetime modifications, and a parameterisation of the modified Quasi-Normal Mode spectrum is calculated. In addition, statistical error estimates for the new hairy parameters of the black hole and scalar field are given.' author: --- Gravitational waves (GWs) from the mergers of compact binaries are now observed routinely [@LIGOScientific:2018mvr]. With next generation ground and space based GW detectors on the horizon, the prospect of performing black hole spectroscopy (BHS) [@Dreyer:2003bv; @Berti:2005ys; @Gossan:2011ha; @Meidam:2014jpa; @Berti:2015itd; @Berti:2016lat; @Berti:2018vdi; @Baibhav:2018rfk; @2019arXiv190209199B; @Giesler:2019uxc; @Bhagwat:2019dtm; @Bhagwat:2019bwv; @Maselli:2019mjd; @Ota:2019bzl; @Cabero:2019zyt] (the gravitational analog to atomic spectroscopy) is tantalisingly close. With BHS, one aims to discern multiple distinct frequencies of gravitational waves emitted during the ringdown of the highly perturbed remnant black hole of a merger event. These frequencies, known as Quasi-Normal Modes (QNMs), act as fingerprints for a black hole, being dependent on both the background properties of a black hole (e.g. its mass) and on the laws of gravity [@1975RSPSA.343..289C; @0264-9381-16-12-201; @Kokkotas:1999bd; @Berti:2009kk; @Konoplya:2011qq]. In General Relativity (GR), the QNM spectrum of a Kerr black hole is entirely determined by its mass and angular momentum, and the black hole is said to have no further ‘hairs’ [@Kerr:1963ud; @Israel:1967wq; @Israel:1967za; @Carter:1971zc; @1972CMaPh..25..152H; @PhysRevD.5.2403]. 
Thus the detection of multiple QNMs in the ringdown portion of a gravitational wave signal allows a consistency check between the inferred values of $M$ and $J$ from each frequency. In gravity theories other than GR, however, the situation can be markedly different. For example, black holes may not be described by the Kerr solution, and may have properties other than mass or angular momentum that affect their QNM spectra. Such black holes are said to have ‘hair’ and, despite no-hair theorems existing for various facets of modified gravity, finding and studying hairy black hole solutions is at the forefront of strong gravity research [@Blazquez-Salcedo:2016enn; @Blazquez-Salcedo:2017txk; @Silva:2017uqg; @Antoniou:2017acq; @Antoniou:2017hxj; @Bakopoulos:2018nui; @Silva:2018qhn; @Minamitsuji:2018xde; @Sullivan:2019vyi; @Ripley:2019irj; @Macedo:2019sem; @Konoplya:2001ji; @Dong:2017toi; @Endlich:2017tqa; @Cardoso:2018ptl; @Brito:2018hjh; @Franciolini:2018uyq; @Okounkova:2019zjf]. On the other hand, even if black holes in modified gravity theories are described by the same background solution as in GR (i.e. they have no hair), their perturbations may obey modified equations of motion that alter the emitted gravitational wave signal [@Barausse:2008xv; @Molina:2010fb; @Tattersall:2017erk; @Tattersall:2018nve; @Tattersall:2019pvx]. In this paper we will investigate the first possibility, where modified gravity black holes are altered from their usual description in GR due to their interactions with new gravitational fields. We will, however, assume that black holes are (to first order at least) well described by the GR solutions, and any modifications to the background spacetime are treated perturbatively. As various observations appear to suggest that black holes are well described by the suite of GR solutions [@2016PhRvL.116v1101A; @Isi:2019aib], this approach seems sensible. This perturbative treatment underpins the analysis throughout this work. 
We will specifically focus on the Horndeski family of scalar-tensor theories of gravity [@Horndeski:1974wa], where a new gravitational scalar field interacts non-minimally with the metric. Furthermore, for simplicity, we will restrict ourselves to looking only at the odd parity sector of perturbations to spherically symmetric black holes, i.e. we will assume that the black holes studied here are described by a slightly modified Schwarzschild metric. *Summary*: In section \[horndeskisection\] we will introduce the action for Horndeski gravity, the hairy black hole metric and scalar field profile that we are considering, and explore the odd parity gravitational perturbations of this system. In section \[QNMsection\] we will utilise the results of [@Cardoso:2019mqo] to calculate the modified QNM spectrum of the modified black hole, and provide observational error estimates for the new hairy parameters. We will also provide numerical predictions here. 
Horndeski Gravity {#horndeskisection} ================= Background ---------- A general action for scalar-tensor gravity with 2$^{nd}$ order-derivative equations of motion is given by the Horndeski action [@Horndeski:1974wa; @Kobayashi:2011nu]: $$\begin{aligned} S=\int d^4x\sqrt{-g}\sum_{n=2}^5L_n,\label{Shorndeski}\end{aligned}$$ where the component Horndeski Lagrangians are given by: $$\begin{aligned} L_2&=G_2(\phi,X)\nonumber\\ L_3&=-G_3(\phi,X)\Box \phi\nonumber\\ L_4&=G_4(\phi,X)R+G_{4X}(\phi,X)((\Box\phi)^2-\phi^{\alpha\beta}\phi_{\alpha\beta} )\nonumber\\ L_5&=G_5(\phi,X)G_{\alpha\beta}\phi^{\alpha\beta}-\frac{1}{6}G_{5X}(\phi,X)((\Box\phi)^3 \nonumber\\ & -3\phi^{\alpha\beta}\phi_{\alpha\beta}\Box\phi +2 \phi_{\alpha\beta}\phi^{\alpha\sigma}\phi^{\beta}_{\sigma}),\end{aligned}$$ where $\phi$ is the scalar field with kinetic term $X=-\phi_\alpha\phi^\alpha/2$, $\phi_\alpha=\nabla_\alpha\phi$, $\phi_{\alpha\beta}=\nabla_\alpha\nabla_\beta\phi$, and $G_{\alpha\beta}=R_{\alpha\beta}-\frac{1}{2}R\,g_{\alpha\beta}$ is the Einstein tensor. The $G_i$ are arbitrary functions of $\phi$ and $X$, with derivatives $G_{iX}$ with respect to $X$. GR is given by the choice $G_4=M_{P}^2/2$ with all other $G_i$ vanishing and $M_{P}$ being the reduced Planck mass. Note that eq. (\[Shorndeski\]) can be extended further, for example to degenerate higher-order scalar-tensor theories [@Achour:2016rkg]. For a spherically symmetric black hole solution in Horndeski gravity we assume the following form for the metric $g$ and scalar field $\phi$ in ‘Schwarzschild-like’ coordinates: $$\begin{aligned} ds^2=&\;g_{\mu\nu}dx^\mu dx^\nu=\;-A(r)\,dt^2+\frac{dr^2}{B(r)}+r^2(d\theta^2+\sin^2\theta\,d\varphi^2),\end{aligned}$$ with a purely radial scalar field profile $\phi=\phi(r)$.
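As a quick consistency check (ours, not part of the source), the GR limit quoted above can be made explicit: with $G_4=M_{P}^2/2$ constant, $G_{4X}=0$ and only the curvature term of $L_4$ survives,

```latex
% GR limit of the Horndeski action: G_4 = M_P^2/2, all other G_i = 0
S_{\rm GR}=\int d^4x\sqrt{-g}\,L_4
          =\frac{M_{P}^2}{2}\int d^4x\sqrt{-g}\,R ,
```

which is the Einstein–Hilbert action.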
--- abstract: 'We review the stabilization of the radion in the Randall–Sundrum model through the Casimir energy due to a bulk conformally coupled field. We also show some exact self–consistent solutions taking into account the backreaction that this energy induces on the geometry.' address: --- Theories with extra dimensions have been proposed as a possible solution of the hierarchy problem [@RS1]. The idea is to introduce a $d$-dimensional internal space of large physical volume ${\cal V}$, so that the effective lower dimensional Planck mass $m_{pl}\sim {\cal V}^{1/2} M^{(d+2)/2}$ is much larger than $M \sim TeV$, the true fundamental scale of the theory. In the original scenarios, only gravity was allowed to propagate in the higher dimensional bulk, whereas all other matter fields were confined to live on a lower dimensional brane. Randall and Sundrum [@RS1] (RS) introduced a particularly attractive model where the gravitational field created by the branes is taken into account. Their background solution consists of two parallel flat branes, one with positive tension and another one with negative tension, embedded in a five-dimensional Anti-de Sitter (AdS) bulk. In this model, the hierarchy problem is solved if the distance between branes is about $37$ times the AdS radius and we live on the negative tension brane. More recently, scenarios where additional fields propagate in the bulk have been considered [@alex1; @alex2; @alex3; @bagger]. In principle, the distance between branes is a massless degree of freedom, the radion field $\phi$. However, in order to make the theory compatible with observations this radion must be stabilized [@gw1; @gw2; @gt; @cgr; @tm]. Clearly, all fields which propagate in the bulk will give Casimir-type contributions to the vacuum energy, and it seems natural to investigate whether these could provide the stabilizing force which is needed. 
Here, we shall calculate the radion one loop effective potential $V_{\hbox{\footnotesize\it \hspace{-6pt} eff\,}}(\phi)$ due to conformally coupled bulk scalar fields, although the result shares many features with other massless bulk fields, such as the graviton, which is addressed in [@gpt]. As we shall see, this effective potential has a rather non-trivial behaviour, which generically develops a local extremum. Depending on the detailed matter content, the extremum could be a maximum or a minimum, where the radion could sit. For the purposes of illustration, here we shall concentrate on the background geometry discussed by Randall and Sundrum, although our methods are also applicable to other geometries, such as the one introduced by Ovrut [*et al.*]{} in the context of eleven dimensional supergravity with one large extra dimension [@ovrut]. This calculation is presented in more detail in [@gpt]. Related calculations of the Casimir interaction amongst branes have been presented in an interesting paper by Fabinger and Hořava [@FH]. In the concluding section we shall comment on the differences between their results and ours. The Randall-Sundrum model and the radion field ============================================== To be definite, we shall focus attention on the brane-world model introduced by Randall and Sundrum [@RS1]. In this model the metric in the bulk is anti-de Sitter space (AdS), whose (Euclidean) line element is given by $$ds^2=a^2(z)\eta_{ab}dx^{a}dx^{b}= a^2(z)\left[dz^2 +d{\bf x}^2\right] =dy^2+a^2(z)d{\bf x}^2. \label{rsmetric}$$ Here $a(z)=\ell/z$ is the warp factor, with $\ell$ the AdS radius. The branes are placed at arbitrary locations which we shall denote by $z_+$ and $z_-$, where the positive and negative signs refer to the positive and negative tension branes respectively ($z_+ < z_-$). 
The “canonically normalized” radion modulus $\phi$, whose kinetic term contribution to the dimensionally reduced action on the positive tension brane is given by $${1\over 2}\int d^4 x \sqrt{g_+}\, g^{\mu\nu}_+\partial_{\mu}\phi \,\partial_{\nu}\phi, \label{kin}$$ is related to the proper distance $d= \Delta y$ between both branes in the following way [@gw1] $$\phi=(3M^3\ell/4\pi)^{1/2} e^{- d/\ell}.$$ Here, $M \sim TeV$ is the fundamental five-dimensional Planck mass. It is usually assumed that $\ell \sim M^{-1}$. Let us introduce the dimensionless radion $$\lambda \equiv \left({4\pi \over 3M^3\ell}\right)^{1/2} {\phi} = {z_+ \over z_-} = e^{-d/\ell},$$ which will also be referred to as [*the hierarchy*]{}. The effective four-dimensional Planck mass $m_{pl}$ from the point of view of the negative tension brane is given by $m_{pl}^2 = M^3 \ell (\lambda^{-2} - 1)$. With $d\sim 37 \ell$, $\lambda$ is the small number responsible for the discrepancy between $m_{pl}$ and $M$. At the classical level, the radion is massless. However, as we shall see, bulk fields give rise to a Casimir energy which depends on the interbrane separation. This induces an effective potential $V_{\hbox{\footnotesize\it \hspace{-6pt} eff\,}}(\phi)$ which by convention we take to be the energy density per unit physical volume on the positive tension brane, as a function of $\phi$. This potential must be added to the kinetic term (\[kin\]) in order to obtain the effective action for the radion: $$S_{\hbox{\footnotesize\it \hspace{-6pt} eff\,}}[\phi] =\int d^4x\, a_+^4 \left[{1\over 2}g_+^{\mu\nu}\partial_{\mu}\phi\, \partial_{\nu}\phi + V_{\hbox{\footnotesize\it \hspace{-6pt} eff\,}}(\lambda(\phi)) \right]. \label{effect}$$ In the following Section, we calculate the contributions to $V_{\hbox{\footnotesize\it \hspace{-6pt} eff\,}}$ from conformally invariant bulk fields. 
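A quick numeric sketch (ours, with the illustrative choice $\ell = M^{-1}$) shows how $d \simeq 37\,\ell$ generates the sixteen orders of magnitude between $M\sim$ TeV and $m_{pl}$:

```python
import math

# Dimensionless radion lambda = e^{-d/ell}; for ell = 1/M the relation
# m_pl^2 = M^3 ell (lambda^{-2} - 1) gives m_pl / M = sqrt(lambda^{-2} - 1).
d_over_ell = 37
lam = math.exp(-d_over_ell)          # the "hierarchy"
mpl_over_M = math.sqrt(lam**-2 - 1)  # ratio of Planck to fundamental scale
print(f"{mpl_over_M:.2e}")           # ~1.2e+16, i.e. TeV -> Planck scale
```

This is just $e^{37}\approx 10^{16}$, confirming that an interbrane distance of about 37 AdS radii suffices.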
Massless scalar bulk fields =========================== The effective potential induced by scalar fields with arbitrary coupling to the curvature, bulk mass and boundary mass can also be addressed. It reduces to a calculation similar to the minimally coupled massless field case, which is solved in [@gpt] and corresponds to bulk gravitons. However, for the sake of simplicity, we shall only consider below the contribution to $V_{\hbox{\footnotesize\it \hspace{-6pt} eff\,}}(\phi)$ from conformally coupled massless bulk fields. Technically, this is much simpler than finding the contribution from bulk gravitons, and the problem of backreaction of the Casimir energy onto the background can be taken into consideration in this case. Here we are considering generalizations of the original RS proposal [@alex1; @alex2; @alex3] which allow several fields other than the graviton only (the latter contributing as a minimally coupled scalar field). A conformally coupled scalar $\chi$ obeys the equation of motion $$-\Box_g \chi + {D-2 \over 4 (D-1)}\ R\ \chi =0, \label{confin}$$ In the conformally flat background (\[rsmetric\]), the conformally rescaled field $\hat\chi \equiv a^{(D-2)/2}\chi$ obeys the flat space equation $$\Box^{(0)} \hat\chi =0. \label{fse}$$ Here $\Box^{(0)}$ is the [*flat space*]{} d’Alembertian. It is customary to impose $Z_2$ symmetry on the bulk fields, with a given parity for each field. If we choose even parity for $\hat\chi$, this results in Neumann boundary conditions $$\partial_{z}\hat\chi = 0,$$ at $z_+$ and $z_-$. The eigenvalues of the d’Alembertian subject to these conditions are given by $$\label{flateigenvalues} \lambda^2_{n,k}=\left({n \pi \over L}\right)^2+k^2,$$ where $n$ is a positive integer, $L=z_{-}-z_+$ is the coordinate distance between both branes and $k$ is the coordinate momentum parallel to the branes. [^1] Similarly, we could consider the case of massless fermions in the RS background. The Dirac equation,[^2] $$\gamma^{n}e^a_{\ n}\nabla_a\,\psi=0.$$ is conformally invariant [@bida], and the conformally rescaled components of the fermion obey the flat space equation (\[fse\]) with Neumann boundary conditions.
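For concreteness, a small sketch (ours) of the Neumann mode spectrum of eq. (\[flateigenvalues\]):

```python
import math

# Eigenvalues lambda_{n,k}^2 = (n*pi/L)^2 + k^2 for the even (Neumann) modes,
# with L the coordinate distance between the branes and k the coordinate
# momentum parallel to them.
def mode_frequency(n, k, L):
    return math.sqrt((n * math.pi / L) ** 2 + k ** 2)

# With L = pi the spectrum is simply sqrt(n^2 + k^2):
print(round(mode_frequency(3, 4.0, math.pi), 6))  # 5.0
```

The Casimir energy is then a (regularized) sum of these mode frequencies over $n$ and an integral over $k$.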
--- abstract: 'It was conjectured by Lang that a complex projective manifold is Kobayashi hyperbolic if and only if it is of general type together with all of its subvarieties. We verify this conjecture for projective manifolds whose universal cover carries a bounded, strictly plurisubharmonic function.' address: Roma. author: - Sébastien Boucksom - Simone Diverio bibliography: - 'bibliography.bib' title: 'A note on Lang’s conjecture for quotients of bounded domains' --- [^1] Introduction {#introduction .unnumbered} ============ For a compact complex space $X$, Kobayashi hyperbolicity is equivalent to the fact that every holomorphic map ${\mathbb{C}}\to X$ is constant, thanks to a classical result of Brody. When $X$ is moreover projective (or, more generally, compact Kähler), hyperbolicity is further expected to be completely characterized by (algebraic) positivity properties of $X$ and of its subvarieties. More precisely, we have the following conjecture, due to S. Lang. [@Lan86 Conjecture 5.6] A projective variety $X$ is hyperbolic if and only if every subvariety (including $X$ itself) is of general type. Recall that a projective variety $X$ is of general type if the canonical bundle of any smooth projective birational model of $X$ is big, *i.e. * has maximal Kodaira dimension. This is for instance the case when $X$ is smooth and *canonically polarized*, *i.e. * with $K_X$ ample. Note that Lang’s conjecture in fact implies that every smooth hyperbolic projective manifold $X$ is canonically polarized, as conjectured in 1970 by S. Kobayashi. It is indeed a well-known consequence of the Minimal Model Program that any projective manifold of general type without rational curves is canonically polarized (see for instance [@BBP Theorem A]). 
Besides the trivial case of curves and partial results for surfaces [@MM83; @DES79; @GG80; @McQ98], Lang’s conjecture is still almost completely open in higher dimension as of this writing. General projective hypersurfaces of high degree in projective space form a remarkable exception: they are known to be hyperbolic [@Bro17] (see also [@McQ99; @DEG00; @DT10; @Siu04; @Siu15; @RY18]), and they satisfy Lang’s conjecture [@Cle86; @Ein88; @Xu94; @Voi96; @Pac04]. It is natural to test Lang’s conjecture for the following two basic classes of manifolds, known to be hyperbolic since the very beginning of the theory: - (N) compact Kähler manifolds $X$ with negative holomorphic sectional curvature; - (B) compact, free quotients $X$ of bounded domains $\Omega\Subset{\mathbb{C}}^n$. In case (N), ampleness of $K_X$ was established in [@WY16a; @WY16b; @TY17] (see also [@DT16]). By curvature monotonicity, this implies that every smooth subvariety of $X$ also has ample canonical bundle. More generally, Guenancia recently showed [@Gue18] that each (possibly singular) subvariety of $X$ is of general type, thereby verifying Lang’s conjecture in that case. One might even more generally consider the case where $X$ carries an arbitrary Hermitian metric of negative holomorphic sectional curvature, which seems to be still open. In this note, we confirm Lang’s conjecture in case (B). While the case of quotients of bounded *symmetric* domains has been widely studied (see, just to cite a few, [@Nad89; @BKT13; @Bru16; @Cad16; @Rou16; @RT18]), the general case seems to have somehow passed unnoticed. Instead of bounded domains, we consider more generally the following class of manifolds, which comprises relatively compact domains in Stein manifolds, and has the virtue of being stable under passing to an étale cover or a submanifold: we say that a complex manifold is of *bounded type* if it carries a bounded, strictly plurisubharmonic (psh) function ${\varphi}$. 
By a well-known result of Richberg, any *continuous* bounded strictly psh function on a complex manifold $M$ can be written as a decreasing limit of smooth strictly psh functions, but this fails in general for discontinuous functions [@For p.66], and it is thus unclear to us whether every manifold of bounded type should carry also a *smooth* bounded strictly psh function. \[thm:main\] Let $X$ be a compact Kähler manifold admitting an étale (Galois) cover $\tilde X\to X$ of bounded type. Then: - $X$ is Kobayashi hyperbolic; - $X$ has large fundamental group; - $X$ is projective and canonically polarized; - every subvariety of $X$ is of general type. Note that $\tilde X$ can always be replaced with the universal cover of $X$, and hence can be assumed to be Galois. By [@Kob98 3.2.8], (i) holds iff $\tilde X$ is hyperbolic, which follows from the fact that manifolds of bounded type are Kobayashi hyperbolic [@Sib81 Theorem 3]. Alternatively, any entire curve $f:{\mathbb{C}}\to X$ lifts to $\tilde X$, and the pull-back to ${\mathbb{C}}$ of the bounded, strictly psh function carried by $\tilde X$ has to be constant, showing that $f$ itself is constant. By definition, (ii) means that the image in $\pi_1(X)$ of the fundamental group of any subvariety $Z\subseteq X$ is infinite [@Kol §4.1], and is a direct consequence of the fact that manifolds of bounded type do not contain nontrivial compact subvarieties. According to the Shafarevich conjecture, $\tilde X$ should in fact be Stein; in case $\tilde X$ is a bounded domain of ${\mathbb{C}}^n$, this is indeed a classical result of Siegel [@Sie50] (see also  [@Kob59 Theorem 6.2]). By another classical result, this time due to Kodaira [@Kod], any compact complex manifold $X$ admitting a Galois étale cover $\tilde X\to X$ biholomorphic to a bounded domain in ${\mathbb{C}}^n$ is projective, with $K_X$ ample. Indeed, the Bergman metric of $\tilde X$ is non-degenerate, and it descends to a positively curved metric on $K_X$. 
Our proof of (iii) and (iv) is a simple variant of this idea, inspired by [@CZ02]. For each subvariety $Y\subseteq X$ with desingularization $Z\to Y$ and induced Galois étale cover $\tilde Z\to Z$, we use basic Hörmander–Andreotti–Vesentini–Demailly $L^2$-estimates for ${\overline{\partial}}$ to show that the Bergman metric of $\tilde Z$ is generically non-degenerate. It then descends to a psh metric on $K_Z$, smooth and strictly psh on a nonempty Zariski open set, which is enough to conclude that $K_Z$ is big, by [@Bou02]. As a final comment, note that Kähler hyperbolic manifolds, *i.e. * compact Kähler manifolds $X$ carrying a Kähler metric ${\omega}$ whose pull-back to the universal cover $\pi:\tilde X\to X$ satisfies $\pi^*{\omega}=d{\alpha}$ with ${\alpha}$ bounded, also satisfy (i)–(iii) in Theorem A [@Gro]. It would be interesting to check Lang’s conjecture for such manifolds as well. This work was started during the first-named author’s stay at SAPIENZA Università di Roma. He gratefully acknowledges its support. Both authors would also like to thank Stefano Trapani for helpful discussions, in particular for pointing out the reference [@For]. The Bergman metric and manifolds of general type ================================================ Non-degeneration of the Bergman metric -------------------------------------- Recall that the *Bergman space* of a complex manifold $M$ is the separable Hilbert space ${\mathcal{H}}={\mathcal{H}}(M)$ of holomorphic forms $\eta\in H^0(M,K_M)$ such that $$\|\eta\|_{\mathcal{H}}^2:=i^{n^2}\int_{M}\eta\wedge\bar\eta<\infty.$$
--- abstract: 'We predict a huge interference effect contributing to the conductance through large ultra-clean quantum dots of chaotic shape. When a double-dot structure is made such that the dots are the mirror-image of each other, constructive interference can make a tunnel barrier located on the symmetry axis effectively transparent. We show (via theoretical analysis and numerical simulation) that this effect can be orders of magnitude larger than the well-known universal conductance fluctuations and weak-localization (both less than a conductance quantum). A small magnetic field destroys the effect, massively reducing the double-dot conductance; thus a magnetic field detector is obtained, with a similar sensitivity to a SQUID, but requiring no superconductors.' author: - 'Robert S. Whitney' - 'P. Marconcini' - 'M. Macucci' title: 'Symmetry causes a huge conductance peak in double quantum dots' --- Quantum interference effects are well known in the conductance of ballistic chaotic quantum dots [@Marcus-chaos]. The chaotic shape of such dots makes these effects analogous to speckle-patterns in optics rather than to the regular interference patterns observed with Young’s slits or Fabry-Perot etalons. While such interference phenomena are beautiful, they have only a small effect on the properties of quantum dots coupled to multi-mode leads. Here we predict an interference effect which can be orders of magnitude larger; see Fig. \[Fig:numerics\]. The interference effects occur in systems that are mirror-symmetric but otherwise chaotic. We show that the mirror symmetry induces interference that greatly enhances tunneling through a barrier located on the symmetry axis; it can make the barrier become effectively transparent. Thus an open double-dot system with an almost opaque tunnel barrier between the two dots will exhibit a huge peak in conductance when the two dots are the mirror image of each other, see Fig. \[Fig:numerics\]. This effect could be used to detect anything which breaks the mirror symmetry. 
For example, current 2D electron gas (2DEG) technology [@best-ultraclean-samples] could be used to construct a device whose resistance changes by a factor of ten when an applied magnetic flux changes from zero to a fraction of a flux quantum in the double dot. This is a sensitivity similar to that of a SQUID, but it is achieved without superconductivity, making it easy to integrate with other 2DEG circuitry. ![\[Fig:butter-path\] A mirror-symmetric double dot, where the classical dynamics is highly chaotic. We call it a “butterfly double dot” to emphasize the left-right symmetry. Every classical path from the left lead to the right lead (solid line) which hits the barrier more than once, is part of a family of paths which are related to it by the mirror symmetry (dashed line). ](fig1.eps){width="6.5cm"} [**Origin of the conductance peak.**]{} The origin of the effect can be intuitively understood by looking at Fig. \[Fig:butter-path\]. Assume that electrons only follow the two paths shown (instead of an infinite number of different paths). Path 1 does not tunnel the first time it hits the barrier, but does tunnel the second time it hits it. Path 2 tunnels the first time it hits the barrier, but not the second time. Quantum mechanics gives the probability to go from the left lead to the right lead as $|r(\theta) t(\theta'){\rm e}^{{\rm i} S_1/\hbar} + t(\theta) r(\theta'){\rm e}^{{\rm i} S_2/\hbar}|^2$, where the scattering matrix of the tunnel barrier has amplitudes $r(\theta)$ and $t(\theta)$ for reflection and transmission at angle $\theta$. If there is no correlation between the classical actions of the two paths ($S_1$ and $S_2$), then the cross-term cancels upon averaging over energy, leaving the probability as $|r(\theta) t(\theta')|^2+|t(\theta) r(\theta')|^2$. 
In contrast, if there is a perfect mirror symmetry, then $S_2=S_1$, and the probability is $|r(\theta) t(\theta')+t(\theta) r(\theta')|^2$, which is significantly greater than $|r(\theta) t(\theta')|^2+|t(\theta) r(\theta')|^2$. Indeed, if we could drop the $\theta$-dependence of $r$ and $t$, the probability would be doubled by the constructive interference induced by the mirror symmetry. A path that hits the barrier $(n+1)$ times has $2^n$ partners with the same classical action (each path segment that begins and ends on the barrier can be reflected with respect to the barrier axis). However the conductance is [*not*]{} thereby enhanced by $2^n$, because (due to the nature of the barrier scattering matrix) there is also destructive interference when one path tunnels $(4j-2)$ times more than another (for integer $j$). The effect looks superficially like resonant tunneling. However, that only occurs when dots are weakly coupled to the leads, so that each dot has a peak for each level of the closed dot and the current flow is enhanced when two peaks are aligned. Instead in our case each dot is well coupled to a lead (with $N\!\gg\!1$ modes), so the density of states in each dot is featureless (the broadening of each level is about $N$ times the level-spacing). Furthermore, resonant tunneling occurs at discrete energies, while our effect is largely energy independent. Another superficially similar phenomenon is reflectionless tunneling, in which retro-reflection at an interface enhances transmission through a barrier [@reflectionless-tunnel-review]. However, this retro-reflection transforms the classical dynamics in the dot from chaotic to integrable [@Kosztin-Maslov-Goldbart], and large interference effects in integrable systems are not uncommon (consider a Fabry-Perot etalon). Here, the mirror symmetry induces a large interference effect without any retro-reflection and without a change in the nature of the classical dynamics (chaotic motion remains chaotic).
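The doubling claim for angle-independent amplitudes can be checked numerically. A minimal sketch (values purely illustrative; the factor ${\rm i}$ on the transmission amplitude follows the barrier scattering matrix given below) compares the incoherent sum of the two paths with the mirror-symmetric case $S_2 = S_1$:

```python
import cmath

def two_path_probability(r, t, S1, S2, hbar=1.0):
    """Left-to-right probability for the two paths of Fig. [Fig:butter-path]:
    path 1 reflects then tunnels (action S1); path 2 tunnels then reflects (S2)."""
    amp1 = r * t * cmath.exp(1j * S1 / hbar)
    amp2 = t * r * cmath.exp(1j * S2 / hbar)
    return abs(amp1 + amp2) ** 2

# Hypothetical angle-independent barrier with |r|^2 + |t|^2 = 1
r, t = 0.95 ** 0.5, 1j * 0.05 ** 0.5

incoherent = abs(r * t) ** 2 + abs(t * r) ** 2            # uncorrelated actions
symmetric = two_path_probability(r, t, S1=2.7, S2=2.7)    # mirror symmetry: S2 = S1
assert abs(symmetric - 2 * incoherent) < 1e-12            # probability doubles
```

The assertion confirms that perfect action correlation doubles the transmission probability relative to the energy-averaged incoherent sum.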
[\[Fig:numerics\] Average conductance as (a) a function of applied $B$-field (with the barrier on the symmetry axis), and as (b) a function of the barrier position (for zero $B$-field). The latter mimics the effect of gates that reduce the size of one dot relative to the other. The data points come from simulations performed for the structures shown in the insets. The simulations represent the experimental data. The conductance of the tunnel barrier alone is $G_{\rm tb}$.] ![\[Fig:cond-ratio\] Plot of the ratio $\langle G_{\rm sym}\rangle/\langle G_{\rm asym}\rangle$, given by Eqs. (\[eq:Gsym\],\[eq:Gasym\]). The ratio grows as $T_{\rm tb}\to 0$ for all $P$ (although $\langle G_{\rm sym,asym}\rangle$ shrink). For given $T_{\rm tb}$, the ratio is maximal at $P=(1 -2T_{\rm tb}^{1/2})/(1-4T_{\rm tb})$. ](fig3.eps){width="6.6cm"} [**Semiclassical theory.**]{} Our analysis follows the semiclassical theory of transport [@Bar93]. The conductance through a system whose dimensions are much greater than a Fermi wavelength can be written as a double sum over classical paths, $\gamma$ and $\gamma'$, which both start at a point $y_0$ on the cross-section of the left lead and end at $y$ on the right lead: $$\begin{aligned} G &=& (2\pi \hbar)^{-1} G_0\sum_{\gamma,\gamma'} A_{\gamma}A_{\gamma'}^* \exp \big[{\rm i}(S_\gamma-S_{\gamma'})/\hbar \big] , \label{eq:conductance}\end{aligned}$$ where $G_0= 2e^2/h$ is the quantum of conductance, and $S_\gamma$ is the classical action of path $\gamma$. A tunnel barrier with left-right symmetry must have the scattering matrix $$\begin{aligned} {\cal S}_{\rm tb}(\theta) = {\rm e}^{{\rm i} \phi_{r}(\theta)} \left(\begin{array}{cc} |r(\theta)| & \pm {\rm i} |t(\theta)| \\ \pm {\rm i} |t(\theta)| & |r(\theta)|\end{array} \right) \label{eq:Stb}\end{aligned}$$ where $r(\theta)$ and $t(\theta)$ are reflection and transmission amplitudes for a plane wave at angle of incidence $\theta$. Keeping the dominant contributions, the surviving terms in Eq. (\[eq:conductance\]) are $$\begin{aligned} \label{eq:
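The form of ${\cal S}_{\rm tb}$ in Eq. (\[eq:Stb\]) is unitary (current-conserving) whenever $|r|^2+|t|^2=1$, which is why the reflection and transmission amplitudes must carry a relative phase of $\pm{\rm i}$. A quick numerical check at fixed angle (the overall phase and sign are arbitrary illustrative choices):

```python
import numpy as np

def S_tb(T_tb, phi=0.3, sign=+1):
    """Scattering matrix of a left-right symmetric tunnel barrier at a fixed
    angle theta, Eq. (Stb); T_tb = |t|^2 is the transmission probability."""
    r, t = np.sqrt(1.0 - T_tb), np.sqrt(T_tb)
    return np.exp(1j * phi) * np.array([[r, sign * 1j * t],
                                        [sign * 1j * t, r]])

# Unitarity holds for any transmission probability
for T in (0.01, 0.5, 0.99):
    S = S_tb(T)
    assert np.allclose(S.conj().T @ S, np.eye(2))
```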
--- abstract: 'Spinel structured compounds, $AB_2O_4$, are special because of their exotic multiferroic properties. In $ACr_2O_4$ ($A$=$Co$, $Mn$, $Fe$), a switchable polarization has been observed experimentally due to a non-collinear magnetic spin order. In this article, we demonstrate the microscopic origin behind such magnetic spin order, hysteresis, polarisation and the so-called magnetic compensation effect in $ACr_2O_4$ ($A$=$Co$, $Mn$, $Fe$, $Ni$) using Monte Carlo simulation. With a careful choice of the exchange interactions, we are able to explain various experimental findings such as magnetization vs. temperature (T) behavior, conical stability, unique magnetic ordering and polarization in a representative compound $CoCr_2O_4$, which is the best known multiferroic compound in the $AB_2O_4$ spinel family. We have also studied the effect of $Fe$-substitution in $CoCr_2O_4$, which triggers a few exotic phenomena such as magnetic compensation and a sign-reversible exchange bias effect. These effects are investigated using effective interactions mimicking the effect of substitution. Two other compounds in this family, $CoMn_2O_4$ and $CoFe_2O_4$, are also studied; no conical magnetic order or polarisation is observed in them, and hence they provide a distinct contrast. All polarisation values are calculated using the spin-current model. This model has certain limitations and works well only at low temperature and low magnetic field, but despite these limitations it reproduces phenomena such as the sign-reversible exchange bias and magnetic compensation quite well.' author: - Debashish Das - Aftab Alam title: 'Exotic multiferroic properties of spinel structured $AB_2O_4$ compounds: A Monte Carlo Study' --- Introduction ============ $CoCr_2O_4$ is a classic example of a spinel which is observed to show a new kind of polarisation at very low temperature, whose origin lies in the formation of a conical magnetic order.
[@Pol-org] The application of a magnetic field manipulates the cone angle and hence the coupling between the ferromagnetic and ferroelectric properties. Similar multiferroism has been reported for other spinel compounds such as $MnCr_2O_4$,[@Tomiyasu] $NiCr_2O_4$,[@ACr2O4-pol] and $FeCr_2O_4$. [@FeCr2O4-pol] These four spinels possess both polarisation and magnetism of spin origin. There are also several compounds, $RMnO_3$ (R= Tb, Dy), in the perovskite family where the polarisation is due to a spin spiral developed in the plane. [@RMnO3_1; @RMnO3_2] Such a compound, however, does not have any net magnetization (M). The conical magnetic order in $ACr_2O_4$, in contrast, adds an extra magnetization along the cone axis and makes these compounds much more interesting. Yamasaki *et al. *[@Pol-org] reported the signature of polarisation in $CoCr_2O_4$ below $T_s$=27 K. They also showed how the polarisation can be controlled using a magnetic field. Neutron scattering experiments on $ACr_2O_4$ \[$A$=$Co$, $Mn$\] were first performed by Tomiyasu *et al. *,[@Tomiyasu] who estimated the cone angle by analyzing the experimental intensity of satellite reflections. They also proposed a unique concept of “Weak Magnetic Geometrical Frustration" (MGF) in spinel $AB_2O_4$, where both the $A$ and $B$ cations are magnetic. Such weak MGF is responsible for the short-range conical spiral. Using neutron diffraction, Chang *et al. *[@incommensurate] predicted a transformation from incommensurate conical spin order to commensurate order in $CoCr_2O_4$ at the lowest temperatures. A complete understanding of such a transformation is lacking in the literature. The spin-current model[@spin-current] is one simple approach which provides some conceptual advancement regarding the incommensurate conical spin order; however, a firm understanding of the incommensurate-to-commensurate transformation requires a better model.
These classes of compounds show a few other phenomena such as negative magnetization, magnetic compensation and sign-reversible exchange bias at a critical temperature called the magnetic compensation temperature ($T_{comp}$). [@padam-Fe; @ram-Mn; @Junmoni-Mn-Fe; @Junmoni-Ni-Al; @Junmoni-Ni-Fe1; @Junmoni-Ni-Fe2] This is the temperature at which the different sublattice magnetizations cancel each other to fully compensate the net magnetization (M=0). Interestingly, the magnetization changes sign as one goes beyond this temperature. Depending on the substituting element, in some cases magnetic compensation is associated with exchange bias phenomena. Such unique phenomena are very useful for magnetic storage devices, which require a fixed reference magnetization direction in space for the switching magnetic field. Compounds having exchange bias are highly suitable for such devices because their hysteresis loop is not centred at M=0, H=0, but rather shifted towards the positive or negative side. Although the phenomena of exchange bias are well understood in various compounds including FM/AFM layered compounds,[@EB] the same is not true for the substituted spinel compounds which crystallize in a single phase. A consistent microscopic picture of the magnetic ground state is therefore desired. Using the generalized Luttinger-Tisza[@GLT] method, a conical ground state can be found theoretically,[@LKDM] by defining a parameter $u$ $$u=\frac{4J_{BB}S_B}{3J_{AB}S_A}$$ Here $S_A$ and $S_B$ are the A-site (tetrahedral) and B-site (octahedral) magnetic spins, while $J_{AB}$ and $J_{BB}$ represent the exchange interactions between first-nearest-neighbor $A$-$B$ and $B$-$B$ pairs respectively. According to the theory, a stable conical spin order is possible only if $u$ lies between $0.88$ and $1.298$. Yao *et al. *[@Yao-2009; @Yao-2009-2; @Yao-2010; @Yao-2011; @Yao-2013; @Yao-2017] have studied the conical spin order by performing simulations on a 3-dimensional spinel lattice.
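The LKDM stability criterion above is easy to evaluate for a candidate parameter set; a minimal sketch with purely hypothetical couplings and spins (not values from this work):

```python
def lkdm_u(J_BB, J_AB, S_A, S_B):
    """Stability parameter of the generalized Luttinger-Tisza (LKDM) theory:
    u = 4 J_BB S_B / (3 J_AB S_A)."""
    return 4.0 * J_BB * S_B / (3.0 * J_AB * S_A)

def conical_order_stable(u):
    """LKDM criterion quoted in the text: stable conical order for 0.88 < u < 1.298."""
    return 0.88 < u < 1.298

# Hypothetical couplings (meV) and spins, purely for illustration
u = lkdm_u(J_BB=-0.9, J_AB=-1.0, S_A=1.5, S_B=1.5)
assert abs(u - 1.2) < 1e-12          # inside the stability window
assert conical_order_stable(u)
assert not conical_order_stable(1.5)  # outside the window: no stable cone
```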
They show that $\hat{J}_{BB}$ and $\hat{J}_{AA}$ enhance the spin frustration, and that single-ion anisotropy helps to stabilize the cone state. In this article, the conical spin order of $ACr_2O_4$ ($A$=$Mn$, $Fe$, $Co$ and $Ni$) along with $CoMn_2O_4$ and $CoFe_2O_4$ is studied using a combined Density Functional Theory (DFT) and Monte Carlo based Metropolis algorithm. The latter two compounds do not show conical spin order. For these six compounds, we have calculated the exchange interactions using self-consistent Density Functional Theory. We have then varied the interaction parameters and found a new set of exchange interactions which best fit the experimental magnetization and hysteresis curves. For comparison's sake, the investigation of magnetic ordering, magnetization, hysteresis curves, and the ground-state spin order was carried out using both sets of exchange interactions. We have also simulated the magnetic compensation and exchange bias behavior around $T_{comp}$. We found a set of effective exchange interactions for $CoCr_2O_4$ for which its magnetization is similar to that of $Fe$-substituted $CoCr_2O_4$, showing a magnetic compensation effect followed by a sign reversal of M. Using these sets of exchange interactions, we are able to predict the sign-reversible exchange bias around $T_{comp}$, as observed experimentally. [@padam-Fe]

| System | | $\hat{J}_{BB}$ (meV) | $\hat{J}_{AB}$ (meV) | $\hat{J}_{AA}$ (meV) | $u$ | $M_A$ ($\mu_B$) | $M_B$ ($\mu_B$) | $M_A$ expt. ($\mu_B$) | $M_B$ expt. ($\mu_B$) | $T_c$ (K) | $T_c$ expt. (K) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| $MnCr_2O_4$ | set 1 | -1.74 | -1.28 | -1.58 | 1.81 | -4.50 | 3.01 | -5 | 3 | 40 | |
| | set 2 | -0.97 | -0.85 | 0.00 | 1.52 | | | | | 42 | |
| $FeCr_2O_4$ | set 1 | -2.88 | -2.83
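The Monte Carlo procedure referred to above is the standard Metropolis algorithm. A minimal single-sweep sketch for classical unit-vector spins with a simplified nearest-neighbor exchange $E=-J\sum_{\langle ij\rangle}\vec S_i\cdot\vec S_j$ (the full Hamiltonian with separate $A$-$B$, $B$-$B$, $A$-$A$ couplings and anisotropy extends the same update rule):

```python
import math, random

def random_unit_vector():
    """Uniform point on the unit sphere (Marsaglia's method)."""
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        s = x * x + y * y
        if s < 1:
            f = 2 * math.sqrt(1 - s)
            return (x * f, y * f, 1 - 2 * s)

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def metropolis_step(spins, neighbors, J, T):
    """One Metropolis sweep: propose a fresh random direction for each spin
    and accept with probability min(1, exp(-dE/T)), k_B = 1."""
    for i in range(len(spins)):
        new = random_unit_vector()
        dE = -J * sum(dot(new, spins[j]) - dot(spins[i], spins[j])
                      for j in neighbors[i])
        if dE <= 0 or random.random() < math.exp(-dE / T):
            spins[i] = new

# Tiny illustrative chain of 8 spins
random.seed(0)
spins = [random_unit_vector() for _ in range(8)]
chain = {i: [j for j in (i - 1, i + 1) if 0 <= j < 8] for i in range(8)}
for _ in range(10):
    metropolis_step(spins, chain, J=1.0, T=0.3)
assert all(abs(dot(s, s) - 1.0) < 1e-9 for s in spins)  # spins stay normalized
```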
--- abstract: 'We here discuss the emergence of Quasi Stationary States (QSS), a universal feature of systems with long-range interactions. With reference to the Hamiltonian Mean Field (HMF) model, numerical simulations are performed based on both the original $N$-body setting and the continuum Vlasov model which is supposed to hold in the thermodynamic limit. A detailed comparison unambiguously demonstrates that the Vlasov-wave system provides the correct framework to address the study of QSS. Further, analytical calculations based on Lynden-Bell’s theory of violent relaxation are shown to result in accurate predictions. Finally, in specific regions of parameters space, Vlasov numerical solutions are shown to be affected by small scale fluctuations, a finding that points to the need for novel schemes able to account for particles correlations.' author: - | Andrea Antoniazzi$^{1}$[^1], Francesco Califano$^ {2}$[^2], Duccio Fanelli$^{1,3}$[^3], Stefano Ruffo$^{1}$[^4] title: 'Exploring the thermodynamic limit of Hamiltonian models: convergence to the Vlasov equation.' --- The Vlasov equation constitutes a universal theoretical framework and plays a role of paramount importance in many branches of applied and fundamental physics. Structure formation in the universe is for instance a rich and fascinating problem of classical physics: The fossil radiation that permeates the cosmos is a relic of microfluctuations in the matter created by the Big Bang, and these small perturbations are believed to have evolved via gravitational instability to the pronounced agglomerations that we see nowadays on the galaxy cluster scale. Within this scenario, gravity is hence the engine of growth and the Vlasov equation governs the dynamics of the non baryonic “dark matter" [@peebles].
Furthermore, the continuous Vlasov description is the reference model for several space and laboratory plasma applications, including many interesting regimes, among which the interpretation of coherent electrostatic structures observed in plasmas far from thermodynamic equilibrium. The Vlasov equation is obtained as the mean–field limit of the $N$–body Liouville equation, assuming that each particle interacts with an average field generated by all plasma particles (i.e. the mean electromagnetic field determined by the Poisson or Maxwell equations where the charge and current densities are calculated from the particle distribution function) while inter–particle correlations are completely neglected. Numerical simulations are presently one of the most powerful resources to address the study of the Vlasov equation. In the plasma context, the Lagrangian Particle-In-Cell approach is by far the most popular, while Eulerian Vlasov codes are particularly suited for analyzing specific model problems, due to the associated low noise level which is secured even in the non–linear regime [@mangeney]. However, any numerical scheme designed to integrate the continuous Vlasov system involves a discretization over a finite mesh. This is indeed an unavoidable step which in turn affects numerical accuracy. A numerical (diffusive and dispersive) characteristic length is in fact introduced, being at best comparable with the grid mesh size: as soon as the latter matches the typical length scale relative to the (dynamically generated) fluctuations, a violation of the continuous Hamiltonian character of the equations occurs (see Refs. [@califano]). It is important to emphasize that even if such [*non Vlasov*]{} effects are strongly localized (in phase space), the induced large scale topological changes will eventually affect the system globally.
Therefore, aiming at clarifying the problem of the validity of Vlasov numerical models, it is crucial to compare a continuous Vlasov, but numerically discretized, approach to a homologous N-body model. The Vlasov equation has also been invoked as a reference model in many interesting one-dimensional problems, and recurrently applied to the study of wave-particle interacting systems. The Hamiltonian Mean Field (HMF) model [@antoni-95], describing the coupled motion of $N$ rotators, is in particular assimilated to a Vlasov dynamics in the thermodynamic limit on the basis of rigorous results [@BraunHepp]. The HMF model was historically introduced as representing gravitational and charged sheet models and is quite extensively analyzed as a paradigmatic representative of the broader class of systems with long-range interactions [@Houches02]. A peculiar feature of the HMF model, shared also by other long-range interacting systems, is the presence of [*Quasi Stationary States*]{} (QSS). During time evolution, the system gets trapped in such states, which are characterized by non-Gaussian velocity distributions, before relaxing to the final Boltzmann-Gibbs equilibrium [@ruffo_rapisarda]. An early interpretation of these states was proposed in terms of Tsallis statistics [@Tsallis]. This approach has later been criticized in [@Yamaguchi], where QSSs were shown to correspond to stationary stable solutions of the Vlasov equation, for a particular choice of the initial condition. More recently, an approximate analytical theory, based on the Vlasov equation, which derives the QSSs of the HMF model using a maximum entropy principle, was developed in  [@antoniazziPRL]. This theory is inspired by the pioneering work of Lynden-Bell  [@LyndenBell68] and relies on previous work on 2D turbulence by Chavanis [@chava2D]. However, the underlying Vlasov ansatz has not been directly examined and has recently been debated [@EPN]. Here we address this issue by performing direct numerical simulations of the Vlasov equation associated with the HMF model.
By comparing these results to both direct N-body simulations and analytical predictions, we shall reach the following conclusions: (i) the Vlasov formulation is indeed ruling the dynamics of the QSS; (ii) the proposed analytical treatment of the Vlasov equation is surprisingly accurate, despite the approximations involved in the derivation; (iii) Vlasov simulations are to be handled with extreme caution when exploring specific regions of the parameters space. The HMF model describes $N$ rotators, each specified by an angle $\theta_i$ and its conjugate momentum $p_i$. To monitor the evolution of the system, it is customary to introduce the magnetization, a macroscopic order parameter defined as $M=|{\mathbf M}|=|\sum {\mathbf m_i}| /N$, where ${\mathbf m_i}=(\cos \theta_i,\sin \theta_i)$ stands for the microscopic magnetization vector. As anticipated, the system gets trapped in QSSs, i.e. non-equilibrium dynamical regimes whose lifetime diverges when increasing the number of particles $N$. Importantly, when performing the mean-field limit ($N \rightarrow \infty$) [*before*]{} the infinite time limit, the system cannot relax towards Boltzmann–Gibbs equilibrium and remains permanently confined in the intermediate QSSs. As mentioned above, this phenomenology is widely observed for systems with long-range interactions, including galaxy dynamics [@Padmanabhan], free electron lasers [@Barre], 2D electron plasmas [@kawahara].
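The order parameter $M$ defined above is straightforward to evaluate for a given configuration of rotators; a minimal sketch:

```python
import math

def magnetization(thetas):
    """M = |sum_i m_i| / N with m_i = (cos theta_i, sin theta_i)."""
    n = len(thetas)
    mx = sum(math.cos(t) for t in thetas) / n
    my = sum(math.sin(t) for t in thetas) / n
    return math.hypot(mx, my)

# Fully aligned rotators -> M = 1
assert abs(magnetization([0.3] * 100) - 1.0) < 1e-12
# Rotators spread evenly over the circle -> M = 0 (homogeneous state)
assert magnetization([2 * math.pi * k / 100 for k in range(100)]) < 1e-12
```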
In the $N \to \infty$ limit the discrete HMF dynamics reduces to the Vlasov equation $$\partial f / \partial t + p \, \partial f / \partial \theta \,\, - (dV / d \theta ) \, \partial f / \partial p = 0 \, , \label{eq:VlasovHMF}$$ where $f(\theta,p,t)$ is the microscopic one-particle distribution function and $$\begin{aligned} V(\theta)[f] &=& 1 - M_x[f] \cos(\theta) - M_y[f] \sin(\theta) ~, \\ M_x[f] &=& \int_{-\pi}^{\pi} \int_{-\infty}^{\infty} f(\theta,p,t) \, \cos{\theta} {\mathrm d}\theta {\mathrm d}p\quad , \\ M_y[f] &=& \int_{-\pi}^{\pi} \int_{-\infty}^{\infty} f(\theta,p,t) \, \sin{\theta}{\mathrm d}\theta {\mathrm d}p\quad . \label{eq:pot_magn}\end{aligned}$$ The specific energy $h[f]=\int \int (p^2/{2}) f(\theta,p,t) {\mathrm d}\theta {\mathrm d}p - ({M_x^2+M_y^2 - 1})/{2}$ and momentum $P[f]=\int \int p f(\theta,p,t) {\mathrm d}\theta {\mathrm d}p$ functionals are conserved quantities. Homogeneous states are characterized by $M = 0$. Rigorous mathematical results [@BraunHepp] demonstrate that, indeed, the Vlasov framework applies
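The magnetization functionals $M_x[f]$ and $M_y[f]$ above can be evaluated by simple quadrature. The sketch below (grid sizes and the water-bag form of $f$ are arbitrary illustrative choices) verifies that a $\theta$-independent distribution gives $M_x = M_y = 0$, i.e. a homogeneous state:

```python
import math

def moments(f, ntheta=64, np_=64, pmax=3.0):
    """Midpoint-rule evaluation of M_x[f], M_y[f] over
    [-pi, pi] x [-pmax, pmax]."""
    dth = 2 * math.pi / ntheta
    dp = 2 * pmax / np_
    mx = my = 0.0
    for i in range(ntheta):
        th = -math.pi + (i + 0.5) * dth
        for j in range(np_):
            p = -pmax + (j + 0.5) * dp
            w = f(th, p) * dth * dp
            mx += w * math.cos(th)
            my += w * math.sin(th)
    return mx, my

# Homogeneous water-bag: f independent of theta, uniform in |p| <= pbar
pbar = 1.0
f0 = lambda th, p: 1.0 / (4 * math.pi * pbar) if abs(p) <= pbar else 0.0
mx, my = moments(f0)
assert abs(mx) < 1e-10 and abs(my) < 1e-10   # M = 0 as expected
```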
--- abstract: - | This paper investigates the generalization of Principal Component Analysis (PCA) to Riemannian manifolds. We first propose a new and general type of family of subspaces in manifolds that we call barycentric subspaces. They are implicitly defined as the locus of points which are weighted means of $k+1$ reference points. As this definition relies on points and not on tangent vectors, it can also be extended to geodesic spaces which are not Riemannian. For instance, in stratified spaces, it naturally allows principal subspaces that span several strata, which is impossible in previous generalizations of PCA. We show that barycentric subspaces locally define a submanifold of dimension $k$ which generalizes geodesic subspaces. Second, we rephrase PCA in Euclidean spaces as an optimization on flags of linear subspaces (a hierarchy of properly embedded linear subspaces of increasing dimension). We show that the Euclidean PCA minimizes the Accumulated Unexplained Variances by all the subspaces of the flag (AUV). Barycentric subspaces are naturally nested, allowing the construction of hierarchically nested subspaces. Optimizing the AUV criterion to optimally approximate data points with flags of affine spans in Riemannian manifolds leads to a particularly appealing generalization of PCA on manifolds called Barycentric Subspaces Analysis (BSA). - 'This supplementary material details the notions of Riemannian geometry that are underlying the paper [*Barycentric Subspace Analysis on Manifolds*]{}. In particular, it investigates the Hessian of the Riemannian square distance whose definiteness controls the local regularity of the barycentric subspaces. This is exemplified on the sphere and the hyperbolic space.' - 'This supplementary material details in length the proof that the flag of linear subspaces found by PCA optimizes the Accumulated Unexplained Variances (AUV) criterion in a Euclidean space.'
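The claim that Euclidean PCA minimizes the Accumulated Unexplained Variance (AUV) over flags can be illustrated numerically. In the sketch below (synthetic data; the helper `auv` is our own naming, not the paper's), the AUV of the PCA flag is compared against a random orthonormal flag:

```python
import numpy as np

def auv(X, basis):
    """Accumulated Unexplained Variance: sum over k of the squared residuals
    of X projected on the span of the first k columns of `basis`
    (X centered, rows = data points, columns of `basis` orthonormal)."""
    total = 0.0
    for k in range(1, basis.shape[1] + 1):
        B = basis[:, :k]
        resid = X - X @ B @ B.T
        total += float((resid ** 2).sum())
    return total

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5)) * [3.0, 2.0, 1.0, 0.5, 0.2]
X -= X.mean(axis=0)                                 # center the data

_, _, Vt = np.linalg.svd(X, full_matrices=False)
pca_flag = Vt.T[:, :3]                              # top singular directions
Q, _ = np.linalg.qr(rng.standard_normal((5, 3)))    # a random orthonormal flag

assert auv(X, pca_flag) <= auv(X, Q)   # PCA flag minimizes the AUV criterion
```

Since the top-$k$ principal subspace minimizes the unexplained variance for every $k$ simultaneously, the PCA flag minimizes the accumulated sum as well, which is what the assertion checks on one random competitor.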
address: - | Asclepios team, Inria Sophia-Antipolis Méditerrannée\ 2004 Route des Lucioles, BP93\ F-06902 Sophia-Antipolis Cedex, France\ - | Asclepios team, Inria Sophia Antipolis\ 2004 Route des Lucioles, BP93\ F-06902 Sophia-Antipolis Cedex, France - | Asclepios team, Inria Sophia-Antipolis Méditerranée\ 2004 Route des Lucioles, BP93\ F-06902 Sophia-Antipolis Cedex, France\ author: - - - title: - Barycentric Subspace Analysis on Manifolds - | Supplementary Materials A:\ Hessian of the Riemannian Squared Distance - 'Supplementary Materials B: Euclidean PCA as an optimization in the flag space' --- Introduction ============ In a Euclidean space, the principal $k$-dimensional affine subspace of the Principal Component Analysis (PCA) procedure is equivalently defined by minimizing the variance of the residuals (the projection of the data point to the subspace) or by maximizing the explained variance within that affine subspace. This double interpretation is available through Pythagoras’ theorem, which does not hold in more general manifolds. Generalizing PCA to manifolds first requires the definition of the equivalent of affine subspaces in manifolds. For the zero-dimensional subspace, an intrinsic generalization of the mean on manifolds naturally comes into mind: the Fréchet mean is the set of global minima of the variance, as defined by [@frechet48] in general metric spaces. For simply connected Riemannian manifolds of non-positive curvature, the minimum is unique and is called the Riemannian center of mass. [@karcher77; @buser_gromovs_1981] first established conditions on the support of the distribution to ensure the uniqueness of a local minimum in general Riemannian manifolds. This is now generally called the Karcher mean, although there is a dispute on the naming [@karcher_riemannian_2014].
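The Karcher mean can be computed by the classical fixed-point (gradient-descent) iteration on the exponential and log maps; a minimal sketch on the unit sphere $S^2$ (function names are ours, chosen for illustration):

```python
import numpy as np

def log_map(x, p):
    """Sphere log map: tangent vector at x whose geodesic reaches p."""
    c = np.clip(x @ p, -1.0, 1.0)
    v = p - c * x
    nv = np.linalg.norm(v)
    return np.zeros_like(x) if nv < 1e-12 else np.arccos(c) * v / nv

def exp_map(x, v):
    """Sphere exponential map at x of tangent vector v."""
    nv = np.linalg.norm(v)
    return x if nv < 1e-12 else np.cos(nv) * x + np.sin(nv) * v / nv

def frechet_mean_sphere(points, iters=100):
    """Karcher fixed-point iteration: x <- exp_x(mean_i log_x(p_i))."""
    x = points[0]
    for _ in range(iters):
        g = sum(log_map(x, p) for p in points) / len(points)
        x = exp_map(x, g)
    return x

# Four points placed symmetrically around the north pole -> mean at the pole
pts = [np.array(p, float) for p in
       [(0.1, 0, 1), (-0.1, 0, 1), (0, 0.1, 1), (0, -0.1, 1)]]
pts = [p / np.linalg.norm(p) for p in pts]
assert np.allclose(frechet_mean_sphere(pts), [0, 0, 1], atol=1e-6)
```

For data concentrated in a small geodesic ball, as required by the uniqueness conditions cited above, this iteration converges quickly to the unique local minimum of the variance.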
From a statistical point of view, [@Bhattacharya:2003; @Bhattacharya:2005] have studied in depth the asymptotic properties of the empirical Fréchet / Karcher means. The one-dimensional component can naturally be a geodesic passing through the mean point. Higher-order components are more difficult to define. The simplest generalization is tangent PCA (tPCA), which amounts to unfolding the whole distribution in the tangent space at the mean, and computing the principal components of the covariance matrix in the tangent space. The method is thus based on the maximization of the explained variance, which is consistent with the entropy maximization definition of a Gaussian on a manifold proposed by [@pennec:inria-00614994]. tPCA is actually implicitly used in most statistical works on shape spaces and Riemannian manifolds because of its simplicity and efficiency. However, if tPCA is good for analyzing data which are sufficiently centered around a central value (unimodal or Gaussian-like data), it is often not sufficient for distributions which are multimodal or supported on large compact subspaces (e.g. circles and spheres). Instead of an analysis of the covariance matrix, [@fletcher_principal_2004] proposed the minimization of squared distances to subspaces which are totally geodesic at a point, a procedure coined Principal Geodesic Analysis (PGA). These Geodesic Subspaces (GS) are spanned by the geodesics going through a point with tangent vector restricted to a linear subspace of the tangent space. However, the least-squares procedure is computationally expensive, so that the authors approximated it in practice with tPCA, which led to confusions between tPCA and PGA. The exact least-squares optimization of PGA was later addressed in [@sommer_optimization_2013]. PGA allows building a flag (sequences of embedded subspaces) of principal geodesic subspaces consistent with a forward component analysis approach.
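tPCA as described above is a two-step procedure: unfold the data through the log map, then diagonalize the tangent-space covariance. A minimal sketch on $S^2$ (assuming, for simplicity, that the base point is already the Fréchet mean):

```python
import numpy as np

def log_map(x, p):
    """Sphere log map (tangent vector at x pointing toward p)."""
    c = np.clip(x @ p, -1.0, 1.0)
    v = p - c * x
    nv = np.linalg.norm(v)
    return np.zeros_like(x) if nv < 1e-12 else np.arccos(c) * v / nv

def tangent_pca(points, base):
    """tPCA: unfold the data into the tangent space at `base` via the log
    map, then diagonalize the tangent-space covariance matrix."""
    V = np.array([log_map(base, p) for p in points])
    cov = V.T @ V / len(points)
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1]
    return evals[order], evecs[:, order]

# Points on a great circle through the pole: a single principal direction
base = np.array([0.0, 0.0, 1.0])
ts = [0.1, -0.1, 0.2, -0.2]
pts = [np.array([np.sin(t), 0.0, np.cos(t)]) for t in ts]
w, U = tangent_pca(pts, base)
assert abs(w[0] - 0.025) < 1e-6        # top eigenvalue = mean of t^2
assert abs(abs(U[0, 0]) - 1.0) < 1e-6  # first mode lies along the circle
```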
Components are built iteratively from the mean point by selecting the tangent direction that optimally reduces the square distance of data points to the geodesic subspace. In this procedure, the mean always belongs to geodesic subspaces even when it is outside of the distribution support. To alleviate this problem, [@huckemann_principal_2006], and later [@huckemann_intrinsic_2010], proposed to start at the first order component directly with the geodesic best fitting the data, which is not necessarily going through the mean. The second principal geodesic is chosen orthogonally to the first one, and higher order components are added orthogonally at the crossing point of the first two components. The method was named Geodesic PCA (GPCA). Further relaxing the assumption that second and higher order components should cross at a single point, [@sommer_horizontal_2013] proposed a parallel transport of the second direction along the first principal geodesic to define the second coordinates, and iteratively define higher order coordinates through horizontal development along the previous modes. These are all intrinsically forward methods that build successively larger approximation spaces for the data. A notable exception is the concept of Principal Nested Spheres (PNS), proposed by [@jung_analysis_2012] in the context of planar landmarks shape spaces. A backward analysis approach determines a decreasing family of nested subspheres by slicing a higher dimensional sphere with affine hyperplanes. In this process, the nested subspheres are not of radius one, unless the hyperplanes pass through the origin. [@damon_backwards_2013] have recently generalized this approach to manifolds with the help of a “nested sequence of relations”. However, explicit constructions of such nested subspaces in general manifolds are still needed.
We first propose in this paper new types of family of subspaces in manifolds: barycentric subspaces generalize geodesic subspaces and can naturally be nested, allowing the construction of inductive forward or backward nested subspaces. We then rephrase PCA in Euclidean spaces as an optimization on flags of linear subspaces (a hierarchy of properly embedded linear subspaces of increasing dimension). To that end, we propose an extension of the unexplained variance criterion that generalizes nicely to flags of barycentric subspaces in Riemannian manifolds. This leads to a particularly appealing generalization of PCA on manifolds: Barycentric Subspaces Analysis (BSA). Paper Organization {#paper-organization .unnumbered} ------------------ We recall in Section \[Sec:Geom\] the notions and notations needed to define statistics on Riemannian manifolds, and we introduce the two running example manifolds of this paper: $n$-dimensional spheres and hyperbolic spaces. Exponential Barycentric Subspaces (EBS) are then defined in Section \[Sec:Bary\] as the locus of weighted exponential barycenters of $k+1$ affinely independent reference points. The closure of the EBS in the original manifold is called affine span (this differs from the preliminary definition of [@pennec:hal-01164463]). Equations of the EBS and affine span are exemplified on our running examples: the affine span of $k+1$ affinely independent reference points is the great subsphere (resp. sub-hyperbola) that contains the reference points. In fact, any other tuple of points of that subspace generates the same affine span, which is also a geodesic subspace. This coincidence is due to the very high symmetry of the constant curvature spaces. Section \[Sec:KBS\] defines the Karcher (resp. Fréchet) barycentric subspaces (KBS, resp. FBS) as the
--- abstract: 'We present a general framework to describe the simultaneous para-to-ferromagnetic and semiconductor-to-metal transition in electron-doped EuO. The theory correctly describes detailed experimental features of the conductivity and of the magnetization, in particular the doping dependence of the Curie temperature. The existence of correlation-induced local moments on the impurity sites is essential for this description.' author: --- Pure EuO is a ferromagnetic (FM) semiconductor with a Curie temperature of $T_C=69~{\rm K}$. Upon electron doping, either by O defects or by Gd impurities, this phase transition turns into a simultaneous ferromagnetic and semiconductor-metal (SM) transition with nearly 100 % of the itinerant charge carriers polarized and a sharp resistivity drop of 8 to 13 orders of magnitude, depending on sample quality [@oliver1; @oliver2; @penney; @steeneken]. Concomitant with this transition is a huge colossal magnetoresistance (CMR) effect [@shapira], much larger than in the intensely studied manganates [@tokura]. These extreme properties make electron-doped EuO interesting for spintronics applications. Known since the 1970s, these features have therefore recently stimulated more systematic experimental studies with modern techniques and improved sample quality [@steeneken; @ott; @schmehl] as well as theoretical calculations [@schiller; @sinjukow]. In pure EuO the FM ordering is driven by the Heisenberg exchange coupling between the localized Eu 4$f$ moments with spin $S_f=7/2$ [@lee]. Upon electron doping, above $T_C$, the extra electrons are bound in defect levels situated in the semiconducting gap, and the transition to a FM metal occurs when the majority states of the spin-split conduction band shift downward to overlap with the defect levels. Although this scenario is widely accepted, several questions of fundamental as well as applicational relevance have remained poorly understood. (1) What is the microscopic mechanism coupling the magnetic and the SM transition?
(2) What is the order of the transition? While the magnetic ordering of the 4$f$ system should clearly be of 2nd order, the metallic transition requires a [*finite*]{} shift of the conduction band and, hence, seems to favor a 1st order transition. (3) Why does Gd doping enhance the Curie temperature while O defects do not? While in the Eu-rich compound EuO$_{1-x}$ a systematic $T_C$ increase due to the O defects (i.e. missing O atoms) is not observed experimentally [@oliver1; @oliver2], a minute Gd doping concentration significantly enhances $T_C$ [@matsumoto; @ott]. An O defect in EuO$_{1-x}$ essentially binds the two excess electrons from the extra Eu 6s orbital and, therefore, should not carry a magnetic moment. As shown theoretically in Ref. [@sinjukow], the presence of O defects with two-fold electron occupancy does not enhance $T_C$, in agreement with experiments [@oliver1; @oliver2]. In the present work we focus on the Gd-doped system Eu$_{1-y}$Gd$_y$O and calculate the temperature and doping dependent magnetization and resistivity from a microscopic model. We find that the key feature for obtaining a $T_C$ enhancement is that the impurities not only donate electrons but also carry a local magnetic moment in the paramagnetic phase. [*Model.*]{} — A Gd atom substituted for Eu does not alter the $S_f=7/2$ local moment in the Eu Heisenberg lattice but donates one dopant electron, which in the insulating high-temperature phase is bound in the Gd 5d level located in the gap. Therefore, the Gd impurities are Anderson impurities with a local level $E_d$ below the chemical potential $\mu$ and a [*strong*]{} on-site Coulomb repulsion $U>\mu - E_d$ which restricts their electron occupation essentially to one. The hybridization $V$ with the conduction band is taken to be site-diagonal because of the localized Gd 5d orbitals.
The Hamiltonian for the Eu$_{1-y}$Gd$_y$O system then reads, $$\begin{aligned} \label{hamiltonian} H&=&\sum_{{\bf k}\sigma}\varepsilon_{{\bf k}} c_{{\bf k}\sigma}^{\dagger}c_{{\bf k}\sigma}^{\phantom{\dagger}}+H_{cd}+H_{cf}\\ \label{Hcd} H_{cd}&=&E_{d} \sum_{i=1 \dots N_I,\sigma} d_{i\sigma}^{\dagger}d_{i\sigma}^{\phantom{\dagger}} + V \sum_{i=1 \dots N_I,\sigma} (c_{i\sigma}^{\dagger} d_{i\sigma}^{\phantom{\dagger}} + H.c.)\nonumber\\ &+& U \sum_{i=1 \dots N_I} d_{i\uparrow}^{\dagger} d_{i\uparrow}^{\phantom{\dagger}} d_{i\downarrow}^{\dagger} d_{i\downarrow}^{\phantom{\dagger}} \\ \label{Hcf} H_{cf}&=&- \sum_{i,j} J_{ij} \vec S_{i}\cdot\vec S_{j} - J_{cf}\sum_{i}\vec \sigma_{i}\cdot\vec S_{i} \ ,\end{aligned}$$ where the first term in Eq. (\[hamiltonian\]) describes the conduction electrons with spin $\sigma$. The Eu 4$f$ moments $\vec S_i$ on the lattice sites $i=1,\dots, N$ are described by the Heisenberg term of $H_{cf}$, with FM nearest and next-nearest neighbor couplings $J_{ij}$ and an exchange coupling $J_{cf}$ to the conduction electron spin operators at site $i$, $\vec\sigma_{i}=(1/2)\sum_{\sigma\sigma'} c_{i\sigma}^{\dagger}\vec\tau_{\sigma\sigma'}c_{i\sigma'}^{\phantom{\dagger}}$, with $c_{i\sigma}=\sum_{\bf k} \exp(i{\bf k}\cdot{\bf x}_i)\,c_{{\bf k}\sigma}$ and $\vec \tau_{\sigma\sigma'}$ the vector of Pauli matrices. The Gd impurities at the random positions $i=1, \dots, N_I$ are described by $H_{cd}$. For the numerical evaluations we take $U\to\infty$ for simplicity. For the present purpose of understanding the general form of the magnetization $m(T)$ and the systematic doping dependence of $T_C$, it is sufficient to treat the 4$f$ Heisenberg lattice, $H_{cf}$, at the mean-field level, although recent studies have shown that Coulomb correlations in the conduction band can soften the spin wave spectrum in similar systems [@golosov; @perakis].
The effect of the latter on $m(T)$ can be absorbed in the effective mean field coupling of the 4$f$ system, $J_{4f} \equiv \sum_{j}J_{ij}$. We therefore choose $J_{4f}$ such that for pure EuO it yields the experimental value of $T_C=69~{\rm K}$ [@oliver1; @oliver2; @shapira; @steeneken]. For simplicity, we do not consider a direct coupling $J_{df}$ between the 4$f$ and the impurity spins, since this would essentially renormalize $J_{cf}$ only. The indirect RKKY coupling will also be neglected, since for the small conduction band fillings relevant here it is FM, like $J_{ij}$, but much smaller than $J_{ij}$. In the evaluations we use a semi-elliptical bare conduction band density of states (DOS) with a half width $D_0=8\, {\rm eV}$ (consistent with experiment [@steeneken]), centered around $\Delta _0\approx 1.05\, D_0$ above the (bare) defect level $E_d$. The other parameters are taken as $J_{4f} \equiv \sum_{j}J_{ij} = 7\cdot 10^{-5} D_{0}$, $J_{cf}=0.05 D_{0}$, $E_{d}=-0.4 D_{0}$, and $\Gamma=\pi V^{2}=0.05 D_{0}^{2}$, where $J_{cf}\gg J_{4f}$ because $J_{4f}$ involves a non-local matrix element. [*Selfconsistent theory. *]{} — The averaging over the random defect positions is done within the single-site $T$-matrix approximation, sufficient for dilute impurities. This yields for the retarded conduction electron Green's function $G_{c\sigma}({\bf k},\omega)$, in terms of its selfenergy $\Sigma _{c\sigma}(\omega)$, $$\begin{aligned} &&G_{c\sigma}({\bf k},\omega)=\left[\omega+\mu-\varepsilon_{\bf k}-\Sigma_{c\sigma}(\omega)\right]^{-1} \label{gc}\\ &&\Sigma_{c\sigma}(\omega)=n_{I}\, T_{\sigma}(\omega) \ , \end{aligned}$$ where $n_I=N_I/N$ is the impurity concentration and $T_{\sigma}(\omega)$ the $T$-matrix of a single Anderson impurity.
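To make the mean-field treatment of the 4$f$ lattice concrete: with $H_{cf}$ decoupled at mean-field level, $m(T)$ follows from the self-consistency condition $m = S_f\,B_{S_f}(S_f J\, m/k_B T)$, with $B_S$ the Brillouin function. The following is a minimal numerical sketch in dimensionless units ($k_B = 1$) with an illustrative coupling $J=1$ rather than the parameter set above; the function names are ours, not the paper's.

```python
import numpy as np

def brillouin(S, x):
    # Brillouin function B_S(x); small-x guard avoids division by zero
    x = np.where(np.abs(x) < 1e-9, 1e-9, x)
    a = (2 * S + 1) / (2 * S)
    b = 1 / (2 * S)
    return a / np.tanh(a * x) - b / np.tanh(b * x)

def magnetization(T, S=3.5, J=1.0, tol=1e-10):
    # Solve m = S * B_S(S * J * m / T) by fixed-point iteration (k_B = 1),
    # starting from the saturated value m = S
    m = S
    for _ in range(10000):
        m_new = S * brillouin(S, S * J * m / T)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m
```

Linearizing $B_S$ gives the mean-field Curie temperature $T_C = S_f(S_f+1)J/3$, i.e. $T_C = 5.25$ for these sketch values; the iteration indeed returns a finite $m$ below this temperature and $m \to 0$ above it.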
--- abstract: 'In this contribution, the evaluation of the diversity of the MIMO MMSE receiver is addressed for finite rates, in both flat fading channels and frequency selective fading channels with cyclic prefix. It has been observed recently that, in contrast with the other MIMO receivers, the MMSE receiver has a diversity depending on the aimed finite rate, and that for sufficiently low rates the MMSE receiver reaches the full diversity - that is, the diversity of the ML receiver. This behavior has so far only been partially explained. The purpose of this paper is to provide complete proofs for flat fading MIMO channels, and to improve the partial existing results in frequency selective MIMO channels with cyclic prefix.' author: Andrew M --- The performance of MIMO receivers is classically characterized by the diversity-multiplexing tradeoff (DMT) in the high SNR regime. [@kumar2009asymptotic] showed that the MMSE linear receivers, widely used for their simplicity, exhibit a largely suboptimal DMT in flat fading MIMO channels. Nonetheless, for a finite data rate (i.e. when the rate does not increase with the signal to noise ratio), the MMSE receivers take several diversity values, depending on the aimed rate, as noticed earlier in [@hedayat2005linear], and also in [@hedayat2004outage; @tajer2007diversity] for frequency-selective MIMO channels. In particular they achieve full diversity for sufficiently low data rates, hence their great interest. This behavior was partially explained in [@kumar2009asymptotic; @mehana2010diversity] for flat fading MIMO channels and in [@mehana2011diversity] for frequency-selective MIMO channels. Indeed the proof of the upper bound on the diversity order for the flat fading case given in [@mehana2010diversity] contains a gap, and the approach of [@mehana2010diversity] based on the Specht bound seems to be unsuccessful.
As for MIMO frequency selective channels with cyclic prefix, [@mehana2011diversity] only derives the diversity in the particular case of a number of channel taps equal to the transmission data block length, and claims that this value provides an upper bound in more realistic cases, without however giving its expression explicitly. In this paper we provide a rigorous proof of the diversity for MMSE receivers in flat fading MIMO channels at finite data rates. We also derive the diversity in MIMO frequency selective channels with cyclic prefix at finite data rates, provided the transmission data block length is large enough. Simulations corroborate the derived diversity in the frequency selective case. Problem statement ================= We consider a MIMO system with $M$ transmitting and $N \geq M$ receiving antennas, with coding and ideal interleaving at the transmitter, and with a MMSE linear equalizer at the receiver, followed by a de-interleaver and a decoder (see Fig. \[fig:scheme\]). In the following sections we evaluate the achieved diversity by studying the outage probability, that is, the probability that the capacity does not support the target data rate, in the high SNR regime. We denote by $\rho$ the SNR, by $I$ the capacity and by $R$ the target data rate. We use the notation $\doteq$ for [*exponential equality*]{} [@zheng2003diversity], i.e. $$f(\rho) \doteq \rho^d \Leftrightarrow \lim_{\rho \to \infty} \frac{\log f(\rho)}{\log \rho}= d, \label{eq:exp-equ}$$ and the notations $\dot\leq$ and $\dot\geq$ for exponential inequalities, which are defined similarly. We write $\log$ for the logarithm to base $2$. [image](div_sch3){width="6.5in"} Flat fading MIMO channels ========================= In this section we consider a flat fading MIMO channel. The channel matrix $\H$ has i.i.d. entries distributed as $\mathcal{CN}(0,1)$.
For a rate $R$ such that $\log \frac{M}{m} < \frac{R}{M} < \log \frac{M}{m-1}$, with $m \in \{ 1, \ldots, M \}$, the outage probability verifies $$\PP(I<R) \doteq \rho^{-m(N-M+m)},$$ that is, a diversity of $m(N-M+m)$. Note that for a rate $R < M \log \frac{M}{M-1}$ (i.e. $m=M$) full diversity $MN$ is attained, while for a rate $R > M \log M$ the diversity corresponds to the one derived by the DMT approach. This result was stated by [@mehana2010diversity]. Nevertheless the proof of the outage lower bound in [@mehana2010diversity] overlooks the fact that the event denoted $\mathcal{B}_a$ is not independent of the eigenvalues of $\H^*\H$, which questions the validity of the given proof. We thus provide an alternative proof, based on an approach suggested by the analysis of [@kumar2009asymptotic] in the case where $R = r \log \rho$ with $r > 0$. The capacity $I$ of the considered MIMO MMSE system is given by $$I = \sum_{j=1}^M \log ( 1 + \beta_j),$$ where $\beta_j$ is the SINR for the $j$th stream: $$\beta_j= \frac{1}{\left( \left[ \I + \frac{\rho}{M} \H^*\H \right]^{-1} \right)_{jj} } - 1.$$ We first lower bound $\PP(I<R)$, and then prove that the bound is tight by upper bounding $\PP(I<R)$ with the same bound. Lower bound of the outage probability {#sec:lowB_flat} ------------------------------------- We here assume that $R/M>\log (M/m)$. In order to lower bound $\PP(I<R)$ we need to upper bound the capacity $I$. Using Jensen’s inequality on the concave function $x \mapsto \log x$ yields $$\begin{aligned} I &\leq M \log \Bigg[ \frac{1}{M} \sum_{j=1}^M \left( 1+\beta_j \right) \Bigg] \label{ineq:jensen1} \\ &= M \log \Bigg[ \frac{1}{M} \sum_{j=1}^M \bigg( \left[ \left( \I + \frac{\rho}{M} \H^*\H \right)^{-1} \right]_{jj} \bigg)^{-1} \Bigg]. \label{ineq:logconcave} \end{aligned}$$ We note $\H^*\H= \U^*\Lambda\U$ the SVD of $\H^*\H$ with $\Lambda=\mathrm{diag}(\lambda_1,\ldots,\lambda_M)$, $\lambda_1 \leq \lambda_2 \leq \ldots \leq \lambda_M$.
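These expressions are easy to check numerically. The following sketch (illustrative values $M=3$, $N=4$, $\rho=100$, not tied to any experiment in the paper) draws a Rayleigh channel, computes the per-stream SINRs $\beta_j$ and the capacity $I$, and verifies both the Jensen bound and the expansion of the diagonal entries over the eigenbasis of $\H^*\H$:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, rho = 3, 4, 100.0                      # illustrative sizes and SNR

# Flat Rayleigh fading: i.i.d. CN(0, 1) entries
H = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)

G = np.linalg.inv(np.eye(M) + (rho / M) * H.conj().T @ H)
beta = 1.0 / np.real(np.diag(G)) - 1.0       # MMSE SINR of each stream
I = np.sum(np.log2(1.0 + beta))              # capacity of the MMSE system

# Diagonal identity: [(I + rho/M H*H)^{-1}]_{jj} = sum_k |U_kj|^2 / (1 + rho lam_k / M)
lam, V = np.linalg.eigh(H.conj().T @ H)      # H*H = V diag(lam) V^H, so |U_kj| = |V_jk|
diag_from_eig = (np.abs(V) ** 2 / (1.0 + rho * lam / M)).sum(axis=1)

# Jensen upper bound on the capacity
jensen = M * np.log2(np.mean(1.0 + beta))
```

Here `diag_from_eig` matches `np.diag(G)` to machine precision, and `I <= jensen` always holds by concavity of the logarithm.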
We recall that the $(\lambda_k)_{k=1, \ldots, M}$ are independent of the entries of the matrix $\U$ and that $\U$ is a Haar distributed unitary random matrix, i.e. the probability distribution of $\U$ is invariant by left (or right) multiplication by deterministic unitary matrices. Using this SVD we can write $$\frac{1}{M} \sum_{j=1}^M \bigg( \left[ \left( \I + \frac{\rho}{M} \H^*\H \right)^{-1} \right]_{jj} \bigg)^{-1} = \frac{1}{M} \sum_{j=1}^M \bigg( \sum_{k=1}^M \frac{|\U_{kj}|^2}{1+ \frac{\rho}{M} \lambda_k } \bigg)^{-1}. \label{eq:sumSINR} $$ ### Case m=1 In order to better understand the outage probability behavior, we first consider the
--- bibliography: - 'sources.bib' --- Ji Li, Dept. of Mathematics Thomas Roby, Dept. of Mathematics, University of Connecticut For my friends, who keep me sane. Without Susan’s friendship and guidance or the camaraderie of the members of my cohort, I never would have made it to the point of doing graduate research. Without the encouragement David, Margaret, and Kim gave me, I might never have discovered mathematical research at all. Without my parents’ patient and generous support or the forbearance of my many teachers from childhood into my graduate years, I would never even have discovered the academy. Without Janet’s affection, fellowship, and tolerance of my eccentricities, my progress in these last years—and my spirits—would have been greatly diminished. My life and this work are a gift from and a testament to everyone whom I have known. Thank you all. The theory of $\Gamma$-species is developed to allow species-theoretic study of quotient structures in a categorically rigorous fashion. This new approach is then applied to two graph-enumeration problems which were previously unsolved in the unlabeled case—bipartite blocks and general $k$-trees. Historically, the algebra of generating functions has been a valuable tool in enumerative combinatorics. The theory of combinatorial species uses category theory to justify and systematize this practice, making clear the connections between structural manipulations of classes of objects of interest and algebraic manipulations of their associated generating functions. The notion of ‘quotient’ enumeration (that is, of counting orbits under some group action) has been applied in species-theoretic contexts, but methods for doing so have largely been ad-hoc.
We will contribute a species-compatible way of keeping track of how a group $\Gamma$ acts on the structures of a species $F$, yielding what we term a $\Gamma$-species, which carries the sort of synergy of algebraic and structural data that we expect from species. We will then show that it is possible to extract information about the $\Gamma$-orbits of such a $\Gamma$-species, and we harness this new method to attack several unsolved problems in graph enumeration—in particular, the enumeration of isomorphism classes of nonseparable bipartite graphs and of $k$-trees (that is, of ‘unlabeled’ bipartite blocks and $k$-trees). It is assumed that the reader of this thesis is familiar with the classical theory of groups and has encountered at least the basic vocabulary of category theory and graph theory. Results in these fields which are not original to this thesis will either be referenced from the literature or simply assumed, depending on the degree to which they are part of the standard body of knowledge one acquires when studying those disciplines. In the first chapter, we outline the theory of species, develop several classical methods, and introduce the notion of a $\Gamma$-species. In the second chapter, we apply these techniques to the enumeration of unlabeled vertex-$2$-connected bipartite graphs, a historically open problem. In the final chapter, we apply them to the enumeration of unlabeled $k$-trees, which was previously unsolved for general $k$. Finally, in an appendix we discuss algebraic and computational methods which allow species-theoretical insights to be translated into explicit algorithmic techniques for enumeration. The theory of species {#c:species} ===================== Introduction {#s:introspec} ------------ Many of the most important historical problems in enumerative combinatorics have concerned the difficulty of passing from ‘labeled’ to ‘unlabeled’ structures. In many cases, the algebra of generating functions has proved a powerful tool in analyzing such problems.
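As a toy instance of the labeled-to-unlabeled passage (plain Burnside counting on graphs, not the $\Gamma$-species machinery developed in this thesis), one can count isomorphism classes of simple graphs by averaging, over all vertex permutations, the number of edge subsets fixed by the induced action on edges:

```python
from itertools import combinations, permutations
from math import factorial

def unlabeled_graph_count(n):
    """Count isomorphism classes of simple graphs on n vertices via Burnside:
    average the number of edge subsets fixed by each vertex permutation."""
    edges = [frozenset(e) for e in combinations(range(n), 2)]
    total = 0
    for p in permutations(range(n)):
        seen, cycles = set(), 0
        for e in edges:                      # count cycles of the edge action
            if e in seen:
                continue
            cycles += 1
            cur = e
            while cur not in seen:
                seen.add(cur)
                cur = frozenset(p[v] for v in cur)
        total += 2 ** cycles                 # a fixed subset chooses each cycle whole
    return total // factorial(n)
```

For example, this gives 4 unlabeled graphs on 3 vertices (compared with $2^3 = 8$ labeled ones) and 11 on 4 vertices.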
However, the general theory of the association between natural operations on classes of such structures and the algebra of their generating functions has been largely ad-hoc. André Joyal’s introduction of the theory of combinatorial species in [@joy:species] provided the groundwork to formalize and understand this connection. A full, pedagogical exposition of the theory of species is available in [@bll:species], so we here present only an outline, largely tracking that text. To begin, we wish to formalize the notion of a ‘construction’ of a structure of some given class from a set of ‘labels’, such as the construction of a graph from its vertex set or that of a linear order from its elements. The language of category theory will allow us to capture this behavior succinctly yet with full generality: \[def:species\] Let $\catname{FinBij}$ be the category of finite sets with bijections and $\catname{FinSet}$ be the category of finite sets with set maps. Then a *species* is a functor $F: \catname{FinBij} \to \catname{FinSet}$. For a species $F$ and a finite set $A$, an element of $F \sbrac{A}$ is an *$F$-structure on $A$*. Moreover, for a species $F$ and a bijection $\phi: A \to B$, the bijection $F \sbrac{\phi}: F \sbrac{A} \to F \sbrac{B}$ is the *$F$-transport of $\phi$*. A species functor $F$ simply associates to each set $A$ another set $F \sbrac{A}$ of its $F$-structures; for example, for $\specname{S}$ the species of permutations, we associate to a set $A$ the set $\specname{S} \sbrac{A} = \operatorname{Bij} \pbrac{A}$ of self-bijections (that is, permutations as maps) of $A$. This association of a label set $A$ to the set $F \sbrac{A}$ of all $F$-structures over $A$ is fundamental throughout combinatorics, and functoriality is simply the requirement that we may carry maps on the label set through the construction. \[ex:graphspecies\] Let $\specname{G}$ denote the species of simple graphs labeled at vertices.
Then, for any finite set $A$ of labels, $\specname{G} \sbrac{A}$ is the set of simple graphs with $\abs{A}$ vertices labeled by the elements of $A$. For example, for label set $A = \sbrac{3} = \cbrac{1, 2, 3}$, there are eight graphs in $\specname{G} \sbrac{A}$, since there are $\binom{3}{2} = 3$ possible edges and thus $2^{3} = 8$ ways to choose a subset of those edges: $$\specname{G} \sbrac{\cbrac{1, 2, 3}} = \cbrac{ \begin{array}{c} \begin{aligned} \tikz{ \node[style=graphnode](1) at (90:1) {1}; \node[style=graphnode](2) at (210:1) {2}; \node[style=graphnode](3) at (330:1) {3}; } \end{aligned}, \begin{aligned} \tikz{ \node[style=graphnode](1) at (90:1) {1}; \node[style=graphnode](2) at (210:1) {2}; \node[style=graphnode](3) at (330:1) {3}; \draw(1) to (2); } \end{aligned}, \begin{aligned} \tikz{ \node[style=graphnode](1) at (90:1) {1}; \node[style=graphnode](2) at (210:1) {2}; \node[style=graphnode](3) at (330:1) {3}; \draw(2) to (3); } \end{aligned}, \begin{aligned} \tikz{ \node[style=graphnode](1) at (90:1) {1}; \node[style=graphnode](2) at (210:1) {2}; \node[style=graphnode](3) at (330:1) {3}; \draw(1) to (3); } \end{aligned}, \\ \begin{aligned} \tikz{ \node[style=graphnode](1) at (90:1) {1}; \node[style=graphnode](2) at (210:1) {2}; \node[style=graphnode](3) at (330:1) {3}; \draw(1) to (2); \draw(1) to (3); \draw(2) to (3); } \end{aligned}, \begin{aligned} \tikz{ \node[style=graphnode](1) at (90:1) {1}; \node[style=graphnode](2) at (210:1) {2}; \node[style=graphnode](3) at (330:1) {3}; \draw(1) to (3); \draw(2) to (3); } \end{aligned}, \begin{aligned} \tikz{ \node[style=graphnode](1) at (90:1) {1}; \node[style=graphnode](2) at (210:1) {2}; \node[style=graphnode](3) at (330:1) {3}; \draw(1) to (2); \draw(1) to (3); } \end{aligned}, \begin{aligned} \tikz{ \node[style=graphnode](1) at (90:1) {1}; \node[style=graphnode](2) at (210:1) {2}; \node[style=graphnode](3) at (330:1) {3}; \draw(1) to (2); \draw(2) to (3); } \end{aligned} \end{array} }$$
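The count in this example is easy to verify by brute force, identifying each graph in $\specname{G}\sbrac{A}$ with its edge set (the helper name below is ours, not part of the species formalism):

```python
from itertools import combinations

def graph_structures(labels):
    # G[A]: one simple graph on label set A per subset of the possible edges
    edges = list(combinations(sorted(labels), 2))
    return [frozenset(s)
            for r in range(len(edges) + 1)
            for s in combinations(edges, r)]

g3 = graph_structures({1, 2, 3})   # the 2**3 = 8 graphs on {1, 2, 3}
```

The same enumeration on a four-element label set yields $2^{\binom{4}{2}} = 64$ graphs.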
--- abstract: 'We propose a new type of hidden layer for a multilayer perceptron, and demonstrate that it obtains the best reported performance for an MLP on the MNIST dataset.' bibliography: - 'strings.bib' - 'strings-shorter.bib' - 'ml.bib' - 'aigaion-shorter.bib' --- The piecewise linear activation function ======================================== We propose to use a specific kind of piecewise linear function as the activation function for a multilayer perceptron. Specifically, suppose that the layer receives as input a vector $x \in \mathbb{R}^D$. The layer then computes the presynaptic output $z = x^T W + b$, where $W \in \mathbb{R}^{D \times N}$ and $b \in \mathbb{R}^N$ are learnable parameters of the layer. We propose to have each layer produce output via the activation function $h(z)_i = \text{max}_{j \in S_i} z_j$, where $S_i$ is a different non-empty set of indices into $z$ for each $i$. This function provides several benefits: - It is similar to the rectified linear units [@Glorot+al-AI-2011] which have already proven useful for many classification tasks. - Unlike rectifier units, every unit is guaranteed to have some of its parameters receive some training signal at each update step. This is because the inputs $z_j$ are only compared to each other, and not to 0, so one of them is always guaranteed to be the maximal element through which the gradient flows. In the case of rectified linear units, there is only a single element $z_j$, and it is compared against 0. In the case when $0 > z_j$, $z_j$ receives no update signal. - Max pooling over groups of units allows the features of the network to easily become invariant to some aspects of their input. For example, if a unit $h_i$ pools (takes the max) over $z_1$, $z_2$, and $z_3$, and $z_1$, $z_2$ and $z_3$ respond to the same object in three different positions, then $h_i$ is invariant to these changes in the object’s position.
A layer consisting only of rectifier units can’t take the max over features like this; it can only take their average. - Max pooling can reduce the total number of parameters in the network. If we pool with non-overlapping receptive fields of size $k$, then $h$ has size $N / k$, and the next layer has its number of weight parameters reduced by a factor of $k$ relative to if we did not use max pooling. This makes the network cheaper to train and evaluate, but also more statistically efficient. - This kind of piecewise linear function can be seen as letting each unit $h_i$ learn its own activation function: given enough linear pieces to pool over, $h_i$ can approximate arbitrary convex functions of its input. Experiments =========== We used $S_i = \{ 5 i, 5 i + 1, ... 5 i + 4 \}$ in our experiments. In other words, the activation function consists of max pooling over non-overlapping groups of five consecutive pre-synaptic inputs. Our training setup is modeled on the dropout MLP of @Hinton-et-al-arxiv2012. This MLP uses two hidden layers of 1200 units each. In our setup, the presynaptic activation $z$ has size 1200, so the pooled output of each layer has size 240. The rest of our training setup remains unchanged apart from adjustments to the hyperparameters. @Hinton-et-al-arxiv2012 report 110 errors on the test set. To our knowledge, this is the best published result on the MNIST dataset for a method that uses neither pretraining nor knowledge of the input geometry. It is not clear how @Hinton-et-al-arxiv2012 obtained a single test set number. We train on the first 50,000 training examples, using the last 10,000 as a validation set. We use the misclassification rate on the validation set to determine at what point to stop training. We then record the log likelihood on the first 50,000 examples, and continue training, but using the full 60,000 example training set.
When the log likelihood of the validation set first exceeds the recorded value of the training set log likelihood, we stop training the model, and evaluate its test set error. Using this approach, our trained model made 94 mistakes on the test set. We believe this is the best-ever result that does not use pretraining or knowledge of the input geometry.
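The pooled activation described above is simple to implement. A minimal numpy sketch (the function name is ours), pooling a batch of pre-synaptic vectors in non-overlapping groups of $k=5$:

```python
import numpy as np

def pooled_activation(z, k=5):
    # h_i = max(z_{k*i}, ..., z_{k*i + k - 1}): max over non-overlapping
    # groups of k consecutive pre-synaptic inputs
    b, n = z.shape                       # (batch, N); N must be divisible by k
    return z.reshape(b, n // k, k).max(axis=-1)

z = np.random.default_rng(0).standard_normal((2, 1200))
h = pooled_activation(z)                 # shape (2, 240), as in the 1200-unit layers
```

Since the reshape groups consecutive entries, `pooled_activation` reproduces the index sets $S_i = \{5i, \ldots, 5i+4\}$ used in the experiments.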