--- abstract: 'We are interested in the problem of robust parametric estimation of a density from i.i.d. observations. By using a practice-oriented procedure based on robust tests, we build an estimator for which we establish non-asymptotic risk bounds with respect to the Hellinger distance under mild assumptions on the parametric model. We prove that the estimator is robust even for models for which the maximum likelihood method is bound to fail. We also evaluate the performance of the estimator by carrying out numerical simulations for which we observe that the estimator is very close to the maximum likelihood one when the model is regular enough and contains the true underlying density.' address: 'Univ. Nice Sophia Antipolis, CNRS, LJAD, UMR 7351, 06100 Nice, France.' author: - Mathieu Sart bibliography: - 'biblio.bib' date: 'September, 2013' title: Robust estimation on a parametric model with tests --- Introduction ============ Consider $n$ independent and identically distributed random variables $X_1,\dots,X_n$ defined on an abstract probability space ($\Omega, \mathcal{E},{\mathbb{P}\,})$ with values in the measured space $({\mathbb{X}},\mathcal{F},\mu)$. We suppose that the distribution of $X_i$ admits a density $s$ with respect to $\mu$ and aim at estimating $s$ by using a parametric approach. When the unknown density $s$ is assumed to belong to a parametric model ${\mathscr{F}}= \{f_{\theta}, \, \theta \in \Theta \}$ of densities, a traditional method to estimate $s = f_{\theta_0}$ is the maximum likelihood one. It is indeed well known that the maximum likelihood estimator (m.l.e for short) possesses nice statistical properties such as consistency and asymptotic efficiency when the model ${\mathscr{F}}$ is regular enough. However, it is also well known that this estimator breaks down for many models ${\mathscr{F}}$ of interest, and counterexamples may be found in [@Pitman1979; @Ferguson1982; @Lecam1990mle; @BirgeTEstimateurs] among other references. 
Another drawback of the m.l.e lies in the fact that it is not robust. This means that if $s$ lies in a small neighbourhood of the model ${\mathscr{F}}$ but not in it, the m.l.e may perform poorly. Several kinds of robust estimators have been suggested in the literature to overcome this issue. We can cite the well-known $L$ and $M$ estimators (which include the class of minimum divergence estimators of [@Basu1998divergence]) and the class of estimators built from a preliminary non-parametric estimator (such as the minimum Hellinger distance estimators introduced in [@Beran1977] and the related estimators of [@Lindsay1994; @Basu1994]). In this paper, we focus on estimators built from robust tests. This approach, which began in the 1970s with the works of Lucien Le Cam and Lucien Birgé ([@LeCam1973; @LeCam1975; @Birge1983; @Birge1984; @Birge1984a]), has the nice theoretical property of yielding robust estimators under weak assumptions on the model ${\mathscr{F}}$. A key modern reference on this topic is [@BirgeTEstimateurs]. The recent papers [@BirgeGaussien2002; @BirgePoisson; @Birge2012; @BirgeDens; @BaraudBirgeHistogramme; @BaraudMesure; @Baraud2012; @SartMarkov; @Sart2012] show that increasing attention is being paid to this kind of estimator. Their main interest is to provide general theoretical results in various statistical settings (such as general model selection theorems) which are usually unattainable by the traditional procedures (such as those based on the minimization of a penalized contrast). For our statistical issue, the procedures using tests are based on the pairwise comparison of the elements of a thin discretisation ${\mathscr{F}_{\text{dis}}}$ of ${\mathscr{F}}$, that is, a finite or countable subset ${\mathscr{F}_{\text{dis}}}$ of ${\mathscr{F}}$ such that for all functions $f \in {\mathscr{F}}$, the distance between $f$ and ${\mathscr{F}_{\text{dis}}}$ is small (in a suitable sense). 
As a result, their complexities are of order the square of the cardinality of ${\mathscr{F}_{\text{dis}}}$. Unfortunately, this cardinality is often very large, making the construction of the estimators difficult in practice. The aim of this paper is to develop a faster way of using tests to build an estimator when the cardinality of ${\mathscr{F}_{\text{dis}}}$ is large. From a theoretical point of view, the estimator we propose possesses statistical properties similar to those proved in [@BirgeTEstimateurs; @BaraudMesure]. Under mild assumptions on ${\mathscr{F}}$, we build an estimator $\hat{s} = f_{\hat{\theta}} $ of $s$ such that $$\begin{aligned} \label{RelIntro} {\mathbb{P}\,}\left[C h^2 (s, f_{\hat{\theta}} ) \geq \inf_{\theta \in \Theta} h^2 (s, f_{\theta}) + \frac{d}{n} + \xi \right] \leq e^{-n \xi} \quad \text{for all $\xi > 0$,}\end{aligned}$$ where $C$ is a positive number depending on ${\mathscr{F}}$, $h$ is the Hellinger distance, and $d$ is such that $\Theta \subset {\mathbb{R}}^d$. We recall that the Hellinger distance is defined on the cone ${\mathbb{L}}^1_+ ({\mathbb{X}}, \mu)$ of non-negative integrable functions on ${\mathbb{X}}$ with respect to $\mu$ by $$h^2(f,g) = \frac{1}{2} \int_{{\mathbb{X}}} \left(\sqrt{f (x)} - \sqrt{g(x)} \right)^2 {\, \mathrm{d}}\mu (x) \quad \text{for all $f,g \in {\mathbb{L}}^1_+ ({\mathbb{X}}, \mu)$.}$$ Let us make some comments on (\[RelIntro\]). When $s$ does belong to the model ${\mathscr{F}}$, the estimator achieves a quadratic risk of order $n^{-1}$ with respect to the Hellinger distance. Besides, there then exists $\theta_0 \in \Theta$ such that $s = f_{\theta_0}$, and we may derive from [(\[RelIntro\])]{} the rate of convergence of $\hat{\theta}$ to $\theta_0$. In general, we do not suppose that the unknown density belongs to the model but rather use ${\mathscr{F}}$ as an approximate class (sieve) for $s$. 
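As a quick numerical illustration of the Hellinger distance defined above, the following sketch approximates $h^2(f,g)$ by a Riemann sum on a fine grid; the Gaussian densities and grid bounds are illustrative choices, not taken from the paper.

```python
import math

def hellinger2(f, g, xs, dx):
    # h^2(f, g) = (1/2) * integral of (sqrt(f) - sqrt(g))^2, by Riemann sum
    return 0.5 * dx * sum((math.sqrt(f(x)) - math.sqrt(g(x))) ** 2 for x in xs)

def normal(mu, sigma):
    # density of N(mu, sigma^2)
    return lambda x: math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

dx = 0.001
xs = [-10.0 + i * dx for i in range(20000)]          # grid covering [-10, 10)
h2_same = hellinger2(normal(0, 1), normal(0, 1), xs, dx)
h2_shift = hellinger2(normal(0, 1), normal(1, 1), xs, dx)
# for equal-variance normals the closed form is 1 - exp(-(mu1 - mu2)^2 / (8 sigma^2))
```

For $N(0,1)$ versus $N(1,1)$ the closed form gives $1 - e^{-1/8} \approx 0.1175$, which the Riemann sum reproduces; note also that $h^2$ always lies in $[0,1]$.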
Inequality (\[RelIntro\]) shows then that the estimator $\hat{s} = f_{\hat{\theta}}$ cannot be strongly influenced by small departures from the model. As a matter of fact, if $\inf_{\theta \in \Theta} h^2 (s, f_{\theta}) \leq n^{-1}$, which means that the model is only slightly misspecified, the quadratic risk of the estimator $\hat{s} = f_{\hat{\theta}}$ remains of order $n^{-1}$. This can be interpreted as a robustness property. The preceding inequality (\[RelIntro\]) is interesting because it proves that our estimator is robust and converges at the right rate when the model is correct. However, the constant $C$ depends on several parameters of the model, such as the size of $\Theta$. It is thus far from obvious that such an estimator can be competitive against more traditional estimators (such as the m.l.e). In this paper, we try to give a partial answer for our estimator by carrying out numerical simulations. When a very thin discretisation ${\mathscr{F}_{\text{dis}}}$ is used, the simulations show that our estimator is very close to the m.l.e when the model is regular enough and contains $s$. More precisely, the larger the number of observations $n$, the closer they are, suggesting that our estimator inherits the efficiency of the m.l.e. Of course, this does not in itself constitute a proof, but it does indicate what kind of results can be expected. A theoretical connection between estimators built from tests (with the procedure described in [@BaraudMesure]) and the m.l.e will be found in a future paper of Yannick Baraud and Lucien Birgé. In the present paper, we consider the problem of estimation on a single model. Nevertheless, when the statistician has several candidate models for $s$ at hand, a natural issue is model selection. In order to address it, one may associate to each of these models the estimator resulting from our procedure and then select among those estimators by means of the procedure of [@BaraudMesure]. 
By combining its Theorem 2 with our risk bounds on each individual estimator, we obtain that the selected estimator satisfies an oracle-type inequality. We organize this paper as follows. We begin with a glimpse of the results in Section 2. We then present a procedure and its associated theoretical results to deal with models parametrized by a one-dimensional parameter in Section 3.
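To give a concrete (and much simplified) feel for estimation over a discretised model, the sketch below replaces the paper's test-based procedure with a naive stand-in: it discretises $\Theta$ on a grid and selects the $f_{\theta}$ closest in Hellinger distance to a histogram estimate of $s$. The Gaussian family, grid, and binning are illustrative assumptions only; this is not the authors' procedure.

```python
import math, random

def hellinger2_discrete(p, q):
    # squared Hellinger distance between two discrete probability vectors
    return 0.5 * sum((math.sqrt(a) - math.sqrt(b)) ** 2 for a, b in zip(p, q))

random.seed(0)
n = 2000
data = [random.gauss(1.3, 1.0) for _ in range(n)]      # s = N(1.3, 1), unknown to the estimator

# histogram of the sample on [-4, 6) with 50 bins
lo, hi, nbins = -4.0, 6.0, 50
w = (hi - lo) / nbins
hist = [0] * nbins
for x in data:
    k = int((x - lo) / w)
    if 0 <= k < nbins:
        hist[k] += 1
emp = [c / n for c in hist]                            # empirical bin probabilities

centers = [lo + (k + 0.5) * w for k in range(nbins)]
def model_bins(theta):
    # bin probabilities of f_theta = N(theta, 1), midpoint approximation
    return [w * math.exp(-0.5 * (c - theta) ** 2) / math.sqrt(2 * math.pi) for c in centers]

grid = [i / 10 for i in range(-20, 41)]                # discretisation of Theta = [-2, 4]
theta_hat = min(grid, key=lambda t: hellinger2_discrete(emp, model_bins(t)))
```

The paper's test-based selection has the same discretise-and-compare flavour, but with robustness guarantees that this naive minimum-distance rule lacks.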
--- author: - 'F. Govoni' - 'M. Murgia' - 'M. Markevitch' - 'L. Feretti' - 'G. Giovannini' - 'G.B. Taylor' - 'E. Carretti' date: 'Received; accepted' title: 'A search for diffuse radio emission in the relaxed, cool-core galaxy clusters A1068, A1413, A1650, A1835, A2029, and Ophiuchus' --- [ We analyze sensitive, high-dynamic-range observations to search for extended, diffuse radio emission in relaxed and cool-core galaxy clusters. ]{} [We performed deep 1.4 GHz Very Large Array observations of A1068, A1413, A1650, A1835, and A2029, and complemented our dataset with archival observations of Ophiuchus. ]{} [We find that, in the central regions of A1835, A2029, and Ophiuchus, the dominant radio galaxy is surrounded by diffuse low-brightness radio emission that takes the form of a mini-halo. We detect no diffuse emission in A1650 down to the surface brightness level of the other mini-halos. We find low-significance indications of diffuse emission in A1068 and A1413, although for these to be classified as mini-halos would require further investigation, possibly with data of higher signal-to-noise ratio. In the Appendix, we report on the serendipitous detection of a giant radio galaxy with a total spatial extension of $\sim$1.6 Mpc. ]{} Introduction ============ There is now firm evidence that the intra-cluster medium (ICM) consists of a mixture of hot plasma, magnetic fields, and relativistic particles. While the baryonic content of galaxy clusters is dominated by the hot ($T\simeq 2 - 10$ keV) intergalactic gas, whose thermal emission is observed in X-rays, a fraction of clusters also exhibit megaparsec-scale radio halos (see e.g., Feretti & Giovannini 2008, Ferrari et al. 2008, and references therein for reviews). 
Radio halos are diffuse, low-surface-brightness ($\simeq 10^{-6}$ Jy/arcsec$^2$ at 1.4 GHz), steep-spectrum[^1] ($\alpha > 1$) sources, permeating the central regions of clusters, produced by synchrotron radiation of relativistic electrons with energies of $\simeq 10$ GeV in magnetic fields with $B\simeq 0.5-1\;\mu$G. Radio halos represent the strongest evidence of large-scale magnetic fields and relativistic particles throughout the intra-cluster medium. Radio halos are typically found in clusters that display significant evidence for an ongoing merger (e.g., Buote 2001, Govoni et al. 2004). Recent cluster mergers have been proposed to play an important role in the reacceleration of the radio-emitting relativistic particles, thus providing the energy to these extended sources (e.g., Brunetti et al. 2001, Petrosian 2001). To date, about 30 radio halos are known (e.g., Giovannini & Feretti 2000, Bacchi et al. 2003, Govoni et al. 2001, Venturi et al. 2007, 2008, Giovannini et al. in preparation). Because of their extremely low surface brightness and large angular extent ($>$10$'$ at a redshift z$\le$0.1), radio halos are most appropriately studied at low spatial resolution. Several radio halos were detected by Giovannini et al. (1999) in the NRAO VLA Sky Survey (NVSS; Condon et al. 1998) and by Kempner & Sarazin (2001) in the Westerbork Northern Sky Survey (WENSS; Rengelink et al. 1997), where the relatively large beam of these surveys provides the necessary sensitivity to large-scale emission for identifying these elusive sources. A major merger event is expected to disrupt cooling cores and create disturbances that are readily visible in an X-ray image of the cluster. Therefore, the merger scenario predicts the absence of large-scale radio halos in symmetric cooling-core clusters. 
However, a few cooling-core clusters exhibit signs of diffuse synchrotron emission that extends far from the dominant radio galaxy at the cluster center, forming what is referred to as a mini-halo. These diffuse radio sources are extended on a moderate scale (typically $\simeq$ 500 kpc) and, in common with large-scale halos, have a steep spectrum and a very low surface brightness. Because of a combination of small angular size and the strong radio emission of the central radio galaxy, the detection of a mini-halo requires data of a much higher dynamic range and resolution than those in available surveys, and this complicates their detection. As a consequence, our current observational knowledge of mini-halos is limited to only a handful of clusters (e.g., Perseus: Burns et al. 1992; A2390: Bacchi et al. 2003; RXJ1347.5-1145: Gitti et al. 2007), and their origin and physical properties are still poorly known. The study of radio emission from the center of cooling-core clusters is of great importance not only for understanding the feedback mechanism involved in the energy transfer between the AGN and the ambient medium (e.g., McNamara & Nulsen 2007) but also for understanding the formation process of the non-thermal mini-halos. The energy released by the central AGN may also play a role in the formation of these extended structures (e.g. Fujita et al. 2007). On the other hand, the radiative lifetime of the relativistic electrons in mini-halos is of the order of $\simeq$10$^7$$-$10$^8$ yrs, much shorter than the time necessary for them to diffuse from the central radio galaxy to the mini-halo periphery. Thus, relativistic electrons must be reaccelerated and/or injected in-situ with high efficiency in mini-halos. Gitti et al. (2002) suggested that the mini-halo emission is due to a relic population of relativistic electrons reaccelerated by MHD turbulence via Fermi-like processes, the necessary energetics being supplied by the cooling flow. 
In support of mini-halo emission being triggered by the central cooling flow, Gitti et al. (2004) found a trend between the radio power of mini-halos and the cooling flow power. Although mini-halos are usually found in cooling-core clusters with no evidence of major mergers, signatures of minor-merger activity and gas-sloshing mechanisms in clusters containing mini-halos (e.g., Gitti et al. 2007, Mazzotta & Giacintucci 2008) have been revealed, suggesting that turbulence related to minor mergers could also play a role in the electron acceleration. Alternatively, Pfrommer & En[ß]{}lin (2004) proposed that relativistic electrons in mini-halos are of secondary origin and are thus continuously produced by the interaction of cosmic ray protons with the ambient, thermal protons. Cassano et al. (2008) found that the synchrotron emissivity (energy per unit volume, per unit time, per unit frequency) of mini-halos is about a factor of 50 higher than that of radio halos. In the framework of the particle re-acceleration scenario, they suggested that an extra population of relativistic electrons would be necessary to explain the higher radio emissivity of mini-halos. These electrons could be provided by the central radio galaxy or be of secondary origin. To search for new extended diffuse radio emission in relaxed and cool-core galaxy clusters, we performed deep observations of A1068, A1413, A1650, A1835, and A2029, carried out with the Very Large Array at 1.4 GHz, and complemented our data set with a VLA archival observation of Ophiuchus. Here, we present the new mini-halos that we identified in these data. In Murgia et al. (submitted, hereafter Paper II), we quantitatively investigate the radio properties of these new sources and compare them with the radio properties of a statistically significant sample of mini-halos and halos already known in the literature, for which high-quality VLA radio images at 1.4 GHz are available. 
The radio observations and data reduction are described in Sect. 2. For each cluster, in Sect. 3, we investigate the possible presence of a central mini-halo. In Sect. 4, we discuss the interplay between the mini-halos and the cluster X-ray emission. In Sect. 5, we analyze a possible connection between the central cD galaxy and the surrounding mini-halo. Finally, our conclusions are presented in Sect. 6. Throughout this paper, we assume a $\Lambda$CDM cosmology with $H_0$ = 71 km s$^{-1}$ Mpc$^{-1}$, $\Omega_m$ = 0.27, and $\Omega_{\Lambda}$ = 0.73. VLA observations and data reduction =================================== To investigate the presence of diffuse, extended radio emission in relaxed systems, we analyzed new and archival VLA data of cooling-core clusters. The list of the clusters is reported in Table 1, while the details of the observations are described in Table 2. [cccc]{} Cluster & z & kpc/$''$ & $D_L$\ & & & Mpc\ A1068 & 0.1375 & 2.40 &
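As a sanity check on Table 1, the angular scale and luminosity distance follow from the adopted cosmology ($H_0 = 71$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_m = 0.27$, $\Omega_{\Lambda} = 0.73$) by a one-dimensional integral. The sketch below is a minimal flat-$\Lambda$CDM calculator (not the authors' code); it reproduces the $2.40$ kpc/$''$ quoted for A1068 at $z = 0.1375$.

```python
import math

C_KMS = 299792.458                   # speed of light in km/s
H0, OM, OL = 71.0, 0.27, 0.73        # cosmology adopted in the paper

def E(z):
    # dimensionless Hubble rate for a flat LambdaCDM universe
    return math.sqrt(OM * (1.0 + z) ** 3 + OL)

def comoving_distance(z, steps=20000):
    # D_C = (c/H0) * int_0^z dz'/E(z'), midpoint rule, in Mpc
    dz = z / steps
    return (C_KMS / H0) * dz * sum(1.0 / E((i + 0.5) * dz) for i in range(steps))

def kpc_per_arcsec(z):
    # angular diameter distance D_A = D_C/(1+z); kpc subtended by one arcsecond
    d_a = comoving_distance(z) / (1.0 + z)
    return d_a * 1000.0 * math.pi / (180.0 * 3600.0)

def luminosity_distance(z):
    # D_L = (1+z) * D_C in a flat universe, in Mpc
    return (1.0 + z) * comoving_distance(z)
```

For $z = 0.1375$ this gives $\approx 2.40$ kpc/$''$ and $D_L \approx 640$ Mpc.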
--- abstract: 'It is shown that in a simple coupler where one of the waveguides is subject to controlled losses of the electric field, it is possible to observe optical analogs of the linear and nonlinear quantum Zeno effects. The phenomenon consists in a *counter-intuitive* enhancement of the transparency of the coupler with increasing dissipation, and represents an optical analogue of the quantum Zeno effect. An experimental realization of the phenomenon based on the use of chalcogenide glasses is proposed. The system allows for observation of the cross-over between the linear and nonlinear Zeno effects, as well as effective manipulation of light transmission through the coupler.' author: - 'F. Kh. Abdullaev$^{1}$, V. V. Konotop$^{1,2}$, and V. S. Shchesnovich$^{3}$' title: Linear and nonlinear Zeno effects in an optical coupler --- Introduction ============ Decay of a quantum system, either because it is in a metastable state or due to its interaction with an external system (say, with a measuring apparatus), is one of the fundamental problems of quantum mechanics. Already more than fifty years ago it was proven that the decay of a quantum metastable system is, in general, non-exponential [@Khalfin; @Degasperis] (see also the reviews [@rev; @Khalfin_rev]). Ten years later, in Ref. [@Misra] it was pointed out that a quantum system undergoing frequent measurements does not decay at all in the limit of infinitely frequent measurements. This remarkable phenomenon was termed by the authors the quantum “Zeno’s paradox”. The Zeno’s paradox, i.e. the total inhibition of the decay, requires, however, unrealistic conditions and manifests itself only as the Zeno effect, i.e. the decrease of the decay rate by frequent observations, either pulsed or continuous. 
The Zeno effect was observed experimentally by studying the decay of continuously counted beryllium ions [@beryl], the escape of cold atoms from an accelerating optical lattice [@zeno_lat], the control of spin motion by circularly polarized light [@zeno_spin], the decay of the externally driven mixture of two hyperfine states of rubidium atoms [@zeno_BEC], and the production of cold molecular gases [@zeno_molec]. There is also the opposite effect, i.e. the acceleration of the decay by observations, termed the anti-Zeno effect, which is even more ubiquitous in quantum systems [@Zeno]. It was argued that the quantum Zeno and anti-Zeno effects can be explained from the purely dynamical point of view, without any reference to the projection postulate of quantum mechanics [@FPJPhyA]. In this respect, in Refs. [@BKPO; @SK] it is shown that the Zeno effect can be understood within the framework of the mean-field description, when the latter can be applied, thus providing the link between purely quantum and classical systems. The importance of the Zeno effect goes beyond quantum systems. An analogy between the quantum Zeno effect and the decay of light in an array of optical waveguides was suggested in Ref. [@Longhi]. Namely, the authors found an exact solution which showed a non-exponential decay of the field in one of the waveguides. Modeling of the quantum Zeno effect in the limit of frequent measurements using down-conversion of light in a sliced nonlinear crystal was considered in Ref. [@Reh]. The effect has been mimicked by the wave process in a $\chi^{(2)}$ coupler with linear and nonlinear arms, since in the strong coupling limit the pump photons propagate in the nonlinear arm without decay. The analogy between the inhibition of losses of molecules and the enhanced reflection of light from a medium with a very high absorption was also noticed in [@zeno_BEC]. Meanwhile, in the mean-field models explored in Refs. 
[@BKPO; @SK] inter-atomic interactions play an important role, leading to nonlinear terms in the resulting dynamical equations. In its turn, the nonlinearity introduces qualitative differences in the Zeno effect, in particular dramatically reducing the decay rate [@SK] compared to the case of noninteracting atoms. This phenomenon, the enhancement of the effect by the inter-atomic interactions, was termed in Ref. [@SK] the [*nonlinear Zeno effect*]{} (since, when the nonlinearity is negligible, it reduces to the usual linear Zeno effect). Mathematically, the mean-field descriptions of a Bose-Einstein condensate (BEC) and of light propagation in Kerr-type media are known to have many similarities, due to the same (Gross-Pitaevskii or nonlinear Schrödinger) equation describing both phenomena. [Furthermore, the linear Zeno effect is observable not only in the purely quantum setting, but also in the mean-field approximation [@SK]. This immediately suggests that detecting the Zeno dynamics is possible in classical systems, and in particular in nonlinear optics, thus offering new possibilities for managing light [@com1]. Namely, one can expect the counter-intuitive reduction of the attenuation of the total field amplitude (which would correspond to the reduction of losses of atoms in the BEC case) by increasing the losses in some parts of the system (an analogue of increasing the removal rate of atoms in the case of BEC). ]{} To report on a very basic system where analogs of the linear and nonlinear Zeno effects can be observed and exploited is the main goal of the present paper. More specifically, we explore the mathematical analogy of the semi-classical dynamics of a BEC in a double-well potential subject to removal of atoms [@SK] with light propagation in a nonlinear optical coupler in which one of the arms is subject to controllable losses. The paper is organized as follows. First, in Sec. 
\[sec:two\_examp\] we consider two well-known models of dissipative oscillators, which illustrate the classical analogues of the Zeno phenomenon (originally introduced in the quantum measurement theory). Next, in Sec. \[sec:experiment\] we discuss possible experimental settings allowing observation of the phenomenon in optics. In Sec. \[sec:NonLinZeno\] the theory of the optical nonlinear Zeno effect is considered in detail. Sec. \[sec:lin\_nonlin\] is devoted to a comparative analysis of the linear and nonlinear Zeno effects. The outcomes are summarized in the Conclusion. Two trivial examples. {#sec:two_examp} ===================== Before going into the details of the optical system, let us first give a simple insight into the purely classical origin of the phenomenon of inhibition of the field attenuation by strong dissipation. First, we recall the well-known fact that an increase of the dissipation $\alpha$ of an overdamped ($\alpha\gg \omega$) oscillator $\ddot{x} + \alpha \dot{x}+\omega^2 x=0$ results in a decrease of the attenuation of the oscillations. Indeed, the decay rate $R\approx \omega^2/\alpha$ approaches zero when the dissipation coefficient $\alpha$ goes to infinity. But the amplitude of oscillations in this case is also nearly zero. However, the coupling of another linear oscillator to the dissipative one, $$\begin{aligned} %\begin{array}{l} \ddot{x}_1+\alpha\dot{x}_1+\omega^2 x_1+\kappa x_2=0, %\\ \quad \ddot{x}_2+ \omega^2 x_2+\kappa x_1=0, %\end{array}\end{aligned}$$ allows one to observe the inhibition of attenuation due to strong dissipation by following a finite amplitude $x_2$. Indeed, the characteristic equation, $$\displaystyle{\lambda=\frac{\kappa^2-\omega^4-2\lambda^2\omega^2-\lambda^4}{\alpha(\lambda^2+\omega^2)}},$$ evidently has the small root $ \lambda\approx (\kappa^2-\omega^4)/ (\alpha\omega^2) $ which appears for $\alpha\gg \kappa^2/\omega^2-\omega^2>0$. 
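The smallness of that root is easy to check numerically. The sketch below applies Newton's method to the characteristic polynomial $(\lambda^2+\alpha\lambda+\omega^2)(\lambda^2+\omega^2)-\kappa^2=0$, starting from the asymptotic root $(\kappa^2-\omega^4)/(\alpha\omega^2)$; the parameter values $\omega=1$, $\kappa=2$, $\alpha=50$ are illustrative choices satisfying $\alpha\gg \kappa^2/\omega^2-\omega^2$.

```python
ALPHA, OMEGA, KAPPA = 50.0, 1.0, 2.0   # illustrative overdamped parameters

def char_poly(lam):
    # determinant of the coupled-oscillator system for the ansatz x_i ~ exp(lam * t)
    return (lam**2 + ALPHA*lam + OMEGA**2) * (lam**2 + OMEGA**2) - KAPPA**2

def char_poly_deriv(lam):
    return (2*lam + ALPHA) * (lam**2 + OMEGA**2) + (lam**2 + ALPHA*lam + OMEGA**2) * 2*lam

lam = (KAPPA**2 - OMEGA**4) / (ALPHA * OMEGA**2)   # asymptotic small root, here 0.06
for _ in range(30):
    lam -= char_poly(lam) / char_poly_deriv(lam)
```

Newton converges to a root within one percent of the asymptotic value $0.06$, and this root shrinks further as $\alpha$ grows, in line with the discussion above.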
Thus, one of the dynamical regimes of the system is characterized by a decay rate which goes to zero in the overdamped case; moreover, the relation between the amplitudes of the damped and undamped oscillators reads $|x_1/x_2|\to \omega^2/\kappa<1$ as $\alpha\to\infty$. In other words, strong dissipation in one of the oscillators can attenuate the energy decay in the whole system. On the other hand, the last example illustrates that if the coupling is of the same order as the eigenfrequencies of the subsystems, the energy is distributed between the two subsystems in approximately equal parts. This does not allow for further decrease of the decay rate of the energy, because a large part of it is concentrated in the damped subsystem. [The phenomenon described above for the linear oscillators can be viewed as a classical analog of the linear Zeno effect.]{} The nonlinearity changes the situation dramatically. This case, however, no longer allows for a complete analytical treatment, and that is why we now turn to a specific nonlinear system, which will be studied numerically. We consider an optical coupler composed of two Kerr-type waveguides, one arm of which is subject to relatively strong field losses. We will show that such a coupler mimics the quantum Zeno effect, allowing one to follow, in a simple optical experiment, the cross-over between the linear [(weak intensities)]{} and nonlinear [(strong
--- abstract: 'Knowledge mobilization and translation describes the process of moving knowledge from research and development (R&D) labs into environments where it can be put to use. There is increasing interest in understanding mechanisms for knowledge mobilization, specifically with respect to academia and industry collaborations. These mechanisms include funding programs, research centers, and conferences, among others. In this paper, we focus on one specific knowledge mobilization mechanism, the CASCON conference, the annual conference of the IBM Centre for Advanced Studies (CAS). The mandate of CAS when it was established in 1990 was to foster collaborative work between the IBM Toronto Lab and university researchers from around the world. The first CAS Conference (CASCON) was held one year after CAS was formed in 1991. The focus of this annual conference was, and continues to be, bringing together academic researchers, industry practitioners, and technology users in a forum for sharing ideas and showcasing the results of the CAS collaborative work. We collected data about CASCON for the past 25 years including information about papers, technology showcase demos, workshops, and keynote presentations. The resulting dataset, called “CASCONet”[^1] is available for analysis and integration with related datasets. Using CASCONet, we analyzed interactions between R&D topics and changes in those topics over time. Results of our analysis show how the domain of knowledge being mobilized through CAS has evolved over time. By making CASCONet available to others, we hope that the data can be used in additional ways to understand knowledge mobilization and translation in this unique context.' 
author: - | Dixin Luo, Kelly Lyons\ Faculty of Information, University of Toronto, Toronto, ON, Canada\ [$\{$dixin.luo,kelly.lyons$\}$@utoronto.ca]{} bibliography: - 'bare\_conf.bib' title: 'CASCONet: A Conference dataset' --- knowledge mobilization; knowledge translation; CASCON; CASCONet; computer science and engineering; topic models; time series analysis; Introduction ============ There is increasing interest in understanding how knowledge transfer and mobilization takes place. At the same time, the number of available datasets and accessible analysis tools is growing. Many efforts have been made to make conference datasets available and new techniques have been developed for analyzing conference data for the purpose of understanding outcomes such as knowledge mobilization. Vasilescu et al. [@vasilescu2013historical] present a dataset of software engineering conferences that contains historical data about the publications and the composition of program committees for eleven well-established conferences. This historical data is intended to assist conference steering committees or program committee chairs in assessing their selection process or to help prospective authors decide on conferences to which they should submit their work. Hayat and Lyons analyzed the social structure of the CASCON conference paper co-authorship network and proposed potential actions that might be taken to further develop the CASCON community [@hayat2010evolution]. They also analyzed the co-authorship ego networks of the ten most central authors in twenty-four years of papers published in the proceedings of CASCON using social network analysis and proposed a typology that differentiates three styles of co-authorship [@hayat2017typology]. 
Solomon presented an in-depth analysis of past and present publishing practices in academic computer science conference and journal publications (from DBLP) to suggest the establishment of a more consistent publishing standard [@solomon2009programmers]. Many other datasets about conference and journal publications have also been proposed, e.g., the NIPS dataset [@perrone2016poisson], the Microsoft Academic Graph (MAG) [@sinha2015overview], and the AMiner database[^2]. Interesting analyses have been proposed and carried out on some of these datasets. For example, a relatively new topic model is proposed in [@perrone2016poisson] and verified on the NIPS dataset — the dynamics of topics on NIPS over time are analyzed quantitatively, e.g., standard neural networks (“NNs backpropagation”) were extremely popular until the early 90s; however, after this, papers on this topic went through a steady decline, only to increase in popularity later on. Moreover, the popularity of deep architectures and convolutional neural networks (“deep learning”) steadily increased over these 29 years, to the point that deep learning was the most popular among all topics in NIPS in 2015. A heterogeneous entity graph of publications is proposed in MAG [@sinha2015overview], which has potential to improve academic information retrieval and recommendation systems. In this paper, we describe a specific conference dataset and demonstrate how analyses performed on that dataset can provide insights into mechanisms of knowledge transfer. We consider the CASCON conference, the annual conference of the IBM Centre for Advanced Studies (CAS). The mandate of CAS when it was established in $1990$ was to foster collaborative work between IBM Toronto and university researchers from around the world [@perelgut1997overview]. It is a unique knowledge mobilization and translation environment, specifically designed to facilitate the transfer of technology from university research into IBM products and processes. 
The CASCONet dataset presented in this paper is unique in that it includes not only data about authors and papers but also data about all aspects of the CASCON conference. ![image](Schema.png){width="0.9\linewidth"} The first CAS Conference (CASCON) was held in $1991$, one year after CAS was formed. The focus of this conference was, and continues to be, bringing together researchers, government employees, industry practitioners, and technology users in a forum for sharing ideas and results of CAS collaborative work [@perelgut1997overview]. The CASCON conference is an interesting object of study because it is the annual conference of the IBM Center for Advanced Studies (CAS), a unique center for knowledge mobilization and translation. Furthermore, rather than focusing on a narrow topic area in computer science research, CASCON’s mandate is broader, covering many topics in computer science and software engineering with a focus around industry / university collaborations. It is therefore interesting to understand what kinds of unique knowledge mobilization structures can be identified by analyzing data about CASCON. The central data element of the CASCONet dataset is “person” and each person’s role in CASCON activities is described through the data. CASCONet includes the author role and provides title, author, and publication year for over 800 CASCON papers. The workshop chair role includes workshop title, workshop chair, and year. The keynote data associates people (presenters) with keynote titles and year. Finally, the demos data links demo presenter to title and year. 
The people, papers, themes of the workshops, topics of the keynote presentations, and the products and tools presented in the demos over the past 25 years reflect the evolving processes and knowledge mobilization in the CASCON community and may provide a glimpse into the field of computer science (advanced methods and techniques, challenges and urgent problems, innovation, and applications) over time. As an example of the kinds of analyses that can be performed on this dataset, we present basic statistics of CASCON and analyze the temporal dynamics of topics presented at CASCON. We believe that this dataset and analyses such as these may provide researchers of computer science and social science with a new resource to study the co-evolution of academic and industry communities. Properties of CASCONet ====================== The first CASCON took place in $1991$. More than $1500$ researchers, technologists, developers, and decision makers attend CASCON each year. CASCONet $(1991 - 2016)$ contains data about a total of $2517$ people who have written $846$ papers, presented $1212$ demos, delivered $107$ keynote presentations, and organized $796$ workshops. Fig. \[fig:schema\] shows the schema of CASCONet.
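Summary statistics like those reported in the following paragraphs can be derived directly from the person–paper data. A minimal sketch, assuming a hypothetical flattened `(author, paper_id, year)` authorship list rather than the actual CASCONet schema:

```python
from collections import defaultdict

def author_stats(authorship):
    """Per-author paper counts and active time spans.

    `authorship` is a list of (author, paper_id, year) triples --
    an assumed flattening of the CASCONet person/paper tables.
    """
    papers = defaultdict(set)
    years = defaultdict(list)
    for author, paper_id, year in authorship:
        papers[author].add(paper_id)
        years[author].append(year)
    n_authors = len(papers)
    repeat = sum(1 for p in papers.values() if len(p) > 1)
    span10 = sum(1 for y in years.values() if max(y) - min(y) > 10)
    return {"pct_repeat": 100.0 * repeat / n_authors,
            "max_papers": max(len(p) for p in papers.values()),
            "span_gt_10": span10}

rows = [("a", 1, 1991), ("a", 2, 2005), ("b", 3, 1999)]
stats = author_stats(rows)
assert stats["pct_repeat"] == 50.0 and stats["max_papers"] == 2
assert stats["span_gt_10"] == 1   # author "a" spans 14 years
```

The same pattern (group by person, aggregate over roles) extends to the workshop, keynote, and demo tables.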
Topic                    Top 10 words
------------------------ ----------------------------------------------------------------------------------------------------
Users and Business       User, Internet, task, trust, model, resource, web, information, database, search
Cloud and Web Services   Cloud, web, service, application, design, development, integral, user, code, URL
Systems                  System, model, distributed, computing, database, user, management, paper, application, information
Programs and Code        Program, performance, compiler, language, parallel, class, oriented, object, code, Java
Applications             Interaction, application, user, mobile, interface, device, information, visual, tool, support
Networks and Security    Security, network, cache, local, communication, enterprise, layer, server, grid, privacy
Software                 Software, design, analysis, engine, tool, approach, test, development, process, performance
Databases                Database, usage, system, transaction, optimization, query, DB2, data, user, system
Data Analysis            Data, analytic, user, mining, decision, event, distribution, information, business, system
Algorithms               Algorithm, problem, performance, architecture, system, cluster, design, time, schedule, test

[**Person.**]{} According to the dataset, $24.0\%$ of the people who authored papers at CASCON have published more than one paper in CASCON. The largest number of papers published by a single person is $20$. There are $33$ people whose time span of authoring papers in CASCON is greater than $10$ years. There are only $4$ people who have participated in CASCON in all roles: as author, workshop chair, demo organizer, and keynote speaker. [**Papers.**]{} Fig. \[fig:paperyear\] shows the number of CASCON papers published each year. The CASCON main conference has accepted a relatively stable number of papers each year since $1997$. In the early years, a greater number of papers were accepted to CASCON. Then in $2006$ and $2007
--- abstract: 'We use the framework of a relativistic constituent quark model to study the semileptonic transitions of the $B_c$ meson into $(\bar c c)$ charmonium states where $(\bar c c)=\eta_c\,(^1S_0),$ $J/\psi\, (^3S_1),$ $\chi_{c0}\, (^3P_0),$$\chi_{c1}\, (^3P_1),$ $h_c\, (^1P_1),$ $\chi_{c2}\, (^3P_2),$ $\psi\, (^3D_2)$. We compute the $q^2$–dependence of all relevant form factors and give predictions for their semileptonic $B_c$ decay modes including also their $\tau$-modes. We derive a formula for the polar angle distribution of the charged lepton in the $(l\nu_l)$ c.m. frame and compute the partial helicity rates that multiply the angular factors in the decay distribution. For the discovery channel $B_c\to J/\psi(\rightarrow \mu^+ \mu^-) l \nu$ we compute the transverse/longitudinal composition of the $ J/\psi$ which can be determined by an angular analysis of the decay $ J/\psi \rightarrow \mu^+ \mu^-$. We compare our results with the results of other calculations.' author: - 'Mikhail A. Ivanov' - 'Juergen G. Körner' - Pietro Santorelli title: 'Semileptonic decays of $B_c$ mesons into charmonium states in a relativistic quark model' --- Introduction {#s:intro} ============ In 1998 the CDF Collaboration reported on the observation of the bottom-charm $B_c$ meson at Fermilab [@CDF]. The $B_c$ mesons were found in an analysis of the semileptonic decays $B_c\to J/\psi l \nu$ with the $J/\psi$ decaying into muon pairs. Values for the mass and the lifetime of the $B_c$ meson were given as $M(B_c)=6.40\pm 0.39\pm 0.13$ GeV and $\tau(B_c)=0.46^{+0.18}_{-0.16}({\rm stat})\pm 0.03({\rm syst})\cdot 10^{-12}$ s, respectively. First $B_c$ mesons are now starting to be seen also in the Run II data from the Tevatron [@lucchesi04; @cdf04]. Much larger samples of $B_c$ mesons and more information on their decay properties are expected from the current Run II at the Tevatron and future experiments at the LHC starting in 2007. 
In particular, this holds true for the dedicated detectors BTeV and LHCb, which are specifically designed for the analysis of B physics and where one expects to see up to $10^{10}$ $B_c$ events per year. The study of the $B_c$ meson is of great interest due to some of its outstanding features. It is the lowest bound state of two heavy quarks (charm and bottom) with open (explicit) flavor. As far as the bound state characteristics are concerned, the $B_c$ meson is quite similar to the $J^{PC}=0^{-+}$ states $\eta_c$ and $\eta_b$ in the charmonium ($c\bar c$-bound state) and the bottomonium ($b\bar b$-bound state) sector. However, the $\eta_c$ and $\eta_b$ have hidden (implicit) flavor and decay strongly and electromagnetically, whereas the $B_c$-meson decays weakly since it lies below the $B\bar D$-threshold. The $B_c$ meson and its decays have been widely studied in the literature. The theoretical status of the $B_c$-meson was reviewed in [@Gershtein:1998mb]. The $B_c$ lifetime and decays were studied in the pioneering paper [@Lusignoli:1990ky]. The exclusive semileptonic and nonleptonic (assuming factorization) decays of the $B_c$-meson were calculated in a potential model approach [@Chang:1992pt]. The binding energy and the wave function of the $B_c$-meson were computed by using a flavor-independent potential with the parameters fixed by the $c\bar c$ and $b \bar b$ spectra and decays. The same processes were also studied in the framework of the Bethe-Salpeter equation in [@AMV] and in the relativistic constituent quark model formulated on the light-front in [@AKNT]. Three-point sum rules of QCD and NRQCD were analyzed in [@KLO] and [@Kiselev:2000pp] to obtain the form factors of the semileptonic decays $B^+_c\to J/\psi(\eta_c)l^+\nu$ and $B^+_c\to B_s(B_s^\ast)l^+\nu$.
As shown by the authors of [@Jenkins], the form factors parameterizing the $B_c$ semileptonic matrix elements can be related to a smaller set of form factors if one exploits the decoupling of the spin of the heavy quarks in the $B_c$ and in the mesons produced in the semileptonic decays. The reduced form factors can be evaluated as an overlap integral of the meson wave-functions which can be obtained, for example, using a relativistic potential model. This was done in [@Colangelo], where the $B_c$ semileptonic form factors were computed and predictions for semileptonic and non-leptonic decay modes were given. In [@Ivanov:2000aj] we focused on its exclusive leptonic and semileptonic decays which are sensitive to the description of long distance effects. From the semileptonic decays one can obtain results on the corresponding two-body non-leptonic decay processes in the so-called factorization approximation. The calculations have been done within our relativistic constituent quark model based on an effective Lagrangian describing the coupling of hadrons $H$ to their constituent quarks. The relevant coupling strength is determined by the compositeness condition $Z_H=0$ [@SWH; @EI] where $Z_H$ is the wave function renormalization constant of the hadron $H$. The relativistic constituent quark model was also employed in a calculation of the exclusive rare decays $B_c\to D(D^\ast)\bar l l$ [@Faessler:2002ut] and of the nonleptonic decays $B_c\to D_s \overline {D^0}$ and $B_c\to D_s D^0$ [@Ivanov:2002un]. In the latter case we confirmed that the nonleptonic decays $B_c\to D_s \overline {D^0}$ and $B_c\to D_s D^0$ are well suited to extract the CKM angle $\gamma$ through amplitude relations, as was originally proposed in [@masetti1992; @fleischer2000]. The reason is that the branching fractions into the two channels are of the same order of magnitude. 
In this paper we continue the study of $B_c$ decay properties and calculate the branching rates of the semileptonic decays $B_c\to (\bar c c)\,l\nu$ with $(\bar c c)=\eta_c\,(^1S_0),$ $J/\psi\, (^3S_1),$ $\chi_{c0}\, (^3P_0),$ $\chi_{c1}\, (^3P_1),$ $h_c\, (^1P_1),$ $\chi_{c2}\, (^3P_2),$ $\psi\, (^3D_2)$. We compare our results with the results of [@Chang:1992pt; @Chang:2001pm], where it was shown that these decay rates are quite sizable and may be accessible in Run II of the Tevatron and/or at the LHC. Two-particle decays of the $B_c$-meson into charmonium states have been studied before in [@Kiselev:2001zb] by using the factorization of hard and soft contributions. The weak decays of the $B_c$-meson to charmonium have been studied in the framework of the relativistic quark model based on the quasipotential approach in [@Ebert:2003cn]. In this paper we compute all form factors of the above semileptonic $B_c$-transitions and give predictions for various semileptonic $B_c$ decay modes including their $\tau$-modes. From a general point of view we would like to remark that the semileptonic decays of the $\tau$-lepton have been studied within perturbative QCD. This has allowed one to determine the strong coupling constant with high accuracy (see e.g. [@Korner:2000xk]). We have improved on our previous calculation [@Ivanov:2000aj] in that we no longer employ the so-called impulse approximation. In the impulse approximation one assumes that the vertex functions depend only on the loop momentum flowing through the vertex. Dropping the impulse approximation means that the vertex function can also depend on external momenta according to the flow of momentum through the vertex. A comparison with the results for the decays into the para- and ortho-charmonium states $(\bar c c)=\eta_c\,(^1S_0),$
--- abstract: 'Ranked data appear in many different applications, including voting and consumer surveys. A situation often arises in which the data are only partially ranked. Partially ranked data can be thought of as missing data. This paper addresses parameter estimation for partially ranked data under a (possibly) non-ignorable missing mechanism. We propose estimators for both complete rankings and missing mechanisms, together with a simple estimation procedure. Our estimation procedure leverages graph regularization in conjunction with the Expectation-Maximization algorithm and is theoretically guaranteed to converge. We reduce modeling bias by allowing a non-ignorable missing mechanism. In addition, we avoid the inherent complexity of a non-ignorable missing mechanism by introducing graph regularization. The experimental results demonstrate that the proposed estimators work well under non-ignorable missing mechanisms.' address: | Department of Mathematical Informatics,\ Graduate School of Information Science and Technology,\ The University of Tokyo author: - 'Kento Nakamura, Keisuke Yano,' - Fumiyasu Komaki bibliography: - 'NakamuraYanoKomaki.bib' date: '.' title: | Learning partially ranked data\ based on graph regularization --- Introduction ============ Data commonly come in the form of rankings in preference surveys such as voting and consumer surveys. By asking people to rank items according to their preferences, we obtain a collection of rankings. Several methods for ranked data have been proposed: [@mallows1957non] proposed a parametric model, now called the Mallows model; [@diaconis1989generalization] developed a spectral analysis for ranked data; recently, the analysis of ranked data has gathered much attention in the machine learning community (see [@liu2011learning; @furnkranz2011preference]). See Section \[section: literature\] for more details.
Partially ranked data are often observed in real data analysis. This is because one does not necessarily express his or her preferences completely; for example, according to the election records of the American Psychological Association collected in 1980, one-third of ballots provided full preferences for five candidates, and the rest provided only top-$t$ preferences with $t=1,2,3$ (see Section 2A in [@diaconis1989generalization]); data are also commonly partially ranked in movie ratings, because respondents usually know only a few movie titles among a vast number of movies. Therefore, analyzing partially ranked data efficiently extends the range of application of statistical methods for ranked data. Partially ranked data can be thought of as missing data: we can naturally consider that there exists a latent complete ranking behind a partial ranking, as discussed in [@lebanon2008non]. The existing studies for partially ranked data make the Missing-At-Random (MAR) assumption, that is, the assumption that the missing mechanism generating partially ranked data is ignorable; under the MAR assumption, [@busse2007cluster] and [@meilua2010dirichlet] leverage an extended distance for partially ranked data, and [@lu2011learning] introduces a probability model for partially ranked data. However, an improper application of the MAR assumption may lead to a relatively large estimation error, as argued in the literature on missing data analysis ([@little2014statistical]). In the statistical sense, if the missing mechanism is non-ignorable, using the MAR assumption is equivalent to using a misspecified likelihood function, which causes significantly biased parameter estimation and prediction. In fact, [@marlin2009collaborative] points out that the MAR assumption is violated in music rankings. This paper addresses learning the distribution of complete and partial rankings based on partially ranked data under a (possibly) non-ignorable missing mechanism.
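One intrinsic obstacle, quantified in the next paragraph, is the sheer number of parameters of a fully parameterized missing mechanism: each of the $r!$ complete rankings carries its own multinomial over the $r-1$ possible lengths $t$ of a top-$t$ ranking, giving $r!(r-2)$ free parameters. A quick counting sketch (illustrative, not from the paper):

```python
from math import factorial

def n_free_params(r):
    """Free parameters of a fully parameterized top-t missing mechanism:
    one multinomial over the r-1 possible lengths t for each of the r!
    complete rankings; each multinomial has r-2 free parameters because
    its probabilities sum to one."""
    return factorial(r) * (r - 2)

assert n_free_params(4) == 48          # 24 rankings x 2 free parameters
assert n_free_params(5) == 360
assert n_free_params(10) == 29030400   # already ~29 million parameters
```

This explosion is exactly what the graph regularization introduced below is meant to tame.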
Our approach includes estimating a missing mechanism. However, estimating a missing mechanism has an intrinsic difficulty. Consider a top-$t$ ranking of $r$ items. The length $t$ characterizes the missing pattern generating a top-$t$ ranking from a complete ranking with $r$ items. Fully parameterizing the missing mechanism requires $r!(r-2)$ parameters, since the missing mechanism is modeled by $r!$ multinomial distributions with $r-1$ categories each. Note that the number of complete rankings is $r!$. Such a large number of parameters causes over-fitting, especially when the sample size is small. To avoid over-fitting, we introduce an estimation method leveraging the recent graph regularization technique ([@hallac2015network]) together with the Expectation-Maximization (EM) algorithm. The numerical experiments using simulation data as well as applications to real data indicate that our proposed estimation method works well, especially under non-ignorable missing mechanisms. Contribution ------------ In this paper, we propose estimators for the distribution of a latent complete ranking and for a missing mechanism. To this end, we employ both a latent variable model and a recently developed graph regularization. Our proposal has two merits: First, we allow a missing mechanism to be non-ignorable by fully parameterizing it. Second, we reduce over-fitting due to the complexity of missing mechanisms by exploiting a graph regularization method. ![ A latent structure behind partially ranked data when the number of items is four: A ranking is expressed as a list of ranked items. The number located at the $i$-th position of a list represents the label of the $i$-th preference. The top layer shows latent complete rankings with a graph structure. A vertex in the top layer corresponds to a latent complete ranking. An edge in the top layer is endowed with a distance between complete rankings. The bottom three layers show partial rankings generated according to missing mechanisms.
An arrow from the top layer to the bottom three layers corresponds to a missing pattern. A probability on arrows from a complete ranking to the resulting partial rankings corresponds to a missing mechanism. []{data-label="fig:concept"}](concept.pdf){width="7cm"} Our ideas for the construction of the estimators are two-fold: First, we work with a latent structure behind partially ranked data (see Figure \[fig:concept\]). This structure consists of the graph representing complete rankings (in the top layer) and arrows representing missing patterns. In this structure, a vertex in the top layer represents a latent complete ranking; an edge is endowed with a distance between complete rankings; an arrow from the top layer to the bottom layers represents a missing pattern; a multinomial distribution on arrows from a complete ranking corresponds to a missing mechanism. Second, we assume that two missing mechanisms become more similar as the associated complete rankings get closer to each other on the graph (in the top layer). These ideas are implemented by the graph regularization method ([@hallac2015network]) under the restriction to the probability simplex, together with the EM algorithm. In addition, we discuss the convergence properties of the proposed method. The simulation studies as well as applications to real data demonstrate that the proposed method improves on the existing methods under non-ignorable missing mechanisms, and that its performance is comparable to those of the existing methods under the MAR assumption. Literature review {#section: literature} ----------------- Relatively scarce is the literature on inference for ranking-related data with (non-ignorable) missing data. [@Marlin:2007:CFM:3020488.3020521] points out that the MAR assumption does not hold in the context of collaborative filtering.
[@Marlin:2007:CFM:3020488.3020521] and [@marlin2009collaborative] propose two estimators based on missing mechanisms. These estimators outperform estimators that ignore the missing mechanism, both in predicting ratings and in suggesting top-$t$ ranked items to users. Using the Plackett-Luce and the Mallows models, [@fahandar2017statistical] introduces a rank-dependent coarsening model for pairwise ranking data. Our study differs from these studies in the type of ranking-related data considered: [@marlin2009collaborative] and [@Marlin:2007:CFM:3020488.3020521] discuss rating data; [@fahandar2017statistical] discusses pairwise ranking data; this study discusses partially ranked data. Several methods have been proposed for estimating distributions of partially ranked data ([@beckett1993maximum; @busse2007cluster; @meilua2010dirichlet; @lebanon2008non; @jacques2014model; @caron2014bayesian]). These methods regard partially ranked data as missing data. [@beckett1993maximum] discusses imputing items on missing positions of a partial ranking by employing the EM algorithm. [@busse2007cluster] and [@meilua2010dirichlet] discuss the clustering of top-$t$ rankings by the existing ranking distances for top-$t$ rankings. [@lebanon2008non] proposes a non-parametric model together with a computationally efficient estimation method for partially ranked data. For the proposal, [@lebanon2008non] exploits the algebraic structure of partial rankings and utilizes the Mallows distribution as a smoothing kernel. [@jacques2014model] proposes a clustering algorithm for multivariate partially ranked data. [@caron2014bayesian] discusses Bayesian non-parametric inferences of top-$t$ rankings on the basis of the Plackett-Luce model. [@caron2014bayesian] does not explicitly rely on the framework that regards partially ranked data as the result of missing data; however, the model discussed in [@caron2014bayesian] is equivalent to that under the MAR assumption.
Overall, all previous studies rely on the MAR assumption, whereas our study is the first attempt to estimate the distribution of partially ranked data with a (possibly) non-ignorable missing mechanism. We work with the graph regularization framework called Network Lasso ([@hallac2015network]). Network Lasso employs the alternating direction method of multipliers (ADMM; see [@boyd2011
--- abstract: | We study the relationship between the higher order variational eigenvalues of the $p$-Laplacian and the higher order Cheeger constants. The asymptotic behavior of the $k$-th Cheeger constant is investigated. Using methods developed in [@2], we obtain a high-order Cheeger inequality for the $p$-Laplacian on a domain, $h_k^p(\Omega)\leq C \lambda_{k}(p,\Omega)$. [***Keywords***]{}: [High order Cheeger’s inequality; eigenvalue problem; $p$-Laplacian]{} author: - | [Shumao Liu]{}\ [ School of Statistics and Mathematics]{}\ [ Central University of Finance and Economics]{}\ [Beijing, China, 100081]{}\ [(lioushumao@163.com ) ]{}\ title: ' High-order Cheeger’s inequality on domain ' --- Introduction. ============= Let $\Omega\subset\mathbb{R}^n$ be a bounded open domain. The minimax of the so-called Rayleigh quotient $$\label{lambda} \lambda_{k}(p,\Omega)=\inf_{A\in \Gamma_{k,p}}\max_{u\in A}\displaystyle \frac{\int_{\Omega}{|\nabla u|^pdx}}{\int_{\Omega}|u|^pdx},\ (1<p<\infty),$$ leads to a nonlinear eigenvalue problem, where $$\Gamma_{k,p}=\{A\subseteq {W^{1,p}_0}(\Omega)\backslash \{0\}\ |\ A\cap\{\|u\|_p=1\}\ \mbox{is compact,}\ A\ \mbox{symmetric,}\ \gamma(A)\geq k\}.$$ The corresponding Euler-Lagrange equation is $$\label{p-laplacian equation} -\Delta_pu:=-\mbox{div}(|\nabla u|^{p-2}\nabla u)=\lambda|u|^{p-2}u,$$ with Dirichlet boundary condition. This eigenvalue problem has been extensively studied in the literature. When $p=2$, it is the familiar linear Laplacian equation $$\Delta u+\lambda u=0.$$ The solution of this equation describes the shape of an eigenvibration, of frequency $\sqrt{\lambda}$, of a homogeneous membrane stretched over the frame $\Omega$. It is well known that the spectrum of the Laplacian is discrete and that all eigenfunctions form an orthonormal basis of the space $L^2(\Omega)$. For general $1<p<\infty$, the first eigenvalue $\lambda_1(p,\Omega)$ of the $p$-Laplacian $-\Delta_p$ is simple and isolated.
The second eigenvalue $\lambda_2(p,\Omega)$ is well-defined and has a “variational characterization", see [@20]. It has exactly 2 nodal domains, cf. [@14]. However, we know little about the higher eigenvalues and eigenfunctions of the $p$-Laplacian when $p\not=2$. It is unknown whether the variational eigenvalues (\[lambda\]) exhaust the spectrum of equation (\[p-laplacian equation\]). In this paper, we only discuss the variational eigenvalues (\[lambda\]). For (\[lambda\]), there are asymptotic estimates, cf. [@17] and [@18]. [@21], [@22], and [@23] discuss the $p$-Laplacian eigenvalue problem as $p\rightarrow\infty$ and $p\rightarrow 1$. The Cheeger constant, first studied by J. Cheeger in [@9], is defined by $$\label{cheeger inequality} h(\Omega):=\displaystyle\inf_{D\subseteq\Omega}\frac{|\partial D|}{|D|},$$ with $D$ varying over all smooth subdomains of $\Omega$ whose boundary $\partial D$ does not touch $\partial\Omega$, and with $|\partial D|$ and $|D|$ denoting the $(n-1)$- and $n$-dimensional Lebesgue measures of $\partial D$ and $D$, respectively. We call a set $C\subseteq \overline{\Omega}$ a Cheeger set of $\Omega$ if $\displaystyle\frac{|\partial C|}{|C|}=h(\Omega)$. For more about uniqueness and regularity, we refer to [@11]. Cheeger sets are of significant importance in the modelling of landslides, see [@24], [@25], or in fracture mechanics, see [@26].
The classical Cheeger inequality relates the first eigenvalue of the Laplacian and the Cheeger constant (cf. [@3]): $$\lambda_{1}(2,\Omega)\geq \bigg(\frac{h(\Omega)}{2}\bigg)^2\quad\mbox{i.e.}\quad h(\Omega)\leq 2\sqrt{\lambda_{1}(2,\Omega)},$$ which was extended to the $p$-Laplacian in [@12]: $$\lambda_{1}(p,\Omega)\geq \bigg(\frac{h(\Omega)}{p}\bigg)^p.$$ When $p=1$, the first eigenvalue of the $1$-Laplacian is defined by $$\label{1-laplace} \lambda_{1}(1,\Omega):=\min_{0\not=u\in BV(\Omega)}\displaystyle\frac{\int_{\Omega}|Du|+\int_{\partial\Omega}|u|d\mathcal{H}^{n-1}}{\int_{\Omega}|u|dx},$$ where $BV(\Omega)$ denotes the space of functions of bounded variation in $\Omega$. From [@3], $\lambda_{1}(1,\Omega)=h(\Omega)$. Moreover, problem (\[cheeger inequality\]) and problem (\[1-laplace\]) are equivalent in the following sense: a function $u\in BV(\Omega)$ is a minimum of (\[1-laplace\]) if and only if almost every level set is a Cheeger set. An important difference between $\lambda_1(p,\Omega)$ and $h(\Omega)$ is that the first eigenfunction of the $p$-Laplacian is unique, while the uniqueness of the Cheeger set depends on the topology of the domain. For counterexamples, see [@4 Remark 3.13]. For more results about the eigenvalues of the $1$-Laplacian, we refer to [@6] and [@7]. For more general Lipschitz domains, we need the following definition of perimeter: $$P_{\Omega}(E):=\sup\bigg{\{}\int_E \mbox{div}\phi dx\bigg{|} \phi\in C^1_c(\Omega, \mathbb{R}^n), |\phi|\leq 1, \mbox{div} \phi\in L^{\infty}(\Omega)\bigg{\}}.$$ For convenience, we denote $|\partial E|:=P_{\Omega}(E)$. The higher order Cheeger constant is defined by $$h_k(\Omega):=\inf\{\lambda\in \mathbb{R}^+|\exists \ E_1,E_2,\cdots,E_k\subseteq \Omega, E_i\cap E_j=\emptyset\ \mbox{for}\ i\not=j,\max_{i=1,2,\cdots,k}\frac{|\partial E_i|}{|E_i|}\leq \lambda \};$$ if $|E|=0$, we set $\displaystyle\frac{|\partial E|}{|E|}=+\infty$.
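The classical inequality can be checked numerically on a simple example. On $\Omega=(0,1)$ one has $h(\Omega)=2$ (any subinterval $D$ has $|\partial D|=2$ and $|D|<1$) and $\lambda_{1}(2,\Omega)=\pi^{2}$, so $\lambda_{1}\geq (h/2)^{2}=1$ holds with room to spare. A finite-difference sketch (not from the paper):

```python
import numpy as np

# Dirichlet Laplacian on (0, 1) by second-order finite differences.
n = 400                              # number of interior grid points
h = 1.0 / (n + 1)                    # grid spacing
A = (np.diag(2.0 * np.ones(n)) -
     np.diag(np.ones(n - 1), 1) -
     np.diag(np.ones(n - 1), -1)) / h**2

lam1 = np.linalg.eigvalsh(A)[0]      # smallest eigenvalue, approx pi^2
cheeger = 2.0                        # h((0,1)): inf of 2/|D| over subintervals

assert abs(lam1 - np.pi**2) < 1e-2   # discretization error is O(h^2)
assert lam1 >= (cheeger / 2.0)**2    # Cheeger: pi^2 >= 1
```

The gap between $\pi^2\approx 9.87$ and the lower bound $1$ reflects the fact that the Cheeger bound is not sharp on intervals.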
An equivalent characterization of the higher order Cheeger constant is (see [@4]) $$h_k(\Omega):=\inf_{\mathfrak{D}_k}\max_{i=1,2,\cdots,k}h(E_i),$$ where $\mathfrak{D}_k$ is the set of all partitions of $\Omega$ into $k$ subsets. We set $h_1(\Omega):=h(\Omega)$. Obviously, if $R\subseteq \Omega$, then $h_k(\Omega)\leq h_k(R)$. For the high-order Cheeger constants, there is a conjecture: $$\label{conjecture} \lambda_{k}(p,\Omega)\geq \bigg(\frac{h_k(\Omega)}{p}\bigg)^p\qquad \forall\ 1\leq k < +\infty, \ 1< p < +\infty.$$ From [@14 Theorem 3.3], the second variational eigenfunction of $-\Delta_p$ has exactly two nodal domains, see also [@20]. It follows that (\[conjecture\]) holds for $k=1,2$. We refer to [@4 Theorem 5.4] for more details. However, by Courant’s nodal domain theorem, the other variational eigenfunctions need not have exactly $k$ nodal domains. Therefore, inequality (\[conjecture\]) on a domain is still an open problem for $k>2$. In this paper, we obtain an asymptotic estimate for $h_k(\Omega)$, establish a high-order Cheeger inequality for general $k$, and discuss the reversed inequality. To deal with the high-order Cheeger inequality, we need some restrictions on the domain. If there exist an $n$-dimensional rectangle $R\subset \Omega$ and constants $c_1,c_2$ independent of $\Omega$ such that $c_1|R|\leq |\Omega|\leq c_2|R|$, we call $R$ a comparable inscribed rectangle of $\Omega$. In graph theory, when $p=2$ the high-order Cheeger inequality was proved in [@1], and was improved in [@2]. In [@1], using the orthogonality of the eigenfunctions of the Laplacian in $l_2$ and a random partitioning, they obtained $$\frac{\lambda_k}{2}\leq \rho_G(k)\leq O(k^2)\sqrt{\lambda_k},$$ where $\rho_G(k)$ is the
--- author: - 'Niral Desai,' - 'Can Kilic,' - 'Yuan-Pao Yang,' - Taewook Youn bibliography: - 'bib\_FDM\_xdim.bib' title: Suppressed flavor violation in Lepton Flavored Dark Matter from an extra dimension --- Introduction {#sec:intro} ============ While the existence of dark matter (DM) is strongly supported by astronomical observations, its microscopic nature remains a mystery. In the absence of experimental input from particle physics experiments such as the Large Hadron Collider (LHC), direct, or indirect DM detection experiments, models of DM are designed to be simple, and to be compatible with extensions of the Standard Model that are motivated by other considerations. For instance, in models that address the naturalness problem of the scalar sector in the Standard Model (SM) by introducing partner particles that are odd under a $Z_{2}$ symmetry, the DM can be the lightest partner particle, which often leads to its observed relic abundance through thermal production in the early universe. Alternatively, models of asymmetric DM [@Nussinov:1985xr; @Gelmini:1986zz; @Barr:1990ca; @Barr:1991qn; @Kaplan:1991ah; @Kaplan:2009ag; @Petraki:2013wwa; @Zurek:2013wia] allow for a simple connection between DM and the matter/antimatter asymmetry in the SM sector. Axion DM [@PhysRevD.16.1791; @PhysRevLett.38.1440; @PhysRevLett.40.223; @PhysRevLett.40.279] is motivated by its connection to the strong CP problem. 
Recently, models of Flavored Dark Matter (FDM) [@MarchRussell:2009aq; @Cheung:2010zf; @Kile:2011mn; @Batell:2011tc; @Agrawal:2011ze; @Kumar:2013hfa; @Lopez-Honorez:2013wla; @Kile:2013ola; @Batell:2013zwa; @Agrawal:2014una; @Agrawal:2014aoa; @Hamze:2014wca; @Lee:2014rba; @Kile:2014jea; @Kilic:2015vka; @Calibbi:2015sfa; @Agrawal:2015tfa; @Bishara:2015mha; @Bhattacharya:2015xha; @Baek:2015fma; @Chen:2015jkt; @Agrawal:2015kje; @Yu:2016lof; @Agrawal:2016uwf; @Galon:2016bka; @Blanke:2017tnb; @Blanke:2017fum; @Renner:2018fhh; @Dessert:2018khu] have been introduced to consider a different type of connection, between DM and the flavor structure of the SM. In FDM models, the DM is taken to transform non-trivially under lepton, quark, or extended flavor symmetries, and it couples to SM fermions at the renormalizable level via a mediator. This coupling is taken to be of the form $$\lambda_{ij}\, \phi\, \bar{\chi}_{i}\, \psi_{j}+ \mathrm{h.c.}, \label{eq:FDMgeneral}$$ where the $\chi_{i}$ represent the DM “flavors”, the $\psi_{j}$ are generations of a SM fermion (such as the right-handed leptons) and $\phi$ is the mediator. Both particle physics as well as astrophysical signatures of FDM have become active areas of research. Because of the non-trivial flavor structure of the interaction of equation \[eq:FDMgeneral\], one of the main phenomenological challenges for FDM models is to keep beyond the Standard Model flavor changing processes under control. Indeed, when no specific structure is assumed for the entries in the $\lambda_{ij}$ matrix, the off-diagonal elements can give rise to flavor changing neutral currents (FCNCs) with rates that are excluded experimentally [@TheMEG:2016wtm; @Tanabashi:2018oca].
Most phenomenological studies of FDM models simply assume that the entries in the $\lambda_{ij}$ matrix have a specified form, such as Minimal Flavor Violation (MFV) [@DAmbrosio:2002vsn], in order to minimize flavor violating processes, but it is not clear that there is a UV completion of the FDM model where the MFV structure arises naturally. In this paper we will adopt a benchmark of lepton-FDM, where the SM fields participating in the FDM interaction of equation \[eq:FDMgeneral\] are the right-handed ($SU(2)$ singlet) leptons, and we will show that in a (flat) five-dimensional (5D) UV completion[^1] of this model, the rates of flavor violating processes can be naturally small. In fact, as we will show, in the region of parameter space where relic abundance and indirect detection constraints are satisfied, the branching fraction for $\mu\rightarrow e\gamma$, which is the leading flavor violating process, is orders of magnitude below the experimental bounds. We take the DM ($\chi_{i}$) and mediator ($\phi$) fields to be confined to a brane on one end of the extra dimension (the “FDM brane”), and the Higgs field to be confined to a brane on the other end (the “Higgs brane”), while the SM fermion and gauge fields are the zero modes of corresponding 5D bulk fields. In the bulk and on the FDM brane, there exist global $SU(3)$ flavor symmetries for each SM fermion species $\{q_{L}, u_{R}, d_{R}, \ell_{L}, e_{R}\}$, but these symmetries are broken on the Higgs brane. Flavor violation can only arise due to the mismatch between the basis in which the Yukawa couplings and the boundary-localized kinetic terms (BLKTs) [@Georgi:2000ks; @Dvali:2001gm; @Carena:2002me; @delAguila:2003bh; @delAguila:2003kd; @delAguila:2003gv; @delAguila:2006atw] on the Higgs brane are diagonal, and the basis in which the interaction of equation \[eq:FDMgeneral\] on the FDM brane is diagonal. 
Naively, one may think that no such mismatch can arise, since the FDM interaction starts out proportional to $\delta_{ij}$, and must therefore remain so after any unitary basis transformation. The Higgs brane BLKTs, however, cause shifts in the normalization of the lepton kinetic terms in a non-flavor-universal way, and therefore the basis transformation necessary to bring the fields back into canonically normalized form involves rescalings, which are not unitary. By the time this is done and all interactions on the Higgs brane are brought to diagonal form, the FDM interaction is no longer diagonal. However, the size of the off-diagonal entries can be controlled by adjusting the profiles of the leptons along the extra dimension. In particular, by an appropriate choice of bulk masses, the fermion profiles can be made to peak on either brane, and be exponentially suppressed on the other. In the limit where the lepton profiles are sharply peaked on the FDM brane, the effect of all Higgs brane couplings vanishes, and there is no flavor violation. Of course, in that limit the lepton zero modes, which only obtain masses from the Yukawa interactions on the Higgs brane, also become massless. Thus there is a tension between reproducing the correct lepton masses and suppressing lepton flavor violating processes. In the rest of this paper, we will quantitatively study this setup, and show that there are regions in the parameter space where the model can be made consistent with all experimental constraints. The layout of the paper is as follows: In section \[sec:xdim\], we will introduce the details of the 5D model. Then in section \[sec:constraints\], we will study the impact of constraints from relic abundance, direct and indirect DM detection experiments, flavor violating processes and collider searches on the parameter space of the model. We will conclude in section \[sec:conclusions\]. 
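The rescaling argument above can be illustrated with a toy linear-algebra sketch. This is our own construction with hypothetical matrices, not the paper's 5D computation: a flavor-diagonal coupling stays diagonalizable (by rotating the DM flavors) after a flavor-universal kinetic rescaling, but not after a non-universal one.

```python
# Toy sketch (ours): lambda starts proportional to the identity; a kinetic
# rescaling D (from hypothetical Higgs-brane BLKTs) and a unitary rotation W
# (from diagonalizing hypothetical Yukawas) act on the lepton flavor index.
import numpy as np

rng = np.random.default_rng(0)
Y = rng.normal(size=(3, 3))
_, W = np.linalg.eigh(Y + Y.T)          # a generic orthogonal "Yukawa" rotation

lam0 = np.eye(3)                        # FDM coupling proportional to delta_ij
lam_u = lam0 @ (0.7 * np.eye(3)) @ W    # flavor-universal rescaling
lam_n = lam0 @ np.diag([0.9, 0.5, 0.1]) @ W   # non-universal rescaling

def residual_offdiag(lam):
    # Rotating the DM flavors chi by a unitary U maps lam -> U @ lam; some U
    # makes this diagonal iff the polar factor sqrt(lam^T lam) is diagonal.
    _, s, vt = np.linalg.svd(lam)
    h = vt.T @ np.diag(s) @ vt          # polar factor sqrt(lam^T lam)
    return np.abs(h - np.diag(np.diag(h))).max()

assert residual_offdiag(lam_u) < 1e-10  # universal case: diagonality restorable
assert residual_offdiag(lam_n) > 1e-3   # non-universal case: off-diagonal
                                        # (FCNC-inducing) couplings unavoidable
```

In the 5D model the analog of the non-universal rescaling comes from the BLKT-induced wave-function normalizations, which is why the suppression of the off-diagonal entries must instead come from the lepton profiles along the extra dimension.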
Details of the model {#sec:xdim} ==================== [**Generalities:**]{} As described in the introduction, we will adopt a benchmark model of lepton-FDM. Since we wish to consider a 5D UV completion, it is convenient to make use of 4-component Dirac spinor notation. We introduce three flavors of DM $$\chi_{i}=\left( \begin{array}{c} \chi_{L,i}\\ \chi_{R,i} \end{array} \right),$$ and a scalar mediator field $\phi$ with hypercharge $+1$, such that the 4D effective Lagrangian contains an interaction between $\chi_{L,i}$ and the right handed leptons $e_{R,j}$ $$\lambda_{ij}\, \phi\, \bar{\chi}_{L,i}\, e_{R,j} + {\rm h.c.} \label{eq:FDM}$$ This effective interaction arises from an orbifolded flat extra dimension of length $L$, with the FDM brane at $y=0$ and the Higgs brane at $y=L$. As we will see in section \[sec:constraints\], constraints on the resonant production of the Kaluza-Klein (KK) modes of the SM gauge bosons suggest that the KK scale must be $\pi / L \gtrsim 10~$TeV, but we remark that the KK scale can in principle be much higher ($L^{-1}\lesssim M_{\rm Planck,5D}$), which significantly simplifies the cosmological history. We will make
--- address: | Center for Advanced Methods in Biological Image Analysis\ Center for Data Driven Discovery\ California Institute of Technology, Pasadena, CA, USA bibliography: - 'main.bib' title: Geometric Median Shapes --- Shape Analysis, Geometric Median, Median Shape, Average Shape, Segmentation Fusion
--- abstract: 'Extending the notion of symmetry protected topological phases to insulating antiferromagnets (AFs) described in terms of opposite magnetic dipole moments associated with the magnetic N$\acute{{\rm{e}}} $el order, we establish a bosonic counterpart of topological insulators in semiconductors. Making use of the Aharonov-Casher effect, induced by electric field gradients, we propose a magnonic analog of the quantum spin Hall effect (magnonic QSHE) for edge states that carry helical magnons. We show that such up and down magnons form the same Landau levels and perform cyclotron motion with the same frequency but propagate in opposite directions. The insulating AF becomes characterized by a topological ${\mathbb{Z}}_{2}$ number consisting of the Chern integer associated with each helical magnon edge state. Focusing on the topological Hall phase for magnons, we study bulk magnon effects such as magnonic spin, thermal, Nernst, and Ettinghausen effects, as well as the thermomagnetic properties of helical magnon transport both in topologically trivial and nontrivial bulk AFs and establish the magnonic Wiedemann-Franz law. We show that our predictions are within experimental reach with current device and measurement techniques.' author: - 'Kouki Nakata,$^1$ Se Kwon Kim,$^2$ Jelena Klinovaja,$^1$ and Daniel Loss$^1$' bibliography: - 'PumpingRef.bib' title: Magnonic topological insulators in antiferromagnets --- Introduction {#sec:Intro} ============ Since the observation of quasiequilibrium Bose-Einstein condensation [@demokritov] of magnons in an insulating ferromagnet (FM) at room temperature, the last decade has seen remarkable and rapid development of a new branch of magnetism, dubbed magnonics [@MagnonSpintronics; @magnonics; @ReviewMagnon], aimed at utilizing magnons, the quantized version of spin-waves, as a substitute for electrons with the advantage of low dissipation. 
Magnons are chargeless bosonic quasi-particles with a magnetic dipole moment $g \mu_{\rm{B}} {\mathbf{e}}_z$ that can serve as a carrier of information in units of the Bohr magneton $\mu_{\rm{B}}$. In particular, insulating FMs [@spinwave; @onose; @WeesNatPhys; @MagnonHallEffectWees] that possess a macroscopic magnetization \[Fig. \[fig:HelicalAFChiralFM\] (a)\] have been playing an essential role in magnonics. Spin-wave spin current [@spinwave; @WeesNatPhys], thermal Hall effect of magnons [@onose], and Snell’s law [@Snell_Exp; @Snell2magnon] for spin-waves have been experimentally established and just this year the magnon planar Hall effect [@MagnonHallEffectWees] has been observed. A magnetic dipole moving in an electric field acquires a geometric phase by the Aharonov-Casher [@casher; @Mignani; @magnon2; @KKPD; @ACatom; @AC_Vignale; @AC_Vignale2] (AC) effect, which is analogous to the Aharonov-Bohm effect [@bohm; @LossPersistent; @LossPersistent2] of electrically charged particles in magnetic fields, and the AC effect in magnetic systems has also been experimentally confirmed [@ACspinwave]. ![(Color online) Left: Schematic representation of spin excitations in (a) a FM with the uniform ground state magnetization and (b) an AF with classical magnetic N$\acute{{\rm{e}}} $el order. Right: Schematic representation of edge magnon states in a two-dimensional topological (a) FM and (b) AF. (a) Chiral edge magnon state where magnons with a magnetic dipole moment $ g\mu_{\rm{B}} {\bf e}_z$ propagate along the edge of a finite sample in a given direction. (b) Helical edge magnon state where up and down magnons ($\sigma =\pm 1$) with opposite magnetic dipole moments $\sigma g\mu_{\rm{B}} {\bf e}_z$ propagate along the edge in opposite directions. The AF thus forms a bosonic analog of a TI characterized by two edge modes with opposite chiralities and can be identified with two independent copies (with opposite magnetic moments) of single-layer FMs shown in (a). 
[]{data-label="fig:HelicalAFChiralFM"}](HelicalAFChiralFM.eps){width="7.5cm"} Under a strong magnetic field, two-dimensional electronic systems can exhibit the integer quantum Hall effect [@QHEcharge] (QHE), which is characterized by chiral edge modes. Thouless, Kohmoto, Nightingale, and den Nijs [@TKNN; @Kohmoto] (TKNN) described the QHE [@AFQHE; @HaldaneQHEnoB; @VolovikQHE; @JK_IFQAHE] in terms of a topological invariant, known as the TKNN integer, associated with bulk wave functions [@HalperinEdge; @BulkEdgeHatsugai] in momentum space. This introduced the notion of topological phases of matter, which has been attracting much attention over the last decade. In particular, in 2005, Kane and Mele [@Z2topo; @QSHE2005] showed that graphene in the absence of a magnetic field exhibits a quantum spin Hall effect (QSHE) [@QSHE2005; @QSHE2006; @TIreview; @TIreview2], which is characterized by a pair of gapless spin-polarized edge states. These helical edge states are protected from backscattering by time-reversal symmetry (TRS), forming in this sense topologically protected Kramers pairs. This can be seen to be the first example of a symmetry protected topological (SPT) phase [@SPTreviewXGWen; @SPTreviewSenthil; @SenthilLevin] and it is now classified as a topological insulator (TI) [@Z2topo; @Z2topoHaldane; @Z2SM; @TIreview; @TIreview2], which is characterized by a ${\mathbb{Z}}_{2}$ number, as the TKNN integer [@TKNN; @Kohmoto] associated with each edge state. In this paper we extend the notion of topological phases to insulating antiferromagnets (AFs) in the N$\acute{{\rm{e}}} $el ordered phases which do not possess a macroscopic magnetization, see Fig. \[fig:HelicalAFChiralFM\] (b). 
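For concreteness, the extraction of a TKNN/Chern integer from bulk wave functions in momentum space can be sketched numerically with the standard Fukui-Hatsugai link-variable method. The two-band Qi-Wu-Zhang Hamiltonian below is a generic toy example of ours, not the magnon model of this paper:

```python
# Fukui-Hatsugai lattice computation of a Chern (TKNN) integer for the
# hypothetical two-band Qi-Wu-Zhang model, used only to illustrate how a
# topological invariant is obtained from bulk wave functions in momentum space.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def lower_band(kx, ky, m):
    h = np.sin(kx) * sx + np.sin(ky) * sy + (m + np.cos(kx) + np.cos(ky)) * sz
    _, vecs = np.linalg.eigh(h)
    return vecs[:, 0]                       # lower-band Bloch eigenvector

def chern(m, N=24):
    ks = 2 * np.pi * np.arange(N) / N       # discretized Brillouin zone
    u = np.array([[lower_band(kx, ky, m) for ky in ks] for kx in ks])
    link = lambda a, b: np.vdot(a, b) / abs(np.vdot(a, b))  # U(1) link variable
    F = 0.0
    for i in range(N):
        for j in range(N):                  # Berry flux through each plaquette
            u1, u2 = u[i, j], u[(i + 1) % N, j]
            u3, u4 = u[(i + 1) % N, (j + 1) % N], u[i, (j + 1) % N]
            F += np.angle(link(u1, u2) * link(u2, u3) * link(u3, u4) * link(u4, u1))
    return round(F / (2 * np.pi))           # integer for a gapped band

assert abs(chern(1.0)) == 1   # topological phase: |C| = 1 for 0 < |m| < 2
assert chern(3.0) == 0        # trivial phase
```

The plaquette product is gauge invariant, so the arbitrary phases returned by the eigensolver drop out; the sum of fluxes over the Brillouin zone is the Chern integer.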
The component of the total spin along the N$\acute{{\rm{e}}} $el vector is assumed to be conserved, and it is this conservation law which plays the role of the TRS (which is broken in the ordered AF) that protects the topological phase and helical edge states against nonmagnetic impurities and the details [^1] of the surface [@SurfaceMode]. In particular, using magnons [@AndersonAF; @RKuboAF; @AFspintronicsReview2; @AFspintronicsReview] we thus establish a bosonic counterpart of the TI and propose a magnonic QSHE resulting from the N$\acute{{\rm{e}}} $el order in AFs. In Ref. \[\], motivated by the above-mentioned remarkable progress in recent experiments, we [@magnon2; @magnonWF; @ReviewMagnon; @KevinHallEffect] have proposed a way to electromagnetically realize the ‘quantum’ Hall effect of magnons in FMs, in the sense that the magnon Hall conductances are characterized by a Chern number [@TKNN; @Kohmoto] in an almost flat magnon band, which hosts a chiral edge magnon state, see Fig. \[fig:HelicalAFChiralFM\] (a). [^2] By providing a topological description [@NiuBerry; @Kohmoto; @TKNN] of the classical magnon Hall effect induced by the AC effect, which was proposed in Ref. \[\], we developed it further into the magnonic ‘quantum’ Hall effect and, appropriately defining the thermal conductance for bosons, we found that the magnon Hall conductances in such topological FMs obey a Wiedemann-Franz [@WFgermany] (WF) law for magnon transport [@magnonWF; @ReviewMagnon]. In this paper, motivated by the recent experimental [@SekiAF] demonstration of thermal generation of spin currents in AFs using the spin Seebeck effect [@uchidainsulator; @ishe; @ohnuma; @adachi; @adachiphonon; @xiao2010; @OnsagerExperiment; @Peltier] and by the report [@MagnonNernstAF; @MagnonNernstAF2; @MagnonNernstExp] of the magnonic spin Nernst effect in AFs, we develop Ref. 
\[\] further into the AF regime [@AFspintronicsReview2; @AFspintronicsReview; @RKuboAF; @AndersonAF; @Kevin2; @DLquantumAF] and propose a magnonic analog of the QSHE [@QSHE2005; @QSHE2006; @Z2topo; @Z2topoHaldane; @Z2SM; @TIreview; @TIreview2] for edge states that carry helical edge magnons \[Fig. \[fig:
In the past few years, the statistical mechanics of disordered systems has been frequently applied to understand the macroscopic behavior of many technologically useful problems, such as optimization (e.g. graph partitioning and traveling salesman) [@mpv], learning in neural networks [@hkp], error correcting codes [@ecode] and the $K$-satisfiability problem [@mz]. Important phenomena studied by this approach are the phase transitions in such systems, e.g. the glassy transition in optimization when the noise temperature of the simulated annealing process is reduced, the storage capacity in neural networks and the entropic transition in the $K$-satisfiability problem. Understanding these transitions is relevant to the design and algorithmic issues in their applications. In turn, since their behavior may be distinct from that of conventional disordered systems, the perspectives of statistical mechanics are widened. In this paper we consider the phase transitions in noise reduction (NR) techniques in signal processing. They have been used in a number of applications such as adaptive noise cancelation, echo cancelation, adaptive beamforming and more recently, blind separation of signals [@haykin; @blind]. While the formulation of the problem depends on the context, the following model is typical of the general problem. There are $N$ detectors picking up signals mixed with noises from $p$ noise sources. The input from detector $j$ is $x_j\!=\!a_jS\!+\!\sum_\mu\xi^\mu_j n_\mu$, where $S$ is the signal, $n_\mu$ for $\mu\!=\!1,..,p$ is the noise from the $\mu$th noise source, and $n_\mu\!\ll\!S$. $a_j$ and $\xi^\mu_j$ are the contributions of the signal and the $\mu$th noise source to detector $j$. NR involves finding a linear combination of the inputs so that the noises are minimized while the signal is kept detectable. 
Thus, we search for an $N$ dimensional vector $J_j$ such that the quantities $\sum_j \xi^\mu_j J_j$ are minimized, while $\sum_j a_j J_j$ remains a nonzero constant. To consider solutions with comparable power, we add the constraint $\sum_j J_j^2 = N$. While there exist adaptive algorithms for this objective [@haykin], here we are interested in whether the noise can be intrinsically kept below a tolerance level after the steady state is reached, provided that a converging algorithm is available. When both $p$ and $N$ are large, we use a formulation with normalized parameters. Let $h^\mu$ be the local fields for the $\mu$th source defined by $h^\mu \equiv \sum_j \xi^\mu_j J_j/\sqrt N$. Learning involves finding a vector $J_j$ such that the following conditions are fulfilled. (a) $|h^\mu|<k$ for all $\mu\!=\!1,..,p$, where $k$ is the tolerance bound. We assume that the vectors $\xi_j^\mu$ are randomly distributed, with $\left\langle\!\left\langle\xi^\mu_j\right\rangle\!\right\rangle\!=\!0$, and $\left\langle\!\left\langle\xi^\mu_i\xi^\nu_j\right\rangle\!\right\rangle \!=\!\delta_{ij}\delta_{\mu\nu}$. Hence, they introduce symmetric constraints to the solution space. (b) The normalization condition $\sum_j J_j^2\!=\!N$. (c) $|\sum_j a_j J_j/\sqrt N|\!=\!1$; however, this condition is easily satisfied: if there exists a solution satisfying (a) and (b) but yields $|\sum_j a_j J_j/\sqrt N|$ different from 1, it is possible to make an adjustment of each component $J_j$ proportional to ${\rm sgn}a_j/\sqrt N$. Since the noise components $\xi_j^\mu$ are uncorrelated with $a_j$, the local fields make a corresponding adjustment of the order $1/\sqrt N$, which vanishes in the large $N$ limit. The space of the vectors $J_j$ satisfying the constraints (a) and (b) is referred to as the [*version space*]{}. This formulation of the problem is very similar to that of pattern storage in the perceptron with continuous couplings [@Ga]. 
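As a quick numerical illustration of the version space defined by conditions (a) and (b) (our sketch, not part of the paper): for $J$ uniform on the sphere $|J|^2=N$ and random patterns, each local field $h^\mu$ is approximately standard normal, so for $p \ll N$ the fraction of the sphere satisfying all constraints is close to ${\rm erf}(k/\sqrt{2})^p$.

```python
# Monte Carlo estimate (ours) of the version-space volume fraction for a
# small number of constraints p << N; parameter values are illustrative.
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)
N, p, k, S = 200, 5, 1.0, 20000          # dimension, constraints, bound, samples

xi = rng.normal(size=(p, N))
xi *= sqrt(N) / np.linalg.norm(xi, axis=1, keepdims=True)   # <(xi)^2> = 1

J = rng.normal(size=(S, N))
J *= sqrt(N) / np.linalg.norm(J, axis=1, keepdims=True)     # condition (b)

h = J @ xi.T / sqrt(N)                                      # local fields h^mu
frac = np.mean(np.all(np.abs(h) < k, axis=1))               # condition (a)

assert abs(frac - erf(k / sqrt(2))**p) < 0.03
```

For growing $p$ this fraction shrinks exponentially, which is why the interesting transitions occur at $p$ of order $N$, as discussed below.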
However, in the perceptron the constraints (a) are $h^\mu\!>\!k$, while there is an extra inversion symmetry in the NR model: the version space is invariant under $\vec J\to -\vec J$. We can also consider the NR model as a simplified version of the perceptron with multi-state output, in which the values of local fields for each pattern are bounded in one of a few possible intervals. In the present model, all local fields are bounded in the symmetric interval $[-k,k]$. This symmetry will lead to very different phase behavior, although the model shares with other perceptron variants the feature that the version space is not connected or not convex, e.g. when errors are allowed [@ET], couplings are discrete [@KM] or pruned [@KGE], or transfer functions are non-monotonic [@BE]. When the number of noise sources increases, the version space is reduced and undergoes a sequence of phase transitions, causing it to disappear eventually. These transitions are observed by monitoring the evolution of the overlap order parameter $q$, which is the typical overlap between two vectors in the version space. For few noise sources, the version space is extended and $q=0$. When the number of noise sources $p$ increases, the number of constraints increases and the version space shrinks. One possible scenario is that each constraint reduces the volume of the version space, and there is a continuous transition to a phase with a nonzero value of $q$. Alternatively, each constraint introduces a volume reduction resembling a percolation process, in which the version space remains extended until a sufficient number of constraints have been introduced, and the version space is suddenly reduced to a localized cluster. This may result in a discontinuous transition from zero to nonzero $q$. We expect that the transition takes place when $p$ is of the order $N$, and we define $\alpha\equiv p/N$ as the noise population. 
When $\alpha$ increases further, $q$ reaches its maximum value of 1 at $\alpha = \alpha_c$, which is called the critical population. The purpose of this paper is to study the nature and conditions of occurrence of these transitions. We consider the entropy ${\cal S}$, which is the logarithm of the volume of the version space and is self-averaging. Using the replica method, ${\cal S}=\lim_{n\to0}(\left\langle \!\!\!\left\langle{\cal V}^n\right \rangle\!\!\!\right\rangle\!-\!1)/n$, and we have to calculate $\left\langle \!\!\!\left\langle{\cal V}^n\right\rangle\!\!\!\right\rangle$ given by $$\left\langle \!\!\!\left\langle\prod_{a=1}^n \int\prod_{j=1}^N dJ^a_j \delta(\sum_{j=1}^N J^{a2}_j\!-\! N)\prod_{\mu=1}^p \theta (k^2\!-\!{h^\mu_a}^2)\right\rangle\!\!\!\right\rangle, \label{pr:1}$$ with $h^\mu_a\!\equiv\!\sum_j J^a_j\xi^\mu_j/\sqrt N$. Averaging over the input patterns, and using the Gardner method [@Ga], we can rewrite (\[pr:1\]) as $\left\langle \!\!\!\left\langle{\cal V}^n\right\rangle\!\!\!\right\rangle=\int \prod_{a<b=1}^n dq_{ab}\exp(Ng)$. The overlaps between the coupling vectors of distinct replicas $a$ and $b$: $q_{ab}\equiv{\sum_{j=1}^N }J^a_jJ^b_j/N$, are determined from the stationarity conditions of $g$. Due to the inversion symmetry of the constraints, it always has the all-zero solution ($q_{ab}\!=\!0,\forall a\!<\!b$), but it becomes locally unstable at a noise population $$\alpha_{\rm AT}(k) ={\pi\over2}{{\rm erf}({k\over\sqrt{2}})^2\over k^2 \exp(-k^2)}\ . \label{pr:4}$$ For $\alpha\!>\!\alpha_{\rm AT}$, the simplest solution assumes $q_{ab}\!=\!q\!>\!0$. This replica symmetric solution (RS), however, is not stable against replica symmetry breaking (RSB) fluctuations for any $q\!>\!0$. Hence, (\[pr:4\]) is an Almeida-Thouless line [@mpv], and RSB solutions in the Parisi scheme [@mpv] have to be considered. 
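The Almeida-Thouless line (\[pr:4\]) is easy to evaluate; a minimal sketch (ours) checks that $\alpha_{\rm AT}\to 1$ as $k\to 0$ and that it grows rapidly with the tolerance bound:

```python
from math import erf, exp, pi, sqrt

def alpha_AT(k):
    """Instability line of the all-zero replica solution, Eq. (pr:4)."""
    return (pi / 2) * erf(k / sqrt(2))**2 / (k**2 * exp(-k**2))

# Small-k limit: erf(k/sqrt(2))^2 ~ 2k^2/pi, so alpha_AT -> 1, i.e. one
# constraint per degree of freedom; for larger k the line rises steeply.
assert abs(alpha_AT(1e-4) - 1.0) < 1e-3
assert alpha_AT(2.0) > alpha_AT(1.0) > 1.0
```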
The transition of $q$ from zero to nonzero is absent in the problem of pattern storage in the perceptron, where $q$ increases smoothly from zero when the storage level $\alpha$ increases [@Ga]. Rather, the situation is reminiscent of the spin glass transition in the Sherrington-Kirkpatrick (SK) model, which does possess an inversion symmetry [@mpv]. The phase
--- abstract: 'In this article, we study optimal control problems of spiking neurons whose dynamics are described by a phase model. We design minimum-power current stimuli (controls) that lead to targeted spiking times of neurons, where the cases with unbounded and bounded control amplitude are considered. We show that theoretically the spiking period of a neuron, modeled by phase dynamics, can be arbitrarily altered by a smooth control. However, if the control amplitude is bounded, the range of possible spiking times is constrained and determined by the bound, and feasible spiking times are optimally achieved by piecewise continuous controls. We present analytic expressions of these minimum-power stimuli for spiking neurons and illustrate the optimal solutions with numerical simulations.' author: - Isuru Dasanayake - 'Jr-Shin Li' bibliography: - 'SingleNeuron\_PRE.bib' nocite: '[@*]' title: 'Optimal Design of Minimum-Power Stimuli for Spiking Neurons' --- Introduction {#sec:intro} ============ Control of neurons and hence the nervous system by external current stimuli (controls) has received increased scientific attention in recent years for its wide range of applications from deep brain stimulation to oscillatory neurocomputers [@uhlhaas06; @osipov07; @Izhikevich99]. Conventionally, neuron oscillators are represented by phase-reduced models, which form a standard nonlinear system [@Brown04; @Winfree01]. Intensive studies using phase models have been carried out, for example, on the investigation of the patterns of synchrony that result from the type and architecture of coupling [@Ashwin92; @Taylor98] and on the response of large groups of oscillators to external stimuli [@Moehlis06; @Tass89], where the inputs to the neuron systems were initially defined and the dynamics of neural populations were analyzed in detail. Recently, control theoretic approaches have been employed to design external stimuli that drive neurons to behave in a desired way. 
For example, a multilinear feedback control technique has been used to control the individual phase relation between coupled oscillators [@kano10]; a nonlinear feedback approach has been employed to engineer complex dynamic structures and synthesize delicate synchronization features of nonlinear systems [@Kiss07]; and our recent work has illustrated controllability of a network of neurons with different natural oscillation frequencies adopting tools from geometric control theory [@Li_NOLCOS10]. There has been an increase in the demand for controlling not only the collective behavior of a network of oscillators but also the behavior of each individual oscillator. It is feasible to change the spiking periods of oscillators or tune the individual phase relationship between coupled oscillators by the use of electric stimuli [@Schiff94; @kano10]. Minimum-power stimuli that elicit spikes of a neuron at specified times close to the natural spiking time were analyzed [@Moehlis06]. Optimal waveforms for the entrainment of weakly forced oscillators that maximize the locking range have been calculated, where first and second harmonics were used to approximate the phase response curve [@kiss10]. These optimal controls were found mainly based on the calculus of variations, which restricts the optimal solutions to the class of smooth controls, and the bound of the control amplitude was not taken into account. In this paper, we apply Pontryagin’s maximum principle [@Pontryagin62; @Stefanatos10] to derive minimum-power controls that spike a neuron at desired time instants. We consider both cases when the available control amplitude is unbounded and bounded. The latter is of practical importance due to physical limitations of experimental equipment and the safety margin for neurons, e.g., the requirement of mild brain stimulation in neurological treatments for Parkinson’s disease and epilepsy. This paper is organized as follows. 
In Section \[sec:phase\_model\], we introduce the phase model for spiking neurons and formulate the related optimal control problem. In Section \[sec:minpower\_control\], we derive minimum-power controls associated with specified spiking times in the absence and presence of control amplitude constraints, in which various phase models including sinusoidal PRC, SNIPER PRC, and theta neuron models are considered. In addition, we present examples and simulations to demonstrate the resulting optimal control strategies. Optimal Control of Spiking Neurons {#sec:phase_model} ================================== A periodically spiking or firing neuron can be considered as a periodic oscillator governed by the nonlinear dynamical equation of the form $$\label{eq:phasemodel} \frac{d\theta}{dt}=f(\theta)+Z(\theta)I(t),$$ where $\theta$ is the phase of the oscillation, $f(\theta)$ and $Z(\theta)$ are real-valued functions giving the neuron’s baseline dynamics and its phase response, respectively, and $I(t)$ is an external current stimulus [@Brown04]. The nonlinear dynamical system described in \[eq:phasemodel\] is referred to as the phase model for the neuron. The assumptions that $Z(\theta)$ vanishes only on isolated points and that $f\left(\theta\right)>0$ are made so that a full revolution of the phase is possible. By convention, neuron spikes occur when $\theta=2n\pi$, where $n\in\mathbb{N}$. In the absence of any input $I(t)$, the neuron spikes periodically at its natural frequency, while the periodicity can be altered in a desired manner by an appropriate choice of $I(t)$. In this article, we study optimal design of neural inputs that lead to the spiking of neurons at a specified time $T$ after spiking at time $t=0$. 
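As a minimal numerical sketch (ours, with illustrative parameter values), the phase model can be Euler-integrated to locate spike times; here $f(\theta)=\omega$ and $Z(\theta)=z_d\sin\theta$, the sinusoidal PRC used later in the paper:

```python
import math

def first_spike_time(omega, zd, I, dt=1e-4):
    """Euler-integrate dtheta/dt = omega + zd*sin(theta)*I(t) from theta(0)=0
    until the first spike at theta = 2*pi."""
    theta, t = 0.0, 0.0
    while theta < 2 * math.pi:
        theta += dt * (omega + zd * math.sin(theta) * I(t))
        t += dt
    return t

omega, zd = 1.0, 1.0
T_free = first_spike_time(omega, zd, lambda t: 0.0)   # natural period 2*pi/omega
T_drive = first_spike_time(omega, zd, lambda t: 0.5)  # constant stimulus

assert abs(T_free - 2 * math.pi / omega) < 1e-2
# For a constant input I with |zd*I| < omega the exact period is
# 2*pi / sqrt(omega**2 - (zd*I)**2) > 2*pi/omega, i.e. the spike is delayed.
assert T_drive > T_free
```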
In particular, we find the stimulus that fires a neuron with minimum power, which is formulated as the following optimal control problem, $$\begin{aligned} \label{eq:opt_con_pro} \min_{I(t)} \quad & \int_0^T I(t)^2\,dt\\ {\rm s.t.} \quad & \dot{\theta}=f(\theta)+Z(\theta)I(t), \nonumber\\ &\theta(0)=0, \quad \theta(T)=2\pi \nonumber\\ &|I(t)|\leq M, \ \ \forall\ t, \nonumber\end{aligned}$$ where $M>0$ is the amplitude bound of the current stimulus $I(t)$. Note that instantaneous or arbitrarily delayed spiking of a neuron is possible if $I(t)$ is unbounded, i.e., $M=\infty$; however, the range of feasible spiking periods of a neuron described as in \[eq:phasemodel\] is restricted with a finite $M$. We consider both unbounded and bounded cases. Minimum-Power Stimulus for Specified Firing Time {#sec:minpower_control} ================================================ We consider the minimum-power optimal control problem of spiking neurons as formulated in \[eq:opt\_con\_pro\] for various phase models including sinusoidal PRC, SNIPER PRC, and theta neuron. Sinusoidal PRC Phase Model {#sec:sine_prc} -------------------------- Consider the sinusoidal PRC model, $$\label{eq:sin_model} \dot{\theta}=\omega+z_d\sin\theta\cdot I(t),$$ where $\omega$ is the natural oscillation frequency of the neuron and $z_d$ is a model-dependent constant. The neuron described by this phase model spikes periodically with the period $T=2\pi/\omega$ in the absence of any external input, i.e., $I(t)=0$. ### Spiking Neurons with Unbounded Control {#sec:unbounded_control_sine} The optimal current profile can be derived by Pontryagin’s Maximum Principle [@Pontryagin62]. Given the optimal control problem as in \[eq:opt\_con\_pro\], we form the control Hamiltonian $$\label{eq:hamiltonian} H=I^2+\lambda(\omega+z_d\sin\theta\cdot I),$$ where $\lambda$ is the Lagrange multiplier. 
The necessary optimality conditions according to the Maximum Principle give $$\begin{aligned} \label{eq:lambda_dot} \dot{\lambda}=-\frac{\partial H}{\partial \theta}=-\lambda z_d I\cos\theta,\end{aligned}$$ and $\frac{\partial H}{\partial I}=2I+\lambda z_d\sin\theta=0$. Hence, the optimal current $I$ satisfies $$\begin{aligned} \label{eq:I} I=-\frac{1}{2}\lambda z_d\sin\theta.\end{aligned}$$ Substituting \[eq:I\] into \[eq:sin\_model\] and \[eq:lambda\_dot\], the optimal control problem is then transformed to a boundary value problem, which characterizes the optimal trajectories of $\theta(t)$ and $\lambda(t)$, $$\begin{aligned} \label{eq:theta_p2} \dot{\theta} &= \omega-\frac{z_d^2\lambda}{2} \sin^2 \theta,\\ \label{eq:lambda_p2} \dot{\lambda} &= \frac{z_d^2\lambda^2}{2} \sin\theta\cos\theta,\end{aligned}$$ with boundary conditions $\theta(0)=0$ and $\theta(T)=2\pi$ while $\lambda(0)$ and $\lambda(T)$ are unspecified. Additionally, since the Hamiltonian is not explicitly dependent on time, the optimal triple $(\lambda,\theta, I)$ satisfies $H(\lambda,\theta,I)=c$, $\forall\, 0\leq
--- abstract: 'We consider network coding for a noiseless broadcast channel where each receiver demands a subset of messages available at the transmitter and is equipped with *noisy side information* in the form of an erroneous version of the message symbols it demands. We view the message symbols as elements from a finite field and assume that the number of symbol errors in the noisy side information is upper bounded by a known constant. This communication problem, which we refer to as *broadcasting with noisy side information (BNSI)*, has applications in the re-transmission phase of downlink networks. We derive a necessary and sufficient condition for a linear coding scheme to satisfy the demands of all the receivers in a given BNSI network, and show that syndrome decoding can be used at the receivers to decode the demanded messages from the received codeword and the available noisy side information. We represent BNSI problems as bipartite graphs, and using this representation, classify the family of problems where linear coding provides bandwidth savings compared to uncoded transmission. We provide a simple algorithm to determine if a given BNSI network belongs to this family of problems, i.e., to identify if linear coding provides an advantage over uncoded transmission for the given BNSI problem. We provide lower bounds and upper bounds on the optimal codelength and constructions of linear coding schemes based on linear error correcting codes. For any given BNSI problem, we construct an equivalent index coding problem. A linear code is a valid scheme for a BNSI problem if and only if it is valid for the constructed index coding problem.' 
title: Linear Codes for Broadcasting with Noisy Side Information --- Broadcast channel, index coding, linear error correcting codes, network coding, noisy side information, syndrome decoding Introduction {#Intro} ============ We consider the problem of broadcasting $n$ message symbols $x_1,\dots,x_n$ from a finite field ${\mathds{F}}_q$ to a set of $m$ users $u_1,\dots,u_m$ through a noiseless broadcast channel. The $i^{\text{th}}$ receiver $u_i$ requests the message vector ${{\bf{x}}}_{{\mathcal{X}}_i} = (x_j, \, j \in {\mathcal{X}}_i)$, where ${\mathcal{X}}_i \subseteq \{1,\dots,n\}$ denotes the demands of $u_i$. We further assume that each receiver knows a noisy/erroneous version of *its own demanded message* as side information. In particular, we assume that the side information at $u_i$ is an ${\mathds{F}}_q$-vector ${{\bf{x}}}_{{\mathcal{X}}_i}^{e}$ such that the demanded message vector ${{\bf{x}}}_{{\mathcal{X}}_i}$ differs from the side information ${{\bf{x}}}_{{\mathcal{X}}_i}^{e}$ in at most $\delta_s$ coordinates, where the integer $\delta_s$ determines the quality of side information. We assume that the transmitter does not know the exact realizations of the side information vectors available at the receivers. The objective of code design is to broadcast a codeword of as small a length as possible such that every receiver can retrieve its demanded message vector using the transmitted codeword and the available noisy side information. We refer to this communication problem as *broadcasting with noisy side information* (BNSI). Wireless broadcasting in downlink communication channels has gained considerable attention and has several important applications, such as cellular and satellite communication, digital video broadcasting, and wireless sensor networks. The BNSI problem considered in this paper models the re-transmission phase of downlink communication channels at the network layer. 
Suppose during the initial broadcast phase each receiver of a downlink network decodes its demanded message packet erroneously (such as when the wireless channel experiences outage). Instead of discarding this decoded message packet, the erroneous symbols from this packet can be used as noisy side information for the re-transmission phase. If the number of symbol errors $\delta_s$ in the erroneously decoded packets is not large, we might be able to reduce the number of channel uses required for the re-transmission phase by intelligently coding the message symbols at the network layer. Consider the example scenario shown in Fig. \[fig:subim1\]. The transmitter is required to broadcast 4 message symbols $x_1,x_2,x_3,x_4$ to 3 users. Each user requires a subset of the message symbols, for example, User 1, User 2 and User 3 demand $(x_1,x_2,x_3)$, $(x_2,x_3,x_4)$ and $(x_1,x_3,x_4)$, respectively. Suppose during the initial transmission the broadcast channel is in outage, as experienced during temporary weather conditions in satellite-to-terrestrial communications. As a result, at each user, one of the message symbols in the decoded packet is in error. Based on an error detection mechanism (such as cyclic redundancy check codes) all the users request a re-transmission. We assume that the users are not aware of the positions of the symbol errors. [0.5]{} ![An example of broadcast channel with noisy side information.[]{data-label="fig:image0"}](block_diagram "fig:"){width="3.25in"} [0.5]{} ![An example of broadcast channel with noisy side information.[]{data-label="fig:image0"}](problem_statement "fig:"){width="3.25in"} The transmitter attempts a retransmission when the channel conditions improve. Instead of retransmitting each message packet individually, which will require $4$ symbols to be transmitted, the transmitter will broadcast the coded sequence $(x_1+x_4,x_2+x_4,x_3+x_4)$ consisting of $3$ symbols, as shown in Fig. \[fig:subim2\]. 
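To see concretely why these three coded symbols suffice, here is a toy sketch of the decoding (ours; the field size $q=5$ and the majority-vote shortcut are illustrative assumptions, and the paper's general method is the syndrome decoding of Section \[sec4\]): since $c_i = x_i + x_4$ for $i=1,2,3$, subtracting the noisy side-information symbols from the received ones gives each user three estimates of $x_4$ (counting its own noisy copy of $x_4$, if demanded), at most $\delta_s = 1$ of which is corrupted.

```python
from collections import Counter

q = 5  # symbols from F_q; the prime q = 5 is our arbitrary illustrative choice

def encode(x):
    x1, x2, x3, x4 = x
    return [(x1 + x4) % q, (x2 + x4) % q, (x3 + x4) % q]

def decode(c, demand, noisy):
    """demand: 1-based indices of demanded symbols; noisy: dict i -> noisy x_i,
    wrong in at most one position (delta_s = 1)."""
    # c[i-1] - x_i = x4 for i = 1,2,3, so each demanded symbol yields an
    # estimate of x4; at most one estimate is corrupted, so majority wins.
    est = [(noisy[i] if i == 4 else (c[i - 1] - noisy[i])) % q for i in demand]
    x4 = Counter(est).most_common(1)[0][0]
    return {i: (x4 if i == 4 else (c[i - 1] - x4) % q) for i in demand}

x = [1, 2, 3, 4]
c = encode(x)                                   # 3 transmitted symbols, not 4
# User 1 demands (x1,x2,x3); its stored copy of x2 is corrupted (2 -> 0).
assert decode(c, [1, 2, 3], {1: 1, 2: 0, 3: 3}) == {1: 1, 2: 2, 3: 3}
# User 2 demands (x2,x3,x4); its stored copy of x4 is corrupted (4 -> 0).
assert decode(c, [2, 3, 4], {2: 2, 3: 3, 4: 0}) == {2: 2, 3: 3, 4: 4}
```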
Upon receiving this coded sequence it can be shown that, using an appropriate decoding algorithm (Examples \[exmp2\] and \[exmp3\] in Sections \[sec3\] and \[sec4\], respectively), each user can correctly retrieve its own demanded message symbols using the erroneous version that it already has. By using a carefully designed code the transmitter is able to reduce the number of channel uses in the retransmission phase. Related Work ------------ *Index coding* [@YBJK_IEEE_IT_11] is a related code design problem that is concerned with the transmission of a set of information symbols to finitely many receivers in a noiseless broadcast channel where each receiver demands a subset of information symbols from the transmitter and already knows a *different* subset of symbols as side information. The demand subset and the side information subset at each receiver in index coding are disjoint and the side information is assumed to be noiseless. Several results on index coding are available based on algebraic and graph theoretic formulations [@linear_prog_10; @MCJ_IT_14; @VaR_GC_16; @MBBS_2016; @MaK_ISIT_17; @tehrani2012bipartite; @SDL_ISIT_13; @LOCKF_2016; @MAZ_IEEE_IT_13; @agar_maz_2016]. The problem of index coding under noisy broadcast channel conditions has also been studied. Dau et al. [@Dau_IEEE_J_IT_13] analyzed linear index codes for error-prone broadcast channels. Several works, for example [@karat_rajan_2017; @samuel_rajan_2017], provide constructions of error correcting index codes. Kim and No [@Kim_No_2017] consider errors both during broadcast channel transmission and in receiver side information. Index coding achieves bandwidth savings by requiring each receiver to know a subset of messages that it does not demand from the source. This side information might be gathered by overhearing previous transmissions from the source to other users in the network.
In contrast, the coding scenario considered in this paper does not require a user to overhear and store data packets that it does not demand (which may incur additional storage and computational effort at the receivers), but achieves bandwidth savings by exploiting the erroneous symbols already available from prior failed transmissions to the same receiver. To the best of our knowledge, no code design criteria, analysis of code length or code constructions are available for the class of broadcast channels with noisy side information considered in this paper. Contributions and Organization ------------------------------ We view broadcasting with noisy side information as a coding theoretic problem at the network layer. We introduce the system model and provide relevant definitions in Section \[sec2\]. We consider linear coding schemes for the BNSI problem and provide a necessary and sufficient condition for a linear code to meet the demands of all the receivers in the broadcast channel (Theorem \[thm1\] and Corollary \[corr1\], Section \[sec3\]). Given a linear coding scheme for a BNSI problem, we show how each receiver can decode its demanded message from the transmitted codeword and its noisy side information using the syndrome decoding technique (Section \[sec4\]). We then provide an exact characterization of the family of BNSI problems where the number of channel uses required with linear coding is strictly less than that required by uncoded transmission (Theorem \[thm2\], Section \[sec5b\]). We provide a simple algorithm to determine if a given BNSI network belongs to this family of problems using a representation of the problem in terms of a bipartite graph (Algorithm 2, Section \[sec5c\]). Next we provide lower bounds on the optimal codelength (Section \[lb\]). A simple construction of an encoder matrix based on a linear error correcting code is described (Section \[line
--- abstract: 'Hybrid-kinetic numerical simulations of firehose and mirror instabilities in a collisionless plasma are performed in which pressure anisotropy is driven as the magnetic field is changed by a persistent linear shear $S$. For a decreasing field, it is found that mostly oblique firehose fluctuations grow at ion Larmor scales and saturate with energies $\propto$$S^{1/2}$; the pressure anisotropy is pinned at the stability threshold by particle scattering off microscale fluctuations. In contrast, nonlinear mirror fluctuations are large compared to the ion Larmor scale and grow secularly in time; marginality is maintained by an increasing population of resonant particles trapped in magnetic mirrors. After one shear time, saturated order-unity magnetic mirrors are formed and particles scatter off their sharp edges. Both instabilities drive sub-ion-Larmor–scale fluctuations, which appear to be kinetic-Alfvén-wave turbulence. Our results impact theories of momentum and heat transport in astrophysical and space plasmas, in which the stretching of a magnetic field by shear is a generic process.' author: - 'Matthew W. Kunz' - 'Alexander A. Schekochihin' - 'James M. Stone' bibliography: - 'KSS14.bib' title: Firehose and Mirror Instabilities in a Collisionless Shearing Plasma --- #### Introduction. Describing the large-scale behavior of weakly collisional magnetized plasmas, such as the solar wind, hot accretion flows, or the intracluster medium (ICM) of galaxy clusters, necessitates a detailed understanding of the kinetic-scale physics governing the dynamics of magnetic fields and the transport of momentum and heat. This physics is complicated by the fact that such plasmas are expected to exhibit particle distribution functions with unequal thermal pressures in the directions parallel ($||$) and perpendicular ($\perp$) to the local magnetic field [@msrmpn82; @sc06; @shqs06]. 
This pressure anisotropy can trigger fast micro-scale instabilities [@rosenbluth56; @ckw58; @parker58; @vs58; @barnes66; @hasegawa69], whose growth and saturation impact the structure of the magnetic field and the effective viscosity of the plasma. While solar-wind observations suggest that these instabilities are effective at regulating the pressure anisotropy to marginally stable levels [@gsss01; @klg02; @htkl06; @matteini07; @bkhqss09; @mhglvn13], it is not known how this is achieved. We address this question with nonlinear numerical simulations of the firehose and mirror instabilities. We leverage the universal physics at play in turbulent $\beta \gg 1$ astrophysical plasmas such as the ICM [@sckhs05; @kscbs11] and Galactic accretion flows [@qdh02; @rqss12]—magnetic field being changed by velocity shear, coupled with adiabatic invariance—to drive self-consistently a pressure anisotropy beyond the instability thresholds. Our setup represents a local patch of a turbulent velocity field, in which the magnetic field is sheared and its strength changed on a timescale much longer than that on which the unstable fluctuations grow. This approach is complementary to expanding-box models of the $\beta \sim 1$ solar wind [@gv96] used to drive firehose [@mlhv06; @ht08] and mirror/ion-cyclotron [@ht05] instabilities. #### Hybrid-kinetic equations in the shearing sheet. A non-relativistic, quasi-neutral, collisionless plasma of electrons (mass $m_{\rm e}$, charge $-e$) and ions (mass $m_{\rm i}$, charge $Ze$) is embedded in a linear shear flow, ${\mbox{\boldmath{$u$}}}_0 = - S x {\hat{{\mbox{\boldmath{$y$}}}}}$, in $(x,y,z)$ Cartesian coordinates. 
In a frame co-moving with the shear flow, the equations governing the evolution of the ion distribution function $f_{\rm i} (t, {\mbox{\boldmath{$r$}}}, {\mbox{\boldmath{$v$}}})$ and the magnetic field ${\mbox{\boldmath{$B$}}}$ are, respectively, the Vlasov equation $$\label{eqn:vlasov} {\frac{{\rm d} f_{\rm i}}{{\rm d} t}} + {\mbox{\boldmath{$v$}}} {{\mbox{\boldmath{$\cdot$}}}}{{\mbox{\boldmath{$\nabla$}}}}f_{\rm i} + \left[ \frac{Ze}{m_{\rm i}} \left( {\mbox{\boldmath{$E$}}}' + \frac{{\mbox{\boldmath{$v$}}}}{c} {{\mbox{\boldmath{$\times$}}}}{\mbox{\boldmath{$B$}}} \right) + S v_x {\hat{{\mbox{\boldmath{$y$}}}}}\right] \! {{\mbox{\boldmath{$\cdot$}}}}{\frac{\partial f_{\rm i}}{\partial {\mbox{\boldmath{$v$}}}}} = 0$$ and Faraday’s law $$\label{eqn:induction} {\frac{{\rm d} {\mbox{\boldmath{$B$}}}}{{\rm d} t}} = - c {{\mbox{\boldmath{$\nabla$}}}}{{\mbox{\boldmath{$\times$}}}}{\mbox{\boldmath{$E$}}}' - S B_x {\hat{{\mbox{\boldmath{$y$}}}}},$$ where ${\rm d} / {\rm d} t \equiv \partial / \partial t - S x \, \partial / \partial y$. The electric field, $$\label{eqn:efield} {\mbox{\boldmath{$E$}}}' = - \frac{{\mbox{\boldmath{$u$}}}_{\rm i} {{\mbox{\boldmath{$\times$}}}}{\mbox{\boldmath{$B$}}}}{c} + \frac{( {{\mbox{\boldmath{$\nabla$}}}}{{\mbox{\boldmath{$\times$}}}}{\mbox{\boldmath{$B$}}} ) {{\mbox{\boldmath{$\times$}}}}{\mbox{\boldmath{$B$}}}}{4\pi Z e n_{\rm i}} - \frac{T_{\rm e} {{\mbox{\boldmath{$\nabla$}}}}n_{\rm i}}{e n_{\rm i}} ,$$ is obtained by expanding the electron momentum equation in $( m_{\rm e} / m_{\rm i} )^{1/2}$, enforcing quasi-neutrality $$\label{eqn:quasineutrality} n_{\rm e} = Z n_{\rm i} \equiv Z \! 
\int {\rm d}^3 {\mbox{\boldmath{$v$}}} \, f_{\rm i} ,$$ assuming isothermal electrons, and using Ampère’s law to solve for the mean velocity of the electrons $$\label{eqn:ampere} {\mbox{\boldmath{$u$}}}_{\rm e} = {\mbox{\boldmath{$u$}}}_{\rm i} - \frac{{\mbox{\boldmath{$j$}}}}{Z e n_{\rm i}} \equiv \frac{1}{n_{\rm i}} \int {\rm d}^3 {\mbox{\boldmath{$v$}}} \, {\mbox{\boldmath{$v$}}} f_{\rm i} - \frac{ c {{\mbox{\boldmath{$\nabla$}}}}{{\mbox{\boldmath{$\times$}}}}{\mbox{\boldmath{$B$}}} }{4 \pi Z e n_{\rm i}}$$ in terms of the mean velocity of the ions ${\mbox{\boldmath{$u$}}}_{\rm i}$ and the current density ${\mbox{\boldmath{$j$}}}$ [@rsrc11; @rsc14]. This constitutes the “hybrid” description of kinetic ions and fluid electrons [@bcch78; @hn78]. #### Adiabatic invariance and pressure anisotropy. The final terms in Eqs. (\[eqn:vlasov\]) and (\[eqn:induction\]) represent the stretching of the phase-space density and the magnetic field in the $y$-direction by the shear flow. Conservation of the first adiabatic invariant $\mu \equiv m_{\rm i} v^2_\perp / 2B$ then renders $f_{\rm i}$ anisotropic with respect to the magnetic field. If ${\mbox{\boldmath{$E$}}}' = 0$, the ratio of the perpendicular and parallel pressures is $$\label{eqn:paniso} \frac{p_\perp}{p_{||}} \equiv \frac{ \int {\rm d}^3 {\mbox{\boldmath{$v$}}} \, \mu B \, f_{\rm i}}{\int {\rm d}^3 {\mbox{\boldmath{$v$}}} \, m_{\rm i} v^2_{||} \, f_{\rm i} } = \left[ 1 - 2 \frac{B_x B_{y0}}{B^2_0} S t + \frac{B^2_x}{B^2_0} ( S t )^2 \right]^{3/2} ,$$ where the subscript ‘$0$’ denotes initial values [@cgl56]. #### Method of solution. We solve Eqns. (\[eqn:vlasov\])–(\[eqn:ampere\]) using the second-order–accurate particle-in-cell code [Pegasus]{} [@ksb14]. 
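Eq. (\[eqn:paniso\]) is the double-adiabatic (CGL) scaling $p_\perp/p_{||} = (B/B_0)^3$ at constant density for an initially isotropic distribution, evaluated for the sheared field $B_y(t) = B_{y0} - S t B_x$. A quick numerical sketch (field components chosen arbitrarily for illustration) confirms that the two forms agree:

```python
import math

# Sheared field components (arbitrary illustrative values):
# B_y(t) = B_y0 - S*t*B_x, with B_x and B_z constant.
Bx, By0, Bz, S = 0.3, 0.8, 0.5, 0.1
B0 = math.sqrt(Bx**2 + By0**2 + Bz**2)

def cgl_ratio(t):
    """CGL prediction p_perp/p_par = (B/B0)**3 at constant density,
    starting from an isotropic distribution."""
    B = math.sqrt(Bx**2 + (By0 - S * t * Bx)**2 + Bz**2)
    return (B / B0) ** 3

def quoted_formula(t):
    """Closed form of Eq. (paniso) in the text."""
    st = S * t
    return (1 - 2 * Bx * By0 / B0**2 * st + Bx**2 / B0**2 * st**2) ** 1.5

for t in (0.0, 1.0, 5.0, 20.0):
    assert math.isclose(cgl_ratio(t), quoted_formula(t), rel_tol=1e-12)
```

The agreement is exact because $B^2(t) = B_0^2 - 2 B_x B_{y0} S t + B_x^2 (S t)^2$, so the bracket in Eq. (\[eqn:paniso\]) is just $(B/B_0)^2$.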
We normalize magnetic field to $B_0$, velocity to the initial Alfvén speed $v_{\rm A0} \equiv B_0 / \sqrt{4\pi m_{\rm i} n_{\rm i0}}$, time to the inverse of the initial ion gyrofrequency $\Omega_{\rm i0} \equiv Z e B_0 / m_{\rm i} c$, and distance to the initial ion skin depth $d_{\rm i0} \equiv v_{\rm A0} / \Omega_{\rm i0}$. The ion
\ .4in [*Department of Physics,\ National Technical University of Athens,\ Zografou Campus, 15780 Athens, Greece\ kfarakos@central.ntua.gr, metaxas@central.ntua.gr*]{}\ **Abstract** We consider the one-loop effective potential at zero and finite temperature in scalar field theories with anisotropic space-time scaling. For $z=2$, there is a symmetry breaking term induced at one loop at zero temperature and we find symmetry restoration through a first-order phase transition at high temperature. For $z=3$, we first consider the case with a positive mass term at tree level and find no symmetry breaking effects induced at one loop; we then study the case with a negative mass term at tree level, where we cannot draw conclusions about symmetry restoration at high temperature because of the imaginary parts that appear in the effective potential for small values of the scalar field. Introduction ============ Non-relativistic field theories in the Lifshitz context, with anisotropic scaling between temporal and spatial directions, measured by the dynamical critical exponent, $z$, $$t\rightarrow b^z t,\,\,\,x_i\rightarrow b x_i,$$ have been considered recently since they have an improved ultraviolet behavior and their renormalizability properties are quite different from those of conventional Lorentz symmetric theories [@visser]–[@iengo]. Various field theoretical models and extensions of gauge field theories at the Lifshitz point have already been considered [@hor3]. When extended in curved space-time, these considerations may provide a renormalizable candidate theory of gravity [@hor1] and applications of these concepts in the gravitational and cosmological context have also been widely investigated [@kk]. We will consider here the case of a single scalar field in flat space-time. The scaling dimensions, weighted in units of spatial momenta, are $[t]=-z$ and $[x_i]=-1$, with $z$ the anisotropic scaling exponent and $i=1,...,D$ the spatial index (here we consider $D=3$).
The action with a single scalar field is $$S=\int dt d^Dx \left[ \frac{1}{2} \dot{\phi}^2 -\frac{1}{2}\phi(-\Delta)^z \phi-U_0(\phi)\right], \label{gen}$$ with $\Delta=\partial_i^2$ and $[\phi]=\frac{D-z}{2}$. In order to investigate the various implications of a field theory, in particle physics and cosmology, it is particularly important to examine its symmetry structure, both at the classical and the quantum level, at zero and finite temperature, via the effective action and effective potential [@col]– [@dolan]. We should note that, in order to get information on possible instabilities of the theory, we study the one-loop, perturbative effective potential, given by the one-particle irreducible diagrams of the theory, and not the full, non-perturbative, convex effective potential given by the so-called Maxwell construction [@wett2]. In a recent work [@kim1], the effective potential for a scalar theory was considered for the case of $z=2$ and it was shown that, at one loop order, there is a symmetry breaking term induced quantum mechanically; also the finite temperature effective potential was studied at one loop, and it was argued that there is no symmetry restoration at high temperature. We study the theory with $z=2$ in Sec. 2 and find at zero temperature a symmetry breaking term at one loop that agrees with the results of [@kim1]. However, we have also studied the finite temperature effective potential both analytically and numerically, and have found the interesting result of symmetry restoration at high temperature through a first-order phase transition. In view of the importance of symmetry breaking phenomena throughout field theory and cosmology, we have also studied the situation for the case of $z=3$ in Sec. 3: in the case of a positive or zero mass term in the tree level we found no symmetry breaking terms induced at one loop. 
In the case of a negative mass term at the tree level we calculated the full effective potential at high temperature and found no symmetry restoration effects induced at one loop because of the imaginary parts that appear in the effective potential for small values of the scalar field. Effective potential for $z=2$ at zero and finite temperature ============================================================ We consider the action (\[gen\]) with $z=2$, $$S=\int dt d^3x \left[ \frac{1}{2} \dot{\phi}^2 -\frac{1}{2}(\partial_i^2\phi)^2-U_0(\phi)\right].$$ Here we have $[\phi]=1/2$ and $U_0(\phi)$, the tree-level potential, is a polynomial up to the weighted marginal power of $\phi$ (here the tenth). The one-loop contribution to the effective potential, $$U_1=\frac{1}{2}\int\frac{d^4k}{(2\pi)^4}\ln (k_0^2+k_i^4+U_0'') =\frac{1}{4\pi^2}\int k^2 dk \sqrt{k^4+U_0''}$$ (where, in the last equation, $k^2=k_i^2$) can be evaluated with a cutoff $\Lambda$ in the spatial momentum via differentiation with respect to $y=U_0''$ (primes denote differentiation with respect to $\phi$). We get $$\frac{d^2 U_1}{dy^2}=-\frac{1}{16\pi^2}\frac{1}{y^{3/4}}\int_0^{\infty}dx\frac{x^2}{(x^4+1)^{3/2}}$$ and, using the boundary conditions $$\frac{d U_1}{dy}(y=0)=\frac{\Lambda}{8\pi^2},\,\,\,U_1(y=0)=\frac{\Lambda^5}{20\pi^2},$$ we get $$U_1(\phi)=\frac{1}{8\pi^2}U_0'' \,\Lambda \, - \, c (U_0'')^{5/4},$$ where $c= \frac{1}{4\pi^2} \int_0^{\infty}dx\frac{x^2}{(x^4+1)^{3/2}} =\Gamma(3/4)^2/10\pi^{5/2}$. The first term, which is linearly divergent, can be renormalized with appropriate counterterms in the potential, and the second term, which is generally negative, can lead to a non-zero minimum, even if the original potential had a unique minimum at $\phi=0$. We consider here the case of a massless theory, with a single relevant operator, $U_0(\phi)=\frac{\lambda}{4!}\phi^4$, and add the counterterms $\frac{1}{2}A\phi^2+\frac{1}{4!}B\phi^4$. 
The condition $U''(0)=0$ eliminates the quadratic terms and, because of the infrared divergence, the condition at a non-zero $\phi=\alpha$, $U''''(\alpha)=\lambda$, has been imposed. Since $[\lambda]=3$ and $[\phi]=1/2$, we write $\alpha^2 =\mu$ and $\lambda=\tilde{\lambda}\mu^3$, in terms of an overall mass scale $\mu$. The full effective potential after renormalization is $$U(\phi)=\frac{\lambda}{4!}\left(1-\frac{15c\tilde{\lambda}^{1/4}}{2^5\cdot2^{1/4}}\right)\phi^4 \,-\,c \left(\frac{\lambda}{2}\phi^2\right)^{5/4}. \label{res1}$$ The full effective potential now has a non-zero minimum, and a mass term will be generated after expansion around this minimum, but it should be noted that the situation is not entirely analogous to the usual Coleman-Weinberg mechanism, since the tree-level potential has a dimensionful parameter already. The situation is similar if other relevant operators with dimensionful couplings are considered ($\phi^6$ and $\phi^8$) but not if only the marginal $\phi^{10}$ operator, with a dimensionless coupling, is considered in the tree-level potential. These results agree with the corresponding conclusions from [@kim1]. We now proceed to the calculation of the finite temperature effects and show that, when the appropriate corrections to the effective potential are taken into account, there appear to exist symmetry restoration effects at high temperature, and indeed with a first-order phase transition. The one-loop effective potential at finite temperature is [@dolan] $$U_{1T}=\frac{1}{2\beta}\sum_n \int\frac{d^3k}{(2\pi)^3}\ln \left(\frac{4\pi^2n^2}{\beta^2}+E^2\right),$$ where $\beta = 1/T$ is the inverse temperature, $E^2=k^4+U_0''$ and the sum runs over all integers, $n$. Using $$\sum_n\ln\left(\frac{4\pi^2n^2}{\beta^2}+E^2\right)= 2\beta\left[\frac{E}{2}+\frac{1}{\beta}\ln(1-e^{-\beta E})\right],$$ the total effective potential can be written as $U_{1T}=U_1 + U_T$, where $U_
--- abstract: 'In this paper, we use the approximation of shallow water waves (Margaritondo 2005 [*Eur. J. Phys.*]{} [**26**]{} 401) to understand the behavior of a tsunami in a variable depth. We deduce the shallow water wave equation and the continuity equation that must be satisfied when a wave encounters a discontinuity in the sea depth. A short explanation about how the tsunami hit the west coast of India is given based on the refraction phenomenon. Our procedure also includes a simple numerical calculation suitable for undergraduate students in physics and engineering.' address: - 'Instituto de Física da Universidade de São Paulo, C.P. 66318, CEP 05315-970, São Paulo, Brazil' - 'Universidade Estadual Paulista, CEP 18409-010, Itapeva/SP, Brazil' author: - O Helene - M T Yamashita title: Understanding the tsunami with a simple model --- Introduction ============ Tsunamis are water waves with long wavelengths that can be triggered by submarine earthquakes, landslides, volcanic eruptions and large asteroid impacts. These non-dispersive waves can travel for thousands of kilometers from the disturbance area where they have been created with a minimum loss of energy. Like any wave, tsunamis can be reflected, transmitted, refracted and diffracted. The physics of a tsunami can be very complex, especially if we consider its creation and behavior next to the beach, where it can break. However, since tsunamis are composed of waves with very large wavelengths, sometimes greater than 100 km, they can be considered as shallow waves, even in oceans with depths of a few kilometers. The shallow water approximation simplifies considerably the problem and still allows us to understand a lot of the physics of a tsunami. Using such approximation, Margaritondo [@MaEJP05] deduced the dispersion relation of tsunami waves extending a model developed by Behroozi and Podolefsky [@BeEJP01].
Since energy losses due to viscosity and friction at the bottom [@CrAJP87] can be neglected in the case of shallow waves, Margaritondo, considering energy conservation, explained the increase of the wave height when a tsunami approaches the coast, where the depth of the sea and the wave velocity are both reduced. In this paper we use one of the results of Ref. [@MaEJP05] in order to deduce the wave equation and include the variation of the seabed. Thus, we are able to explain the increase of the wave amplitude when passing from deeper to shallow water. Also, we discuss the refraction of tsunami waves. This phenomenon allowed the tsunami of December 26, 2004, created at the Bay of Bengal, to hit the west coast of India (a detailed description is given in [@LaSc05]). Both of these inclusions - the seabed topography and the wave refraction - were pointed out by Chu [@chu] as necessary to understand some other phenomena observed in tsunamis. This paper is organized as follows. The wave equation and the water flux conservation are used in section 2 in order to explain how and how much a shallow wave increases when passing from deeper to shallow water. In section 3, we extend the results obtained in section 2 to study how a wave packet propagates in a water tank where the depth varies; in this section we use some numerical procedures that can be extended to the study of any wave propagating in a non-homogeneous medium. Also, the refraction of the 2004 tsunami in the south of India is discussed in section 3. The shallow wave and the continuity equations are deduced in appendix A. Reflection and transmission of waves in one dimension ===================================================== Consider a perturbation on the water surface in a rectangular tank with a constant depth.
In the limit of large wavelengths and a small amplitude compared with the depth of the tank, the wave equation can be simplified to (see Appendix A) $$\frac{\partial^2y}{\partial t^2}=gh\frac{\partial^2y}{\partial x^2}, \label{wave}$$ where $y(x,t)$ is the vertical displacement of the water surface at a time $t$, propagating in the $x$ direction, $g$ is the gravity acceleration and $h$ is the water depth. Equation (\[wave\]) is the most common one-dimensional wave equation. It is a second-order linear partial differential equation and, since $g$ and $h$ are constants, any function $y=f(x\pm vt)$ is a solution ($v=\sqrt{gh}$ is the wave velocity). An interesting aspect of eq. (\[wave\]) is that a propagating pulse does not suffer dispersion, since all wavelengths travel with the same velocity, and thus preserves its shape. Light in vacuum (and, in a very good approximation, in air) is non-dispersive. Also, sound waves in the air are nearly non-dispersive. (If dispersion were important in the propagation of sound in air, a sound would be heard differently at different positions, i.e., music and conversation would be impossible.) However, the velocity of a shallow-water wave varies with the depth. Thus, shallow-water waves are dispersive in a non-uniform sea depth. In order to study the evolution of a tsunami in a rectangular box with variable depth, which will be detailed in the next section, we approximate the irregular depth by successive steps. So, in the next paragraphs we will explain the treatment used when a wave encounters a discontinuity. Every time the tsunami encounters a step, part is transmitted and part is reflected. Then, consider a wave with an amplitude given by $y=\cos(kx-\omega t)$, where $k$ and $\omega$ are, respectively, the wave number and the frequency, entering a region where the depth of the water, and also the wave velocity, have a discontinuity as represented in Fig. \[fig1\].
On the left-side of the discontinuity the perturbation is given by $$y_1(x,t)=\cos(kx-\omega t)+R\cos(kx+\omega t+\varphi_1), \label{left}$$ where $R\cos(kx+\omega t+\varphi_1)$ corresponds to the reflected wave and $\varphi_1$ is a phase to be determined by the boundary conditions. On the right-side of the discontinuity the wave amplitude is given by $$y_2(x,t)=T\cos(k^\prime x-\omega t+\varphi_2), \label{right}$$ corresponding to the transmitted wave part. The wave numbers for $x<0$ and $x>0$ are, respectively, $$k=\frac{\omega}{v} \label{k}$$ and $$k^\prime=\frac{\omega}{v^\prime}, \label{kprime}$$ where $v$ and $v^\prime$ are the velocities of the wavepacket at the left and right sides of the discontinuity. In order to determine $R$ and $T$ we must impose the boundary conditions at $x=0$. For any instant, the wave should be continuous at $x=0$: $\cos\omega t+R\cos(\omega t+\varphi_1)=T\cos(-\omega t+\varphi_2)$. The same should happen with the flux, $f(x,t)$, given by (see eq. (\[flux1\])) $$f(x,t)=h\frac{\partial z(x,t)}{\partial t},$$ where $z(x,t)$ is the horizontal displacement of a transversal section of water (see equation (\[mcons\]) for the relation between $z$ and $y$). Imposing the boundary conditions $y_1(0,t)=y_2(0,t)$ and $f_1(0,t)=f_2(0,t)$ we can deduce $\sin\varphi_1=\sin\varphi_2=0$. Then choosing $\varphi_1=\varphi_2=0$ we obtain $$R=\frac{k^\prime-k}{k+k^\prime}=\frac{v-v^\prime}{v+v^\prime} \label{R}$$ and $$T=\frac{2k^\prime}{k+k^\prime}=\frac{2v}{v+v^\prime}. \label{T}$$ It is worthwhile to mention here that other choices of $\varphi_1$ and $\varphi_2$ will change the signs of $R$ and $T$. However, in this case, it will also change the phases of the reflected and transmitted waves. Both modifications will compensate themselves and the shape of the wave will remain unchanged in relation of the choice $\varphi_1=\varphi_2=0$. 
Reflection and transmission are very important effects in wave propagation: every time a traveling wave (light, water waves, pulses in strings, etc.) encounters a discontinuity in the medium where it propagates, reflection and transmission occur. Since there are no energy losses, the energy flux is conserved. The energy of a wave is proportional to the square of its amplitude [@MaEJP05]. Thus, the energy flux is proportional to the squared amplitude times the wave velocity. The energy flux of the incident wave at $x=0$ is given by $$\phi_{inc}=v$$ (the amplitude of the incident wave was chosen as 1). The reflected and transmitted energy flux are given by $$\phi_{refl}=R^2v$$ and $$\phi_{trans}=T^2v^\prime,$$ respectively.
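These flux expressions, together with Eqs. (\[R\]) and (\[T\]), can be checked in a few lines (a sketch; the depths and $g = 9.8\ \mathrm{m/s^2}$ are illustrative values):

```python
import math

g = 9.8  # m/s^2 (illustrative value)

def shallow_speed(h):
    """Shallow-water wave speed v = sqrt(g*h)."""
    return math.sqrt(g * h)

def step_coefficients(h1, h2):
    """Amplitudes at a depth step h1 -> h2, Eqs. (R) and (T):
    R = (v - v')/(v + v'), T = 2v/(v + v')."""
    v, vp = shallow_speed(h1), shallow_speed(h2)
    R = (v - vp) / (v + vp)
    T = 2 * v / (v + vp)
    return v, vp, R, T

# Deep ocean (4000 m) meeting a shallow shelf (40 m): the transmitted
# wave is taller than the incident one, and the energy flux balances:
# phi_inc = phi_refl + phi_trans, i.e. v = R^2 v + T^2 v'.
v, vp, R, T = step_coefficients(4000.0, 40.0)
assert T > 1.0 and 0.0 < R < 1.0
assert math.isclose(v, R**2 * v + T**2 * vp, rel_tol=1e-12)
assert math.isclose(1 + R, T, rel_tol=1e-12)  # amplitude continuity at x=0
```

For a step from deep to shallow water ($v' < v$) the transmitted amplitude $T$ exceeds 1, which is the amplitude growth discussed above, while the balance $v = R^2 v + T^2 v'$ holds identically.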
--- author: - | Amir Rosenfeld, John K. Tsotsos\ Department of Electrical Engineering and Computer Science\ York University, Toronto, ON, Canada\ `amir@eecs.yorku.ca,tsotsos@cse.yorku.ca`\ bibliography: - 'cognitivePrograms1.bib' title: Bridging Cognitive Programs and Machine Learning --- abstract ======== While great advances have been made in pattern recognition and machine learning, the successes of such fields remain restricted to narrow applications and seem to break down when training data is scarce, a shift in domain occurs, or when intelligent reasoning is required for rapid adaptation to new environments. In this work, we list several of the shortcomings of modern machine-learning solutions, specifically in the contexts of computer vision and reinforcement learning, and suggest directions to explore in order to try to ameliorate these weaknesses. Introduction ============ The Selective Tuning Attentive Reference (STAR) model of attention is a theoretical computational model designed to reproduce and predict the characteristics of the human visual system when observing an image or video, possibly with some task at hand. It is based on psycho-physical observations and constraints on the amount and nature of computations that can be carried out in the human brain. The model contains multiple sub-modules, such as the Visual Hierarchy (VH), visual working memory (vWM), fixation controller (FC), and others. The model describes the flow of data between different components and how they affect each other. As the model is given various tasks, an executive controller orchestrates the action of the different modules. This is viewed as a general purpose processor which is able to reason about the task at hand and formulate what is called Cognitive Programs (CP).
Cognitive Programs are made up of a language describing the set of steps required to control the visual system, obtain the required information and track the sequence of observations so that the desired goal is achieved. In recent years, methods of pattern recognition have taken a large step forward in terms of performance. Visual recognition of thousands of object classes as well as detection and segmentation have been made much more reliable than in the past. In the related field of artificial intelligence, progress has been made by the marriage of reinforcement learning and deep learning, allowing agents to successfully play a multitude of games and solve complex environments without the need for manually crafting feature spaces or adding prior knowledge specific to the task. There is much progress still to be made in all of the above mentioned models, namely \(1) a computational model of the human visual system (2) purely computational object recognition systems (e.g., computer vision) and (3) intelligent agents. The purpose of this work is to bridge the gap between the worlds of machine learning and modeling of the way human beings solve visual tasks. Specifically, providing a general enough solution to the problem of coming up with Cognitive Programs which will enable solving visual tasks given some specification. We make two main predictions: 1. Many components of the STAR model can benefit greatly from modern machine learning tools and practices. 2. Constraining the machine learning methods used to solve tasks, using what is known about biological vision, will benefit these models and, if done right, improve their performance and perhaps allow us to gain further insights. The next sections will attempt to briefly overview the STAR model as well as the recent trends in machine learning.
In the remainder of this report, we shall show how the best of both worlds of STAR and Machine Learning can be brought together to create a working model of an agent which is able to perform various visual tasks. Selective Tuning & Cognitive Programs ===================================== The Selective Tuning (ST) [@tsotsos1993inhibitory; @tsotsos1995modeling; @culhane1992attentional; @books/daglib/0026815] is a theoretical model set out to explain and predict the behavior of the human visual system when performing a task on some visual input. Specifically, it focuses on the phenomena of visual attention, which includes overt attention (moving the eyes to fixate on a new location), covert attention (internally attending to a location inside the field of view without moving the eyes) and the neural modulation and feedback that facilitate these processes. The model is derived from first principles which involve analysis of the computational complexity of general vision tasks, as well as biological constraints known from experimental observation on human subjects. Following these constraints, it aims to be biologically plausible while ensuring a runtime which is practical (in terms of complexity) for solving various vision tasks. In [@tsotsos2014cognitive], ST has been extended to the STAR (Selective Tuning Attentive Reference) model to include the capacity for cognitive programs. We will now describe the main components of STAR. This description is here to draw a high-level picture and is by no means complete. For a reader interested in delving into further details, please refer to [@books/daglib/0026815] for theoretical justifications and a broad discussion and read [@tsotsos2014cognitive] for further description of these components. The ST model described here is extended with a concept of Cognitive Programs (CP) which allows a controller to break down visual tasks into a sequence of actions designed to solve them. ![image](figures/figure5){width="90.00000%"} Fig.
\[fig:High-level-view-of\] describes the flow of information in the STAR architecture at a high level. Central to this architecture is the Visual Hierarchy. The VH is meant to represent the ventral and dorsal streams of processing in the brain and is implemented as a neural network with feedforward and recurrent connections. The structure of the VH is designed to allow recurrent localization of input stimuli, as well as discrimination, categorization and identification. While a single feed-forward pass may suffice for some of the tasks, for others, such as visual search, multiple forward-backward passes (and possibly changing the focus of attention) may be required. The VH can be tuned so that it performs better on specific tasks. The recurrent tracing of neuron activation along the hierarchy is performed using a $\Theta$-WTA decision process. This induces an Attentional Sample (AS) which represents the set of neurons whose response matches the currently attended stimulus. The Fixation Control mechanism has two main components. The Peripheral Priority Map (PPM) represents the saliency of the peripheral visual field. The History Biased Priority Map (HBPM) combines the focus of attention derived from the central visual field (cFOA) and the foci of attention derived from the peripheral visual field (pFOA). Together, these produce a map based on the previous fixations (and possibly the current task), setting the priority for the next gaze. Cognitive Programs ------------------ To perform some task, the Visual Hierarchy and the Fixation Controller need to be controlled by a process which receives a task and breaks it down into a sequence of *methods*, which are basic procedures commonly used across the wide range of visual tasks. Each method may be applied with some degree of tuning to match it to the specific task at hand, whereupon it becomes an executable *script*. A set of functional sub-modules is required for the execution of CPs. 
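As a toy illustration of the $\Theta$-WTA selection mentioned above, the following sketch takes the maximally responding unit as the winner and collects into the attentional sample every unit whose response lies within a fraction $\theta$ of it. This is our own minimal reading of the process, not STAR's actual implementation, and the function names are invented for the example.

```python
import numpy as np

def theta_wta(responses, theta=0.1):
    """Select an attentional sample: the winning unit plus every unit
    whose response lies within a fraction `theta` of the winner."""
    responses = np.asarray(responses, dtype=float)
    winner = responses.max()
    # Units passing the theta-band criterion form the attentional sample.
    sample = np.flatnonzero(responses >= (1.0 - theta) * winner)
    return winner, sample

# Toy layer of activations: units 2 and 4 lie within 10% of the maximum.
winner, sample = theta_wta([0.1, 0.4, 0.93, 0.2, 1.0], theta=0.1)
```

Recurrent tracing in the VH would then repeat such a selection layer by layer, restricted to the inputs of the currently attended units.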
The controller orchestrating the execution of tasks is called the Visual Task Executive (vTE). Given a task (from some external source), the vTE selects appropriate methods, tunes them into scripts and controls the execution of these scripts by using several sub-modules. Each script initiates an attentive cycle and sends the element of the task required for attentive tuning to the Visual Attention Executive (vAE). The vAE primes the Visual Hierarchy (VH) with top-down signals reflecting the expectations of the stimulus or instructions and sets required parameters. Meanwhile, the current attention is disengaged and any feature surround suppression imposed for previous stimuli is lifted. Once this is completed, a feed-forward signal enters the tuned VH. After the feed-forward pass is completed, the $\Theta$-WTA process makes a decision as to what to attend and passes this choice on to the next stage. The vTE, monitoring the execution of the scripts, can decide based on this information whether the task is completed or not. The selection of the basic methods to execute a task is done by using the Long Term Memory for Methods (mLTM). This is an associative memory which allows for fast retrieval of methods. The Visual Working Memory (vWM) contains two representations: the Fixation History Map stores the last several fixation locations, each decaying over time. This allows for location-based Inhibition of Return (IOR). The second representation is the Blackboard (BB), which stores the current Attentional Sample (AS). Task Working Memory (tWM) includes the Active Script NotePad, which itself might have several compartments. One such compartment would store the active scripts with pointers to indicate progress along the sequence. Another might store information relevant to script progress, including the sequence of attentional samples and fixation changes as they occur during the process of fulfilling a task. 
Another might store relevant world knowledge that might be used in executing the CP. The Active Script NotePad would provide the vTE with any information required to monitor task progress or take any corrective actions if task progress is unsatisfactory. Finally, the Visual Attention Executive contains a Cycle Controller, which is responsible for starting and terminating each stage of the ST process. The vAE also initiates and monitors the recurrent localization process in the VH [@rothenstein2014attentional]. A detailed view of the entire architecture can be seen in Fig. \[fig:Detailed-view-of\]. ![image](figures/figure6){width="
--- abstract: 'We have obtained $V$ and $I$ images of the lone globular cluster that belongs to the dwarf Local Group irregular galaxy known as WLM. The color-magnitude diagram of the cluster shows that it is a normal old globular cluster with a well-defined giant branch reaching to $M_V=-2.5$, a horizontal branch at $M_V=+0.5$, and a sub-giant branch extending to our photometry limit of $M_V=+2.0$. A best fit to theoretical isochrones indicates that this cluster has a metallicity of \[Fe/H\]$=-1.52\pm0.08$ and an age of $14.8\pm0.6$ Gyr, thus indicating that it is similar to normal old halo globulars in our Galaxy. From the fit we also find that the distance modulus of the cluster is $24.73\pm0.07$ and the extinction is $A_V=0.07\pm0.06$, both values that agree within the errors with data obtained for the galaxy itself by others. We conclude that this normal massive cluster was able to form during the formation of WLM, despite the parent galaxy’s very small intrinsic mass and size.' author: - 'Paul W. Hodge, Andrew E. Dolphin, and Toby R. Smith' - Mario Mateo title: 'HST Studies of the WLM Galaxy. I. The Age and Metallicity of the Globular Cluster [^1]' --- Introduction ============ The galaxy known as WLM is a low-luminosity, dwarf irregular galaxy in the Local Group. A history of its discovery and early study was given by Sandage & Carlson (1985). Photographic surface photometry of the galaxy was published by Ables & Ables (1977). Its stellar population has been investigated from ground-based observations by Ferraro et al (1989) and by Minniti & Zijlstra (1997). The former showed that the main body of the galaxy consists of a young population, which dominates the light, while the latter added the fact that there appears to be a very old population in its faint outer regions. Cepheid variables were detected by Sandage & Carlson (1985), who derived its distance, and were reanalyzed by Feast & Walker (1987) and by Lee et al. (1992). 
The latter paper used $I$ photometry of the Cepheids and the RGB (red giant branch) distance criterion to conclude that the distance modulus for WLM is 24.87 $\pm$ 0.08. The extinction determined by Feast & Walker (1987) is $A_B$ = 0.1. Humason et al. (1956), when measuring the radial velocity of WLM, noticed a bright object next to it that had the appearance of a globular cluster. Its radial velocity was the same as that of WLM, indicating membership. Ables & Ables (1977) found that the cluster’s colors were like those of a globular cluster, and Sandage & Carlson (1985) confirmed this. Its total luminosity is unusually high for the sole globular cluster of a galaxy. Sandage & Carlson (1985) quote a magnitude of $V$ = 16.06, indicating an absolute magnitude of $M_V$ = -8.8. This can be compared to the mean absolute magnitude of globulars in galaxies, which is $M_V = -7.1 \pm 0.43$ (Harris 1991). The cluster, though unusually bright, has only a small fraction of the $V$ luminosity of the galaxy, which is 5.2 magnitudes brighter in $V$. One may ask whether there are other massive clusters in the galaxy, such as luminous blue clusters similar to those in the Magellanic Clouds. Minniti & Zijlstra (1997), using the NTT and thus having a wider field than ours, searched for other globular clusters and found none. However, the central area of the galaxy has one very young, luminous cluster, designated C3 in Hodge, Skelton and Ashizawa (1999). This object is the nuclear cluster of one of the brightest HII regions (Hodge & Miller 1995). There do not appear to be any large intermediate-age clusters, such as those in the Magellanic Clouds or that recently identified spectroscopically in the irregular galaxy NGC 6822 by Cohen & Blakeslee (1998). No other Local Group irregular galaxy fainter than $M_V$ = -16 contains a globular cluster. 
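As a quick consistency check of the quoted absolute magnitude (our own illustration, not from the paper), $M_V$ follows from the apparent magnitude and the Lee et al. (1992) distance modulus via $M_V = V - (m - M)$:

```python
def absolute_magnitude(apparent_mag, distance_modulus):
    """Convert an apparent magnitude to an absolute one: M = m - (m - M)."""
    return apparent_mag - distance_modulus

# V = 16.06 for the WLM cluster, (m - M) = 24.87 from Lee et al. (1992),
# giving M_V close to the quoted -8.8.
M_V = absolute_magnitude(16.06, 24.87)
```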
The elliptical dwarf galaxies NGC 147 and NGC 185 (0.8 and 1.3 absolute magnitudes brighter than WLM, respectively) do have a few globular clusters each, and Fornax (1.7 absolute magnitudes fainter) has five, which makes it quite anomalous, even for an elliptical galaxy (see Harris, 1991, for references). Another comparison can be made using the specific frequency parameter, as defined and discussed by Harris (1991). The value of the specific frequency calculated for WLM is 7.4, which can be compared to Harris’ value calculated for late-type galaxies, which is $0.5 \pm 0.2$. The highest average specific frequency, found by Miller et al. (1998) for nucleated dwarf elliptical galaxies, is $6.5 \pm 1.2$, while non-nucleated dwarf elliptical galaxies have an average of $3.1 \pm 0.5$. These values are similar to those found by Durrell et al. (1996), implying that the specific frequency for WLM is comparable to that for dwarf elliptical galaxies but possibly higher than that for other late-type galaxies. Because the WLM cluster is unique as a globular in an irregular dwarf galaxy, it may represent an unusual opportunity to investigate the question of whether Local Group dwarf irregulars share the early history of our Galaxy and other more luminous Group members, which formed their massive clusters some 15 Gyr ago, or whether they formed later, as the early ideas about the globular clusters of the Magellanic Clouds seemed to indicate. Of course, we now know that the LMC has several true globular clusters that are essentially identical in age to the old halo globulars of our Galaxy (Olsen et al. 1998, Johnson et al. 1998), so the evidence suggesting a delayed formation now seems to come only from the SMC. In any case, WLM gives us a rare opportunity to find the oldest cluster (and probably the oldest stars) in a more distant and intrinsically much less luminous star-forming galaxy in the Local Group. 
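The specific frequency comparison above can be made concrete with the standard Harris (1991) definition, $S_N = N_t\,10^{0.4(M_V^T + 15)}$. The helper below is our own illustration; the value one obtains for a given galaxy depends on the adopted number of clusters and integrated absolute magnitude.

```python
def specific_frequency(n_clusters, galaxy_abs_mag_v):
    """Harris (1991) specific frequency: S_N = N_t * 10**(0.4 * (M_V + 15))."""
    return n_clusters * 10 ** (0.4 * (galaxy_abs_mag_v + 15.0))

# By construction, one globular in a galaxy of M_V = -15 gives S_N = 1,
# and a brighter (more negative M_V) galaxy with the same cluster count
# has a lower specific frequency.
s_n = specific_frequency(1, -15.0)
```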
Data and Reduction ================== Observations ------------ As part of a Cycle 6 HST GO program, we obtained four images of the WLM globular cluster on 26 September, 1998. There were two exposures taken with the F814W filter of 2700 seconds and two with the F555W filter, one each of 2700 seconds and 2600 seconds. The globular cluster was centered on the PC chip and the orientation of the camera was such that the WF chips lay approximately along the galaxy’s minor axis, providing a representative sample of the WLM field stars to allow us to separate cluster stars reliably. Reductions ---------- With two images of equal time per filter, cosmic rays were cleaned with an algorithm nearly identical to that used by the IRAF task CRREJ. The two images were compared at each pixel, with the higher value thrown out if it exceeded 2.5 sigma of the average. The cleaned, combined F555W image is shown in Figure \[fig\_image\]. Photometry was then carried out using a program specifically designed to reduce undersampled WFPC2 data. The first step was to build a library of synthetic point spread functions (PSFs), for which Tiny Tim 4.0 (Krist 1995) was used. PSFs were calculated at 49 positions on each chip in F555W and F814W, subsampled at 20 per pixel in the WF chips and 10 in the PC chip. The subsampled PSFs were adjusted for charge diffusion and estimated subpixel QE variations, and combined for various locations of a star’s center within a pixel. For example, the library would contain a PSF for the case of a star centered in the middle of a pixel, as well as for a star centered on the edge of a pixel. In all, a 10x10 grid of possible centerings was made for the WF chips and a 5x5 grid for the PC chip. This served as the PSF library for the photometry. The photometry was run with an iterative fit that located stars and then found the best combinations of stellar profiles to match the image. 
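The two-image cosmic-ray cleaning described above can be sketched in a few lines of NumPy. This is a simplified stand-in for the CRREJ-like algorithm, with the per-pixel noise `sigma` taken as a supplied estimate (an assumption of ours; the actual reduction derives it from the detector noise model).

```python
import numpy as np

def clean_pair(img_a, img_b, sigma, nsigma=2.5):
    """Compare two equal-exposure frames pixel by pixel; where the higher
    value exceeds the pair average by more than nsigma*sigma, treat it as a
    cosmic-ray hit and keep only the lower value (doubled to preserve the
    total exposure), otherwise sum the two frames."""
    avg = 0.5 * (img_a + img_b)
    low = np.minimum(img_a, img_b)
    high = np.maximum(img_a, img_b)
    hit = high - avg > nsigma * sigma
    return np.where(hit, 2.0 * low, img_a + img_b)
```

For example, a pixel pair (10, 500) with `sigma = 5` is flagged and replaced by 20, while an ordinary pair (10, 12) is simply summed to 22.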
Rather than using a centroid to determine which PSF to use for a star, a fit was attempted with each PSF centered near the star’s position and the best-fitting PSF was chosen. This method helped avoid the problem of centering on an undersampled image. Residual cosmic rays and other non-stellar images were removed through a chi-squared cut over the final photometry list. The PSF fit was normalized to give the total number of counts from the star falling within a 0.5 arcsec radius of the center. This count rate was then converted into magnitudes as described in Holtzman et al (1995), using the CTE correction, geometric corrections, and transformation. For the color-magnitude diagram (CMD) and luminosity function analyses below, roughly the central 20% of the image was analyzed to maximize the signal from the globular while minimizing the background star contamination. In that region, the effect of background stars is negligible. The CMD from this method,
--- abstract: 'Magnetism in the FeAs stoichiometric compounds and its interplay with superconductivity in vortex states are studied by self-consistently solving the BdG equations based on a two-orbital model including the on-site interactions between electrons in the two orbitals. It is revealed that for the parent compound, magnetism is caused by the strong Hund’s coupling, and the Fermi surface topology helps select the spin-density-wave (SDW) pattern. The superconducting (SC) order parameter with $s_{\pm}=\Delta_{0}\cos(k_{x})\cos(k_{y})$ symmetry is found to be the most favorable pairing for both the electron- and hole-doped cases, while the local density-of-states (LDOS) exhibits the characteristic of a nodal gap for the former and a full gap for the latter. In the vortex state, the emergence of the field-induced SDW depends on the strength of the Hund’s coupling and the Coulomb repulsions. The field-induced SDW gaps the finite energy contours on the electron and hole pocket sides, leading to dual structures with one reflecting the SC pairing and the other being related to the SDW order. These features should be discernible in STM measurements for identifying the interplay between the field-induced SDW order and the SC order around the core region.' author: - 'Hong-Min Jiang' - 'Jian-Xin Li' - 'Z. D. Wang' title: 'Vortex states in iron-based superconductors with collinear antiferromagnetic cores' --- introduction ============ The recently discovered iron arsenide superconductors, [@kami1; @xhchen1; @zaren1; @gfchen1; @wang1] which display superconducting transition temperatures as high as more than 50 K, appear to share a number of general features with high-$T_{c}$ cuprates, including the layered structure and proximity to a magnetically ordered state. 
[@kami1; @cruz1; @jdong] The accumulated evidence has subsequently established that the parent compounds are generally poor metals and undergo structural and antiferromagnetic (AFM) spin-density-wave (SDW) transitions below certain temperatures. [@cruz1; @klau1] Elastic neutron scattering experiments have shown that the antiferromagnetic order is collinear and has a wavevector $(\pi,0)$ or $(0,\pi)$ in the unfolded Brillouin zone corresponding to a unit cell with only one Fe atom per unit cell. [@cruz1] Chemical doping and/or pressure suppresses the AFM SDW instability and eventually results in the emergence of superconductivity. [@kami1; @taka1] The novel magnetism and superconducting properties in these compounds have been a great spur to recent research. [@zhao1; @chen1; @goto1; @luet1; @qsi1; @ccao1; @djsingh; @yao1; @huwz; @daikou; @qhwang; @ragh1] The relation between magnetism and superconductivity and the origin of magnetic order have attracted significant attention in the current research on FeAs superconductors. Discrepancies exist in the experimental results, i.e., whether the superconductivity and antiferromagnetic order are well separated or they can coexist in the underdoped region of the phase diagram, and how they coexist if they happen to do so. For example, there is no overlap between those two phases in CeFeAsO$_{1-x}$F$_{x}$ [@zhao1], while the coexistence of the two phases was observed in a narrow doping range in SmFeAsO$_{1-x}$F$_{x}$ [@liudrew], and in a broader range in Ba$_{1-x}$K$_{x}$Fe$_{2}$As$_{2}$ [@chen1; @goto1]. Even for the same LaFeAsO$_{1-x}$F$_{x}$ system, different experiments display conflicting results. It was reported that before the orthorhombic SDW phase is completely suppressed by doping, superconductivity has already appeared at low temperatures [@kami1], while it was also observed experimentally that superconductivity appears after the SDW is completely suppressed [@luet1]. 
As for the origin of the SDW phase, two distinct types of theories have been proposed: a local-moment antiferromagnetic ground state for strong coupling, [@qsi1] and an itinerant ground state for weak coupling. [@ccao1; @djsingh; @yao1; @ragh1] The detection of the local moment seems to question the weak-coupling scenario, but the metallic-like (or bad-metal) nature, as opposed to a correlated insulator as in cuprates, renders the strong-coupling theories questionable. [@huwz] More recently, a compromise scheme was adopted: the SDW instability is assumed to result from the coupling of itinerant electrons with local moments, namely, neither the Fermi surface nesting nor the local moment scenario alone is able to account for it. [@daikou] Although many research efforts have already been made to identify the existence of magnetic order and its origin as well as the relationship with superconductivity, there have been fewer studies of vortex states in these systems. While the interplay between magnetism and superconductivity has yet to be experimentally clarified, the superconducting critical temperature $T_{c}$ reaches its maximum value after the antiferromagnetic spin order is completely suppressed in the materials, indicating competition between the AFM SDW instability and superconductivity. At this stage, it is valuable and interesting to investigate vortex states in the family of FeAs compounds, mainly considering that the magnetic order may arise naturally when the superconducting order is destroyed by the magnetic vortex. Therefore, one can perform local tunneling spectroscopic probes in vortex states to understand profoundly the interplay between magnetic order and superconductivity. 
In this paper, we investigate magnetism in the FeAs stoichiometric compounds, and its interplay with superconductivity upon doping in vortex states, by self-consistently solving the BdG equations based on the two-orbital model including the on-site interactions between electrons in the two orbitals. It is shown that for the parent compound, magnetism is caused by the strong Hund’s coupling, and the Fermi surface topology helps select the SDW ordering pattern. The SDW results in a pseudogap-like feature at the Fermi level in the LDOS. It is found that the SC order parameter with $s_{\pm}=\Delta_{0}\cos(k_{x})\cos(k_{y})$ symmetry is the most favorable pairing at both the electron- and hole-doped sides, while the LDOS exhibits the characteristic of a nodal gap for the former and a full gap for the latter. In the vortex states, the emergence of the field-induced SDW order depends heavily on the strength of the Hund’s coupling and the Coulomb repulsions. The coexistence of the field-induced SDW order and SC order around the core region is realized due to the fact that the two orders emerge at different energies. The corresponding LDOS at the core region displays a kind of dual structure, with one part reflecting the SC pairing and the other being related to the SDW order. The paper is organized as follows. In Sec. II, we introduce the model Hamiltonian and carry out analytical calculations. In Sec. III, we present numerical calculations and discuss the results. In Sec. IV, we present concluding remarks. THEORY AND METHOD ================= We start with an effective two-orbital model [@ragh1] that takes only the iron $d_{xz}$ and $d_{yz}$ orbitals into account. 
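As an aside, the nodal structure of the $s_{\pm}$ gap function quoted above can be checked directly from its momentum dependence. This is a toy sketch of ours, independent of the model details that follow:

```python
import numpy as np

def gap_s_pm(kx, ky, delta0=1.0):
    """s_pm gap: Delta(k) = Delta_0 * cos(kx) * cos(ky)."""
    return delta0 * np.cos(kx) * np.cos(ky)

# The gap is maximal at the zone centre, vanishes on the kx = pi/2 line,
# and has the opposite sign at (pi, 0), so hole pockets near (0, 0) and
# electron pockets near (pi, 0) see opposite-sign gaps.
centre = gap_s_pm(0.0, 0.0)
node = gap_s_pm(np.pi / 2, 0.0)
m_point = gap_s_pm(np.pi, 0.0)
```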
By assuming an effective attraction that causes the superconducting pairing and including the possible interactions between the two orbitals’ electrons, one can construct an effective model to study the vortex physics of the iron-based superconductors in the mixed state: $$\begin{aligned} H=H_{0}+H_{pair}+H_{int}.\end{aligned}$$ The first term is a tight-binding model $$\begin{aligned} H_{0}=&&-\sum_{ij,\alpha\beta,\sigma}e^{i\varphi_{ij}}t_{ij,\alpha\beta} c^{\dag}_{i,\alpha,\sigma}c_{j,\beta,\sigma} \nonumber \\ &&-\mu\sum_{i,\alpha,\sigma}c^{\dag}_{i,\alpha,\sigma}c_{i,\alpha,\sigma},\end{aligned}$$ which describes the electron effective hoppings between sites $i$ and $j$ of the Fe ions on the square lattice, including the intra- ($t_{ij,\alpha\alpha}$) and inter-orbital ($t_{ij,\alpha\beta}, \alpha\neq\beta$) hoppings with the subscripts $\alpha$, $\beta$ ($\alpha,\beta=1,2$ for the $xz$ and $yz$ orbitals, respectively) denoting the orbitals and $\sigma$ the spin. $c^{\dag}_{i,\alpha,\sigma}$ creates an $\alpha$ orbital electron with spin $\sigma$ at the site $i$ ($i\equiv(i_{x},i_{y})$), and $\mu$ is the chemical potential. The magnetic field is introduced through the Peierls phase factor $e^{i\varphi_{ij}}$ with $\varphi_{ij}=\frac{\pi}{\Phi_{0}}\int^{r_{i}}_{r_{j}}\mathbf{A(r)}\cdot d\mathbf{r}$, where $A=(-Hy, 0, 0)$ stands for the vector potential in the Landau gauge and $\Phi_{
--- abstract: 'Modern-day ‘testing’ of (perturbative) QCD is as much about pushing the boundaries of its applicability as about the verification that QCD is the correct theory of hadronic physics. This talk gives a brief discussion of a small selection of topics: factorisation and jets in diffraction, power corrections and event shapes, the apparent excess of $b$-production in a variety of experiments, and the matching of event generators and NLO calculations.' address: 'LPTHE, Universités Paris VI et Paris VII, Paris, France.' author: - 'G. P. Salam' title: 'QCD tests through hadronic final-state measurements[^1]' --- Introduction ============ [0.4]{} The testing of QCD is a subject that many would consider to be well into maturity. The simplest test is perhaps that ${\alpha_\mathrm{s}}$ values measured in different processes and at different scales should all be consistent. It suffices to take a look at compilations by the PDG [@PDG] or Bethke [@Bethke] to see that this condition is satisfied for a range of observables, to within the current theoretical and experimental precision, namely a few percent. There exist many other potentially more discriminatory tests; examples include explicit measurements of the QCD colour factors [@ColourFactors] or the running of the $b$-quark mass [@Bambade] — and there too one finds a systematic and excellent agreement with the QCD predictions. A significant amount of the data comes from HERA experiments, and to illustrate this, figure \[fig:HERAalphas\] shows a compilation of a subset of the results on ${\alpha_\mathrm{s}}$, as compiled by ZEUS [@ZEUSalphas]. In the space available, however, it would be impossible to give a critical and detailed discussion of the range of different observables that are used to verify that QCD is ‘correct’. Rather let us start from the premise that, in light of the large body of data supporting it, QCD *is* the right theory of hadronic physics, and consider what then is meant by ‘testing QCD’. 
One large body of activity is centred around constraining QCD. This includes such diverse activities as measuring fundamental (for the time being) unknowns such as the strong coupling and the quark masses; measuring quantities such as structure functions and fragmentation functions, which though formally predictable by the theory are beyond the scope of the tools currently at our disposal (perturbation theory, lattice methods); and the understanding, improvement and verification of the accuracy of QCD predictions, through NNLO calculations, resummations and projects such as the matching of fixed-order calculations with event-generators. One of the major purposes of such work is to provide a reliable ‘reference’ for the inputs and backgrounds in searches for new physics. A complementary approach to testing QCD is more about exploring the less well understood aspects of the theory, for example trying to develop an understanding of non-perturbative phenomena such as hadronisation and diffraction, or the separation of perturbative and non-perturbative aspects of problems such as heavy-quark decays; pushing the theory to new limits as is done at small-$x$ and in studies of saturation; or even the search for and study of qualitatively new phenomena and phases of QCD, be they within immediate reach of experiments (the quark-gluon plasma, instantons) or not (colour superconductors)! Of course these two branches of activity are far from being completely separated: it would in many cases be impossible to study the less well understood aspects of QCD without the solid knowledge that we have of its more ‘traditional’ aspects — and it is the exploration of novel aspects of QCD that will provide the ‘references’ of the future. The scope of this talk is restricted to tests involving final states. Final states tend to be highly discriminatory as well as complementary to more inclusive measurements. 
We shall consider two examples where our understanding of QCD has seen vast progress over the past years, taking us from a purely ‘exploratory’ stage almost to the ‘reference’ stage: the question of jets and factorisation in diffraction (section \[sec:Diff\]); and that of hadronisation corrections in event shapes (section \[sec:Hadr\]). We will then consider two questions that are more directly related to the ‘reference’ stage: the topical issue of the excess of $b$-quark production seen in a range of experiments (section \[sec:Heavy\]); and then the problem of providing Monte Carlo event generators that are correct to NLO accuracy, which, while currently only in its infancy, is a subject whose practical importance warrants an awareness of progress and pitfalls. For reasons of space, many active and interesting areas will not be covered in this talk, among them small-$x$ physics, progress in next-to-next-to-leading order calculations, questions related to prompt photons, the topic of generalised parton distributions and deeply-virtual Compton scattering, hints (or not) of instantons, a range of measurements involving polarisation and so on. Many of these subjects are widely discussed in other contributions to both the plenary and parallel sessions of this conference, to which the reader is referred for more details. Jets in diffraction and factorisation {#sec:Diff} ===================================== Factorisation, for problems explicitly involving initial or final state hadrons, is the statement that to leading twist, predictions for observables can be written as a convolution of one or more non-perturbative but universal functions (typically structure or fragmentation functions) with some perturbatively calculable coefficient function. 
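Schematically, the leading-twist statement is just $\sigma = \int_0^1 \mathrm{d}x\, f(x)\, C(x)$, a parton density convoluted with a coefficient function. A toy numerical version, with invented functional forms chosen purely for illustration, is:

```python
def convolve(pdf, coeff, n=100000):
    """Leading-twist factorisation, schematically:
    sigma = integral_0^1 dx f(x) C(x), evaluated by midpoint quadrature."""
    h = 1.0 / n
    return sum(pdf((i + 0.5) * h) * coeff((i + 0.5) * h) for i in range(n)) * h

# Toy inputs (not fits): a valence-like density, normalised to unit
# integral, and a trivial coefficient function.
pdf = lambda x: 6.0 * x * (1.0 - x)
coeff = lambda x: 1.0
sigma = convolve(pdf, coeff)
```

The universality claim is that the same `pdf` enters every observable, with only `coeff` changing from process to process.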
[0.42]{} While factorisation has long been established in inclusive processes [@GenFact] it has been realised in the past few years [@DiffFact] that it should also hold in more exclusive cases — in particular for diffraction, in terms of diffractive parton distributions $f_{a/p}^\mathrm{diff}(x,x_{\mathrm{I\!P}},\mu^2,t)$, which can be interpreted loosely as being related to the probability of finding a parton $a$ at scale $\mu^2$ with longitudinal momentum fraction $x$, inside a diffractively scattered proton $p$, which in the scattering exchanges a squared momentum $t$ and loses a longitudinal momentum fraction $x_{\mathrm{I\!P}}$. These kinematic variables are illustrated in fig. \[fig:Diffraction\]. The dependence of the diffractive parton distributions on so many variables means that without a large kinematical range (separately in $x$, $x_{\mathrm{I\!P}}$ and $Q^2$, while perhaps integrating over $t$) it is a priori difficult to thoroughly test diffractive factorisation. An interesting simplifying assumption is that of Regge factorisation, where one writes [@IngelmanSchlein] $$f_{a/p}^\mathrm{diff}(x,x_{\mathrm{I\!P}},\mu^2,t) = |\beta_p(t)|^2 x_{\mathrm{I\!P}}^{-2\alpha(t)} f_{a/{\mathrm{I\!P}}}(x/x_{\mathrm{I\!P}}, \mu^2, t)$$ the interpretation of diffraction being due to (uncut) pomeron exchange (first two factors), with the virtual photon probing the parton distribution of the pomeron (last factor). As yet no formal justification exists for this extra Regge factorisation. Furthermore given that diffraction is arguably related to saturation and high parton densities (assuming the AGK cutting rules [@AGK]) one could even question the validity of arguments for general diffractive factorisation, which rely on parton densities being low (as does normal inclusive factorisation). The experimental study of factorisation in diffraction relied until recently exclusively on inclusive $F_2^d$ measurements. 
This was somewhat unsatisfactory because of the wide range of alternative models able to reproduce the data and even the existence of significantly different forms for the $f_{a/{\mathrm{I\!P}}}(x/x_{\mathrm{I\!P}}, \mu^2, t)$ which gave a satisfactory description of the data within the Regge factorisation picture. However diffractive factorisation allows one to predict not only inclusive cross sections but also jet cross sections. Results in the Regge factorisation framework are compared to data in figure \[fig:DiffDijets\] (taken from [@SchillingThesis]), showing remarkable agreement between the data and the predictions (based on one of the pomeron PDF fits obtained from $F_2^d$). On the other hand, when one considers certain other models that work well for $F_2^d$ the disagreement is dramatic, as for example is shown with the soft colour neutralisation models [@SCI; @BGH] in figure \[fig:DiffSCI\]. [0.42]{} Despite this apparently strong confirmation of diffractive factorisation, a word of warning is perhaps needed. Firstly there exist other models which have not been ruled out (for example the dipole model [@DiffDipole]). In these cases it would be of interest to establish whether these models can be expressed in a way which satisfies some effective kind of factorisation. Other important provisos are that a diffractive PDF fit based on more recent $F_2^{d}$ data has a lower gluon distribution and so leads to diffractive dijet predictions which are a bit lower than the data, though still compatible to within experimental and theoretical uncertainties [@DijetTalks]. And secondly that the predictions themselves are based on the Rapgap event generator [@Rapgap] which incorporates only leading order dijet production. It would be of interest (and assuming that the results depend little on the treatment of the ‘pomeron remnant,’ technically not
--- abstract: | We study integrable systems on double Lie algebras in the absence of an Ad-invariant bilinear form by passing to the semidirect product with the $\tau$-representation. We show that in this setting a natural Ad-invariant bilinear form does exist, allowing for a straightforward application of the AKS theory, and giving rise to a Manin triple structure, thus bringing the problem to the realm of Lie bialgebras and Poisson-Lie groups. author: - | **S. Capriotti$^{\dag }$ & H. Montani$^{\ddag }$[[^1] ]{}**\  *Departamento de Matemática, Universidad Nacional del Sur,* \   *Av. Alem 1253, 8000* - *Bahía Blanca, Buenos Aires, Argentina.*\ \  CONICET & *Departamento de Ciencias Exactas y Naturales,*\   *Universidad Nacional de la Patagonia Austral.*\   *9011 - Caleta Olivia, Argentina*\ title: | **Double Lie algebras, semidirect product, and integrable systems**\   --- Introduction ============ The deep relation between integrable systems and Lie algebras finds its optimal realization when the involved Lie algebra is equipped with an ad-invariant nondegenerate symmetric bilinear form. There, the coadjoint orbit setting turns out to be equivalent to the Lax pair formulation, the Adler-Kostant-Symes theory of integrability [@Adler],[@Kostant],[@Symes] works perfectly, the Poisson-Lie group structures and Lie bialgebras naturally arise, etc. Semisimplicity is a usual requirement warranting a framework with this kind of bilinear form; outside this framework, however, it becomes a rather stringent condition. This is the case with semidirect product Lie algebras. Integrable systems can also be modelled on Lie groups, and their Hamiltonian version is realized on their cotangent bundle. 
There, reduction of the cotangent bundle of a Lie group by the action of some Lie subgroup brings the problem to the realm of semidirect products [@Guillemin-Sternberg], [@MWR] where, in spite of the lack of semisimplicity, an ad-invariant form can be defined provided the original Lie algebra had one. Also, at the Lie algebra level, in ref. [@Trofimov; @1983] the complete integrability of the Euler equations on a semidirect product of a semisimple Lie algebra with its adjoint representation was proven. In ref. [@CapMon; @JPA], the AKS theory was applied to study integrable systems on this kind of Lie groups. However, the lack of an ad-invariant bilinear form is not an obstruction to the application of AKS ideas. In fact, in ref. [@Ovando; @1],[@Ovando; @2] the AKS theory is adapted to a context equipped with a symmetric and nondegenerate bilinear form, which also produces a decomposition of the Lie algebra into two complementary orthogonal subspaces. This is performed by using the *B operation* introduced by Arnold in [@Arnold; @1], and realizing that it amounts to an action of the Lie algebra on itself, which can be promoted to an action of the Lie group on its Lie algebra, called the $\tau $-action. Thus, the restriction of the system to one of its orthogonal components becomes integrable by factorization. The main goal of this work is to study integrable systems on a semidirect product of a Lie algebra with its adjoint representation, disregarding the ad-invariance property of the bilinear form. So, the framework is that of a double Lie algebra $\mathfrak{g}=\mathfrak{g}_{+}\oplus \mathfrak{g}_{-}$ equipped with a symmetric nondegenerate bilinear form, and the semidirect product $\mathfrak{h}=\mathfrak{g}\ltimes _{\tau }\mathfrak{g}$, where the left factor $\mathfrak{g}$ acts on the other one by the $\tau $-action.
The main achievement is the introduction of an $\mathrm{ad}^{\mathfrak{h}}$-invariant symmetric nondegenerate bilinear form which induces a decomposition $\mathfrak{h}=\mathfrak{h}_{+}\oplus \mathfrak{h}_{-}$, with $\mathfrak{h}_{+},\mathfrak{h}_{-}$ being Lie subalgebras and isotropic subspaces. In this way, a natural Manin triple structure arises on the Lie algebra $\mathfrak{h}$, bringing the problem into the realm of the original AKS theory and into that of Lie bialgebras and Poisson-Lie groups. In fact, we show how integrable systems on $\mathfrak{h}_{\pm }$, arising from the restriction of an almost trivial system on $\mathfrak{h}$ defined by an $\mathrm{ad}^{\mathfrak{h}}$-invariant Hamiltonian function, can be solved by the factorization of an exponential curve in the associated connected simply-connected Lie group $H$. Moreover, we explicitly build the Poisson-Lie structures on the factors $H_{\pm }$ of the group $H$. As the application of main interest, we consider the case of Lie groups admitting no bi-invariant Riemannian metric. From the result by Milnor [@Milnor] asserting that a Riemannian metric is bi-invariant if and only if the Lie group is a product of compact semisimple and Abelian groups, one finds a wide class of examples fitting in the above scheme among the solvable and nilpotent Lie algebras. Many examples with dimension up to 6 are studied in ref. [@ghanam]; one of them is fully developed in the present work as an example. The work is organized as follows: in Section II we fix the algebraic tools of the problem by introducing the $\tau $-action; in Section III we present the main results of this work, dealing with many issues in the semidirect product framework. In Section IV we show how integrability by factorization works in the framework developed in the previous section. In Section V, we present three examples without Ad-invariant bilinear forms to which we apply the construction developed in the previous sections.
Finally, in Section VI we include some conclusions. Double Lie algebras and the $\protect\tau $-action ================================================== Let us consider a *double Lie group* $\left( G,G_{+},G_{-}\right) $ and its associated *double Lie algebra* $\left( \mathfrak{g},\mathfrak{g}_{+},\mathfrak{g}_{-}\right) $. This means that $G_{+}$ and $G_{-}$ are Lie subgroups of $G$ such that $G=G_{+}G_{-}$, and that $\mathfrak{g}_{+}$ and $\mathfrak{g}_{-}$ are Lie subalgebras of $\mathfrak{g}$ with $\mathfrak{g}=\mathfrak{g}_{+}\oplus \mathfrak{g}_{-}$. We also assume there is a symmetric nondegenerate bilinear form $\left( \cdot ,\cdot \right) _{\mathfrak{g}}$ on $\mathfrak{g}$, which induces the direct sum decomposition: $$\mathfrak{g}=\mathfrak{g}_{+}^{\perp }\oplus \mathfrak{g}_{-}^{\perp }$$where $\mathfrak{g}_{\pm }^{\perp }$ are the annihilator subspaces of $\mathfrak{g}_{\pm }$, respectively, $$\mathfrak{g}_{\pm }^{\perp }:=\left\{ Z\in \mathfrak{g}:\left( Z,X\right) _{\mathfrak{g}}=0\quad \forall X\in \mathfrak{g}_{\pm }\right\}$$ Since the bilinear form is not assumed to be $\mathrm{Ad}^{G}$-invariant, the adjoint action is not a good symmetry for building integrable systems. Following references [@Ovando; @1], [@Ovando; @2], where AKS ideas are adapted to a framework lacking an ad-invariant bilinear form by using the so-called $\tau $*-action*, we take this symmetry as the building block of our construction. Let us briefly review the main result of these references: let $\mathfrak{g}=\mathfrak{g}_{+}\oplus \mathfrak{g}_{-}$ be as above; then the adjoint action induces the $\tau $*-action* defined as$$\begin{array}{ccc} \mathrm{ad}^{\tau }:\mathfrak{g}\rightarrow \mathrm{End}\left( \mathfrak{g}\right) & / & \left( \mathrm{ad}_{X}^{\tau }Z,Y\right) _{\mathfrak{g}}:=-\left( Z,\left[ X,Y\right] \right) _{\mathfrak{g}}\end{array} \label{ad-tao 0}$$$\forall X,Y,Z\in \mathfrak{g}$.
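In coordinates, the defining relation determines $\mathrm{ad}^{\tau }$ from the adjoint representation and the Gram matrix of the bilinear form: writing $\left( u,v\right) _{\mathfrak{g}}=u^{T}Bv$ with $B$ symmetric and nondegenerate, one obtains $\mathrm{ad}_{X}^{\tau }=-B^{-1}\,\mathrm{ad}_{X}^{T}\,B$. The following numerical sketch is our own illustration (not from the paper), using the two-dimensional nonabelian Lie algebra $[e_1,e_2]=e_2$ and the Euclidean form, which is not ad-invariant:

```python
import numpy as np

def ad_tau(ad_X, B):
    """tau-representation of X, solving (ad^tau_X Z, Y)_g = -(Z, [X, Y])_g
    for the form (u, v)_g = u^T B v; in matrix form ad^tau_X = -B^{-1} ad_X^T B."""
    return -np.linalg.solve(B, ad_X.T @ B)

# The 2-dimensional nonabelian Lie algebra [e1, e2] = e2,
# with the (non ad-invariant) Euclidean form B = I.
ad_e1 = np.array([[0.0, 0.0],
                  [0.0, 1.0]])   # ad_{e1}: e1 -> 0, e2 -> e2
B = np.eye(2)
T = ad_tau(ad_e1, B)

# Check the defining identity on random vectors:
rng = np.random.default_rng(0)
Z, Y = rng.standard_normal(2), rng.standard_normal(2)
lhs = (T @ Z) @ B @ Y            # (ad^tau_X Z, Y)_g
rhs = -(Z @ B @ (ad_e1 @ Y))     # -(Z, [X, Y])_g
assert np.isclose(lhs, rhs)
```

Here $\mathrm{ad}_{e_1}^{\tau }=-\mathrm{ad}_{e_1}^{T}$, which differs from $-\mathrm{ad}_{e_1}$ precisely because the Euclidean form is not ad-invariant.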
It can be promoted to an action of the associated Lie group $G$ on $\mathfrak{g}$ through the exponential map, namely $$\begin{array}{ccc} \tau :G\rightarrow \mathrm{Aut}\left( \mathfrak{g}\right) & / & \left( \tau \left( g\right) X,Y\right) _{\mathfrak{g}}:=\left( X,\mathrm{Ad}_{g^{-1}}^{G}Y\right) _{\mathfrak{g}}\end{array}$$Often we also use the notation $\tau _{g}X=\tau \left( g\right) X$. It is worth observing that, since the bilinear form is nondegenerate, it allows for the identification of $\mathfrak{g}$ with its dual vector space $\mathfrak{
--- abstract: 'We derive sharp estimates on the modulus of continuity for solutions of the heat equation on a compact Riemannian manifold with a Ricci curvature bound, in terms of initial oscillation and elapsed time. As an application, we give an easy proof of the optimal lower bound on the first eigenvalue of the Laplacian on such a manifold as a function of diameter.' address: - 'Mathematical Sciences Institute, Australian National University; Mathematical Sciences Center, Tsinghua University; and Morningside Center for Mathematics, Chinese Academy of Sciences.' - 'Mathematical Sciences Institute, Australian National University' author: - Ben Andrews - Julie Clutterbuck title: Sharp modulus of continuity for parabolic equations on manifolds and lower bounds for the first eigenvalue --- [^1] Introductory comments ===================== In our previous papers [@AC1; @AC2] we proved sharp bounds on the modulus of continuity of solutions of various parabolic boundary value problems on domains in Euclidean space. In this paper, our aim is to extend these estimates to parabolic equations on manifolds. Precisely, let $(M,g)$ be a compact Riemannian manifold with induced distance function $d$, diameter $\sup\{d(x,y):\ x,y\in M\}=D$ and lower Ricci curvature bound $\text{\rm Ric}(v,v)\geq (n-1)\kappa g(v,v)$. Let $a:\ T^*M\to\text{\rm Sym}_2\left(T^*M\right)$ be a parallel equivariant map (so that $a(S^*\omega)(S^*\mu,S^*\nu)=a(\omega)(\mu,\nu)$ for any $\omega$, $\mu$, $\nu$ in $T_x^*M$ and $S\in O(T_xM)$, while $\nabla\left(a(\omega)(\mu,\nu)\right)=0$ whenever $\nabla\omega=\nabla\mu=\nabla\nu=0$). Then we consider solutions to the parabolic equation $$\begin{aligned} \label{eq:flow} \dfrac{\partial u}{\partial t} &= a^{ij}(Du)\nabla_i\nabla_ju.
\end{aligned}$$ Our assumptions imply that the coefficients $a^{ij}$ have the form $$\label{eq:formofa} a(Du)(\xi,\xi) =\alpha(|Du|)\frac{\left(Du\cdot\xi\right)^2}{|Du|^2} + \beta(|Du|)\left(|\xi|^2-\frac{\left(Du\cdot\xi\right)^2}{|Du|^2}\right)$$ for some smooth positive functions $\alpha$ and $\beta$. Of particular interest are the cases of the heat equation (with $\alpha=\beta=1$) and the $p$-laplacian heat flows (with $\alpha=(p-1)|Du|^{p-2}$ and $\beta = |Du|^{p-2}$). Here we are principally concerned with the case of manifolds without boundary, but can also allow $M$ to have a nontrivial convex boundary (in which case we impose Neumann boundary conditions $D_\nu u=0$). Our main aim is to provide the following estimates on the modulus of continuity of solutions in terms of the initial oscillation, elapsed time, $\kappa$ and $D$: \[thm:moc\] Let $(M,g)$ be a compact Riemannian manifold (possibly with smooth, uniformly locally convex boundary) with diameter $D$ and Ricci curvature bound $\operatorname{Ric}\ge (n-1)\kappa g$ for some constant $\kappa\in{\mathbb{R}}$. Let $u: M\times [0,T)\rightarrow {{\mathbb R}}$ be a smooth solution to equation , with Neumann boundary conditions if $\partial M\neq\emptyset$. Suppose that - $u(\cdot,0)$ has a smooth modulus of continuity $\varphi_0:[0,D/2]\rightarrow{{\mathbb R}}$ with $\varphi_0(0)=0$ and $\varphi_0'\geq 0$; - $\varphi:[0,D/2]\times {\mathbb{R}}_+\rightarrow {\mathbb{R}}$ satisfies 1. $\varphi(z,0)=\varphi_0(z)$ for each $z\in[0,D/2]$; 2. \[1deqn\] $\frac{\partial \varphi}{\partial t}\ge \alpha(\varphi')\varphi'' - (n-1) {\mathbf{T}_\kappa}\beta(\varphi')\varphi'$; 3. $\varphi'\geq 0$ on $[0,D/2]\times {\mathbb{R}}_+$. 
Then $\varphi(\cdot,t)$ is a modulus of continuity for $u(\cdot,t)$ for each $t\in[0,T)$: $$|u(x,t)-u(y,t)|\le 2\varphi\left( \frac{d(x,y)}2,t\right).$$ Here we use the notation $$\label{defn of ck} {\mathbf{C_\kappa}}(\tau )=\begin{cases} \cos\sqrt{\kappa}\tau , & \kappa>0 \\ 1, &\kappa =0 \\ \cosh \sqrt{-\kappa} \tau , & \kappa <0, \end{cases} \quad \text{ and } \quad {\mathbf{S_\kappa}}(\tau )=\begin{cases} \frac1{\sqrt{\kappa}}\sin\sqrt{\kappa}\tau , & \kappa>0 \\ \tau , &\kappa =0 \\ \frac1{\sqrt{-\kappa}}\sinh \sqrt{-\kappa} \tau , & \kappa <0, \end{cases}$$ and $${\mathbf{T}_\kappa}(s) := \kappa\frac{{\mathbf{S_\kappa}}(s)}{{\mathbf{C_\kappa}}(s)}=\begin{cases} \sqrt{\kappa}\tan\left(\sqrt{\kappa}s\right),&\kappa>0\\ 0,& \kappa=0\\ -\sqrt{-\kappa}\tanh\left(\sqrt{-\kappa}s\right),&\kappa<0. \end{cases}$$ These estimates are sharp, holding exactly for certain symmetric solutions on particular warped product spaces. The modulus of continuity estimates also imply sharp gradient bounds which hold in the same situation. The central ingredient in our argument is a comparison result for the second derivatives of the distance function (Theorem \[thm:dist-comp\]) which is a close relative of the well-known Laplacian comparison theorem. We remark that the assumption of smoothness can be weakened: For example in the case of the $p$-laplacian heat flow we do not expect solutions to be smooth near spatial critical points, but nevertheless solutions are smooth at other points and this is sufficient for our argument. 
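The piecewise functions ${\mathbf{C_\kappa}}$, ${\mathbf{S_\kappa}}$ and ${\mathbf{T}_\kappa}$ translate directly into code; the following sketch (function names are ours) implements the definitions above and makes the sign conventions easy to check:

```python
import math

def C_kappa(kappa, t):
    """Generalised cosine: cos(sqrt(k) t) for k > 0, 1 for k = 0,
    cosh(sqrt(-k) t) for k < 0."""
    if kappa > 0:
        return math.cos(math.sqrt(kappa) * t)
    if kappa < 0:
        return math.cosh(math.sqrt(-kappa) * t)
    return 1.0

def S_kappa(kappa, t):
    """Generalised sine: solves S'' + k S = 0 with S(0) = 0, S'(0) = 1."""
    if kappa > 0:
        return math.sin(math.sqrt(kappa) * t) / math.sqrt(kappa)
    if kappa < 0:
        return math.sinh(math.sqrt(-kappa) * t) / math.sqrt(-kappa)
    return t

def T_kappa(kappa, s):
    """T_k = k S_k / C_k: sqrt(k) tan(sqrt(k) s) for k > 0, 0 for k = 0,
    -sqrt(-k) tanh(sqrt(-k) s) for k < 0."""
    return kappa * S_kappa(kappa, s) / C_kappa(kappa, s)
```

Note that writing ${\mathbf{T}_\kappa}=\kappa\,{\mathbf{S_\kappa}}/{\mathbf{C_\kappa}}$ makes the $\kappa=0$ case and the continuity in $\kappa$ automatic, matching the closed forms in the three branches.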
As an immediate application of the modulus of continuity estimates, we provide a new proof of the optimal lower bound on the smallest positive eigenvalue of the Laplacian in terms of $D$ and $\kappa$: precisely, if we define $$\lambda_1(M,g) = \inf\left\{\int_M |Du|_g^2\,d\text{\rm Vol}(g):\ \int_M u^2d\text{\rm Vol}(g)=1,\ \int_Mu\,d\text{\rm Vol}(g)=0\right\},$$ and $$\lambda_1(D,\kappa,n) = \inf\left\{\lambda_1(M,g):\ \text{\rm dim}(M)=n,\ \text{\rm diam}(M)\leq D,\ \text{\rm Ric}\geq (n-1)\kappa g\right\},$$ then we characterise $\lambda_1(D,\kappa,n)$ precisely as the first eigenvalue of a certain one-dimensional Sturm-Liouville problem: \[first eigenvalue estimate\] Let $\mu$ be the first eigenvalue of the Sturm–Liouville problem $$\begin{gathered} \label{SL equation} \begin{split} \frac1{{\mathbf{C_\kappa}}^{n-1}}\left(\Phi' {\mathbf{C_\kappa}}^{n-1}\right)' +\mu\Phi&=0 \text{ on }[-D/2,D/2],\\ \Phi'(\pm D/2 )&=0. \end{split}\end{gathered}$$ Then $\lambda_1(D,\kappa,n)=\mu$. Previous results in this direction include the results derived from gradient estimates due to Li [@Li-ev] and Li and Yau [@LiYau], with the sharp result for non-negative Ricci curvature first proved by Zhong and Yang [@ZY]. The complete result as stated above is implicit in the results of Kröger [@Kroeger, Theorem 2] and explicit in those of Bakry and Qian [@BakryQian, Theorem 14], which are also based on gradient estimate methods. Our contribution is the rather simple proof using the long-time behaviour of the heat equation (a method which was also central in our work on the fundamental gap conjecture [@AC3], and which has also been employed successfully in [@Ni]) which seems considerably easier than the previously available arguments. In particular the complications arising in previous works from possible asymmetry of the first eigenfunction are avoided in our argument.
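For $\kappa=0$ the weight ${\mathbf{C_\kappa}}^{n-1}\equiv 1$ and the Sturm–Liouville problem reduces to $\Phi''+\mu\Phi=0$ with Neumann conditions on $[-D/2,D/2]$, whose first nonzero eigenvalue is $(\pi/D)^2$ — the classical Zhong–Yang bound. A finite-difference sketch (our own illustration, not part of the paper) confirming this special case numerically:

```python
import numpy as np

def sl_first_eigenvalue(D, N=200):
    """First nonzero Neumann eigenvalue of -Phi'' = mu Phi on [-D/2, D/2]
    (the kappa = 0 case of the Sturm-Liouville problem, where C_kappa = 1),
    computed by second-order finite differences on N+1 grid points."""
    h = D / N
    A = np.zeros((N + 1, N + 1))
    for i in range(1, N):            # interior rows of -Phi''
        A[i, i - 1] = A[i, i + 1] = -1.0
        A[i, i] = 2.0
    # ghost-point closure of the Neumann conditions Phi'(+-D/2) = 0:
    # Phi_{-1} = Phi_1 and Phi_{N+1} = Phi_{N-1}
    A[0, 0], A[0, 1] = 2.0, -2.0
    A[N, N], A[N, N - 1] = 2.0, -2.0
    mu = np.sort(np.linalg.eigvals(A / h**2).real)
    return mu[1]                     # mu[0] ~ 0 is the constant eigenfunction

# kappa = 0 recovers the sharp Zhong-Yang bound pi^2 / D^2:
D = 2.0
assert abs(sl_first_eigenvalue(D) - (np.pi / D)**2) < 1e-3
```

The discrete eigenfunctions of this scheme are exactly $\cos(k\pi i/N)$, so the computed eigenvalue converges to $(\pi/D)^2$ at second order in the grid spacing.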
A similar argument proving the sharp lower bound for $\lambda_1$ on a Bakry-Emery manifold may be found in [@Andrews-Ni]. The estimate in Theorem \[first
--- abstract: 'Situational awareness is crucial for effective disaster management. However, obtaining information about the actual situation is usually difficult and time-consuming. While there has been some effort in terms of incorporating the affected population as a source of information, the issue of obtaining trustworthy information has not yet received much attention. Therefore, we introduce the concept of witness-based report verification, which enables users from the affected population to evaluate reports issued by other users. We present an extensive overview of the objectives to be fulfilled by such a scheme and provide a first approach considering security and privacy. Finally, we evaluate the performance of our approach in a simulation study. Our results highlight synergetic effects of group mobility patterns that are likely in disaster situations.' author: - bibliography: - 'IEEEabrv.bib' - 'references.bib' title: Towards Trustworthy Mobile Social Networking Services for Disaster Response --- Security and Privacy Protection, Mobile communication systems, Multicast Introduction {#sec:intro} ============ Responding to large-scale disasters has always been a challenging task. One of the reasons for this is the unpredictability of the actual situation at hand. With first responders usually being short on technical and human resources, an awareness of the current circumstances, e.g., the location of casualties, is essential for effectively providing help to victims within the first critical hours. In order to increase the situational awareness of officials and to support mutual first response, the concept of incorporating the affected population as a potential source of information has emerged recently [@palen2010vision].
Among the potential services for disaster response [@wozniak2011towards], one of the most important is a reporting service that enables the affected population to issue reports about the locations of victims, remaining or evolving hazards, resource requirements, etc. With other services building upon the data collected by this service, it is essential that this information is authentic and accurate to allow appropriate decision making. Therefore, apart from ensuring a high quality of information, a crucial aspect of this service is to implement countermeasures against users trying to inject false or inaccurate information about allegedly urgent events. In this work, we introduce a rating approach relying on the affected population to verify the correctness and urgency of reports. In our approach, which we refer to as , witnesses report certain events to so-called verifier nodes. These verifier nodes issue confirmation requests to potential witnesses of the event, asking them to decide about the accuracy and urgency of the report. Witnesses can then vote with their decision, allowing the verifier node to rate a report (see Fig. \[fig:concept\]). Our witness-based approach is inspired by the issue of obtaining credible information in *social swarming* applications [@liu2011optimizing]. In social swarming, a swarm of users tries to cooperatively fulfill certain tasks, e.g., search and rescue. Users in the swarm may send reports to a swarm director using their smartphones. Based on his global view, the swarm director then provides instructions to users to achieve the common goal. In order to obtain credible information, the swarm director may selectively query users for confirmation. Accordingly, in our verification scheme, confirmation requests are issued to certain users. However, in that work, the authors focus on the problem of optimizing the network resources by querying the most suitable users based on their credibility under normal network conditions.
In contrast, we apply the concept of querying specific users to deal with the challenges of verifying reports in disaster situations. On one hand, this concerns the need to communicate in a delay-tolerant manner due to the failure of parts of the network infrastructure. On the other hand, in order to meet legal requirements and gain acceptance among users, such a scheme has to protect the privacy of the witnesses. This is especially the case if such an approach is deployed on mobile devices that are also used in normal conditions, e.g., to provide help also in a small-scale car accident. ![Witness-based report verification[]{data-label="fig:concept"}](concept.pdf) Apart from the issue of obtaining credible information in social swarming, there are several related research areas. On one hand, there has been work on trustworthy ubiquitous emergency communication [@weber2011mundomessage]. However, it focuses on first responders and does not consider the verification of information for services. On the other hand, regarding the issue of crowdsourcing information in disasters, existing approaches are usually open-access, with no or only limited verification [@gao2011harnessing]. Furthermore, while there has been work on the trustworthiness of information obtained from microblogging services for emergency situations [@gupta2012credibility], the aspect of querying witnesses in the disaster area in order to verify reports has not been considered yet. Finally, our approach can be considered an application of the concept of *spatiotemporal multicast*, where a message is delivered to users, i.e., witnesses, encountered in the past while protecting their privacy from the sender of the message [@wozniak2012geocast]. In this article we make the following contributions: We propose the concept of witness-based report verification in the context of a reporting service for disasters and derive extensive security and privacy objectives (section \[sec:objectives\]).
Furthermore, we present a first approach for such a scheme (section \[sec:approach\]) and provide a detailed discussion of its security and privacy features (section \[sec:discussion\]). Finally, we evaluate our approach by an extensive simulation study (section \[sec:evaluation\]). Design Objectives {#sec:objectives} ================= In this work, we consider a network model where users are able to sporadically access the Internet via a cellular network infrastructure. Furthermore, we assume that devices are able to communicate directly forming a local wireless network. Functional Objectives --------------------- **Proximity restriction:** Only users close to an event should be able to vote for reports about this event. **Deferring of votes:** Users should be able to defer a vote, e.g., if a user has to provide first aid, he should be able to defer his vote and submit it later. Non-functional Objectives ------------------------- **Verification delay:** Reports should be verified quickly. **Robustness:** After a disaster, parts of the infrastructure may fail. Hence, the scheme has to operate in a delay- and disruption-tolerant manner. Furthermore, it should be robust against occasional false reports and votes. **Scalability:** The objectives should not be severely degraded by an increasing number of users and reports. **Efficiency:** The service should be efficient in terms of computation, memory, and communication overhead. Security Objectives ------------------- **Secure communication:** Reports and votes must be delivered with confidentiality, authenticity, and integrity. **Resilient decision making:** The service should be resilient against malicious reports and votes. Consequently, users must only issue one report about an event and vote once for each report. Thus, attackers must not be able to perform Sybil attacks. **Accountability:** Official authorities should be able to obtain the identity of a reporter or witness for the prosecution of crimes.
However, restrictions must apply for access to this information in order to prevent abuse. **Availability:** The verification service should provide resistance against attacks. This includes spamming of reports and votes. Privacy Objectives ------------------ **Anonymity:** Attackers must not learn about the identities of users issuing reports and votes. **Location privacy:** Attackers must not determine the location of users. Otherwise, by following their movements, attackers might be able to infer their identities. **Co-location privacy:** Attackers must not determine whether two users have been residing at the same location at the same time. Otherwise, attackers might infer a social connection between those users. **Absence privacy:** Attackers must not learn about a user’s absence from a location during a certain time. This information can be harmful if a user was not present at a location although he was supposed to be. Verification Approach {#sec:approach} ===================== In this article, we present a verification scheme, which we refer to as . Our approach allows users to report events to one of potentially many *verifiers* via their smartphones, i.e., **. In order to verify a report, the verifier issues confirmation requests to users that have been residing close to the event at the time the report has been submitted. Delivering these confirmation requests in a privacy-preserving manner while supporting delay-tolerant communication and deferring of votes requires a scheme [@wozniak2012geocast]. It is necessary to rely on this concept as employing a scheme would require witnesses to stay close to the place of the event, which is an unrealistic assumption. Therefore, building upon the approach in [@wozniak2012geocast], we rely on ** to deliver confirmation requests in a privacy-preserving manner. 
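The verifier-side rating step can be sketched as follows; the function name, the minimum-quorum and majority thresholds, and the decision labels are illustrative assumptions of ours, not the scheme actually proposed in the paper:

```python
from collections import Counter

def rate_report(votes, min_votes=3, threshold=0.5):
    """Rate a report from witness votes (a minimal sketch of the verifier
    logic; parameters are illustrative). votes maps witness_id -> True/False,
    so keying on witness_id enforces at most one vote per witness.
    Returns 'confirmed', 'rejected', or 'undecided'."""
    if len(votes) < min_votes:
        return "undecided"           # robustness: too few witnesses responded
    tally = Counter(votes.values())
    if tally[True] / len(votes) > threshold:
        return "confirmed"
    return "rejected"

# A report confirmed by two of three responding witnesses:
assert rate_report({"w1": True, "w2": True, "w3": False}) == "confirmed"
# A single vote is not enough to decide:
assert rate_report({"w1": True}) == "undecided"
```

Requiring a quorum before deciding reflects the robustness objective against occasional false reports and votes; deferred votes simply arrive later and re-trigger the rating.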
This -based approach requires that users poll at regular time intervals using a *token* $\tau$ containing a *key* $K$ that has been negotiated at some location and time in the past. To allow for extensive anonymity guarantees, these tokens are negotiated between nearby users in a cryptographically secure manner. Hence, in certain time intervals, users initiate the negotiation of a *group key* $K$ with all users that are currently in communication range. Users may also forward the negotiation requests over several hops to increase the number of users within a
--- author: - | \ *1. Institute of Experimental Physics, Johannes Kepler University Linz, A-4040 Linz, Austria*\ *2. State Key Laboratory of Precision Measuring Technology and Instruments, Tianjin University, Weijin Road 92,*\ *Nankai District, CN-300072 Tianjin, China*\ *3. Nanchang Institute for Microtechnology of Tianjin University, Weijin Road 92, Nankai District, 300072 Tianjin, China*\ *\* Corresponding author: lidong.sun@jku.at* bibliography: - 'RDSofMoS2.bib' title: ' Substrate Induced Optical Anisotropy in Monolayer MoS$_2$' --- Abstract {#abstract .unnumbered} ======== In-plane optical anisotropy has been detected from monolayer MoS$_2$ grown on an a-plane $(11\overline{2}0)$ sapphire substrate in the ultraviolet-visible wavelength range. Based on the measured optical anisotropy, the energy differences between the optical transitions polarized along the ordinary and extraordinary directions of the underlying sapphire substrate have been determined. The results are comprehensively consistent with the dielectric-environment-induced modification of the electronic band structure and exciton binding energy of monolayer MoS$_2$ predicted recently by first-principles calculations. This study proposes symmetry as a new degree of freedom for dielectric engineering of two-dimensional materials.\ **Keywords:** Monolayer MoS$_2$, Optical anisotropy, Dielectric screening, Dielectric Engineering, Two-dimensional (2D) materials. Introduction ============ Among the most studied two-dimensional (2D) semiconductors, monolayer transition metal dichalcogenides (TMDCs) serve as a platform for fundamental studies at the nanoscale and promise a wide range of potential applications.
[@mak2010Atomically; @splendiani2010Emerging; @Radisavljevic2011Single; @Wang2012Electronics; @Geim2013Van; @qiu2013Optical; @ugeda2014giant] Recently, the dielectric-environment-induced modification of the excitonic structures of monolayer TMDCs has become a topic of intensive research effort, [@komsa2012effects; @chernikov2014exciton; @stier2016probing; @rosner2016two; @qiu2016screening; @raja2017coulomb; @kirsten2017band; @cho2018environmentally; @wang2018Colloquium] and the potential of the so-called dielectric engineering in constructing novel optoelectronic devices has also been demonstrated.[@raja2017coulomb; @utama2019dielectric; @raja2019dielectric] For freestanding monolayer TMDCs, due to quantum confinement and reduced dielectric screening, the Coulomb interactions between charge carriers are enhanced, leading to a significant renormalization of the electronic structure and the formation of tightly bound excitons. While a freestanding monolayer in vacuum represents the utmost reduction of dielectric screening, the electronic band structure and the binding energy between charge carriers in monolayer TMDCs can also be tuned by selecting the dielectric environment. Indeed, first-principles calculations predict a monotonic decrease of both electronic bandgap and exciton binding energy with increasing dielectric screening,[@komsa2012effects; @qiu2013Optical; @kirsten2017band; @cho2018environmentally; @wang2018Colloquium] which has also been observed experimentally [@stier2016probing; @rosner2016two; @raja2017coulomb; @wang2018Colloquium].
Recently, by overlapping a homogeneous monolayer of MoS$_2$ (molybdenum disulfide) with the boundary connecting two substrates with different dielectric constants, an operational lateral heterojunction diode has been successfully constructed.[@utama2019dielectric] More recently, a new concept named “dielectric order” has been introduced, and its strong influence on the electronic transitions and exciton propagation has been illustrated using a monolayer of WS$_2$ (tungsten disulfide) [@raja2019dielectric]. However, among these in-depth studies, the influence of a dielectric environment with reduced symmetry has not been investigated,[@neupane2019plane] and its potential for realizing anisotropic modification of the electronic and optical properties of monolayer TMDCs remains unexploited. In this letter, we report the breaking of the three-fold in-plane symmetry of the MoS$_2$ monolayer by depositing it on the low-symmetry surface of sapphire, demonstrating symmetry-associated dielectric engineering of 2D materials. Results and Discussions ======================= ![(a) The setup of the RDS measurement and its alignment to the substrate.[]{data-label="figure1"}](Fig1.pdf){width="6cm"} Due to their attractive properties, sapphire crystals are widely applied in solid-state device fabrication and are also among the substrate candidates for 2D semiconductors.[@singh2015al2o3; @dumcenco2015large] Sapphire is a negative uniaxial crystal, i.e., its extraordinary dielectric function $\epsilon_e$ is smaller than its ordinary dielectric function $\epsilon_o$.[@harman1994optical; @yao1999anisotropic] So far, only the c-plane (0001) sapphire substrate has been used to investigate its dielectric screening effects on monolayer TMDCs.[@yu2015exciton; @park2018direct] With isotropic in-plane dielectric properties defined by $\epsilon_o$, the underlying c-plane (0001) sapphire substrate induces a dielectric modification which is laterally isotropic to monolayer TMDCs.
In contrast, we prepared monolayer MoS$_2$ on an a-plane ($11\overline{2}0$) sapphire substrate using chemical vapor deposition (CVD).[@supplemental] By selecting low-symmetry a-plane sapphire as the substrate, we supply monolayer MoS$_2$ with an anisotropic dielectric environment defined by $\Delta \epsilon_{ext}=\epsilon_o-\epsilon_e$ (see Fig. 1). The resultant anisotropic modification was then investigated by measuring the optical anisotropy in the monolayer MoS$_2$ over the ultraviolet-visible (UV-Vis) range using reflectance difference spectroscopy (RDS),[@aspnes1985anisotropies; @weightman2005reflection] which measures the reflectance difference between the light polarized along two orthogonal directions at near-normal incidence (see Fig. 1). This highly sensitive technique has been successfully applied to investigate the optical properties of ultra-narrow graphene nanoribbons.[@denk2014exciton] For the a-plane ($11\overline{2}0$) substrate covered with monolayer MoS$_2$, the RD signal can be described by the following equation: $$\label{Eq.1} \frac{\Delta r}{r}=\frac{1}{2}\frac{r_{[1\overline{1}00]}-r_{[0001]}}{r_{[1\overline{1}00]}+r_{[0001]}}$$ where $r_{[1\overline{1}00]}$ and $r_{[0001]}$ denote the reflectance of the light polarized along the $[1\overline{1}00]$ and the \[0001\] directions of the a-plane sapphire substrate, respectively. ![(a) The RD spectrum taken from the monolayer MoS$_2$ on Al$_2$O$_3$($11\overline{2}0$), (b) the absorption spectrum and (c) its first derivative measured from monolayer MoS$_2$ on Al$_2$O$_3$(0001).[]{data-label="figure2"}](Fig2.pdf){width="8cm"} After the systematic characterization using conventional techniques,[@supplemental] RDS measurement was then applied to investigate the optical anisotropy within the plane of the MoS$_2$ monolayer. The real part of the RD spectra measured from the bare Al$_2$O$_3$(11$\overline{2}$0) surface and the one covered by monolayer MoS$_2$ are plotted in Fig. 2(a), respectively.
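The RD signal of Eq. (1) is a simple normalized difference of the two polarized reflectances; a one-line implementation (function and argument names are ours) makes its behaviour explicit:

```python
def rd_signal(r_ordinary, r_extraordinary):
    """Reflectance-difference signal of Eq. (1):
    (1/2) * (r_[1-100] - r_[0001]) / (r_[1-100] + r_[0001]).
    It vanishes for an optically isotropic surface and its sign tells
    which polarization direction is more strongly reflected."""
    return 0.5 * (r_ordinary - r_extraordinary) / (r_ordinary + r_extraordinary)

assert rd_signal(0.1, 0.1) == 0.0     # isotropic surface: no RD signal
assert rd_signal(0.11, 0.09) > 0      # anisotropic absorption shows up
```

Because the denominator normalizes out the overall reflectance level, the signal isolates the anisotropy, which is what makes RDS sensitive enough to probe a single monolayer.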
The bare Al$_2$O$_3$(11$\overline{2}$0) surface shows an optical anisotropy with an almost constant value, which can be directly attributed to the in-plane birefringence of the a-plane sapphire substrate. Indeed, the corresponding in-plane axes, namely the \[1$\overline{1}$00\] and \[0001\] axes, are parallel to the ordinary and extraordinary directions of sapphire, respectively. The result thus reveals the dielectric anisotropy $\Delta \epsilon_{ext}=\epsilon_o-\epsilon_e$ in the a-plane sapphire substrate. Furthermore, additional optical anisotropy shows up from the a-plane sapphire substrate covered by monolayer MoS$_2$. It is worth mentioning that, above the transparent sapphire substrate, the real part of the RD signal is predominantly associated with the anisotropy of the absorption of the monolayer MoS$_2$. For comparison, the absorption spectrum of the monolayer MoS$_2$ grown on a c-plane (0001) sapphire substrate[@supplemental] is plotted in Fig. 2(b). The spectrum exhibits the typical absorption spectral line shape of monolayer MoS$_2$ with well resolved peaks indicated as A, B and C, located at 1.89 eV, 2.03 eV and 2.87 eV, respectively. The peaks A and B are attributed to the electronic transitions from the spin-orbit split valence band (VB) to the conduction band (CB) around the critical points of K and K$'$ in the Brillouin zone, whereas the feature C is assigned to the transitions from VB to the CB in a localized region between critical
--- abstract: 'At 66Mpc, AT2019qiz is the closest optical tidal disruption event (TDE) to date, with a luminosity intermediate between the bulk of the population and the faint-and-fast event iPTF16fnl. Its proximity allowed a very early detection and triggering of multiwavelength and spectroscopic follow-up well before maximum light. The velocity dispersion of the host galaxy and fits to the TDE light curve indicate a black hole mass $\approx 10^6$, disrupting a star of $\approx1$. By analysing our comprehensive UV, optical and X-ray data, we show that the early optical emission is dominated by an outflow, with a luminosity evolution $L\propto t^2$, consistent with a photosphere expanding at constant velocity ($\gtrsim 2000$), and a line-forming region producing initially blueshifted H and He II profiles with $v=3000-10000$. The fastest optical ejecta approach the velocity inferred from radio detections, thus the same outflow is likely responsible for both the fast optical rise and the radio emission – the first time this connection has been determined in a TDE. The light curve rise begins $35\pm1.5$ days before maximum light, peaking when AT2019qiz reaches the radius where optical photons can escape. The photosphere then undergoes a sudden transition, first cooling at constant radius then contracting at constant temperature. At the same time, the blueshifts disappear from the spectrum and Bowen fluorescence lines (N III) become prominent, implying a source of far-UV photons, while the X-ray light curve peaks at $\approx10^{41}$. Thus accretion began promptly in this event, favouring accretion-powered over collision-powered outflow models. The size and mass of the outflow are consistent with the reprocessing layer needed to explain the large optical to X-ray ratio in this and other optical TDEs.' author: - | [M. Nicholl$^{1,2}$]{}[^1], [T. Wevers$^{3}$]{}, [S. R. Oates$^{1}$]{}, [K. D. Alexander$^{4}$[^2]]{}, [G. Leloudas$^{5}$]{}, [F. Onori$^{6}$]{}, , [S. 
Gomez$^{9}$]{}, [S. Campana$^{10}$]{}, [I. Arcavi$^{11,12}$]{}, [P. Charalampopoulos$^{5}$]{}, , [N. Ihanec$^{13}$]{}, [P. G. Jonker$^{14,15}$]{}, [A. Lawrence$^{2}$]{}, [I. Mandel$^{16,17,1}$]{}, , [J. Burke$^{18,19}$]{}, [D. Hiramatsu$^{18,19}$]{}, [D. A. Howell$^{18,19}$]{}, [C. Pellegrino$^{18,19}$]{}, , [J. P. Anderson$^{21}$]{}, [E. Berger$^{9}$]{}, [P. K. Blanchard$^{9}$]{}, [G. Cannizzaro$^{14,15}$]{} , [M. Dennefeld$^{22}$]{}, [L. Galbany$^{23}$]{}, [S. González-Gaitán$^{24}$]{}, [G. Hosseinzadeh$^{9}$]{}, , [I. Irani$^{26}$]{}, [P. Kuin$^{27}$]{}, [T. Muller-Bravo$^{28}$]{}, [J. Pineda$^{29}$]{}, [N. P. Ross$^{2}$]{}, , [B. Tucker$^{18}$]{}, [[Ł]{}. Wyrzykowski$^{13}$]{}, [D. R. Young$^{31}$]{}\ Affiliations at end of paper bibliography: - 'refs.bib' title: 'An outflow powers the optical rise of the nearby, fast-evolving tidal disruption event AT2019qiz' --- \[firstpage\] transients: tidal disruption events – galaxies: nuclei – black hole physics Introduction {#sec:intro} ============ An unfortunate star in the nucleus of a galaxy can find itself on an orbit that intersects the tidal radius of the central supermassive black hole (SMBH), $R_t\approx R_*(M_\bullet/M_*)^{1/3}$ for a black hole of mass $M_\bullet$ and a star of mass $M_*$ and radius $R_*$ [@Hills1975]. This encounter induces a spread in the specific orbital binding energy across the star that is orders of magnitude greater than the mean binding energy [@Rees1988], sufficient to tear the star apart in a ‘tidal disruption event’ (TDE). The stellar debris, confined in the vertical direction by self-gravity [@Kochanek1994; @Guillochon2014], is stretched into a long, thin stream, roughly half of which remains bound to the SMBH [@Rees1988]. As the bound debris orbits the SMBH, relativistic apsidal precession causes the stream to self-intersect and dissipate energy. 
This dissipation can power a very luminous flare, up to or exceeding the Eddington luminosity, either when the intersecting streams circularise and form an accretion disk [@Rees1988; @Phinney1989], or even earlier if comparable radiation is produced directly from the stream collisions [@Piran2015; @Jiang2016]. Such flares are now regularly discovered, at a rate exceeding a few per year, by the various wide-field time-domain surveys. Observed TDEs are bright in the UV, with characteristic temperatures $\sim2-5\times10^4$K and luminosities $\sim10^{44}$. They are classified according to their spectra, generally exhibiting broad, low equivalent width[^3] emission lines of hydrogen, neutral and ionised helium, and Bowen fluorescence lines of doubly-ionised nitrogen and oxygen [e.g. @Gezari2012; @Holoien2014; @Arcavi2014; @Leloudas2019]. This prompted @vanVelzen2020 to suggest three sub-classes labelled TDE-H, TDE-He and TDE-Bowen, though some TDEs defy a consistent classification by changing their apparent spectral type as they evolve [@Nicholl2019]. TDE flares were initially predicted to be brightest in X-rays, due to the high temperature of an accretion disk, and indeed this is the wavelength where the earliest TDE candidates were identified [@Komossa2002]. However, the optically-discovered TDEs have proven to be surprisingly diverse in their X-ray properties. Their X-ray to optical ratios at maximum light range from $\gtrsim10^{3}$ to $<10^{-3}$ [@Auchettl2017]. Producing such luminous optical emission without significant X-ray flux can be explained in one of two ways: either X-ray faint TDEs are powered primarily by stream collisions rather than accretion, or the accretion disk emission is reprocessed through an atmosphere [@Strubbe2009; @Guillochon2014; @Roth2016].
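To make the scalings in the Introduction concrete, the short sketch below evaluates the tidal radius $R_t\approx R_*(M_\bullet/M_*)^{1/3}$ for the fiducial case of a solar-type star disrupted by a $10^6\,M_\odot$ black hole and compares it with the Schwarzschild radius; the CGS constants are standard values, not taken from this paper.

```python
# Hedged numerical sketch: tidal radius R_t ~ R_*(M_bh/M_*)^(1/3) versus the
# Schwarzschild radius, for a solar-type star and a 1e6 solar-mass black hole.
G = 6.674e-8        # gravitational constant, cm^3 g^-1 s^-2
c = 2.998e10        # speed of light, cm/s
M_sun = 1.989e33    # g
R_sun = 6.957e10    # cm

def tidal_radius(m_bh, m_star=1.0, r_star=R_sun):
    """R_t in cm, for black hole and stellar masses given in solar units."""
    return r_star * (m_bh / m_star) ** (1.0 / 3.0)

def schwarzschild_radius(m_bh):
    """R_s = 2 G M / c^2 in cm, for a mass in solar units."""
    return 2.0 * G * (m_bh * M_sun) / c ** 2

r_t = tidal_radius(1e6)
r_s = schwarzschild_radius(1e6)
print(r_t / r_s)   # ~24: the disruption occurs well outside the event horizon
```

For a $10^6\,M_\odot$ black hole the tidal radius sits a couple of dozen Schwarzschild radii out, which is why such events produce an observable flare rather than swallowing the star whole.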
Several lines of evidence have indicated that accretion disks do form promptly even in X-ray faint TDEs: Bowen fluorescence lines that require excitation from far-UV photons [@Blagorodnova2018; @Leloudas2019]; low-ionisation iron emission appearing shortly after maximum light [@Wevers2019b]; and recently the direct detection of double-peaked Balmer lines that match predicted disk profiles [@Short2020; @Hung2020]. Thus a critical question is to understand the nature and origin of the implied reprocessing layer. It has already been established that this cannot be simply the unbound debris stream, as the apparent cross-section is too low to intercept a significant fraction of the TDE flux [@Guillochon2014]. Inhibiting progress is the messy geometry of the debris. Colliding streams, inflowing and outflowing gas, and a viewing-angle dependence on both the broad-band [@Dai2018] and spectroscopic [@Nicholl2019] properties all contribute to a messy knot that must be untangled. One important clue comes from radio observations: although only a small (but growing) sample of TDEs have been detected in the radio [see recent review by @Alexander2020], in such cases we can measure the properties (energy, velocity, and density) of an outflow directly. In some TDEs this emission is from a relativistic jet [@Zauderer2011; @Bloom2011; @Burrows2011; @Cenko2012; @Mattila2018], which does not appear to be a common feature of TDEs, but other radio TDEs have launched sub-relativistic outflows [@vanvelzen2016; @Alexander2016; @Alexander2017]. A number of radio-quiet TDEs have exhibited indirect evidence for slower outflows in the form of blueshifted optical/UV emission and absorption lines [@Roth2018; @Hung2019; @Blanchard2017], suggesting that outflows may be common. This is crucial, as the expanding material offers a promising means to form the apparently ubiquitous reprocessing layer required by the optical/X-ray ratios. 
Suggested models include an Eddington envelope [@Loeb1997], possibly inflated by radiatively inefficient accretion or an optically thick disk wind [@Metzger2016; @Dai2018]; or a collision-induced outflow [@Lu2020]. Understanding whether the optical reprocessing layer is connected to the non-
--- abstract: 'A model of localized classical electrons coupled to lattice degrees of freedom and, via the Coulomb interaction, to each other, has been studied to gain insight into the charge and orbital ordering observed in lightly doped manganese perovskites. Expressions are obtained for the minimum energy and ionic displacements caused by given hole and electron orbital configurations. The expressions are analyzed for several hole configurations, including that experimentally observed by Yamada [*et al.*]{} in ${\rm La_{7/8}Sr_{1/8}MnO_3}$. We find that, although the preferred charge and orbital ordering depend sensitively on parameters, there are ranges of the parameters in which the experimentally observed hole configuration has the lowest energy. For these parameter values we also find that the energy differences between different hole configurations are on the order of the observed charge ordering transition temperature. The effects of additional strains are also studied. Some results for ${\rm La_{1/2}Ca_{1/2}MnO_3}$ are presented, although our model may not adequately describe this material because the high temperature phase is metallic.' address: | Department of Physics and Astronomy, The Johns Hopkins University\ Baltimore, Maryland 21218 author: - 'K. H. Ahn and A. J. Millis' title: Interplay of charge and orbital ordering in manganese perovskites --- Over the last few years much attention has been focused on manganese perovskite-based oxides, most notably the pseudocubic materials $Re_{1-x}Ak_x{\rm MnO_3}$. (Here $Re$ is a rare earth element such as La, and $Ak$ is a divalent alkali metal element such as Ca or Sr.) 
The initial motivation came from the observation that for some range of $x$ and temperature $T$, the resistance can be reduced by a factor of up to $10^7$ in the presence of a magnetic field.[@ahn] Two other interesting physical phenomena occurring in this class of materials are charge ordering and orbital ordering.[@chen] In this paper, we study the connection between the two. The important electrons in $Re_{1-x}Ak_x{\rm MnO_3}$ are the Mn $e_g$ electrons; their concentration is $1-x$. For many choices of $Re$, $Ak$, and $x$, especially at commensurate $x$ values, the $e_g$ charge distribution is not uniform and it indeed appears that a fraction $x$ of Mn ions have no $e_g$ electron while $1-x$ have a localized $e_g$ electron. A periodic pattern of filled and empty sites is said to exhibit charge ordering. There are two $e_g$ orbitals per Mn ion. A localized Mn $e_g$ electron will be in one linear combination of these; a periodic pattern of orbital occupancy is said to exhibit orbital ordering. Recently, Murakami [*et al.*]{} [@murakami] observed a charge ordering transition accompanied by simultaneous orbital ordering in ${\rm La_{1/2}Sr_{3/2}MnO_4}$ at 217 K (well above the magnetic phase transition temperature of 110 K). This indicates that the interplay of the charge and orbital ordering to minimize the lattice energy could be the origin of the charge ordering. In this paper we present an expression for the coupling between charge and orbital ordering, with different charge ordering patterns favoring different orbital orderings. We also argue that the orbital ordering energy differences determine the observed charge ordering in lightly doped manganites. Localized charges induce local lattice distortions, which must be accommodated into the global crystal structure; the energy cost of this accommodation is different for different charge ordering patterns.
To model the charge and orbital ordering, we assume that the electrons are localized classical objects, so that each Mn site is occupied by zero or one $e_g$ electron, and each $e_g$ electron is in a definite orbital state. This assumption seems reasonable in the lightly doped materials such as ${\rm La_{7/8}Sr_{1/8}MnO_3}$, which are strongly insulating at all temperatures,[@yamada] but may not be reasonable for the ${\rm La_{1/2}Ca_{1/2}MnO_3}$ composition,[@chen] where the charge ordered state emerges at a low temperature from a metallic state. We proceed by calculating the energies of different charge ordering patterns, emphasizing the 1/8 doping case. It is practically impossible to consider all possible charge ordering configurations. Therefore, we consider the three configurations shown in Fig. \[fig1\], which are the only ones consistent with the following basic features of the hole lattice implied by the experimental results of Yamada [*et al.*]{}[@yamada]: invariance under translation by two lattice constants in the $x$ or $y$ direction, four in the $z$ direction, and an alternating pattern of occupied and empty planes along the $z$ direction. The configuration in Fig. \[fig1\](b) is the one proposed by Yamada [*et al.*]{}[@yamada] to explain their experimental results for ${\rm La_{7/8}Sr_{1/8}MnO_3}$. For localized electrons there are three energy terms: the coupling to the lattice, which will be discussed at length below, the Coulomb interaction, and the magnetic interaction. First, we argue that the Coulomb energy cannot explain the observed ordering pattern or transition temperature. We take as reference the state with one $e_g$ electron per Mn and denote by $\delta q_i$ the charge of a hole on a Mn site. From the classical Coulomb energy $$U_{\rm Coulomb}=\frac{1}{2\epsilon_0} \sum_{i \neq j} \frac{\delta q_i \delta q_j}{r_{ij}},$$ one finds that the difference in energy between the configurations in Fig.
\[fig1\] is $$\Delta{\cal U}_{\rm Coulomb,\: per\: hole} =\frac{1}{2 \epsilon_0} \sum_{i\neq o} \frac{\Delta(\delta q_i)}{r_{io}},$$ where $o$ is a site containing a hole and $\Delta(\delta q)$ is the difference in charge between the two configurations. We estimated the above infinite sum by repeated numerical calculations for larger and larger volumes of the unit cells around the origin. We find that Fig. \[fig1\](c) has the lowest energy; 12 meV/$\epsilon_0$ lower than Fig. \[fig1\](b), and 27 meV/$\epsilon_0$ lower than Fig. \[fig1\](a). To estimate the magnitude of the Coulomb energy differences, we need an estimate for the dielectric constant $\epsilon_0$, which we obtain from the measured reflectivity for ${\rm La_{0.9}Sr_{0.1}MnO_3}$,[@okimoto1] and the Lyddane-Sachs-Teller relation[@aschcroft] $\omega_L^2=\omega_T^2 \epsilon_0 / \epsilon_{\infty}$. At frequencies greater than the greatest phonon frequency the reflectivity is close to 0.1, implying $\epsilon_{\infty} \approx 3.4$; the reflectivity is near unity between $\omega_T=0.020$ eV, and $\omega_L=0.024$ eV, implying $\epsilon_0 \approx 5.0$. Because both ${\rm La_{7/8}Sr_{1/8}MnO_3}$ and ${\rm La_{0.9}Sr_{0.1}MnO_3}$ are insulating and have similar compositions, their static dielectric constants are expected to be similar. Using $\epsilon_0\approx 5.0$, the energy difference between different configurations of holes is only around 2.4 meV, or 30 K per hole, which is small compared to the observed charge ordering temperature of 150 K $-$ 200 K for these materials. The inconsistency with the experimentally observed hole configuration and the smallness of the energy difference scale indicate that the electrostatic energy is not the main origin of charge ordering for this material.
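The brute-force estimate of the lattice Coulomb sum over larger and larger volumes can be sketched as follows; the alternating rocksalt-like charge pattern used here is a stand-in example, not one of the hole configurations of Fig. \[fig1\].

```python
# Hedged sketch of the convergence check described in the text: evaluate the
# lattice Coulomb sum  sum_{i != o} dq_i / r_{io}  over larger and larger
# cubic volumes around the origin (in units of the lattice constant).
import itertools

def coulomb_sum(charge_of_site, n_max):
    """Sum charge(i,j,k)/|r| over all lattice sites in a cube of half-width
    n_max, excluding the origin site."""
    total = 0.0
    for i, j, k in itertools.product(range(-n_max, n_max + 1), repeat=3):
        if (i, j, k) == (0, 0, 0):
            continue
        total += charge_of_site(i, j, k) / (i * i + j * j + k * k) ** 0.5
    return total

# Example: alternating +/- charges. The expanding-cube partial sums approach
# the NaCl Madelung constant (about -1.7476) as the volume grows.
rocksalt = lambda i, j, k: (-1.0) ** abs(i + j + k)
for n in (4, 8, 12):
    print(n, coulomb_sum(rocksalt, n))
```

Cubic truncation is used here because it converges for alternating charge patterns, whereas a naive spherical cutoff of a conditionally convergent lattice sum need not.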
Even though the magnetic and charge ordering transitions show a correlation in ${\rm La_{7/8}Sr_{1/8}MnO_3}$,[@kawano] we do not think that the magnetic contribution to charge and orbital ordering is as important as the lattice contribution, for three reasons. First, in undoped ${\rm LaMnO_3}$, the orbital ordering and the structural phase transition occur at around 800 K and the magnetic ordering at around 140 K,[@wollan; @kanamori] suggesting that the magnetic effects are relatively weak. Second, in ${\rm La_{7/8}Sr_{1/8}MnO_3}$ the Mn spins are ferromagnetically ordered with a moment close to the full Mn moment at temperatures greater than the charge ordering temperature,[@kawano] and ferromagnetic order does not favor one charge configuration over another. Third, although in ${\rm La_{7/8}Sr_{1/8}MnO_3}$ antiferromagnetic order appears at the charge ordering transition, the antiferromagnetic moment is very small (less than 0.1 of the full Mn moment),[@kawano] so the energy associated with this ordering must be much less than the 140 K/site associated with magnetic ordering in ${\rm LaMnO_3}$. Therefore, we think that the canted antiferromagnetism occurring upon charge ordering in ${\rm La_{7/8}Sr_{1/8}MnO_3}$ (Ref. 7) is not the cause but the effect of the charge and orbital ordering. We now turn our
--- abstract: 'In baseball games, the coefficient of restitution of baseballs strongly affects the flying distance of batted balls, which determines the home-run probability. In Japan, the range of the coefficient of restitution of official baseballs has changed frequently over the past five years, causing the number of home runs to vary drastically. We analyzed data from Japanese baseball games played in 2014 to investigate the statistical properties of pitched balls. In addition, we used the analysis results to develop a baseball-batting simulator for determining the home-run probability as a function of the coefficient of restitution. Our simulation results are explained by a simple theoretical argument.' author: - Hiroto Kuninaka - Ikuma Kosaka - Hiroshi Mizutani title: 'Home-run probability as a function of the coefficient of restitution of baseballs' --- Introduction ============ The bounce characteristics of baseballs have a large influence in baseball games; thus, baseball organizations often establish rules concerning official balls. For example, in Major League Baseball (MLB), the baseballs are made by tightly winding yarn around a small core and covering it with two strips of white horsehide or cowhide[@cross]. For the estimation of the bounce characteristics, the coefficient of restitution $e$ is widely used, which is defined as $$e = \frac{V_{r}}{V_{i}},$$ where $V_{i}$ and $V_{r}$ are the speeds of incidence and rebound, respectively, in a head-on collision of a ball with a plane. Note that the coefficient of restitution determines the loss of translational energy during the collision. The coefficient of restitution depends on the kind of material and the internal structure of the ball, as well as other factors such as the impact speed[@cross; @adair; @stronge; @johnson; @goldsmith], impact angle[@louge; @kuninaka_prl], temperature of the ball[@drane; @allen], and humidity at which the balls are stored[@kagan]. 
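As a minimal numerical illustration of the definition above (not code from the study), the snippet below computes $e$ and the corresponding fraction of translational kinetic energy lost in a head-on collision, $1-e^2$.

```python
# Minimal illustration of Eq. (1): e = V_r / V_i, and the implied fractional
# loss of translational kinetic energy, 1 - e^2, in a head-on collision.
def restitution(v_incident, v_rebound):
    """Coefficient of restitution from incidence and rebound speeds."""
    return v_rebound / v_incident

def energy_loss_fraction(e):
    """Fraction of translational kinetic energy lost in the collision."""
    return 1.0 - e * e

# Made-up speeds chosen so that e matches the 2010 NPB average of 0.418:
# roughly 83% of the translational kinetic energy is dissipated in the bounce.
e = restitution(10.0, 4.18)   # incidence 10 m/s, rebound 4.18 m/s
print(e, energy_loss_fraction(e))
```

The quadratic dependence on $e$ is why changes at the $10^{-2}$ level in the coefficient of restitution are not negligible for batted-ball speeds.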
Various baseball organizations officially determine the range of the coefficient of restitution of baseballs. For example, the coefficient of restitution of an MLB baseball is required to be $0.546 \pm 0.032$[@kagan2]. Regarding Japanese professional baseballs, the Nippon Professional Baseball Organization (NPB) first introduced their official baseball in 2011, which was used in both the Pacific and Central Leagues. Table \[tb1\] shows a chronological table indicating the average coefficient of restitution for baseballs used in Japanese professional baseball games from 2010 to 2013[@npb], along with the annual number of home runs[@brc]. Clearly, the number of home runs decreased drastically in 2011 compared with 2010, although the difference in the average coefficient of restitution is only on the order of $10^{-2}$. The average coefficient of restitution increased in 2013 because the NPB made a baseball equipment manufacturer change the specification of the baseballs in order to increase the level of offense in baseball games.

  Year   Coefficient of Restitution (average)   Number of Home Runs
  ------ -------------------------------------- ---------------------
  2010   0.418                                  1,605
  2011   0.408                                  939
  2012   0.408                                  881
  2013   0.416                                  1,311

  : Coefficient of restitution of baseballs and number of home runs in Japanese professional baseball games. \[tb1\]
Although they investigated the optimal strategy for achieving the maximal range of a batted ball, they did not calculate the home-run probability, because it may be difficult to choose proper parameters for the home-run probability function. However, quantitative research on the relationship between the coefficient of restitution of baseballs and the home-run probability is valuable for two reasons. First, Table \[tb1\] indicates that the home-run probability strongly depends on the coefficient of restitution of baseballs, because even small changes in the coefficient of restitution can alter the flying distances of batted balls[@nathan11]. Second, the coefficient of restitution of baseballs is a controllable factor that is important for the design of baseball equipment. The home-run probability as a function of the coefficient of restitution can serve as a simple criterion for evaluating the characteristics of official baseballs. In addition, quantitative research on the relationship between the coefficient of restitution of balls and the home-run probability is also valuable for physics education, as the problem is closely related to topics covered in undergraduate physics. In this study, we developed a batting simulator using real baseball data to quantitatively investigate the home-run probability as a function of the coefficient of restitution. This paper is structured as follows. In the next section, we describe the data analysis and the analysis results. Section 3 presents the construction of our batting simulator and the simulation results. In Sections 4 and 5, we discuss and summarize our results, respectively. Appendices A and B are devoted to the derivation of the averaged force in a binary collision between a ball and a bat and the algorithm for the collision, respectively. Data Analysis ============= To construct our batting simulator, we first analyzed pitching data for Japanese professional baseball games held in 2014.
We used data from Sportsnavi[@sportsnavi], which shows various data about the pitched balls in an official game, including the ball speed, pitch type, and position of the ball crossing the home plate. Figure \[fig1\] shows a schematic of a part of a Sportsnavi page. In the data, the pitching zone is divided into 5 $\times$ 5 grids from the pitcher’s perspective, wherein the 3 $\times$ 3 grids represented by thick lines correspond to the strike zone (see the left panel of Fig. \[fig1\]). The numbers and the symbols in a grid show the order and the types of pitches, respectively, at different positions on the grid. Information about each pitch, including the ball speed, is presented in the table shown on the right side of Fig. \[fig1\]. For a later discussion, we numbered the horizontal and vertical positions of each grid as shown in Fig. \[fig1\]. Using the Sportsnavi database, we manually recorded all the positions and ball speeds of pitches in 12 selected games held in Nagoya Dome Stadium in Nagoya, Japan, from August 6, 2014 to September 25, 2014. We chose games held in indoor domes because the flight of baseballs is hardly affected by climatic factors such as wind. We collected and analyzed data for 1,548 pitched balls. ![Schematic of a part of the Sportsnavi page. []{data-label="fig1"}](fig1.eps){width="10cm"} Figure \[fig2\] shows the distribution of the pitched-ball speed $v$, where the open circles indicate the calculated probabilities as a function of $v$. To obtain a distribution function approximating these data, we divided the ball-speed data into two categories: those for straight balls and those for breaking balls, such as curveballs and two-seam fastballs.
For each of the categorized data sets, we fit the normal distribution defined by $$f_{i}(v) = \frac{1}{\sqrt{2 \pi }\sigma_{i}} \exp \left\{ - \frac{(v - \mu_{i})^{2}}{2\sigma_{i}^{2}} \right\} \hspace{3mm} (i = 1, 2), \label{norm}$$ where $\mu_{i}$ and $\sigma_{i}$ are the mean and the standard deviation, respectively. The fitting parameters are presented in Table \[tb2\], where $i=1$ and $i=2$ correspond to the straight and breaking balls, respectively. ![Distribution of ball speeds. Open circles show the probabilities at each ball speed. The solid black curve shows Eq. (\[md\]) with the fitting parameters shown in Table \[tb2\]. The solid red and blue curves show the distributions of the straight and breaking balls, respectively, weighted with $p=0.45$.[]{data-label="fig2"}](fig2.eps){width="7cm"} Considering $f_{i}(v)$ ($i=1, 2$) to be components, we finally obtained the mixture distribution of the pitched-ball speed $v$ as $$\begin{aligned} \label{md} \phi(v) = p f_{1}(v) + (1-p) f_{2}(v) \hspace{3mm} (0 < p < 1).\end{aligned}$$ Here, $p$ is the mixing parameter. The black solid curve shown in Fig. \[fig2\] indicates Eq. (\[md\]) with $p=0.45$, which closely approximates the distribution of the pitched-ball speed, with a coefficient of determination equal to 0.9942. The red and blue curves show the first and second terms, respectively, on the right-hand side of Eq. (\[md\]). Generally, the value of $p$ represents the probability of selecting each component of the mixture distribution. Thus, we consider that the pitchers chose straight and breaking balls with probabilities of $p=0.45$ and $0.55$, respectively. $\sigma_{1}$ \[km/h\] $\mu_{1}$ \[km/h\] $\sigma_{2}$ \[km/h\] $\mu_{2}$
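A minimal sketch of the two-component mixture model described above, using the standard normal density; the component parameters below are placeholders for illustration, standing in for the fitted values of Table \[tb2\].

```python
# Sketch of the normal components f_i(v) and the mixture phi(v). The mean and
# standard-deviation values below are hypothetical, not the fitted parameters.
import math

def normal_pdf(v, mu, sigma):
    """Standard normal density with mean mu and standard deviation sigma."""
    return math.exp(-(v - mu) ** 2 / (2.0 * sigma ** 2)) / (math.sqrt(2.0 * math.pi) * sigma)

def mixture_pdf(v, p, mu1, sigma1, mu2, sigma2):
    """Two-component mixture: p*f1(v) + (1-p)*f2(v)."""
    return p * normal_pdf(v, mu1, sigma1) + (1.0 - p) * normal_pdf(v, mu2, sigma2)

# Hypothetical parameters: straight balls around 145 km/h, breaking balls
# around 125 km/h, mixed with p = 0.45 as in the text.
print(mixture_pdf(140.0, 0.45, 145.0, 5.0, 125.0, 8.0))
```

Sampling from such a mixture (choose a component with probability $p$, then draw from its normal) is the mechanism used later to generate pitched-ball speeds in the simulator.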
--- abstract: 'We report the first observation of anti-Stokes laser-induced cooling in the Er$^{3+}$:KPb$_{2}$Cl$_{5}$ crystal and in the Er$^{3+}$:CNBZn (CdF$_{2}$-CdCl$_{2}$-NaF-BaF$_{2}$-BaCl$_{2}$-ZnF$_{2}$) glass. The internal cooling efficiencies have been calculated by using photothermal deflection spectroscopy. Thermal scans acquired with an infrared thermal camera proved the bulk cooling capability of the studied samples. Implications of these results are discussed.' author: - Joaquín Fernandez - 'Angel J. Garcia–Adeva' - Rolindes Balda title: 'Anti-Stokes laser cooling in bulk Erbium-doped materials' --- The basic principle that anti-Stokes fluorescence might be used to cool a material was first postulated by Pringsheim in 1929. Twenty years later, Kastler suggested [@kastler1950] that rare-earth-doped crystals might provide a way to obtain solid-state cooling by anti-Stokes emission (CASE). A few years later, the invention of the laser promoted the first experimental attempt by Kushida and Geusic to demonstrate radiation cooling in a Nd$^{3+}$:YAG crystal [@kushida1968]. However, it was not until 1995 that the first solid-state CASE was convincingly proven by Epstein and coworkers in an ytterbium-doped heavy-metal fluoride glass [@Epstein1995]. Since then, efforts to develop other materials doped with rare-earth (RE) ions were unsuccessful due to the inherent characteristics of the absorption and emission processes in RE ions. In most of the materials studied, the presence of nonradiative (NR) processes hindered the CASE performance. As a rule of thumb, negligible parasitic impurity absorption and a near-unity quantum efficiency of the anti-Stokes emission from the RE levels involved in the cooling process are required, so that the NR transition probabilities due to multiphonon emission or any other heat-generating process remain as low as possible.
These constraints could explain why most of the efforts to obtain CASE in condensed matter were performed on trivalent ytterbium doped solids (glasses [@Hoyt2003] and crystals [@Bowman2000; @Medioroz2002]) having only one excited state manifold which is placed $\sim10000$ cm$^{-1}$ above the ground state. The only exception was the observation of CASE in a thulium-doped glass by using the transitions between the $^{3}$H$_{6}$ and $^{3}$H$_{4}$ manifolds to cool down the sample [@Hoyt2000]. Therefore, it is easy to see that identifying new optically active ions and materials capable of producing CASE is still an open problem with very important implications from both the fundamental and practical points of view. On the other hand, the recent finding of new low phonon materials (both glasses [@Fernandez2000] and crystals [@Medioroz2002]) as RE hosts which may significantly decrease the NR emissions from excited state levels have renewed the interest in investigating new RE anti-Stokes emission channels. In this work, we present the first experimental demonstration of anti-Stokes laser-induced cooling in two different erbium-doped matrices: a low phonon KPb$_{2}$Cl$_{5}$ crystal and a fluorochloride glass. In order to assess the presence of internal cooling in these systems we employed the photothermal deflection technique, whereas the bulk cooling was detected by means of a calibrated thermal sensitive camera. The cooling was obtained by exciting the Er$^{3+}$ ions at the low energy side of the $^{4}$I$_{9/2}$ manifold with a tunable Ti:sapphire laser. It is worthwhile to mention that this excited state, where cooling can be induced, is also involved in infrared to visible upconversion processes nearby the cooling spectral region [@Balda2004]. 
Moreover, it is also noticeable that the laser induced cooling can be easily reached at wavelengths and powers at which conventional laser diodes operate, which renders these systems very convenient for applications, such as compact solid-state optical cryo-coolers. Single crystals of nonhygroscopic Er$^{3+}$:KPb$_{2}$Cl$_{5}$ were grown in our laboratory by the Bridgman technique [@Voda2004]. The rare earth content was $0.5$ mol% of ErCl$_{3}$. The fluorochloride CNBZn glass doped with $0.5$ mol% of ErF$_{3}$ was synthesized at the Laboratoire de Verres et Ceramiques of the University of Rennes. The experimental setup and procedure for photothermal deflection measurements have been described elsewhere [@Fernandez2000; @fernandez2001]. The beam of a tunable cw Ti:sapphire ring laser (Coherent 899), with a maximum output power of $2.5$ W, was modulated at low frequency ($1-10$ Hz) by a mechanical chopper and focused into the middle of the sample with a diameter of $\sim100$ $\mu$m. The copropagating helium-neon probe laser beam ($\lambda=632.8$ nm) was focused to $\sim60$ $\mu$m, co-aligned with the pump beam, and its deflection detected by a quadrant position detector. The samples (of sizes $4.5\times6.5\times2.7\,\text{mm}^{3}$ and $10.7\times10.7\times2.2\,\text{mm}^{3}$ for the crystal and glass, respectively) were freely placed on a teflon holder inside a low vacuum ($\sim10^{-2}$ mbar) cryostat chamber at room temperature. The cooling efficiencies of the Er$^{3+}$-doped materials were evaluated at room temperature by measuring the quantum efficiency (QE) of the emission from the $^{4}$I$_{9/2}$ manifold in the heating and cooling regions by means of the photothermal deflection spectroscopy in a collinear configuration [@Fernandez2000; @fernandez2001]. The evaluation of the QE has been carried out by considering a simplified two level system for each of the transitions involved. 
In the photothermal collinear configuration, the amplitude of the angular deviation of the probe beam is always proportional to the amount of heat the sample exchanges, whatever its optical or thermal properties are. The QE of the transition, $\eta$, can be obtained from the ratio of the photothermal deflection amplitude (PDS) to the sample absorption (Abs) obtained as a function of the excitation wavelength $\lambda$ around the mean fluorescence wavelength $\lambda_{0}$: $$\frac{\text{PDS}}{\text{Abs}}=C\left(1-\eta\frac{\lambda}{\lambda_{0}}\right),$$ where $C$ is a proportionality constant that depends on the experimental conditions. The mean fluorescence wavelength, above which cooling is expected to occur, was calculated by taking into account the branching ratios for the emissions from level $^{4}$I$_{9/2}$. As expected, the calculated value is close to that found experimentally for the transition wavelength at which the cooling region begins. ![\[fig\_pds\_crystal\](a) Signal deflection amplitude normalized by the sample absorption as a function of pumping wavelength for the Er$^{3+}$:KPb$_{2}$Cl$_{5}$ crystal. (b) Phase of the photothermal deflection signal as a function of pumping wavelength. (c) Photothermal deflection signal waveforms in the heating (800 nm) and cooling (870 nm) regions and around the cooling threshold (850 nm).](fig1.eps) Figure \[fig\_pds\_crystal\]a shows the normalized PDS spectrum of the Er$^{3+}$:KPb$_{2}$Cl$_{5}$ crystal around the zero deflection signal ($852.5$ nm) –obtained at an input power of $1.5$ W– together with the best least-squares fitting in both the heating and cooling regions. The resulting QE values are $0.99973$ and $1.00345$, respectively, and therefore the cooling efficiency estimated by using the QE measurements is $0.37\%$.
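The extraction of $\eta$ from the linear relation above can be sketched as an ordinary least-squares fit: the relation is linear in $\lambda$ with intercept $C$ and slope $-C\eta/\lambda_0$. The data below are synthetic, generated from the quoted crystal values, not the measured spectra.

```python
# Hedged sketch (synthetic data): recover the quantum efficiency eta from
# PDS/Abs = C * (1 - eta * lam / lam0), linear in lam with intercept C and
# slope -C * eta / lam0, so eta = -slope * lam0 / intercept.
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return slope, (sy - slope * sx) / n

lam0 = 852.5                        # mean fluorescence wavelength, nm
C_true, eta_true = 2.0, 1.00345     # C is arbitrary; eta from the cooling-side fit
lams = [800.0 + 10.0 * i for i in range(9)]
ys = [C_true * (1.0 - eta_true * lam / lam0) for lam in lams]

slope, intercept = linear_fit(lams, ys)
eta_fit = -slope * lam0 / intercept
print(eta_fit)   # recovers eta_true for this noiseless synthetic data
```

A fitted $\eta>1$ on the long-wavelength side is precisely the signature of net cooling: each emitted photon carries away more energy than was absorbed.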
As predicted by theory [@Jackson1981], a sharp jump of $180^{\circ}$ in the PDS phase measured by lock-in detection can be observed during the transition from the heating to the cooling region (see Fig. \[fig\_pds\_crystal\]b). Figure \[fig\_pds\_crystal\]c shows the PDS amplitude waveforms registered on the oscilloscope at three different excitation wavelengths: $800$ nm (heating region), $852.5$ nm (mean fluorescence wavelength), and $870$ nm (cooling region). As can be noticed, at $852.5$ nm the signal is almost zero, whereas in the cooling region, at $870$ nm, the waveform of the PDS signal shows an unmistakable phase reversal of $180^{\circ}$ when compared with the one at $800$ nm. Figure \[fig\_pds\_glass\] shows the corresponding results for the Er$^{3+}$:CNBZn glass (obtained at a pump power of $1.9$ W), where the zero deflection signal occurs around $843$ nm. The $180^{\circ}$ change of the PDS phase is also clearly attained, though somewhat less sharply than for the Er$^{3+}$:KPb$_{2}$Cl$_{5}$ crystal (see Fig. \[fig\_pds\_glass\]b). The QE values corresponding to the heating and cooling regions are $0.99764$ and $1.00446$, respectively, and the estimated cooling efficiency is $0.68\%$. The PDS waveforms corresponding to the heating and
--- abstract: 'Nearly all glass forming liquids display secondary relaxations, dynamical modes seemingly distinct from the primary alpha relaxations. We show that accounting for driving force fluctuations and the diversity of reconfiguring shapes in the random first order transition theory yields a low free energy tail on the activation barrier distribution which shares many of the features ascribed to secondary relaxations. While primary relaxation takes place through activated events involving compact regions, secondary relaxation corresponding to the tail is governed by more ramified, string-like, or percolation-like clusters of particles. These secondary relaxations merge with the primary relaxation peak becoming dominant near the dynamical crossover temperature $T_c$, where they smooth the transition between continuous dynamics described by mode-coupling theory and activated events.' author: - 'Jacob D. Stevenson' - 'Peter G. Wolynes' title: A universal origin for secondary relaxations in supercooled liquids and structural glasses --- Diversity, a key feature of glassy systems, is most apparent in their relaxation properties. Dielectric, mechanical and calorimetric responses of supercooled liquids are not single exponentials in time, but manifest a distribution of relaxation times. The typical relaxation time grows upon cooling the liquid until it exceeds the preparation time, yielding a non-equilibrium glass, which can still relax but in an age dependent fashion. In addition to the main relaxations that are responsible for the glass transition, supercooled liquids and structural glasses exhibit faster motions, some distinct enough in time scale from the typical relaxation to be called “secondary” relaxation processes[@adichtchev.2007; @kudlik.1999; @ngai.2004; @wang.2007; @lunkenheimer.2000]. 
These faster motions account for only a fraction of the relaxation amplitude in the liquid but become dominant features in the relaxation of the otherwise frozen glass, where they are important to the mechanical properties. Secondary relaxation processes in the solvation shell of proteins are also prominent in protein dynamics[@frauenfelder.2009]. The phenomenology of secondary relaxation has been much discussed but, owing especially to the problem of how to subtract the main peak, the patterns observed seem to be more complex and system specific than those for the main glassy relaxation. Some of the secondary relaxation motions are, doubtless, chemically specific, occurring on the shortest length scales. Nevertheless the presence of secondary relaxation in glassy systems is nearly universal[@thayyil.2008]. In this paper we will show how secondary relaxations naturally arise in the random first order transition (RFOT) theory of glasses[@lubchenko.2007] and are predicted to scale in intensity and frequency in a manner consistent with observation. The RFOT theory is based on the notion that there is a diversity of locally frozen free energy minima that can inter-convert via activated transitions. The inter-conversions are driven by an extensive configurational entropy. RFOT theory accounts for the well known correlations between the primary relaxation time scale in supercooled liquids and thermodynamics[@xia.2000; @stevenson.2005] as well as the aging behavior in the glassy state[@lubchenko.2004]. By taking account of local fluctuations in the driving force, RFOT theory also gives a good account of the breadth of the rate distribution of the main relaxation[@xia.2001; @dzero.2008]. Here we will argue that RFOT theory suggests that, universally, a secondary relaxation will also appear, and that its intensity and shape depend on the configurational thermodynamics of the liquid. 
This relaxation corresponds to the low free energy tail of the activation barrier distribution. The distinct character of this tail comes about because the geometry of the reconfiguring regions for low barrier transitions is different from that of those rearranging regions responsible for the main relaxation. Near the laboratory $T_g$, the primary relaxation process involves reconfiguring a rather compact cluster, but the reconfiguring clusters become more ramified as the temperature is raised, eventually resembling percolation clusters or strings near the dynamical crossover to mode coupling behavior, identified with the onset of non-activated motions[@stevenson.2006]. Reconfiguration events of the more extended type are more susceptible to fluctuations in the local driving force, even away from the crossover. These ramified or “stringy” reconfiguration events thus dominate the low barrier tail of the activation energy distribution. When the shape distribution of reconfiguration processes is accounted for, a simple statistical computation shows that a two peaked distribution of barriers can arise. This calculation motivates a more explicit but approximate theory that gives analytical expressions for the distribution of relaxation times in the tail. In keeping with experiment, the theory predicts the secondary relaxation motions are actually most numerous near the crossover but, of course, merge in time scale with the main relaxation peak at that crossover. Furthermore the relaxation time distribution for secondary relaxations is predicted to be described by an asymptotic power law. The theory is easily extended to the aging regime, where these secondary relaxations can dominate the rearranging motions. In RFOT theory, above the glass transition temperature, the entropic advantage of exploring phase space, manifested as a driving force for reconfiguration, is balanced by a mismatch energy at the interface between adjacent metastable states. 
For a flat interface in the deeply supercooled regime the mismatch energy can be described as a surface tension that can be estimated from the entropy cost of localizing a bead[@kirkpatrick.1989; @xia.2000], giving a surface tension $\sigma_0 = (3/4) k_B T r_0^{-2} \ln [1/(d_L^2 \pi e)]$ where $d_L$ is the Lindemann length, the magnitude of particle fluctuations necessary to break up a solid structure, which is nearly universally a tenth of the inter-particle spacing ($d_L = 0.1 r_0$). The free energy profile for reconfiguration events resembles nucleation theory at first order transitions but is conceptually quite distinct. Following Stevenson-Schmalian-Wolynes (SSW)[@stevenson.2006] the free energy cost of an $N$ particle cluster with surface area $\Sigma$ making a structural transition to a new metastable state may be written $$F(N, \Sigma ) = \Sigma \sigma_0 - N k_B T s_c - k_B T \ln \Omega(N, \Sigma) - \sum_{\textrm{particles}} \!\!\! \delta \! \tilde{f} \label{eqn:full_profile}$$ A key element of the free energy profile is the shape entropy $k_B \log \Omega(N, \Sigma)$ which accounts for the number of distinct ways to construct a cluster of $N$ particles having surface area $\Sigma$. At one extreme are compact, nearly spherical objects with shape entropy close to zero; at the other extreme, objects such as percolation clusters or stringy chains have surface area and shape entropy both of which grow linearly with $N$. The last term of equation \[eqn:full\_profile\] accounts for the inherent spatial fluctuations in the disordered glassy system that give fluctuations in the driving force. We presently ignore local fluctuations in the surface mismatch free energy, but their inclusion would not qualitatively alter the results[@biroli.2008; @cammarota.2009; @dzero.2008]. We simplify by assuming uncorrelated disorder, so each particle joining the reconfiguration event is given a random energy, $\delta \tilde{f}$, drawn from a distribution of width ${\delta \! f}$. The r.m.s. 
magnitude of the driving force fluctuations above $T_g$ follows from the configurational heat capacity through the relation ${\delta \! f}\approx T \sqrt{\Delta C_p k_B}$, a result expected for large enough regions. We will assume no correlations for simplicity, but they can be included. For nearly spherical reconfiguring regions forming compact clusters the shape entropy is very small and the mismatch free energy is $\sigma_0 4 \pi (3 N /(4\pi \rho_0))^{\theta/3}$ with $\theta = 2$ if fluctuations are small. In disordered systems the mismatch free energies grow with exponent $\theta$ generally less than 2 reflecting preferred growth in regions of favorable energetics and the large number of metastable states which can wet the interface and reduce the effective surface tension. A renormalization group treatment of the wetting effect[@kirkpatrick.1989] suggests that $\theta = 3/2$ in the vicinity of an ideal glass transition. Incomplete wetting giving strictly $\theta=2$ only asymptotically would not change the numerics of the present theory much. Whether complete wetting occurs for supercooled liquids under laboratory conditions is still debated[@capaccioli.2008; @stevenson.2008a; @cammarota.2009]. The free energy profile describing reconfiguration events restricted to compact clusters becomes, then, $F_{\textrm{compact}}(N) = \sigma_0 4 \pi (3 N /(4\pi \rho_0))^{\theta/3} - N T s_c$. The minimum number of particles participating in a reconfiguration event is determined by finding where the free energy profile crosses zero. For $\theta = 3/2$ the activation free energy barrier is inversely proportional to the configurational entropy, leading to the Adam-Gibbs[@adam.1965] relation for the most probable relaxation time ${F^{\ddagger}}_{\alpha} / k_BT \sim \ln \tau_{\alpha} / \tau_0 \sim s_c^{-1}$. 
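The inverse proportionality of the barrier to $s_c$ can be checked numerically. The sketch below works in units of $k_BT$ with an arbitrary illustrative prefactor $a$ standing for $\sigma_0 4 \pi (3 /(4\pi \rho_0))^{1/2}$: for $\theta = 3/2$ the maximum of $F_{\textrm{compact}}(N) = a\sqrt{N} - N s_c$ is $F^{\ddagger} = a^2/(4 s_c)$, so the product $F^{\ddagger} s_c$ is the same for every value of the configurational entropy:

```python
import numpy as np

a = 5.0                                  # mismatch prefactor in k_B*T units (illustrative)
N = np.linspace(0.0, 1e5, 2_000_001)     # cluster sizes on a dense grid

entropies = (0.05, 0.1, 0.2)             # configurational entropy per bead
barriers = []
for sc in entropies:
    F = a * np.sqrt(N) - N * sc          # theta = 3/2 compact profile
    barriers.append(F.max())             # activation barrier F_dagger

# Analytically F_dagger = a**2 / (4*sc), so F_dagger * sc is constant.
products = [b * sc for b, sc in zip(barriers, entropies)]
print(products)                          # each ~ a**2/4 = 6.25
```

Doubling $s_c$ halves the barrier, which is the Adam-Gibbs scaling quoted above.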
Adding fluctuations to the profile of compact reconfiguration events yields an approximate Gaussian distribution of barriers with width scaling as $\sqrt{N^{\ddagger}} {\delta \! f}$. Xia and Wolynes[@xia.2001], and more explicitly Bhatt
--- abstract: 'The Coulomb interaction between the two protons is included in the calculation of proton-deuteron breakup and of three-body electromagnetic disintegration of ${}^3\mathrm{He}$. The hadron dynamics is based on the purely nucleonic charge-dependent (CD) Bonn potential and its realistic extension CD Bonn + $\Delta$ to a coupled-channel two-baryon potential, allowing for single virtual $\Delta$-isobar excitation. Calculations are done using integral equations in momentum space. The screening and renormalization approach is employed for including the Coulomb interaction. Convergence of the procedure is found at moderate screening radii. The reliability of the method is demonstrated. The Coulomb effect on breakup observables is seen at all energies in particular kinematic regimes.' author: - 'A. Deltuva' - 'A. C. Fonseca' - 'P. U. Sauer' title: 'Momentum-space description of three-nucleon breakup reactions including the Coulomb interaction' --- [^1] Introduction \[sec:intro\] ========================== The inclusion of the Coulomb interaction in the description of the three-nucleon continuum is one of the most challenging tasks in theoretical few-body nuclear physics [@alt:02a]. Whereas it has already been solved for elastic proton-deuteron $(pd)$ scattering with realistic hadronic interactions using various procedures [@alt:02a; @kievsky:01a; @chen:01a; @ishikawa:03a; @deltuva:05a], there are only very few attempts [@alt:94a; @kievsky:97a; @suslov:04a] to calculate $pd$ breakup, and none of them uses a complete treatment of the Coulomb interaction and realistic hadronic potentials allowing for a stringent comparison with the experimental data. Recently in [Ref.]{} [@deltuva:05a] we included the Coulomb interaction between the protons in the description of three-nucleon reactions with two-body initial and final states. The description is based on the Alt-Grassberger-Sandhas (AGS) equation [@alt:67a] in momentum space. 
The Coulomb potential is screened and the resulting scattering amplitudes are corrected by the renormalization technique of [Refs.]{} [@taylor:74a; @alt:78a] to recover the unscreened limit. The treatment is applicable to any two-nucleon potential without separable expansion. Reference [@deltuva:05a] and this paper use the purely nucleonic charge-dependent (CD) Bonn potential [@machleidt:01a] and its coupled-channel extension CD Bonn + $\Delta$ [@deltuva:03c], allowing for a single virtual $\Delta$-isobar excitation and fitted to the experimental data with the same degree of accuracy as CD Bonn itself. In the three-nucleon system the $\Delta$ isobar mediates an effective three-nucleon force and effective two- and three-nucleon currents, both consistent with the underlying two-nucleon force. The treatment of [Ref.]{} [@deltuva:05a] is technically highly successful, but still limited to the description of proton-deuteron $(pd)$ elastic scattering and of electromagnetic (e.m.) reactions involving ${{}^3\mathrm{He}}$ with $pd$ initial or final states only. This paper extends the treatment of Coulomb to breakup in $pd$ scattering and to e.m. three-body disintegration of ${{}^3\mathrm{He}}$. In that extension we follow the ideas of [Refs.]{} [@taylor:74a; @alt:78a; @alt:94a], but avoid approximations on the hadronic potential and in the treatment of screened Coulomb. Thus, our three-particle equations, including the screened Coulomb potential, are completely different from the quasiparticle equations solved in [Ref.]{} [@alt:94a] where the two-nucleon screened Coulomb transition matrix is approximated by the screened Coulomb potential. In [Ref.]{} [@deltuva:05c] we presented for the first time a limited set of results for $pd$ breakup using the same technical developments we explain here in greater detail. We have to recall that the screened Coulomb potential $w_R$ we work with is particular. 
It is screened around the separation $r=R$ between two charged baryons and in configuration space is given by $$\begin{gathered} \label{eq:wr} w_R(r) = w(r) \; e^{-(r/R)^n},\end{gathered}$$ with the true Coulomb potential $w(r)=\alpha_e/r$, $\alpha_e$ being the fine structure constant and $n$ controlling the smoothness of the screening. We prefer to work with a sharper screening than the Yukawa screening $(n=1)$ of [Ref.]{} [@alt:94a]. We want to ensure that the screened Coulomb potential $w_R$ approximates well the true Coulomb one $w$ for distances $r<R$ and simultaneously vanishes rapidly for $r>R$, providing a comparatively fast convergence of the partial-wave expansion. In contrast, the sharp cutoff $(n \to \infty)$ yields an unpleasant oscillatory behavior in the momentum-space representation, leading to convergence problems. We find the values $3 \le n \le 6$ to provide a sufficiently smooth, but at the same time a sufficiently rapid screening around $r=R$ like in [Ref.]{} [@deltuva:05a]; $n=4$ is our choice for the results of this paper. The screening radius $R$ is chosen much larger than the range of the strong interaction which is of the order of the pion wavelength $\hbar/m_\pi c \approx 1.4{\;\mathrm{fm}}$. Nevertheless, the screened Coulomb potential $w_R$ is of short range in the sense of scattering theory. Standard scattering theory is therefore applicable. A reliable technique [@deltuva:03a] for solving the AGS equation [@alt:67a] with short-range interactions is extended in [Ref.]{} [@deltuva:05a] to include the screened Coulomb potential between the charged baryons. However, the partial-wave expansion of the pair interaction requires much higher angular momenta than the one of the strong two-nucleon potential alone. The screening radius $R$ will always remain very small compared with nuclear screening distances of atomic scale, i.e., $10^5{\;\mathrm{fm}}$. 
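The trade-off between smoothness and range of the screening can be seen directly. The following sketch (our own illustration) compares the Yukawa case $n=1$ with the $n=4$ choice used here: inside the screening radius, at $r=R/2$, the $n=4$ factor stays close to 1, so $w_R$ approximates the true Coulomb potential well, while at $r=2R$ it is many orders of magnitude smaller, i.e., it decays far faster outside:

```python
import math

def screening(r_over_R, n):
    """Screening factor w_R(r) / w(r) = exp(-(r/R)**n)."""
    return math.exp(-r_over_R ** n)

for n in (1, 4):
    inside, outside = screening(0.5, n), screening(2.0, n)
    print(n, inside, outside)
# n=1: ~0.61 inside, ~0.14 outside -- distorts the inner region, decays slowly
# n=4: ~0.94 inside, ~1e-7 outside -- nearly true Coulomb inside, sharp cutoff
```

The sharp cutoff $n \to \infty$ would push the inside value to exactly 1, but at the price of the oscillatory momentum-space behavior mentioned above.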
Thus, the employed screened Coulomb potential $w_R$ is unable to simulate properly the physics of nuclear screening and, even more, all features of the true Coulomb potential. Thus, the approximate breakup calculations with screened Coulomb $w_R$ have to be corrected for their shortcomings in a controlled way. References [@taylor:74a; @alt:78a] give the prescription for the correction procedure which we follow here for breakup as we did previously for elastic scattering, and that involves the renormalization of the on-shell amplitudes in order to get the proper unscreened Coulomb limit. After the indicated corrections, the predictions for breakup observables have to show independence from the choice of the screening radius $R$, provided it is chosen sufficiently large. That convergence will be the internal criterion for the reliability of our Coulomb treatment. Configuration space treatments of Coulomb [@kievsky:97a; @suslov:04a] may provide a viable alternative to the integral equation approach in momentum space on which this paper is based. References [@kievsky:97a; @suslov:04a] have provided first results for $pd$ breakup, but they still involve approximations in the treatment of Coulomb and the employed hadronic dynamics is not realistic. Thus, a benchmark comparison between our breakup results and corresponding configuration space results is, in contrast to $pd$ elastic scattering [@deltuva:05b], not possible yet. With respect to the reliability of our Coulomb treatment for breakup, we rely solely on our internal criterion, i.e., the convergence of breakup observables with the screening radius $R$; however, that criterion was absolutely reliable for $pd$ elastic scattering and related e.m. reactions. Section \[sec:th\] develops the technical apparatus underlying the calculations. Section \[sec:res\] presents some characteristic effects of Coulomb in three-nucleon breakup reactions. Section \[sec:concl\] gives our conclusions. 
Treatment of Coulomb interaction between protons in breakup \[sec:th\] ====================================================================== This section carries over the treatment of the Coulomb interaction given in [Ref.]{} [@deltuva:05a] for $pd$ elastic scattering and corresponding e.m. reactions, to $pd$ breakup and to e.m. three-body disintegration of ${{}^3\mathrm{He}}$. It establishes a theoretical procedure leading to a calculational scheme. The discussions of hadronic and e.m. reactions are done separately. Theoretical framework for the description of proton-deuteron breakup with Coulomb \[sec:thpdb\] ----------------------------------------------------------------------------------------------- This section focuses on $pd$ breakup. However, the transition matrices for elastic scattering and breakup are so closely connected that certain relations between scattering operators already developed in [Ref.]{} [@deltuva:05a] have to be recalled to make this paper self-contained. Each pair of nucleons $(\beta \gamma)$ interacts through the strong coupled-channel potential $v_\alpha$ and the Coulomb potential $w_\alpha$. We assume that $w_\alpha$ acts
--- abstract: | In this article, we prove the existence of bounded solutions of quadratic backward SDEs with jumps, that is to say for which the generator has quadratic growth in the variables $(z,u)$. From a technical point of view, we use a direct fixed point approach as in Tevzadze [@tev], which allows us to obtain existence and uniqueness of a solution when the terminal condition is small enough. Then, thanks to a well-chosen splitting, we recover an existence result for a general bounded solution. Under additional assumptions, we can obtain stability results and a comparison theorem, which as usual imply uniqueness. [**Key words:**]{} BSDEs, quadratic growth, jumps, fixed-point theorem. [**AMS 2000 subject classifications:**]{} 60H10, 60H30 author: - 'Nabil [Kazi-Tani]{}[^1]' - 'Dylan [Possamaï]{}[^2]' - 'Chao [Zhou]{}[^3]' title: 'Quadratic BSDEs with jumps: a fixed-point approach[^4] ' --- Introduction ============ Motivated by duality methods and maximum principles for optimal stochastic control, Bismut studied in [@bis] a linear backward stochastic differential equation (BSDE). In their seminal paper [@pardpeng], Pardoux and Peng generalized such equations to the non-linear Lipschitz case and proved existence and uniqueness results in a Brownian framework. Since then, a lot of attention has been given to BSDEs and their applications, not only in stochastic control, but also in theoretical economics, stochastic differential games and financial mathematics. In this context, the generalization of Backward SDEs to a setting with jumps enlarges again the scope of their applications, for instance to insurance modeling, in which jumps are inherent (see for instance Liu and Ma [@liu]). Li and Tang [@li] were the first to obtain a wellposedness result for Lipschitz BSDEs with jumps, using a fixed point approach similar to the one used in [@pardpeng]. Let us now make precise the structure of these equations in a discontinuous setting. 
Given a filtered probability space $(\Omega,\mathcal F,\left\{\mathcal F_t\right\}_{0\leq t\leq T},\mathbb P)$ generated by an $\mathbb R^d$-valued Brownian motion $B$ and a random measure $\mu$ with compensator $\nu$, solving a BSDEJ with generator $g$ and terminal condition $\xi$ consists in finding a triple of progressively measurable processes $(Y,Z,U)$ such that for all $t \in [0,T]$, $\mathbb P-a.s.$ $$\begin{aligned} Y_t=\xi +\int_t^T g_s(Y_s,Z_s,U_s)ds-\int_t^T Z_s dB_s -\int_t^T \int_{\R^d\backslash \{0\}} U_s(x)(\mu-\nu)(ds,dx). \label{def_bsdej}\end{aligned}$$ We refer the reader to Section \[notations\_qbsdej\] for more precise definitions and notations. In this paper, $g$ will be assumed to satisfy a Lipschitz-quadratic growth property. More precisely, $g$ will be Lipschitz in $y$, and will satisfy a quadratic growth condition in $(z,u)$ (see Assumption \[assump:hquad\](iii) below). The interest in such a class of quadratic BSDEs has increased a lot in the past few years, mainly due to the fact that they naturally appear in many stochastic control problems, for instance involving utility maximization (see among many others [@ekr] and [@him]). When the filtration is generated only by a Brownian motion, the existence and uniqueness of quadratic BSDEs with a bounded terminal condition was first treated by Kobylanski [@kob]. Using an exponential transformation, she managed to fall back into the scope of BSDEs with a coefficient having linear growth. The wellposedness result for quadratic BSDEs is then obtained by means of an approximation method. The main difficulty lies in proving that the martingale part of the approximation converges in a strong sense. This result has since been extended in several directions, to a continuous setting by Morlais [@morlais], to unbounded solutions by Briand and Hu [@bh] or more recently by Mocha and Westray [@moc]. 
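To visualize the backward-in-time structure of equation \[def\_bsdej\], consider a purely illustrative toy case (unrelated to the fixed-point method developed in this paper): for a deterministic terminal condition and a generator $g(y)=-ry$ depending only on $y$, the martingale integrands vanish ($Z=U=0$) and the BSDEJ reduces to the ODE $Y_t=\xi-\int_t^T rY_s\,ds$, which an explicit backward Euler scheme solves; the values of $r$, $T$ and $\xi$ below are arbitrary:

```python
import math

def solve_linear_bsde(xi=1.0, r=0.5, T=1.0, n_steps=10_000):
    """Backward Euler for Y_t = xi + int_t^T g(Y_s) ds with g(y) = -r*y.

    With deterministic xi the martingale parts vanish (Z = U = 0) and the
    exact solution is Y_t = xi * exp(-r * (T - t)).
    """
    dt = T / n_steps
    y = xi                          # start from the terminal condition at t = T
    for _ in range(n_steps):        # step backwards in time
        y = y + (-r * y) * dt       # Y_i = Y_{i+1} + g(Y_{i+1}) * dt
    return y

print(solve_linear_bsde(), math.exp(-0.5))   # the two values agree closely
```

In the genuinely stochastic quadratic case treated here, no such reduction exists: the conditional expectations defining $(Y,Z,U)$ must be handled by the analytic arguments of the paper.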
In particular cases, several authors managed to obtain further results, to name but a few, see Hu and Schweizer [@hu], Hu, Imkeller and Müller [@him], Mania and Tevzadze [@man] or Delbaen, et al. [@del]. This approach was later totally revisited by Tevzadze [@tev], who gave a direct proof in the Lipschitz-quadratic setting. His methodology is fundamentally different, since he uses a fixed-point argument to obtain existence of a solution for small terminal condition, and then pastes solutions together in the general bounded case. In this regard, there is no longer any need to obtain the difficult strong convergence result needed by Kobylanski [@kob]. More recently, applying yet a completely different approach using now a forward point of view and stability results for a special class of quadratic semimartingales, Barrieu and El Karoui [@elkarbar] generalized the above results. Their approach has the merit of greatly simplifying the problem of strong convergence of the martingale part when using approximation arguments, since they rely on very general semimartingale convergence results. Notice that this approach was, partially, present in an earlier work of Cazanave, Barrieu and El Karoui [@elkarcaz], but limited to a bounded framework. Nonetheless, when it comes to quadratic BSDEs in a discontinuous setting, the literature is far less abounding. Until very recently, the only existing results concerned particular cases of quadratic BSDEs, which were exactly the ones appearing in utility maximization or indifference pricing problems in a jump setting. Thus, Becherer [@bech] first studied bounded solutions to BSDEs with jumps in a finite activity setting, and his general results were improved by Morlais [@mor], who proved existence of the solution to a special quadratic BSDE with jumps, which naturally appears in a utility maximization problem, using the same type of techniques as Kobylanski. 
The first breakthrough in order to tackle the general case was obtained by Ngoupeyou [@ngou] in his PhD thesis, and the subsequent papers by El Karoui, Matoussi and Ngoupeyou [@elmatn] and by Jeanblanc, Matoussi and Ngoupeyou [@jmn]. They non-trivially extended the techniques developed in [@elkarbar] to a jump setting, and managed to obtain existence of solutions for quadratic BSDEs with unbounded terminal conditions. We emphasize that some of our arguments were inspired by their techniques and the ones developed in [@elkarbar]. Nonetheless, as explained throughout the paper, our approach follows a completely different direction and allows us in some cases to consider BSDEs which are outside the scope of [@elmatn], even though, unlike them, we are constrained to work with bounded terminal conditions. Moreover, at least for small terminal conditions, our approach allows us to obtain a wellposedness theory for multidimensional quadratic BSDEs with jumps. After the completion of this paper, we became aware of a very recent result of Laeven and Stadje [@laev], who proved a general existence result for BSDEJs with convex generators, using verification arguments. We emphasize that our approach is very different and does not need any convexity assumption in order to obtain existence of a solution. Nonetheless, their result and ours do not imply each other. Our aim here is to extend the fixed-point methodology of Tevzadze [@tev] to the case of a discontinuous filtration. We first obtain an existence result for a terminal condition $\xi$ having a ${\left\|\cdot\right\|}_{\infty}$-norm which is small enough. Then the result for any $\xi$ in $\mathbb L^{\infty}$ follows by splitting $\xi$ into pieces having a small enough norm, and then pasting the obtained solutions to a single equation. Since we deal with bounded solutions, the space of BMO martingales will play a particular role in our setting. 
We will show that it is indeed the natural space for the continuous and the pure jump martingale terms appearing in the BSDE \[def\_bsdej\], when $Y$ is bounded. When it comes to uniqueness of a solution in this framework with jumps, we need additional assumptions on the generator $g$ for a comparison theorem to hold. Namely, we will use on the one hand the Assumption \[assump.roy\], which was first introduced by Royer [@roy] in order to ensure the validity of a comparison theorem for Lipschitz BSDEs with jumps, and on the other hand a convexity assumption which was already considered by Briand and Hu [@bh2] in the continuous case. We extend here these comparison theorems to our setting (Proposition \[prop.comp\]), and then use them to give a uniqueness result. This wellposedness result for bounded quadratic BSDEs with jumps opens the way to many possible applications. Barrieu and El Karoui [@elkarbar2] used quadratic BSDEs to define time consistent convex risk measures and study their properties. The extension of some of these results to the case with jumps is the object of our accompanying paper [@kpz4]. The rest of this paper is organized as follows. In Section \[section.1\], we give all the notations and present the
--- author: - Wenjun Liao - Chenghua Lin bibliography: - 'sample.bib' title: Deep Ensemble Learning for News Stance Detection --- ***Keywords: Stance detection, Fake News, Neural Network, Deep ensemble learning, NLP*** Extended Abstract {#extended-abstract .unnumbered} ================= Detecting stance in news is important for news veracity assessment because it helps fact-checking by predicting a stance with respect to a central claim from different information sources. Initiated in 2017, the Fake News Challenge Stage One[^1] (FNC-1) proposed the task of detecting the stance of a news article body relative to a given headline, as a first step towards fake news detection. The body text may agree or disagree with the headline, discuss the same claim as the headline without taking a position, or be unrelated to the headline. Several state-of-the-art algorithms [@hanselowski2018retrospective; @riedel2017simple] have been implemented based on the training dataset provided by FNC-1. We conducted an error analysis of the top three performing systems in FNC-1. Team1, *‘SOLAT in the SWEN’* from Talos Intelligence[^2], won the competition by using a 50/50 weighted-average ensemble of a convolutional neural network and gradient-boosted decision trees. Team2, *‘Athene’* from TU Darmstadt, achieved second place by using hard voting over results generated by five randomly initialized Multilayer Perceptron (MLP) structures, where each MLP is constructed with seven hidden layers [@hanselowski2018retrospective]. The two approaches use features from semantic analysis and bag-of-words, as well as baseline features defined by FNC-1, which include word/ngram overlap features and indicator features for polarity and refutation. Team3, *‘UCL Machine Reading’*, uses a simple end-to-end MLP model with a 10000-dimensional Term Frequency (TF) vector (5000 extracted from headlines and 5000 from text bodies) and a one-dimensional TF-IDF cosine similarity feature as input [@riedel2017simple]. 
The MLP architecture has one hidden layer with 100 units, and its output layer has four units corresponding to the four possible classes. A rectified linear unit activation function is applied to the hidden layer and Softmax to the output layer. The loss function is the sum of the $l_2$ regularization of the MLP weights and the cross entropy between outputs and true labels. The predicted class is given by the argmax over the output layer. Several techniques are adopted to optimize the model training process, such as mini-batch training and dropout. According to our error analysis, UCL’s system is simple but tough to beat; it is therefore chosen as the new baseline. **Method.**   In this work, we developed five new models by extending the system of UCL. They can be divided into two categories. The first category encodes additional keyword features during model training, where the keywords are represented as indicator vectors and are concatenated to the baseline features. The keywords consist of manually selected refutation words based on error analysis. To make this selection process automatic, three algorithms were created based on Mutual Information (MI) theory. The keyword generator based on MI customized class (MICC) gave the best performance. Figure 1(a) illustrates the work-flow of the MICC algorithm. The second category adopts article body-title similarity as part of the model training input, where word2vec is introduced and two document similarity calculation algorithms are implemented: word2vec cosine similarity and Word Mover’s Distance. **Results.**   Outputs generated from the different aforementioned methods are combined following two rules, *concatenation* and *summation*. Next, single models as well as ensembles of two or three randomly selected models go through 10-fold cross validation. The output layer becomes $4\cdot\textit{N}$-dimensional when adopting the concatenation rule, where *N* is the number of models selected for ensemble. 
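The baseline input representation can be sketched in a few lines (a toy vocabulary stands in for the 5000-term head and body vocabularies, the IDF weights are placeholders, and the helper names are ours, not UCL's):

```python
import numpy as np
from collections import Counter

def tf_vector(tokens, vocab):
    """Raw term-frequency vector over a fixed vocabulary."""
    counts = Counter(tokens)
    return np.array([counts[w] for w in vocab], dtype=float)

def features(headline, body, vocab, idf):
    """Concatenated TF vectors plus one TF-IDF cosine-similarity feature."""
    h = tf_vector(headline.lower().split(), vocab)
    b = tf_vector(body.lower().split(), vocab)
    hw, bw = h * idf, b * idf                      # TF-IDF weighting
    norm = np.linalg.norm(hw) * np.linalg.norm(bw)
    sim = float(hw @ bw / norm) if norm else 0.0
    return np.concatenate([h, b, [sim]])

vocab = ["robot", "dances", "denies", "report", "video"]
idf = np.ones(len(vocab))                          # placeholder IDF weights
x = features("robot dances in video",
             "a report denies the robot video", vocab, idf)
print(x.shape)                                     # (11,) = 2*|vocab| + 1
```

The keyword-indicator and similarity features described above would simply be further entries concatenated to this vector.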
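The FNC evaluation metric referred to below weights the two decisions differently: 0.25 points for the related/unrelated distinction and a further 0.75 for the exact related class. A direct transcription of that description (our own sketch, not the official scorer):

```python
RELATED = {"agree", "disagree", "discuss"}

def fnc_score(gold, predicted):
    """Weighted FNC score: 0.25 for correct relatedness, plus 0.75
    for the exact agree/disagree/discuss label on related pairs."""
    score = 0.0
    for g, p in zip(gold, predicted):
        if (g in RELATED) == (p in RELATED):
            score += 0.25                  # relatedness decided correctly
        if g in RELATED and g == p:
            score += 0.75                  # exact related class correct
    return score

print(fnc_score(["unrelated", "agree", "discuss"],
                ["unrelated", "agree", "disagree"]))   # 1.5
```

Under this weighting, mistaking discuss for disagree costs far less than mistaking related for unrelated, which is why relatedness-oriented features carry so much of the score.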
We considered the evaluation metric defined by FNC, where the correct classification of relatedness contributes 0.25 points and correctly classifying related pairs as agree, disagree or discuss contributes 0.75 points. Experimental results show that an ensemble of three neural network models trained from simple bag-of-words features gives the best performance. These three models are: the baseline MLP; a model from category one where manually selected keyword features are added; a model from category one where the added keyword features are selected by the MICC algorithm. After hyperparameter tuning on the validation set, the ensemble of the three selected models has shown strong performance on the test dataset. As shown in Table 1, our system beats the FNC-1 winner team Talos by 34.25 marks, which is remarkable considering our system’s relatively simple architecture. Figure 1(b) demonstrates the performance of our system. Our deep ensemble model does not stand out in any of the four stance detection categories. However, it reflects the averaged outcome of the best results from the three individual models. It is the ensemble effect that brings the best result in the end. Evaluation has demonstrated that our proposed ensemble-based system can outperform the state-of-the-art algorithms in the news stance detection task with a relatively simple implementation. [.45]{} ![(a) Illustration of the customized-class based MI algorithm. The input is the customized theme word; documents are then classified according to the themes. The outputs are groups of keywords under different classes. (b) The heat map of the detection results. []{data-label="fig:fig"}](Mutual_Information.jpg "fig:"){width="0.95\linewidth"} [.55]{} ![(a) Illustration of the customized-class based MI algorithm. The input is the customized theme word; documents are then classified according to the themes. The outputs are groups of keywords under different classes. (b) The heat map of the detection results.
[]{data-label="fig:fig"}](Heatmap_of_confusion_matrix.png "fig:"){width="0.95\linewidth"}

  Team      &       &       &       &       &      \
  UCL       & 0.44  & 0.066 & 0.814 & 0.979 & 0.404\
  Athene    & 0.447 & 0.095 & 0.809 & 0.992 & 0.416\
  Talos     & 0.585 & 0.019 & 0.762 & 0.987 & 0.409\
  This work & 0.391 & 0.067 & 0.855 & 0.980 & 0.403\

Appendix[^3] {#appendix .unnumbered} ============ The code of this work is available at Github: <https://github.com/amazingclaude/Fake_News_Stance_Detection>.\ \ The full thesis regarding this work is available at ResearchGate: <https://www.researchgate.net/publication/327634447_Stance_Detection_in_Fake_News_An_Approach_based_on_Deep_Ensemble_Learning>. [^1]: <http://www.fakenewschallenge.org> [^2]: <https://github.com/Cisco-Talos/fnc-1> [^3]: This page is not included in the submitted camera-ready version.
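The FNC scoring rule quoted earlier (0.25 points for relatedness, 0.75 more for the exact stance of a related pair) can be sketched directly. This is our reading of the metric, not the official scorer.

```python
RELATED = {"agree", "disagree", "discuss"}

def fnc_score(true_labels, pred_labels):
    """FNC-style score: 0.25 points for classifying relatedness correctly,
    plus 0.75 points for also getting the exact stance of a related pair."""
    score = 0.0
    for t, p in zip(true_labels, pred_labels):
        if (t in RELATED) == (p in RELATED):
            score += 0.25                 # relatedness correct
        if t in RELATED and t == p:
            score += 0.75                 # exact stance of a related pair
    return score
```

Under this rule a perfectly classified related pair is worth 1.0 point, a correctly rejected unrelated pair 0.25, and a related pair with the wrong stance still earns the 0.25 relatedness credit.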
cond-mat/0602009 [Comments on the Superconductivity Solution\ of an Ideal Charged Boson System$^*$]{} R. Friedberg$^1$ and T. D. Lee$^{1,~2}$\ [\ [*New York, NY 10027, U.S.A.*]{}\ [*2. China Center of Advanced Science and Technology (CCAST/World Lab.)*]{}\ [*P.O. Box 8730, Beijing 100080, China*]{}\ ]{} [Abstract]{} We review the present status of the superconductivity solution for an ideal charged boson system, with suggestions for possible improvement. ------------ A dedication in celebration of the 90th birthday of Professor V. L. Ginzburg. This research was supported in part by the U. S. Department of Energy Grant DE-FG02-92ER-40699. [**1. Introduction**]{} An ideal charged boson system is of interest because of the simplicity in its formulation and yet the complexity of its manifestations. The astonishingly complicated behavior of this idealized system may provide some insight into the still not fully understood properties of high $T_c$ superconductivity. As is well known, R. Schafroth \[1\] first studied the superconductivity of this model fifty years ago. In this classic paper he concluded that at zero temperature $T=0$ and in an external constant magnetic field $H$, there is a critical field $$(H_c)_{Sch}=e\rho/2m\eqno(1.1)$$ with $\rho$ denoting the overall number density of the charged bosons and $m$, $e$ their mass and electric charge respectively; the system is in the super phase when $H<H_c$, and in the normal phase when $H>H_c$. Due to an oversight, Schafroth neglected the exchange part of the electrostatic energy, which invalidates his conclusion as was pointed out in a 1990 paper \[2\] by Friedberg, Lee and Ren (FLR). This oversight when corrected makes the ideal charged boson model even more interesting. Some aspects of this simple model are still not well understood. In what follows we first review the Schafroth solution and then the FLR corrections. Our discussions are confined only to $T=0$. [**2. 
Hamiltonian and Schafroth Solution**]{} Let $\phi({\bf r})$ be the charged boson field operator and $\phi^\dag({\bf r})$ its hermitian conjugate, with their equal-time commutator given by $$[\phi({\bf r}),~\phi^\dag({\bf r}')]=\delta^3({\bf r}-{\bf r}').\eqno(2.1)$$ These bosons are non-relativistic, enclosed in a large cubic volume $\Omega=L^3$ and with an external constant background charge density $-e\rho_{ext}~$ so that the integral of the total charge density $$eJ_0\equiv e \phi^\dag\phi-e \rho_{ext}\eqno(2.2)$$ is zero. The Coulomb energy operator is given by $$H_{Coul}=\frac{e^2}{8\pi} \int~|~{\bf r}-{\bf r}'|^{-1}~:J_0({\bf r})J_0({\bf r}'):~ d^3rd^3r'\eqno(2.3)$$ where $:~:$ denotes the normal product in Wick’s notation\[3\] so as to exclude the Coulomb self-energy. Expand the field operator $\phi({\bf r})$ in terms of a complete orthonormal set of $c$-number function $\{f_i({\bf r})\}$: $$\phi({\bf r})=\sum\limits_i a_if_i({\bf r})\eqno(2.4)$$ with $a_i$ and its hermitian conjugate $a_i^\dag$ obeying the commutation relation $[a_i,~a_j^\dag]=\delta_{ij}$, in accordance with (2.1). Take a normalized state vector $|>$ which is also an eigenstate of all $a_i^\dag a_i$ with $$a_i^\dag a_i |>=n_i|>.\eqno(2.5)$$ For such a state, the expectation value of the Coulomb energy $E_{Coul}$ can be written as a sum of three terms: $$<|H_{Coul}|>=E_{ex}+E_{dir}+E_{dir}'\eqno(2.6)$$ where $$E_{ex}=\sum\limits_{i\neq j}\frac{e^2}{8\pi}\int d^3rd^3r' |~{\bf r}-{\bf r}'|^{-1}n_i n_j f_i^*({\bf r}) f_j^*({\bf r}')f_i({\bf r}')f_j({\bf r})$$ $$E_{dir}=\frac{e^2}{8\pi}\int d^3rd^3r' |~{\bf r}-{\bf r}'|^{-1}<|J_0({\bf r})|><|J_0({\bf r}')|>\eqno(2.7)$$ and $$E_{dir}'=-\sum\limits_{i}\frac{e^2}{8\pi}\int d^3rd^3r'|~{\bf r}-{\bf r}'|^{-1}n_i |f_i({\bf r})|^2 |f_i({\bf r}')|^2.$$ The last term $E_{dir}'$ is the subtraction, recognizing that in Wick’s normal product each particle does not interact with itself. 
In the Schafroth solution, for the super phase at $T=0$ all particles are in the zero momentum state; therefore, on account of (2.2) the ensemble average of $J_0$ is zero and so is the Coulomb energy. For the normal phase, take the magnetic field ${\bf B}=B\hat{z}$ with $B$ uniform and pick its gauge field ${\bf A}=Bx\hat{y}$. At $T=0$, let $$f_i({\bf r})=e^{ip_iy}\psi_i(x).\eqno(2.8)$$ Schafroth assumed $p_i=eBx_i$ with $x_i$ spaced at regular intervals $\lambda=2\pi/eBL$, which approaches zero as $L\rightarrow \infty$. This makes the boson density uniform and therefore $E_{dir}=0$. In the same infinite volume limit, one can show readily that $\Omega^{-1}E_{dir}'\rightarrow 0$. Since Schafroth omitted $E_{ex}$, his energy consists only of $$E_{field}=\int d^3r \frac{1}{2}~B^2,\eqno(2.9)$$ $$E_{mech}=\sum\limits_{i}n_i \int d^3r \frac{1}{2m}~\bigg(\frac{d\psi_i}{dx}\bigg)^2\eqno(2.10)$$ and $$E_{dia}=\sum\limits_{i}n_i \int d^3r \frac{1}{2m}~(p_i-eA_y(x))^2$$ $$~~~~~~~~=\sum\limits_{i}n_i \int d^3r \frac{(eB)^2}{2m}~(x-x_i)^2.\eqno(2.11)$$ The sum of (2.10) and (2.11) gives the usual cyclotron energy $$E_{mech}+E_{dia}=\sum\limits_{i}n_i ~\frac{eB}{2m}.\eqno(2.12)$$ Combining with (2.9), Schafroth derived the total Helmholtz free energy density in the normal phase at zero temperature to be $$F_n=\frac{1}{2}~B^2+\frac{e\rho}{2m}~B\eqno(2.13)$$ (Throughout the paper, we take $e$ and $B$ to be positive, since all energies are even in these parameters.) The derivation of (2.13) is, however, flawed by the omission of $E_{ex}$. It turns out that for the above particle wave function (2.8), when $x_i-x_j$ is $\ll$ the cyclotron radius $a=(eB)^{-\frac{1}{2}}$, the coefficient of $n_in_j$ in $E_{ex}$ is proportional to $|x_i-x_j|^{-1}$. Hence $\Omega^{-1}E_{ex}$ diverges logarithmically as the spacing $\lambda \rightarrow 0$. [**3. 
Corrected Normal State at High Density**]{} In this and the next section, we review the FLR analysis for the high density case, when $\rho > r_b^{-3}$ where $r_b=$ Bohr radius $=4\pi/me^2$. a\. . We discuss first the case when $B$ is $>>(m\rho)^{\frac{1}{2}}$, so that the Coulomb correction to the magnetic energy (2.13) can be treated as a perturbation. To find the groundstate energy, we shall continue to assume (2.8) with $p_i=eBx_i$ and $x_i
--- abstract: 'We consider revenue maximization in online auction/pricing problems. A seller sells an identical item in each period to a new buyer, or a new set of buyers. For the online pricing problem, we show regret bounds that scale with the *best fixed price*, rather than the range of the values. We also show regret bounds that are *almost scale free*, and match the offline sample complexity, when comparing to a benchmark that requires a *lower bound on the market share*. These results are obtained by generalizing the classical learning from experts and multi-armed bandit problems to their *multi-scale* versions. In this version, the reward of each action is in a *different range*, and the regret with respect to a given action scales with its *own range*, rather than the maximum range.' author: - | Sébastien Bubeck sebubeck@microsoft.com\ Microsoft Research,\ 1 Microsoft Way,\ Redmond, WA 98052, USA. Nikhil Devanur nikdev@microsoft.com\ Microsoft Research,\ 1 Microsoft Way,\ Redmond, WA 98052, USA. Zhiyi Huang zhiyi@cs.hku.hk\ Department of Computer Science,\ The University of Hong Kong,\ Pokfulam, Hong Kong. Rad Niazadeh rad@cs.stanford.edu\ Department of Computer Science,\ Stanford University,\ Stanford, CA 94305, USA. bibliography: - 'bibliography.bib' title: 'Multi-scale Online Learning and its Applications to Online Auctions' --- online learning, multi-scale learning, auction theory, bandit information, sample complexity
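For background, the classical learning-from-experts procedure that this abstract generalizes can be sketched as follows. This is the standard exponential-weights (Hedge) algorithm with one uniform range, shown only as the baseline; the paper's multi-scale version modifies it so that the regret against each action scales with that action's own range.

```python
import math

def hedge(loss_rows, eta=0.5):
    """Classical Hedge: keep one weight per expert, suffer the
    weight-averaged loss each round, then downweight each expert
    by exp(-eta * its loss). Returns the algorithm's total loss."""
    n = len(loss_rows[0])
    w = [1.0] * n
    total = 0.0
    for losses in loss_rows:
        W = sum(w)
        total += sum(wi * li for wi, li in zip(w, losses)) / W
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, losses)]
    return total

# Toy stream where expert 0 is always better (losses in [0, 1])
rows = [[0.1, 0.9]] * 50
alg = hedge(rows)
best = sum(r[0] for r in rows)   # loss of the best fixed expert
```

With losses in $[0,1]$, the standard guarantee is regret at most $\ln(N)/\eta + \eta T/8$, which for this toy stream is well under 5.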
--- author: - 'L. Sbordone' - 'L. Monaco' - 'C. Moni Bidin' - 'P. Bonifacio' - 'S. Villanova' - 'M. Bellazzini' - 'R. Ibata' - 'M. Chiba' - 'D. Geisler' - 'E. Caffau' - 'S. Duffau' date: 'Received September 15, 1996; accepted March 16, 1997' title: 'Chemical abundances of giant stars in and , two globular clusters associated with the Sagittarius dwarf Spheroidal galaxy?' --- [The tidal disruption of the Sagittarius dwarf Spheroidal galaxy (Sgr dSph) is producing the most prominent substructure in the Milky Way (MW) halo, the Sagittarius Stream. Aside from field stars, it is suspected that the Sgr dSph has lost a number of globular clusters (GC). Many Galactic GC are thought to have originated in the Sgr dSph. While for some candidates an origin in the Sgr dSph has been confirmed owing to chemical similarities, others exist whose chemical composition has never been investigated.]{} [ and  are two of these scarcely studied Sgr dSph candidate-member clusters. To characterize their composition we analyzed one giant star in , and two in .]{} [We analyze high-resolution, high signal-to-noise spectra by means of the MyGIsFOS code, determining atmospheric parameters and abundances for up to 21 species between O and Eu. The abundances are compared with those of MW halo field stars, of unassociated MW halo globulars, and of the metal-poor Sgr dSph main body population.]{} [We derive a metallicity of \[/H\]=$-2.26\pm$0.10 for , and of \[/H\]=$-1.99\pm0.075$ and $-1.97\pm0.076$ for the two stars in . This makes  one of the most metal-poor globular clusters in the MW. Both clusters display an $\alpha$ enhancement similar to the one of the halo at comparable metallicity. The two stars in  clearly display the Na-O anticorrelation widespread among MW globulars. Most other abundances are in good agreement with standard MW halo trends.]{} [The chemistry of the Sgr dSph main body populations is similar to that of the halo at low metallicity. 
It is thus difficult to discriminate between an origin of  and  in the Sgr dSph, and one in the MW. However, the abundances of these clusters do appear closer to that of Sgr dSph than of the halo, favoring an origin in the Sgr dSph system. ]{} Introduction {#c_intro} ============ It is a fundamental prediction of models of galaxy formation based on the cold dark matter (CDM) scenario, that dark matter haloes of the size of that of the Milky Way grow through the accretion of smaller subsystems [see e.g. @Font11 and references therein]. These resemble very much the “protogalactic fragments” invoked by @searle78. The merging of minor systems is supposed to be a common event in the early stages of the galactic history, playing a role even in the formation of the stellar disk [e.g. @lake89; @abadi03]. Despite this general agreement, the processes governing galaxy formation still present many obscure aspects, and understanding them is one of the greatest challenges of modern astrophysics. For example, it has been noticed that the chemical abundance patterns of present-day dwarf spheroidal (dSph) galaxies in the Local Group are very different from those observed among stars in the Galactic halo [see @vladilo03; @venn04 and references therein]. Most noticeably, dSphs typically show a disappearance of $\alpha$ enhancement at lower metallicity than stars in the Milky Way, which is considered evidence of a slow, or bursting, star formation history. This is at variance with the properties of the stars belonging to the old, spheroidal Galactic component. This clearly excludes that the known dSph can represent the typical building blocks of larger structures such as the Galactic halo [@geisler07]. The observed differences are not unexpected, however, because dSphs represent a very different environment for star formation [@lanfranchi03; @lanfranchi04]. 
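As an aside, the two per-star metallicities quoted in the abstract for the second cluster can be combined with a standard inverse-variance weighting. This is only an illustration of how such independent measurements are typically averaged, not necessarily the authors' procedure.

```python
def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean and its 1-sigma uncertainty."""
    weights = [1.0 / s ** 2 for s in sigmas]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    return mean, (1.0 / sum(weights)) ** 0.5

# [Fe/H] = -1.99 +/- 0.075 and -1.97 +/- 0.076 for the two giants
feh, err = weighted_mean([-1.99, -1.97], [0.075, 0.076])
```

The combined value is about $-1.98 \pm 0.053$, consistent with the two stars sharing a single cluster metallicity.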
In any case, the observed dSphs are evolved structures that survived merging, while the models suggest that, although accretion events take place even today, the majority of the merging processes occurred very early in the history of our Galaxy. The chemical peculiarities of the present-day small satellite galaxies could have appeared later in their evolution, and the genuine building blocks could therefore have been chemically very different from what is observed today, but more similar to the resulting merged structures. The model of @Font06 implies that the satellites that formed the halo were accreted eight to nine Gyr ago, while the presently observed satellites were accreted only four to five Gyr ago, or are still being accreted. The Sagittarius (Sgr dSph) galaxy is one of the most studied systems in the Local Group because it is the nearest known dSph [@monaco04], currently merging with the Milky Way [@ibata94]. It thus represents a unique opportunity to study in detail both the stellar population of a dSph and the merging process of a minor satellite into a larger structure. Among Local Group galaxies the Sgr dSph is certainly exceptional, first because of the high metallicity of its dominant population (\[Fe/H\]$\sim -0.5$) compared to its relatively low luminosity ($M_V=-$13.4, @Mateo). While the other galaxies of the Local Group follow a well-defined metallicity-luminosity relation, the Sgr dSph is clearly underluminous by almost three magnitudes with respect to this relation [see figure 5 of @Bonifacio05]. The chemical composition of the Sgr dSph is also very exotic because, aside from the aforementioned underabundance of $\alpha$-elements typical of small galaxies, all the other chemical elements studied so far present very peculiar patterns, clearly distinct from the Milky Way [@sbordone07]. However, this behavior is observed only for stars with $[\mathrm{Fe/H}]\geq -$1. 
No full chemical analysis has been performed to date on field Sgr stars of lower metallicity, but the measured abundances of $\alpha$-elements suggest that the chemical differences with the Galactic halo should be much lower for $[\mathrm{Fe/H}]\leq -$1 [@monaco05]. At $[\mathrm{Fe/H}]\leq -$1.5 the Sgr dSph stellar population could be chemically indistinguishable from the halo, at variance with other dSphs in the Local Group [@shetrone01; @shetrone03], although even this difference tends to disappear at lower metallicities [@tolstoy09]. Decades of Galactic studies have shown that crucial information about the properties and the history of a galaxy can be unveiled by the study of its globular clusters (GCs). For many aspects they can still be approximated as simple, coeval, and chemically homogeneous stellar populations, although it has been known for a while that this is not strictly true [see @gratton12 for a review]. They thus represent a snapshot of the chemical composition of the host galaxy at the time of their formation. The family of GCs associated with the Sgr dSph today counts five confirmed members. However eighteen more clusters have been proposed as belonging to the Sgr dSph (see @bellazzini02 [@bellazzini03a; @bellazzini03] and Table 1 of @law10, hereafter L10, for a complete census). Nevertheless, the probability of a chance alignment with the Sgr streams is not negligible, and many objects in this large list of candidates are most probably not real members. In their recent analysis based on new models of the Sgr tidal disruption, L10 found that only fifteen of the candidates proposed in the literature have a non-negligible probability of belonging to the Sgr dSph. However, calculating the expected quantity of false associations in the sample, they proposed that only the nine GCs with high confidence levels most likely originate from the Sgr galaxy (in good quantitative agreement with the previous analysis by @bellazzini03). 
This sample of objects with very high membership probability includes all five of the confirmed clusters (M54, , , , and ), plus [@carraro09], [@carraro07], , and  [@bellazzini03]. [The large list of GC candidate members is particularly interesting because the estimated total luminosity of the Sgr galaxy is comparable to that of Fornax [@vandenBergh00; @majewski03] which, with its five confirmed GCs, is known for its anomalously high GC specific frequency [@vandenBergh98]. Hence, if more than five GCs were confirmed members of the Sgr family, the parent dSph would be even more anomalous than Fornax, unless its total luminosity has been largely underestimated. Estimating the Sgr dSph mass is, however, difficult because the galaxy is being tidally destroyed, and its relatively fast chemical evolution and presence of young, metal-rich populations hint at a very massive progenitor [@bonifacio04; @sbordone07; @siegel07; @tolstoy09; @deboer14].]{} Stimulated by the results of L10, we performed a chemical analysis of
--- abstract: 'We prove that $\Ext^{\bullet}_A(k,k)$ is a Gerstenhaber algebra, where $A$ is a Hopf algebra. In case $A=D(H)$ is the Drinfeld double of a finite dimensional Hopf algebra $H$, our results imply the existence of a Gerstenhaber bracket on $H^{\bullet}_{GS}(H,H)$. This fact was conjectured by R. Taillefer in [@T3]. The method consists in identifying $\Ext^{\bullet}_A(k,k)$ as a Gerstenhaber subalgebra of $H^{\bullet}(A,A)$ (the Hochschild cohomology of $A$).' author: - 'Marco A. Farinati $^{1}$ - Andrea Solotar ${}^{1}$' title: 'G-structure on the cohomology of Hopf algebras' --- [Dto. de Matemática Facultad de Cs. Exactas y Naturales. Universidad de Buenos Aires. Ciudad Universitaria Pab I. 1428 - Buenos Aires - Argentina. e-mail: asolotar@dm.uba.ar, mfarinat@dm.uba.ar\ Research partially supported by UBACYT X062 and Fundación Antorchas (proyecto 14022-47).\ Both authors are research members of CONICET (Argentina).]{} Introduction {#introduction .unnumbered} ============ The motivation of this paper is to prove that $H^{\bullet}_{GS}(H,H)$ has a structure of a G-algebra. We prove this result when $H$ is a finite dimensional Hopf algebra (see Theorem \[teo3\] and Corollary \[coroimportante\]). $H^{\bullet}_{GS}$ is the cohomology theory for Hopf algebras defined by Gerstenhaber and Schack in [@GS1]. In order to obtain commutativity of the cup product we prove a general statement on $\Ext$ groups over Hopf algebras (without any finiteness assumption). When $H$ is finite dimensional, the category of Hopf bimodules is isomorphic to a module category, over an algebra $X$ (also finite dimensional) defined by C. Cibils and M. Rosso (see [@CR]), and this category is also equivalent to the category of Yetter-Drinfeld modules, which is isomorphic to the category of modules over the Hopf algebra $D(H)$ (the Drinfeld double of $H$). In [@T2], R. 
Taillefer has defined a natural cup product in $H^{\bullet}_{GS}(H,H)=H^{\bullet}_b(H,H)$ (see [@GS2] for the definition of $H^{\bullet}_b$). When $H$ is finite dimensional she proved that $H^{\bullet}_b(H,H)\cong\Ext^{\bullet}_{X}(H,H)$, and using this isomorphism she showed that it is (graded) commutative. In a later work [@T3] she extended the result of commutativity of the cup product to arbitrary dimensional Hopf algebras and she conjectured the existence (and a formula) of a Gerstenhaber bracket. Our method for giving a Gerstenhaber bracket is the following: under the equivalence of categories ${}_X$-$\mod\cong {}_{D(H)}$-$\mod$, the object $H$ corresponds to $H^{coH}=k$, so $\Ext^{\bullet}_{X}(H,H)\cong \Ext^{\bullet}_{D(H)}(k,k)$ (isomorphism of graded algebras); after D. Ştefan [@St] one knows that $\Ext^{\bullet}_{D(H)}(k,k)\cong H^{\bullet}(D(H),k)$. In Theorem \[teo2\] we prove that, if $A$ is an arbitrary Hopf algebra, then $H^{\bullet}(A,k)$ is isomorphic to a subalgebra of $H^{\bullet}(A,A)$ (in particular it is graded commutative) and in Theorem \[teo3\] we prove that the image of $H^{\bullet}(A,k)$ in $H^{\bullet}(A,A)$ is stable under the brace operation, in particular it is closed under the Gerstenhaber bracket of $H^{\bullet}(A,A)$. So, the existence of the Gerstenhaber bracket on $H^{\bullet}_{GS}(H,H)$ follows, at least in the finite dimensional case, taking $A=D(H)$. We don’t know if this bracket coincides with the formula proposed in [@T3]. We also provide a proof that the algebra $\Ext^{\bullet}_{\C}(k,k)$ is graded commutative when $\C$ is a braided monoidal category satisfying certain homological hypotheses (see Theorem \[teo1\]). This gives an alternative proof of the commutativity result in the arbitrary dimensional case taking $\C={}_H^H\Y\D$, the Yetter-Drinfeld modules. In this paper, the letter $A$ will denote a Hopf algebra over a field $k$. Cup products ============ This section has two parts. 
First we prove a generalization of the fact that the cup product on $H^{\bullet}(G,k)$ is graded commutative. The general abstract setting is that of a braided (abelian) category with enough injectives satisfying a Künneth formula (see definitions below). The other part will concern the relation between self extensions of $k$ and Hochschild cohomology of $A$ with coefficients in $k$. Let us recall the definition of a braided category: The data $(\C,\ot,k,c)$ is called a [**braided**]{} category with unit element $k$ if 1. $\C$ is an abelian category. 2. $-\ot -$ is a bifunctor, bilinear, associative, and there are natural isomorphisms $k\ot X\cong X\cong X\ot k$ for all objects $X$ in $\C$. 3. For every pair of objects $X$ and $Y$, $c_{X,Y}:X\ot Y\to Y\ot X$ is a natural isomorphism. The isomorphisms $c_{X,k}:X\ot k\cong k\ot X$ agree with the isomorphism of the unit axiom, and for every triple $X$, $Y$, $Z$ of objects in $\C$, the Yang-Baxter equation is satisfied: $$(\id_Z\ot c_{X,Y})\circ (c_{X,Z}\ot \id_Y)\circ(\id_X\ot c_{Y,Z})= (c_{Y,Z}\ot \id_X)\circ(\id_Y\ot c_{X,Z})\circ (c_{X,Y}\ot \id_Z)$$ If one doesn’t have the data $c$, and axioms 1 and 2 are satisfied, we say that $(\C,\ot,k)$ is a [**monoidal**]{} category. We will say that a monoidal category $(\C,\ot,k)$ satisfies the [**Künneth formula**]{} if and only if there are natural isomorphisms $H_*(X_*,d_X)\ot H_*(Y_*,d_Y)\cong H_*(X_*\ot Y_*,d_{X\ot Y})$ for all pairs of complexes in $\C$. \[teo1\] Let $(\C, \ot,k,c)$ be a braided category with enough injectives satisfying the Künneth formula, then $\Ext^{\bullet}_{\C}(k,k)$ is graded commutative. We proceed as in the proof that $H^{\bullet}(G,k)$ is graded commutative (see for example [@B], page 51, Vol I). The proof is based on two points: firstly a definition of a cup product using $\ot$, secondly a Lemma relating this construction and the Yoneda product of extensions. 
Let $0\to M\to X_p\to \dots X_1\to N\to 0$ and $0\to M'\to X'_q\to \dots X'_1\to N'\to 0$ be two extensions in $\C$. Then $N_*:=(0\to M\to X_p\to \dots X_1\to 0)$ and $N'_*:=(0\to M'\to X'_q\to \dots X'_1\to 0)$ are two complexes, quasi-isomorphic to $N$ and $N'$ respectively. By the Künneth formula $N_*\ot N'_*$ is a complex quasi-isomorphic to $N\ot N'$, so “completing” this complex with $N\ot N'$ (more precisely considering the mapping cone of the chain map $N_*\ot N'_*\to N\ot N'$) one has an extension in $\C$, beginning with $M\ot M'$ and ending with $N\ot N'$. So, we have defined a cup product: $$\Ext^p_{\C}(N,M)\times\Ext_{\C}^q(N', M')\to \Ext_{\C}^{p+q}(N\ot N',M\ot M')$$ We will denote this product by a dot, and the Yoneda product by $\smile$. The Lemma relating this product and the Yoneda one is the following: If $f\in\Ext^p_{\C}(M,N)$ and $g\in\Ext^q_{\C}(M
--- abstract: 'In addition to constraining bilateral exposures of financial institutions, there are essentially two options for future financial regulation of systemic risk (SR): First, financial regulation could attempt to reduce the financial fragility of global or domestic systemically important financial institutions (G-SIBs or D-SIBs), as for instance proposed in Basel III. Second, future financial regulation could attempt strengthening the financial system as a whole. This can be achieved by re-shaping the topology of financial networks. We use an agent-based model (ABM) of a financial system and the real economy to study and compare the consequences of these two options. By conducting three “computer experiments” with the ABM we find that re-shaping financial networks is more effective and efficient than reducing leverage. Capital surcharges for G-SIBs can reduce SR, but must be larger than those specified in Basel III in order to have a measurable impact. This can cause a loss of efficiency. Basel III capital surcharges for G-SIBs can have pro-cyclical side effects.' author: - 'Sebastian Poledna$^{1,2}$' - 'Olaf Bochmann$^{4,5}$' - 'Stefan Thurner$^{1,2,3}$' title: 'Basel III capital surcharges for G-SIBs fail to control systemic risk and can cause pro-cyclical side effects' --- Introduction {#intro} ============ Six years after the financial crisis of 2007-2008, millions of households worldwide are still struggling to recover from the aftermath of those traumatic events. The majority of losses are indirect, such as people losing homes or jobs, and for the majority, income levels have dropped substantially. For the economy as a whole, for households, and for public budgets, the miseries of the market meltdown of 2007-2008 are not yet over. As a consequence, a consensus for the need for new financial regulation is emerging [@Aikman:2013aa]. 
Future financial regulation should be designed to mitigate risks within the financial system as a whole, and should specifically address the issue of systemic risk (SR). SR is the risk that the financial system as a whole, or a large fraction thereof, can no longer perform its function as a credit provider, and as a result collapses. In a narrow sense, it is the notion of contagion or impact from the failure of a financial institution or group of institutions on the financial system and the wider economy [@De-Bandt:2000aa; @BIS:2010aa]. Generally, it emerges through one of two mechanisms, either through interconnectedness or through the synchronization of behavior of agents (fire sales, margin calls, herding). The latter can be measured by a potential capital shortfall during periods of synchronized behavior where many institutions are simultaneously distressed [@Adrian:2011aa; @Acharya:2010aa; @Brownlees:2012aa; @Huang:2012aa]. Measures for a potential capital shortfall are closely related to the leverage of financial institutions [@Acharya:2010aa; @Brownlees:2012aa]. Interconnectedness is a consequence of the network nature of financial claims and liabilities [@Eisenberg:2001aa]. Several studies indicate that financial network measures could potentially serve as early warning indicators for crises [@Caballero:2012aa; @Billio:2012aa; @Minoiu:2013aa]. In addition to constraining the (potentially harmful) bilateral exposures of financial institutions, there are essentially two options for future financial regulation to address the problem [@Haldane:2011aa; @Markose:2012aa]: First, financial regulation could attempt to reduce the financial fragility of “super-spreaders” or *systemically important financial institutions* (SIFIs), i.e. limiting a potential capital shortfall. This can be achieved by reducing the leverage or increasing the capital requirements for SIFIs. 
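The interconnectedness channel, i.e. the network of claims and liabilities formalized in [@Eisenberg:2001aa], can be made concrete with a minimal clearing-vector computation. This is a toy sketch of the standard Eisenberg-Noe fictitious-default iteration with made-up numbers, not the model used in this paper.

```python
import numpy as np

def clearing_vector(L, e, n_iter=100):
    """Eisenberg-Noe fictitious-default iteration.
    L[i, j]: nominal liability of bank i to bank j; e: outside assets.
    Assumes every bank has some obligations (row sums of L positive)."""
    p_bar = L.sum(axis=1)                       # total nominal obligations
    Pi = L / p_bar[:, None]                     # relative liability matrix
    p = p_bar.copy()
    for _ in range(n_iter):                     # monotone iteration from p_bar down
        p = np.minimum(p_bar, e + Pi.T @ p)     # pay all you owe, or all you have
    return p

# Toy three-bank example (illustrative numbers only)
L = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
e = np.array([1.0, 0.5, 0.5])
p = clearing_vector(L, e)
defaults = p < L.sum(axis=1) - 1e-9             # banks unable to pay in full
```

In this example the first bank owes 3 but can only pay 2, and its shortfall is absorbed by its counterparties; contagion is exactly this propagation of shortfalls through the liability network.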
“Super-spreaders” are institutions that are either too big, too connected or otherwise too important to fail. However, a reduction of leverage simultaneously reduces efficiency and can lead to pro-cyclical effects [@Minsky:1992aa; @Fostel:2008aa; @Geanakoplos:2010aa; @Adrian:2008aa; @Brunnermeier:2009aa; @Thurner:2012aa; @Caccioli:2012aa; @Poledna:2014ab; @Aymanns:2014aa; @Caccioli:2015aa]. Second, future financial regulation could attempt strengthening the financial system as a whole. It has been noted that different financial network topologies have different probabilities for systemic collapse [@Roukny:2013aa]. In this sense the management of SR is reduced to the technical problem of re-shaping the topology of financial networks [@Haldane:2011aa]. The Basel Committee on Banking Supervision (BCBS) recommendation for future financial regulation for SIFIs is an example of the first option. The Basel III framework recognizes SIFIs and, in particular, global and domestic systemically important banks (G-SIBs or D-SIBs). The BCBS recommends increased capital requirements for SIFIs – the so called “SIFI surcharges” [@BIS:2010aa; @Georg:2011aa]. They propose that SR should be measured in terms of the impact that a bank’s failure can have on the global financial system and the wider economy, rather than just the risk that a failure could occur. Therefore they understand SR as a global, system-wide, loss-given-default (LGD) concept, as opposed to a probability of default (PD) concept. Instead of using quantitative models to estimate SR, Basel III proposes an indicator-based approach that includes the size of banks, their interconnectedness, and other quantitative and qualitative aspects of systemic importance. There is not much literature on the problem of dynamically re-shaping network topology so that networks adapt over time to function optimally in terms of stability and efficiency. 
A major problem in NW-based SR management is to provide agents with incentives to re-arrange their local contracts so that global (system-wide) SR is reduced. Recently, it has been noted empirically that individual transactions in the interbank market alter the SR in the total financial system in a measurable way [@Poledna:2014aa; @Poledna:2015aa]. This allows an estimation of the marginal SR associated with financial transactions, a fact that has been used to propose a tax on systemically relevant transactions [@Poledna:2014aa]. It was demonstrated with an agent-based model (ABM) that such a tax – the [*systemic risk tax*]{} (SRT) – is leading to a dynamical re-structuring of financial networks, so that overall SR is substantially reduced [@Poledna:2014aa]. In this paper we study and compare the consequences of two different options for the regulation of SR with an ABM. As an example for the first option we study Basel III with capital surcharges for G-SIBs and compare it with an example for the second option – the SRT that leads to a self-organized re-structuring of financial networks. A number of ABMs have been used recently to study interactions between the financial system and the real economy, focusing on destabilizing feedback loops between the two sectors [@Delli-Gatti:2009aa; @Battiston:2012ab; @Tedeschi:2012aa; @Porter:2014aa; @Thurner:2013aa; @Poledna:2014aa; @Klimek:2014aa]. We study the different options for the regulation of SR within the framework of the CRISIS macro-financial model[^1]. In this ABM, we implement both the Basel III indicator-based measurement approach, and the increased capital requirements for G-SIBs. We compare both to an implementation of the SRT developed in [@Poledna:2014aa]. We conduct three “computer experiments” with the different regulation schemes. First, we investigate which of the two options to regulate SR is superior. 
Second, we study the effect of increased capital requirements, the “surcharges”, on G-SIBs and the real economy. Third, we clarify to what extent the Basel III indicator-based measurement approach really quantifies SR, as intended by the BCBS. Basel III indicator-based measurement approach and capital surcharges for G-SIBs ================================================================================ Basel III indicator-based measurement approach ---------------------------------------------- The Basel III indicator-based measurement approach consists of five broad categories: size, interconnectedness, lack of readily available substitutes or financial institution infrastructure, global (cross-jurisdictional) activity and “complexity”. As shown in \[indicator\], the measure gives equal weight to each of the five categories. Each category may again contain individual indicators, which are equally weighted within the category. [p[5cm]{} p[7cm]{} p[3cm]{}]{} Category (and weighting) & Individual indicator & Indicator weighting\ Cross-jurisdictional activity (20%) & Cross-jurisdictional claims & 10%\ & Cross-jurisdictional liabilities & 10%\ Size (20%) & Total exposures as defined for use in the Basel III leverage ratio & 20%\ Interconnectedness (20%) & Intra-financial system assets & 6.67%\ & Intra-financial system liabilities & 6.67%\ & Securities outstanding & 6.67%\ Substitutability/financial institution infrastructure (20%) & Assets under custody & 6.67%\ & Pay
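The equal-weighting arithmetic just described can be written down directly: each of the five categories carries 20% of the score, and a category with $n$ indicators assigns each of them $20\%/n$ (hence the 6.67% entries above). The sketch below illustrates only this arithmetic, with invented indicator values assumed to be pre-normalized; it is not the official BCBS methodology, which among other details normalizes each bank's indicators by sample-wide totals across banks.

```python
# Indicator values are assumed already normalized to [0, 1]; the names
# loosely follow the BCBS categories, the numbers are invented.
categories = {
    "cross_jurisdictional_activity": {"claims": 0.6, "liabilities": 0.4},
    "size": {"total_exposures": 0.5},
    "interconnectedness": {"intra_assets": 0.3,
                           "intra_liabilities": 0.2,
                           "securities_outstanding": 0.4},
    "substitutability": {"assets_under_custody": 0.7,
                         "payments_activity": 0.5,
                         "underwriting": 0.3},
    "complexity": {"otc_derivatives": 0.8,
                   "trading_securities": 0.2,
                   "level3_assets": 0.5},
}

def gsib_score(cats):
    """Equal weight (20%) per category, split evenly among its indicators."""
    score = 0.0
    for indicators in cats.values():
        weight = 0.20 / len(indicators)   # e.g. 6.67% for three indicators
        score += weight * sum(indicators.values())
    return score
```

A bank's bucket (and hence its capital surcharge) would then be determined by thresholds on this aggregate score.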
--- abstract: 'In the first part of the paper we introduce some geometric tools needed to describe slow-fast Hamiltonian systems on smooth manifolds. We start with a smooth Poisson bundle $p: M\to B$ of a regular (i.e. of constant rank) Poisson manifold $(M,\omega)$ over a smooth symplectic manifold $(B,\lambda)$, such that the foliation into leaves of the bundle coincides with the symplectic foliation generated by the Poisson structure on $M$. This defines a singular symplectic structure $\Omega_{\varepsilon}= \omega + \varepsilon^{-1}p^*\lambda$ on $M$ for any small positive $\varepsilon$, where $p^*\lambda$ is the lift of the 2-form $\lambda$ to $M$. Given a smooth Hamiltonian $H$ on $M$ one gets a slow-fast Hamiltonian system w.r.t. $\Omega_{\varepsilon}$. We define a slow manifold $SM$ of this system. Assuming $SM$ to be a smooth submanifold, we define a slow Hamiltonian flow on $SM$. The second part of the paper deals with singularities of the restriction of $p$ to $SM$ and their relation to the description of the system near them. It turns out that if $\dim M = 4,$ $\dim B = 2$ and the Hamilton function $H$ is generic, then the behavior of the system near singularities of fold type is described in the principal approximation by the Painlevé-I equation, while if the singular point is a cusp, the related equation is Painlevé-II. This fact was discovered earlier by R. Haberman for particular types of Hamiltonian systems with one and a half degrees of freedom.' author: - | L.M. Lerman, E.I. Yakovlev\ Lobachevsky State University of Nizhny Novgorod, Russia title: | Geometry of slow-fast Hamiltonian systems\ and Painlevé equations --- Introduction ============ Slow-fast Hamiltonian systems are ubiquitous in applications across different fields of science. These applications range from astrophysics, plasma physics and ocean hydrodynamics to molecular dynamics. 
Usually these problems are given in coordinate form and, moreover, in a form where the symplectic structure on the phase space is standard (in Darboux coordinates). But there are cases where this form is nonstandard, or where the symplectic form of the system under study has yet to be found, in particular when we deal with a system on a manifold. Our aim in this paper is to present basic geometric tools for describing slow-fast Hamiltonian systems on manifolds, that is, in a coordinate-free way. For the general case this was done by V.I. Arnold [@Arn]. Recall that a customary slow-fast dynamical system is defined by a system of differential equations $$\label{sf} {\varepsilon}\dot x = f(x,y,{\varepsilon}),\;\dot y = g(x,y,{\varepsilon}),\;(x,y)\in \mathbb R^m\times \mathbb R^n,$$ depending on a small positive parameter ${\varepsilon}$ (its positivity is needed to fix the positive direction of time $t$). In the region of the phase space where $f\ne 0$, the $x$-variables evidently change with speed $\sim 1/{\varepsilon}$, that is, fast; in comparison, the $y$-variables change slowly. Therefore the variables $x$ are called fast and the variables $y$ slow. Two limiting systems are usually associated with such a system, and their properties influence the dynamics of the slow-fast system for small ${\varepsilon}.$ One of the limiting systems, called the fast or layer system, is derived in the following way. One introduces a so-called fast time $\tau = t/{\varepsilon}$; after differentiating with respect to $\tau$, the system gains the parameter ${\varepsilon}$ in the r.h.s. 
of the second equation and loses it in the first equation; that is, the right-hand sides now depend on ${\varepsilon}$ in a regular way $$\label{fs} \frac{d x}{d\tau} = f(x,y,{\varepsilon}),\;\frac{d y}{d\tau} = {\varepsilon}g(x,y,{\varepsilon}),\;(x,y)\in \mathbb R^m\times \mathbb R^n.$$ Setting ${\varepsilon}=0$ we get a system in which the $y$ are constants, $y=y_0$, and can be regarded as parameters for the equations in $x$. These equations are sometimes called the layer equations. Because this system depends on parameters, it may pass through many bifurcations as the parameters change, and this can be useful for finding special motions of the full system for small ${\varepsilon}> 0$. The slow equations are derived as follows. Formally set ${\varepsilon}= 0$ in the system (\[sf\]) and solve the equations $f=0$ with respect to $x$ (where this is possible). The most natural case in which this can be done is when the matrix $f_x$ is invertible at the solution points in some domain where solutions of $f=0$ exist. Denote the corresponding branch of solutions by $x=h(y)$ and insert it into the second equation in place of $x$. One then gets a differential system in the $y$ variables $$\dot y = g(h(y),y,0),$$ which is called the slow system. The idea behind this construction is as follows: if the fast motions are directed towards the slow manifold, then in a small enough neighborhood of this manifold the motion of the full system stays near the manifold and is described, in first approximation, by the slow system. The primary problem for slow-fast systems is now formulated as follows. Suppose we know something about the dynamics of both the slow and the fast systems, for instance some structure in the phase space composed of pieces of fast and slow motions. Can we say anything about the dynamics of the full system, for small positive ${\varepsilon}$, near this structure? 
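The reduction just described can be carried out mechanically on a concrete example. The toy system below is an arbitrary illustrative choice, not one taken from the text; the sketch solves $f=0$ for $x$, checks that $f_x$ is nonzero along the branch, and substitutes $x=h(y)$ into $g$ to obtain the slow system.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# Toy slow-fast system:  eps * x' = f(x, y),   y' = g(x, y)
f = y - x            # fast (layer) equation
g = -x - y**3        # slow equation

# Slow manifold: solve f = 0 for x, giving the branch x = h(y)
h = sp.solve(sp.Eq(f, 0), x)[0]           # here h(y) = y

# f_x must be invertible (nonzero) along the branch for the reduction;
# here f_x = -1 < 0, so the branch is attracting for the fast dynamics
fx_on_branch = sp.diff(f, x).subs(x, h)

# Slow system:  y' = g(h(y), y)
slow_rhs = sp.simplify(g.subs(x, h))      # -y - y**3
```

For this choice the slow system $\dot y = -y - y^3$ has a globally attracting equilibrium at $y=0$; in general each real branch of $f=0$ with invertible $f_x$ yields its own slow system.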
There is a vast literature devoted to the study of these systems; see, for instance, some of the references in [@Gucken]. This set-up can be generalized to the case of manifolds in a coordinate-free manner [@Arn]. Consider a smooth bundle $M\to B$ whose leaf $F$ is a smooth manifold, and assume a vertical vector field $v$ is given on $M$. Vertical means that the vector $v(x)$ is tangent to the leaf $F_b$ for every $x\in M$, where $b = p(x)\in B;$ in other words, every leaf $F_b$ is an invariant submanifold of this vector field. Let $v_{\varepsilon}$ be a smooth unfolding of $v = v_0$. Consider the set of zeroes of the vector field $v$: fixing a leaf $F_b$, the field $v$ generates on this smooth manifold a vector field $v^b$, and we consider its zeroes (the equilibria of this vector field). If the linearization operator of $v^b$ (along the leaf) at a zero $x$, namely the linear operator $Dv^b_x: T_xF_b \to T_xF_b$ on the invariant linear subspace $V_x = T_xF_b$ of $T_xM$, has no zero eigenvalues, then the set of zeroes continues smoothly in $b$ for $b$ close to $b = p(x)$. This is a consequence of the implicit function theorem. In this case one gets a local section $z: B \to M,$ $p\circ z(b) = b$, whose image is a smooth submanifold $Z$ of dimension $\dim B.$ One can define a vector field on $Z$ in the following way. Represent the vector $v_{\varepsilon}(x)$ in the unique way as $v_{\varepsilon}(x)= v^1_{\varepsilon}(x) \oplus v^2_{\varepsilon}(x)$, a sum of two vectors of which $v^1_{\varepsilon}(x)$ belongs to $V_x$ and $v^2_{\varepsilon}(x)$ lies in $T_xZ$. Then the vector $v^2_{\varepsilon}(x)$ is of order ${\varepsilon}$ in norm, since $v_{\varepsilon}$ depends smoothly on ${\varepsilon}$ and $v^2_0(x)$ is the zero vector. Following Arnold [@Arn], the vector field on $Z$ given by $(d/d{\varepsilon})(v^2_{\varepsilon})$ at ${\varepsilon}= 0$ is called the slow vector field; in coordinate form it yields exactly what was written above. 
It is worth mentioning that one may call the [*slow manifold*]{} the whole set in $M$ that is the zero set of the vertical field (at ${\varepsilon}=0$). Generically, this set is a smooth submanifold of $M$, but it can be tangent to the leaves $F_b$ at some of its points. In this case it is also sometimes possible to define a vector field on $Z$ that can be called a slow vector field, but this is a more complicated problem, intimately related to the degenerations of the projection $p$ at these points (the rank of $Dp$ there, etc.) [@Arn]. Hamiltonian slow-fast systems ============================= Now we turn to Hamiltonian vector fields. It is well known that, in order to define a Hamiltonian vector field in an invariant way, the phase manifold $M$ has to be symplectic: a smooth nondegenerate
--- abstract: 'Relying on work of Kashiwara-Schapira and Schmid-Vilonen, we describe the behaviour of characteristic cycles with respect to the operation of geometric induction, the geometric counterpart of taking parabolic or cohomological induction in representation theory. By doing this, we are able to describe the characteristic cycle associated to an induced representation in terms of the characteristic cycle of the representation being induced. As a consequence, we prove that the cohomology packets defined by Adams and Johnson in [@Adams-Johnson] are micro-packets, that is to say that the cohomological constructions of [@Adams-Johnson] are particular cases of the sheaf-theoretic ones in [@ABV]. It is important to mention that the equality between the packets defined in [@Adams-Johnson] and the ones in [@ABV] is known to experts, but to my knowledge no proof of it can be found in the literature.' author: - Nicolás Arancibia Robert bibliography: - 'reference.bib' title: 'Characteristic cycles, micro local packets and packets with cohomology' --- Introduction ============ Let $G$ be a connected reductive algebraic group defined over a number field $F$. In [@Arthur84] and [@Arthur89], Arthur gives a conjectural description of the discrete spectrum of $G$ by introducing at each place $v$ of $F$ a set of parameters $\Psi_v(G)$, which should parameterize all the unitary representations of $G(F_v)$ that are of interest for global applications. More precisely, Arthur conjectured that attached to every parameter $\psi_v\in \Psi_v(G)$ we should have a finite set $\Pi_{\psi_v}(G(F_v))$, called an $A$-packet, of irreducible representations of $G(F_v)$, uniquely characterized by the following properties: - $\Pi_{\psi_v}(G(F_v))$ consists of unitary representations. - The parameter $\psi_v$ corresponds to a unique $L$-parameter $\varphi_{\psi_v}$ and $\Pi_{\psi_v}(G(F_v))$ contains the $L$-packet associated to $\varphi_{\psi_v}$. 
- $\Pi_{\psi_v}(G(F_v))$ is the support of a stable virtual character distribution on $G(F_v)$. - $\Pi_{\psi_v}(G(F_v))$ verifies the ordinary and twisted spectral transfer identities predicted by the theory of endoscopy. Furthermore, any representation occurring in the discrete spectrum of square integrable automorphic representations of $G$ should be a restricted product over all places of representations in the corresponding $A$-packets. In the case when $G$ is a real reductive algebraic group, Adams, Barbasch and Vogan proposed in [@ABV] a candidate for an $A$-packet, proving in the process all of the predicted properties with the exception of the twisted endoscopic identity and unitarity. The packets in [@ABV], which we call micro-packets or ABV-packets, are defined by means of sophisticated geometrical methods. As explained in the introduction of [@ABV], the inspiration behind their construction comes from the combination of ideas of Langlands and Shelstad (concerning dual groups and endoscopy) with those of Kazhdan and Lusztig (concerning the fine structure of irreducible representations), to describe the representations of $G(\mathbb{R})$ in terms of an appropriate geometry on an $L$-group. The geometric methods are remarkable, but they have the constraint of being extremely difficult to calculate in practice. Apart from some exceptions, such as ABV-packets attached to tempered Arthur parameters (see Section 7.1 below) or to principal unipotent Arthur parameters (see Chapter 7 of [@ABV] and Section 7.2 below), we cannot identify the members of an ABV-packet in any known classification (in the Langlands classification, for example). The difficulty comes from the central role played by characteristic cycles in their construction. These cycles are geometric invariants that can be understood as a way to measure how far a constructible sheaf is from a local system. 
In the present article, relying on work of Kashiwara-Schapira and Schmid-Vilonen, we describe the behaviour of characteristic cycles with respect to the operation of geometric induction, the geometric counterpart of taking parabolic or cohomological induction in representation theory. By doing this, we are able to describe the characteristic cycle associated to an induced representation in terms of the characteristic cycle of the representation being induced (see Proposition \[prop:ccLG\] below). Before continuing with a more detailed description of the behaviour of characteristic cycles under induction, let us mention some of its consequences. As a first application we have the proof that the cohomology packets defined by Adams and Johnson in [@Adams-Johnson] are micro-packets. In more detail, Adams and Johnson proposed in [@Adams-Johnson] a candidate for an $A$-packet by attaching to any member of a particular family of Arthur parameters (see points (AJ1), (AJ2) and (AJ3) in Section 7.3) a packet consisting of representations cohomologically induced from unitary characters. Now, from the behaviour of characteristic cycles under induction, the description of the ABV-packets corresponding to any Arthur parameter in the family studied in [@Adams-Johnson] reduces to the description of ABV-packets corresponding to essentially unipotent Arthur parameters (see Section 7.2), and from this reduction we prove in Theorem \[theo:ABV-AJ\] that the cohomological constructions of [@Adams-Johnson] are particular cases of the ones in [@ABV]. It is important to point out that the equality between Adams-Johnson and ABV-packets is known to experts, but to my knowledge no proof of it can be found in the literature. 
Let us also say that from this equality, and the proof in [@AMR] that for classical groups the packets defined in [@Adams-Johnson] are $A$-packets ([@Arthur]), we conclude that in the framework of [@Adams-Johnson] and for classical groups the three constructions of $A$-packets coincide. As a second application we can mention that an important step in the proof that for classical groups the $A$-packets introduced in [@Arthur] are ABV-packets (work in progress with Paul Mezo) is the description of the ABV-packets for the general linear group. The understanding of the behaviour of characteristic cycles under induction will prove important in the proof that for the general linear group ABV-packets are Langlands packets, that is, they consist of a single representation. Let us now give a quick overview of how geometric induction affects characteristic cycles. We begin by introducing the geometric induction functor. Suppose $G$ is a connected reductive complex algebraic group defined over $\mathbb{R}$ with Lie algebra $\mathfrak{g}$. Denote by $K$ the complexification in $G$ of some maximal compact subgroup of $G(\mathbb{R})$. Write $X_G$ for the flag variety of $G$, and suppose $Q$ is a parabolic subgroup of $G$ with Levi decomposition $Q=LN$. Consider the fibration $X_G\rightarrow G/Q$. Its fiber over $Q$ can be identified with the flag variety $X_L$ of $L$. We denote the inclusion of that fiber in $X_G$ by $$\begin{aligned} \label{eq:mapvarieties} \iota:X_L&\longrightarrow X_G.\end{aligned}$$ Let $D_{c}^{b}(X_G,K)$ be the $K$-equivariant bounded derived category of sheaves of complex vector spaces on $X_G$ having cohomology sheaves constructible with respect to an algebraic stratification of $X_G$. Living inside this category we have the subcategory $\mathcal{P}(X_G,K)$ of $K$-equivariant perverse sheaves on $X_G$. Set $\mathcal{D}_{X_G}$ to be the sheaf of algebraic differential operators on $X_G$. 
The Riemann-Hilbert correspondence (see Theorem 7.2.1[@Hotta] and Theorem 7.2.5 [@Hotta]) defines an equivalence of categories between $\mathcal{P}(X_G,K)$ and the category $\mathcal{D}(X_G,K)$ of $K$-equivariant $\mathcal{D}_{X_{G}}$-modules on $X_G$. Now, write $\mathcal{M}(\mathfrak{g},K)$ for the category of $(\mathfrak{g}, K)$-modules of $G$, and $\mathcal{M}(\mathfrak{g},K, I_{X_G})$ for the subcategory of $(\mathfrak{g}, K)$-modules of $G$ annihilated by the kernel $I_{X_G}$ of the operator representation (see equations (\[eq:operatorrepresentation\]) and (\[eq:keroperatorrepresentation\]) below). The categories $\mathcal{M}(\mathfrak{g},K, I_{X_G})$ and $\mathcal{D}(X_G,K)$ are identified through the Beilinson-Bernstein correspondence ([@BB]), and composing this functor with the Riemann-Hilbert correspondence we obtain the equivalence of categories: $$\begin{aligned} \label{eq:rhbb} \Phi_{X_G}:\mathcal{M}(\mathfrak{g},K,I_{X_G}) \xrightarrow{\sim} \mathcal{P}(X_G,K)\end{aligned}$$ and consequently a bijection between
--- abstract: 'Let $G$ be a group hyperbolic relative to a finite collection of subgroups $\mc P$. Let $\mathcal F$ be the family of subgroups consisting of all the conjugates of subgroups in $\mc P$, all their subgroups, and all finite subgroups. Then there is a cocompact model for $E_{\mc F} G$. This result was known in the torsion-free case. In the presence of torsion, a new approach was necessary. Our method is to exploit the notion of dismantlability. A number of sample applications are discussed.' address: - 'Memorial University, St. John’s, Newfoundland, Canada A1C 5S7 ' - 'McGill University, Montreal, Quebec, Canada H3A 0B9' author: - 'Eduardo Martinez-Pedroza' - Piotr Przytycki title: Dismantlable classifying space for the family of parabolic subgroups of a relatively hyperbolic group --- Introduction ============ Let $G$ be a finitely generated group hyperbolic relative to a finite collection $\mc P=\{P_\lambda\}_{\lambda\in \Lambda}$ of its subgroups (for a definition see Section \[sec:Rips\]). Let $\mathcal F$ be the collection of all the conjugates of $P_\lambda$ for $\lambda\in\Lambda$, all their subgroups, and all finite subgroups of $G$. *A model for $E_{\mc F}G$* is a $G$-complex $X$ such that all point stabilisers belong to $\mc F$, and for every $H\in \mc F$ the fixed-point set $X^H$ is a (nonempty) contractible subcomplex of $X$. A model for $E_{\mc F}G$ is also called the *classifying space for the family $\mc F$*. In this article we describe a particular classifying space for the family $\mc F$. It admits the following simple description. Let $S$ be a finite generating set of $G$. Let $V=G$ and let $W$ denote the set of cosets $gP_\lambda$ for $g\in G$ and $\lambda\in \Lambda$. We consider the elements of $W$ as subsets of the vertex set of the Cayley graph of $G$ with respect to $S$. Then $|\cdot , \cdot|_S$, which denotes the distance in the Cayley graph, is defined on $V\cup W$. 
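The bookkeeping behind this convention (group elements as ordinary vertices, cosets as vertex subsets, with all distances measured in the Cayley graph) can be illustrated on a toy example. The group below is just $\mathbb{Z}/12$ with $S=\{\pm 1\}$ and the subgroup $P=\{0,4,8\}$; this is of course not a relatively hyperbolic situation, merely a minimal sketch of how $|\cdot,\cdot|_S$ extends to $V\cup W$.

```python
from collections import deque

n, S = 12, (1, -1)        # G = Z/12 with generating set S = {+1, -1}
P = {0, 4, 8}             # the subgroup 4Z/12, viewed as a vertex subset

def dist(a, b):
    """BFS distance between two group elements in the Cayley graph."""
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        v, d = queue.popleft()
        if v == b:
            return d
        for s in S:
            w = (v + s) % n
            if w not in seen:
                seen.add(w)
                queue.append((w, d + 1))

def dist_to_coset(g, coset):
    """Distance from an element to a coset: min over the coset's elements."""
    return min(dist(g, h) for h in coset)
```

For instance `dist_to_coset(5, P)` returns 1, since 5 is one generator step away from $4\in P$; the distance between two cosets would similarly be the minimum over pairs of representatives.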
The *$n$-Rips graph* $\Gamma_n$ is the graph with vertex set $V\cup W$ and edges between $u,u'\in V\cup W$ whenever $|u,u'|_S\leq n$. The *$n$-Rips complex* $\Gamma_n^\blacktriangle$ is obtained from $\Gamma_n$ by spanning simplices on all cliques. It is easy to prove that $\Gamma_n$ is a fine $\delta$-hyperbolic connected graph (see Section \[sec:Rips\]). Our main result is the following. \[thm:MMain\] For $n$ sufficiently large, the $n$-Rips complex $\Gamma_n^\blacktriangle$ is a cocompact model for $E_{\mc F}G$. Theorem \[thm:MMain\] was known to hold if - $G$ is a torsion-free hyperbolic group and $\mc P=\emptyset$, since in that case the $n$-Rips complex $\Gamma_n^\blacktriangle$ is contractible for $n$ sufficiently large [@ABC91 Theorem 4.11]. - $G$ is a hyperbolic group and $\mc P=\emptyset$, hence $\mathcal F$ is the family of all finite subgroups, since in that case $\Gamma_n^\blacktriangle$ is $\underline{E}(G)$ [@MS02 Theorem 1], see also [@HOP14 Theorem 1.5] and [@La13 Theorem 1.4]. - $G$ is a torsion-free relatively hyperbolic group; with different definitions of the $n$-Rips complex, the result follows from the work of Dahmani [@Da03-2 Theorem 6.2], or of Mineyev and Yaman [@MiYa07 Theorem 19]. In the presence of torsion, a new approach was necessary. Our method is to exploit the notion of *dismantlability*. Dismantlability, a property of a graph guaranteeing strong fixed-point properties (see [@Po93]), was brought to geometric group theory by Chepoi and Osajda [@ChOs15]. Dismantlability was observed for hyperbolic groups in [@HOP14], following the usual proof of the contractibility of the Rips complex [@BrHa99 Prop III.$\Gamma$ 3.23]. While we discuss the $n$-Rips complex only for finitely generated relatively hyperbolic groups, Theorem \[thm:MMain\] has the following extension. \[cor:infinite\] If $G$ is an infinitely generated group hyperbolic relative to a finite collection $\mc P$, then there is a cocompact model for $E_{\mc F}G$. 
By [@Os06 Theorem 2.44], there is a finitely generated subgroup $G'\leq G$ such that $G$ is isomorphic to $G'$ amalgamated with all $P_\lambda$ along $P'_\lambda=P_\lambda \cap G'$. Moreover, $G'$ is hyperbolic relative to $\{P_\lambda'\}_{\lambda\in \Lambda}$. Let $S$ be a finite generating set of $G'$. While $S$ does not generate $G$, we can still use it in the construction of $X=\Gamma^\blacktriangle_n$. More explicitly, if $X'$ is the $n$-Rips complex for $S$ and $G'$, then $X$ is a tree of copies of $X'$ amalgamated along vertices in $W$. Let $\mathcal{F}'$ be the collection of all the conjugates of $P'_\lambda$, all their subgroups, and all finite subgroups of $G'$. By Theorem \[thm:MMain\], we have that $X'$ is a cocompact model for $E_{\mc F'}G'$, and it is easy to deduce that $X$ is a cocompact model for $E_{\mc F}G$. Applications {#applications .unnumbered} ------------ On our way towards Theorem \[thm:MMain\] we will establish the following; for the proof see Section \[sec:Rips\]. We learned from François Dahmani that this corollary can also be obtained from one of Bowditch’s approaches to relative hyperbolicity. \[sec:subgroups\] There is a finite collection of finite subgroups $\{F_1,\ldots, F_k\}$ such that any finite subgroup of $G$ is conjugate to a subgroup of some $P_\lambda$ or some $F_i$. Note that by [@Os06 Theorem 2.44], Corollary \[sec:subgroups\] holds also if $G$ is infinitely generated, which we allow in the remaining part of the introduction. Our next application concerns the cohomological dimension of relatively hyperbolic groups in the framework of Bredon modules. Given a group $G$ and a nonempty family $\mc F$ of subgroups closed under conjugation and taking subgroups, the theory of (right) modules over the orbit category $\mc{O_F}(G)$ was established by Bredon [@Br67], tom Dieck [@tD87] and L[ü]{}ck [@Lu89]. In the case where $\mc F$ is the trivial family, the $\Or$-modules are $\ZG$-modules. 
The notions of cohomological dimension $\cdF(G)$ and finiteness properties $FP_{n, \mc F}$ for the pair $(G, \mc F)$ are defined analogously to their counterparts $\cd(G)$ and $FP_n$. The geometric dimension $\gdF(G)$ is defined as the smallest dimension of a model for $E_{\mc F}G$. A theorem of Lück and Meintrup [@LuMe00 Theorem 0.1] shows that $$\cdF(G) \leq \gdF(G) \leq \max\{3, \cdF(G)\}.$$ Together with Theorem \[thm:MMain\], this yields the following. Here as before $\mathcal F$ is the collection of all the conjugates of $\{P_\lambda\}$, all their subgroups, and all finite subgroups of $G$. Let $G$ be relatively hyperbolic. Then $\cdF(G)$ is finite. The *homological Dehn function* $\operatorname{\mathsf{FV}}_X(k)$ of a simply-connected cell complex $X$ measures the difficulty of filling cellular $1$-cycles with $2$-chains. For a finitely presented group $G$ and $X$ a model for $EG$ with $G$-cocompact $2$-skeleton, the growth rate of $\operatorname{\mathsf{FV}}_{G}(k):=\operatorname{\mathsf{FV}}_X(k)$ is a group invariant [@Fl98 Theorem 2.1]. The function $\operatorname{\mathsf{FV}}_G(k)$ can also be defined from algebraic considerations under the weaker assumption that $G$ is $FP_2$, see [@HaMa15 Section 3]. Analogously, for a group $G$ and a family of subgroups $\mc F$ with a cocompact
--- author: - 'Hisashi Johno[^1]' - 'Masahide Saito[^2]' - Hiroshi Onishi title: 'Prediction-based compensation for gate on/off latency during respiratory-gated radiotherapy[^3]' --- [^1]: Department of Mathematical Sciences, University of Yamanashi (). [^2]: Department of Radiology, University of Yamanashi (, ). [^3]: Accepted by *Computational and Mathematical Methods in Medicine* on October 8, 2018.
--- abstract: 'We present the results of 45 transit observations obtained for the transiting exoplanet HAT-P-32b. The transits have been observed using several telescopes mainly throughout the YETI network. In 25 cases, complete transit light curves with a timing precision better than $1.4\:$min have been obtained. These light curves have been used to refine the system properties, namely inclination $i$, planet-to-star radius ratio $R_\textrm{p}/R_\textrm{s}$, and the ratio between the semimajor axis and the stellar radius $a/R_\textrm{s}$. First analyses by @Hartman2011 suggest the existence of a second planet in the system, thus we tried to find an additional body using the transit timing variation (TTV) technique. Taking also literature data points into account, we can explain all mid-transit times by refining the linear ephemeris by [$21\:$ms]{}. Thus we can exclude TTV amplitudes of more than [$\sim1.5\:$min]{}.' author: - | M. Seeliger,$^{1}$[^1] D. Dimitrov,$^{2}$ D. Kjurkchieva,$^{3}$ M. Mallonn,$^{4}$ M. Fernandez,$^{5}$ M. Kitze,$^{1}$ V. Casanova,$^{5}$ G. Maciejewski,$^{6}$ J. M. Ohlert,$^{7,8}$ J. G. Schmidt,$^{1}$ A. Pannicke,$^{1}$ D. Puchalski,$^{6}$ E. Göğüş,$^{9}$ T. Güver,$^{10}$ S. Bilir,$^{10}$ T. Ak,$^{10}$ M. M. Hohle,$^{1}$ T. O. B. Schmidt,$^{1}$ R. Errmann,$^{1,11}$ E. Jensen,$^{12}$ D. Cohen,$^{12}$ L. Marschall,$^{13}$ G. Saral,$^{14,15}$ I. Bernt,$^{4}$ E. Derman,$^{15}$ C. Ga[ł]{}an,$^{6}$ and R. Neuhäuser$^{1}$\ $^{1}~$ Astrophysical Institute and University Observatory Jena, Schillergaesschen 2-3, 07745 Jena, Germany\ $^{2}~$ Institute of Astronomy and NAO, Bulg. Acad. Sc., 72 Tsarigradsko Chaussee Blvd., 1784 Sofia, Bulgaria\ $^{3}~$ Shumen University, 115 Universitetska str., 9700 Shumen, Bulgaria\ $^{4}~$ Leibnitz Institut für Astrophysik Potsdam, An der Sternwarte 16, 14482 Potsdam, Germany\ $^{5}~$ Instituto de Astrofisica de Andalucia, CSIC, Apdo. 
3004, 18080 Granada, Spain\ $^{6}~$ Centre for Astronomy, Faculty of Physics, Astronomy and Informatics, N. Copernicus University, Grudziadzka 5, 87-100 Toruń, Poland\ $^{7}~$ Astronomie Stiftung Trebur, Michael Adrian Observatorium, Fichtenstraße 7, 65468 Trebur, Germany\ $^{8}~$ University of Applied Sciences, Technische Hochschule Mittelhessen, Friedberg, Germany\ $^{9}~$ Sabanci University, Orhanli-Tuzla 34956, İstanbul, Turkey\ $^{10}$ Istanbul University, Faculty of Sciences, Department of Astronomy and Space Sciences, 34119 University, Istanbul, Turkey\ $^{11}$ Abbe Center of Photonics, Friedrich Schiller Universität, Max-Wien-Platz 1, 07743 Jena, Germany\ $^{12}$ Dept. of Physics and Astronomy, Swarthmore College, Swarthmore, PA 19081-1390, USA\ $^{13}$ Gettysburg College Observatory, Department of Physics, 300 North Washington St., Gettysburg, PA 17325, USA\ $^{14}$ Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA\ $^{15}$ Ankara University, Astronomy and Space Sciences Department, 06100 Tandoǧan, Ankara, Turkey title: 'Transit Timing Analysis in the HAT-P-32 system' --- \[firstpage\] stars: individual: HAT-P-32 – planets and satellites: individual: HAT-P-32b – planetary systems Introduction {#sec:Introduction} ============ Since the first results of the [*Kepler*]{} mission were published, the number of known planet candidates has grown tremendously. Most [*hot Jupiters*]{} have been found in single planetary systems, and it was believed that these kinds of giant, close-in planets are not accompanied by other planets [see e.g. @Steffen2012]. This result was obtained by analysing 63 [*Kepler*]{} hot Jupiter candidates and is in good agreement with inward migration theories of massive outer planets, and with planet–planet scattering, which could explain the lack of additional close planets in hot Jupiter systems. Nonetheless, wide companions to hot Jupiters have been found, as shown e.g. 
in @Bakos2009 for the HAT-P-13 system. One has to state, though, that the formation of hot Jupiters is not yet fully understood (see @Steffen2012 and references therein for some formation scenarios, and e.g. @Lloyd2013 for possible tests). Recently @Szabo2013 reanalysed a larger sample of 159 [*Kepler*]{} candidates and in some cases found dynamically induced [*Transit Timing Variations (TTVs)*]{}. If the existence of additional planets in hot Jupiter systems can be confirmed, planet formation and migration theories can be constrained. Since, according to @Szabo2013, only a small fraction of hot Jupiters is believed to be part of a multiplanetary system, it is important to analyse those systems where an additional body is expected. In contrast to e.g. the [*Kepler*]{} mission, where a fixed field on the sky is monitored over a long time span, our ongoing study of TTVs in exoplanetary systems only performs follow-up observations of specific promising transiting planets where additional bodies are suspected. The targets are selected by the following criteria: - The orbital solution of the known transiting planet shows non-zero eccentricity (though the circularization time-scale is much shorter than the system age) and/or deviant radial velocity (RV) data points – both indicating a perturber. - The brightness of the host star is $V\leq13\:$mag to ensure sufficient photometric and timing precision at 1-2m telescopes. - The target location on the sky is visible from the Northern hemisphere. - The transit depth is at least 10 mmag to ensure a significant detection at medium-sized, ground-based telescopes. - The target has not been studied intensively for TTV signals before. Our observations make use of the YETI network [Young Exoplanet Transit Initiative; @YETI], a worldwide network of small to medium sized telescopes, mostly on the Northern hemisphere, dedicated to exploring transiting planets in young open clusters. 
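At its core, a TTV search of the kind described here amounts to fitting a linear ephemeris $T_c(E) = T_0 + P\cdot E$ to the measured mid-transit times and inspecting the residuals (the O$-$C diagram) for a coherent signal. A minimal sketch of that fit follows; the epochs and timing offsets are invented placeholders, not HAT-P-32 measurements.

```python
import numpy as np

# Hypothetical mid-transit times (BJD) at known epochs; real values would
# come from the individual light-curve fits described in the text.
epochs = np.array([0., 47., 95., 140., 188., 233.])
offsets = np.array([3., -5., 2., 6., -4., -1.]) * 1e-4  # fake timing noise (days)
t_mid = 2455867.402 + 2.150008 * epochs + offsets

# Ordinary linear least squares for T_c(E) = T0 + P * E
A = np.vstack([np.ones_like(epochs), epochs]).T
(T0, P), *_ = np.linalg.lstsq(A, t_mid, rcond=None)

# O - C residuals in days; a periodic pattern here would indicate TTVs,
# while flat scatter supports a refined linear ephemeris
o_minus_c = t_mid - (T0 + P * epochs)
```

In practice each timing point is weighted by its uncertainty (a weighted design matrix or `scipy.optimize.curve_fit` handles this), and the amplitude of any coherent O$-$C signal is compared against the timing precision of the individual transits.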
This way we can observe consecutive transits, which are needed to enhance the possibility of modelling TTVs as described in [@Szabo2013] and [@PTmet]. Furthermore, we are able to obtain simultaneous transit observations to expose hidden systematics in the transit light curves, like time synchronization errors or flat-fielding errors. In the past, the transiting exoplanets WASP-12b [@Maciejewski2011a; @Maciejewski2013a], WASP-3b [@Maciejewski2010; @Maciejewski2013b], WASP-10b [@Maciejewski2011b; @Maciejewski2011c], WASP-14b [@Raetz2012] and TrES-2 (Raetz et al. 2014, submitted) have been studied by our group in detail. In most cases, except for WASP-12b, no TTVs could be confirmed. Recently, @vonEssen2013 also claimed to have found possible TTV signals around Qatar-1. However, all possible variations should be treated with reasonable care. In this project we monitor the transiting exoplanet HAT-P-32b. The G0V type [@Pickles2010] host star HAT-P-32 was found by @Hartman2011 to harbour a transiting exoplanet with a period of $P=2.15\:$d. With a host star brightness of $V=11.3\:$mag and a planetary transit depth of $21\:$mmag, the sensitivity of medium-sized telescopes is sufficient to achieve high timing precision; it is therefore an optimal target for the YETI telescopes. The RV signal of HAT-P-32 is dominated by a high jitter of $>60\:$m s$^{-1}$. @Hartman2011 claim that ’a possible cause of the jitter is the presence of one or more additional planets’. @Knutson
--- abstract: 'We present exact analytical solutions for the zero-energy modes of two-dimensional massless Dirac fermions fully confined within a smooth one-dimensional potential $V(x)=-\alpha/\cosh(\beta{}x)$, which provides a good fit for potential profiles of existing top-gated graphene structures. We show that there is a threshold value of the characteristic potential strength $\alpha/\beta$ for which the first mode appears, in striking contrast to the non-relativistic case. A simple relationship between the characteristic strength and the number of modes within the potential is found. An experimental setup is proposed for the observation of these modes. The proposed geometry could be utilized in future graphene-based devices with high on/off current ratios.' author: - 'R. R. Hartmann' - 'N. J. Robinson' - 'M. E. Portnoi' date: 21 June 2010 title: Smooth electron waveguides in graphene --- Introduction ============ Klein proposed that relativistic particles do not experience exponential damping within a barrier like their non-relativistic counterparts, and that as the barrier height tends towards infinity, the transmission coefficient approaches unity.[@Klein] This inherent property of relativistic particles makes confinement non-trivial. Carriers within graphene behave as two-dimensional (2D) massless Dirac fermions, exhibiting relativistic behavior at sub-light speed[@DiracFermions; @CastroNetoReview] owing to their linear dispersion, which leads to many optical analogies. [@Lens; @LevitovParabolic; @ZBZcombined; @Beenakker_PRL_102_2009; @Chen_APL_94_2009] Klein tunneling through p-n junction structures in graphene has been studied both theoretically [@LevitovParabolic; @KleinCombined; @Cheianov_Falko_PRB(R)_74_2006; @Peeters_PRB_74_2006; @Chaplik_JETP_84_2006; @Peeters_APL_90_2007; @Fogler_PRL_100_2008; @Fogler_PRB_77_2008; @BeenakkerRMP08; @ChineseShape] and experimentally. 
[@Transport; @PN; @TopGateCombined; @Kim_PRL_99_2007; @Savchenko_NanoLett_8_2008; @Liu_APL_92_2008; @GG_PRL_102_2009; @Kim_NatPhys_5_2009] Quasi-bound states were considered in order to study resonant tunneling through various sharply terminated barriers. [@LevitovParabolic; @Peeters_PRB_74_2006; @Chaplik_JETP_84_2006; @Peeters_APL_90_2007; @ChineseShape] We propose to change the geometry of the problem in order to study the propagation of fully confined modes along a smooth electrostatic potential, much like photons moving along an optical fiber. So far quasi-one-dimensional channels have been achieved within graphene nanoribbons, [@CastroNetoReview; @Nanoribbons; @RibbonTheoryCombined; @Efetov_PRL_98_2007; @Peres_JPhysCondMat_21_2009] however, controlling their transport properties requires precise tailoring of edge termination,[@RibbonTheoryCombined] currently unachievable. In this paper we claim that truly bound modes can be created within bulk graphene by top gated structures, [@Kim_PRL_99_2007; @Savchenko_NanoLett_8_2008; @Liu_APL_92_2008; @GG_PRL_102_2009; @Kim_NatPhys_5_2009] such as the one shown in Fig. \[fig:Cosh\_1/2\](a). In an ideal graphene sheet at half-filling, the Fermi level is at the Dirac point and the density of states for a linear 2D dispersion vanishes. In realistic graphene devices the Fermi level can be set using the back gate. This is key to the realization of truly bound modes within a graphene waveguide, as zero-energy modes cannot escape into the bulk as there are no states to tunnel into. Moreover the electrostatic confinement isolates carriers from the sample edges, which are considered as a major source of intervalley scattering in clean graphene.[@SavchenkoSSC09] ![(a) A schematic diagram of a Gedankenexperiment for the observation of localized modes in graphene waveguides, created by the top gate (V$_{\textrm{\scriptsize{TG}}}$). 
The Fermi level is set using the back gate (V$_{\textrm{\scriptsize{BG}}}$) to be at the Dirac point ($\varepsilon_{\textrm{\scriptsize{F}}}=0$). (b) The electrostatic potential created by the applied top gate voltage. The plane shows the Fermi level position at $\varepsilon_{\textrm{\scriptsize{F}}}=0$.[]{data-label="fig:Cosh_1/2"}](fig1a "fig:"){width="7.5cm"} ![(a) A schematic diagram of a Gedankenexperiment for the observation of localized modes in graphene waveguides, created by the top gate (V$_{\textrm{\scriptsize{TG}}}$). The Fermi level is set using the back gate (V$_{\textrm{\scriptsize{BG}}}$) to be at the Dirac point ($\varepsilon_{\textrm{\scriptsize{F}}}=0$). (b) The electrostatic potential created by the applied top gate voltage. The plane shows the Fermi level position at $\varepsilon_{\textrm{\scriptsize{F}}}=0$.[]{data-label="fig:Cosh_1/2"}](fig1b "fig:"){width="7.5cm"} In this paper we obtain an exact analytical solution for bound modes within a smooth electrostatic potential in pristine graphene at half-filling, count the number of modes and calculate the conductance of the channel. The conductance carried by each of these modes is comparable to the minimal conductivity of a realistic disordered graphene system. [@DiracFermions; @Min.; @Con1; @MinConTheoryCombined; @DasSarma_PNASUSA_104_2007] For the considered model potential we show that there is a threshold potential characteristic strength (the product of the potential strength with its width), for which bound modes appear. Whereas a symmetric quantum well always contains a bound mode for non-relativistic particles, we show that it is not the case for charge carriers in graphene. 
Fully confined modes in a model potential ========================================= The Hamiltonian of graphene for a two-component Dirac wavefunction in the presence of a one-dimensional potential $U(x)$ is $$\hat{H}=v_{\textrm{\scriptsize{F}}}\left(\sigma_{x}\hat{p}_{x}+\sigma_{y}\hat{p}_{y}\right)+U(x), \label{eq:Hamiltonian}$$ where $\sigma_{x,y}$ are the Pauli spin matrices, $\hat{p}_{x}=-i\hbar\frac{\partial}{\partial x}$ and $\hat{p}_{y}=-i\hbar\frac{\partial}{\partial y}$ are the momentum operators in the $x$ and $y$ directions respectively and $v_{\textrm{\scriptsize{F}}}\approx1\times10^{6}$m/s is the Fermi velocity in graphene. In what follows we will consider smooth confining potentials, which do not mix the two non-equivalent valleys. All our results herein can be easily reproduced for the other valley. When Eq. (\[eq:Hamiltonian\]) is applied to a two-component Dirac wavefunction of the form: $$\mbox{e}^{iq_{y}y}\left({\Psi_{A}(x) \atop \Psi_{B}(x)}\right),$$ where $\Psi_{A}(x)$ and $\Psi_{B}(x)$ are the wavefunctions associated with the $A$ and $B$ sublattices of graphene respectively and the free motion in the $y$-direction is characterized by the wavevector $q_{y}$ measured with respect to the Dirac point, the following coupled first-order differential equations are obtained: $$\left(V(x)-\varepsilon\right)\Psi_{A}(x)-i\left(\frac{\mbox{d}}{\mbox{d}x}+q_{y}\right)\Psi_{B}(x)=0,\label{eq:basic1}$$ $$-i\left(\frac{\mbox{d}}{\mbox{d}x}-q_{y}\right)\Psi_{A}(x)+\left(V(x)-\varepsilon\right)\Psi_{B}(x)=0.\label{eq:basic2}$$ Here $V(x)=U(x)/\hbar v_{\textrm{\scriptsize{F}}}$ and energy $\varepsilon$ is measured in units of $\hbar v_{\textrm{\scriptsize{F}}}$. For the treatment of confined modes within a symmetric electron waveguide, $V(x)=V(-x)$, it is convenient to consider symmetric and anti-symmetric modes. One can see from Eqs. 
(\[eq:basic1\]-\[eq:basic2\]) that $\Psi_{A}(x)$ and $\Psi_{B}(x)$ are neither even nor odd, so we transform to symmetrized functions: $$\Psi_{1}=\Psi_{A}(x)-i\Psi_{B}(x),\quad\Psi_{2}=\Psi_{A}(x)+i\Psi_{B}(x).$$ The wavefunctions $\Psi_{1}$ and $\Psi_{2}$ satisfy the following system of coupled first-order differential equations: $$\left[V(x)-\left(\varepsilon-q_{y}\right)\right]\Psi_{1}-\frac{\mbox{d}\Psi_{2}}{\mbox{d}x}=0,\qquad\left[V(x)-\left(\varepsilon+q_{y}\right)\right]\Psi_{2}+\frac{\mbox{d}\Psi_{1}}{\mbox{d}x}=0.$$
--- abstract: 'We present the results of a theoretical study of Current-Phase Relations (CPR) $J_{S}(\varphi )$ in Josephson junctions of SIsFS type, where ’S’ is a bulk superconductor and ’IsF’ is a complex weak link consisting of a superconducting film ’s’, a metallic ferromagnet ’F’ and an insulating barrier ’I’. At temperatures close to critical, $T\lesssim T_{C}$, calculations are performed analytically in the framework of the Ginzburg-Landau equations. At low temperatures a numerical method is developed to solve the Usadel equations in the structure self-consistently. We demonstrate that SIsFS junctions have several distinct regimes of supercurrent transport and we examine spatial distributions of the pair potential across the structure in the different regimes. We study the crossover between these regimes which is caused by shifting the location of the weak link from the tunnel barrier ’I’ to the F layer. We show that strong deviations of the CPR from the sinusoidal shape occur even in the vicinity of $T_{C}$, and these deviations are strongest in the crossover regime. We demonstrate the existence of a temperature-induced crossover between 0 and $\pi$ states in the contact and show that the smoothness of this transition strongly depends on the CPR shape.' author: - 'S. V. Bakurskiy' - 'N. V. Klenov' - 'I. I. Soloviev' - 'M. Yu. Kupriyanov' - 'A. A. Golubov' title: Theory of supercurrent transport in SIsFS Josephson junctions --- Introduction ============ Josephson structures with a ferromagnetic layer have become a very active field of research because of the interplay between superconducting and magnetic order in a ferromagnet, leading to a variety of new effects including the realization of a $\pi $-state with phase difference $\pi $ in the ground state of a junction, as well as long-range Josephson coupling due to the generation of an odd-frequency triplet order parameter [@RevG; @RevB; @RevV].
Further interest in Josephson junctions with a magnetic barrier is due to emerging possibilities of their practical use as elements of a superconducting memory [@Oh]$^{-}$ [@APL], on-chip $\pi$-phase shifters for self-biasing various electronic quantum and classical circuits [@Rogalla]$^{-}$ [@Ustinov], as well as $\varphi$-batteries, structures having in the ground state a phase difference $\varphi _{g}=\varphi $, $(0<|\varphi |<\pi )$, between the superconducting electrodes [@Buzdin; @Koshelev; @Pugach; @Gold1; @Gold2; @Bakurskiy; @Heim; @Linder; @Chan]. In standard experimental implementations SFS Josephson contacts are sandwich-type structures [@ryazanov2001]$^{-}$ [@Ryazanov2006a]. The characteristic voltage $V_{C}=J_{C}R_{N}$ ($J_{C}$ is the critical current of the junction, $R_{N}$ is the resistance in the normal state) of these SFS devices is typically quite low, which limits their practical applications. In SIFS structures [@Kontos]$^{-}$ [@Weides3] containing an additional tunnel barrier I, the $J_{C}R_{N}$ product in a $0$-state is increased [@Ryazanov3]; however, in a $\pi $-state $V_{C}$ is still too small [@Vasenko; @Vasenko1] due to the strong suppression of superconducting correlations in the ferromagnetic layer. Recently, a new SIsFS type of magnetic Josephson junction was realized experimentally [@Ryazanov3; @Larkin; @Vernik; @APL]. This structure represents a series connection of an SIs tunnel junction and an sFS contact. Properties of SIsFS structures are controlled by the thickness $d_{s}$ of the s layer and by the relation between the critical currents $J_{CSIs}$ and $J_{CsFS}$ of their SIs and sFS parts, respectively. If the thickness of the s layer $d_{s}$ is much larger than its coherence length $\xi _{S}$ and $J_{CSIs}\ll J_{CsFS}$, then the characteristic voltage of an SIsFS device is determined by its SIs part and may reach the maximum corresponding to a standard SIS junction.
At the same time, the phase difference $\varphi $ in the ground state of an SIsFS junction is controlled by its sFS part. As a result, both $0 $- and $\pi $-states can be achieved depending on the thickness of the F layer. This opens the possibility to realize controllable $\pi $ junctions having a large $J_{C}R_{N}$ product. At the same time, when placed in an external magnetic field $H_{ext}$, an SIsFS structure behaves as a single junction, since the s layer is typically too thin to screen $H_{ext}$. This provides the possibility to switch $J_{C}$ by an external field. However, a theoretical analysis of SIsFS junctions has not been performed up to now. The purpose of this paper is to develop a microscopic theory providing the dependence of the characteristic voltage on the temperature $T$, the exchange energy $H$ in the ferromagnet, the transport properties of the FS and sF interfaces, and the thicknesses of the s and F layers. Special attention will be given to determining the current-phase relation (CPR) between the supercurrent $J_{S}$ and the phase difference $\varphi $ across the structure. ![ a) Schematic design of SIsFS Josephson junction. b), c) Typical distribution of amplitude $|\Delta (x)|$ and phase difference $\protect\chi (x)$ of pair potential along the structure. []{data-label="design"}](design.eps){width="8.5cm"} Model of SIsFS Josephson device \[Model\] ========================================= We consider the multilayered structure presented in Fig. \[design\]a. It consists of two superconducting electrodes separated by a complex interlayer comprising a tunnel barrier I and intermediate superconducting s and ferromagnetic F films. We assume that the conditions of the dirty limit are fulfilled for all materials in the structure.
In order to simplify the problem, we also assume that all superconducting films are identical and can be described by a single critical temperature $T_{C}$ and coherence length $\xi _{S}.$ Transport properties of both sF and FS interfaces are also assumed to be identical and are characterized by the interface parameters $$\gamma =\frac{\rho _{S}\xi _{S}}{\rho _{F}\xi _{F}},\quad \gamma _{B}=\frac{R_{BF}\mathcal{A}_{B}}{\rho _{F}\xi _{F}}. \label{gammas}$$Here $R_{BF}$ and $\mathcal{A}_{B}$ are the resistance and area of the sF and FS interfaces, $\xi _{S}$ and $\xi _{F}$ are the decay lengths of the S and F materials, while $\rho _{S}$ and $\rho _{F}$ are their resistivities. Under the above conditions the problem of calculating the critical current in the SIsFS structure reduces to the solution of the set of Usadel equations[@Usadel]. For the S layers these equations have the form [@RevG; @RevB; @RevV] $$\frac{\xi _{S}^{2}}{\Omega G_{m}}\frac{d}{dx}\left( G_{m}^{2}\frac{d}{dx}\Phi _{m}\right) -\Phi _{m}=-\Delta _{m},~G_{m}=\frac{\Omega }{\sqrt{\Omega ^{2}+\Phi _{m}\Phi _{m}^{\ast }}}, \label{fiS}$$$$\Delta _{m}\ln \frac{T}{T_{C}}+\frac{T}{T_{C}}\sum_{\omega =-\infty }^{\infty }\left( \frac{\Delta _{m}}{\left\vert \Omega \right\vert }-\frac{\Phi _{m}G_{m}}{\Omega }\right) =0, \label{delta}$$where $m=S$ for $x\leq -d_{s}~$and$~x\geq d_{F};$ $m=s$ in the interval $-d_{s}\leq x\leq 0.$ In the F film $(0\leq x\leq d_{F})$ they are $$\xi _{F}^{2}\frac{d}{dx}\left( G_{F}^{2}\frac{d}{dx}\Phi _{F}\right) -\widetilde{\Omega }\Phi _{F}G_{F}=0. \label{FiF}$$Here $\Omega =T(2n+1)/T_{C}$ are the Matsubara frequencies normalized to $\pi T_{C}$, $\widetilde{\Omega }=\Omega +iH/\pi T_{C},$ $G_{F}=\widetilde{\Omega }/(\widetilde{\Omega }^{2}+\Phi _{F,\omega }\Phi _{F,-\omega }^{\ast })^{1/2},$ $H$ is the exchange energy, $\xi _{S,F}^{2}=(D_{S,F}/2\pi T_{C})$, and $D_{S,F}$ are the diffusion coefficients in the S and F metals, respectively.
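In the bulk uniform limit ($\Phi_{m}=\Delta_{m}$, no gradients), Eq. (\[delta\]) reduces to the standard BCS gap equation, which can be checked numerically. The following bisection solver is only an illustrative sketch (not the paper's self-consistent solver), with all energies normalized to $\pi T_{C}$ as in the text:

```python
import numpy as np

def gap(t, n_max=100_000):
    """Solve Delta*ln(t) + 2t * sum_n [Delta/Omega_n - Delta/sqrt(Omega_n^2
    + Delta^2)] = 0 for Delta (units of pi*Tc) at reduced temperature t = T/Tc."""
    n = np.arange(n_max)
    om = t * (2 * n + 1)               # Matsubara frequencies / (pi*Tc), n >= 0

    def f(d):                          # monotonically increasing in d
        return np.log(t) + 2 * t * np.sum(1 / om - 1 / np.sqrt(om**2 + d**2))

    lo, hi = 1e-8, 2.0                 # f(lo) < 0 < f(hi) for 0 < t < 1
    for _ in range(60):                # bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(mid) > 0 else (mid, hi)
    return 0.5 * (lo + hi)

print(np.pi * gap(0.2))   # ~1.76, recovering the weak-coupling BCS ratio
```

The gap closes as $t\to1$ and saturates at low $t$, as expected for the BCS solution.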
Pair potential $\Delta _{m}$ and the Usadel functions $\Phi _{m}$ and $\Phi _{F}$ in (\[fiS\]) - (\[FiF\]) are also normalized to $\pi T_{C}.$ To write equations (\[fiS\]) - (\[FiF\]), we have chosen the $x$ axis in the directions perpendicular to the SI, FS and sF interfaces and put the origin at sF interface. Equations (\[fiS\]) - (\[
Introduction ============ The understanding, description and control of structures at the nanometer scale is a subject of interest from both the fundamental and applied points of view [@generale; @revue]. From the fundamental point of view, there is a large literature [@noziere; @pimpinelli] concerning the growth of crystals and their shape. Yet, while the description of the equilibrium shape is rather clear, the dynamic description of crystal growth is still not well understood. In particular, we lack a complete understanding of the time scales involved in the relaxation process, and of the mechanisms which irreversibly drive the island to its equilibrium shape. In this work, we study the shape relaxation of two-dimensional islands by boundary diffusion at low temperatures. The islands we will be concerned with typically consist of a few thousand atoms or molecules, corresponding to sizes of a few nanometers. The model we consider is the same as the one studied in [@eur_physB], where two mechanisms of relaxation, depending on temperature, were pointed out: At high temperatures, the classical theory developed by Herring, Mullins and Nichols [@nichols] appears to describe the relaxation process adequately. In particular, it predicts that the relaxation time scales as the number of atoms to the power $2$. However, at low temperatures, the islands spend long times in fully faceted configurations, suggesting that the limiting step of the relaxation in this situation is the nucleation of a new row on a facet. This assumption leads to the correct scaling behavior of the relaxation time with the size of the island, as well as the correct temperature dependence. Yet, it is unclear what drives the island towards equilibrium in this scenario. In this paper we propose a detailed description of this low temperature relaxation mechanism, and identify the event that drives the island towards its equilibrium shape.
Based on our description, we construct a Markov process from which we can estimate the duration of each stage of the relaxation process. Finally, we use our result to determine the relaxation time of the islands and compare with simulation results. The specific model under consideration consists of 2D islands having a perfect triangular crystalline structure. A very simple energy landscape for activated atomic motion was chosen, the aim being to point out the basic mechanisms of relaxation, and not to fit the specific behavior of a particular material. The potential energy $E_p$ of an atom is assumed to be proportional to the number $i$ of neighbors, and the [*kinetic barrier*]{} $E_{act}$ for diffusion is also proportional to the number of [*initial*]{} neighbors before the jump, regardless of the [*final*]{} number of neighbors: $ E_{act}=- E_p = i*E $ where $E$ sets the energy scale ($E=0.1$ eV throughout the paper). Therefore, the probability $p_i$ per unit time that an atom with $i$ neighbors moves is $p_i = \nu_0 \exp[-i*E/k_bT]$, where $\nu_0= 10^{13} s^{-1}$ is the Debye frequency, $k_B$ is the Boltzmann constant and $T$ the absolute temperature. Hence, the average time after which a particle with $i$ neighbors moves is given by: $$\tau_i= \nu_0^{-1} \exp[i*E/k_bT] \label{taui}$$ The complete description of the model and of the simulation algorithm can be found in [@eur_physB], where it was studied using standard Kinetic Monte Carlo simulations. This simple kinetic model has only [*one*]{} parameter, the ratio $E/k_B T$. The temperature was varied from $83 K$ to $500 K$, and the number of atoms in the islands from $90$ up to $20000$. The initial configurations of the islands were elongated (same initial aspect ratio of about 10), and the simulations were stopped when the islands were close to equilibrium, with an aspect ratio of 1.2.
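The rate hierarchy implied by Eq. (\[taui\]) can be made concrete in a few lines. The sketch below shows a single rejection-free (Gillespie-type) step with the paper's parameter values; it is an illustration of the kinetic rule, not the authors' simulation code.

```python
import numpy as np

nu0 = 1e13       # Debye frequency nu_0, s^-1 (value used in the paper)
E   = 0.1        # energy scale E, eV (value used in the paper)
kB  = 8.617e-5   # Boltzmann constant, eV/K

def tau(i, T):
    """Average time before an atom with i neighbors moves, Eq. (taui)."""
    return np.exp(i * E / (kB * T)) / nu0

rng = np.random.default_rng(0)

def kmc_step(neighbors, T):
    """One rejection-free KMC step: pick an atom with probability proportional
    to p_i = nu0*exp(-i*E/kB T) and advance the clock by an exponential wait."""
    rates = nu0 * np.exp(-np.asarray(neighbors) * E / (kB * T))
    total = rates.sum()
    dt = rng.exponential(1.0 / total)              # waiting time of this event
    chosen = rng.choice(len(rates), p=rates / total)
    return chosen, dt

# Each extra neighbor slows an atom down by exp(E/kB T), ~48 at 300 K:
print(tau(2, 300) / tau(1, 300))
```

At 100 K the same factor exceeds $10^{5}$, which is why weakly bound edge atoms dominate the dynamics at low temperature.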
The time required for this to happen was defined as the relaxation time corresponding to that island size and temperature. Concerning the dependence of the relaxation time on the size of the island, two different behaviors depending on temperature were distinguished [@eur_physB]. At high temperature, the relaxation time scaled as the number of atoms to the power $2$, but this exponent decreased when the temperature was lowered. A careful analysis showed that the exponent tends towards $1$ at low temperature. The dependence of the relaxation time on temperature also changes: the activation energy was 0.3 eV at high temperature and 0.4 eV at low temperature. In this context, it is important to define what we call a low temperature: following [@eur_physB], we denote by $L_c$ the average distance between kinks on an infinite facet, and we define the low temperature regime as that in which $L_c \gg L$, where $L$ is the typical size of our island; large facets are then visible on the island.
It was shown that $L_c=\frac{a}{2} \exp(\frac{E}{2k_bT})$ where $a$ is the lattice spacing.\ The behavior of the relaxation time as a function of the temperature and $N$, the number of particles of the island, can be summarized by two equations corresponding to the high and low temperature regimes: $$\begin{aligned} t^{HT}_{relaxation}& \propto & \exp[3E/k_bT] N^2 \; \mbox{for} \; N \gg L_c^2 \label{teqHT} \\ t^{LT}_{relaxation}& \propto & \exp[4E/k_bT] N \; \mbox{for} \; N \ll L_c^2 \label{teqLT}\end{aligned}$$ Replacing the temperature dependent factors by a function of $N_c$, the crossover island size (where $N_c=L_c^2 \propto \exp(E/k_bT)$), these two laws can be expressed as a unique scaling function depending on the rescaled number of particles $N/N_c$: $$t_{relaxation } \propto \left\{ \begin{array}{ll} N_c^{5} \left(\frac{N}{N_c}\right)^2 \; & \mbox{for} \;\frac{N}{N_c} \gg 1 \\ N_c^{5} \frac{N}{N_c} \; & \mbox{for} \;\frac{N}{N_c} \ll 1 \end{array} \right.$$ Thus the relaxation time [@note] is a simple monotonic function of $N/N_c$, and the temperature dependence is contained in $N_c$.\ We will now focus on the precise microscopic description of the limiting step for relaxation in the low temperature regime. Description of the limiting process at low temperature ====================================================== Qualitative description ----------------------- During relaxation at low temperature, islands are mostly in fully faceted configurations. Let us, for instance, consider an island in the simple configuration given by Fig. \[island\]. When $L$ is larger than $l$, the island is not in its equilibrium shape (which should be more or less a regular hexagon). To reach the equilibrium shape, matter has to flow from the “tips” of the island (facets of length $l$ in this case) to the large facets of length $L$.
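The crossover between the two scaling regimes above is easy to evaluate numerically. As a quick illustration (our own check, using the paper's $E=0.1$ eV, $a=1$ lattice units, and an example island of $10^{3}$ atoms):

```python
import numpy as np

E, kB = 0.1, 8.617e-5          # energy scale (eV) and Boltzmann constant (eV/K)

def L_c(T, a=1.0):
    """Average kink separation L_c = (a/2) exp(E / 2 kB T), in lattice units."""
    return (a / 2) * np.exp(E / (2 * kB * T))

N = 1000                        # example island size (number of atoms)
for T in (83, 150, 300, 500):   # temperatures within the range studied
    Nc = L_c(T) ** 2            # crossover island size N_c = L_c^2
    regime = "low-T (N << N_c)" if N < Nc else "high-T (N >> N_c)"
    print(f"T = {T:3d} K   L_c = {L_c(T):8.1f}   N_c = {Nc:10.0f}   {regime}")
```

At 83 K the kink spacing is several hundred lattice units, so a thousand-atom island is deep in the faceted, nucleation-limited regime, while the same island at 300 K is well above $N_c$.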
In this low temperature regime there are very few mobile atoms at any given time; therefore, this mass transfer must be done step by step: the initial step is the nucleation of a “germ” of two bound atoms on a facet of length $L$, followed by the growth of this germ up to a size $L-1$ due to the arrival of particles emitted from the kinks and corners of the boundary of the island. Thus the germ grows, and eventually completes a new row on the facet. This simple picture still leaves a basic question unanswered: the relatively fast formation of a new row on a small facet would lead the island further away from its equilibrium shape, and yet we observe that this never happens. Indeed, sometimes a germ appears on a small facet but eventually disappears afterwards, whereas the appearance of a germ on a large facet frequently leads to the formation of a new row, taking the island closer to its equilibrium shape. These observations are at the root of the irreversible nature of the relaxation: germs only grow and become stable on the large facets, so the island can only evolve to a shape closer to equilibrium. Yet, there is clearly no local drive for growth on large facets nor any mechanism inhibiting growth on small ones. In order to explain how this irreversibility comes about, we propose the following detailed description of the mechanism of nucleation and growth of a germ. First, to create a germ, 2 atoms emitted from the corners of the island have to meet on a facet. The activation energy required for this event is obviously independent of whether it occurs on a large facet or on a small facet. Once there is a germ of 2 atoms on a facet, the total energy of the island [*does not*]{} change when a particle is transferred from a kink to the germ (3 bonds are broken, and 3 are created); see Fig. \[island2\]. Clearly the same is true if a particle from the germ is transferred to its site of emission or any other kink.
Thus, germs can grow or shrink randomly, without energy variations driving the process. An exception to this occurs if the particle that reaches the germ is the last one of a row on a facet; in that case, the
--- author: - 'Makoto Naka, and Sumio Ishihara [^1]' title: Electronic Ferroelectricity in a Dimer Mott Insulator --- Novel dielectric and magneto-dielectric phenomena are among the recent central issues in solid state physics. Beyond the conventional picture based on classical dipole moments, ferroelectricity in which the electronic contribution plays a crucial role is termed electronic ferroelectricity. [@brink; @ishihara] In recently discovered multiferroics, ferroelectricity is driven by a spin ordering. In a Mott insulator with frustrated exchange interactions, an electric polarization is induced by exchange striction effects under a non-collinear spin structure. There is another class of electronic ferroelectricity: charge-order driven ferroelectricity, where the electric polarization is caused by an electronic charge order (CO) without inversion symmetry. This class of ferroelectricity is observed in transition-metal oxides and charge-transfer type organic salts, for example LuFe$_2$O$_4$, [@ikeda; @nagano; @naka] (TMTTF)$_2$X (X: a monovalent cation), [@monceau; @yoshioka; @otsuka] and $\alpha$-(BEDT-TTF)$_2$I$_3$. [@yamamoto] A large magneto-dielectric coupling and fast polarization switching are expected in charge-order driven ferroelectricity, since the electric polarization is governed by electrons. The dielectric anomaly recently discovered in the quasi-two-dimensional organic salt $\kappa$-(BEDT-TTF)$_2$Cu$_2$(CN)$_3$ suggests a possibility of electronic ferroelectricity in this compound. The crystal structure consists of an alternate stacking of BEDT-TTF donor layers and Cu$_2$(CN)$_3$ acceptor layers. In a quasi-two-dimensional BEDT-TTF layer, pairs of dimerized molecules are located on an almost equilateral triangular lattice. When two dimerized molecules are considered as a unit, the average hole number per dimer is one, and this material is identified as a Mott insulator.
One noticeable property observed experimentally is the low-temperature spin state: no evidence of long-range magnetic order has been found down to 32 mK. [@yamashita; @shimizu] A possibility of quantum spin-liquid states has been proposed. In recent experiments [@abel], the temperature dependence of the dielectric constant has a broad maximum around 25 K and shows relaxor-like dielectric relaxation. Some anomalies are also seen in the lattice expansion coefficient and specific heat around 6 K. [@yamashita2; @manna] These data prompt us to reexamine the electronic structure in dimer Mott (DM) insulators. In this Letter, motivated by the recent experimental results in $\kappa$-(BEDT-TTF)$_2$Cu$_2$(CN)$_3$, we study dielectric and magnetic properties in a DM insulating system. From a Hubbard-type Hamiltonian, we derive the effective Hamiltonian for the case where the number of electrons per dimer is one. By using the mean-field (MF) approximation and the classical Monte-Carlo (MC) simulation, we examine spin and charge structures at finite temperature. It is shown that the ferroelectric and magnetic phases are mutually exclusive. A reentrant feature of the DM phase enhances the dielectric fluctuation near the CO phase. Implications of the present results for $\kappa$-(BEDT-TTF)$_2$Cu$_2$(CN)$_3$ are discussed. We start from a model Hamiltonian describing the electronic structure in a DM insulator. A molecular dimer is regarded as a unit and is allocated at each site of a two-dimensional triangular lattice. The average electron number per dimer is assumed to be one. The Hamiltonian consists of two terms, $$\begin{aligned} {\cal H}_0={\cal H}_{\rm intra}+{\cal H}_{\rm inter} . \label{eq:h0}\end{aligned}$$ The first term is the intra-dimer part given by $$\begin{aligned} {\cal H}_{\rm intra} &= \varepsilon\sum_{i \mu s} c_{i \mu s}^\dagger c_{i \mu s}^{} -t_0 \sum_{i s} \left ( c_{i a s}^\dagger c_{i b s}^{}+ H.c.
\right ) \nonumber \\ &+ U_0 \sum_{i \mu } n_{i \mu \uparrow} n_{i \mu \downarrow} +V_0 \sum_{i } n_{i a} n_{i b} , \label{eq:hintra}\end{aligned}$$ where two molecules are identified by a subscript $\mu(=a,b)$. We introduce the electron annihilation operator $c_{i \mu s}$ for molecule $\mu$, spin $s(=\uparrow, \downarrow)$ at site $i$, and the number operator $n_{i \mu}=\sum_{s} n_{i \mu s}=\sum_{s} c_{i \mu s}^\dagger c_{i \mu s}$. We consider a level energy $\varepsilon$, the inter-molecule electron transfer $t_0(>0)$ in a dimer, the intra-molecule Coulomb interaction $U_0$ and the inter-molecule Coulomb interaction $V_0$ in a dimer. In addition, the inter-dimer part in the second term of Eq. (\[eq:h0\]) is given by $$\begin{aligned} {\cal H}_{\rm inter} &=-\sum_{\langle ij \rangle \mu \mu' s} t_{ij}^{\mu \mu'} \left ( c_{i \mu s}^\dagger c_{j \mu' s}+H.c. \right ) \nonumber \\ &+ \sum_{\langle ij \rangle \mu \mu'} V_{i j}^{\mu \mu'} n_{i \mu } n_{j \mu'}, \label{eq:hinter} \end{aligned}$$ where $t_{ij}^{\mu \mu'}$ and $V_{ij}^{\mu \mu'}$ are the electron transfer and the Coulomb interaction between an electron in a molecule $\mu$ at site $i$ and that in a molecule $\mu'$ at site $j$, respectively. The first and second terms in ${\cal H}_{\rm inter}$ are denoted by ${\cal H}_t$ and ${\cal H}_V$, respectively. We briefly introduce the electronic structure in an isolated dimer. In the case where one electron occupies a dimer, the bonding and anti-bonding states are given by $|\beta_s \rangle =(|a_s \rangle + | b_s \rangle)/\sqrt{2}$ and $|\alpha_s \rangle =(|a_s \rangle - | b_s \rangle)/\sqrt{2}$ with energies $E_\beta = \varepsilon - t_0$ and $E_\alpha = \varepsilon + t_0$, respectively. In these bases, we introduce the electron operator $\hat c_{i \gamma s}$ for $\gamma=(\alpha, \beta)$ and the electron transfer integral ${\hat t}^{\gamma \gamma'}_{ij}$ between the NN molecular orbitals $\gamma$ and $\gamma'$. 
These are obtained by the unitary transformation from $c_{i \mu s}$ and $t_{ij}^{\mu \mu'}$. Two-electron states in a dimer are the following six states: the spin-triplet states $\{ |T_{\uparrow} \rangle, | T_\downarrow \rangle , | T_0 \rangle \} = \{ |\alpha_\uparrow \beta_\uparrow \rangle, |\alpha_\downarrow \beta_\downarrow \rangle, (|\alpha_\uparrow \beta_\downarrow \rangle+|\alpha_\downarrow \beta_\uparrow \rangle)/\sqrt{2} \} $ with the energy $E_T=2\varepsilon+V_0$, the spin-singlet state $|S \rangle =(|\alpha_\uparrow \beta_\downarrow \rangle -|\alpha_\downarrow \beta_\uparrow \rangle) /\sqrt{2} $ with $E_{S}=2\varepsilon+U_0$, and the doubly-occupied states $|D_+ \rangle=C_1| \alpha_\uparrow \alpha_\downarrow \rangle + C_2|\beta_\uparrow \beta_\downarrow \rangle$ and $|D_- \rangle=C_2| \alpha_\uparrow \alpha_\downarrow \rangle - C_1|\beta_\uparrow \beta_\downarrow \rangle$ with $E_{D\pm}=(4\varepsilon+U_0+V_0 \pm \sqrt{(U_0-V_0)^2+16t_0^2} )/2$ and coefficients $C_2/C_1=(U_0-V_0)/[2E_{D+}-4(\varepsilon-t_0)-(U_0-V_0)]$. The lowest eigen state is $|D_- \rangle $. The effective Coulomb interaction in the lowest eigen state is $U_{eff} \equiv E_{D_-}-2E_\beta \sim V_0+2t_0$ in the limit of $U_0, V_0 >> t_0$. ![(Color online) Pseudo-spin directions in the $Q^x-Q^z$ plane and electronic structures in a dimer. []{data-label="fig:ps
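The two-electron energies quoted above can be verified by direct diagonalization of the intra-dimer Hamiltonian (\[eq:hintra\]) in the site basis; the spin-triplet states decouple with energy $E_T=2\varepsilon+V_0$, and the singlet sector is a $3\times3$ block. The sketch below is our own consistency check with illustrative parameter values (not those of the material):

```python
import numpy as np

# Illustrative parameters, in units of t0 (assumed values for this check)
eps, t0, U0, V0 = 0.0, 1.0, 8.0, 3.0

# Two-electron singlet sector of one dimer, site basis
# {|a_up a_dn>, |b_up b_dn>, (|a_up b_dn> - |a_dn b_up>)/sqrt(2)}
h = -np.sqrt(2) * t0
H = np.array([[2*eps + U0, 0.0,        h],
              [0.0,        2*eps + U0, h],
              [h,          h,          2*eps + V0]])
evals = np.linalg.eigvalsh(H)          # ascending order

# Closed-form eigenvalues quoted in the text
E_S  = 2*eps + U0
root = np.sqrt((U0 - V0)**2 + 16*t0**2)
E_Dm = (4*eps + U0 + V0 - root) / 2    # lowest state |D_->
E_Dp = (4*eps + U0 + V0 + root) / 2

print(evals, sorted([E_Dm, E_S, E_Dp]))
U_eff = E_Dm - 2*(eps - t0)            # effective on-dimer repulsion
print(U_eff, V0 + 2*t0)                # ~ V0 + 2 t0 in the limit U0, V0 >> t0
```

The numerical spectrum reproduces $E_{S}$ and $E_{D\pm}$ exactly, and $U_{eff}$ approaches $V_0+2t_0$ as the Coulomb scales grow relative to $t_0$.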
--- abstract: 'We report the critical current density ($J_c$) in tetragonal FeS single crystals, comparable to that of iron-based superconductors with much higher superconducting critical temperatures ($T_{c}$’s). The $J_c$ is enhanced 3 times by 6% Se doping. We observe scaling of the normalized vortex pinning force as a function of reduced field at all temperatures. Vortex pinning in FeS and FeS$_{0.94}$Se$_{0.06}$ shows a contribution of core-normal surface-like pinning. The reduced temperature dependence of $J_c$ indicates that the dominant interaction of vortex cores and pinning centers is via scattering of charge carriers with reduced mean free path ($\delta$$l$), in contrast to K$_x$Fe$_{2-y}$Se$_2$ where spatial variations in $T_{c}$ ($\delta$$T_{c}$) prevail.' author: - 'Aifeng Wang,$^{1}$ Lijun Wu,$^{1}$ V. N. Ivanovski,$^{2}$ J. B. Warren,$^{3}$ Jianjun Tian,$^{1,4}$ Yimei Zhu$^{1}$ and C. Petrovic$^{1}$' title: 'Critical current density and vortex pinning in tetragonal FeS$_{1-x}$Se$_{x}$ ($x=0,0.06$)' --- INTRODUCTION ============ Fe-based superconductors have been attracting considerable attention since their discovery in 2008.[@Kamihara] Due to their rich structural variety and signatures of high-temperature superconductivity similar to or above that of iron arsenides, iron chalcogenide materials with Fe-Ch (Ch=S,Se,Te) building blocks are of particular interest.[@WangQY; @HeS; @GeJF; @ShiogaiJ] Recently, superconductivity below 5 K was found in tetragonal FeS synthesized by the hydrothermal reaction.[@LaiXF] The superconducting state is multiband, with a nodal gap and a large upper critical field anisotropy.[@Borg; @LinH; @XingJ; @YingTP] Local probe $\mu$SR measurements indicate two $s$-wave gaps but also a disordered impurity magnetism with small moment that microscopically coexists with bulk superconductivity below the superconducting transition temperature.[@Holenstein] This is similar to FeSe at high pressures, albeit with weaker coupling and a larger coherence length.[@Khasanov1; @Khasanov2]
Binary iron chalcogenides show potential for high field applications.[@SiW; @SunY; @LeoA; @JungSG] Since FeCh tetrahedra could be incorporated in different superconducting materials, it is of interest to study critical currents and the vortex pinning mechanism in tetragonal FeS.[@HosonoS; @ForondaFR; @LuXF] Moreover, vortex pinning and dynamics are strongly related to the coherence length and the superconducting pairing mechanism. Here we report the critical current density and the vortex pinning mechanism in FeS and FeS$_{0.94}$Se$_{0.06}$. In contrast to the point defect pinning in Ba$_{0.6}$K$_{0.4}$Fe$_2$As$_2$ and K$_x$Fe$_{2-y}$Se$_2$,[@YangH; @LeiHC2; @LeiHC1] the scattering of charge carriers with reduced mean free path $l$ ($\delta$$l$ pinning) is important in the vortex interaction with pinning centers. EXPERIMENTAL DETAILS ==================== FeS and FeS$_{0.94}$Se$_{0.06}$ single crystals were synthesized by de-intercalation of potassium from K$_x$Fe$_{2-y}$(Se,S)$_2$ single crystals, using the hydrothermal reaction method.[@LaiXF; @LeiHC1] First, 8 mmol Fe powder, 5 mmol Na$_2$S$\cdot$9H$_2$O, 5 mmol NaOH, and 10 ml deionized water were mixed together and put into a 25 ml Teflon-lined steel autoclave. After that, $\sim$0.2 g K$_x$Fe$_{2-y}$S$_2$ and K$_x$Fe$_{2-y}$S$_{1.6}$Se$_{0.4}$ single crystals were added. The autoclave was tightly sealed and annealed at 120 $^{\circ}$C for three days. Silver-colored FeS single crystals were obtained by washing the powder with de-ionized water and alcohol. Finally, FeS single crystals were obtained by drying in vacuum overnight. X-ray diffraction (XRD) data were taken with Cu K$_{\alpha}$ ($\lambda=0.15418$ nm) radiation on a Rigaku Miniflex powder diffractometer. The element analysis was performed using energy-dispersive x-ray spectroscopy (EDX) in a JEOL LSM-6500 scanning electron microscope. 
High-resolution TEM imaging and electron diffraction were performed using the double aberration-corrected JEOL-ARM200CF microscope with a cold-field emission gun, operated at 200 kV. The Mössbauer spectrum was measured in transmission geometry with a $^{57}$Co(Rh) source at room temperature. Single crystals were aligned in the sample holder plane with some overlap but without stacking. The spectrum was analyzed with the WinNormos software.[@Brand] Calibration of the spectrum was performed by laser, and isomer shifts are given with respect to $\alpha $-Fe. Magnetization measurements on rectangular bar samples were performed in a Quantum Design Magnetic Property Measurement System (MPMS-XL5). RESULTS AND DISCUSSIONS ======================= ![(Color online). (a) Powder x-ray diffraction pattern of tetragonal FeS (bottom) and FeS$_{0.94}$Se$_{0.06}$ (top). Vertical ticks mark reflections of the *P4/nmm* space group. Electron diffraction pattern for FeS (b), and FeS$_{0.94}$Se$_{0.06}$ (c) and (d). High angle annular dark field scanning transmission electron microscopy (HAADF-STEM) image viewed along the \[001\] direction of the FeS (e) and FeS$_{0.94}$Se$_{0.06}$ (f) single crystal. The \[001\] atomic projection of FeS is embedded in (b) with red and green spheres representing Fe and S/Se, respectively. The reflection condition in (b), (c) and (d) is consistent with the P4/nmm space group. While the spots with h+k=odd are extinct in FeS, they are more \[in (d)\] or less \[in (c)\] visible here, indicating possible ordering of Se.[]{data-label="magnetism"}](fig1.eps) Figure 1(a) shows the powder X-ray diffraction pattern of FeS and FeS$_{0.94}$Se$_{0.06}$. The lattice parameters of FeS$_{0.94}$Se$_{0.06}$ are a=0.3682(2) nm and c=0.5063(3) nm, suggesting Se substitution on the S atomic site in FeS \[a=0.3673(2) nm, c=0.5028(2) nm\]. High-resolution TEM imaging is consistent with the *P4/nmm* unit cell and indicates possible ordering of Se atoms. ![(Color online). 
(a) Mössbauer spectrum at 294 K of tetragonal FeS. The observed data are presented by the gray solid circles, the fit is given by the red solid line, and the difference is shown by the blue solid line. The vertical arrow denotes the relative position of the experimental point with respect to the background. (b) Superconducting transition of FeS and FeS$_{0.94}$Se$_{0.06}$ measured by magnetic susceptibility in a magnetic field of 10 Oe.[]{data-label="magnetism"}](fig2.eps) The FeS Mössbauer fit at room temperature shows a singlet line \[Fig.2(a)\] and the absence of long-range magnetic order. The isomer shift is $\delta$ = 0.373(1) mm/s whereas the Lorentz line width is 0.335(3) mm/s, in agreement with previous measurements.[@MulletM; @VaughanDJ; @BertautEF] Since the FeS$_{4}$ tetrahedra are nearly ideal, one would expect axial symmetry of the electric field gradient (*EFG*) and small values of the largest component of its diagonalized tensor $V_{zz}$. The linewidth is somewhat enhanced and is likely the consequence of a small quadrupole splitting. If the Lorentz singlet were split into two lines, their centroids would be 0.06 mm/s apart, which is the measure of the quadrupole splitting ($\Delta$). The measured isomer shift is consistent with Fe$^{2+}$, in agreement with X-ray absorption and photoemission spectroscopy studies.[@KwonK] There is a very mild discrepancy between the theoretical Mössbauer curve and the observed values near 0.2 mm/s, most likely due to texture effects and small deviations of the incident $\gamma$ rays from the c-axis of the crystal. Point defect corrections to the Mössbauer fitting curve are negligible. Fig. 2(b) presents the zero-field-cooling (ZFC) magnetic susceptibility taken at 10 Oe applied perpendicular to the $c$ axis for FeS and FeS$_{0.94}$Se$_{0.06}$ single crystals. The superconducting transition temperature $T_c$ = 4.4 K (onset of diam
--- abstract: 'We consider possibilities of observing CP-violation effects in neutrino oscillation experiments with low energy ($\sim$ several hundred MeV).' address: ' Department of Physics, University of Tokyo, Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan' author: - Joe Sato title: ' CP and T violation in (long) long baseline neutrino oscillation experiments ' --- Introduction ============ Many experiments and observations have shown evidence for neutrino oscillation one after another. The solar neutrino deficit has long been observed[@Ga1; @Ga2; @Kam; @Cl; @SolSK]. The atmospheric neutrino anomaly has been found[@AtmKam; @IMB; @SOUDAN2; @MACRO] and recently almost confirmed by SuperKamiokande[@AtmSK]. There is also another suggestion given by LSND[@LSND]. All of them can be understood by neutrino oscillation and hence indicate that neutrinos are massive and there is a mixing in the lepton sector[@FukugitaYanagida]. Since there is a mixing in the lepton sector, it is quite natural to imagine that CP violation occurs in the lepton sector. Several physicists have considered whether we may see a CP-violation effect in the lepton sector through long baseline neutrino oscillation experiments. First it was studied in the context of currently planned experiments[@Tanimoto; @ArafuneJoe; @AKS; @MN; @BGG] and recently in the context of a neutrino factory[@BGW; @Tanimoto2; @Romanino; @GH]. The use of neutrinos from a muon beam has great advantages compared with those from a pion beam[@Geer]. Neutrinos from a $\mu^+$ ($\mu^-$) beam consist of pure $\nu_{\rm e}$ and $\bar\nu_\mu$ ($\bar\nu_{\rm e}$ and $\nu_\mu$) and will contain no contamination by other kinds of neutrinos. Also, their energy distribution will be determined very well. In these proceedings, we consider how large a CP-violation effect we may see in oscillation experiments with low energy neutrinos from a muon beam. Such neutrinos with high intensity will be available in the near future[@PRISM]. 
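The formalism developed in the next section (the MNS matrix parameterization and the constant-matter-density evolution $S = \exp(-{\rm i}HL)$, $P = |S_{\beta\alpha}|^2$) lends itself to a direct numerical evaluation. The sketch below is ours, not from the proceedings; it assumes a constant average matter density and uses the same parameterization of $U$ and the same matter term $a = 7.56\times10^{-5}\,{\rm eV^2}\,(\rho/{\rm g\,cm^{-3}})(E/{\rm GeV})$ as the text.

```python
import numpy as np

def mns_matrix(omega, phi, psi, delta):
    """U = R23(psi) . diag(1, 1, e^{i delta}) . R13(phi) . R12(omega),
    the parameterization used in the text."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    cs, ss = np.cos(psi), np.sin(psi)
    r23 = np.array([[1, 0, 0], [0, cs, ss], [0, -ss, cs]], dtype=complex)
    gam = np.diag([1.0, 1.0, np.exp(1j * delta)])
    r13 = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]], dtype=complex)
    r12 = np.array([[co, so, 0], [-so, co, 0], [0, 0, 1]], dtype=complex)
    return r23 @ gam @ r13 @ r12

def oscillation_prob(alpha, beta, E, L, dm21, dm31, U, rho=0.0):
    """P(nu_alpha -> nu_beta) for a constant matter density.
    E in GeV, L in km, mass squared differences in eV^2, rho in g/cm^3.
    Flavor indices: 0 = e, 1 = mu, 2 = tau."""
    a = 7.56e-5 * rho * E  # matter term in eV^2, as in the text
    M = U @ np.diag([0.0, dm21, dm31]) @ U.conj().T + np.diag([a, 0.0, 0.0])
    lam, V = np.linalg.eigh(M)  # M is Hermitian
    # S = exp(-i M L / 2E); 2.534 turns eV^2 km / GeV into a pure phase
    S = V @ np.diag(np.exp(-1j * 2.534 * lam * L / E)) @ V.conj().T
    return abs(S[beta, alpha]) ** 2
```

Unitarity guarantees that the probabilities summed over $\beta$ equal one, and the antineutrino probability follows by $a \rightarrow -a$, $\delta \rightarrow -\delta$, as stated below.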
We will consider three active neutrinos without any sterile one by attributing the solar neutrino deficit and atmospheric neutrino anomaly to the neutrino oscillation. CP violation in long baseline neutrino oscillation experiments ============================================================== Here we consider neutrino oscillation experiments with baseline $L\sim$ several hundreds km. Oscillation probability and its approximated formula ---------------------------------------------------- First we derive approximated formulas[@AKS] of neutrino oscillation to clarify our notation. We assume three generations of neutrinos which have mass eigenvalues $m_{i} (i=1, 2, 3)$ and MNS mixing matrix $U$[@MNS] relating the flavor eigenstates $\nu_{\alpha} (\alpha={\rm e}, \mu, \tau)$ and the mass eigenstates in the vacuum $\nu\,'_{i} (i=1, 2, 3)$ as $$\nu_{\alpha} = U_{\alpha i} \nu\,'_{i}. \label{Udef}$$ We parameterize $U$[@ChauKeung; @KuoPnataleone; @Toshev] as $$\begin{aligned} & & U = {\rm e}^{{\rm i} \psi \lambda_{7}} \Gamma {\rm e}^{{\rm i} \phi \lambda_{5}} {\rm e}^{{\rm i} \omega \lambda_{2}} \nonumber \\ &=& \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & c_{\psi} & s_{\psi} \\ 0 & -s_{\psi} & c_{\psi} \end{array} \right) \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & {\rm e}^{{\rm i} \delta} \end{array} \right) \left( \begin{array}{ccc} c_{\phi} & 0 & s_{\phi} \\ 0 & 1 & 0 \\ -s_{\phi} & 0 & c_{\phi} \end{array} \right) \left( \begin{array}{ccc} c_{\omega} & s_{\omega} & 0 \\ -s_{\omega} & c_{\omega} & 0 \\ 0 & 0 & 1 \end{array} \right) \nonumber \\ &=& \left( \begin{array}{ccc} c_{\phi} c_{\omega} & c_{\phi} s_{\omega} & s_{\phi} \\ -c_{\psi} s_{\omega} -s_{\psi} s_{\phi} c_{\omega} {\rm e}^{{\rm i} \delta} & c_{\psi} c_{\omega} -s_{\psi} s_{\phi} s_{\omega} {\rm e}^{{\rm i} \delta} & s_{\psi} c_{\phi} {\rm e}^{{\rm i} \delta} \\ s_{\psi} s_{\omega} -c_{\psi} s_{\phi} c_{\omega} {\rm e}^{{\rm i} \delta} & -s_{\psi} c_{\omega} -c_{\psi} s_{\phi} s_{\omega} 
{\rm e}^{{\rm i} \delta} & c_{\psi} c_{\phi} {\rm e}^{{\rm i} \delta} \end{array} \right), \label{UPar2}\end{aligned}$$ where $c_{\psi} = \cos \psi, s_{\phi} = \sin \phi$, etc. The evolution equation of neutrino with energy $E$ in matter is expressed as $${\rm i} \frac{{\rm d} \nu}{{\rm d} x} = H \nu, \label{MatEqn}$$ where $$H \equiv \frac{1}{2 E} \tilde U {\rm diag} (\tilde m^2_1, \tilde m^2_2, \tilde m^2_3) \tilde U^{\dagger}, \label{Hdef}$$ with a unitary mixing matrix $\tilde U$ and the effective mass squared $\tilde m^{2}_{i}$’s $(i=1, 2, 3)$. The matrix $\tilde U$ and the masses $\tilde m_{i}$’s are determined by[@Wolf; @MS; @BPPW] $$\tilde U \left( \begin{array}{ccc} \tilde m^2_1 & & \\ & \tilde m^2_2 & \\ & & \tilde m^2_3 \end{array} \right) \tilde U^{\dagger} = U \left( \begin{array}{ccc} 0 & & \\ & \delta m^2_{21} & \\ & & \delta m^2_{31} \end{array} \right) U^{\dagger} + \left( \begin{array}{ccc} a & & \\ & 0 & \\ & & 0 \end{array} \right). \label{MassMatrixInMatter}$$ Here $\delta m^2_{ij} = m^2_i - m^2_j$ and $$a \equiv 2 \sqrt{2} G_{\rm F} n_{\rm e} E \nonumber \\ = 7.56 \times 10^{-5} {\rm eV^{2}} \cdot \left( \frac{\rho}{\rm g\,cm^{-3}} \right) \left( \frac{E}{\rm GeV} \right), \label{aDef}$$ with the electron density, $n_{\rm e}$ and the averaged matter density[@KS], $\rho$. The solution of eq.(\[MatEqn\]) is then $$\begin{aligned} \nu (x) &=& S(x) \nu(0) \label{nu(x)}\\ S &\equiv& {\rm T\, e}^{ -{\rm i} \int_0^x {\rm d} s H (s) } \label{Sdef}\end{aligned}$$ (T being the symbol for time ordering), giving the oscillation probability for $\nu_{\alpha} \rightarrow \nu_{\beta} (\alpha, \beta = {\rm e}, \mu, \tau)$ at distance $L$ as $$\begin{aligned} P(\nu_{\alpha} \rightarrow \nu_{\beta}; E, L) &=& \left| S_{\beta \alpha} (L) \right|^2. 
\label{alpha2beta}\end{aligned}$$ Note that $P(\bar\nu_{\alpha} \rightarrow \bar\nu_{\beta})$ is related to $P(\nu_{\alpha} \rightarrow \nu_{\beta})$ through $a \rightarrow -a$ and $U \rightarrow U^{\ast} ({\rm i.e.\,} \delta \rightarrow -\delta)$. Similarly, we obtain $P(\nu_{\beta} \rightarrow \nu_{\alpha
--- abstract: 'We propose an alternate construction to compute the minimal entanglement wedge cross section (EWCS) for a single interval in a $(1+1)$ dimensional holographic conformal field theory at a finite temperature, dual to a bulk planar BTZ black hole geometry. Utilizing this construction we compute the holographic entanglement negativity for the above mixed state configuration from a recent conjecture in the literature. Our results exactly reproduce the corresponding replica technique results in the large central charge limit and resolve the issue of the missing thermal term for the holographic entanglement negativity computed earlier in the literature. In this context we compare the results for the holographic entanglement negativity utilizing the minimal EWCS and an alternate earlier proposal involving an algebraic sum of the lengths of the geodesics homologous to specific combinations of appropriate intervals. From our analysis we conclude that the two quantities are proportional in the context of the $AdS_3/CFT_2$ scenario and that this possibly extends to the higher dimensional $AdS_{d+1}/CFT_d$ framework.' author: - 'Jaydeep Kumar Basak[^1]' - 'Vinay Malvimat[^2]' - 'Himanshu Parihar[^3]' - 'Boudhayan Paul[^4]' - 'Gautam Sengupta[^5]' bibliography: - 'references.bib' title: On minimal entanglement wedge cross section for holographic entanglement negativity --- Introduction {#sec_intro} ============ Quantum entanglement has evolved as one of the dominant themes in the development of diverse disciplines covering issues from condensed matter physics to quantum gravity and has garnered intense research attention. Entanglement entropy, defined as the von Neumann entropy of the reduced density matrix for the subsystem being considered, plays a crucial role in the characterization of entanglement for bipartite pure states. 
However, for bipartite mixed states, entanglement entropy receives contributions from irrelevant correlations (e.g., for finite temperature configurations, it includes thermal correlations), and fails to correctly capture the entanglement of the mixed state under consideration. This crucial issue was taken up in a classic work [@Vidal] by Vidal and Werner, where a computable measure termed entanglement negativity was proposed to characterize mixed state entanglement, which provided an upper bound on the distillable entanglement of the bipartite mixed state in question.[^6] It was defined as the logarithm of the trace norm of the partially transposed reduced density matrix with respect to one of the subsystems of a bipartite system. Subsequently it could be established in [@Plenio:2005cwa] that despite being non-convex, the entanglement negativity was an entanglement monotone under local operations and classical communication (LOCC). In a series of communications [@Calabrese:2004eu; @Calabrese:2009qy; @Calabrese:2009ez; @Calabrese:2010he] the authors formulated a replica technique to compute the entanglement entropy in $(1+1)$ dimensional conformal field theories ($CFT_{1+1}$s). The procedure was later extended to configurations with multiple disjoint intervals in [@Hartman:2013mia; @Headrick:2010zt], where it was shown that the entanglement entropy receives non universal contributions, which depend on the full operator content of the theory, and are subleading in the large central charge limit. A variant of the replica technique described above was developed in [@Calabrese:2012ew; @Calabrese:2012nk; @Calabrese:2014yza] to compute the entanglement negativity of various bipartite pure and mixed state configurations in a $CFT_{1+1}$. 
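Written out explicitly (our rendering of this standard definition, which the excerpt states only in words), the entanglement negativity of a bipartite state $\rho_{AB}$ is

$$\mathcal{E} = \ln \left\Vert \rho_{AB}^{T_B} \right\Vert_1 = \lim_{n_e \to 1} \ln \mathrm{Tr} \left( \rho_{AB}^{T_B} \right)^{n_e},$$

where the trace norm is $\Vert O \Vert_1 = \mathrm{Tr}\sqrt{O^{\dagger} O}$, the partial transpose acts on matrix elements as $\langle e^A_i e^B_j | \rho_{AB}^{T_B} | e^A_k e^B_l \rangle = \langle e^A_i e^B_l | \rho_{AB} | e^A_k e^B_j \rangle$, and the replica limit in the second expression is the analytic continuation over even integers $n_e$ used in the replica technique mentioned above.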
This was subsequently extended to a mixed state configuration of two disjoint intervals in [@Kulaxizi:2014nma] where the entanglement negativity was found to be non universal, and it was possible to elicit a universal contribution in the large central charge limit if the intervals were in proximity. Interestingly, the entanglement negativity for this configuration was numerically shown to exhibit a phase transition depending upon the separation of the intervals [@Kulaxizi:2014nma; @Dong:2018esp]. In [@Ryu:2006bv; @Ryu:2006ef] Ryu and Takayanagi (RT) proposed a holographic conjecture where the universal part of the entanglement entropy of a subsystem in a dual $CFT_d$ could be expressed in terms of the area of the codimension two static minimal surface (RT surface) in the bulk $AdS_{d+1}$ geometry, homologous to the subsystem. This development opened up a significant line of research in the context of the $AdS/CFT$ correspondence (for a detailed review see [@Ryu:2006ef; @Nishioka:2009un; @Rangamani:2016dms; @Nishioka:2018khk]). The RT conjecture was proved initially for the $AdS_3/CFT_2$ scenario, with later generalization to the $AdS_{d+1}/CFT_d$ framework in [@Fursaev:2006ih; @Casini:2011kv; @Faulkner:2013yia; @Lewkowycz:2013nqa]. Hubeny, Rangamani and Takayanagi (HRT) extended the RT conjecture to covariant scenarios in [@Hubeny:2007xt], a proof of which was established in [@Dong:2016hjy]. The above developments motivated a corresponding holographic characterization for the entanglement negativity, which could be utilized to compute the entanglement negativity for the vacuum state of a $CFT_d$ dual to a bulk pure $AdS_{d+1}$ geometry in [@Rangamani:2014ywa]. In [@Chaturvedi:2016rcn; @Chaturvedi:2016opa] a holographic entanglement negativity conjecture and its covariant extension were advanced for bipartite mixed state configurations in the $AdS_3/CFT_2$ scenario, with the generalization to higher dimensions reported in [@Chaturvedi:2016rft]. 
A large central charge analysis of the entanglement negativity through the monodromy technique for holographic $CFT_{1+1}$s was established in [@Malvimat:2017yaj], which provided a strong substantiation for the holographic entanglement negativity construction described above. Subsequently in [@Jain:2017sct; @Jain:2017uhe] the above conjecture, along with its covariant version, was extended to bipartite mixed state configurations of adjacent intervals in dual $CFT_{1+1}$s, with the higher dimensional generalization described in [@Jain:2017xsu]. These conjectures were applied to explore the holographic entanglement negativity for various mixed state configurations in $CFT_d$s dual to the bulk pure $AdS_{d+1}$ geometry, the $AdS_{d+1}$-Schwarzschild black hole and the $AdS_{d+1}$-Reissner-Nordstr[ö]{}m black hole [@Jain:2017xsu; @Jain:2018bai]. Subsequently the entanglement negativity conjecture was extended to the mixed state configurations of two disjoint intervals in proximity in the $AdS_3/CFT_2$ framework in [@Malvimat:2018txq], with its covariant generalization given in [@Malvimat:2018ood]. In a recent communication [@Basak:2020bot], a higher dimensional generalization of the conjecture described above was proposed and utilized to compute the holographic entanglement negativity for such mixed state configurations with long rectangular strip geometries in $CFT_d$s dual to the bulk pure $AdS_{d+1}$ geometry and the $AdS_{d+1}$-Schwarzschild black hole. On a different note, motivated by quantum error correcting codes, an alternate approach involving the backreacted minimal entanglement wedge cross section (EWCS) to compute the holographic entanglement negativity for configurations with a spherical entangling surface was advanced in [@Kudler-Flam:2018qjo]. Furthermore a [*proof*]{} of this proposal, based on the [*reflected entropy*]{} [@Dutta:2019gen], was established in another recent communication [@Kusuki:2019zsp]. 
The entanglement wedge was earlier shown to be the bulk subregion dual to the reduced density matrix of the dual $CFT$s in [@Czech:2012bh; @Wall:2012uf; @Headrick:2014cta; @Jafferis:2014lza; @Jafferis:2015del]. Recently the minimal entanglement wedge cross section has been proposed to be the bulk dual of the entanglement of purification (EoP) [@Takayanagi:2017knl; @Nguyen:2017yqw] (for recent progress see [@Bhattacharyya:2018sbw; @Bao:2017nhh; @Hirai:2018jwy; @Espindola:2018ozt; @Umemoto:2018jpc; @Bao:2018gck; @Umemoto:2019jlz; @Guo:2019pfl; @Bao:2019wcf; @Harper:2019lff]). Unlike entanglement negativity, the entanglement of purification receives contributions from both quantum and classical correlations (see [@Terhal_2002] for details). The connection of the minimal entanglement wedge cross section to the odd entanglement entropy [@Tamaoka:2018ned] and the reflected entropy [@Dutta:2019gen; @Jeong:2019xdr; @Bao:2019zqc; @Chu:2019etd] has also been explored. As mentioned earlier, in [@Takayanagi:2017knl] the authors advanced a construction for the computation of the minimal EWCS. In [@Kudler-Flam:2018qjo; @Kusuki:2019zsp], the authors proposed that for configurations involving spherical entangling surfaces, the holographic entanglement
--- abstract: '[The presence of a turbulent magnetic field in the quiet Sun has been unveiled observationally using different techniques. The magnetic field is quasi-isotropic and has field strengths weaker than 100 G. It is pervasive and may host a local dynamo.]{} [We aim to determine the length scale of the turbulent magnetic field in the quiet Sun.]{} [ The Stokes V area asymmetry is sensitive to minute variations in the magnetic topology along the line of sight. Using data provided by the Hinode-SOT/SP instrument, we performed a statistical study of this quantity. We classified the different magnetic regimes and infer properties of the turbulent magnetic regime. In particular we measured the correlation length associated with these fields for the first time.]{} [The histograms of Stokes V area asymmetries reveal three different regimes: one organized, quasi-vertical and strong field (flux tubes or other structures of the like); a strongly asymmetric group of profiles found around field concentrations; and a turbulent isotropic field. For the last, we confirm its isotropy and measure correlation lengths from hundreds of kilometers down to 10 km, at which point we lose sensitivity. A crude attempt to measure the power spectra of these turbulent fields is made.]{} [In addition to confirming the existence of a turbulent field in the quiet Sun, we give further proof of its isotropy. We also measure correlation lengths down to 10 km. The combined results show magnetic fields with a large span of length scales, as expected from a turbulent cascade.]{}' author: - '[A. L[ó]{}pez Ariste]{}' - 'A. Sainz Dalda' date: 'Received ; accepted' title: 'Scales of the magnetic fields in the quiet Sun.' --- Introduction ============ This work is dedicated to exploring the properties of the turbulent magnetic field in the quiet Sun through the analysis of the asymmetries in the Stokes V profiles observed by Hinode-SOT/SP [@kosugi_hinode_2007; @tsuneta_solar_2008] in a Zeeman-sensitive line. 
The existence of a magnetic field turbulent in nature in those places with weak Zeeman signals and an absence of temporal coherence in the plasma flows is taken for granted and, from Section 2 onwards, we shall not discuss whether these fields are turbulent or not. Although our analysis provides further evidence of the existence of this turbulent field, we will assume that this existence is proven and make our analysis in that framework, studying the coexistence of those turbulent fields with others more structured in nature, whose existence is also beyond questioning. Despite that, but also because of that, we now dedicate a few lines to justify and put into context the turbulent nature of the magnetic field in most of the quiet Sun. The existence of a turbulent magnetic field accompanying the turbulent plasma in the quiet Sun is not a theoretical surprise, rather the opposite. Upon the discovery of flux tubes in the photospheric network, [@parker_dynamics_1982] expressed surprise at the existence of these coherent magnetic structures and referred to them as an *extraordinary state of the field*. He argued in that paper that only under conditions of temporal coherence of the plasma flows in the photosphere could these structures be stable. Consequently one could expect to find them in the photospheric network where advection flows converge before dipping into the solar interior. Similar conditions can be found here and there in the internetwork in those intergranular lanes where plumes have grown strong enough to survive granular lifetimes. Everywhere else the high Reynolds number of the photospheric plasma does not allow any structure of any kind, and a turbulent field, if anything, is to be expected. Other theoretical analyses [@petrovay_turbulence_2001] confirm and insist on these turbulent fields. 
The first attempts at numerical simulations of magnetoconvection [@nordlund_dynamo_1992] revealed a magnetic field whose field lines, away from downflows, are twisted and folded even for low Reynolds numbers (kinetic and magnetic) and for the wrong ratio of these two dimensionless quantities. Independent of the existence of a local dynamo, on which most of these simulations focus, the turbulent field is there. Observationally, the picture has been different until recently. The discovery of flux tubes in the network [@stenflo_magnetic-field_1973] together with the common observation of G-band bright points in high-resolution images of the photosphere has spread the idea that flux tubes are everywhere. Because inversion techniques for the measurement of the magnetic field mostly used Milne-Eddington atmospheric models that assume a single value of the magnetic field per line of sight, whenever they have been used in the quiet Sun, a single field value was attached to a point in the photosphere, which additionally spread the impression that it was the field of a flux tube. Doubts were cast on this observational picture of the quiet Sun when magnetic measurements using infrared lines produced fields for the same points different from those measured with visible lines. The wavelength dependence of the Zeeman effect sufficed to expose the fact that a single field could not be attached to a given point in the quiet Sun. Both measurements were right, in the sense at least that they were measuring different aspects of the complex magnetic topology of the quiet Sun. Turbulent fields had in the meantime been the solution offered by those observing the quiet Sun through the Hanle effect. The absence of Stokes U in those measurements and the high degree of depolarization of the lines pointed to a turbulent field as the only scenario fitting their observations. 
Unfortunately, the difficulties both in the observations (very low spatial and temporal resolutions) and in the diagnostics (many subtle quantum effects involved) made the comparison with the observations using the Zeeman effect difficult. The advent of the statistical analysis of Zeeman observations has solved the problem. First came the observation that the quiet Sun, if one excludes the network and strong magnetic patches from the data, looks suspiciously similar independent of the position on the solar disk that one is observing [@martinez_gonzalez_near-ir_2008]. This independence of the measurements from the viewing angle pointed toward isotropy, a characteristic of the quiet Sun fields later confirmed by [@asensio_ramos_evidence_2009]. Then came the realization that the average longitudinal flux density measured in the quiet Sun at different spatial resolutions was roughly the same [@lites_characterization_2002; @martinez_gonzalez_statistical_2010]. This could only be interpreted as meaning that either the field was already resolved, which obviously it was not, or that the observed signals were just the result of a random addition of many magnetic elements. In the limit of large numbers, the amplitude of this fluctuation depends only on the square root of the size, and not linearly as expected for a non-resolved flux tube. The turbulent field is in this way unveiled by the statistical analysis of the Zeeman effect, and it was shown that Zeeman signatures in the quiet Sun were often merely statistical fluctuations of the turbulent field and not measurements of the field itself [@lopez_ariste_turbulent_2007]. The solar magnetic turbulence was therefore explored in a statistical manner. Furthermore, it was explored assuming that different realizations of the magnetic probability distribution functions sit side by side. In this approximation one can compute the resulting polarization by just adding up the individual contributions of each magnetic field. 
The Stokes V profile caused by the Zeeman effect of each individual magnetic field will be anti-symmetric with respect to the central wavelength, with one positive and one negative lobe. The areas of the two lobes of every profile are identical and their addition, the area asymmetry, will be zero. Adding many such polarization profiles will alter the resulting profile, but the area asymmetry of the final profile will always be zero. A completely different result is obtained if we consider the different realizations of the magnetic field probability distribution function placed one after the other along the line of sight. Computing the resulting polarization profile now requires integrating the radiative transfer equation for polarized light in a non-constant atmosphere. If the variations in the magnetic field along the line of sight are associated with velocities, the integration results in a profile that lacks any particular symmetry. Therefore, measuring and analyzing the area asymmetry of the Stokes V profiles in the quiet Sun provides information on the properties of the turbulent magnetic field along the line of sight, in contrast with the previous studies, which only explored this turbulence in terms of accumulation of magnetic elements in a plane perpendicular to the line of sight. At disk center, the line of sight means exploring those fields with depth, while near the limb it means exploring fields sitting side by side. Comparing asymmetries in statistical terms from quiet regions at different heliocentric angles provides us with a probe on the angular dependence of the magnetic fields, and this is one of the purposes of this paper. 
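The contrast drawn above can be checked with a toy numerical example (entirely illustrative; the profile shape, amplitudes, shifts and widths are arbitrary choices of ours, not the paper's data). Each weak-field Stokes V profile is modeled as the derivative of a Gaussian; any side-by-side sum of such antisymmetric profiles keeps a zero area asymmetry, whereas an imbalance between the two lobes, of the kind produced by joint magnetic field and velocity gradients along the line of sight, yields a nonzero value.

```python
import numpy as np

wav = np.linspace(-1.0, 1.0, 4001)  # wavelength offset from line center (arbitrary units)

def stokes_v(amp, shift, width):
    """Toy antisymmetric Stokes V profile: derivative of a Gaussian (weak-field shape)."""
    x = (wav - shift) / width
    return -amp * x * np.exp(-0.5 * x**2)

def area_asymmetry(v):
    """Signed area over unsigned area: dA = int(V) / int(|V|); the grid step cancels."""
    return v.sum() / np.abs(v).sum()

# Side-by-side superposition of antisymmetric profiles: the area asymmetry stays zero.
v_sum = stokes_v(1.0, 0.05, 0.10) + stokes_v(-0.4, -0.12, 0.15)

# A gradient along the line of sight breaks the symmetry; mimic its net effect
# by amplifying the blue lobe of a single profile by 20%.
v0 = stokes_v(1.0, 0.0, 0.10)
v_asym = np.where(wav < 0.0, 1.2 * v0, v0)
# dA = (1.2 - 1) / (1.2 + 1), roughly 0.09
```

The point of the sketch is only that lateral superposition cannot generate area asymmetry, so a measured nonzero value traces variations along the line of sight, exactly the argument made in the text.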
In Section 2 we describe the asymmetries observed by Hinode-SOT in these terms and recover the three expected magnetic regimes: the structured and mostly vertical strong fields (strong in terms of quiet Sun magnetism); the turbulent, ubiquitous, disorganized and weak fields; and a class of profiles with strong asymmetries that can be observed at those places where the line of sight crosses from one regime to the other, from turbulent to organized. Focusing on the profiles assigned to turbulent magnetic fields, we can link the value of the area asymmetry with the dominant scales of variation of the magnetic field. The results on stochastic radiative transfer that allow us to make that link are recalled in Section \[sec\_scales\], in particular those of [@carroll_meso-structured_2007]. Thanks to those works we can quantitatively determine the correlation length of the magnetic field for every value of the area asymmetry. From this determination we attempt to give an energy spectrum for the magnetic turbulence at scales below the spatial resolution. For this attempt, we will use the longitudinal flux density as a lower bound on the field strength, and hence on the magnetic energy, and plot it versus the correlation length already determined. The approximations and simplifications made
--- author: - Tayyaba Zafar - Attila Popping - Céline Péroux bibliography: - 'dla.bib' date: 'Received / Accepted ' title: 'The ESO UVES Advanced Data Products Quasar Sample - I. Dataset and New $N_{{H}\,{\sc I}}$ Measurements of Damped Absorbers' --- Introduction ============ Among the absorption systems observed in the spectra of quasars, those with the highest neutral hydrogen column density are thought to be connected with the gas reservoir responsible for forming galaxies at high redshift and have received wide attention (see review by @wolfe05). These systems are usually classified according to their neutral hydrogen column density as damped Ly$\alpha$ systems (hereafter DLAs) with $N_{{\rm H}\,{\sc \rm I}}\ge2\times10^{20}$ atoms cm$^{-2}$ [e.g., @storrie00; @wolfe05] and sub-damped Ly$\alpha$ systems (sub-DLAs) with $10^{19}\le N_{{\rm H}\,{\sc \rm I}}\le2\times10^{20}$ atoms cm$^{-2}$ [e.g., @peroux03b]. The study of these systems has made significant progress in recent years, thanks to the availability of large sets of quasar spectra from the two-degree field survey (2dF, @croom01) and the Sloan Digital Sky Survey (SDSS; @prochaska05; @noterdaeme09; @noterdaeme12b). They have been shown to contain most of the neutral gas mass in the Universe [@lanzetta91; @lanzetta95; @wolfe95] and are currently used to measure the redshift evolution of the total amount of neutral gas mass density [@lanzetta91; @wolfe95; @storrie00; @peroux03; @prochaska05; @noterdaeme09; @noterdaeme12b]. In addition, the sub-DLAs may contribute significantly to the cosmic metal budget, which is still highly incomplete. Indeed, only $\sim$20% of the metals are observed when one adds the contributions of the Ly$\alpha$ forest, DLAs, and galaxies such as Lyman break galaxies [e.g., @pettini99; @pagel02; @wolfe03; @pettini04; @pettini06; @bouche05; @bouche06; @bouche07]. 
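The column density cuts quoted above translate directly into a classification rule. A trivial helper (ours, not from the paper) makes the thresholds explicit, together with their log base 10 values as usually quoted:

```python
import math

# Column density thresholds from the text (atoms cm^-2)
SUB_DLA_MIN = 1e19   # log10 = 19.00
DLA_MIN = 2e20       # log10 ~ 20.30

def classify_absorber(n_hi):
    """Classify an absorber by its neutral hydrogen column density N(HI)."""
    if n_hi >= DLA_MIN:
        return "DLA"
    if n_hi >= SUB_DLA_MIN:
        return "sub-DLA"
    return "below sub-DLA threshold"

# The DLA cut expressed as a log10 value
log_dla_min = math.log10(DLA_MIN)
```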
Therefore, to obtain a complete picture of the redshift evolution of both the cosmological neutral gas mass density and the metal content of the Universe, the less-studied sub-DLAs should be taken into account [@peroux03b]. However, these systems cannot be readily studied at low resolution, and only limited samples of high-resolution quasar spectra have been available until now [e.g., @peroux03b; @dessauges03; @ledoux03; @kulkarni07; @meiring08; @meiring09]. The excellent resolution and large wavelength coverage of UVES allow this less-studied class of absorbers to be explored. We have therefore examined the high-resolution quasar spectra taken between February 2000 and March 2007 and available in the UVES [@dekker00] Advanced Data Products archive, ending up with a sample of 250 quasar spectra. In this paper we present both the dataset of quasars observed with UVES and the damped absorbers (DLAs and sub-DLAs) covered by these spectra. In addition, we measured column densities of DLAs/sub-DLAs seen in the spectra of these quasars and not reported in the literature. In a companion paper [@zafar12b], we built a carefully selected subset of this dataset to study the statistical properties of DLAs and sub-DLAs, their column density distribution, and the contribution of sub-DLAs to the gas mass density. Further studies, based on specifically designed subsets of the dataset built in this paper, will follow (e.g., studies of metal abundances, molecules). This work is organized as follows. In §2, information about the UVES quasar data sample is provided. In §3, the properties of the damped absorbers are described. This section also summarizes the details of the new column density measurements. In §4, some global properties of the full sample are presented and lines-of-sight of interest are reported in §5. All log values and expressions correspond to log base 10.
The Quasar Sample ================= ESO Advanced Data Products -------------------------- In 2007, the European Southern Observatory (ESO), which manages the 8.2m Very Large Telescope (VLT) observatory, made available to the international community a set of Advanced Data Products for some of its instruments, including the high-resolution UVES[^1] instrument. The reduced archival UVES echelle dataset is processed by the ESO UVES pipeline (version 3.2) within the `MIDAS` environment with the best available calibration data. This process has been executed by the quality control (QC) group, part of the Data Flow Department. The resulting sample is based on a uniform reprocessing of UVES echelle point source data from the beginning of operations (dated 18$^{\rm th}$ of February 2000) up to the 31$^{\rm st}$ of March 2007. The standard quality assessment, quality control and certification have been integral parts of the process. The following types of UVES data are not included in the product data set: $i)$ data using the image slicers and/or the absorption cell; $ii)$ echelle data from extended objects; and $iii)$ data from the Fibre Large Array Multi Element Spectrograph (FLAMES)/UVES instrument mode. In general, no distinction has been made between visitor mode (VM) and service mode (SM) data, nor between standard settings and non-standard settings. However, the data reduction was performed only when robust calibration solutions (i.e., “master calibrations") were available. In the UVES Advanced Data Products archive, these calibrations are available only for the standard settings centered on $\lambda$ 346, 390, 437, 520, 564, 580, 600 or 860 nm. For certain “non-standard" settings, master calibrations were not produced in the first years of UVES operations (until about 2003). These are, e.g., 1x2 or 2x3 binnings, or central wavelengths other than those mentioned above.
As a result, the Advanced Data Products database used for the study presented here is not as complete as the ESO UVES raw data archive. Quasar Selection ----------------- The UVES archives do not provide information on the nature of the targets. Indeed, the target names are chosen by the users, and only recently has the Phase 2 step allowed users to classify their targets, and only on a voluntary basis. Therefore, the first step to construct a sample of quasar spectra out of the Advanced Data Products archive is to identify the nature of the objects. For this purpose we retrieved quasar lists issued from quasar surveys: the Sloan digital sky survey data release 7 (DR7) database[^2], HyperLeda[^3], 2dF quasar redshift survey[^4], Simbad[^5] and the Hamburg ESO catalogue. The resulting right ascension (RA) and declination (Dec) of the quasars were cross-matched with the UVES Advanced Data Products archive within a radius of $15.0''$. The large radius was chosen to overcome possible relative astrometric shifts between the various surveys and the UVES database. Because of this large radius, the raw matched list contains not only quasars but also other objects such as stars, galaxies, and Seyferts. The non-quasar objects have been filtered out by visual inspection of the spectra. Data in ESO OPC categories C (Interstellar Medium, Star Formation and Planetary Systems) and D (Stellar Evolution) usually target galactic objects, but in some cases observers targeted quasars under the same programs. The spectra have been visually inspected for those particular cases. Further Data Processing ----------------------- In the UVES spectrograph, the light beam from the telescope is split into two arms (UV to Blue and Visual to Red) within the instrument. The spectral resolution of UVES is about $R=\lambda/\Delta\lambda\sim41,400$ when a $1.0''$ slit is used.
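The positional cross-match described above can be sketched in a few lines. This is an illustration only (the coordinates below are made up, and real pipelines would use a dedicated catalogue-matching tool); the angular-separation formula is the standard haversine expression, which is numerically safe for small separations:

```python
import math

def angular_sep_arcsec(ra1, dec1, ra2, dec2):
    """Great-circle separation between two (RA, Dec) positions given in
    degrees, returned in arcseconds (haversine formula)."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    d = 2 * math.asin(math.sqrt(
        math.sin((dec2 - dec1) / 2) ** 2
        + math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2) ** 2))
    return math.degrees(d) * 3600.0

def cross_match(survey, archive, radius_arcsec=15.0):
    """Return (survey_name, archive_name) pairs closer than radius_arcsec."""
    return [(s, a) for s, (ra1, dec1) in survey.items()
                   for a, (ra2, dec2) in archive.items()
                   if angular_sep_arcsec(ra1, dec1, ra2, dec2) <= radius_arcsec]

survey = {"QSO J0000": (10.0000, -30.0000)}     # hypothetical survey entry
archive = {"target A": (10.0010, -30.0005)}     # ~3.6" away: a match
print(cross_match(survey, archive))
```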
By varying the slit width, the maximum spectral resolution can reach up to $R=80,000$ and $110,000$ for the BLUE and the RED arm, respectively. For each target, individual spectra (most often with overlapping settings) were merged using a dedicated `Python` code which weights each spectrum by its signal-to-noise ratio. All contributing spectra were regridded to a common frame, with the resolution being that of the spectrum with the highest sampling. When present, the bad pixels were masked to ensure that they would not contribute to the merged spectrum. In the regions of overlap the spectra were calibrated to the same level before being error-weighted and merged. Particular attention was given to “step" features in the quasar continua, and a visual search identified and corrected these features when they corresponded to positions between two orders of the echelle spectrum. In the merging process for each individual spectrum, a radial velocity correction for barycentric and heliocentric motion (using heliocentric correction values from the file headers) was applied. A vacuum correction on the wavelength was also
--- abstract: 'Recent ALMA observations may indicate a surprising abundance of sub-Jovian planets on very wide orbits in protoplanetary discs that are only a few million years old. These planets are too young and distant to have been formed via the Core Accretion (CA) scenario, and are much less massive than the gas clumps born in the classical Gravitational Instability (GI) theory. It was recently suggested that such planets may form by the partial destruction of GI protoplanets: energy output due to the growth of a massive core may unbind all or most of the surrounding pre-collapse protoplanet. Here we present the first 3D global disc simulations that simultaneously resolve grain dynamics in the disc and within the protoplanet. We confirm that massive GI protoplanets may self-destruct at arbitrarily large separations from the host star provided that solid cores of mass $\sim $10-20 $ M_{\oplus}$ are able to grow inside them during their pre-collapse phase. In addition, we find that the heating force recently analysed by [@MassetVelascoRomero17] perturbs these cores away from the centre of their gaseous protoplanets. This leads to very complicated dust dynamics in the protoplanet centre, potentially resulting in the formation of multiple cores, planetary satellites, and other debris such as planetesimals within the same protoplanet. A unique prediction of this planet formation scenario is the presence of sub-Jovian planets at wide orbits in Class 0/I protoplanetary discs.' author: - | J. Humphries[^1] & S. Nayakshin\ \ Department of Physics and Astronomy, University of Leicester, Leicester LE1 7RH, UK. bibliography: - 'humphries.bib' date: 'Accepted XXX. 
Received YYY; in original form ZZZ' title: 'On the origin of wide-orbit ALMA planets: giant protoplanets disrupted by their cores' --- \[firstpage\] accretion discs – planet-disc interactions – protoplanetary discs – brown dwarfs – planets and satellites: formation – planets and satellites: composition Introduction ============ It is now widely believed that the gaps in the $\sim $ 1 mm dust discs observed by the Atacama Large Millimeter Array (ALMA) are the signatures of young planets [@Alma2015; @IsellaEtal16; @LongEtal18; @DSHARP1; @DSHARP7]. Modelling suggests that these planets are often wide-orbit Saturn analogues [@DipierroEtal16a; @ClarkeEtal18; @LodatoEtal19]. Such planets present a challenge for both of the primary planet formation theories, albeit for different reasons. In the classical planetesimal-based Core Accretion (CA) theory [@PollackEtal96; @IdaLin04a] forming massive solid cores at wide separations in a $\sim 1$ Myr old disc is challenging as the process is expected to take more than an order of magnitude longer than this [e.g., @KL99]. Core growth via pebble accretion is much faster; however the process is not very efficient [@OrmelLiu18; @LinEtal18] and may require more pebbles than the observations indicate. Additionally, in any flavor of CA scenario the ALMA planets should be in the runaway gas accretion phase. This process is expected to produce planets much more massive than Jupiter [*very rapidly*]{} at these wide orbits, which does not seem to be the case [@NayakshinEtal19]. [@NduguEtal19] has very recently detailed these constraints. As much as $2000 {{\,{\rm M}_{\oplus}}}$ of pebbles in the disc are required to match the ALMA gap structures, and the resulting planet mass function indeed shows too few sub-Jovian planets and too many $M_{\rm p} > 1 {{\,{\rm M}_{\rm J}}}$ planets. In the Gravitational Instability (GI) scenario [@Boss98; @Rice05; @Rafikov05] planets form very rapidly, e.g., in the first $\sim 0.1$ Myr [@Boley09]. 
The age of ALMA planets is thus not a challenge for this scenario; ALMA planets, if anything, are ‘old’ for GI. However, the minimum initial mass of protoplanetary fragments formed by GI in the disc is thought to be at least $\sim 1 {{\,{\rm M}_{\rm J}}}$ [@BoleyEtal10], and perhaps even $\sim 3-10 {{\,{\rm M}_{\rm J}}}$ [@KratterEtal10; @ForganRice13b; @KratterL16]. This is an order of magnitude larger than the typical masses inferred for the ALMA gap opening planets [@NayakshinEtal19]. Formation of a protoplanetary clump in a massive gas disc is only the first step in the life of a GI-made planet, and its eventual fate depends on many physical processes [e.g., see the review by @Nayakshin_Review]. In this paper we extend our earlier work [@HumphriesNayakshin18] on the evolution of GI protoplanets in their pebble-rich parent discs. Newly born GI protoplanets are initially very extended with radii of $\sim$ 1 AU. After a characteristic cooling time, hydrogen molecules in the protoplanet dissociate and it collapses to a tightly bound Jupiter analogue [@Bodenheimer74; @HelledEtalPP62014]. During this cooling phase protoplanets are vulnerable to tidal disruption via interactions with the central star or with other protoplanets, which may destroy many of these nascent planets if they migrate to separations closer than 20 AU [@HumphriesEtal19]. Additionally, pebble accretion plays a key role in the evolution of GI protoplanets. Observations [@TychoniecEtal18] suggest that young Class 0 protoplanetary discs contain as much as hundreds of Earth masses in pebble-sized grains. These will be focused inside protoplanets as they are born [@BoleyDurisen10] and accreted during any subsequent migration [@BaruteauEtal11; @JohansenLacerda10; @OrmelKlahr10].
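The tidal vulnerability of such extended clumps can be illustrated with a back-of-envelope Hill-radius estimate. This is my own illustration, not a calculation from the paper: the masses and the crude disruption criterion ($R_{\rm p} \gtrsim R_{\rm H}/3$) are assumptions chosen only to show the trend with separation.

```python
M_JUP_IN_MSUN = 9.55e-4  # Jupiter mass in solar masses

def hill_radius_au(a_au, m_p_mjup, m_star_msun=1.0):
    """Hill radius R_H = a * (M_p / (3 M_*))**(1/3), in AU."""
    return a_au * (m_p_mjup * M_JUP_IN_MSUN / (3.0 * m_star_msun)) ** (1.0 / 3.0)

# A ~3 M_J pre-collapse protoplanet of radius ~1 AU around a 1 M_sun star:
for a in (10, 20, 50, 100):
    r_h = hill_radius_au(a, m_p_mjup=3.0)
    status = "tidally vulnerable" if 1.0 > r_h / 3.0 else "roughly safe"
    print(f"a = {a:3d} AU: R_H = {r_h:5.2f} AU -> {status}")
```

With these assumed numbers the crossover falls at a few tens of AU, broadly consistent with the $\sim$20 AU disruption scale quoted above.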
Both analytical work [@Nayakshin15a] and 3D simulations [@HumphriesNayakshin18] demonstrate that protoplanets accrete the majority of mm and above sized dust grains that enter their Hill spheres, considerably enhancing their total metal content. Once accreted, these pebbles are expected to grow and settle rapidly inside the protoplanet, likely becoming locked into a massive core [@Kuiper51b; @BossEtal02; @HS08; @BoleyEtal10; @Nayakshin10a; @Nayakshin10b]. [@Nayakshin16a] showed via 1D population synthesis models that core formation inside young GI protoplanets may in fact release enough heat to remove some or all of their gaseous envelopes. This process could therefore downsize GI protoplanets, making GI a more physically plausible scenario for hatching ALMA planets. This idea is also attractive because of parallels with other astrophysical systems, e.g. galaxies losing much of their initial gaseous mass due to energetic feedback from supermassive black holes [@DiMatteo05]. In [@HumphriesNayakshin18] we used a sink particle approximation to study the accretion of pebbles onto GI protoplanets. The sink particle approach provides a reliable estimate for the total mass of pebbles captured by a protoplanet, but it does not allow us to study their subsequent evolution. In this paper we improve on our previous work by modelling the protoplanet hydrodynamically, thus resolving both gas and dust dynamics within it. Nevertheless, it remains beyond current computational means to resolve the growing solid core in such simulations, and so we introduce a [*dust-sink*]{} in the protoplanet centre to model the core. As pebbles accrete onto the core, we pass the liberated gravitational potential energy to the surrounding gas in the protoplanet. Our simulations therefore also aim to explore the effects of core feedback as introduced in [@Nayakshin16a]. The paper is structured as follows. 
In Section \[sec:analytic\_fb\] we briefly calculate the expected core mass necessary to disrupt a young GI protoplanet. Following this, in Section \[sec:methods\] we describe the new physics added to our previous simulations [@HumphriesNayakshin18] in order to numerically model core growth and feedback. We also study the settling timescale for a variety of grain sizes inside protoplanets. In Section \[sec:results\] we present the main results of the paper, examining how feedback driven disruption over a range of feedback timescales and pebble to gas ratios can unbind protoplanets and leave behind rocky cores at tens of AU. In Section \[sec:core\] we extend this analysis and take a closer look at the core during the feedback process. In Section \[sec:discussion\] we outline the observational implications of the protoplanet disruption process and also discuss some of the limitations of our model. Finally, we summarise the conclusions of our paper in Section \[sec:conclusions\]: if rocky cores rapidly form inside GI protoplanets, the resultant release of energy may disrupt these objects and leave super-Earth and potentially Saturn mass cores stranded at tens to hundreds of AU. Anal
--- abstract: 'Materials exhibiting a substitutional disorder such as multicomponent alloys and mixed metal oxides/oxyfluorides are of great importance in many scientific and technological sectors. Disordered materials constitute an overwhelmingly large configurational space, which makes it practically impossible to be explored manually using first-principles calculations such as density functional theory (DFT) due to the high computational costs. Consequently, the use of methods such as cluster expansion (CE) is vital in enhancing our understanding of the disordered materials. CE dramatically reduces the computational cost by mapping the first-principles calculation results on to a Hamiltonian which is much faster to evaluate. In this work, we present our implementation of the CE method, which is integrated as a part of the Atomic Simulation Environment (ASE) open-source package. The versatile and user-friendly code automates the complex set up and construction procedure of CE while giving the users the flexibility to tweak the settings and to import their own structures and previous calculation results. Recent advancements such as regularization techniques from machine learning are implemented in the developed code. The code allows the users to construct CE on any bulk lattice structure, which makes it useful for a wide range of applications involving complex materials. We demonstrate the capabilities of our implementation by analyzing the two example materials with varying complexities: a binary metal alloy and a disordered lithium chromium oxyfluoride.' author: - Jin Hyun Chang - David Kleiven - Marko Melander - Jaakko Akola - Juan Maria Garcia Lastra - Tejs Vegge title: 'CLEASE: A versatile and user-friendly implementation of Cluster Expansion method' --- Introduction ============ Computational modeling of materials with a substitutional disorder such as multicomponent alloys and mixed metal oxides is said to have a *configurational* problem. 
The vast configurational space of these materials makes it practically impossible to explore directly using first-principles calculations such as density functional theory (DFT). A quantitative method capable of establishing the relationship between the structure and property of materials is therefore essential. Cluster Expansion (CE) [@Sanchez1984; @DeFontaine1994; @Zunger1993; @Asta2001; @VandeWalle2008; @Zhang2016] is a method that has been used successfully in the past few decades to parameterize and express the configurational dependence of physical properties. The most widely parameterized physical property is energy computed using first-principles methods, but CE can also be used to parameterize other quantities such as band gap [@Magri1991; @Franceschetti1999] and density of states [@Geng2005]. Despite its success and usefulness in predicting physical properties of crystalline materials, CE remains a niche tool used in a small subfield of computational materials science, primarily used by specialists. On the other hand, the number of research fields in which CE is becoming relevant is on the rise; one such example is the use of disordered materials for battery applications [@Wang2015a; @Abdellahi2016; @Abdellahi2016a; @Urban2016; @Kitchaev2018]. The objective of our work is to make cluster expansion more accessible for a broad range of computational scientists who do not necessarily possess expertise in cluster expansion. Our approach to achieving such a goal is to implement CE as a part of a widely used, open-source Atomic Simulation Environment (ASE) package [@ASE]. Henceforth, we refer to our implementation as CLEASE, which stands for CLuster Expansion in Atomic Simulation Environment.
Having CE as a part of a widely used package with interfaces to a multitude of open-source and commercial atomic-scale simulation codes brings several practical benefits: (1) a large existing user base does not need to install or learn a new program as the CE module is a part of ASE and inherits its syntax and code style, and (2) all of the atomic-scale simulation codes supported by ASE are also automatically supported by the implemented module. In addition, CLEASE utilizes the database management feature implemented in ASE, which provides an efficient way to store, maintain and share both DFT and CE results. Therefore, the implementation presented in this article appeals to a significant portion of the computational materials science community as a versatile and easy-to-learn package, thereby lowering the barrier to incorporating cluster expansion as a part of their research methods to accelerate computational materials prediction and design. The rest of the paper is organized as follows. A brief overview of the cluster expansion formalism and other important concepts is provided in section \[sec:theory\] in order to aid the readers who are not familiar with the cluster expansion method. The implementation of CLEASE is described in section \[sec:implementation\]. Section \[sec:example\] contains two application examples with different levels of complexity, namely a binary metal alloy and a lithium metal oxyfluoride. The computational settings and technical details for the examples are provided in section \[sec:methods\]. Theory {#sec:theory} ====== Cluster Expansion Formalism --------------------------- The core concept of the cluster expansion is to relate the scalar physical quantity of a material, $q(\bm{\sigma})$, to its configuration, $\bm{\sigma}$, where a crystalline system is represented with a fixed underlying grid of atomic sites.
In such a representation, any configuration with the same underlying topology can be completely specified by the atomic occupation of each atomic site. For the case of a crystalline material with $N$ atomic sites, any configuration can be specified by an $N$-dimensional vector $\bm{\sigma} = \{s_1, s_2, \ldots, s_N\}$, where $s_i$ is a site variable that specifies which type of atom occupies the atomic site $i$ (also referred to as an occupation variable [@Zarkevich2004; @Meng2009; @VandeWalle2009] or pseudospin [@DeFontaine1994; @Magri1991; @Nelson2013; @Nelson2013a; @Seko2014]). It is noted that the terms configuration and structure are often used interchangeably. For the case of multinary systems consisting of $M$ different atomic species, $s_i$ takes one of $M$ distinct values. The original formulation of Sanchez et al. [@Sanchez1984] specifies the $s_i$ to take any values from $\pm m$, $\pm (m-1)$, $\ldots$, $\pm 1$ for $M = 2m$ (for the case where there is an odd number of element types, an additional value of 0 should be included in the possible values of $s_i$, and the relation between $M$ and $m$ becomes $M = 2m-1$). Other choices of $s_i$ are also commonly used such as values ranging from $0$ to $M-1$ by van de Walle [@VandeWalle2009] and from $1$ to $M$ by Mueller and Ceder [@Mueller2010]. Based on the original formalism by Sanchez et al., single-site basis functions are determined through an orthogonality condition $$\frac{1}{M} \sum_{s_i=-m}^{m}\Theta_n(s_i)\Theta_{n'}(s_i) = \delta_{nn'}, \label{eq:orthogonality_condition}$$ where $\Theta_{n}(s_i)$ is the $n$th single-site basis function (e.g., Chebyshev polynomials) for $i$th site and $\delta_{nn'}$ is a Kronecker delta. The configuration is decomposed into a sum of clusters as shown in figure \[fig:cluster\_decomposition\]. 
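The orthogonality condition above can be checked numerically for the simplest binary case, $M = 2$ ($m = 1$), where $s_i \in \{-1, +1\}$ and the single-site basis functions reduce to $\Theta_0(s) = 1$ and $\Theta_1(s) = s$. A minimal sketch (illustration only, not CLEASE code):

```python
# Single-site basis for M = 2: constant and linear functions of the spin.
spins = (-1, 1)
basis = (lambda s: 1.0, lambda s: float(s))
M = len(spins)

# (1/M) * sum_s Theta_n(s) * Theta_n'(s) should equal the Kronecker delta.
for n, theta_n in enumerate(basis):
    for n2, theta_n2 in enumerate(basis):
        inner = sum(theta_n(s) * theta_n2(s) for s in spins) / M
        assert abs(inner - (1.0 if n == n2 else 0.0)) < 1e-12
print("orthogonality holds for the M = 2 basis {1, s}")
```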
Each cluster has a set of associated cluster functions, which are defined as $$\Phi_{\bm{n}}(\bm{s}) = \prod_{i} \Theta_{n_i}(s_i), \label{eq:cluster_function}$$ where $\bm{n}$ and $\bm{s}$ are vectors specifying the order of the single-site basis function and the site variables in the cluster, respectively. $n_i$ and $s_i$ specify the $i$th element of the respective vectors. The use of orthogonal basis functions guarantees that the cluster functions defined in (\[eq:cluster\_function\]) are also orthogonal. The symmetrically equivalent clusters are classified as the same cluster, and the collection of all symmetrically equivalent clusters is denoted by $\alpha$. ![image](fig1.pdf){width="150mm"} The average value of the cluster functions in cluster $\alpha$ is referred to as a correlation function, $\phi_{\alpha}$. The physical quantity, $q(\bm{\sigma})$, normalized with the number of atomic sites $N$ is then expressed as $$q(\bm{\sigma}) = \sum_{\alpha} m_{\alpha} J_{\alpha} \phi_{\alpha}, \label{eq:cluster_expansion1}$$ where $m_{\alpha}$ is the multiplicity factor indicating the number of cluster $\alpha$ per atom and $J_{\alpha}$ is the effective cluster interaction (ECI) per occurrence, which needs to be determined. It is noted that the cluster $\alpha$ includes the cluster of size zero, which has $m_{\alpha} \phi_{\alpha} = 1$. Alternatively, (\[eq:cluster\_expansion1\]) can be written in a more explicit form, $$q(\bm{\sigma}) = J_{0} + \sum_{\alpha} m_{\alpha} J_{\alpha} \phi_{\alpha}, \label{eq:cluster_expansion2}$$ where $J_0$ is the ECI of an empty cluster while $\alpha$ in this case corresponds to the clusters of size one and higher.
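A toy example makes the expansion concrete. The sketch below (my own illustration, not CLEASE) builds a nearest-neighbour cluster expansion for a periodic 1D binary chain with $s_i \in \{-1, +1\}$: the correlation functions are the empty cluster (1), the point cluster $\langle s_i \rangle$ and the pair cluster $\langle s_i s_{i+1} \rangle$, and the ECIs are recovered by least squares from "training" energies generated with known values.

```python
import random
import numpy as np

N = 8                                  # chain length
J_true = np.array([0.5, -0.2, 0.35])   # [J_empty, J_point, J_pair], arbitrary

def correlations(sigma):
    """[empty, point, nearest-neighbour pair] correlation functions."""
    s = np.asarray(sigma, dtype=float)
    return np.array([1.0, s.mean(), np.mean(s * np.roll(s, 1))])

rng = random.Random(0)
train = [[rng.choice((-1, 1)) for _ in range(N)] for _ in range(40)]
X = np.array([correlations(c) for c in train])
y = X @ J_true                          # energies per site from the "true" ECIs

# Fit the ECIs; in practice regularized fits (ridge/LASSO) are used when the
# number of clusters is large relative to the training data.
J_fit, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(J_fit, J_true))       # True: the ECIs are recovered
```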
--- abstract: 'The time-dependent Dirac equation was solved for zero-impact-parameter bare U-U collisions in the monopole approximation using a mapped Fourier grid matrix representation. A total of 2048 states including bound, as well as positive- and negative-energy states for an $N=1024$ spatial grid were propagated to generate occupation amplitudes as a function of internuclear separation. From these amplitudes spectra were calculated for total inclusive positron and electron production, and also the correlated spectra for ($e^+,e^-$) pair production. These were analyzed as a function of nuclear sticking time in order to establish signatures of spontaneous pair creation, i.e., QED vacuum decay. Subcritical Fr-Fr and highly supercritical Db-Db collisions both at the Coulomb barrier were also studied and contrasted with the U-U results.' author: - Edward Ackad and Marko Horbatsch bibliography: - 'collisionUU.bib' title: 'Calculation of electron-positron production in supercritical uranium-uranium collisions near the Coulomb barrier' --- Introduction ============ In heavy-ion collisions, dynamical electron-positron pairs can be created due to the time-varying potentials [@belkacemheavyion]. For a sufficiently strong static potential, pairs can also be created spontaneously. Such *super-critical* potentials can be achieved in quasi-static adiabatic collision systems. The process of static pair creation is predicted by QED [@pra10324], but has not yet been demonstrated unambiguously by experiment. Therefore, it continues to be of interest as a test of non-perturbative QED with strong fields. The Dirac equation is a good starting point to describe relativistic electron motion. As the nuclear charge, $Z$, of a hydrogen-like system increases, the ground state energy decreases. Thus, for a sufficiently high $Z$ value, the ground-state becomes embedded in the lower continuum, the so-called Dirac sea. 
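The $Z$-dependence of the ground-state energy can be illustrated with the textbook point-nucleus result, $E_{1S} = m_{\rm e}c^2\sqrt{1-(Z\alpha)^2}$, which drops toward zero as $Z \to 1/\alpha \approx 137$. This sketch is an illustration of the trend only: for realistic extended nuclei the state reaches the lower continuum near $Z \approx 173$, which requires a finite-nuclear-size potential not captured by this formula.

```python
import math

ALPHA = 1.0 / 137.035999  # fine-structure constant

def dirac_1s_energy(Z):
    """1S energy in units of m_e c^2 for a point Coulomb potential."""
    za = Z * ALPHA
    if za >= 1.0:
        raise ValueError("point-nucleus formula breaks down for Z*alpha >= 1")
    return math.sqrt(1.0 - za ** 2)

for Z in (1, 92, 120, 137):
    print(f"Z = {Z:3d}: E/(m_e c^2) = {dirac_1s_energy(Z):.4f}")
```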
The ground state changes character from a bound state to a resonant state with a finite lifetime, and is called a supercritical resonance state [@ackad:022503]. An initially *vacant* supercritical state decays gradually into a pair consisting of a free positron and a bound electron. The bound electron will Pauli-block any other (spin-degenerate) pairs from being subsequently created. Currently, the only known realizable set-up capable of producing a supercritical ground state occurs in certain heavy-ion collision systems. When two bare nuclei collide near the Coulomb barrier, the vacant quasi-molecular ground state, i.e., the 1S$\sigma$ state, can become supercritical when the nuclei are sufficiently close. For the uranium-uranium system, the 1S$\sigma$ becomes supercritical at an internuclear separation of $R\lsim 36$ fm. As the nuclei continue their approach, the supercritical resonance experiences a shorter decay time. Thus, it is most probable that the supercritical resonance will decay at the closest approach of the nuclei. Rutherford trajectories result in collisions where the nuclei decelerate as they approach and come to a stop at closest approach before accelerating away. Since the nuclei are moving very slowly at closest approach, pairs created at this time are due to the intense static potential rather than due to the nuclear dynamics. Nuclear theory groups have predicted that if the nuclei are within touching range the combined Coulomb potential may remain static (“stick") for up to $T=10$ zeptoseconds ($10^{-21}$s) [@EurPJA.14.191; @zagrebaev:031602; @Sticking2]. Such a phenomenon would enhance the static pair creation signal without changing the dynamical pair creation background. In the present work, the time-dependent radial Dirac equation was solved in the monopole approximation for all initial states of a mapped Fourier grid matrix representation of the Hamiltonian [@me1].
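A quick consistency check (my own estimate, not from the paper): for a head-on collision of two bare nuclei, the distance of closest approach is $R_{\rm min} = Z_1 Z_2 e^2 / E_{\rm CM}$ with $e^2 \approx 1.44$ MeV fm, which for U-U at the quoted $E_{\rm CM}=740$ MeV lands well inside the $R \lesssim 36$ fm supercritical regime.

```python
E2_MEV_FM = 1.44  # e^2 / (4 pi eps_0) in MeV*fm

def closest_approach_fm(z1, z2, e_cm_mev):
    """Head-on Coulomb distance of closest approach, in fm."""
    return z1 * z2 * E2_MEV_FM / e_cm_mev

r_min = closest_approach_fm(92, 92, 740.0)
print(f"R_min = {r_min:.1f} fm (inside the supercritical range R < 36 fm)")
```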
Single-particle amplitudes were obtained for an initially bare uranium-uranium head-on (zero-impact-parameter) collision at $E_{\rm CM}=740$ MeV, i.e., at the Coulomb barrier. The amplitudes were used to calculate the total positron and electron spectra for different nuclear sticking times, $T$. Previous work [@PhysRevA.37.1449; @eackadconf2007] obtained the electron bound-state contribution to the total positron spectrum by following the time evolution of a finite number of bound-state vacancies. In the present work the complete correlated spectrum was calculated, by which we mean that the final ($e^+,e^-$) phase space contributions include both (nS $e^-+$ free $e^+$) and (free $e^-+$ free $e^+$) pairs. While the method is capable of handling any head-on collision system (symmetric or non-symmetric), the uranium system was chosen on the basis of planned experiments. Previous searches for supercritical resonances used partially ionized projectiles and solid targets [@PhysRevLett.51.2261; @PhysRevLett.56.444; @PhysLettB.245.153]. The ground state had a high probability of being occupied, thus damping the supercritical resonance decay signal significantly due to Pauli-blocking. Over the next decade the GSI-FAIR collaboration is planning to perform bare uranium-uranium merged-beam collisions. A search will be conducted for supercritical resonance decay and for the nuclear sticking effect. Therefore, the present work will aid these investigations by providing more complete spectrum calculations. The limitation to zero-impact-parameter collisions is caused by the computational complexity. While this implies that direct comparison with experiment will not be possible, we note that small-impact-parameter collisions will yield similar results [@PhysRevA.37.1449]. Theory ====== The information about the state of a collision system is contained in the single-particle Dirac amplitudes.
They are obtained by expanding the time-evolved state into a basis with direct interpretation, namely the target-centered single-ion basis. These amplitudes, $a_{\nu,k}$, can be used to obtain the particle creation spectra [@PhysRevA.45.6296; @pra10324]. The total electron production spectrum, $n_k$, and positron production spectrum, $\bar{n}_q$, where $k$ and $q$ label positive and negative (discretized) energy levels respectively, are given by $$\begin{aligned} \label{creat2} \langle n_k \rangle &=& \sum_{\nu<F}{|a_{\nu,k}|^2} \\ \label{creat1} \langle \bar{n}_q\rangle &=& \sum_{\nu>F}{|a_{\nu,q}|^2}.\end{aligned}$$ Here the coefficients are labeled such that $\nu$ represents the initial state and $F$ is the Fermi level [@PhysRevA.37.1449; @PhysRevA.45.6296]. Equations \[creat2\] and \[creat1\] contain sums over all the propagated initial states above (positrons) or below (electrons) the Fermi level, which is placed below the ground state separating the negative-energy states from the bound and positive-energy continuum states. Therefore, all initial positive-energy states (both bound and continuum) must be propagated through the collision to calculate the total positron spectrum, which is obtained from vacancy production in the initially fully occupied Dirac sea. The dominant contribution (and a first approximation) to this spectrum is obtained by the propagation of the initial 1S state only [@eddiethesis]. Müller *et al.* [@PhysRevA.37.1449] reported partial positron spectra for bare U-U collisions by propagating a number of initial bound-state vacancies. Propagating all the (discretized) states is accomplished in the present work by solving the time-independent Dirac equation (for many intermediate separations) using a matrix representation. 
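The two spectrum formulas above have a simple matrix reading, sketched here with toy numbers (a random unitary stands in for the real single-particle evolution operator; none of this is the authors' code). Rows index initial states $\nu$, columns index final states, and states below the Fermi level $F$ are the initially occupied Dirac sea.

```python
import numpy as np

rng = np.random.default_rng(1)
n_states = 6
F = 3  # indices 0..2: negative-energy (occupied); 3..5: bound/positive (empty)

# Random complex unitary as a stand-in for the evolution operator.
A, _ = np.linalg.qr(rng.normal(size=(n_states, n_states))
                    + 1j * rng.normal(size=(n_states, n_states)))
P = np.abs(A) ** 2  # |a_{nu,k}|^2

electron_spectrum = P[:F, F:].sum(axis=0)   # <n_k>   = sum_{nu<F} |a_{nu,k}|^2
positron_spectrum = P[F:, :F].sum(axis=0)   # <nbar_q> = sum_{nu>F} |a_{nu,q}|^2

# Unitarity forces equal total numbers of created electrons and positrons.
print(np.isclose(electron_spectrum.sum(), positron_spectrum.sum()))
```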
The wavefunction is first expanded into spinor spherical harmonics, $\chi_{\kappa,\mu}$, $$\Psi_{\mu}(r,\theta,\phi)=\sum_{\kappa}{ \left( \begin{array}{c} G_{\kappa}(r)\chi_{\kappa,\mu}(\theta,\phi) \\ iF_{\kappa}(r)\chi_{-\kappa,\mu}(\theta,\phi) \end{array} \right)} \quad ,$$ which are labeled by the relativistic angular quantum number $\kappa$ and the magnetic quantum number $\mu$ [@greiner]. The Dirac equation for the scaled radial functions, $f(r)=rF(r)$ and $g(r)=rG(r)$, then becomes ($\hbar=$ c $=$ m$_{\rm e}=1$), $$\begin{aligned} \label{syseqnG} \frac{df_{\kappa}}{dr} - \frac{\kappa }{r}f_{\kappa} & = & - \left(E -1 \right)g_{\kappa} + \sum_{\bar{\kappa}=\pm1}^{\pm\infty}{\langle \chi_{\kappa,\mu} \left| V(r,R) \right| \chi_{\bar{\kappa},\mu} \rangle }g_{\bar{\kappa}} \quad , \\ \label{syseqnG2} \frac{dg_{\kappa}}{dr} + \frac{\kappa}{r}g_{\kappa} & = & \left(E + 1 \right) f_{\kappa} - \sum_{\bar{\kappa}=\pm1}^{\pm\infty}{\langle \chi_{-\kappa,\mu} \left| V(r,R) \right| \chi_{-\bar{\kappa},\mu} \rangle }f_{\bar{\kappa}} \quad ,\end{aligned}$$ where
{ "pile_set_name": "ArXiv" }
null
null
--- abstract: 'In this paper, we propose a new weight initialization method called [*even initialization*]{} for wide and deep nonlinear neural networks with the ReLU activation function. We prove that no poor local minimum exists in the initial loss landscape in the wide and deep nonlinear neural network initialized by the even initialization method that we propose. Specifically, in the initial loss landscape of such a wide and deep ReLU neural network model, the following four statements hold true: 1) the loss function is non-convex and non-concave; 2) every local minimum is a global minimum; 3) every critical point that is not a global minimum is a saddle point; and 4) bad saddle points exist. We also show that the weight values initialized by the even initialization method are contained in those initialized by both the (often used) standard initialization and the He initialization methods.' author: - | Tohru Nitta\ National Institute of Advanced Industrial Science and Technology (AIST), Japan\ `tohru-nitta@aist.go.jp`\ title: Weight Initialization without Local Minima in Deep Nonlinear Neural Networks --- Introduction {#intro} ============ Hinton et al. (2006) proposed Deep Belief Networks with a learning algorithm that trains one layer at a time. Since that report, deep neural networks have attracted extensive attention because of their human-like intelligence achieved through learning and generalization. To date, deep neural networks have produced outstanding results in the fields of image processing and speech recognition (Mohamed et al., 2009; Seide et al., 2011; Taigman et al., 2014). Moreover, their scope of application has expanded, for example, to the field of machine translation (Sutskever et al., 2014). When using deep neural networks, finding a good initialization is extremely important for achieving good results. Heuristics have been used for weight initialization of neural networks for a long time. 
For example, a uniform distribution $ U [ -1/\sqrt{n}, 1/\sqrt{n} ] $ has often been used, where $n$ is the number of neurons in the preceding layer. Pre-training might be regarded as a kind of weight initialization method, which could avoid local minima and plateaus (Bengio et al., 2007). In recent years, however, several theoretical studies on weight initialization methods have made progress. Glorot and Bengio (2010) derived a theoretically sound uniform distribution $ U [\ - \sqrt{6} / \sqrt{n_i + n_{i+1} }, \sqrt{6} /\sqrt{n_i + n_{i+1} } \ ] $ for the weight initialization of deep neural networks with an activation function which is symmetric and linear at the origin. He et al. (2015) proposed a weight initialization method (called [*He initialization*]{} here) with a normal distribution (either $N ( 0, 2 / n_i )$ or $N ( 0, 2 / n_{i+1} )$ ) for neural networks with the ReLU (Rectified Linear Unit) activation function. The above two initialization methods are supported by experiments that monitor activations and back-propagated gradients during learning. On the other hand, local minima of deep neural networks have been investigated theoretically in recent years. Local minima cause plateaus, which have a strong negative influence on learning in deep neural networks. Dauphin et al. (2014) experimentally investigated the distribution of the critical points of a single-layer MLP and demonstrated that the possibility of the existence of local minima with large error (i.e., bad or poor local minima) is very small. Choromanska et al. provided a theoretical justification for the work of Dauphin et al. (2014) on a deep neural network with ReLU units using the spherical spin-glass model under seven assumptions (Choromanska, Henaff, Mathieu, Arous & LeCun, 2015). Choromanska et al. also suggested that discarding the seven unrealistic assumptions remains an important open problem (Choromanska, LeCun & Arous, 2015). 
Kawaguchi (2016) discarded most of these assumptions and proved the following four statements for a deep nonlinear neural network under only two of the seven assumptions: 1) the loss function is non-convex and non-concave, 2) every local minimum is a global minimum, 3) every critical point that is not a global minimum is a saddle point, and 4) bad saddle points exist. In this paper, we propose a new weight initialization method (called [*even initialization*]{}) for wide and deep nonlinear neural networks with the ReLU activation function: weights are initialized independently and identically according to a probability distribution whose probability density function is even on $[ -1/n, 1/n]$, where $n$ is the number of neurons in the layer. Using the research results presented by Kawaguchi (2016), we prove that no poor local minimum exists in the initial loss landscape of the wide and deep nonlinear neural network initialized by the even initialization method that we propose. We also show that the weight values initialized by the even initialization method are contained in those initialized by both the (often used) standard initialization and the He initialization methods. Weight Initialization without Local Minima {#initialize_weight} ========================================== In this section, we propose a weight initialization method for a wide and deep neural network model. There is no local minimum in the initial weight space of the wide and deep neural network initialized by this weight initialization method. The weight values initialized by this method are contained in those initialized by both the standard initialization and He initialization methods. In other words, there exists an interval of initial weights where there is no local minimum in both cases of the standard initialization and He initialization methods. 
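As a concrete sketch of the even initialization (choosing, for illustration, the uniform member of the family of admissible even densities; the layer sizes are arbitrary), one can draw the weights and verify the containment in the support of the standard initialization:

```python
import numpy as np

rng = np.random.default_rng(1)

def even_init(n_out, n_in, rng):
    """Even initialization: i.i.d. weights from a distribution whose density
    is even on [-1/n, 1/n], with n the number of neurons in the layer.
    The uniform distribution is one admissible choice."""
    bound = 1.0 / n_in
    return rng.uniform(-bound, bound, size=(n_out, n_in))

def standard_init(n_out, n_in, rng):
    """Often-used heuristic: U[-1/sqrt(n), 1/sqrt(n)]."""
    bound = 1.0 / np.sqrt(n_in)
    return rng.uniform(-bound, bound, size=(n_out, n_in))

n_in, n_out = 100, 50
W_even = even_init(n_out, n_in, rng)
W_std = standard_init(n_out, n_in, rng)   # wider support: |w| <= 1/sqrt(n)

# Containment: since 1/n <= 1/sqrt(n) for n >= 1, every weight drawn by the
# even initialization also lies inside the support of the standard one.
assert np.all(np.abs(W_even) <= 1.0 / np.sqrt(n_in))
```

The same argument applies to He initialization, whose normal distribution has full support and therefore also covers the interval $[-1/n, 1/n]$.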
Kawaguchi Model {#kawa-model} --------------- This subsection presents a description of the deep nonlinear neural network model analyzed by Kawaguchi (2016) (we call it the [*Kawaguchi model*]{} here). We will propose a new weight initialization method for the Kawaguchi model in subsection \[even\_weight\]. First, we consider the following neuron. The net input $U_n$ to a neuron $n$ is defined as $U_n = \sum_mW_{nm}I_m$, where $W_{nm}$ represents the weight connecting the neurons $n$ and $m$, and $I_m$ represents the input signal from the neuron $m$. Biases are omitted for the sake of simplicity. The output signal is defined as $\varphi(U_n)$, where $\varphi(u) {\stackrel{\rm def}{=}}\max(0, u)$ for any $u \in {\mbox{\boldmath $R$}}$ is called the [*Rectified Linear Unit*]{} ([*ReLU*]{}; ${\mbox{\boldmath $R$}}$ denotes the set of real numbers). The deep nonlinear neural network described in (Kawaguchi, 2016) consists of the neurons described above (Fig. \[fig1\]). The network has $H+2$ layers ($H$ is the number of hidden layers). The activation function $\psi$ of the neuron in the output layer is linear, i.e., $\psi(u)=u$ for any $u\in{\mbox{\boldmath $R$}}$. For any $0 \leq k \leq H+1$, let $d_k$ denote the number of neurons of the $k$-th layer, that is, the width of the $k$-th layer, where the 0-th layer is the input layer and the $(H+1)$-th layer is the output layer. Let $d_x = d_0$ and $d_y = d_{H+1}$ for simplicity. Let $({\mbox{\boldmath $X$}}, {\mbox{\boldmath $Y$}})$ be the training data, where ${\mbox{\boldmath $X$}}\in {\mbox{\boldmath $R$}}^{d_x \times m}$ and ${\mbox{\boldmath $Y$}}\in {\mbox{\boldmath $R$}}^{d_y \times m}$ and where $m$ denotes the number of training patterns. 
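A minimal sketch of this neuron model and the resulting bias-free ReLU network with a linear output layer (the widths and the uniform even-initialization draw are illustrative choices of ours, not values from the paper):

```python
import numpy as np

def relu(u):
    """phi(u) = max(0, u), applied elementwise."""
    return np.maximum(0.0, u)

def forward(weights, X):
    """Bias-free deep ReLU network; the output activation psi is the identity."""
    A = X
    for W in weights[:-1]:
        A = relu(W @ A)          # net input U = W I, hidden output phi(U)
    return weights[-1] @ A       # linear output layer

rng = np.random.default_rng(2)
d = [4, 8, 8, 3]                 # d_0 = d_x, two hidden widths, d_{H+1} = d_y
weights = [rng.uniform(-1.0 / d[k], 1.0 / d[k], size=(d[k + 1], d[k]))
           for k in range(len(d) - 1)]

X = rng.normal(size=(d[0], 5))   # five input training patterns as columns
Y_hat = forward(weights, X)
assert Y_hat.shape == (d[-1], 5)
```

Stacking the patterns as columns of $X$ matches the $d_x \times m$ convention used for the training data above.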
We can rewrite the $m$ training data as $\{ ({\mbox{\boldmath $X$}}_i, {\mbox{\boldmath $Y$}}_i) \}_{i=1}^m$ where ${\mbox{\boldmath $X$}}_i \in {\mbox{\boldmath $R$}}^{d_x}$ is the $i$-th input training pattern and ${\mbox{\boldmath $Y$}}_i \in {\mbox{\boldmath $R$}}^{d_y}$ is the $i$-th output training pattern. Let ${\mbox{\boldmath $W$}}_k$ denote the weight matrix between the $(k-1)$-th layer and the $k$-th layer for any $1 \leq k \leq H+1$. Let ${\mbox{\boldmath $\Theta$}}$ denote the one-dimensional vector which consists of all the weight parameters of the deep nonlinear neural network. Kawaguchi specifically examined a path from an input neuron to an output neuron of the deep nonlinear neural network (Fig. \[fig2\]), and expressed the actual output of output neuron $j$ of the output layer of the deep nonlinear neural network for the $i$-th input training pattern ${\mbox{\boldmath $X$}}_i \in {\mbox{\boldmath $R$}}^{d_x}$ as $$\begin{aligned} \label{eqn2-1} \hat{{\mbox{\boldmath $Y$}}}_i({\mbox{\boldmath $\Theta$}}, {\mbox{\boldmath $X$}}_i)_j = q\sum_{p=1}^\Psi [{\mbox{\boldmath
{ "pile_set_name": "ArXiv" }
null
null
--- author: - 'P. A. Price, D. W. Fox, S. R. Kulkarni, B. A. Peterson, B. P. Schmidt, A. M. Soderberg, S. A. Yost, E. Berger, S. G. Djorgovski, D. A. Frail, F. A. Harrison, R. Sari, A. W. Blain, and S. C. Chapman.' title: '**Discovery of the Bright Afterglow of the Nearby Gamma-Ray Burst of 29 March 2003**' --- On 2003 March 29, 11$^{\rm h}$37$^{\rm m}$14$^{\rm s}$.67 UT, GRB 030329 triggered all three instruments on board the High Energy Transient Explorer II ([*HETE-II*]{}). About 1.4 hours later, the [*HETE-II*]{} team disseminated via the GRB Coordinates Network (GCN) the 4-arcminute diameter localisation[@vcd+03] by the Soft X-ray Camera (SXC). We immediately observed the error-circle with the Wide Field Imager (WFI) on the 40-inch telescope at Siding Spring Observatory under inclement conditions (nearby thunderstorms). Nevertheless, we were able to identify a bright source not present on the Digitised Sky Survey (Figure \[fig:discovery\]) and rapidly communicated the discovery to the community [@pp03]. The same source was independently detected by the RIKEN automated telescope[@t03]. With a magnitude of 12.6 in the $R$ band at 1.5 hours, the optical afterglow of GRB 030329 is unusually bright. At the same epoch, the well-studied GRB 021004 was $R \sim 16\,$mag[@fyk+03], and the famous GRB 990123 was $R\sim 17\,$mag[@abb+99]. The brightness of this afterglow triggered observations by over 65 optical telescopes around the world, ranging from sub-metre apertures to the Keck I 10-metre telescope. Unprecedentedly bright emission at radio[@bsf03], millimetre[@ksn03], sub-millimetre[@hmt+03], and X-ray[@ms03] wavelengths was also reported (Figure \[fig:broadband\]). Greiner et al.[@gpe+03] made spectroscopic observations with the Very Large Telescope (VLT) in Chile approximately 16 hours after the GRB and, based on absorption as well as emission lines, announced a redshift of $z=0.1685$. 
From Keck spectroscopic observations obtained 8 hours later (Figure \[fig:spectrum\]) we confirm the VLT redshift, finding $z=0.169\pm 0.001$. We note that the optical afterglow of GRB 030329 was, at 1.5 hours, approximately the same brightness as the nearest quasar, 3C 273 ($z=0.158$); it is remarkable that such a large difference in the mass of the engine can produce an optical source with the same luminosity. With a duration of about 25 s and a multi-pulse profile[@vcd+03], GRB 030329 is typical of the long duration class of GRBs. The fluence of GRB 030329, as detected by the Konus experiment [@gmp+03], of $1.6\times10^{-4}$ erg cm$^{-2}$ (in the energy range 15-5000 keV) places this burst in the top 1% of GRBs. At a redshift of 0.169, GRB 030329 is the nearest of the cosmological GRBs studied in the 6-year history of afterglow research. Assuming a Lambda cosmology with $H_0 = 71$ km/s/Mpc, $\Omega_M = 0.27$ and $\Omega_\Lambda=0.73$, the angular-diameter distance is $d_A=589\,$Mpc and the luminosity distance is $d_L=805\,$Mpc. The isotropic $\gamma$-ray energy release, $E_{\gamma,\rm iso} \sim 1.3\times 10^{52}\,$erg, is typical of cosmological GRBs[@fks+01]. Likewise, the optical and radio luminosities of the afterglow of GRB 030329 are not markedly different from those of cosmological GRBs. In particular, the extrapolated isotropic X-ray luminosity at $t=10$ hr is $L_{X,\rm iso} \sim 6.4\times 10^{45}\,$erg s$^{-1}$, not distinctly different from that of other X-ray afterglows (e.g. ref.). Nonetheless, two peculiarities about the afterglow of GRB 030329 are worth noting. First, the optical emission steepens from $f(t) \propto t^{-\alpha}$ with $\alpha_1 = 0.873 \pm 0.025$ to $\alpha_2 = 1.97 \pm 0.12$ at epoch $t_* = 0.481 \pm 0.033\,$d (Figure \[fig:lightcurve\]; see also ref. ). This change in $\alpha$ is too large to be due to the passage of a cooling break ($\Delta\alpha=1/4$; ref. ) through the optical bands. 
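The quoted distances follow directly from the stated cosmology. A short numerical check (a flat $\Lambda$CDM comoving-distance integral; the trapezoidal quadrature and the Mpc-to-cm conversion are our own ingredients, and no k-correction is modeled):

```python
import numpy as np

# Flat Lambda-CDM, as assumed in the text: H0 = 71 km/s/Mpc,
# Omega_M = 0.27, Omega_Lambda = 0.73, and z = 0.169 for GRB 030329.
C_KM_S = 299792.458
MPC_CM = 3.0857e24
H0, OM, OL = 71.0, 0.27, 0.73
z = 0.169

def E(zp):
    """Dimensionless Hubble rate H(z)/H0 for a flat Lambda-CDM model."""
    return np.sqrt(OM * (1.0 + zp) ** 3 + OL)

# Comoving distance d_C = (c/H0) * int_0^z dz'/E(z') (trapezoidal rule).
zp = np.linspace(0.0, z, 20001)
f = 1.0 / E(zp)
d_C = (C_KM_S / H0) * np.sum(0.5 * (f[:-1] + f[1:]) * np.diff(zp))

d_L = (1.0 + z) * d_C      # luminosity distance, ~805 Mpc
d_A = d_C / (1.0 + z)      # angular-diameter distance, ~589 Mpc

# Isotropic-equivalent energy from the Konus fluence of 1.6e-4 erg/cm^2;
# this simple estimate lands at order 10^52 erg, the scale quoted in the
# text (the exact quoted value also depends on the k-correction).
S = 1.6e-4
E_iso = 4.0 * np.pi * (d_L * MPC_CM) ** 2 * S / (1.0 + z)
```

Running this reproduces $d_A \approx 589$ Mpc and $d_L \approx 805$ Mpc, matching the values given above.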
On the other hand, $\alpha\sim 2$ is typically seen in afterglows following the so-called “jet-break” epoch ($t_j$). Before this epoch, the explosion can be regarded as isotropic, and following this epoch the true collimated geometry is manifested. Such an early jet break would imply a substantially lower energy release than $E_{\gamma,\rm iso}$. If $t_*\sim t_j$, then using the formalism and adopting the density and $\gamma$-ray efficiency normalisations of Frail et al.[@fks+01] we estimate the true $\gamma$-ray energy release to be $E_\gamma \sim 3\times 10^{49}$ erg. This estimate is $4\sigma$ lower than the “standard energy” of $5\times 10^{50}$ erg found by Frail et al.[@fks+01] The geometry-corrected X-ray luminosity[@bkf03] would also be the lowest of all X-ray afterglows. If the above interpretation is correct, then GRB 030329 may be the missing link between cosmological GRBs and the peculiar GRB 980425[@gvv+98] ($E_{\gamma,\rm iso}\sim 10^{48}\,$erg) which has been associated with SN 1998bw at $z=0.0085$. Second, the decay of the optical afterglow is marked by bumps and wiggles (e.g. ref. ). These features could be due to inhomogeneities in the circumburst medium or additional shells of ejecta from the central engine. In either case, the bumps and wiggles complicate the simple jet interpretation offered above. We note that if the GRB had been more distant, and hence the afterglow fainter, then the break in the light curve would likely have been interpreted as the jet break without question. The proximity of GRB 030329 offers us several new opportunities to understand the origin of GRBs. Red bumps in the light curve have been seen in several more-distant ($z\sim 0.3$ to 1) GRB afterglows (e.g. refs ,) and interpreted as underlying SNe that caused the GRBs. While these red bumps appeared to be consistent with a SN light curve, prior to GRB 030329 it had not yet been unambiguously demonstrated that they were indeed SNe. 
As this paper was being written, a clear spectroscopic signature for an underlying SN was identified in the optical afterglow of GRB 030329[@smg+03]. Our own spectroscopy at Palomar and Keck confirms the presence of these SN features. This demonstrates once and for all that the progenitors of at least some GRBs are massive stars that explode as SNe. However, there remain a number of issues still to be resolved, relating to the physics of GRB afterglows and the environment around the progenitor. The “fireball” model of GRB afterglows predicts a broad-band spectrum from centimetre wavelengths to X-rays that evolves as the GRB ejecta expand and sweep up the surrounding medium[@spn98]. Testing this model in detail has in the past been difficult, primarily due to both low signal-to-noise and interstellar scintillation at the longer wavelengths, from which come the majority of spectral and temporal coverage of the afterglow evolution (e.g. refs. , ). GRB 030329, with bright emission at all wavelengths (Figure \[fig:broadband\]) and limited scintillation (due to the larger apparent size), will allow astronomers to test the predictions of the fireball model with unprecedented precision through the time evolution of the broad-band spectrum, the angular size of the fireball, and its proper motion (if any). It has long been predicted that if the progenitors of GRBs are massive stars, the circumburst medium should be rich and inhomogeneous[@cl99], but it has been difficult to find evidence for this. However, for GRB 030329 it should be possible to trace the distribution of circumburst material and determine the environment of the progenitor.
{ "pile_set_name": "ArXiv" }
null
null
--- address: - '$^1$Physics Department, Northeastern University, Boston, Massachusetts 02115' - '$^2$Department of Physics, Columbia University, New York, New York 10027' - '$^3$Physics Department, City College of the City University of New York, New York, New York 10031' author: - 'S. V. Kravchenko$^1$, D. Simonian$^2$, and M. P. Sarachik$^3$' title: 'Comment on “Charged impurity scattering limited low temperature resistivity of low density silicon inversion layers” (Das Sarma and Hwang, cond-mat/9812216)' --- [2]{} In a recent preprint, Das Sarma and Hwang[@dassarma98] propose an explanation for the sharp decrease in the $B=0$ resistivity at low temperatures which has been attributed to a transition to an unexpected conducting phase in dilute high-mobility two-dimensional systems (see Refs.\[1-4\] in [@dassarma98]). The anomalous transport observed in these experiments is ascribed in Ref.[@dassarma98] to temperature-dependent screening and energy averaging of the scattering time. The model yields curves that are qualitatively similar to those observed experimentally: the resistivity has a maximum at a temperature $\sim E_F/k_B$ and decreases at lower temperatures by a factor of 3 to 10. The anomalous response to a magnetic field ([*e.g.*]{}, the increase in low-temperature resistivity by orders of magnitude [@simonian97]), is not considered in Ref. [@dassarma98]. Two main assumptions are made in the proposed model[@dassarma98]: (1) the transport behavior is dominated by charged impurity scattering centers with a density $N_i$, and (2) the metal-insulator transition, which occurs when the electron density ($n_s$) equals a critical density ($n_c$), is due to the freeze-out of $n_c$ carriers so that the net free carrier density is given by $n\equiv n_s-n_c$ at $T=0$. The authors do not specify a mechanism for this carrier freeze-out and simply accept it as an experimental fact. 
Although not included in their calculation, Das Sarma and Hwang also note that their model can be extended to include a thermally activated contribution to the density of “free” electrons. In this Comment, we examine whether the available experimental data support the model of Das Sarma and Hwang. (i) Comparison with the experimental data (see Fig. 1 of Ref. [@dassarma98]) is made for an assumed density of charged impurities of $3.5\times10^{9}$ cm$^{-2}$, a value that is too small. In an earlier publication, Klapwijk and Das Sarma [@klapwijk98] explicitly stated that the number of ionized impurities is “$3\times10^{10}$ cm$^{-2}$ for high-mobility MOSFET’s used for the 2D MIT experiments. There is very little room to vary this number by a factor of two”. Without reference to this earlier statement, the authors now use a value for $N_i$ that is one order of magnitude smaller [@sample]. (ii) According to the proposed model, the number of “free” carriers at zero temperature is zero at the “critical” carrier density ($n_s=n_c$) and it is very small ($n=n_s-n_c<<n_c$) near the transition. In this range, the transport must be dominated by thermally activated carriers, which decrease exponentially in number as the temperature is reduced. It is known from experiment that at low temperatures the resistance is independent of temperature[@krav; @hanein98] at $n_c$ (the separatrix between the two phases) and depends weakly on temperature for nearby electron densities. In order to give rise to a finite conductivity $\sim e^2/h$ at the separatrix, an exponentially small number of carriers must have an exponentially large mobility, a circumstance that is rather improbable. (iii) Recent measurements of the Hall coefficient and Shubnikov-de Haas oscillations yield electron densities that are independent of temperature and equal to $n_s$ rather than a density $n=n_s-n_c$ of “free” electrons [@hanein98; @hall]. 
This implies that [*all*]{} the electrons contribute to the Hall conductance, including those that are frozen-out or localized. Although this is known to occur in quantum systems such as Hall insulators, it is not clear why it can hold within the simple classical model proposed by Das Sarma and Hwang. [10]{} S. Das Sarma and E. H. Hwang, preprint cond-mat/9812216. D. Simonian, S. V. Kravchenko, M. P. Sarachik, and V. M. Pudalov, Phys. Rev. Lett. [**79**]{}, 2304 (1997); V. M. Pudalov, G. Brunthaler, A. Prinz, and G. Bauer, JETP Lett. [**65**]{}, 932 (1997). T. M. Klapwijk and S. Das Sarma, preprint cond-mat/9810349. We note that although sample Si-15 had a particularly high peak mobility at 4.2 K (almost twice that of other samples), this cannot account for a reduction in $N_i$ by a factor of 10. The density of charged traps $N_i$ in samples of comparable quality was estimated to be $1.5\times10^{10}$ [@pudalov93]. V. M. Pudalov, M. D’Iorio, S. V. Kravchenko, and J. W. Campbell, Phys. Rev. Lett. [**70**]{}, 1866 (1993). S. V. Kravchenko, W. E. Mason, G. E. Bowker, J. E. Furneaux, V. M. Pudalov, and M. D’Iorio, Phys. Rev. B [**51**]{}, 7038 (1995). Y. Hanein, D. Shahar, J. Yoon, C. C. Li, D. C. Tsui, and H. Shtrikman, Phys. Rev. B [**58**]{}, R7520 (1998). D. Simonian, K. Mertes, M. P. Sarachik, S. V. Kravchenko, and T. M. Klapwijk, in preparation.
{ "pile_set_name": "ArXiv" }
null
null
--- abstract: 'We generalize and apply the key elements of the Kibble-Zurek framework of nonequilibrium phase transitions to study the non-equilibrium critical cumulants near the QCD critical point. We demonstrate the off-equilibrium critical cumulants are expressible as universal scaling functions. We discuss how to use off-equilibrium scaling to provide powerful model-independent guidance in searches for the QCD critical point.' address: - 'Department of Physics, Brookhaven National Laboratory, Upton, New York 11973-5000,' - 'Center for Theoretical Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.' author: - Swagato Mukherjee - Raju Venugopalan - 'Yi Yin[^1]' title: 'Universality regained: Kibble-Zurek dynamics, off-equilibrium scaling and the search for the QCD critical point' --- QCD critical point ,critical fluctuations ,non-Gaussian cumulants ,critical slowing down Introduction {#intro} ============ The searches for the conjectured QCD critical point in heavy-ion collision experiments have attracted much theoretical and experimental effort. The central question is to identify telltale experimental signals of the presence of the QCD critical point. In particular, universality arguments can be employed to locate the QCD critical point. We begin our discussion with the static properties of QCD critical point, which is argued to lie in the static universality class of the 3d Ising model. The equilibrium cumulants of the critical mode satisfy the static scaling relation: $$\begin{aligned} \label{eq:kappa-eq} \kappa^{\rm eq}_{n} \sim \xi_{\rm eq}^{\frac{-\beta+(2-\alpha-\beta)\,(n-1)}{\nu}}\, f^{\rm eq}_{n}(\theta)~\sim \xi_{\rm eq}^{\frac{-1+5\,(n-1)}{2}}\, f^{\rm eq}_{n}(\theta)\, , \qquad n=1,2,3,4\ldots\end{aligned}$$ where $\xi_{\rm eq}$ is the correlation length, which will grow universally near the critical point and $\theta$ is the scaling variable. $\alpha, \beta, \nu$ are standard critical exponents. 
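The reduction of the general exponent in the scaling relation to the simplified form $(-1+5(n-1))/2$ can be checked with exact rational arithmetic, using the approximate 3d Ising values $\alpha \approx 0$, $\beta \approx 1/3$, $\nu \approx 2/3$ adopted below:

```python
from fractions import Fraction

# Approximate 3d Ising critical exponents, as used in the text.
alpha, beta, nu = Fraction(0), Fraction(1, 3), Fraction(2, 3)

def exponent(n):
    """Power of xi_eq in kappa_n^eq: (-beta + (2 - alpha - beta)(n - 1)) / nu."""
    return (-beta + (2 - alpha - beta) * (n - 1)) / nu

# The general exponent reduces exactly to (-1 + 5(n - 1))/2 for these values,
# giving xi^2, xi^4.5 and xi^7 for the n = 2, 3, 4 cumulants.
for n in range(1, 8):
    assert exponent(n) == Fraction(-1 + 5 * (n - 1), 2)
assert [exponent(n) for n in (2, 3, 4)] == [2, Fraction(9, 2), 7]
```

The rapid growth of the exponent with $n$ is what makes the higher cumulants the more sensitive probes of the correlation length.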
Here and hereafter, we will use the approximate values taken from the 3d Ising model: $\alpha\approx 0, \beta\approx 1/3, \nu\approx 2/3$. The static scaling relation indicates that non-Gaussian cumulants are more sensitive to the growth of the correlation length (e.g. $\kappa^{\rm eq}_{3}\sim \xi^{4.5}_{\rm eq}, \kappa^{\rm eq}_{4}\sim \xi^{7}_{\rm eq}$, while the Gaussian cumulant $\kappa_{2}\sim \xi^{2}_{\rm eq}$). Moreover, for non-Gaussian cumulants, the universal scaling functions $f^{\rm eq}_{n}(\theta)$ can be either positive or negative and depend on $\theta$ only. Since critical cumulants contribute to the corresponding cumulants of net baryon number multiplicity, the enhanced non-Gaussian fluctuations (of baryon number multiplicities), as well as their change in sign and the associated non-monotonicity as a function of beam energy $\sqrt{s}$, can signal the presence of the critical point in the QCD phase diagram (see Ref. [@Luo:2017faz] for a recent review of experimental measurements). However, off-equilibrium effects upset the naive expectation based on the static properties of the critical system. The relaxation time of critical modes, $\tau_{\rm eff}\sim \xi^{z}$, grows with the universal dynamical critical exponent $z\approx 3$ for the QCD critical point [@Son:2004iv]. Consequently, critical fluctuations inescapably fall out of equilibrium. In Ref. [@Mukherjee:2015swa], we studied the real-time evolution of non-Gaussian cumulants in the QCD critical regime by significantly extending prior work on Gaussian fluctuations [@Berdnikov:1999ph]. We found that off-equilibrium critical cumulants can differ in both magnitude and sign from equilibrium expectations. The resulting off-equilibrium evolution of critical fluctuations depends on a number of non-universal inputs, such as the details of trajectories in the QCD phase diagram and the mapping relation between the QCD phase diagram and the 3d Ising model. (See also C. Herold’s, L. Jiang’s, and M. 
Nahrgang’s talks in this Quark Matter and the plenary talk in the last Quark Matter [@Nahrgang:2016ayr] for related theoretical efforts.) Is critical universality completely lost in the complexity of off-equilibrium evolution? Are there any universal features of the off-equilibrium evolution of critical cumulants? In Ref. [@Mukherjee:2016kyu], we attempted to answer those questions based on the key ideas of the Kibble-Zurek framework of non-equilibrium phase transitions. We demonstrated the emergence of off-equilibrium scaling and universality. Such off-equilibrium scaling and universality open new possibilities to identify experimental signatures of the critical point. The purpose of this proceeding is to illustrate the key idea of Ref. [@Mukherjee:2016kyu] and discuss how to apply this idea to the search for the QCD critical point. Kibble-Zurek dynamics: the basic idea and an illustrative example {#idea} ================================================================= The basic idea of Kibble-Zurek dynamics was pioneered by Kibble in a cosmological setting and was generalized to describe similar problems in condensed-matter systems (see Ref. [@Zurek] for a review). Kibble-Zurek dynamics is now considered to be the paradigmatic framework for describing critical behavior out of equilibrium. Its applications cover an enormous variety of phenomena over a wide range of scales, from low-temperature physics to astrophysics. We will use this idea to explore the dynamics of QCD matter near the critical point. ![image](KZxidef.pdf){width="32.00000%"} ![image](BRsol.pdf){width="32.00000%"} ![image](scalingBR.pdf){width="32.00000%"} The dramatic change in the behavior of the quench time relative to the relaxation time is at the heart of the KZ dynamics. As the system approaches the critical point, the relaxation time grows due to critical slowing down (cf. dashed curves in  Fig. \[fig:KZ-idea\] (left)). 
In contrast, the quench time $\tau_{\rm quench}$, defined as the timescale over which the correlation length changes appreciably, becomes shorter and shorter due to the rapid growth of the correlation length (cf. Fig. \[fig:KZ-idea\] (left)). Therefore, at some proper time, say $\tau^{*}$, $\tau_{\rm eff}$ is equal to $\tau_{\rm quench}$. After $\tau^{*}$, the system evolves too quickly (i.e. $\tau_{\rm quench} < \tau_{\rm eff}$) for the correlation length to follow the growth of the equilibrium correlation length. It is then natural to define an emergent length scale, the Kibble-Zurek length $l_{\rm KZ}$, which is the value of the equilibrium correlation length at $\tau^{*}$. In other words, $l_{\rm KZ}$ is the maximum correlation length the system can develop when passing through the critical point. Likewise, one can introduce an emergent time scale, referred to as the Kibble-Zurek time, $\tau_{\rm KZ}=\tau_{\rm eff}(\tau^{*})$, which is the relaxation time at $\tau^{*}$. If the critical fluctuations are in thermal equilibrium, their magnitudes are completely determined by the equilibrium correlation length. For off-equilibrium evolution, we expect that the magnitude of off-equilibrium fluctuations is controlled by $l_{\rm KZ}$ and that their temporal evolution is characterized by $\tau_{\rm KZ}$. The emergence of such a characteristic length scale $l_{\rm KZ}$ and time scale $\tau_{\rm KZ}$ indicates that cumulants will scale as functions of them. As an illustrative example, we revisited the study of Ref. [@Berdnikov:1999ph]. Fig. \[fig:KZ-idea\] (middle) plots the evolution of the off-equilibrium Gaussian cumulant $\kappa_{2}$, obtained by solving the rate equation proposed in Ref. [@Berdnikov:1999ph] with different choices of non-universal inputs. Those off-equilibrium cumulants differ from the equilibrium one and look different from each other at first glance. 
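The freeze-out construction above is easy to realize in a toy model. The sketch below is our own illustration, not the heavy-ion trajectory of the text: a linear quench $\epsilon(\tau) = -\tau/\tau_{Q}$ toward the critical point with $\xi_{\rm eq} = \epsilon^{-\nu}$. It locates the crossing $\tau_{\rm eff} = \tau_{\rm quench}$ numerically and recovers the Kibble-Zurek scaling $l_{\rm KZ} \sim \tau_{Q}^{\nu/(1+\nu z)}$:

```python
import numpy as np

nu, z = 2.0 / 3.0, 3.0              # static exponent nu and dynamical exponent z

def kz_length(tau_Q, n_grid=200001):
    """Freeze-out of a linear quench eps(tau) = -tau/tau_Q approaching eps = 0."""
    eps = np.logspace(-8.0, 0.0, n_grid)    # distance from the critical point
    xi = eps ** (-nu)                       # equilibrium correlation length
    tau_eff = xi ** z                       # relaxation time ~ xi^z
    tau_quench = (tau_Q / nu) * eps         # xi / |d xi/d tau| for this quench
    i = np.argmin(np.abs(np.log(tau_eff) - np.log(tau_quench)))
    return xi[i]                            # l_KZ: xi_eq where tau_eff = tau_quench

tau_Qs = np.array([1e2, 1e3, 1e4, 1e5])
l_kz = np.array([kz_length(t) for t in tau_Qs])

# Slower quenches develop longer correlation lengths, with the KZ scaling
# l_KZ ~ tau_Q^{nu/(1 + nu z)} = tau_Q^{2/9} for these exponents.
slopes = np.diff(np.log(l_kz)) / np.diff(np.log(tau_Qs))
assert np.allclose(slopes, nu / (1.0 + nu * z), atol=1e-2)
```

The fitted slope $2/9$ follows analytically from equating $\epsilon^{-\nu z}$ with $(\tau_{Q}/\nu)\,\epsilon$ at the freeze-out point.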
What happens if we rescale those Gaussian cumulants by $l^{2}_{\rm KZ}$ and present their temporal evolution as a function of the rescaled time $\tau/\tau_{\rm KZ}$? As shown in Fig. \[fig:KZ-idea\] (right), the rescaled evolutions now look nearly identical, which confirms the expectation from the KZ scaling. Off-equilibrium scaling of critical cumulants {#result} ============================================= In Ref. [@Mukherjee:2016kyu], we proposed the following scaling hypothesis for the off-equilibrium evolution of critical cumulants: $$\begin{aligned} \label{eq:scaling} \kappa_{n}\left(\tau;\Gamma\right) \sim l^{\frac{-1+5\,(n-1)}{2}}_{\rm KZ}\, \bar{f}_{n}\left(\,\
{ "pile_set_name": "ArXiv" }
null
null
--- abstract: '[Quantum liquids, in which an effective Lorentzian metric and thus some kind of gravity gradually arise in the low-energy corner, are the objects where the problems related to the quantum vacuum can be investigated in detail. In particular, they provide the possible solution of the cosmological constant problem: why the vacuum energy is by 120 orders of magnitude smaller than the estimation from the relativistic quantum field theory. The almost complete cancellation of the cosmological constant does not require any fine tuning and comes from the fundamental “trans-Planckian” physics of quantum liquids. The remaining vacuum energy is generated by the perturbation of quantum vacuum caused by matter (quasiparticles), curvature, and other possible sources, such as smooth component – the quintessence. This provides the possible solution of another cosmological constant problem: why the present cosmological constant is on the order of the present matter density of the Universe. We discuss here some properties of the quantum vacuum in quantum liquids: the vacuum energy under different conditions; excitations above the vacuum state and the effective acoustic metric for them provided by the motion of the vacuum; Casimir effect, etc. ]{}' author: - | G.E. Volovik\ Low Temperature Laboratory, Helsinki University of Technology\ P.O.Box 2200, FIN-02015 HUT, Finland\ and\ L.D. Landau Institute for Theoretical Physics,\ Kosygin Str. 2, 117940 Moscow, Russia\ title: Vacuum in quantum liquids and in general relativity --- 2 truecm Introduction. ============= Quantum liquids, such as $^3$He and $^4$He, represent the systems of strongly interacting and strongly correlated atoms, $^3$He and $^4$He atoms correspondingly. Even in its ground state, such liquid is a rather complicated object, whose many body physics requires extensive numerical simulations. 
However, when the energy scale is reduced below about 1 K, we can no longer resolve the motion of individual atoms in the liquid. The smaller the energy, the better the liquid is described in terms of the collective modes and the dilute gas of particle-like excitations – quasiparticles. This is the Landau picture of the low-energy degrees of freedom in quantum Bose and Fermi liquids. The dynamics of collective modes and quasiparticles is described in terms of what we now call ‘the effective theory’. In superfluid $^4$He this effective theory, which incorporates the collective motion of the ground state – the quantum vacuum – and the dynamics of quasiparticles in the background of the moving vacuum, is known as two-fluid hydrodynamics [@Khalatnikov]. Such an effective theory does not depend on the details of the microscopic (atomic) structure of the quantum liquid. The type of the effective theory is determined by the symmetry and topology of the ground state, and the role of the microscopic physics is only to choose between different universality classes on the basis of the minimum energy consideration. Once the universality class is determined, the low-energy properties of the condensed matter system are completely described by the effective theory, and the information on the microscopic (trans-Planckian) physics is lost [@LaughlinPines]. In some condensed matter systems the universality class produces an effective theory that closely resembles relativistic quantum field theory. For example, the collective fermionic and bosonic modes in superfluid $^3$He-A reproduce chiral fermions, gauge fields and, in many respects, even the gravitational field [@PhysRepRev]. This allows us to use quantum liquids to investigate properties related to the quantum vacuum in relativistic quantum field theories, including the theory of gravitation. 
The main advantage of the quantum liquids is that in principle we know their vacuum structure at any relevant scale, including the interatomic distance, which plays the part of one of the Planck length scales in the hierarchy of scales. Thus the quantum liquids can provide possible routes from our present low-energy corner of the effective theory to the “microscopic” physics at Planckian and trans-Planckian energies. One of the possible routes is related to the conserved number of atoms $N$ in the quantum liquid. The quantum vacuum of the quantum liquids is constructed from discrete elements, the bare atoms. The interaction and zero point motion of these atoms compete and provide an equilibrium ground state of the ensemble of atoms, which can exist even in the absence of external pressure. The relevant energy and the pressure in this equilibrium ground state are exactly zero in the absence of interaction with the environment. Translating this to the language of general relativity, one obtains that the cosmological constant in the effective theory of gravity in the quantum liquid is exactly zero without any fine tuning. The equilibrium quantum vacuum is not gravitating. This route shows a possible solution of the cosmological constant problem: why the estimate of the vacuum energy from relativistic quantum field theory gives a value that is 120 orders of magnitude higher than its experimental upper limit. In quantum liquids there is a similar discrepancy between the exact zero result for the vacuum energy and the naive estimate within the effective theory. We shall also discuss here how different perturbations of the vacuum in quantum liquids lead to a small nonzero energy of the quantum vacuum. 
Translating this to the language of general relativity, one obtains that in each epoch the vacuum energy density must be either of order of the matter density of the Universe, or of its curvature, or of the energy density of the smooth component – the quintessence. Here we mostly discuss the Bose ensemble of atoms: a weakly interacting Bose gas, which experiences the phenomenon of Bose condensation, and a real Bose liquid – superfluid $^4$He. The consideration of the Bose gas allows us to use the microscopic theory to derive the ground state energy of the quantum system of interacting atoms and the excitations above the vacuum state – quasiparticles. We also discuss the main differences between the bare atoms, which comprise the vacuum state, and the quasiparticles, which serve as elementary particles in the effective quantum field theory. Another consequence of the discrete number of the elements comprising the vacuum state, which we consider, is related to the Casimir effects. The discreteness of the vacuum – the finite-$N$ effect – leads to mesoscopic Casimir forces, which cannot be derived within the effective theory. For these purposes we consider the Fermi ensembles of atoms: the Fermi gas and the Fermi liquid. Einstein gravity and cosmological constant problem ================================================== Einstein action --------------- Einstein’s geometrical theory of gravity consists of two main elements [@Damour]. \(1) Gravity is related to the curvature of space-time in which particles move along geodesic curves in the absence of non-gravitational forces. The geometry of space-time is described by the metric $g_{\mu\nu}$, which is the dynamical field of gravity. 
The action for matter in the presence of a gravitational field $S_{\rm M}$, which simultaneously describes the coupling between gravity and all other fields (the matter fields), is obtained from the special relativity action for the matter fields by replacing everywhere the flat Minkowski metric by the dynamical metric $g_{\mu\nu}$ and the partial derivative by the $g$-covariant derivative. This follows from the principle that the equations of motion do not depend on the choice of the coordinate system (the so-called general covariance). This also means that motion in a non-inertial frame can be described in the same manner as motion in some gravitational field – this is the equivalence principle. Another consequence of the equivalence principle is that the space-time geometry is the same for all particles: gravity is universal. \(2) The dynamics of the gravitational field is determined by adding the action functional $S_{\rm G}$ for $g_{\mu\nu}$, which describes propagation and self-interaction of the gravitational field: $$S=S_{\rm G}+S_{\rm M}~. \label{GravitationalEinsteinAction}$$ The general covariance requires that $S_{\rm G}$ is a functional of the curvature. In the original Einstein theory only the first order curvature term is retained: $$S_{\rm G}= -{1\over 16\pi G}\int d^4x\sqrt{-g}{\cal R}~, \label{EinsteinCurvatureAction}$$ where $G$ is Newton’s gravitational constant and ${\cal R}$ is the Ricci scalar curvature. The Einstein action is thus $$S= -{1\over 16\pi G}\int d^4x\sqrt{-g}{\cal R} +S_{\rm M}~. \label{EinsteinAction}$$ Variation of this action over the metric field $g_{\mu\nu}$ gives the Einstein equations: $${\delta S\over \delta g^{\mu\nu}}= {1\over 2}\sqrt{-g}\left[ -{1\over 8\pi G}\left( R_{\mu\nu}-{1\over 2}{\cal R}g_{\mu\nu} \right) +T^{\rm M}_{\mu\nu}\right]=0~, \label{EinsteinEquation1}$$ where $T^{\rm M}_{\mu\nu}$ is the energy-momentum tensor of the matter fields. 
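The chain metric → Christoffel symbols → curvature that enters the Einstein equations can be made concrete with a symbolic computation. The following sketch, assuming SymPy, uses the Schwarzschild metric purely as a standard illustration of a vacuum solution (it is not taken from the text) and verifies that its Ricci tensor, and hence the scalar curvature ${\cal R}$, vanishes:

```python
import sympy as sp

t, r, th, ph, M = sp.symbols('t r theta phi M', positive=True)
x = [t, r, th, ph]
f = 1 - 2 * M / r
g = sp.diag(-f, 1 / f, r**2, r**2 * sp.sin(th)**2)   # Schwarzschild metric g_{mu nu}
ginv = g.inv()
n = 4

# Christoffel symbols Gamma^a_{bc} = (1/2) g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})
Gamma = [[[sp.simplify(sum(ginv[a, d] * (sp.diff(g[d, b], x[c])
                + sp.diff(g[d, c], x[b]) - sp.diff(g[b, c], x[d]))
                for d in range(n)) / 2)
           for c in range(n)] for b in range(n)] for a in range(n)]

# Ricci tensor R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ba}
#                     + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{cd} Gamma^d_{ba}
def ricci(b, c):
    return sp.simplify(sum(sp.diff(Gamma[a][b][c], x[a]) - sp.diff(Gamma[a][b][a], x[c])
                + sum(Gamma[a][a][d] * Gamma[d][b][c] - Gamma[a][c][d] * Gamma[d][b][a]
                      for d in range(n)) for a in range(n)))

# scalar curvature: zero, since Schwarzschild solves the vacuum Einstein equations
R = sp.simplify(sum(ginv[b, c] * ricci(b, c) for b in range(n) for c in range(n)))
print(R)
```

The same scaffolding applies to any metric given in closed form; only `g` changes.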
Bianchi identities lead to the “covariant” conservation law for matter $$T^{\mu {\rm M}}_{\nu;\mu}=0 ~~,~~{\rm or}~~ \partial_\mu \left(T^{\mu {\rm M}}_\nu \sqrt{-g}\right) = {1\over 2}\sqrt{-g}T^{\alpha\beta {\rm M}} \partial_\nu g_{\alpha\beta} \, . \label{CovariantConservation}$$ But actually this “covariant” conservation takes place by virtue of the field equation for “matter” irrespective of
{ "pile_set_name": "ArXiv" }
null
null
--- abstract: 'Topologically nontrivial states, the solitons, emerge as elementary excitations in $1D$ electronic systems. In a quasi $1D$ material the topological requirements give rise to spin- or charge-roton-like excitations with charge- or spin-kinks localized in the core. They result from the spin-charge recombination due to confinement and the combined symmetry. The rotons possess semi-integer winding numbers which may be relevant to configurations discussed in connection with quantum computing schemes. Practically important is the case of the spinon functioning as a single electronic $\pi -$ junction in a quasi $1D$ superconducting material. (Published in [@moriond].)' address: - | $^1$ Laboratoire de Physique Théorique et des Modèles Statistiques, CNRS;\ Bât.100, Université Paris-Sud, 91405 Orsay cedex, France;\ http://ipnweb.in2p3.fr/ lptms/membres/brazov/, e-mail: brazov@ipno.in2p3.fr - '$^2$ L.D. Landau Institute, Moscow, Russia.' author: - '$^{1,2}$' title: | Topological Confinement of Spins and Charges:\ Spinons as ${\large\pi-}$ junctions. --- Introduction to solitons. ========================= Topological defects: solitons, vortices, anyons, etc., are currently discussed, see [@ivanov], in connection with new trends in the physics of quantum devices, see [@esteve]. Closest to applications and particularly addressed at this conference [@ryazanov] are the $\pi -$ junctions which, linking two superconductors, provide degeneracy of their states with phase differences equal to $0$ and $2\pi$. The final goal of this publication is to show that in quasi one-dimensional ($1D$) superconductors the $\pi -$ junctions are produced already at the single electronic level, extendible to a finite spin polarization. The effect results from reconciliation of the spin and the charge which have been separated at the single chain level. 
The charge and the spin of the single electron reconfine as soon as $2D$ or $3D$ long range correlations are established due to interchain coupling. The phenomenon is much more general, taking place also in such a common system as the Charge Density Wave (CDW) and in such a popular system as the doped antiferromagnet, or the Spin Density Wave as its quasi $1D$ version [@braz-00]. Actually in this article we shall consider first, and in greater detail, the CDW, which is an object a bit distant from the mesoscopic community. The applications to superconductors will become apparent afterwards. We shall concentrate on effects of interchain coupling at $D>1$: confinement, topological constraints, combined symmetry, spin-charge recombination. A short review and basic references on the history of solitons and related topics in correlated electronic systems (like holes moving within an antiferromagnetic medium) can be found in [@braz-00]. Solitons in superconducting wires were considered very early [@AL], within the macroscopic regime of the Ginzburg - Landau theory, for the phase slips problem. Closer to our goals is the microscopic solution for a solitonic lattice in quasi $1D$ superconductors [@buzdin] in a Zeeman magnetic field. This successful application of results from the theory of CDWs, see [@SB-84], to superconductors also provides a link between pair-breaking effects in these different systems. The solitonic structures in quasi $1D$ superconductors appear as a $1D$ version of the well known FFLO (Fulde, Ferrel, Larkin, Ovchinnikov, see [@buzdin]) inhomogeneous state near the pair breaking limit. Being very weak in $3D$, this effect becomes quite pronounced in systems with nested Fermi surfaces, which is the case in the $1D$ limit. To extend the physics of solitons to the higher $D$ world, the most important problem one faces is the effect of *confinement* (S.B. 
1980): as topological objects connecting degenerate vacuums, the solitons at $D>1$ acquire an infinite energy unless they reduce or compensate their topological charges. The problem is generic to solitons but it becomes particularly interesting at the electronic level, where the spin-charge reconfinement appears as the result of topological constraints. The topological effects of $D>1$ ordering reconfine the charge and the spin *locally*, while still with essentially different distributions. Nevertheless *integrally* one of the two is screened again, being transferred to the collective mode, so that in transport the massive particles carry only either charge or spin, as in $1D$, see reviews [@SB-84; @SB-89]. Confinement and combined excitations. ===================================== The classical commensurate CDW: confinement of phase solitons and of kinks. --------------------------------------------------------------------------- The CDWs were always considered as the most natural electronic systems in which to observe solitons. We shall devote some more attention to them because the CDWs have also become the subject of studies in mesoscopics [@delft]. Being a case of spontaneous symmetry breaking, the CDW order parameter $O_{cdw}\sim\Delta \cos [{\bf{Qr}}+\varphi ]$ possesses a manifold of degenerate ground states. For the $M-$ fold commensurate CDW the energy $\sim \cos [M\varphi]$ reduces the allowed positions to multiples of $2\pi /M$, $M>1$. Connecting trajectories $\varphi \rightarrow \varphi \pm 2\pi /M$ are phase solitons, or “$\varphi -$ particles” after Bishop *et al*. Particularly important is the case $M=2$, for which solitons are clearly observed e.g. in polyacetylene [@PhToday] or in organic Mott insulators [@monceau]. Above the $3D$ or $2D$ transition temperature $T_{c}$, the symmetry is not broken and solitons are allowed to exist as elementary particles. 
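As a toy illustration of the $\varphi$-particles just described: for a commensurability energy of the form $V(1-\cos M\varphi)$, the static phase soliton connecting neighboring minima is the textbook sine-Gordon kink. The sketch below, assuming NumPy (the value of $V$ and the grid are arbitrary choices, not parameters from the text), checks numerically that the kink profile satisfies the Euler-Lagrange equation and carries the phase increment $2\pi/M$:

```python
import numpy as np

M, V = 2, 1.0                                # commensurability order, pinning strength (toy)
m = M * np.sqrt(V)                           # inverse width of the kink
x = np.linspace(-8, 8, 4001)
phi = (4.0 / M) * np.arctan(np.exp(m * x))   # phase soliton: phi goes from 0 to 2*pi/M

# check the equation of motion phi'' = V*M*sin(M*phi) by second-order finite differences
h = x[1] - x[0]
phi_xx = (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / h**2
residual = phi_xx - V * M * np.sin(M * phi[1:-1])
print(np.abs(residual).max())                # small discretization error only
print(phi[-1] - phi[0], 2 * np.pi / M)       # total phase increment matches 2*pi/M
```

For $M=2$ this is the kink bound to organic Mott insulators and polyacetylene-type systems in spirit only; the code merely verifies the standard solution.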
But in the symmetry broken phase at $T<T_{c}$, any local deformation must return the configuration to the same (modulo $2\pi $ for the phase) state. Otherwise the interchain interaction energy (with the linear density $F\sim \left\langle\Delta _{0}\Delta _{n}\cos [\varphi _{0}- \varphi _{n}]\right\rangle $) is lost when the effective phase $\varphi _{0}+\pi sign(\Delta _{0})$ at the soliton bearing chain $n=0$ acquires a finite (and $\neq 2\pi )$ increment with respect to the neighboring chain values $\varphi _{n}$. The $1D$ allowed solitons do not satisfy this condition, which originates a constant *confinement force* $F$ between them, hence the infinitely growing confinement energy $F|x|$. E.g. for $M=2$ the kinks should be bound in pairs or aggregate into macroscopic complexes, with a particular role played by Coulomb interactions [@teber]. Especially interesting is the more complicated case of [*coexisting discrete and continuous symmetries*]{}. As a result of their interference, the topological charge of solitons originated by the discrete symmetry can be compensated by gapless degrees of freedom originated by the continuous one. This scenario we shall discuss through the rest of the article. The incommensurate CDW:\ confinement of Amplitude Solitons with phase wings. --------------------------------------------------- The difference between ground states with even and odd numbers of particles is a common issue in mesoscopics. In CDWs it also shows up in a spectacular way (S.B. 1980, see [@SB-84; @SB-89]). Thus any pair of electrons or holes is accommodated into the extended ground states, for which the overall phase difference becomes $\pm 2\pi$. Phase increments are produced by phase slips which provide the spectral flow [@yak] from the upper $+\Delta_0$ to the lower $-\Delta_0$ rims of the single particle gap. 
The phase slip requires the amplitude $\Delta(x,t)$ to pass through zero, at which moment the complex order parameter has the shape of the amplitude soliton (AS, the kink $\Delta (x=-\infty)\leftrightarrow -\Delta (x=\infty)$). Curiously, this instantaneous configuration becomes the stationary ground state for the case when only one electron is added to the system or when the total spin polarization is controlled to be nonzero, see Figure 1. The AS carries the singly occupied mid-gap state, thus having a spin $1/2$, but its charge is compensated to zero by a local dilatation of singlet vacuum states [@SB-84; @SB-89]. As a nontrivial topological object ($O_{cdw}$ does not map onto itself) the pure AS is prohibited in a $D>1$ environment. Nevertheless it becomes allowed even there if it acquires phase tails with the total increment $\delta\varphi =\pi $, see Figure 2. The length of these tails $\xi _{\varphi }$ is determined by the weak interchain coupling, thus $\xi _{\varphi }\gg \xi _{0}$. As in $1D$, the sign of $\Delta$ changes within the scale $\xi _{0}$ but further on, at the scale $\xi _{\varphi }$, the factor $\cos [Qx+\varphi ]$ also changes sign, thus leaving the product in $O_{cdw}$ invariant. As a result the 3D allowed particle is formed with the AS core $\xi _{0}$ carrying the
{ "pile_set_name": "ArXiv" }
null
null
--- abstract: 'In this paper the old problem of determining the discrete spectrum of a multi-particle Hamiltonian is reconsidered. The aim is to bring a fermionic Hamiltonian for arbitrary numbers N of particles by analytical means into a shape such that modern numerical methods can successfully be applied. For this purpose the Cook-Schroeck Formalism is taken as the starting point. This includes the use of the occupation number representation. It is shown that the N-particle Hamiltonian is determined in a canonical way by a fictional 2-particle Hamiltonian. A special approximation of this 2-particle operator delivers an approximation of the N-particle Hamiltonian, which is the orthogonal sum of finite dimensional operators. A complete classification of the matrices of these operators is given. Finally the method presented here is formulated as a work program for practical applications. The connection with other methods for solving the same problem is discussed.' address: | Department of Physics, University of Paderborn,\ 33098 Paderborn, Germany,\ j@schroe.de author: - Joachim Schröter title: 'A New Approach to the N-Particle Problem in QM' --- Introduction {#intro} ============ One of the central problems of many-particle quantum mechanics, if not its main problem, is calculating the spectral representation of a many-particle Hamiltonian, which typically has the form $$\label{1.1} \mathbf {H}_N = \sum^{N}_{j=1} K_j + \frac{1}{2} \sum^N_{j\not= k} W_{jk}\; .$$ Here $K_j$ contains the kinetic energy of particle $j$ and the external fields acting upon $j$, and $W_{jk}$ is the interaction of the particles $j$ and $k$. As is well-known, this problem has a solution if $W_{jk} = 0$. On the other hand, if $W_{jk}$ does not vanish, the problem is “almost” unsolvable in a strict sense. But the situation is not hopeless. For what is really needed for practical purposes is a “good” approximate solution. 
In this last field a tremendous amount of work has been done, both analytically and numerically. Its mainstreams are well-known under the labels Thomas-Fermi method ([@Thom],[@Ferm]), Hartree-Fock method ([@Hart],[@Fock]), density functional theory ([@hoko],[@kosh]), configuration interaction method, Haken’s method and others. With respect to these methods and their applications and refinements I refer e.g. to the following books [@Kohan], [@Ohno], [@Haken]. There, in addition, an abundance of papers and monographs is cited in which the methods are also described in detail. A common feature of these procedures is that they contain one step in which a one-particle approximation of the N-particle problem is carried through. With the Thomas-Fermi and the Hartree-Fock methods, that is all that is done. With the other methods the described first step is followed by further ones, thereby improving the accuracy of the approximation. Especially, by combining analytical and numerical mathematics great progress has been achieved. Today problems can be solved which were regarded as unsolvable a few decades ago. Nevertheless, the question is obvious whether there are other approaches to a solution of the $N$-particle problem in quantum mechanics than those mentioned above. It is the aim of this paper to present such a new procedure. For this purpose I need some mathematical tools which, though they are widely known, I have briefly described in Appendix A.1. In particular the reader will find there all the notation which is used throughout the text. (More details can be found in [@Cook], [@Schroeck].) The basic idea of the procedure as well as the main results are sketched in Section 2.3. The Structure of $N$-Particle Hamiltonians ========================================== In what follows only systems of particles of the same kind are considered. When one starts studying a concrete system, its Hamiltonian is usually defined using the position-spin representation, i.e. 
the Hamiltonian is an operator in the Hilbert space $\bigotimes^N (L^2(\mathbb{R}^{3}) \otimes \mathcal{S}^1)$, where $\bigotimes$ and $ \otimes$ denote tensor products, and where $\mathcal{S}^1$ is the complex vector space of spin functions (cf. Section A.2.1). For explicit calculations this representation is very useful. But the aim of this paper is primarily a structural analysis of the Hamiltonians of a certain class of systems, and in this case a more abstract formalism is adequate. It turns out that the Cook-Schroeck formalism (cf. Appendix A.1) is very useful for this purpose.\ Then our starting point is an arbitrary initial Hamiltonian of the shape (1.1), which is denoted $ \bar H_N $ and defined in a Hilbert space ${\bar{ \mathcal H}}^N := \bigotimes^N {\bar {\mathcal H }^ 1}$, where $\bar{\mathcal H }^1$ is the Hilbert space of the corresponding one-particle system.\ Now let $K$ be the operator defined in $\bar{\mathcal H}^1$ which contains the kinetic energy of one particle of a certain kind together with the action of the external fields. Moreover, let $W$ be that operator in $\bar{\mathcal H}^2$ which represents the interaction of two particles of the kind considered. Then, using Formula (\[A1.25\]), $ \bar {H}_N$ defined in $ \bar{\mathcal H}^N$ is given by $$\label{2.1} \bar H_N = \Omega_N (K) + \Omega_N (W),$$ where $$\begin{array}{c} \Omega_N (K) := ((N-1)!)^{-1} \sum_{P \in {\mathcal{S}}_N} U(P) (K \otimes 1 \otimes \ldots \otimes 1) U^\star(P),\\ [2ex] \Omega_N (W) := (2 (N-2)!)^{-1} \sum_{P \in {\mathcal{S}}_N} U(P) (W \otimes 1 \otimes \ldots \otimes 1) U^\star(P) , \end{array}$$ and $U(P)$ is the unitary permutation operator defined by the particle permutation P. 
Thus, using Formula (\[A1.27\]), the operator $\bar H_N$ specified for Bosons or Fermions reads $$\label{2.2} \bar{H}^\pm_N = \Omega^\pm_N (K) + \Omega^\pm_N(W).$$ Here the definition $ A^\pm := S^\pm_N AS^\pm_N$ for an arbitrary operator $A$ in $\bar{\mathcal H}^N$ is applied, where $S^\pm_N $ is the symmetrizer (+) resp. the antisymmetrizer (-). Then $A^\pm$ is defined in the Hilbert space $\bar{\mathcal H}^N_\pm = S^\pm_N [\bar {\mathcal{H}}^N]$.\ It is well-known that the structure of $\bar{H}^\pm_N$ given by (\[2.2\]) is not helpful for studying its spectral problem, because the operators $\Omega^\pm_N (K)$ and $\Omega^\pm_N(W)$ do not commute. This suggests the question if it is possible to find an operator $T$ acting in $\bar{\mathcal H}^M,\; 1 \leq M < N$ such that $$\label{2.3} \bar{H}^\pm_N = \Omega^\pm_N (T).$$ Because the two-particle operator $ W $ cannot be represented by a one-particle operator it holds that $M\geq 2$. If the Hamiltonians as well as the operators $ K$ and $W$ are selfadjoint, it turns out that $M=2$ is possible as shown by the following\ [**Proposition 2.1:**]{} Let $$\label{2.4} \tilde{H}_2 (\gamma) = \gamma (K \otimes 1) + \gamma (1 \otimes K) + W \;$$ so that ${\tilde H}_2 (\gamma)$ is defined in $ \bar{\mathcal H}^2$, and let $\gamma_0:=(N-1)^{-1}$. Then $$\label{2.5} \bar{H}^\pm_N = \Omega^\pm_N ({\tilde H}_2 (\gamma_0)) \; \text {and}\;\; \bar{H}^\pm_N \not= \Omega^\pm_N ({\tilde H}_2 (\gamma)), \; \gamma \not= \gamma_0.$$ [**Proof:**]{} Using (\[A1.28\]) yields $$\label{2.6} \begin{array}{ll} \Omega^\pm_N (K) &= N S^\pm_N (K \otimes 1 \otimes \ldots \otimes 1) S^\pm_N \\[3ex] &={{\displaystyle}\frac{1}{N-1} \binom N2 S^\pm_N ((K \otimes 1) \otimes \ldots \otimes 1) S^\pm_N} \\ [3ex] & { + {\displaystyle}\frac{1}{N-1} \binom N 2 S^\pm_N ((1 \otimes K) \otimes \ldots \otimes 1) S^\pm_N} \\ [3ex] &=
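Proposition 2.1 can be checked numerically on a small example. The sketch below, assuming NumPy (the dimensions $d=3$, $N=3$ and the random operators are toy choices, not from the text), builds $\bar H_N$ from a Hermitian $K$ and an exchange-symmetric $W$ and verifies that it equals $\sum_{j<k}\tilde H_2(\gamma_0)_{jk}$, which is what $\Omega_N(\tilde H_2(\gamma_0))$ reduces to for exchange-symmetric two-particle operators; the identity then also survives (anti)symmetrization:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
d, N = 3, 3                      # single-particle dimension and particle number (toy; N=3 hard-coded below)

def herm(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

# one-particle operator K and exchange-symmetric two-particle interaction W
K = herm(d)
swap = np.zeros((d * d, d * d))
for i, j in itertools.product(range(d), repeat=2):
    swap[j * d + i, i * d + j] = 1
W = herm(d * d)
W = (W + swap @ W @ swap) / 2

def embed1(A, j):
    """A acting on particle j, identity on the others."""
    out = np.array([[1.0 + 0j]])
    for m in range(N):
        out = np.kron(out, A if m == j else np.eye(d))
    return out

def embed2(T, j, k):
    """Two-particle operator T acting on particles j < k, identity elsewhere (N=3)."""
    rest = [m for m in range(N) if m not in (j, k)][0]
    t = np.einsum('ABab,Cc->ABCabc', T.reshape(d, d, d, d), np.eye(d))
    order = np.argsort([j, k, rest])   # put particle slots back into order 0,1,2
    t = t.transpose(*order, *(3 + order))
    return t.reshape(d**N, d**N)

H_N = sum(embed1(K, j) for j in range(N)) \
    + sum(embed2(W, j, k) for j in range(N) for k in range(j + 1, N))

gamma0 = 1.0 / (N - 1)
T2 = gamma0 * (np.kron(K, np.eye(d)) + np.kron(np.eye(d), K)) + W
Omega = sum(embed2(T2, j, k) for j in range(N) for k in range(j + 1, N))

print(np.allclose(H_N, Omega))
```

Each $K_j$ is shared by the $N-1$ pairs containing $j$, which is exactly why the factor $\gamma_0=(N-1)^{-1}$ makes the pair sum reproduce $\bar H_N$.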
{ "pile_set_name": "ArXiv" }
null
null
--- abstract: 'We review some aspects of the current state of data-intensive astronomy, its methods, and some outstanding data analysis challenges. Astronomy is at the forefront of “big data” science, with exponentially growing data volumes and data rates, and an ever-increasing complexity, now entering the Petascale regime. Telescopes and observatories from both ground and space, covering a full range of wavelengths, feed the data via processing pipelines into dedicated archives, where they can be accessed for scientific analysis. Most of the large archives are connected through the Virtual Observatory framework, which provides interoperability standards and services, and effectively constitutes a global data grid of astronomy. Making discoveries in this overabundance of data requires the application of novel machine learning tools. We describe some recent examples of such applications.' address: | 1-Department of Physics, University Federico II, via Cintia 6, Napoli, Italy\ 2-INAF-Astronomical Observatory of Capodimonte, via Moiariello 16, Napoli, Italy\ 3-Center for Data Driven Discovery, California Institute of Technology, Pasadena, 90125- CA, USA title: Data Driven Discovery in Astrophysics --- Astronomy, Virtual Observatory, data mining Introduction {#sec:intro} ============ Like most other sciences, astronomy is being fundamentally transformed by the Information and Computation Technology (ICT) revolution. Telescopes both on the ground and in space generate streams of data, spanning all wavelengths, from radio to gamma-rays, and non-electromagnetic windows on the universe are opening up: cosmic rays, neutrinos, and gravitational waves. The data volumes and data rates are growing exponentially, reflecting the growth of the technology that produces the data. At the same time, we also see significant increases in data complexity and data quality. This wealth of data is greatly accelerating our understanding of the physical universe. 
It is not just the data abundance that is fueling this ongoing revolution, but also Internet-enabled data access and data re-use. The informational content of the modern data sets is so high as to make archival research and data mining not merely profitable, but practically obligatory: in most cases, researchers who obtain the data can extract only a small fraction of the science that is enabled by it. Furthermore, numerical simulations are no longer just a crutch of an analytical theory, but are increasingly becoming the dominant or even the only way in which various complex phenomena (e.g., star formation or galaxy formation) can be modeled and understood. These numerical simulations produce copious amounts of data as their output; in other words, theoretical statements are expressed not as formulae, but as data sets. Since physical understanding comes from the confrontation of experiment and theory, and both are now expressed as ever larger and more complex data sets, science is truly becoming data-driven in ways that are both quantitatively and qualitatively different from the past. The situation is encapsulated well in the concept of the “fourth paradigm” [@Hey2009], adding to experiment, analytical theory, and numerical simulations as the four pillars of modern science. This profound, universal change in the ways we do science has been recognized for over a decade now, sometimes described as e-Science, cyber-science, or cyber-infrastructure. Data Overabundance, Virtual Observatory and Astroinformatics {#sec:overabundance} ============================================================ A confluence of several factors pushed astronomy to the forefront of data-intensive science. The first one was that astronomy as a field readily embraced, and in some cases developed, modern digital detectors, such as the CCDs or digital correlators, and scientific computing as a means of dealing with the data, and as a tool for numerical simulations. 
The culture of e-Science was thus established early (circa 1980s), paving the way for the bigger things to come. The size of data sets grew from Kilobytes to Megabytes, reaching Gigabytes by the late 1980s, Terabytes by the mid-1990s, and currently Petabytes (see Fig. 1). Astronomers adopted early universal standards for data exchange, such as the Flexible Image Transport System (FITS; [@wells1981]). The second factor, around the same time, was the establishment of space mission archives, mandated by NASA and other space agencies, with public access to the data after a reasonable proprietary period (typically 12 to 18 months). This had a dual benefit of introducing the astronomical community both to databases and other data management tools, and to the culture of data sharing and reuse. These data centers formed a foundation for the subsequent concept of a Virtual Observatory [@hanish2001]. The last element was the advent of large digital sky surveys as the major data sources in astronomy. Traditional sky surveys were done photographically, ending in the 1990s; those were digitized using plate-scanner machines in the 1990s, thus producing the first Terabyte-scale astronomical data sets, e.g., the Digital Palomar Observatory Sky Survey (DPOSS; [@GSD1999]). They were quickly superseded by the fully digital surveys, such as the Sloan Digital Sky Survey (SDSS; [@york2000]), and many others (see, e.g. [@GSD2012c] for a comprehensive review and references). Aside from enabling new science, these modern sky surveys changed the social psychology of astronomy: traditionally, observations were obtained (and still are) in a targeted mode, covering a modest set of objects, e.g., stars, galaxies, etc. With modern sky surveys, one can do first-rate observational astronomy without ever going to a telescope. An even more powerful approach uses data mining to select interesting targets from a sky survey, and pointed observations to study them in more detail. 
This new wealth of data generates many scientific opportunities, but poses many challenges as well: how to best store, access, and analyze these data sets, which are several orders of magnitude larger than what astronomers are used to handling on their desktops? A typical sky survey may detect $\sim 10^8 - 10^9$ sources (stars, galaxies, etc.), with $\sim 10^2 - 10^3$ attributes measured for each one. Both the scientific opportunities and the technological challenges are then amplified by data fusion, across different wavelengths, temporal, or spatial scales. Virtual Observatory {#virtualobs} ------------------- The Virtual Observatory (VO, [@brunner2001a; @GSD2002a; @hanish2001]) was envisioned as a complete, distributed (Web-based) research environment for astronomy with large and complex data sets, by federating geographically distributed data and computing assets, and the necessary tools and expertise for their use. VO was also supposed to facilitate the transition from the old data poverty regime, to the regime of overwhelming data abundance, and to be a mechanism by which the progress in ICT can be used to solve the challenges of the new, data-rich astronomy. The concept spread world-wide, with a number of national and international VO organizations, now federated through the International Virtual Observatory Alliance (IVOA; http://ivoa.net). One can regard the VO as an integrator of heterogeneous data streams from a global network of telescopes and space missions, enabling data access and federation, and making such value-added data sets available for further analysis. The implementation of the VO framework over the past decade was focused on the production of the necessary data infrastructure, interoperability, standards, protocols, middleware, data discovery services, and a few very useful data federation and analysis services (see [@hanish2007; @graham2007], for quick summaries and examples of practical tools and services implemented under the VO umbrella). 
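A back-of-the-envelope estimate makes the source and attribute counts quoted above concrete. The sketch below assumes one double-precision number per attribute, an illustrative choice rather than any survey's actual storage format:

```python
n_sources = 1e9        # upper end of the quoted source counts per survey
n_attributes = 1e3     # upper end of the quoted attributes per source
bytes_per_value = 8    # one double-precision value per attribute (an assumption)

# catalog size alone, before the far larger pixel data behind it
catalog_tb = n_sources * n_attributes * bytes_per_value / 1e12
print(f"{catalog_tb:.0f} TB")
```

Even the derived catalog, let alone the images it was extracted from, is well beyond a desktop-scale analysis.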
Most astronomical data originate from sensors and telescopes operating in some wavelength regime, in one or more of the following forms: images, spectra, time series, or data cubes. A review of the subject in this context was given in [@brunner2001b]. Once the instrumental signatures are removed, the data typically represent signal intensity as a function of the position on the sky, wavelength or energy, and time. The bulk of the data are obtained in the form of images (in radio astronomy, as interferometer fringes, but those are also converted into images). The sensor output is then processed by the appropriate custom pipelines that remove instrumental signatures and perform calibrations. In most cases, the initial data processing and analysis segments the images into catalogs of detected discrete sources (e.g., stars, galaxies, etc.), and their measurable attributes, such as their position on the sky, flux intensities in different apertures, morphological descriptors of the light distribution, ratios of fluxes at different wavelengths (colors), and so on. Scientific analysis then proceeds from such first-order data products. In the case of massive data sets such as sky surveys, raw and processed sensor data, and the initial derived data products such as source catalogs with their measured attributes, are provided through a dedicated archive, and accessible online. The Virtual Observatory (VO) framework aims to facilitate seamless access to distributed heterogeneous data sets, for example, combining observations of the same objects from different wavelength regimes to understand their spectral energy distributions or interesting correlations among their properties. The International Virtual Observatory Alliance (IVOA) is charged with specifying the standards and protocols that are required to achieve this. 
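The segmentation of images into source catalogs described above can be sketched in a few lines. The toy pipeline below, assuming NumPy and SciPy (the injected Gaussian sources, the $5\sigma$ threshold, and the minimum-area cut are illustrative choices, not the procedure of any particular survey), thresholds a simulated image, labels connected pixels, and measures per-source positions and fluxes:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
sigma_noise = 1.0
img = rng.normal(0.0, sigma_noise, size=(128, 128))   # flat sky with Gaussian noise

# inject three fake point sources as Gaussian blobs (toy PSF width of 2 pixels)
yy, xx = np.mgrid[0:128, 0:128]
for y0, x0, amp in [(30, 40, 25.0), (80, 90, 40.0), (100, 20, 30.0)]:
    img += amp * np.exp(-((yy - y0)**2 + (xx - x0)**2) / (2 * 2.0**2))

# detection: 5-sigma threshold, connected-component labeling, minimum area of 5 pixels
mask = img > 5.0 * sigma_noise
labels, nlab = ndimage.label(mask)
areas = ndimage.sum(mask, labels, index=range(1, nlab + 1))
kept = [i + 1 for i, a in enumerate(areas) if a >= 5]

# first-order catalog entries: centroid and summed flux for each detected source
positions = ndimage.center_of_mass(img, labels, kept)
fluxes = ndimage.sum(img, labels, kept)
print(len(kept), [tuple(np.round(p).astype(int)) for p in positions])
```

Real pipelines add background modeling, deblending, aperture and PSF photometry, and astrometric calibration on top of this basic threshold-and-label step.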
A common set of data access protocols ensures that the same interface is employed across all data archives, no matter where they are located, to perform the same type of data query. Although common data formats may be employed in transferring data, individual data providers usually represent and store their data and metadata in their own way. Common data models define the shared elements across data and metadata collections and provide a framework for describing relationships between them so that different representations can interoperate in a
--- abstract: 'We present a new grid of model photospheres for the SDSS-III/APOGEE survey of stellar populations of the Galaxy, calculated using the ATLAS9 and MARCS codes. New opacity distribution functions were generated to calculate ATLAS9 model photospheres. MARCS models were calculated based on opacity sampling techniques. The metallicity (\[M/H\]) spans from $-$5 to 1.5 for ATLAS and $-$2.5 to 0.5 for MARCS models. There are three main differences with respect to previous ATLAS9 model grids: a new corrected H$_{\rm 2}$O linelist, a wide range of carbon (\[C/M\]) and $\alpha$ element \[$\alpha$/M\] variations, and solar reference abundances from @asplund01. The added range of varying carbon and $\alpha$ element abundances also extends the previously calculated MARCS model grids. Altogether 1980 chemical compositions were used for the ATLAS9 grid, and 175 for the MARCS grid. Over 808 thousand ATLAS9 models were computed spanning temperatures from 3500K to 30000K and log $g$ from 0 to 5, with models at the higher temperatures computed only for high gravities. The MARCS models span from 3500K to 5500K, and log $g$ from 0 to 5. All model atmospheres are publicly available online.' author: - 'Sz. M[é]{}sz[á]{}ros, C. Allende Prieto, B. Edvardsson, F. Castelli, A. E. Garc[í]{}a P[é]{}rez, B. Gustafsson, S. R. Majewski, B. Plez, R. Schiavon, M. Shetrone, A. de Vicente' title: 'NEW ATLAS9 AND MARCS MODEL ATMOSPHERE GRIDS FOR THE APACHE POINT OBSERVATORY GALACTIC EVOLUTION EXPERIMENT (APOGEE)' --- Introduction ============ The Apache Point Observatory Galactic Evolution Experiment (APOGEE; Allende Prieto et al. 2008) is a large-scale, near-infrared, high-resolution spectroscopic survey of Galactic stars, and it is one of the four experiments in the Sloan Digital Sky Survey-III (SDSS-III; Eisenstein et al. 2011; Gunn et al. 2006, Aihara et al. 2011).
APOGEE will obtain high S/N, R$\sim$22,500 spectra for 100,000 stars in the Milky Way Galaxy, for which accurate chemical abundances, radial velocities, and physical parameters will be determined. APOGEE data will shed new light on the formation of the Milky Way, as well as its chemical and dynamical evolution. To achieve its science goals, APOGEE needs to determine abundances for about 15 elements to an accuracy of 0.1 dex. To attain this precision, a large model photosphere database with up-to-date solar abundances is required. We chose to build the majority of APOGEE’s model photosphere database on ATLAS9 and MARCS calculations. ATLAS [@kurucz05] is widely used as a universal LTE 1$-$D plane-parallel atmosphere modeling code, which is freely available from Robert Kurucz’s website[^1]. ATLAS9 [@kurucz01] handles the line opacity with opacity distribution functions (ODFs), which greatly simplifies and reduces the computation time [@strom01; @kurucz04; @castelli02]. The ODF approach consists of pretabulating the line opacity as a function of temperature and gas pressure in a given number of wavelength intervals, which cover the whole wavelength range from the far ultraviolet to the far infrared. For computational reasons, in each interval the line opacities are rearranged according to strength rather than wavelength. For each selected metallicity and microturbulent velocity, an opacity distribution function table has to be computed. While the computation of the ODFs is very time consuming, extensive grids of model atmospheres and spectrophotometric energy distributions can be computed in a short time once the required ODF tables are available. ATLAS uses the mixing-length scheme for convective energy transport. ATLAS12 [@kurucz04; @castelli03] uses the opacity sampling method (OS) to calculate the opacity at 30,000 points. The high resolution synthetic spectrum at a selected resolution can then be obtained by running SYNTHE [@kurucz03].
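The strength-rearrangement step at the heart of the ODF method can be illustrated with a toy calculation. The sketch below is ours and purely illustrative (real ODFs are tabulated per wavelength interval over a grid of temperatures, pressures, and microturbulent velocities): it sorts finely sampled line opacities within one interval by strength, discarding their wavelength positions, and compresses them into a small monotonic step function.

```python
import numpy as np

def odf_step_function(kappa_samples, n_steps=10):
    """Reduce finely sampled line opacity within one wavelength interval
    to an n_steps opacity distribution function: sort the samples by
    strength (their exact wavelengths are forgotten) and average within
    equal fractional sub-bins of the interval."""
    sorted_k = np.sort(kappa_samples)[::-1]        # strongest first
    chunks = np.array_split(sorted_k, n_steps)     # equal fractions of the interval
    return np.array([c.mean() for c in chunks])

rng = np.random.default_rng(0)
kappa = np.exp(rng.normal(0.0, 2.0, size=10_000))  # toy log-normal line opacities
odf = odf_step_function(kappa)
```

The resulting step function is monotonically decreasing by construction, which is exactly why a handful of sub-bins suffices to represent thousands of lines.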
More recently, @lester01 have developed SATLAS\_ODF and SATLAS\_OS, the spherical versions of ATLAS9 and ATLAS12, respectively. No extensive grids of models have been published up to now, either with ATLAS12 or with either of the two versions of SATLAS. Instead, extensive grids of ATLAS9 ODF model atmospheres for several metallicities were calculated by @castelli01. These grids are based on solar (or scaled solar) abundances from @grevesse01. Recently, @kirby01 provided a new ATLAS9 grid, but he used abundances from @anders01. The calculations presented in this paper are based on the more recent solar composition from @asplund01. This updated abundance table required new ODFs and Rosseland mean opacity calculations as well. Abundances from @asplund01 were chosen instead of those from newer studies [@asplund02] to match the composition of the MARCS models described below, and those available from the MARCS website. The MARCS model atmospheres [@gustafsson01; @plez01; @gustafsson02] were developed and have been evolving in close connection with applications primarily to spectroscopic analyses of a wide range of late-type stars with different properties. The models are one-dimensional plane-parallel or spherical, and computed in LTE assuming the mixing-length scheme for convective energy transport, as formulated by @henyey01. For luminous stars (giants), where the geometric depth of the photosphere is a non-negligible fraction of the stellar radius, the effects of the radial dilution of the energy transport and the depth-varying gravitational field are taken into account. Initially, spectral line opacities were economically treated by the ODF approximation, but later the more flexible and realistic opacity sampling scheme was adopted. In the OS scheme, line opacities are directly tabulated for a large number of wavelength points (10$^{5}$) as a function of temperature and pressure.
The shift in the MARCS code from using ODFs to the OS scheme avoided the sometimes unrealistic assumption that the line opacities of certain relative strengths within each ODF wavelength interval overlap in wavelength irrespective of depth in the stellar atmosphere. This assumption was found to lead to systematically erroneous models, in particular when polyatomic molecules add important opacities to surface layers [@ekberg01]. The current version of the MARCS code used for the present project and for the more extensive MARCS model atmosphere data base[^2] was presented and described in detail by @gustafsson02. The model atmospheres presented in this paper add a large variety in \[C/M\] and \[$\alpha$/M\] abundances to the already existing grids by covering these abundances systematically from -1 to +1 for each metallicity. Our main purpose is to update the previous ATLAS9 grid and publish new MARCS models to provide a large composition range to use in the APOGEE survey and future precise abundance analysis projects. These new ATLAS models were calculated with a corrected H$_{2}$O linelist. The abundances used for the MARCS models presented in this paper are from @grevesse02, which are nearly identical to @asplund01; the only significant difference is an abundance of scandium 0.12 dex higher than in @asplund01. The range of stellar parameters (T$_{\rm eff}$, log $g$ and \[M/H\]) spanned by the models covers most stellar types found in the Milky Way. This paper is organized as follows. In Section 2 we describe the parameter range of our ODFs and model atmospheres and give details of the ATLAS9 calculation method we implemented. Section 3 contains the parameter range and calculation procedure for MARCS models. In Section 4 we compare MARCS and ATLAS9 models with @castelli01, and illustrate how different C and $\alpha$ contents affect the atmosphere. Section 5 contains the conclusions.
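By contrast, the OS scheme evaluates the total line opacity at a fixed comb of wavelength points, so depth-dependent line strengths and line overlap are treated consistently at every sampled wavelength. A toy version with Gaussian (Doppler) line profiles; the line list and all numbers are invented for illustration only:

```python
import numpy as np

def sampled_opacity(wavelengths, centers, strengths, doppler_width):
    """Opacity sampling: sum Gaussian line profiles at a fixed set of
    wavelength points. Because `strengths` can be recomputed for every
    depth point (T, P), line overlap is captured correctly at each
    depth, which a depth-independent ODF rearrangement cannot guarantee."""
    dl = wavelengths[:, None] - centers[None, :]
    profiles = np.exp(-0.5 * (dl / doppler_width) ** 2)
    return profiles @ strengths

wl = np.linspace(5000.0, 5010.0, 2001)          # wavelength comb (angstrom)
kappa = sampled_opacity(wl,
                        np.array([5002.0, 5005.0, 5005.3]),  # line centres
                        np.array([1.0, 0.5, 0.8]),           # line strengths
                        0.1)                                  # Doppler width
```

In a real OS model the comb has of order 10$^{5}$ points and the strengths are tabulated per (T, P) pair, but the summation step is exactly this.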
The grid of ODFs and model atmospheres will be periodically updated in the future and made available online[^3]. ![The \[C/M\] or \[$\alpha$/M\] content as a function of \[M/H\] of ATLAS9 models (filled circles, Table 1) and MARCS models (open circles, Table 4). Both \[C/M\] and \[$\alpha$/M\] change independently of each other, and the small steps in metallicities give altogether 1980 different compositions for the ATLAS9 models and 175 compositions for the MARCS models. The number of acceptable models may vary for each composition, for details see the ATLAS-APOGEE website (http://www.iac.es/proyecto/ATLAS-APOGEE/). For missing metal-rich compositions of ATLAS9 models see Table 2.](figure01.eps){width="2.5in"} ATLAS9 Model Atmospheres ======================== Parameters ---------- The metallicity (\[M/H\]) of the grid varies from $-$5 to 1.5, covering the full range of chemical compositions scaled to solar abundances[^4]. For each of these solar scaled compositions we also vary the \[C/M\] and \[$\alpha$/
[**Dark Energy Studies: Challenges to Computational Cosmology**]{}\ The ability to test the nature of dark mass-energy components in the universe through large-scale structure studies hinges on accurate predictions of sky survey expectations within a given world model. Numerical simulations predict key survey signatures with varying degrees of confidence, limited mainly by the complex astrophysics of galaxy formation. As surveys grow in size and scale, systematic uncertainties in theoretical modeling can become dominant. Dark energy studies will challenge the computational cosmology community to critically assess current techniques, develop new approaches to maximize accuracy, and establish new tools and practices to efficiently employ globally networked computing resources. Introduction ============ Ongoing and planned observational surveys, such as the Dark Energy Survey (DES)$^{\ino}$, are providing increasingly rich information on the spatial distributions and internal properties of large numbers of galaxies, clusters of galaxies, and supernovae. These astrophysical systems reside in a cosmic web of large-scale structure that evolved by gravitational amplification of an initially small-amplitude Gaussian random density field. The DES plans to investigate the dark sector through the evolution of the Hubble parameter $H(z)$ and linear growth factor $D(z)$ from four independent channels: i) the evolution and clustering properties of rich clusters of galaxies; ii) the redshift evolution of baryonic features in the galaxy power spectrum; iii) weak-lensing tomography derived from measurement of galaxy shear patterns and iv) the luminosity distance–redshift relation of type Ia SNe. We focus our attention on theoretical issues related to the first three of these tests. 
The power spectrum of fluctuations at recombination is calculated to high accuracy from linear theory$^{\ino}$, so the problem of realizing, through direct simulation, the evolution of a finite patch of a particular world model from this epoch forward is well posed. To support DES-like observations, one would like to evolve multiple regions of Hubble Length dimension with the principal clustered matter components — dark matter and multiple phases of baryons (stars and cold gas in galaxies, warm/hot gas surrounding galaxies and in groups/clusters) — represented by multiple fluids. Mapping observable signatures of the theoretical solution along the past light-cone of synthetic observers in the computational volume allows ‘clean’ mock surveys to be created, which can further be ‘dirtied’ by real-world observational effects in support of survey data analysis. Two fundamental barriers stand in the way of achieving a complete solution to this problem. One is the wide dynamic range of non-linear structures (sub-parsecs to gigaparsecs in length, for example), and the other is the complexity of astrophysical processes controlling the baryonic phases. Since DES-like surveys probe only the higher mass portion of the full spectrum of cosmic structures, the first issue is not strongly limiting. The DES will, however, require understanding galaxy and cluster observable signatures, so uncertainties in the treatment of baryonic physics will play a central role. In a companion paper$^{\ino}$, we outline theoretical uncertainties associated with the large-scale structure channels DES will use to test dark energy. Here, we offer a critique of the computational methods that provide theoretical support for DES and similar surveys. Challenges for Computational Cosmology ======================================= Given the wide scale and scope of large-scale structure, a number of approaches have evolved to address restricted elements of the full problem. 
Since cold dark matter dominates the matter density, N-body methods that follow collisionless clustering have played an important role in defining the overall evolution of the cosmic web and the structure of collapsed halos formed within it. Combined N-body and gas dynamics techniques explore gravitationally coupled evolution of baryons and dark matter. Knowledge gained from these ‘direct’ approaches led to the development of ‘semi-analytic’ methods that efficiently explore scenarios for baryon-phase evolution. The overall challenge to computational cosmology in the dark energy era is to understand how to harness and push forward these different methods so as to maximize science return from sky survey data. A ‘halo model’ description of the large-scale density field ties together these approaches. The model posits that all matter in the late universe is contained in a spectrum of gravitationally bound objects (halos), each characterized by a mass $M$ defined (typically) by a critical overdensity condition around a local filtered density peak. The space density, large-scale clustering bias (relative to all matter), and internal structure, such as density and temperature profiles, are basic model ingredients. For galaxy studies, the halo occupation distribution (‘HOD’) defines the likelihood $p(N_{gal}|M,z)$ that $N_{gal}$ galaxies of a certain type are associated with the halo. For some applications, it may be important to consider HOD dependence on local environment. Collisionless N-body modeling ----------------------------- Understanding the growth of density perturbations into the mildly and strongly non-linear regimes is a critical component for weak lensing tomography and galaxy cluster studies, respectively. Much progress has been made in this area using N-body simulations, as Moore’s law has enabled progressively larger computations, up to $10^{10}$ particles today$^{\ino}$. 
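The HOD likelihood $p(N_{gal}|M,z)$ defined above is often parameterized as a smoothed step function for the central galaxy plus a Poisson-distributed satellite population. The sketch below follows that common functional form with illustrative, uncalibrated parameter values of our own choosing:

```python
import numpy as np
from math import erf

def sample_hod(log_m, rng, log_mmin=12.0, sigma=0.25, log_m1=13.3, alpha=1.0):
    """Draw N_gal for one halo of mass 10**log_m: a Bernoulli central
    galaxy whose probability rises smoothly through log_mmin, plus,
    if a central exists, Poisson satellites with mean (M/M1)**alpha."""
    p_cen = 0.5 * (1.0 + erf((log_m - log_mmin) / sigma))
    has_central = rng.random() < p_cen
    mean_sat = (10.0 ** (log_m - log_m1)) ** alpha if has_central else 0.0
    return int(has_central) + int(rng.poisson(mean_sat))

rng = np.random.default_rng(1)
counts = [sample_hod(14.0, rng) for _ in range(2000)]  # cluster-scale halos
```

Populating an N-body halo catalog with draws like these is the standard route from a dark-matter-only simulation to a mock galaxy survey; environment-dependent HODs would add further arguments to the sampler.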
Parallel computing has led to production environments where $512^3$ particle runs can be realized on an almost daily basis. By creating large statistical samples of dark matter halos and by probing the internal structure of some halos in great detail, large-$N$ simulations have validated and characterized important raw ingredients of the halo model: i) the space density is calibrated to $\sim 10\%$ accuracy in terms of a similarity variable $\sigma(M,z)$, with $\sigma$ the [*rms*]{} level of density fluctuations filtered on mass scale $M$; ii) the large-scale clustering bias of halos is calibrated to similar accuracy; iii) except for rare cases of major mergers in progress, the interiors of halos are hydrostatic with an internal density profile that depends primarily on mass, and secondarily on individual accretion history; and iv) the structural similarity of halos is reflected in a tight virial scaling between mass and velocity dispersion. The study of sub-halos, locally bound structures accreted within larger halos but not fully tidally disrupted, is a rapidly developing area with important application to optical studies of galaxy clusters. Since they serve as a foundation for more complex treatments, these ingredients of the halo model deserve more careful study and more precise calibration. The weak lensing shear signal on arcminute and larger scales is generated by weakly non-linear matter fluctuations on relatively large spatial scales, making it relatively insensitive to small-scale baryon physics. Although the Hamilton-Peacock characterization$^{\ino}$ of the non-linear evolution of the power spectrum has been useful, it will need refinement to achieve the anticipated accuracy of DES power spectrum measurements on dark energy. The evolution of higher density moments, particularly the bi-spectrum, is less well understood than the second moment.
A suite of multi-scale N-body simulations covering a modest grid of cosmological models is needed to address these problems. New approaches to generating initial conditions and combining multiple realizations of finite volumes$^{\ino}$ should be employed in an effort to push systematic uncertainties on relevant spatial scales to percent levels and below. Although a relatively mature enterprise, N-body modeling of dark matter clustering faces fundamental challenges to improve the absolute level of precision in current techniques and to better understand the dynamical mechanisms associated with non-linear structure evolution. Code comparison projects$^{\ino}$ should be more aggressively pursued and the sensitivity of key non-linear statistics to code control parameters deserves more careful systematic study. A return to testing methods on the self-similar clustering problem$^{\ino}$ is likely to provide valuable insights, and deeper connections to analytic approaches, such as extended perturbation theory and equilibrium stellar dynamics, should be encouraged. Highly accurate dark matter evolution is only a first step, however, as it ignores the $17\%$ matter component of the universe that is directly observable. The baryon phase and galaxy/cluster observables ------------------------------------------------ The dark energy tests planned by DES require modeling the astrophysical behavior of different baryonic phases. Acoustic oscillations in the galaxy power spectrum must be linked to features in the matter power spectrum, requiring accurate tests of the constancy of galaxy bias on large scales. For clusters, selection by Sunyaev-Zel’dovich or X-ray signatures depends on the hot gas phase properties while optical selection is dependent on star formation and interstellar medium phase evolution within galaxies. 
Several distinct, but related, approaches have emerged to address this complex modeling requirement, all involving tunable model parameters that must, to some degree, be determined empirically. ‘Direct’ computational approaches couple a three-dimensional gas dynamics solver to an N-body algorithm. A dozen, nearly-independent codes, in both Lagrangian and Eulerian flavors, now exist to perform this task. All methods follow entropy generation in gas from shocks, and most allow radiative entropy loss assuming local thermodynamic equilibrium in plasma that may optionally be metal enriched. Methods diverge in their treatment of interstellar medium processes: cold-hot gas phase interactions, star formation rate prescriptions, return of mass and energy from star forming regions, supermassive black hole (SMBH) formation, and attendant SMBH feedback. A valuable comparison study$^{7}$ revealed agreement at the $\sim 10\%$ level among a dozen codes for the solution of the internal structure of a single halo evolved without cooling and star formation. In massive halos, the hosts to rich clusters where only a small fraction of baryons condense into galaxies, gas dynamic models have had good success in modeling the behavior of the hot intracluster medium (ICM), particularly its structural regularity. The ICM mass, a quantity that is essentially independent of temperature when derived from low energy X-ray surface brightness maps, is observed to behave as a power-law of X-ray temperature with only $14\%$ intrinsic scatter$^{\ino}$.
--- abstract: 'Convergent migration involving multiple planets embedded in a viscous protoplanetary disc is expected to produce a chain of planets in mean motion resonances, but the multiplanet systems observed by the Kepler spacecraft are generally not in resonance. We demonstrate that under equivalent conditions, where in a viscous disc convergent migration will form a long-term stable system of planets in a chain of mean motion resonances, migration in an inviscid disc often produces a system which is highly dynamically unstable. In particular, if planets are massive enough to significantly perturb the disc surface density and drive vortex formation, the smooth capture of planets into mean motion resonances is disrupted. As planets pile up in close orbits, not protected by resonances, close encounters increase the probability of planet-planet collisions, even while the gas disc is still present. While inviscid discs often produce unstable non-resonant systems, stable, closely packed, non-resonant systems can also be formed. Thus, when examining the expectation for planet migration to produce planetary systems in mean motion resonances, the effective turbulent viscosity of the protoplanetary disc is a key parameter.' author: - | Colin P. McNally,$^{1}$[^1] Richard P. Nelson$^{1}$ and Sijme-Jan Paardekooper $^{1,2}$\ $^{1}$Astronomy Unit, School of Physics and Astronomy, Queen Mary University of London, London E1 4NS, UK\ $^{2}$DAMTP, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA, UK\ bibliography: - 'resplanets\_paper.bib' date: 'Accepted XXX. 
Received YYY; in original form ZZZ' title: Multiplanet systems in inviscid discs can avoid forming resonant chains --- \[firstpage\] planets and satellites: dynamical evolution and stability — planet-disc interactions — protoplanetary discs Introduction ============ Convergent migration in multiplanet systems, driven by disc-planet interactions in protoplanetary discs, has been shown to result in the capture of the planets into mean motion resonances (MMRs). Previous work tested the behaviour of initially tightly-packed systems in viscous discs, and found that after a period of initial adjustment almost all systems formed chains of MMRs. However, the multiplanet systems discovered by the Kepler mission have only a weak preference for period ratios near first order mean motion resonances [@2011ApJS..197....8L]. Multiplanet systems in the Kepler sample tend to have planets with similar masses, with relatively even orbital spacing [@2017ApJ...849L..33M; @2018AJ....155...48W], but are largely non-resonant. Furthermore, the sample contains planets which appear to mainly cluster around the local thermal mass scale in a fiducial protoplanetary disc model [@2018arXiv180604693W], where the thermal mass corresponds to the planet Hill sphere radius being approximately equal to the disc pressure scale height. Various mechanisms have been proposed to either allow planets to escape a resonant configuration during the presence of the gas disc, or to disrupt the resonant configuration during the later, nearly dissipationless, n-body phase of dynamics. Overstable librations about resonant configurations can cause planets to escape resonance while the gas disc is present [@2014AJ....147...32G], although the requirements on the form of the eccentricity damping for this to occur may be difficult to meet.
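The contrast between resonant chains and the observed Kepler systems is usually quantified through nearest-neighbour period ratios. A small helper for flagging proximity to first-order $(j+1)$:$j$ commensurabilities; the tolerance and the example period sets are our own illustrative choices, not values from this paper:

```python
def near_first_order_mmr(periods, tol=0.01, j_max=5):
    """For each adjacent pair of orbital periods (sorted inside-out),
    return the period ratio and whether it lies within `tol` of a
    first-order mean motion resonance (j+1):j, for j = 1..j_max."""
    periods = sorted(periods)
    out = []
    for p_in, p_out in zip(periods, periods[1:]):
        ratio = p_out / p_in
        resonant = any(abs(ratio - (j + 1) / j) < tol
                       for j in range(1, j_max + 1))
        out.append((round(ratio, 3), resonant))
    return out

# A 3:2 -- 2:1 resonant chain versus a non-resonant, Kepler-like spacing:
chain = near_first_order_mmr([10.0, 15.0, 30.0])
loose = near_first_order_mmr([10.0, 17.3, 31.1])
```

Applied to a transit catalog, a histogram of these ratios reproduces the familiar result that only a small excess of pairs sits near the 3:2 and 2:1 commensurabilities.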
Disc-driven resonant repulsion can push a resonant pair of planets away from resonance by a combination of orbital circularization and the interaction between the wakes of the planets [@2013ApJ...778....7B]. Orbital perturbations due to turbulent overdensities in the disc have been argued to prevent protoplanets capturing into resonance [e.g. @2008ApJ...683.1117A]. However, the planet forming regions of protoplanetary discs are thought to be largely dead to the magnetorotational instability (MRI), and to lack an instability capable of driving turbulent motion at such high levels. If protoplanetary discs are characterised in these regions by being largely MRI-dead, possessing nearly-laminar flow with wind-driven accretion in their surface layers, they can possess vanishingly low viscosity while still providing a conduit for mass accreting onto the star [@2013ApJ...769...76B]. A second set of concepts proposes that a resonant chain of planets may be disrupted after dissipation of the gas disc. [@2007ApJ...654.1110T] showed that tidal interaction with the central star could extract short period systems out of resonance. [@2015ApJ...803...33C] proposed that the interaction of the planets in resonant orbits with a planetesimal disc left over from planet formation may break planets out of resonance. More ambitiously, the possibility that late dynamical instability of resonant chains formed through convergent migration in a viscous disc is responsible for sculpting the entire period ratio distribution of exoplanet systems has also been raised. @2017MNRAS.470.1750I and @2019arXiv190208772I demonstrate that systems with large numbers of protoplanetary cores in an n-body computation with a prescription for disc-planet interactions may result in a planetary system configuration which becomes unstable after the dissipation of the gas disc.
This behaviour appears to be due to the increasing tendency for chains with a high mass to undergo dynamical instability after the dissipation of the gas disc, as identified by @2012Icar..221..624M. In this letter, motivated by our recent work showing that disc-planet interactions involving intermediate mass planets embedded in inviscid protoplanetary discs leads to stochastic, non-deterministic migration behaviour due to the emergence of vortices in the flow [@2019MNRAS.484..728M], we question the basic premise that convergent migration in protoplanetary gas discs should result in chains of planets in mean-motion resonance, and the consequent tendency to form systems of resonant planets which are stable over Gyr time scales. We construct a scenario where a like-for-like comparison between the convergent migration of a multiplanet system in a viscous and an inviscid disc can be made, and demonstrate that, in contrast to the situation in viscous discs, the ability to form resonant chains of planets is impeded by vortex-modified feedback migration in inviscid discs. Methods and Results =================== ![Planet migration in a viscous disc, compare to Figure \[fig:convsys29a\]. All five planets form a resonant chain, and in long term evolution this configuration tends to be stable.[]{data-label="fig:convsys30a"}](plots/convsys30_a.pdf){width="\columnwidth"} ![Planet nearest neighbour period ratios in a viscous disc, displaying the formation of a resonant chain.[]{data-label="fig:convsys30pratio"}](plots/convsys30_pratio.pdf){width="\columnwidth"} ![Planets in an inviscid disc, compare to Figure \[fig:convsys30a\]. The final configuration consists of three planets that have migrated into the viscous region of the disc in a chain of resonances, and a coorbital pair to the outside in the inviscid region out of resonance. 
Sampling the long term behaviour of this system shows it tends to undergo dynamical instability.[]{data-label="fig:convsys29a"}](plots/convsys29_a.pdf){width="\columnwidth"} Gas disc-planet interaction simulations were performed in two-dimensional vertically integrated models of gas discs with a modified version of [FARGO3D]{} 1.2 [@2016ApJS..223...11B], including an implementation of the energy equation in terms of specific entropy (see Appendix \[sec:entropy\]). Indirect terms for the planets and gas disc were included, the planet potential was smoothed with a Plummer-sphere potential with length 0.4 scale heights, and the disc gravity force felt by the planets was modified by removing the azimuthally symmetric component of the potential to compensate for the neglect of disc self-gravity following @2008ApJ...678..483B. In [FARGO3D]{} the planet orbits are integrated with the built-in 5th order scheme, with an additional planet-planet acceleration time step limit to increase the accuracy of energy conservation during close encounters. The detailed outcomes of these close encounters and three-body interactions are chaotic and sensitive to small perturbations in the initial conditions and the numerical method of integration. We do not include planet-planet collisions. The grid spacing was radially logarithmic, extending radially from $r=0.6$ to $4$, with resolution corresponding to $\sim 24$ cells per scale height in all directions. Damping boundary zones were applied as in @2019MNRAS.484..728M. The azimuthal velocity field was initialised to produce an exact numerical radial force balance, following the method implemented in [FARGO]{}. The runs presented in this work required in total 600 kCPUh. Disc thermodynamics were modelled in the simplest useful form for considering the near-adiabatic thermodynamics of the inner regions of protoplanetary discs in two dimensions.
Thus, we apply a thermal relaxation term in the form used by @2012ApJ...750...34H and @2016ApJ...817..102L, with a timescale derived from an effective optical depth estimate from @1990ApJ...351..632H for an irradiated disc as described by @2012ApJ...757...50D. We adopt the simplified Rosseland mean opacity model of @1997ApJ...486..372B and approximate the Planck mean opacity as being equal to it. In viscous (turbulent) disc regions, we include a subgrid turbulent diffusivity to entropy assuming a turbulent Prandtl number of unity, i.e. equal diffusion of momentum and specific entropy.
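As a minimal sketch of the relaxation term described above, the update below integrates Newtonian cooling $ds/dt = -(s - s_{\rm eq})/\tau$ exactly over one step, so it is unconditionally stable for any ratio of time step to relaxation time; the variable names are ours and do not correspond to the FARGO3D implementation:

```python
import numpy as np

def relax_entropy(s, s_eq, dt, tau):
    """Advance Newtonian thermal relaxation ds/dt = -(s - s_eq)/tau by
    one step dt, using the exact integrating-factor solution so the
    update remains stable even when dt >> tau."""
    return s_eq + (s - s_eq) * np.exp(-dt / tau)

s = np.array([1.0, 3.0, 5.0])       # specific entropy per cell (toy values)
s_eq = np.full(3, 2.0)              # irradiation-equilibrium entropy

s_adiabatic = relax_entropy(s, s_eq, dt=1.0, tau=1e12)    # tau >> dt: ~unchanged
s_isothermal = relax_entropy(s, s_eq, dt=1.0, tau=1e-12)  # tau << dt: ~s_eq
```

The two limiting calls illustrate why the timescale matters physically: long $\tau$ gives near-adiabatic inner-disc behaviour, short $\tau$ pins the gas to the irradiation equilibrium.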
--- abstract: 'We study theoretically the influence of the surface plasmon excitation on the Goos-Hänchen lateral shift of a $p$-polarized Gaussian beam incident obliquely on a dielectric-metal bilayer in the Otto configuration. We find that the lateral shift depends sensitively on the thickness of the metal layer and the width of the incident beam, as well as on the incident angle. Near the incident angle at which surface plasmons are excited, the lateral shift changes from large negative values to large positive values as the thickness of the metal layer increases through a critical value. For wide incident beams, the maximal forward and backward lateral shifts can be as large as several hundred times the wavelength. As the width of the incident Gaussian beam decreases, the magnitude of the lateral shift decreases rapidly, but the ratio of the width of the reflected beam to that of the incident beam, which measures the degree of the deformation of the reflected beam profile, increases. In all cases considered, we find that the reflected beam is split into two parts. We also find that the lateral shift of the transmitted beam is always positive and very weak.' address: - '$^1$ Department of Energy Systems Research and Department of Physics, Ajou University, Suwon 16499, Korea' - '$^2$ School of Physics, Korea Institute for Advanced Study, Seoul 02455, Korea' author: - 'Sangbum Kim$^{1}$ and Kihong Kim$^{1,2}$' title: 'Direct calculation of the strong Goos-Hänchen effect of a Gaussian light beam due to the excitation of surface plasmon polaritons in the Otto configuration' --- Introduction ============ A light beam deviates from the path expected from geometrical optics when it is totally reflected at the interface between two different media. The reflected beam is displaced laterally along the interface, which is called the Goos-Hänchen (GH) shift.
This phenomenon was predicted long ago and first measured experimentally by Goos and Hänchen [@Goos1; @Goos2; @Goos3]. Artmann derived an analytical formula for the GH shift for incident plane waves [@Artmann]. The GH effect occurs in many diverse areas such as acoustics, optics, plasma physics and condensed matter physics [@Lotsch; @Puri]. Earlier works have treated the GH shift in multilayered structures in the Otto or Kretschmann configuration [@Shah1; @Shah2; @Shah3]. Tamir and Bertoni performed a detailed analysis of the electromagnetic field distribution in a leaky-wave structure upon which a Gaussian beam is incident [@Tamir]. It has been demonstrated that the reflected beam displays either a forward or a backward beam shift. An approximate analytical solution has shown that the initial Gaussian beam profile splits into two. The theory of leaky waves has also been applied to acoustic beams incident on a liquid-solid interface, with the aim of presenting a unified theory of the beam shifting effect near the Rayleigh angle [@Bertoni]. The GH effect of a light beam incident on a dielectric slab from the air has been studied with an emphasis on the transmitted beam [@Li]. Lakhtakia has pointed out that the GH shift reverses its direction when $\epsilon < 0$ and $\mu < 0$ in the optically rarer medium [@Lakhtakia]. The enhancement of the GH shift and the control of the reflected field profile have been achieved by adding a defect or cladding layer to photonic crystals [@He; @Wang]. Recently, De Leo [*et al.*]{} have performed an extended study investigating the asymmetric GH effect and derived an expression for the GH shift valid in the region where the Artmann formula diverges [@Leo1; @Leo2; @Leo3; @Leo4]. Light waves confined to the surface of a medium and surface charges oscillating resonantly with the light waves constitute the surface plasmon polaritons (SPPs).
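For a bare two-medium interface, the Artmann formula mentioned above can be evaluated directly by numerically differencing the phase of the Fresnel reflection coefficient. A sketch for a $p$-polarized plane wave totally reflected at a glass-air interface; the refractive indices, angles, and finite-difference step are illustrative choices of ours, not parameters from this paper:

```python
import numpy as np

def gh_shift_p(theta_deg, n1=1.5, n2=1.0, wavelength=1.0, dtheta=1e-6):
    """Goos-Hänchen shift along the interface for a p-polarized plane
    wave totally reflected at an n1 -> n2 interface, via the Artmann
    formula D = -(dphi/dtheta) / (k1 cos theta), where phi is the phase
    of the Fresnel reflection coefficient and k1 = 2 pi n1 / wavelength."""
    def refl_phase(theta):
        ci = np.cos(theta)
        # cos(theta_t) becomes purely imaginary beyond the critical angle:
        ct = np.lib.scimath.sqrt(1.0 - (n1 * np.sin(theta) / n2) ** 2)
        r_p = (n2 * ci - n1 * ct) / (n2 * ci + n1 * ct)
        return np.angle(r_p)

    th = np.radians(theta_deg)
    k1 = 2.0 * np.pi * n1 / wavelength
    dphi = refl_phase(th + dtheta) - refl_phase(th - dtheta)
    return -dphi / (2.0 * dtheta) / (k1 * np.cos(th))

# Critical angle for glass-air is ~41.8 deg; the shift (in units of the
# vacuum wavelength) grows strongly as the incidence approaches it.
shift_50 = gh_shift_p(50.0)
shift_42 = gh_shift_p(42.0)
```

For a bare interface the shift stays at the wavelength scale even near the critical angle; the hundreds-of-wavelengths shifts discussed in this paper require the additional resonant phase variation supplied by SPP excitation in the layered Otto geometry.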
The enhancement of electromagnetic fields near the surface caused by the excitation of SPPs has generated practical applications in sensor technology [@Homola; @Kneipp; @Kurihara]. These applications include thin film probing [@Pockrand], biosensing [@Liedberg] and biological imaging [@Okamoto1]. In the Otto or Kretschmann configuration, the SPPs are excited by attenuated total internal reflection by enhancing the momentum of the incident light [@Yeatman; @Torma]. The excitation of SPPs in the Otto or Kretschmann configuration affects the GH shift profoundly. Early results on the influence of SPPs on the shift of light beams can be found in [@Mazur] and [@Kou]. It has been shown that the interaction of leaky waves with SPPs enhances the GH shift. Results on the excitation of surface waves in the Otto configuration have been reported by Chen [*et al.*]{} [@Chen]. Chuang has conducted an analysis of the behavior of the reflection coefficient for both Otto and Kretschmann configurations [@Chuang]. The zeros and poles of the reflection coefficient move around the complex plane with the change of parameters, such as the beam width, the wavelength, the thickness and the dielectric constants. Zeller [*et al.*]{} have shown that the coupling of an incident wave with the SPP is highly dependent on the thickness of the dielectric sublayer in both the Kretschmann and Otto configuration [@Zeller1; @Zeller2; @Zeller3]. Shadrivov [*et al.*]{} have studied the GH shift in the Otto configuration with the metal sublayer substituted by a left-handed metamaterial [@Shadrivov]. A large GH shift with beam splitting was observed, and the energy transfer between the right- and left-handed materials was demonstrated by numerical simulations. There also exist studies to enhance the GH shift using various hybrid structures containing sublayers of graphene, MoS$_2$ or cytop [@Xiang1; @Xiang2; @Xiang3]. 
Recently, much progress has been made on obtaining a tunable GH shift in the prism-coupling system, by applying an external voltage to a graphene sublayer and other heterostructures [@Farmani1; @Farmani2; @Farmani3; @Xiang4]. Kim [*et al.*]{} have studied the GH shift of incident $p$ waves in the Otto configuration containing a nonlinear dielectric layer and shown that its magnitude can be as large as several hundred times the wavelength at the incident angles where the SPPs are excited [@Kim3]. Furthermore, they have shown that the sign and the size of the GH shift can change very sensitively as the nonlinearity parameter varies. In this paper, we study the strong enhancement of the GH effect for incident Gaussian beams when SPPs are excited at the metal-dielectric interface in the Otto configuration. We examine the influence of varying the thickness of the metal layer and the incident beam width on the GH effect and find optimal configurations for maximal forward and backward lateral shifts. Our theoretical method is based on the invariant imbedding method, using which we transform the wave equation into a set of invariant imbedding equations to create an equivalent initial value problem [@Kim5; @Kim1; @Kim2; @Kim6]. For the simplest case of multilayered structures made of uniform linear media, this method is equivalent to those based on the Fresnel coefficients. The invariant imbedding method has been employed to calculate the GH shift for plane waves incident on nonlinear media [@Kim3; @Kim4]. It can also be applied to the case of graded media. Here we consider the interaction of a Gaussian beam with linear media. More details of our model and method will be presented in the next section. Generalization of the invariant imbedding method to Gaussian beams ================================================================== We assume the layered structure lies in $0 \le z \le L$.
A Gaussian beam with a finite half-width $W$ is incident from the region where $z>L$ at an angle $\theta_i$. For a $p$-polarized beam propagating in the $xz$ plane, the $y$ component of the magnetic field associated with the incident beam at the $z=L$ plane can be written as $${H_y}^{(i)}(x,L) = H_0 \exp \left( -{x^2 \over {W_x}^2} + i k_{x0} x \right),$$ where $W_x$ $( = W / \cos\theta_i)$ is the half-width in the $x$ direction. The center of the incident beam is at $x = 0$. The parameter $k_{x0}$ ($= k_1 \sin\theta_i$) is the $x$ component of the wave vector corresponding to the incident angle $\theta_i$ and $k_1$ is the wave number in the incident region, which corresponds to the prism. The superscript $(i)$ refers to the incident beam. We consider the incident Gaussian beam as a linear combination of plane waves and write its field as $${H_y}^{(i)}(x,L) = {1 \over \sqrt{2\pi}} \int_{-\infty}^{\infty} \tilde{H}\left(k_x\right) \exp \left( i k_x x \right) dk_x,$$ where the Fourier transform $\tilde{H}\left(k_x\right)$ is given by $$\tilde{H}(k_x) = \frac{1}{\sqrt{2}}H_0 W_x\exp\left[-\frac{{W_x}^2}{4}\left( k_x-k_{x0}\right)^2\right].$$ The variable $k_x$ can be parameterized as $k
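The Fourier-transform pair above can be checked numerically: integrating $\tilde{H}(k_x)\exp(ik_x x)$ over $k_x$ with the stated prefactor must reproduce the incident Gaussian profile. A minimal sketch (the values of $H_0$, $W_x$ and $k_{x0}$ are arbitrary test parameters):

```python
import numpy as np

H0, Wx, kx0 = 1.0, 3.0, 4.0        # arbitrary test parameters

def H_inc(x):
    # Incident field at z = L: Gaussian envelope times plane-wave phase
    return H0 * np.exp(-x**2 / Wx**2 + 1j * kx0 * x)

def H_tilde(kx):
    # Fourier amplitude of the Gaussian beam, centered at kx0
    return H0 * Wx / np.sqrt(2) * np.exp(-Wx**2 * (kx - kx0) ** 2 / 4)

# (1/sqrt(2*pi)) * integral of H_tilde(kx) exp(i kx x) dkx == H_inc(x)
kx = np.linspace(kx0 - 12 / Wx, kx0 + 12 / Wx, 40001)
dk = kx[1] - kx[0]
for x in (0.0, 0.7, -2.1):
    recon = np.sum(H_tilde(kx) * np.exp(1j * kx * x)) * dk / np.sqrt(2 * np.pi)
    assert abs(recon - H_inc(x)) < 1e-6
```

The spectrum is concentrated in a band of width $\sim 2/W_x$ around $k_{x0}$, which is why a truncated quadrature window of a few spectral widths already reproduces the beam to high accuracy.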
--- abstract: 'In this paper we give an upper bound on the number of extensions of a triple to a quadruple for the Diophantine $m$-tuples with the property $D(4)$ and confirm the conjecture of uniqueness of such extension in some special cases.' author: - Marija Bliznac Trebješanin title: 'Extension of a Diophantine triple with the property $D(4)$' --- 2010 [*Mathematics Subject Classification:*]{} 11D09, 11D45, 11J86\ Keywords: diophantine tuples, Pell equations, reduction method, linear forms in logarithms. Introduction ============ Let $n\neq0$ be an integer. We call a set of $m$ distinct positive integers a $D(n)$-$m$-tuple, or $m$-tuple with the property $D(n)$, if the product of any two of its distinct elements increased by $n$ is a perfect square. One of the most interesting and most studied questions is how large those sets can be. In the classical case, first studied by Diophantus, when $n=1$, Dujella has proven in [@duje_kon] that a $D(1)$-sextuple does not exist and that there are at most finitely many quintuples. Over the years many authors improved the upper bound for the number of $D(1)$-quintuples, and finally He, Togbé and Ziegler in [@petorke] have given the proof of the nonexistence of $D(1)$-quintuples. For details of the history of the problem with all references one can visit the webpage [@duje_web]. Variants of the problem when $n=4$ or $n=-1$ are also studied frequently. In the case $n=4$ similar conjectures and observations can be made as in the $D(1)$ case. In the light of that observation, Filipin and the author have proven that a $D(4)$-quintuple also does not exist. In both cases $n=1$ and $n=4$, the conjecture about the uniqueness of the extension of a triple to a quadruple with a larger element is still open. In the case $n=-1$, the conjecture of the nonexistence of a quadruple is studied; for a survey of the problem one can see [@cipu]. A $D(4)$-pair can be extended with a larger element $c$ to form a $D(4)$-triple.
The smallest such $c$ is $c=a+b+2r$, where $r=\sqrt{ab+4}$, and such a triple is often called a regular triple, or in the $D(1)$ case it is also called an Euler triple. There are infinitely many extensions of a pair to a triple and they can be studied by finding solutions of the Pellian equation $$\label{par_trojka} bs^2-at^2=4(b-a),$$ where $s$ and $t$ are positive integers defined by $ac+4=s^2$ and $bc+4=t^2.$ For a $D(4)$-triple $\{a,b,c\}$, $a<b<c$, we define $$d_{\pm}=d_{\pm}(a,b,c)=a+b+c+\frac{1}{2}\left(abc\pm \sqrt{(ab+4)(ac+4)(bc+4)}\right),$$ and it is easy to check that $\{a,b,c,d_{+}\}$ is a $D(4)$-quadruple, which we will call a regular quadruple, and if $d_{-}\neq 0$ then $\{a,b,c,d_{-}\}$ is also a regular $D(4)$-quadruple with $d_{-}<c$. It is conjectured that any $D(4)$-quadruple is regular. Results which support this conjecture in some special cases can be found for example in [@dujram], [@bf], [@fil_par], [@fht] and some of those results are stated in the next section and will be used as known results. In [@fm] Fujita and Miyazaki approached this conjecture in the $D(1)$ case differently – they examined how many possibilities there are to extend a fixed Diophantine triple with a larger integer. They improved their result from [@fm] further in the joint work [@cfm] with Cipu where they have shown that any triple can be extended to a quadruple in at most $8$ ways. In this paper we will follow the approach and ideas from [@fm] and [@cfm] to prove similar results for extensions of a $D(4)$-triple. Usually, the numerical bounds and coefficients are slightly better in the $D(1)$ case, which can be seen after comparing Theorem \[teorem1.5\] and [@fm Theorem 1.5]. To overcome this problem we have made preparations similar to those in [@nas2] by proving a better numerical lower bound on the element $b$ in an irregular $D(4)$-quadruple, and still many special cases needed separate consideration and proof.
Let $\{a,b,c\}$ be a $D(4)$-triple which can be extended to a quadruple with an element $d$. Then there exist positive integers $x,y,z$ such that $$ad+4=x^2,\quad bd+4=y^2, \quad cd+4=z^2.$$ By expressing $d$ from these equations we get the following system of generalized Pellian equations $$\begin{aligned} cx^2-az^2&=4(c-a),\label{prva_pelova_s_a}\\ cy^2-bz^2&=4(c-b).\label{druga_pelova_s_b}\end{aligned}$$ There exist only finitely many fundamental solutions $(z_0,x_0)$ and $(z_1,y_1)$ to these Pellian equations and any solution to the system can be expressed as $z=v_m=w_n$, where $m$ and $n$ are non-negative integers and ${v_m}$ and ${w_n}$ are recurrence sequences defined by $$\begin{aligned} &v_0=z_0,\ v_1=\frac{1}{2}\left(sz_0+cx_0\right),\ v_{m+2}=sv_{m+1}-v_{m},\\ &w_0=z_1,\ w_1=\frac{1}{2}\left(tz_1+cy_1 \right),\ w_{n+2}=tw_{n+1}-w_n.\\\end{aligned}$$ The initial terms of these sequences were determined by Filipin in [@fil_xy4 Lemma 9] and one of the results of this paper is improving that lemma by eliminating the case where $m$ and $n$ are even and $|z_0|$ is not explicitly determined. \[teorem\_izbacen\_slucaj\] Suppose that $\{a,b,c,d\}$ is a $D(4)$-quadruple with $a<b<c<d$ and that $v_m$ and $w_n$ are defined as before. 1. If equation $v_{2m}=w_{2n}$ has a solution, then $z_0=z_1$ and $|z_0|=2$ or $|z_0|=\frac{1}{2}(cr-st)$. 2. If equation $v_{2m+1}=w_{2n}$ has a solution, then $|z_0|=t$, $|z_1|=\frac{1}{2}(cr-st)$ and $z_0z_1<0$. 3. If equation $v_{2m}=w_{2n+1}$ has a solution, then $|z_1|=s$, $|z_0|=\frac{1}{2}(cr-st)$ and $z_0z_1<0$. 4. If equation $v_{2m+1}=w_{2n+1}$ has a solution, then $|z_0|=t$, $|z_1|=s$ and $z_0z_1>0$. Moreover, if $d>d_+$, case $ii)$ cannot occur. We also improved a bound on $c$ in terms of $b$ for which an irregular extension might exist. \[teorem1.5\] Let $\{a,b,c,d\}$ be a $D(4)$-quadruple and $a<b<c<d$.
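The regular extensions $d_{\pm}$ are easy to compute explicitly. A minimal sketch with the illustrative $D(4)$-pair $\{1,5\}$ (chosen here purely as an example): the regular extension of the pair is the triple $\{1,5,12\}$, for which $d_-=0$ and $d_+=96$ gives the regular quadruple $\{1,5,12,96\}$.

```python
from math import isqrt

def is_square(n):
    return n >= 0 and isqrt(n) ** 2 == n

def is_D4_tuple(t):
    # every product of two distinct elements increased by 4 is a square
    return all(is_square(t[i] * t[j] + 4)
               for i in range(len(t)) for j in range(i + 1, len(t)))

a, b = 1, 5                       # a D(4)-pair: ab + 4 = 9 = 3^2
r = isqrt(a * b + 4)
c = a + b + 2 * r                 # regular extension of the pair
assert (a, b, c) == (1, 5, 12) and is_D4_tuple((a, b, c))

# d_pm = a + b + c + (abc +- sqrt((ab+4)(ac+4)(bc+4))) / 2
root = isqrt((a * b + 4) * (a * c + 4) * (b * c + 4))
d_plus = a + b + c + (a * b * c + root) // 2
d_minus = a + b + c + (a * b * c - root) // 2
assert (d_minus, d_plus) == (0, 96)
assert is_D4_tuple((a, b, c, d_plus))   # regular quadruple {1, 5, 12, 96}
```

Here $d_-$ vanishes, which is consistent with the convention above that $\{a,b,c,d_-\}$ is a quadruple only when $d_-\neq 0$.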
Then i) if $b<2a$ and $c\geq 890b^4$ or ii) if $2a\leq b\leq 12 a$ and $c\geq 1613b^4$ or iii) if $b>12a$ and $c\geq 39247 b^4$ we must have $d=d_+$. \[teorem1.6\] Let $\{a,b,c,d\}$ be a $D(4)$-quadruple and $a<b<c<d_+<d$. Then any $D(4)$-quadruple $\{e,
--- abstract: | By considering a certain univalent function in the open unit disk $\mathbb{U} $, that maps $\mathbb{U}$ onto a strip domain, we introduce a new class of analytic and close-to-convex functions by means of a certain non-homogeneous Cauchy-Euler-type differential equation. We determine the coefficient bounds for functions in this new class. Relevant connections of some of the results obtained with those in earlier works are also provided. address: 'Kocaeli University, Faculty of Aviation and Space Sciences,Arslanbey Campus, 41285 Kartepe-Kocaeli, TURKEY' author: - Serap BULUT title: 'Coefficient bounds for close-to-convex functions associated with vertical strip domain' --- Introduction ============ Let $\mathcal{A}$ denote the class of functions of the form$$f(z)=z+\sum_{n=2}^{\infty }a_{n}z^{n} \label{1.1}$$which are analytic in the open unit disk $\mathbb{U}=\left\{ z:z\in \mathbb{C}\;\text{and}\;\left\vert z\right\vert <1\right\} $. We also denote by $\mathcal{S}$ the class of all functions in the normalized analytic function class $\mathcal{A}$ which are univalent in $\mathbb{U}$. 
For two functions $f$ and $g$, analytic in $\mathbb{U}$, we say that the function $f$ is subordinate to $g$ in $\mathbb{U}$, and write$$f\left( z\right) \prec g\left( z\right) \qquad \left( z\in \mathbb{U}\right) ,$$if there exists a Schwarz function $\omega $, analytic in $\mathbb{U}$, with$$\omega \left( 0\right) =0\qquad \text{and\qquad }\left\vert \omega \left( z\right) \right\vert <1\text{\qquad }\left( z\in \mathbb{U}\right)$$such that$$f\left( z\right) =g\left( \omega \left( z\right) \right) \text{\qquad }\left( z\in \mathbb{U}\right) .$$Indeed, it is known that$$f\left( z\right) \prec g\left( z\right) \quad \left( z\in \mathbb{U}\right) \Rightarrow f\left( 0\right) =g\left( 0\right) \text{\quad and\quad }f\left( \mathbb{U}\right) \subset g\left( \mathbb{U}\right) .$$Furthermore, if the function $g$ is univalent in $\mathbb{U}$, then we have the following equivalence$$f\left( z\right) \prec g\left( z\right) \quad \left( z\in \mathbb{U}\right) \Leftrightarrow f\left( 0\right) =g\left( 0\right) \text{\quad and\quad }f\left( \mathbb{U}\right) \subset g\left( \mathbb{U}\right) .$$ A function $f\in \mathcal{A}$ is said to be starlike of order $\alpha $$\left( 0\leq \alpha <1\right) $, if it satisfies the inequality$$\Re \left( \frac{zf^{\prime }(z)}{f(z)}\right) >\alpha \qquad \left( z\in \mathbb{U}\right) .$$We denote the class which consists of all functions $f\in \mathcal{A}$ that are starlike of order $\alpha $ by $\mathcal{S}^{\ast }(\alpha )$. It is well-known that $\mathcal{S}^{\ast }(\alpha )\subset \mathcal{S}^{\ast }(0)=\mathcal{S}^{\ast }\subset \mathcal{S}.$ Let $0\leq \alpha ,\delta <1.$ A function $f\in \mathcal{A}$ is said to be close-to-convex of order $\alpha $ and type $\delta $ if there exists a function $g\in \mathcal{S}^{\ast }\left( \delta \right) $ such that the inequality$$\Re \left( \frac{zf^{\prime }(z)}{g(z)}\right) >\alpha \qquad \left( z\in \mathbb{U}\right)$$holds. 
We denote the class which consists of all functions $f\in \mathcal{A}$ that are close-to-convex of order $\alpha $ and type $\delta $ by $\mathcal{C}(\alpha ,\delta )$. This class was introduced by Libera [@L]. In particular, when $\delta =0$ we have the class $\mathcal{C}(\alpha ,0)=\mathcal{C}(\alpha )$ of close-to-convex functions of order $\alpha $, and also we get the class $\mathcal{C}(0,0)=\mathcal{C}$ of close-to-convex functions introduced by Kaplan [@K]. It is well-known that $\mathcal{S}^{\ast }\subset \mathcal{C}\subset \mathcal{S}$. Furthermore a function $f\in \mathcal{A}$ is said to be in the class $\mathcal{M}\left( \beta \right) $$\left( \beta >1\right) $ if it satisfies the inequality$$\Re \left( \frac{zf^{\prime }(z)}{f(z)}\right) <\beta \qquad \left( z\in \mathbb{U}\right) .$$This class was introduced by Uralegaddi et al. [@UGS]. Motivated by the classes $\mathcal{S}^{\ast }(\alpha )$ and $\mathcal{M}\left( \beta \right) $, Kuroki and Owa [@KO] introduced the subclass $\mathcal{S}\left( \alpha ,\beta \right) $ of analytic functions $f\in \mathcal{A}$ which is given by Definition $\ref{dfn1}$ below. \[dfn1\](see [@KO]) Let $\mathcal{S}\left( \alpha ,\beta \right) $ be a class of functions $f\in \mathcal{A}$ which satisfy the inequality$$\alpha <\Re \left( \frac{zf^{\prime }\left( z\right) }{f\left( z\right) }\right) \mathbb{<\beta }\qquad \left( z\in \mathbb{U}\right)$$for some real number $\alpha \;\left( \alpha <1\right) $ and some real number $\beta \;\left( \beta >1\right) .$ The class $\mathcal{S}\left( \alpha ,\beta \right) $ is non-empty. For example, the function $f\in \mathcal{A}$ given by$$f(z)=z\exp \left\{ \frac{\beta -\alpha }{\pi }i\int_{0}^{z}\frac{1}{t}\log \left( \frac{1-e^{2\pi i\frac{1-\alpha }{\beta -\alpha }}t}{1-t}\right) dt\right\}$$is in the class $\mathcal{S}\left( \alpha ,\beta \right) $.
Also for $f\in \mathcal{S}\left( \alpha ,\beta \right) $, if $\alpha \geq 0$ then $f\in \mathcal{S}^{\ast }(\alpha )$ in $\mathbb{U}$, which implies that $f\in \mathcal{S}.$ \[lm1\][@KO] Let $f\in \mathcal{A}$ and $\alpha <1<\beta $. Then $f\in \mathcal{S}\left( \alpha ,\beta \right) $ if and only if$$\frac{zf^{\prime }\left( z\right) }{f\left( z\right) }\prec 1+\frac{\beta -\alpha }{\pi }i\log \left( \frac{1-e^{2\pi i\frac{1-\alpha }{\beta -\alpha }}z}{1-z}\right) \qquad \left( z\in \mathbb{U}\right) .$$ Lemma $\ref{lm1}$ means that the function $f_{\alpha ,\beta }:\mathbb{U\rightarrow C}$ defined by$$f_{\alpha ,\beta }(z)=1+\frac{\beta -\alpha }{\pi }i\log \left( \frac{1-e^{2\pi i\frac{1-\alpha }{\beta -\alpha }}z}{1-z}\right) \label{1.2}$$is analytic in $\mathbb{U}$ with $f_{\alpha ,\beta }(0)=1$ and maps the unit disk $\mathbb{U}$ onto the vertical strip domain$$\Omega _{\alpha ,\beta }=\left\{ w\in \mathbb{C}:\alpha <\Re \left( w\right) \mathbb{<\beta }\right\} \label{1.x}$$conformally. We note that the function $f_{\alpha ,\beta }$ defined by $\left( \ref{1.2}\right) $ is a convex univalent function in $\mathbb{U}$ and has the form$$f_{\alpha ,\beta }(z)=1+\sum_{n=1}^{\infty }B_{n}z^{n},$$where$$B
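One can verify numerically that $f_{\alpha ,\beta }$ satisfies the normalization $f_{\alpha ,\beta }(0)=1$ and maps $\mathbb{U}$ into the vertical strip $\Omega _{\alpha ,\beta }$. A minimal sketch (the values $\alpha =0$, $\beta =2$ are sample choices satisfying $\alpha <1<\beta $, and the principal branch of the logarithm is used):

```python
import cmath, random

alpha, beta = 0.0, 2.0            # sample values with alpha < 1 < beta

def f(z):
    # f(z) = 1 + i (beta-alpha)/pi * log((1 - e^{2 pi i (1-alpha)/(beta-alpha)} z) / (1 - z))
    w = cmath.exp(2j * cmath.pi * (1 - alpha) / (beta - alpha))
    return 1 + 1j * (beta - alpha) / cmath.pi * cmath.log((1 - w * z) / (1 - z))

assert f(0) == 1                  # normalization f_{alpha,beta}(0) = 1

random.seed(0)
for _ in range(1000):
    # random points in the disk of radius 0.95
    z = cmath.rect(0.95 * random.random(), 2 * cmath.pi * random.random())
    assert alpha < f(z).real < beta   # image lies in the vertical strip
```

For these sample values the exponential factor equals $-1$, so the argument of the logarithm is $(1+z)/(1-z)$, a Möbius map of $\mathbb{U}$ onto the right half-plane, which makes the strip property transparent.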
--- abstract: 'In contemporary applied and computational mathematics, a frequent challenge is to bound the expectation of the spectral norm of a sum of independent random matrices. This quantity is controlled by the norm of the expected square of the random matrix and the expectation of the maximum squared norm achieved by one of the summands; there is also a weak dependence on the dimension of the random matrix. The purpose of this paper is to give a complete, elementary proof of this important, but underappreciated, inequality.' author: - 'Joel A. Tropp' date: '15 June 2015.' title: | The Expected Norm of a Sum of Independent Random Matrices:\ An Elementary Approach --- Motivation ========== Over the last decade, random matrices have become ubiquitous in applied and computational mathematics. As this trend accelerates, more and more researchers must confront random matrices as part of their work. Classical random matrix theory can be difficult to use, and it is often silent about the questions that come up in modern applications. As a consequence, it has become imperative to develop and disseminate new tools that are easy to use and that apply to a wide range of random matrices. Matrix Concentration Inequalities --------------------------------- Matrix concentration inequalities are among the most popular of these new methods. For a random matrix ${{\bm{Z}}}$ with appropriate structure, these results use simple parameters associated with the random matrix to provide bounds of the form $${\operatorname{\mathbb{E}}}{\left\Vert { {{\bm{Z}}} - {\operatorname{\mathbb{E}}}{{\bm{Z}}} } \right\Vert} \quad \leq \quad \dots \quad\text{and}\quad {{\mathbb}{P}\left\{{ {\left\Vert { {{\bm{Z}}} - {\operatorname{\mathbb{E}}}{{\bm{Z}}} } \right\Vert} \geq t }\right\}} \quad\leq\quad \dots$$ where ${\left\Vert {\cdot} \right\Vert}$ denotes the spectral norm, also known as the $\ell_2$ operator norm. 
These tools have already found a place in a huge number of mathematical research fields, including: - numerical linear algebra [@Tro11:Improved-Analysis] - numerical analysis [@MB14:Far-Field-Compression] - uncertainty quantification [@CG14:Computing-Active] - statistics [@Kol11:Oracle-Inequalities] - econometrics [@CC13:Optimal-Uniform] - approximation theory [@CDL13:Stability-Accuracy] - sampling theory [@BG13:Relevant-Sampling] - machine learning [@DKC13:High-Dimensional-Gaussian; @LSS+14:Randomized-Nonlinear] - learning theory [@FSV12:Learning-Functions; @MKR12:PAC-Bayesian] - mathematical signal processing [@CBSW14:Coherent-Matrix] - optimization [@CSW12:Linear-Matrix] - computer graphics and vision [@CGH14:Near-Optimal-Joint] - quantum information theory [@Hol12:Quantum-Systems] - theory of algorithms [@HO14:Pipage-Rounding; @CKMP14:Solving-SDD] and - combinatorics [@Oli10:Spectrum-Random]. These references are chosen more or less at random from a long menu of possibilities. See the monograph [@Tro15:Introduction-Matrix] for an overview of the main results on matrix concentration, many detailed applications, and additional background references. The Expected Norm ----------------- The purpose of this paper is to provide a complete proof of the following important, but underappreciated, theorem. This result is adapted from [@CGT12:Masked-Sample Thm. A.1].
\[thm:main\] Consider an independent family $\{ {{\bm{S}}}_1, \dots, {{\bm{S}}}_n \}$ of random $d_1 \times d_2$ complex-valued matrices with ${\operatorname{\mathbb{E}}}{{\bm{S}}}_i = {{\bm{0}}}$ for each index $i$, and define $$\label{eqn:indep-sum} {{\bm{Z}}} := \sum_{i=1}^n {{\bm{S}}}_i.$$ Introduce the matrix variance parameter $$\label{eqn:variance-param} \begin{aligned} v({{\bm{Z}}}) :=& \max\left\{ {\left\Vert { {\operatorname{\mathbb{E}}}\big[ {{\bm{ZZ}}}^{*}\big] } \right\Vert}, \ {\left\Vert { {\operatorname{\mathbb{E}}}\big[ {{\bm{Z}}}^{*}{{\bm{Z}}} \big] } \right\Vert} \right\} \\ =& \max\left\{ {\left\Vert { \sum_i {\operatorname{\mathbb{E}}}\big[ {{\bm{S}}}_i {{\bm{S}}}_i^{*}\big] } \right\Vert}, \ {\left\Vert { \sum_i {\operatorname{\mathbb{E}}}\big[ {{\bm{S}}}_i^{*}{{\bm{S}}}_i \big] } \right\Vert} \right\} \end{aligned}$$ and the large deviation parameter $$\label{eqn:large-dev-param} L := \left( {\operatorname{\mathbb{E}}}\max\nolimits_i {{\left\Vert {{{\bm{S}}}_i} \right\Vert}^2} \right)^{1/2}.$$ Define the dimensional constant $$\label{eqn:dimensional} C({{\bm{d}}}) := C(d_1, d_2) := 4 \cdot \big(1 + 2\lceil \log (d_1 + d_2) \rceil \big).$$ Then we have the matching estimates $$\label{eqn:main-ineqs} \sqrt{c \cdot v({{\bm{Z}}})} \ +\ c \cdot L \quad\leq\quad \left( {\operatorname{\mathbb{E}}}{{\left\Vert {{{\bm{Z}}}} \right\Vert}^2} \right)^{1/2} \quad\leq\quad \sqrt{C({{\bm{d}}}) \cdot v({{\bm{Z}}})}\ +\ C({{\bm{d}}}) \cdot L.$$ In the lower inequality, we can take $c := 1/4$. The symbol ${\left\Vert {\cdot} \right\Vert}$ denotes the $\ell_2$ operator norm, also known as the spectral norm, and ${}^*$ refers to the conjugate transpose operation. The map $\lceil \cdot \rceil$ returns the smallest integer that exceeds its argument. The proof of this result occupies the bulk of this paper. Most of the page count is attributed to a detailed presentation of the required background material from linear algebra and probability. 
We have based the argument on the most elementary considerations possible, and we have tried to make the work self-contained. Once the reader has digested these ideas, the related—but more sophisticated —approach in the paper [@MJCFT14:Matrix-Concentration] should be accessible. Discussion ---------- Before we continue, some remarks about Theorem \[thm:main\] are in order. First, although it may seem restrictive to focus on independent sums, as in , this model captures an enormous number of useful examples. See the monograph [@Tro15:Introduction-Matrix] for justification. We have chosen the term *variance parameter* because the quantity  is a direct generalization of the variance of a scalar random variable. The passage from the first formula to the second formula in  is an immediate consequence of the assumption that the summands ${{\bm{S}}}_i$ are independent and have zero mean (see Section \[sec:upper\]). We use the term *large-deviation parameter* because the quantity  reflects the part of the expected norm of the random matrix that is attributable to one of the summands taking an unusually large value. In practice, both parameters are easy to compute using matrix arithmetic and some basic probabilistic considerations. In applications, it is common that we need high-probability bounds on the norm of a random matrix. Typically, the bigger challenge is to estimate the expectation of the norm, which is what Theorem \[thm:main\] achieves. Once we have a bound for the expectation, we can use scalar concentration inequalities, such as [@BLM13:Concentration-Inequalities Thm. 6.10], to obtain high-probability bounds on the deviation between the norm and its mean value. We have stated Theorem \[thm:main\] as a bound on the second moment of ${\left\Vert {{{\bm{Z}}}} \right\Vert}$ because this is the most natural form of the result. 
Equivalent bounds hold for the first moment: $$\sqrt{c' \cdot v({{\bm{Z}}})} \ +\ c' \cdot L \quad\leq\quad {\operatorname{\mathbb{E}}}{\left\Vert { {{\bm{Z}}} } \right\Vert} \quad\leq\quad \sqrt{C({{\bm{d}}}) \cdot v({{\bm{Z}}})}\ +\ C({{\bm{d}}}) \cdot
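The two-sided estimate can be probed empirically. The following sketch is a hypothetical example, not taken from the paper: a Rademacher series $Z=\sum_i \varepsilon_i A_i$ with fixed matrices $A_i$, for which the parameters $v(Z)$ and $L$ can be computed exactly while $\operatorname{\mathbb{E}}\Vert Z\Vert^2$ is estimated by Monte Carlo; the theorem's inequalities then hold with substantial room to spare.

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2, n = 3, 5, 10
A = rng.standard_normal((n, d1, d2))       # fixed matrices A_i

# Z = sum_i eps_i A_i, with eps_i independent Rademacher signs.
# Variance parameter v(Z), exact since E[eps_i eps_j] = delta_ij:
v = max(np.linalg.norm(sum(Ai @ Ai.T for Ai in A), 2),
        np.linalg.norm(sum(Ai.T @ Ai for Ai in A), 2))
# Large-deviation parameter L (deterministic here, since eps_i^2 = 1):
L = max(np.linalg.norm(Ai, 2) for Ai in A)

# Monte Carlo estimate of E ||Z||^2 (spectral norm)
est = np.mean([np.linalg.norm((rng.choice([-1, 1], n)[:, None, None] * A).sum(0), 2) ** 2
               for _ in range(2000)])

Cd = 4 * (1 + 2 * np.ceil(np.log(d1 + d2)))    # dimensional constant C(d)
upper = (np.sqrt(Cd * v) + Cd * L) ** 2
lower = (np.sqrt(v / 4) + L / 4) ** 2          # lower bound with c = 1/4
assert lower <= est <= upper
```

For this summand structure the matrices $\sum_i A_iA_i^*$ are sums of positive semidefinite terms, so $L^2 \le v(Z)$ automatically, which is why the lower bound is comfortably satisfied.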
--- abstract: 'Using the DPW method, we construct genus zero Alexandrov-embedded constant mean curvature (greater than one) surfaces with any number of Delaunay ends in hyperbolic space.' author: - Thomas Raujouan title: 'Constant mean curvature $n$-noids in hyperbolic space' --- Introduction {#introduction .unnumbered} ============ In [@dpw], Dorfmeister, Pedit and Wu introduced a loop group method (the DPW method) for constructing harmonic maps from a Riemann surface into a symmetric space. As a consequence, their method provides a Weierstrass-type representation of constant mean curvature surfaces (CMC) in Euclidean space ${\mathbb{R}}^3$, three-dimensional sphere ${\mathbb{S}}^3$, or hyperbolic space ${\mathbb{H}}^3$. Many examples have been constructed (see for example [@newcmc; @dw; @kkrs; @dik; @heller1; @heller2]). Among them, Traizet [@nnoids; @minoids] showed how the DPW method in ${\mathbb{R}}^3$ can construct genus zero $n$-noids with Delaunay ends (as Kapouleas did with partial differential equations techniques in [@kapouleas]) and glue half-Delaunay ends to minimal surfaces (as did Mazzeo and Pacard in [@pacard], also with PDE techniques). A natural question is whether these constructions can be carried out in ${\mathbb{H}}^3$. Although properly embedded CMC annuli of mean curvature $H>1$ in ${\mathbb{H}}^3$ are well-known since the work of Korevaar, Kusner, Meeks and Solomon [@meeks], no construction similar to [@kapouleas] or [@pacard] can be found in the literature. This paper uses the DPW method in ${\mathbb{H}}^3$ to construct these surfaces. The two resulting theorems are as follows.
\[theoremConstructionNnoids\] Given a point $p\in{\mathbb{H}}^3$, $n\geq 3$ distinct unit vectors $u_1, \cdots, u_n$ in the tangent space of ${\mathbb{H}}^3$ at $p$ and $n$ non-zero real weights $\tau_1,\cdots,\tau_n$ satisfying the balancing condition $$\label{eqBalancingNnoids} \sum_{i=1}^{n}\tau_i u_i = 0$$ and given $H>1$, there exists a smooth $1$-parameter family of CMC $H$ surfaces $\left(M_t\right)_{0<t<T}$ with genus zero, $n$ Delaunay ends and the following properties: 1. Denoting by $w_{i,t}$ the weight of the $i$-th Delaunay end, $$\lim\limits_{t\to 0} \frac{w_{i,t}}{t} = \tau_i.$$ 2. Denoting by $\Delta_{i,t}$ the axis of the $i$-th Delaunay end, $\Delta_{i,t}$ converges to the oriented geodesic through the point $p$ in the direction of $u_i$. 3. If all the weights $\tau_i$ are positive, then $M_t$ is Alexandrov-embedded. 4. If all the weights $\tau_i$ are positive and if for all $i\neq j\in[1,n]$, the angle $\theta_{ij}$ between $u_i$ and $u_j$ satisfies $$\label{eqAnglesNnoid} \left| \sin\frac{\theta_{ij}}{2} \right|>\frac{\sqrt{H^2-1}}{2H},$$ then $M_t$ is embedded. \[theoremConstructionMinoids\] Let $M_0\subset{\mathbb{R}}^3$ be a non-degenerate minimal $n$-noid with $n\geq 3$ and let $H>1$. There exists a smooth family of CMC $H$ surfaces $\left(M_t\right)_{0<|t|<T}$ in ${\mathbb{H}}^3$ such that 1. The surfaces $M_t$ have genus zero and $n$ Delaunay ends. 2. After a suitable blow-up, $M_t$ converges to $M_0$ as $t$ tends to $0$. 3. If $M_0$ is Alexandrov-embedded, then all the ends of $M_t$ are of unduloidal type if $t>0$ and of nodoidal type if $t<0$. Moreover, $M_t$ is Alexandrov-embedded if $t>0$. Following the proofs of [@nnoids; @minoids] gives an effective strategy to construct the desired CMC surfaces $M_t$. This is done in Sections \[sectionNnoids\] and \[sectionMinoids\]. However, showing that $M_t$ is Alexandrov-embedded requires a precise knowledge of its ends.
This is the purpose of the main theorem (Section \[sectionPerturbedDelaunayImmersions\], Theorem \[theoremPerturbedDelaunay\]). We consider a family of holomorphic perturbations of the data giving rise via the DPW method to a half-Delaunay embedding $f_0: {\mathbb{D}}^*\subset {\mathbb{C}}\longrightarrow {\mathbb{H}}^3$ and show that the perturbed induced surfaces $f_t({\mathbb{D}}^*)$ are also embedded. Note that the domain on which the perturbed immersions are defined does not depend on the parameter $t$, which is stronger than $f_t$ having an embedded end, and is critical for showing that the surfaces $M_t$ are Alexandrov-embedded. The essential hypothesis on the perturbations is that they do not occasion a period problem on the domain ${\mathbb{D}}^*$ (which is not simply connected). The proof relies on the Frobenius method for linear differential systems with regular singular points. Although this idea has been used in ${\mathbb{R}}^3$ by Kilian, Rossman, Schmitt [@krs] and [@raujouan], the case of ${\mathbb{H}}^3$ generates two extra resonance points that are unavoidable and make their results inapplicable. Our solution is to extend the Frobenius method to loop-group-valued differential systems. ![*Theorem \[theoremConstructionNnoids\] ensures the existence of $n$-noids with small necks. For $H>1$ small enough ($H\simeq1.5$ on the picture), there exist embedded $n$-noids with more than six ends.*](7noidmainlevee.png){width="7cm"} Delaunay surfaces in ${\mathbb{H}}^3$ via the DPW method {#sectionNotations} ======================================================== This Section fixes the notation and recalls the DPW method in ${\mathbb{H}}^3$. Hyperbolic space ---------------- #### Matrix model. Let ${\mathbb{R}}^{1,3}$ denote the space ${\mathbb{R}}^4$ with the Lorentzian metric ${\left\langle x, x \right\rangle} = -x_0^2+x_1^2+x_2^2+x_3^2$.
Hyperbolic space is the subset ${\mathbb{H}}^3 $ of vectors $x\in{\mathbb{R}}^{1,3}$ such that ${\left\langle x, x \right\rangle} = -1$ and $x_0>0$, with the metric induced by ${\mathbb{R}}^{1,3}$. The DPW method constructs CMC immersions into a matrix model of ${\mathbb{H}}^3$. Consider the identification $$x=(x_0,x_1,x_2,x_3)\in{\mathbb{R}}^{1,3} \simeq X = \begin{pmatrix} x_0+x_3 & x_1+ix_2\\ x_1-ix_2 & x_0 - x_3 \end{pmatrix}\in {\mathcal{H}}_2$$ where ${\mathcal{H}}_2:=\{M\in {\mathcal{M}}(2,{\mathbb{C}})\mid M^*=M \}$ denotes the Hermitian matrices. In this model, ${\left\langle X, X \right\rangle} = -\det X$ and ${\mathbb{H}}^3$ is identified with the set ${\mathcal{H}}_2^{++}\cap {\mathrm{SL}}(2,{\mathbb{C}})$ of Hermitian positive definite matrices with determinant $1$. This fact enjoins us to write $${\mathbb{H}}^3 = \left\{ F{F}^* \mid F\in{\mathrm{SL}}(2,{\mathbb{C}}) \right\}.$$ Setting $$\label{eqPauliMatrices} \sigma_1 = \begin{pmatrix} 0& 1\\ 1 & 0 \end{pmatrix}, \qquad \sigma_2 = \begin{pmatrix} 0& i\\ -i & 0 \end{pmatrix}, \qquad \sigma_3 = \begin{pmatrix} 1& 0\\ 0 & -1 \end{pmatrix},$$ gives us an orthonormal basis $\left(\sigma_1,\sigma_2,\sigma_3\right)$ of the tangent space $T
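The identification above is straightforward to check numerically: $\langle x,x\rangle = -\det X$ for the associated Hermitian matrix $X$, and any $F\in\mathrm{SL}(2,\mathbb{C})$ produces a Hermitian positive definite matrix $FF^*$ of determinant $1$. A minimal sketch (the sample vector and matrix are arbitrary):

```python
import numpy as np

def to_matrix(x0, x1, x2, x3):
    # Hermitian matrix X associated with x in R^{1,3}
    return np.array([[x0 + x3, x1 + 1j * x2],
                     [x1 - 1j * x2, x0 - x3]])

x = (2.0, 0.5, -1.0, 0.7)                      # arbitrary sample vector
X = to_matrix(*x)
minkowski = -x[0]**2 + x[1]**2 + x[2]**2 + x[3]**2
assert np.allclose(X, X.conj().T)              # X is Hermitian
assert np.isclose(np.linalg.det(X).real, -minkowski)   # <x,x> = -det X

# Any F in SL(2,C) gives a point F F* of H^3: Hermitian,
# positive definite and of determinant 1.
F = np.array([[1.0 + 0.5j, 0.3], [0.2j, 1.0]])
F = F / np.sqrt(np.linalg.det(F))              # normalize so det F = 1
P = F @ F.conj().T
assert np.allclose(P, P.conj().T)
assert np.isclose(np.linalg.det(P).real, 1.0)
assert np.all(np.linalg.eigvalsh(P) > 0)
```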
--- abstract: 'We study a topological sigma-model ($A$-model) in the case when the target space is an ($m_0|m_1$)-dimensional supermanifold. We prove under certain conditions that such a model is equivalent to an $A$-model having an ($m_0-m_1$)-dimensional manifold as a target space. We use this result to prove that in the case when the target space of $A$-model is a complete intersection in a toric manifold, this $A$-model is equivalent to an $A$-model having a toric supermanifold as a target space.' author: - | Albert Schwarz[^1]\ Department of Mathematics, University of California,\ Davis, CA 95616\ ASSCHWARZ@UCDAVIS.EDU title: 'Sigma-models having supermanifolds as target spaces.' --- Our goal is to study a two-dimensional topological $\sigma$-model ($A$-model). Sigma-models having supermanifolds as target spaces were considered in an interesting paper \[5\]. However, the approach of \[5\] leads to the conclusion that in the case when the target space of $A$-model is a supermanifold the contribution of rational curves to correlation functions vanishes (i.e. these functions are essentially trivial). In our approach an $A$-model having a $(m_0|m_1)$-dimensional supermanifold as a target space is not trivial, but it is equivalent to an $A$-model with $(m_0-m_1)$-dimensional target space. We hope that this equivalence can be used to better understand mirror symmetry, because it permits us to replace the most interesting target spaces with supermanifolds having non-trivial Killing vectors and to use $T$-duality. We start with a definition of $A$-model given in \[1\]. This definition can be applied to the case when the target space is a complex Kähler supermanifold $M$. Repeating the consideration of \[1\] we see that the correlation functions can be expressed in terms of rational curves in $M$, i.e. holomorphic maps of $CP^1$ into $M$.
(We restrict ourselves to the genus $0$ case and assume that the situation is generic; these restrictions will be lifted in a forthcoming paper \[8\]). Let us consider for simplicity the case when the ($m_0|m_1$)-dimensional complex supermanifold $M$ corresponds to an $m_1$-dimensional holomorphic vector bundle $\alpha$ over an $m_0$-dimensional complex manifold $M_0$ (i.e. $M$ can be obtained from the total space of the bundle $\alpha$ by means of reversion of parity of the fibres.) The natural map of $M$ onto $M_0$ will be denoted by $\pi$. To construct the correlation functions of the $A$-model with the target space $M$, we should fix real submanifolds $N_1,...,N_k$ of $M_0$ and the points $x_1,...,x_k\in CP^1$. For every two-dimensional homology class $\lambda \in H_2(M,{\bf Z})$ we consider the space $D_{\lambda}$ of holomorphic maps $\varphi$ of $CP^1$ into $M$ that transform $CP^1$ into a cycle $\varphi (CP^1)\in \lambda$ and satisfy the conditions $\pi (\varphi (x_1))\in N_1,...,\pi(\varphi (x_k))\in N_k$. (We identify the homology of $M$ with the homology of its body $M_0$; the condition $\varphi (CP^1)\in \lambda$ means that the image of the fundamental homology class of $CP^1$ by the homomorphism $(\pi \varphi )_*:H_2(CP^1,{\bf Z})\rightarrow H_2(M_0,{\bf Z})$ is equal to $\lambda$.) The space $D_{\lambda}$ contributes to the correlation function under consideration only if $$2m_0-(q_1+...+q_k)+<c_1(T),\lambda >=2m_1(<c_1(\alpha ),\lambda >+1)$$ where $c_1(T)$ is the first Chern class of the tangent bundle to $M, c_1(\alpha)$ is the first Chern class of the bundle $\alpha$ and $q_i=2m_0-\dim N_i$ denotes the codimension of $N_i$. If $\varphi \in D_{\lambda}$ then $\pi (\varphi) \in D_{\lambda}^0$ where $D_{\lambda}^0$ is the space of holomorphic maps $\phi :CP^1\rightarrow M_0$ obeying $\phi (CP^1) \in \lambda$ and $\phi (x_1)\in N_1,...,\phi (x_k)\in N_k$.
Let us consider a holomorphic vector bundle $\xi _{\lambda}$ over $D^0_{\lambda}$ having the vector space of holomorphic sections of the pullback of $\alpha$ by the map $\phi \in D^0_{\lambda}$ as a fiber over $\phi$. It is easy to check that $D_{\lambda}$ can be obtained from the total space of $\xi _{\lambda}$ by means of parity reversion in the fibers. It follows from the index theorem that the virtual dimension of $D^0_{\lambda}$ is equal to $d_1=2m_0-\sum q_i+2<c_1(T),\lambda >$; our assumption that the situation is generic means that $d_1=\dim D^0_{\lambda}$. The Riemann-Roch theorem together with equation (1) permits us to say that the dimension of the fiber of $\xi _{\lambda}$ is equal to $d_2=2m_1(<c_1(\alpha),\lambda >+1)$ and coincides with $d_1$. We see that the even dimension $d_1$ of $D_{\lambda}$ coincides with its odd dimension $d_2$. The contribution of $D_{\lambda}$ to the correlation function can be expressed in terms of the Euler number of the vector bundle $\xi _{\lambda}$ (see \[2\] or \[3\] for an explanation of similar statements in slightly different situations). Let us consider now a holomorphic section $F$ of $\alpha$. We will assume that the zero locus of $F$ is a manifold and denote this manifold by $X$. The Kähler metric on $M$ induces a Kähler metric on $X$; therefore we can consider an $A$-model with the target space $X$. We’ll check that the correlation functions of this $A$-model coincide with the correlation functions of the $A$-model with target space $M$. More precisely, the correlation function of the $A$-model with target space $M$ constructed by means of submanifolds $N_1,...,N_k \subset M_0$ coincides with the correlation function of the $A$-model with target space $X$ constructed by means of the submanifolds $N_1^{\prime}=N_1\cap X,...,N_k^{\prime}=N_k\cap X$ of the manifold $X$. (Without loss of generality we can assume that $N_i^{\prime}=N_i\cap X$ is a submanifold).
To prove this statement we notice that using the section $F$ of $\alpha$ we can construct a section $f_{\lambda}$ of $\xi _{\lambda}$ assigning to every map $\phi \in D_{\lambda}^0$ an element $f_{\lambda}(\phi)=F\cdot \phi$ of the fiber of $\xi_{\lambda}$ over $\phi \in D_{\lambda}^0$. It is easy to check that the zeros of the section $f_{\lambda}$ can be identified with holomorphic maps $\phi \in D_{\lambda}^0$ satisfying $\phi (CP^1)\subset X$. The number of such maps enters the expression for the correlation functions of the $A$-model with target space $X$. On the other hand, this number coincides with the Euler number of $\xi _{\lambda}$ entering the corresponding expression in the case of the target space $M$. This remark proves the coincidence of correlation functions for the target space $M$ with correlation functions for the target space $X$. Let us stress, however, that not all correlation functions for the target space $X$ can be obtained by means of the above construction. Using the language of cohomology, one can say that a correlation function of an $A$-model with the target space $X$ corresponds to a set of cohomology classes $\nu _1,...,\nu _k\in H(X,{\bf C})$. Such a correlation function is equal to a correlation function of an $A$-model with the target space $M$ if there exist cohomology classes $\tilde {\nu }_1,...,\tilde {\nu} _k\in H(M_0,{\bf C})$ obeying $\nu_1=i^*\tilde {\nu}_1,...,\nu_k=i^*\tilde {\nu}_k$. (Here $i$ denotes the embedding of $X$ into $M_0$. We used the fact that the cohomology class $\nu _i\in H(X,{\bf C})$, dual to $N_i^{\prime}=N_i\cap X$, is equal to $i^*\tilde {\nu}_i$ where $\tilde {\nu}_i\in H(M_0,{\bf C})$ is dual to $N_i$). To prove that correlation functions of the $A$-model having
--- author: - | S. V. Larin\ Institute of Macromolecular Compounds, Russian Academy of Sciences,\ V.O. Bol’shoi pr. 31, 199004 St. Petersburg, Russian Federation\ S. V. Lyulin\ Institute of Macromolecular Compounds, Russian Academy of Sciences,\ V.O. Bol’shoi pr. 31, 199004 St. Petersburg, Russian Federation\ P. A. Likhomanova\ National Research Center “Kurchatov Institute”, 123182, Moscow, Russia\ `likhomanovapa@gmail.com`\ K. Yu. Khromov\ National Research Center “Kurchatov Institute”, 123182, Moscow, Russia and\ Moscow Institute of Physics and Technology (State University), 117303, Moscow, Russia\ `khromov_ky@nrcki.ru`\ A. A. Knizhnik\ Kintech Laboratory Ltd., 123182, Moscow, Russia and\ National Research Center “Kurchatov Institute”, 123182, Moscow, Russia\ B. V. Potapkin\ Kintech Laboratory Ltd., 123182, Moscow, Russia and\ National Research Center “Kurchatov Institute”, 123182, Moscow, Russia title: 'Multiscale modeling of electrical conductivity of R-BAPB polyimide + carbon nanotubes nanocomposites' --- Introduction {#introduction .unnumbered} ============ Polymer materials, while possessing some unique and attractive qualities, such as low weight, high strength, resistance to chemicals, and ease of processing, are for the most part insulators. If methods could be devised to turn common insulating polymers into conductors, that would open great prospects for using such materials in many more areas than they are currently used in. These areas may include organic solar cells, printing electronic circuits, light-emitting diodes, actuators, supercapacitors, chemical sensors, and biosensors [@Long_2011]. Since reliable methods for carbon nanotube (CNT) fabrication were developed in the 1990s, growing attention has been paid to the possibility of dispersing CNTs in polymers, where CNT junctions may form a percolation network and turn an insulating polymer into a good conductor once the percolation threshold is overcome.
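As an illustration of this percolation picture, here is a minimal Monte Carlo toy (added for exposition; it is not the model used in this paper): conducting sticks are dropped at random in a unit square, any two crossing sticks form a junction, and the system conducts once a cluster of mutually crossing sticks spans the box from left to right.

```python
import numpy as np

rng = np.random.default_rng(1)

def segments(n, length=0.2):
    """n random sticks, each row (x1, y1, x2, y2), centers in the unit square."""
    c = rng.random((n, 2))
    a = rng.random(n) * np.pi
    d = 0.5 * length * np.column_stack([np.cos(a), np.sin(a)])
    return np.hstack([c - d, c + d])

def crosses(s, t):
    """True if segments s and t properly intersect (orientation test)."""
    def orient(p, q, r):
        return (q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0])
    p1, p2, q1, q2 = s[:2], s[2:], t[:2], t[2:]
    return (orient(p1, p2, q1) * orient(p1, p2, q2) < 0 and
            orient(q1, q2, p1) * orient(q1, q2, p2) < 0)

def spans(segs):
    """Union-find over crossing sticks: does one cluster touch both the
    left (x <= 0) and right (x >= 1) edges of the unit square?"""
    n = len(segs)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if crosses(segs[i], segs[j]):
                parent[find(i)] = find(j)
    left = {find(i) for i, s in enumerate(segs) if min(s[0], s[2]) <= 0}
    right = {find(i) for i, s in enumerate(segs) if max(s[0], s[2]) >= 1}
    return bool(left & right)

# crude spanning probability vs. stick density
for n in (50, 200):
    hits = sum(spans(segments(n)) for _ in range(20))
    print(n, hits / 20)
```

For sticks of length 0.2 the spanning probability rises sharply with the number of sticks, which is the qualitative content of a percolation threshold.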
An additional benefit of using such polymer/CNT nanocomposites instead of intrinsically conducting polymers, such as polyaniline [@Polyaniline], is that dispersed CNTs, besides providing electrical conductivity, enhance the polymer’s mechanical properties as well. CNT-enhanced polymer nanocomposites have been investigated intensively in experiment, including their conductivity [@Eletskii]. The theoretical results in this area are more modest. The conductivity of a nanocomposite depends on many factors, among which are the polymer type, the CNT density, the nanocomposite preparation technique, the geometry of the CNTs and their junctions, the possible presence of defects in the CNTs, and others. Taking all these factors into account and obtaining quantitatively correct results in modeling is a very challenging task, since the resulting conductivity is formed at different length scales: at the microscopic level it is influenced by the contact resistance of CNT junctions, and at the mesoscopic level it is determined by percolation through a network of CNT junctions. Thus a consistent multi-scale method for the modeling of conductivity, starting from atomistic first-principles calculations of electron transport through CNT junctions, is necessary. Due to the complexity of this multi-scale task, the majority of investigations in the area are carried out in simplified form; this is especially true for the underlying part of the modeling: the determination of the CNT junction contact resistance. For the contact resistance, either experimental values as in [@Soto_2015] or the results of the phenomenological Simmons model as in [@Xu_2013; @Yu_2010; @Jang_2015; @Pal_2016] are usually taken; sometimes an arbitrary value of contact resistance, reasonable by order of magnitude, is simply set [@Wescott_2007].
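For orientation only, the scale of such tunneling contact resistances can be sketched with the simplest quasi-classical (WKB) transmission through a rectangular barrier, converted to a resistance with a single-channel Landauer estimate. This is a generic illustration added here; the barrier height and widths are arbitrary values, not parameters from any of the cited works.

```python
import numpy as np

HBAR = 1.054571817e-34   # J s
ME   = 9.1093837015e-31  # kg (free-electron mass)
EV   = 1.602176634e-19   # J
H    = 6.62607015e-34    # J s

def wkb_transmission(width_nm, barrier_ev):
    """Quasi-classical transmission through a rectangular barrier of the
    given width and height, for an electron well below the barrier top."""
    kappa = np.sqrt(2 * ME * barrier_ev * EV) / HBAR
    return np.exp(-2 * kappa * width_nm * 1e-9)

def contact_resistance(width_nm, barrier_ev):
    """Single-channel, spin-degenerate Landauer estimate R = h / (2 e^2 T)."""
    return H / (2 * EV**2 * wkb_transmission(width_nm, barrier_ev))

# transmission drops by orders of magnitude per nanometre of gap
for w in (0.3, 0.6, 1.0):
    print(f"{w} nm: T = {wkb_transmission(w, 1.0):.2e}, "
          f"R = {contact_resistance(w, 1.0):.2e} ohm")
```

The exponential sensitivity to the gap width is the reason the atomistic structure of the contact matters so much for the resulting conductivity.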
In [@Bao_2011; @Grabowski_2017] the tunneling probability through a CNT junction is modeled using a rectangular potential barrier and the quasi-classical approximation. The authors of [@Castellino_2016] employed an oversimplified two-parameter expression for the contact resistance, with these parameters fitted to experimental data. The best microscopic attempt that we are aware of uses the semi-phenomenological tight-binding approximation for the calculation of contact resistance [@Penazzi_2013]. But in [@Penazzi_2013] only the microscopic part of the nanocomposite conductivity problem is addressed, and the conductivity of the nanocomposite is not calculated. Moreover, in [@Penazzi_2013] only the coaxial CNT configuration is considered, which is hardly realistic for real composites. Thus, the majority of investigations concentrate on the mesoscopic part of the task: refining a percolation model or phenomenologically taking into account various geometric peculiarities of CNT junctions. Moreover, comparison with experiment is missing in some publications on this topic. A truly multi-scale study, capable of providing quantitative results comparable with experiment by combining fully first-principles calculations of contact resistance on the microscopic level with a percolation model on the mesoscopic level, seems to be missing. In our previous research [@comp_no_pol], we proposed an efficient and precise method for fully first-principles calculations of CNT contact resistance and combined it with a Monte-Carlo statistical percolation model to calculate the conductivity of a simplified example network of CNT junctions without polymer filling. In the current paper, we apply the developed approach to the modeling of the conductivity of the CNT-enhanced polyimide R-BAPB. R-BAPB (Fig.
\[fig\_RBAPB-struct\]) is a novel polyetherimide synthesized using 1,3-bis-(3$'$,4-dicarboxyphenoxy)-benzene (dianhydride R) and 4,4$'$-bis-(4$''$-aminophenoxy)diphenyl (diamine BAPB). It is a thermostable polymer with extremely high thermomechanical properties (glass transition temperature $T_g= 453-463$ K, melting temperature $T_m= 588$ K, Young’s modulus $E= 3.2$ GPa) [@Yudin_JAPS]. This polyetherimide could be used as a binder to produce composite and nanocomposite materials in demand in shipbuilding, aerospace, and other fields of industry. The two main advantages of R-BAPB among other thermostable polymers are its thermoplasticity and crystallinity. R-BAPB-based composites could be produced and processed using convenient melt technologies. Crystallinity of R-BAPB in composites leads to improved mechanical properties of the materials, including bulk composites and nanocomposite fibers. It is well known that carbon nanofillers can act as nucleating agents for R-BAPB, increasing the degree of crystallinity of the polymer matrix in composites. As shown in experimental and theoretical studies [@Yudin_MRM05; @Larin_RSCADV14; @Falkovich_RSCADV14; @Yudin_CST07], the degree of crystallinity of carbon-nanofiller-enhanced R-BAPB may be comparable to that of bulk polymers. ![The chemical structure of R-BAPB polyimide.[]{data-label="fig_RBAPB-struct"}](figs/fig_Larin1/RBAPB.png){width="8cm"} Ordering of the polymer chains relative to the nanotube axes could certainly influence the conductance of polymer-filled nanoparticle junctions. However, such influence is expected to depend on many parameters, including the structure of a junction, the position and orientation of chain fragments on the nanotube surface close to a junction, and others. Taking all of these parameters into account is a rather complex task that requires high computational resources for atomistic modeling and ab-initio calculations, as well as complex analysis procedures.
Thus, at the current stage of the study, we consider only systems where the polymer matrix is in an amorphous state, i.e. no significant ordering of the polymer chains relative to the nanotubes is observed. Description of the multiscale procedure {#description-of-the-multiscale-procedure .unnumbered} ======================================= The modeling of polymer nanocomposite electrical conductivity is based on a multi-scale approach, in which different simulation models are used at different scales. For electron transport in polymer composites with a conducting filler, the lowest scale corresponds to the contact resistance between tubes. The contact resistance is determined at the atomistic scale by tunneling of electrons between the filler particles via the polymer matrix, and hence analysis of the contact resistance requires knowledge of the atomistic structure of a contact. Therefore, as the first step, we develop an atomistic model of the contact between carbon nanotubes in a polyimide matrix using the molecular dynamics (MD) method. This method gives us the structure of the polymer molecules intercalated between carbon nanotubes for different intersection angles between the nanotubes. One should mention that, since the polymer matrix is soft, the contact structure varies with time; therefore, we use molecular dynamics to sample these structures. Based on the determined atomistic structures of the contacts between nanotubes in the polymer matrix, we calculate electron transport through the junction using electronic structure calculations and the formalism of the Green’s matrix. Since this analysis requires first-principles methods, one has to reduce
--- abstract: 'Natural philosophy necessarily combines the process of scientific observation with an abstract (and usually symbolic) framework, which provides a logical structure to the development of a scientific theory. The metaphysical underpinning of science includes statements about the process of science itself, and the nature of both the philosophical and material objects involved in a scientific investigation. By developing a formalism for an abstract mathematical description of inherently non-mathematical, physical objects, an attempt is made to clarify the mechanisms and implications of the philosophical tool of *Ansatz*. Outcomes of the analysis include a possible explanation for the philosophical issue of the ‘unreasonable effectiveness’ of mathematics as raised by Wigner, and an investigation into formal definitions of the terms: principles, evidence, existence and universes that are consistent with the conventions used in physics. It is found that the formalism places restrictions on the mathematical properties of objects that represent the tools and terms mentioned above. This allows one to make testable predictions regarding physics itself (where the nature of the tools of investigation is now entirely abstract) just as scientific theories make predictions about the universe at hand. That is, the mathematical structure of objects defined within the new formalism has philosophical consequences (via logical arguments) that lead to profound insights into the nature of the universe, which may serve to guide the course of future investigations in science and philosophy, and precipitate inspiring new avenues of research.' bibliography: - 'dualref.bib' date: --- **A more general treatment of the philosophy of physics and the existence of universes**\ [Jonathan M. M. Hall[^1]\ University of Adelaide, Adelaide, South Australia 5005, Australia]{} Introduction {#sect:intro} ============ The study of physics requires both scientific observation and philosophy. 
The tenets of science and its axioms of operation are not themselves scientific statements, but philosophical statements. The profound philosophical insight precipitating the birth of physics was that scientific observations and philosophical constructs, such as logic and reasoning, could be married together in a way that allowed one to make predictions of observations (in science) based on theorems and proofs (in philosophy). This natural philosophy requires a philosophical ‘leap’, in which one makes an assumption or guess about what abstract framework applies most correctly. Such a leap, called *Ansatz*, is usually arrived at through inspiration and an integrated usage of faculties of the mind, rather than a programmatic application of certain axioms. Nevertheless, a programmatic approach allows enumeration of the details of a mathematical system. It seems prudent to apply a programmatic approach to the notion of Ansatz itself and to clarify its process metaphysically, in order to gain a deeper understanding of how it is used in practice in science; but first of all, let us begin with the inspiration. A metaphysical approach {#sect:meta} ======================= In this work, a programme is laid out for addressing the philosophical mechanism of Ansatz. In physics, a scientific prediction is made firstly by arriving at a principle, usually at least partly mathematical in nature. The mathematical formulation is then guessed to hold in particular physical situations. The key philosophical process involved is exactly this ‘projecting’ or ‘matching’ of the self-contained mathematical formulation with the underlying principles of the universe. No proof is deemed possible outside the mathematical framework, for proof, as an abstract entity, is an inherent feature of a mathematical (and philosophical) viewpoint. Indeed, it is difficult to imagine what tools a proof-like verification in a non-mathematical context may use or require.
It may be that the current lack of clarity in the philosophical mechanism involved in applying mathematical principles to the universe has implications for further research in physics. For example, in fine-tuning problems of the Standard Model of particle interactions (such as for the mass of the Higgs boson [@DGH; @Martin:1997ns] and the magnitude of the cosmological constant [@ArkaniHamed:1998rs]) it has been speculated that the existence of multiple universes may alleviate the mystery surrounding them [@Wheeler; @Schmidhuber:1999gw; @Szabo:2004uy; @Gasperini:2007zz; @Greene:2011], in that a mechanism for obtaining the seemingly finely-tuned value of the quantity would no longer be required; it simply arises statistically. However, if such universes are causally disconnected, e.g. in disjoint ‘bubbles’ in Linde’s chaotic inflation framework [@Linde:1983gd; @Linde:1986fc], there is a great challenge in even demonstrating such universes’ existence, which therefore draws into question the rather elaborate programme of postulating them. Setting aside for the time being the use of approaches that constitute novel applications of known theories, such as the exploitation of quantum entanglement to obtain information about the existence of other universes [@Tipler:2010ft], a more abstract and philosophical approach is postulated in this paper. The universality of mathematics ------------------------------- Outside our universe, one is at a loss to intuit exactly which physical principles continue to hold. For example, could one assume a Minkowski geometry, and a causality akin to our current understanding, to hold for other universes and the ‘spaces between’, if indeed the universes are connected by some sort of spacetime?
Indeed, such questions are perhaps too speculative to lead to any real progress; however, if one takes the view of Mathematical Realism, which often underpins the practice of physics, as argued in Section \[sec:mr\], and the tool of Ansatz, one can at least identify *mathematical principles* as principles that should hold in any physical situation: our universe, or any other. This viewpoint is more closely reminiscent of Level IV in Tegmark’s taxonomy [@Tegmark:2003sr] of universes. One may imagine that mathematical theorems and logical reasoning hold in all situations, and that all ‘universes’ (a term in need of a careful definition to match closely with the sense in which it is meant in the practice of physics) are subject to mathematical inquiry. In that case, mathematics (and indeed, our own reasoning) may act as a ‘telescope to beyond the universe’ in exactly the situation where all other senses and tools are drawn into question. To achieve the goal of examining the process of the Ansatz, that of matching a mathematical idea to a non-mathematical entity (or phenomenon), one needs to be able to define a non-mathematical object abstractly, or mathematically. Of course, such an entity that can be written down and manipulated is indeed not ‘non-mathematical’. This is so in the same way that, in daily speech, an object can be referred to only by making an abstraction (cf. ‘this object’, ‘what is *meant* by this object’, ‘what is *meant* by the phrase ‘what is meant by this object’ ’, etc.). This nesting feature is no real stumbling block, as one can simply identify it as an attribute of a particular class of abstractions: those representing non-mathematical objects. Thus a rudimentary but accurate formulation of non-mathematical objects in a mathematical way will form the skeleton outline for a new and fairly general formalism. After developing a mathematics of non-mathematical objects, one could then apply it to a simple test case.
Using the formalism, one could derive a process by which an object is connected or related somehow to its description, using only the theorems and properties known to hold in the new framework. The formalism could then be applied to the search for other universes, and the development of a procedure to identify properties of such universes. In doing so, one could make a real discovery so long as the phenomenological properties are not introduced ‘by hand’. This follows the ethos of physics, whereby an inspired principle (or principles) is followed, sometimes superficially remote from a phenomenon being studied, but which has profound implications not always perceived contemporaneously (and not introduced artificially), which ultimately guide the course of an inquiry or experiment. There is an additional motivation behind this programme beyond addressing the mechanism of the Ansatz, which is to attempt to clarify philosophically Wigner’s ‘unreasonable effectiveness’ of mathematics [@Wigner] itself. It is the hope of this paper to identify this kind of ‘effectiveness’ as a kind of fine-tuning problem, i.e. that it is simply a feature that naturally arises from the structure of the new formalism. Evidences {#subsect:evintro} --------- In the special situation where one uses mathematical constructs exclusively, the type of evidence required for a new discovery would also need to be mathematical in nature, and testing that it satisfies the necessary requirements to count as evidence in the usual scientific sense could be achieved by using mathematical tools within the new framework. To explain how this might be done, consider that evidence is usually taken to mean an observation (or collection of observations) about the universe that supports the implications of a mathematical formulation prescribed by a particular theory. 
Therefore, it is necessary to have a strict separation between objects that are considered ‘real/existing in the universe’, and those that are true mathematical statements that may be applied or *projected* (correctly or otherwise) onto the universe. Note that, for evidence in the usual sense, any observations experienced by the scientist are indeed abstractions also. For example, in examining an object, photons reflected from its surface can interact in the eye to produce a signal in the brain, and the interpretation of such a signal is an object of an entirely different nature to that of the actual photons themselves; much observational data is, in fact, discarded, and most crucially, the observation is then fitted into an abstract framework constructed in the mind. In a very proper sense, the more abstract is the more tangible to experience, and the
**KERNEL ESTIMATION OF DENSITY LEVEL SETS** [Benoît CADRE]{}[[^1]]{} Laboratoire de Mathématiques, Université Montpellier II, CC 051, Place E. Bataillon, 34095 Montpellier cedex 5, FRANCE [**Abstract.**]{} Let $f$ be a multivariate density and $f_n$ be a kernel estimate of $f$ drawn from the $n$-sample $X_1,\cdots,X_n$ of i.i.d. random variables with density $f$. We compute the asymptotic rate of convergence towards 0 of the volume of the symmetric difference between the $t$-level set $\{f\geq t\}$ and its plug-in estimator $\{f_n\geq t\}$. As a corollary, we obtain the exact rate of convergence of a plug-in type estimate of the density level set corresponding to a fixed probability for the law induced by $f$. [**Key-words:**]{} Kernel estimate, Density level sets, Hausdorff measure. [**2000 Mathematics Subject Classification:**]{} 62H12, 62H30. [**1. Introduction.**]{} Recent years have witnessed an increasing interest in estimation of density level sets and in related multivariate mappings problems. The main reason is the recent advent of powerful mathematical tools and computational machinery that render these problems much more tractable. One of the most powerful applications of density level set estimation is in unsupervised [*cluster analysis*]{} (see Hartigan \[1\]), where one tries to break a complex data set into a series of piecewise similar groups or structures, each of which may then be regarded as a separate class of data, thus reducing overall data complexity. But there are many other fields where the knowledge of density level sets is of great interest. For example, Devroye and Wise \[2\], Grenander \[3\], Cuevas \[4\] and Cuevas and Fraiman \[5\] used density support estimation for pattern recognition and for detection of the abnormal behavior of a system.
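The objects in the abstract admit a simple one-dimensional numerical illustration (a sketch added here for exposition, with an ad hoc Gaussian kernel and bandwidth): estimate $f_n$ from a standard normal sample, form the plug-in level set $\{f_n\geq t\}$ on a grid, and approximate the volume of its symmetric difference with the true level set $\{f\geq t\}$.

```python
import numpy as np

rng = np.random.default_rng(3)

def kde(x, X, h):
    """Kernel estimate f_n(x) = (1/(n h)) sum_i K((x - X_i)/h), Gaussian K."""
    u = (x[:, None] - X[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(X) * h * np.sqrt(2 * np.pi))

n = 2000
X = rng.standard_normal(n)
h = n ** (-1 / 5)                        # usual univariate bandwidth rate

grid = np.linspace(-4, 4, 2001)
dx = grid[1] - grid[0]
f_true = np.exp(-0.5 * grid**2) / np.sqrt(2 * np.pi)

t = 0.1
L_hat = kde(grid, X, h) >= t             # plug-in level set {f_n >= t}
L_true = f_true >= t                     # true level set {f >= t}

# grid approximation of lambda(L_n(t) Delta L(t))
sym_diff = np.sum(L_hat ^ L_true) * dx
print("volume of symmetric difference:", sym_diff)
```

As $n$ grows (with $h\to 0$, $nh\to\infty$) the volume of the symmetric difference shrinks, which is exactly the quantity whose rate the paper studies.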
In this paper, we consider the problem of estimating the $t$-level set ${\cal L}(t)$ of a multivariate probability density $f$ with support in ${I\!\!R}^k$ from independent random variables $X_1,\cdots,X_n$ with density $f$. Recall that for $t\geq 0$, the $t$-level set of the density $f$ is defined as follows : $${\cal L}(t)=\{x\in{I\!\!R}^k\, :\, f(x)\geq t\}.$$ The question now is how to define the estimates of ${{\cal L}}(t)$ from the $n$-sample $X_1,\cdots,X_n$? Even in a nonparametric framework, there are many possible answers to this question, depending on the restrictions one can impose on the level set and the density under study. Mainly, there are two families of such estimators: the [*plug-in*]{} estimators and the estimators constructed by an [*excess mass*]{} approach. Assume that an estimator $f_n$ of the density $f$ is available. Then a straightforward estimator of the level set ${{\cal L}}(t)$ is $\{f_n\geq t\}$, the plug-in estimator. Molchanov \[6, 7\] and Cuevas and Fraiman \[5\] proved consistency of these estimators and obtained some rates of convergence. The excess mass approach suggests first considering the empirical mapping $M_n$, defined for every Borel set $L\subset{I\!\!R}^k$ by $$M_n(L)=\frac{1}{n} \sum_{i=1}^n {\bf 1}_{\{X_i\in L\}}-t \lambda(L),$$ where $\lambda$ denotes the Lebesgue measure on ${I\!\!R}^k$. A natural estimator of ${{\cal L}}(t)$ is a maximizer of $M_n(L)$ over a given class of Borel sets $L$. For different classes of level sets (mainly star-shaped or convex level sets), estimators based on the excess mass approach were studied by Hartigan \[8\], Müller \[9\], Müller and Sawitzki \[10\], Nolan \[11\] and Polonik \[12\], who proved consistency and found certain rates of convergence. When the level set is star-shaped, Tsybakov \[13\] recently proved that the excess mass approach gives estimators with optimal rates of convergence in an asymptotically minimax sense, within the studied classes of densities.
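The empirical excess-mass functional $M_n(L)$ is cheap to evaluate for any fixed candidate set; the costly part is the maximization over a class of sets, which is not attempted in this sketch (an illustration added here, for axis-aligned boxes in ${I\!\!R}^2$):

```python
import numpy as np

rng = np.random.default_rng(2)

def excess_mass(X, t, box):
    """M_n(L) = (1/n) #{X_i in L} - t * lambda(L) for an axis-aligned box
    L = [a1, b1] x ... x [ak, bk], passed as box = (a, b) arrays."""
    a, b = box
    inside = np.all((X >= a) & (X <= b), axis=1)
    volume = np.prod(b - a)
    return inside.mean() - t * volume

# standard normal sample in R^2: among centered squares, the excess mass
# at level t is largest when the square roughly matches {f >= t}
X = rng.standard_normal((5000, 2))
for half in (0.5, 1.0, 2.0, 4.0):
    box = (np.full(2, -half), np.full(2, half))
    print(half, round(excess_mass(X, t=0.05, box=box), 3))
```

Making a box too small forfeits probability mass, while making it too large pays the $t\lambda(L)$ penalty, which is why the maximizer of $M_n$ tracks the level set.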
Though this result has great theoretical interest, assuming the level set to be convex or star-shaped appears to be somewhat unsatisfactory for the statistical applications. Indeed, such an assumption does not permit one to consider the important case where the density under study is multimodal with a finite number of modes, and hence the results cannot be applied to cluster analysis in particular. In comparison, the plug-in estimators make no assumption on the specific shape of the level set. Moreover, another advantage of the plug-in approach is that it leads to easily computable estimators. We emphasize that, although the excess mass approach often gives estimators with optimal rates of convergence, the complexity of the computational algorithm of such an estimator is high, due to the presence of the maximizing step (see the computational algorithm proposed by Hartigan, \[8\]). In this paper, we study a plug-in type estimator of the density level set ${{\cal L}}(t)$, using a kernel density estimate of $f$ (Rosenblatt, \[14\]). Given a kernel $K$ on ${I\!\!R}^k$ ([*i.e.*]{}, a probability density on ${I\!\!R}^k$) and a bandwidth $h=h(n) >0$ such that $h\to 0$ as $n$ grows to infinity, the kernel estimate of $f$ is given by $$f_n(x)=\frac{1}{nh^k}\sum_{i=1}^n K\Big(\frac{x-X_i}{h}\Big), \ x\in{I\!\!R}^k.$$ We let the plug-in estimate ${{\cal L}}_n(t)$ of ${{\cal L}}(t)$ be defined as $${{\cal L}}_n(t)=\{x\in{I\!\!R}^k\, : \, f_n(x)\geq t\}.$$ Throughout the paper, the distance between two Borel sets in ${I\!\!R}^k$ is a measure (in particular, the volume or Lebesgue measure $\lambda$ on ${I\!\!R}^k$) of the symmetric difference denoted $\Delta$ ([*i.e.*]{}, $A\Delta B=(A\cap B^c)\cup(A^c\cap B)$ for all sets $A,B$). Our main result (Theorem 2.1) deals with the limit law of $$\sqrt {nh^k}\,\lambda\Big({{\cal L}}_n(t)\Delta{{\cal L}}(t)\Big),$$ which is proved to be degenerate. Consider now the following statistical problem.
In cluster analysis, for instance, it is of interest to estimate the density level set corresponding to a fixed probability $p\in [0,1]$ for the law induced by $f$. The data contained in this level set can then be regarded as the most important data if $p$ is far enough from 0. Since $f$ is unknown, the level $t$ of this density level set is unknown as well. The natural estimate of the target density level set ${{\cal L}}(t)$ becomes ${{\cal L}}_n(t_n)$, where $t_n$ is such that $$\int_{{{\cal L}}_n(t_n)} f_nd\lambda=p.$$ As a consequence of our main result, we obtain in Corollary 2.1 the exact asymptotic rate of convergence of ${{\cal L}}_n(t_n)$ to ${{\cal L}}(t)$. More precisely, we prove that for some $\beta_n$ which only depends on the data, one has: $$\beta_n\sqrt {nh^k}\,\lambda\Big({{\cal L}}_n(t_n)\Delta{{\cal L}}(t)\Big)\to \sqrt {\frac{2}{\pi}\int K^2d\lambda}$$ in probability. The precise formulations of Theorem 2.1 and Corollary 2.1 are given in Section 2. Section 3 is devoted to the proof of Theorem 2.1 while the proof of Corollary 2.1 is given in Section 4. The appendix is dedicated to a change of variables formula involving the $(k$-$1)$-dimensional Hausdorff measure (Proposition A). [**2. The main results.**]{} [**2.1 Estimation of $t$-level sets.**]{} In the following, $\Theta\subset (0,\infty)$ denotes an open interval and $\|.\|$ stands for the Euclidean norm over any finite dimensional space. Let us introduce the hypotheses on the density $f$: - $f$ is twice continuously differentiable and $f(x)\to 0$ as $\|x\|\to\infty$; - For all $t\in\Theta$, $$\inf_{f^{-1}(\{t\})} \|\nabla f\|>0,$$ where, here and in the following, $\nabla\psi(x)$ denotes the gradient at $x\in{I\!\!R}^k$ of the differentiable function $\psi\,:\, {I\!\!R}^k\to{I\!\!R}$. Next, we introduce the assumptions on the kernel $K$: - $K$ is a continuously differentiable and compactly supported function. Moreover, there exists a monotone nonincreasing function $\mu\, :\, {I\!\!R
--- author: - Justin Lovegrove title: Reverberation Mapping of the Optical Continua of 57 MACHO Quasars --- Abstract ======== Autocorrelation analyses of the optical continua of 57 of the 59 MACHO quasars reveal structure at proper time lags of $544 \pm 5.2$ days with a standard deviation of 77 days. Interpreted in the context of reverberation from elliptical outflow winds as proposed by Elvis (2000) [@E00], this implies an approximate characteristic size scale for winds in the MACHO quasars of $544 \pm 5.2$ light days. The internal structure variable of these reflecting outflow surfaces is found to be $11.87^\circ \pm 0.40^\circ$ with a standard deviation of $2.03^\circ$. Introduction ============ Brightness fluctuations of the UV-optical continuum emission of quasars were recognised shortly after the initial discovery of the objects in the 1960s [@ms]. Although several programmes were undertaken to monitor these fluctuations, little is yet known about their nature or origin. Many of these programmes have focused on comparing the optical variability with that in other wavebands, and less on long-timescale, high-temporal-resolution optical monitoring. Many studies have searched for oscillations on the $\sim$ day timescale in an attempt to constrain the inner structure size (eg [@wm]). This report, however, is concerned with variability on the year timescale as a probe of global quasar structure. In a model proposed by Elvis [@E00] to unite the various spectroscopic features associated with different “types” of quasars and AGN (eg broad absorption lines, X-ray/UV warm absorbers, broad blue-shifted emission lines), the object’s outer accretion disc has a pair of bi-conical extended narrow emission line regions in the form of outflowing ionised winds. Absorption and emission lines and the so-called warm absorbers result from orientation effects in observing these outflowing winds.
Supporting evidence for this is provided by a correlation between polarisation and broad absorption lines found by [@O]. Outflowing accretion disc winds are widely considered to be a strong candidate for the cause of feedback (for a discussion of the currently understood properties of feedback see [@FEA]). Several models have been developed [@p00; @p08] to simulate these winds. [@p00] discusses different launch mechanisms for the winds - specifically the balance between magnetic forces and radiation pressure - but finds no preference for one or the other, while [@p08] discusses the effect of rotation and finds that a rotating wind has a larger thermal energy flux and lower mass flux, making a strong case for these winds as the source of feedback. The outflow described by [@E00] is now usually identified with the observationally-invoked “dusty torus” around AGN [@ra]. [@mp] demonstrated for the MACHO quasars that there is no detectable lag time between the V and R variability in quasars, which can be interpreted as demonstrating that all of the optical continuum variability originates in the same region of the quasar. [@ST97] observed the gravitationally lensed quasar Q0957+561 to measure the time delay between the two images and measure microlensing effects. In doing so, they found a series of autocorrelation subpeaks initially attributed to either microlensing or accretion disc structure. These results were then re-interpreted by [@S05] as Elvis’ outflowing winds at a distance of $2 \times 10^{17} cm$ from the quasar’s central compact object. A model applied by [@VA07] to the quasar Q2237+0305, to simulate microlensing, found that the optimal solution for the system was one with a central bright source and an extended structure with double the total luminosity of the central source, though the outer structure has a lower surface brightness as the luminosity is emanating from a larger source, later determined by [@slr08] to lie at $8.4 \times 10^{17} cm$. 
[@slr08] continued on to argue that since magnetic fields can cause both jets and outflows, they must therefore be the dominant effect in AGN. [@lslp] however pointed out that the magnetic field required to power the observed Elvis outflows is too great to be due to the accretion disc alone. They therefore argue that all quasars and AGN have an intrinsically magnetic central compact object, which they refer to as a MECO, as proposed by [@rl07], based upon solutions of the Einstein-Maxwell equations by [@rl03]. One compelling aspect of this argument is that it predicts a power-law relationship between Elvis outflow radius and luminosity, which was found in work by Kaspi et al [@ks] and updated by Bentz et al [@b], if one assumes the source of quasar broad emission lines to be outflow winds powered by magnetic fields. The [@ks] and [@b] results were in fact empirically derived for AGN of $z < 0.3$, and [@ks] postulates that there may be some evolution of this relation with luminosity (and indeed one might expect some time-evolution of quasar properties which may further modify this scaling relation), so generalising these results to quasars may yet prove unreliable. The radius of the broad line region was initially found by [@ks] to scale as $R_{blr} \propto L^{0.67}$, while [@b] found $R_{blr} \propto L^{0.52}$. Another strength of the MECO argument is that while [@wu] found quasar properties to be uncorrelated using the current standard black hole models, [@lslp] and [@sll] found a homogeneous population of quasars using the [@rl03] model.
[@p] used microlensing observations of 10 quadruply-lensed quasars, 9 of which were of known redshift including Q2237+0305, to demonstrate that standard thin accretion disc models, such as the widely-accepted Shakura-Sunyaev (S-S) disc [@ss73], underestimate the optical continuum emission region thickness by a factor of between 3 and 30, finding an average calculated thickness of $3.6 \times 10^{15} cm$, while observed values average $5.3 \times 10^{16} cm$. [@bp] found a radius of the broad line region for the Seyfert galaxy NGC 5548 of just under 13 light days when the average of several spectral line reverberations was taken, corresponding to $R_{blr} = 3.3 \times 10^{16} cm$. When the scaling of [@ks] and [@b] is taken into account, the [@bp] result is comparable to the [@slr08] and [@S05] results (assuming $\frac{L_{quasar}}{L_{seyfert}} \sim 10^4$, then [@b] would predict a quasar $R_{blr}$ of approximately $3 \times 10^{18}$). Also, given that black hole radius scales linearly with mass, as does the predicted radius of the inner edge of the accretion disc, a linear mass-$R_{blr}$ relationship might also be expected. Given calculated Seyfert galaxy black hole masses of order $10^8 M_{\odot}$ and average quasar masses of order $10^9 M_{\odot}$, this would scale the Seyfert galaxy $R_{blr}$ up to $3.3 \times 10^{17} cm$. While these two scalings are not mutually consistent, either of them may be found consistent with the existing quasar structure sizes. [@r] also found structure on size scales of $10^{16} cm$ from microlensing of SDSS J1004+4112, which would then scale to $10^{18} cm$. Taken together, these studies strongly support the presence of the Elvis outflow at a radial distance of approximately $10^{18} cm$ from the central source in quasars, which may be detected by its reverberation of the optical continuum of the central quasar source.
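The bookkeeping above is simple enough to check directly. The sketch below is ours; the Seyfert radius, luminosity ratio and mass ratio are the values assumed in the text, and the result confirms the quoted $\sim 10^{18}$ cm order of magnitude:

```python
# Scale the NGC 5548 broad-line-region radius to quasar luminosities using
# R_blr ∝ L^0.52 (the Bentz et al. slope) and the assumed L_quasar/L_seyfert ~ 1e4.
R_seyfert_cm = 3.3e16          # ~13 light days, from the reverberation result
L_ratio = 1.0e4                # assumed luminosity ratio (from the text)

R_quasar_bentz = R_seyfert_cm * L_ratio**0.52   # luminosity scaling: ~1e18-1e19 cm
R_quasar_mass = R_seyfert_cm * 10.0             # linear mass scaling, 1e8 -> 1e9 M_sun
```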
The [@VA07] result is also in direct conflict with the S-S accretion disc model, which has been applied in several unsuccessful attempts to describe microlensing observations of Q2237. First a simulation by [@w] used the S-S disc to model the microlensing observations but predicted a large microlensing event that was later observed not to have occurred. [@k] then attempted to apply the S-S disc in a new simulation, but another failed prediction of large-amplitude microlensing resulted. Another attempt to simulate the Q2237 light curve by [@ei] produced the same large-amplitude microlensing events. These events are an inherent property of the S-S disc model, in which all of the luminosity emanates from the accretion disc, hence causing it all to be lensed simultaneously. Only by separating the luminosity into multiple regions, eg two regions, one inner and one outer, as in [@VA07], can these erroneous large-amplitude microlensing events be avoided. Previous attempts have been made to identify structure on the year timescale, including structure function analysis by Trevese et al [@t] and by Hawkins [@h96; @h06]. [@t] found strong anticorrelation on the $\sim 5$ year timescale but no finer structure - this is unsurprising, as their results were an average over multiple quasars taken at low temporal resolution, whereas the size scales of the Elvis outflow winds should be dependent on various quasar properties, which differ depending on the launch mechanism, and should also appear on smaller timescales than their observations were sensitive to. [@h96] also found variations on the $\sim 5$-year timescale, again with poor temporal resolution, but then put forth the argument that the variation was found to be redshift-independent and
--- abstract: 'We present Nesterov-type acceleration techniques for Alternating Least Squares (ALS) methods applied to canonical tensor decomposition. While Nesterov acceleration turns gradient descent into an optimal first-order method for convex problems by adding a *momentum* term with a specific weight sequence, a direct application of this method and weight sequence to ALS results in erratic convergence behaviour or divergence. This is so because the tensor decomposition problem is non-convex and ALS is accelerated instead of gradient descent. We investigate how line search or restart mechanisms can be used to obtain effective acceleration. We first consider a cubic line search (LS) strategy for determining the momentum weight, showing numerically that the combined Nesterov-ALS-LS approach is competitive with or superior to other recently developed nonlinear acceleration techniques for ALS, including acceleration by nonlinear conjugate gradients (NCG) and LBFGS. As an alternative, we consider various restarting techniques, some of which are inspired by previously proposed restarting mechanisms for Nesterov’s accelerated gradient method. We study how two key parameters, the momentum weight and the restart condition, should be set. Our extensive empirical results show that the Nesterov-accelerated ALS methods with restart can be dramatically more efficient than the stand-alone ALS or Nesterov accelerated gradient method, when problems are ill-conditioned or accurate solutions are required. The resulting methods perform competitively with or superior to existing acceleration methods for ALS, and additionally enjoy the benefit of being much simpler and easier to implement. On a large and ill-conditioned 71$\times$1000$\times$900 tensor consisting of readings from chemical sensors used for tracking hazardous gases, the restarted Nesterov-ALS method outperforms any of the existing methods by a large factor.' 
author: - 'Drew Mitchell[^1]' - 'Nan Ye[^2]' - 'Hans De Sterck[^3]' bibliography: - 'ref.bib' title: Nesterov Acceleration of Alternating Least Squares for Canonical Tensor Decomposition --- Introduction ============ Canonical tensor decomposition. ------------------------------- Tensor decomposition has wide applications in machine learning, signal processing, numerical linear algebra, computer vision, natural language processing and many other fields [@kolda2009tensor]. This paper focuses on the CANDECOMP/PARAFAC (CP) decomposition of tensors [@kolda2009tensor], which is also called the canonical polyadic decomposition. CP decomposition approximates a given tensor $T \in R^{I_{1} \times \ldots \times I_{N}}$ by a low-rank tensor composed of a sum of $r$ rank-one terms, $\widetilde{T} = \sum_{i=1}^{r} a_{1}^{(i)} \circ \ldots \circ a_{N}^{(i)}$, where $\circ$ is the vector outer product. Specifically, we minimize the error in the Frobenius norm, $$\begin{aligned} \norm{T - \sum_{i=1}^{r} a_{1}^{(i)} \circ \ldots \circ a_{N}^{(i)}}_{F}. \label{eq:cp}\end{aligned}$$ Finding efficient methods for computing tensor decompositions is an active area of research, but the alternating least squares (ALS) algorithm is still one of the most efficient algorithms for CP decomposition. ALS computes a CP decomposition iteratively. In each iteration, it updates one block of variables at a time by minimizing expression (\[eq:cp\]) while keeping the other blocks fixed: first $A_{1} = (a_{1}^{(1)}, \ldots, a_{1}^{(r)})$ is updated, then $A_{2} = (a_{2}^{(1)}, \ldots, a_{2}^{(r)})$, and so on. Updating a factor matrix $A_{i}$ is a linear least-squares problem that can be solved in closed form. Collecting the matrix elements of the $A_{i}$’s in a vector $x$, we shall use $ALS(x)$ to denote the updated variables after performing one full ALS iteration starting from $x$.
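A minimal NumPy sketch of one full ALS sweep $x \mapsto ALS(x)$ for a third-order tensor, solving each least-squares subproblem via its normal equations. This is our own illustration, not the implementation used in the paper's experiments:

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker product of B (J x r) and C (K x r) -> (J*K x r)."""
    r = B.shape[1]
    return np.stack([np.kron(B[:, i], C[:, i]) for i in range(r)], axis=1)

def als_sweep(T, A1, A2, A3):
    """One full ALS iteration for min ||T - sum_i a1_i o a2_i o a3_i||_F."""
    I, J, K = T.shape
    # Update A1: the mode-1 unfolding (row-major, column index j*K + k) satisfies
    # T1 = A1 * khatri_rao(A2, A3)^T; solve the normal equations for A1.
    T1 = T.reshape(I, J * K)
    A1 = T1 @ khatri_rao(A2, A3) @ np.linalg.pinv((A2.T @ A2) * (A3.T @ A3))
    # Update A2 and A3 analogously (Gauss-Seidel style, using the fresh factors).
    T2 = T.transpose(1, 0, 2).reshape(J, I * K)
    A2 = T2 @ khatri_rao(A1, A3) @ np.linalg.pinv((A1.T @ A1) * (A3.T @ A3))
    T3 = T.transpose(2, 0, 1).reshape(K, I * J)
    A3 = T3 @ khatri_rao(A1, A2) @ np.linalg.pinv((A1.T @ A1) * (A2.T @ A2))
    return A1, A2, A3
```

The $r \times r$ Gram matrix of the Khatri-Rao product is the Hadamard product of the factor Gram matrices, which is what makes each closed-form update cheap.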
When the CP decomposition problem is ill-conditioned, ALS can be slow to converge [@acar2011scalable], and recently a number of methods have been proposed to accelerate ALS. One approach uses ALS as a nonlinear preconditioner for general-purpose nonlinear optimization algorithms, such as nonlinear GMRES [@sterck2012nonlinear], nonlinear conjugate gradients (NCG) [@sterck2015nonlinearly], and LBFGS [@sterck2018nonlinearly]. Alternatively, the general-purpose optimization algorithms can be seen as nonlinear accelerators for ALS. In [@wang2018accelerating], an approach was proposed based on the Aitken-Steffensen acceleration technique. These acceleration techniques can substantially improve ALS convergence speed when problems are ill-conditioned or an accurate solution is required. Nesterov’s accelerated gradient method. {#subsec:Nesterov} --------------------------------------- In this paper, we adapt Nesterov’s acceleration method for gradient descent to the ALS method for CP tensor decomposition. Nesterov’s acceleration is a celebrated technique for speeding up the convergence of gradient descent, achieving the optimal convergence rate obtainable for first-order methods on convex problems [@nesterov1983method]. Consider the problem of minimizing a function $f(x)$, $$\min_{x} f(x).$$ Nesterov’s accelerated gradient descent starts with an initial guess $x_{1}$. For $k \ge 1$, given $x_{k}$, a new iterate $x_{k+1}$ is obtained by first adding a multiple of the *momentum* $x_{k} - x_{k-1}$ to $x_{k}$ to obtain an auxiliary variable $y_{k}$, and then performing a gradient descent step at $y_{k}$.
The update equations at iteration $k \ge 1$ are as follows: $$\begin{aligned} y_{k} &= x_{k} + \beta_{k} (x_{k} - x_{k-1}), \\ x_{k+1} &= y_{k} - \alpha_{k} {\nabla}f(y_{k}), \label{eq:nesterov}\end{aligned}$$ where the gradient descent step length $\alpha_{k}$ and the momentum weight $\beta_{k}$ are suitably chosen numbers, and $x_{0} = x_{1}$ so that the first iteration is simply gradient descent. There are a number of ways to choose the $\alpha_{k}$ and $\beta_{k}$ so that Nesterov’s accelerated gradient descent converges at the optimal $O(1/k^{2})$ rate in function value for smooth convex functions. For example, when $f(x)$ is a convex function with $L$-Lipschitz gradient, by choosing $\alpha_{k} = \frac{1}{L}$, and $\beta_{k}$ as $$\begin{aligned} \lambda_{0} &= 0, \quad \lambda_{k} = \frac{1 + \sqrt{1 + 4 \lambda_{k-1}^{2}}}{2}, \label{eq:lambda} \\ \beta_{k} &= \frac{\lambda_{k-1} - 1}{\lambda_{k}}. \label{eq:momentum1}\end{aligned}$$ one obtains the following $O(1/k^{2})$ convergence rate: $$f(x_{k}) - f(x^{*}) \le \frac{2 L \norm{x_1 - x^{*}}^{2}}{k^{2}},$$ where $x^{*}$ is a minimizer of $f$. See, e.g., [@su2016differential] for more discussion on the choices of momentum weights. Main approach and contributions of this paper. ---------------------------------------------- Recent work has seen extensions of Nesterov’s accelerated gradient method in several ways: either the method is extended to the non-convex setting [@ghadimi2016accelerated; @li2015accelerated], or Nesterov’s approach is applied to accelerate convergence of methods that are not directly of gradient descent-type, such as the Alternating Direction Method of Multipliers (ADMM) [@goldstein2014fast]. This paper attacks both of these challenges at the same time for the canonical tensor decomposition problem: we develop Nesterov-accelerated algorithms for the non-convex CP tensor decomposition problem, and we do this by accelerating ALS steps instead of gradient descent steps.
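The scheme with the weight sequence (\[eq:lambda\])-(\[eq:momentum1\]) can be written in a few lines; the ill-conditioned quadratic test objective below is our own stand-in, not an example from the paper:

```python
import numpy as np

def nesterov_agd(grad, x1, alpha, num_iters):
    """Nesterov's accelerated gradient descent with the lambda_k weight sequence."""
    x_prev, x = x1.copy(), x1.copy()   # x0 = x1: the first step is plain gradient descent
    lam_prev = 0.0
    for _ in range(num_iters):
        lam = (1.0 + np.sqrt(1.0 + 4.0 * lam_prev**2)) / 2.0
        beta = (lam_prev - 1.0) / lam
        y = x + beta * (x - x_prev)            # momentum step
        x_prev, x = x, y - alpha * grad(y)     # gradient step at y
        lam_prev = lam
    return x

# Stand-in convex problem: f(x) = 0.5 x^T A x with an ill-conditioned diagonal A,
# so L = 100 and alpha = 1/L.
A = np.diag([1.0, 100.0])
grad = lambda x: A @ x
x_star = nesterov_agd(grad, np.array([1.0, 1.0]), alpha=1.0 / 100.0, num_iters=300)
```

With $k = 300$ the bound above guarantees $f(x_k) - f(x^*) \le 2 \cdot 100 \cdot 2 / 300^2 \approx 4.4 \times 10^{-3}$, and the iterate indeed lands well inside that tolerance.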
Our basic approach is to apply Nesterov acceleration to ALS in a manner that is equivalent to replacing the gradient update in the second step of Nesterov’s method, Eq. (\[eq:nesterov\]), by an ALS step. Replacing gradient directions by update directions provided by ALS is essentially also the approach taken in [@sterck2012nonlinear; @sterck2015nonlinearly; @sterck2018nonlinearly] to obtain nonlinear acceleration of ALS by NGMRES, NCG and LBFGS; in the case of Nesterov’s method the procedure is extremely simple and easy to implement. However, applying this procedure directly fails for several reasons. First, it is not clear to what extent the $\beta_{k}$ momentum weight sequence of (\[eq:momentum1\]), which guarantees optimal convergence for gradient acceleration in the convex case, applies at all to our case of ALS acceleration for a non-convex problem. Second, and more generally, it is well-known that optimization methods for non-convex problems require mechanisms to safeguard against ‘bad steps’, especially when the solution is not close to a local minimum. The main contribution of this paper is to propose and explore two
--- abstract: 'We investigate the microscopic origin of the ferromagnetic and antiferromagnetic spin exchange couplings in the quasi one-dimensional cobalt compound Ca$_3$Co$_2$O$_6$. In particular, we establish a local model which stabilizes a ferromagnetic alignment of the $S=2$ spins on the cobalt sites with trigonal prismatic symmetry, for a sufficiently strong Hund’s rule coupling on the cobalt ions. The exchange is mediated through an $S=0$ cobalt ion at the octahedral sites of the chain structure. We present a strong coupling evaluation of the Heisenberg coupling between the $S=2$ Co spins on a separate chain. The chains are coupled antiferromagnetically through super-superexchange via short O-O bonds.' author: - Raymond Frésard - Christian Laschinger - Thilo Kopp - Volker Eyert title: 'The Origin of Magnetic Interactions in Ca$_3$Co$_2$O$_6$' --- Recently there has been renewed interest in systems exhibiting magnetization steps. In classical systems such as CsCoBr$_3$ a single plateau is typically observed in the magnetization versus field curve at one third of the magnetization at saturation.[@Hida94] This phenomenon attracted considerable attention, and Oshikawa, Yamanaka and Affleck demonstrated that Heisenberg antiferromagnetic chains exhibit such magnetization plateaus when placed in a magnetic field.[@Oshikawa97] These steps are expected when $N_c (S-m)$ is an integer, where $N_c$ is the number of sites in the magnetic unit cell, $S$ the spin quantum number, and $m$ the average magnetization per spin, which we shall refer to as the OYA criterion. The steps can be stable when chains are coupled, for instance in a ladder geometry.
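The OYA counting is easy to make concrete: the candidate plateau magnetizations are $m = S - k/N_c$ for integer $k$. The values $N_c = 3$ and $S = 2$ in the sketch below are our own illustrative choices, not parameters taken from the text:

```python
from fractions import Fraction

def oya_plateaus(n_c, s):
    """Candidate average magnetizations m per spin (0 <= m <= s) satisfying the
    OYA criterion, i.e. n_c*(s - m) an integer: m = s - k/n_c."""
    s = Fraction(s)
    plateaus, k = [], 0
    while s - Fraction(k, n_c) >= 0:
        plateaus.append(s - Fraction(k, n_c))
        k += 1
    return sorted(plateaus)

# Illustrative: n_c = 3 sites per magnetic unit cell, spin S = 2.
# The list contains m = 2/3, i.e. one third of the saturation value m = S.
levels = oya_plateaus(3, 2)
```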
In that case the magnetic frustration is an important ingredient to their stability.[@Mila98] Plateaus according to the OYA criterion are also anticipated for general configurations, provided gapless excitations do not destabilize them.[@Oshikawa00] Indeed several systems exhibiting magnetization steps are now known;[@Shiramura98; @Narumi98] they all obey the OYA criterion, they are usually far from exhausting all the possible $m$ values, they all are frustrated systems, and they all can be described by an antiferromagnetic Heisenberg model. Related behavior has been recently found in other systems. For example, up to five plateaus in the magnetization vs. field curve have been observed in Ca$_3$Co$_2$O$_6$ at low temperature[@Aasland97; @Kageyama97; @Maignan00]. However there is to date no microscopic explanation of this phenomenon, even though the location of the plateaus is in agreement with the OYA criterion. Ca$_3$Co$_2$O$_6$ belongs to the wide family of compounds A’$_3$ABO$_6$, and its structure belongs to the space group R3c. It consists of infinite chains formed by alternating face-sharing AO$_6$ trigonal prisms and BO$_6$ octahedra, with Co atoms occupying both the A and B sites. Each chain is surrounded by six chains separated by Ca atoms. As a result a Co ion has two neighboring Co ions on the same chain, at a distance of $2.59$ Å, and twelve Co neighbors on the neighboring chains at distances of $7.53$ Å (cf. Fig. \[Fig:plane\]).[@Fjellvag96] Concerning the magnetic structure, experiments point toward a ferromagnetic ordering of the magnetic Co ions along the chains, together with antiferromagnetic correlations in the buckled a-b plane.[@Aasland97] The transition into the ordered state is reflected by a cusp-like singularity in the specific heat at 25 K [@Hardy03], the temperature at which a strong increase of the magnetic susceptibility is observed.
Here we note that it is particularly intriguing to find magnetization steps in a system where the dominant interaction is ferromagnetic. In order to determine the effective magnetic Hamiltonian of a particular compound one typically uses the Kanamori-Goodenough-Anderson (KGA) rules[@Goodenough]. Knowledge of the ionic configuration of each ion allows one to estimate the various magnetic couplings. When applying this program to Ca$_3$Co$_2$O$_6$ one faces a series of difficulties, specifically when one tries to reconcile it with the neutron scattering measurements indicating that every second Co ion is non-magnetic. Even the assumption that every other Co ion is in a high spin state does not settle the intricacies related to the magnetic properties; one still has to address issues such as: i) what are the ionization degrees of the Co ions? ii) how is an electron transferred from one cobalt ion to a second? iii) which of the magnetic Co ions are magnetically coupled? iv) which mechanism generates a ferromagnetic coupling along the chains? These questions are only partially resolved by ab initio calculations. In particular, one obtains that both Co ions are in 3+ configurations.[@Whangbo03] Moreover both Co-O and direct Co-Co hybridizations are unusually large, and low spin and high spin configurations for the Co ions along the chains alternate.[@Eyert03] This paper addresses the magnetic couplings, and in particular the microscopic origin of the ferromagnetic coupling of two Co ions through a non-magnetic Co ion. In view of the large variety of isostructural compounds,[@Stitzer01] the presented mechanism is expected to apply to many of these systems. We now derive the magnetic inter-Co coupling for Ca$_3$Co$_2$O$_6$ from microscopic considerations.
The high-spin low-spin scenario confronts us with the question of how a ferromagnetic coupling can establish itself, taking into account that the high spin Co ions are separated by over 5 Å, linked via a non-magnetic Co and several oxygens. Let us first focus on the Co atoms in a single Co-O chain of Ca$_3$Co$_2$O$_6$. As mentioned above, the surrounding oxygens form two different environments in an alternating pattern. We denote the Co ion in the center of the oxygen octahedron Co1, and the Co ion in the trigonal prisms Co2. The variation in the oxygen environment leads to three important effects. First, there is a difference in the strength of the crystal field splitting, being larger in the octahedral environment. As a result Co1 is in the low spin state and Co2 in the high spin state. Second, the local energy levels are in a different sequence. For the octahedral environment we find the familiar $t_{2g}$–$e_g$ splitting, provided the axes of the local reference frame point towards the surrounding oxygens. The trigonal prismatic environment gives rise to a different set of energy levels. For this local symmetry one expects a level scheme with $d_{3z^2-r^2}$ as the lowest level, followed by two twofold degenerate pairs $d_{xy}$, $d_{x^2-y^2}$ and $d_{xz}$, $d_{yz}$. However, our LDA calculations[@Eyert03] show that the $d_{3z^2-r^2}$ level is actually slightly above the first pair of levels. Having clarified the sequence of the energy levels, we now turn to the microscopic processes which link the Co ions. Two mechanisms may be competing: either the coupling involves the intermediate oxygens, or direct Co-Co overlap is more important. Relying on electronic structure calculations, we may safely assume that the direct Co-Co overlap dominates.[@Eyert03] The identification of the contributing orbitals is more involved. Following Slater and Koster[@Slater54] one finds that only the $3z^2$-$r^2$ orbitals along the chains have significant overlap.
However, we still have to relate the Slater-Koster coefficients and the coefficients for the rotated frame, since the natural reference frames for Co1 and Co2 differ. On the Co2 atoms with the trigonal prismatic environment the $z$-axis is clearly defined along the chain direction, and we choose the $x$ direction to point toward one oxygen. This defines a reference frame $S$. The precise choice of the $x$ and $y$ directions is arbitrary and irrelevant to our considerations. The octahedral environment surrounding the Co1 atoms defines the natural coordinate system, which we call $S'$. By rotating $S'$ onto $S$ one obtains the $3z^2$-$r^2$ orbital in the reference frame $S$ as an equally weighted sum of $x'y'$, $x'z'$, $y'z'$ orbitals in $S'$. The above observation that the only significant overlap is due to the $3z^2$-$r^2$ orbitals on both Co ions now translates into an overlap of the $3z^2$-$r^2$ orbital on the high spin cobalt with all $t_{2g}$ orbitals on the low spin cobalt. ![Typical hopping paths for a) ferromagnetic and b) antiferromagnetic ordering. The displayed ferromagnetic path is the only one for ferromagnetic ordering and has the highest multiplicity of all paths, ferromagnetic and antiferromagnetic. There are similar paths for antiferromagnetic ordering, but with a Hund’s rule penalty and lower multiplicity. The path in (b) is unique for the antiferromagnetic case and has low energy but also low multiplicity. \[Fig:levelscheme\]](fig_1.eps){width=".47\textwidth"} We proceed with a strong coupling expansion to identify the magnetic coupling along the chain. This amounts to determining the energy difference between the ferromagnetic and antiferromagnetic configurations, to fourth order in the hopping, since
--- abstract: | The magnetic permeability of a ferrite is an important factor in the design of devices such as inductors, transformers, and microwave absorbing materials, among others. It is therefore advisable to study the magnetic permeability of a ferrite as a function of frequency. When an excitation corresponding to a harmonic magnetic field **H** is applied to the system, the system responds with a magnetic flux density **B**; the relation between these two vectors can be expressed as **B** = $\mu(\omega)$**H**, where $\mu$ is the magnetic permeability. In this paper, ferrites are considered linear, homogeneous, and isotropic materials. A magnetic permeability model is applied to NiZn ferrites doped with Yttrium, and the parameters of the model are adjusted using a Genetic Algorithm. In the computer science field of artificial intelligence, Genetic Algorithms and Machine Learning rely upon nature for both inspiration and mechanisms. Genetic Algorithms are probabilistic search procedures which generate solutions to optimization problems using techniques inspired by natural evolution, such as inheritance, mutation, selection, and crossover. Numerical fitting is usually performed with a nonlinear least squares method, a calculus-based algorithm that starts from an initial set of variable values. This approach is mathematically elegant compared to exhaustive or random searches, but easily gets stuck in local minima. Random methods, on the other hand, use probabilistic calculations to find variable sets; they tend to be slower but have greater success at finding the global minimum regardless of the initial values of the variables. author: - 'Silvina Boggi, Adrian C.
Razzitte, Gustavo Fano' title: Numerical response of the magnetic permeability as a function of the frequency of NiZn ferrites using a Genetic Algorithm --- Magnetic permeability model =========================== Ferrite materials have been widely used in various electronic devices such as inductors, transformers, and electromagnetic wave absorbers in the relatively high-frequency region up to a few hundred MHz. Electromagnetic theory can be used to describe the macroscopic properties of matter. The electromagnetic fields may be characterized by four vectors: electric field **E**, magnetic flux density **B**, electric flux density **D**, and magnetic field **H**, which at ordinary points satisfy Maxwell’s equations. The ferrite media under study can be considered linear, homogeneous, and isotropic. The relation between the vectors **B** and **H** can be expressed as **B** = $\mu(\omega)$**H**, where $\mu$ is the magnetic permeability of the material. Another important parameter for magnetic materials is the magnetic susceptibility $\chi$, which relates the magnetization vector **M** to the magnetic field vector **H** by the relationship **M** = $\chi(\omega)$**H**. Magnetic permeability $\mu$ and magnetic susceptibility $\chi$ are related by the formula $\mu = 1 + \chi$. Magnetic materials in sinusoidal fields have, in fact, magnetic losses, and this can be expressed by taking $\mu$ as a complex parameter: $\mu = \mu' + j\mu"$ [@VonHippel]. In the frequency range from RF to microwaves, the complex permeability spectra of ferrites can be characterized by two different magnetization mechanisms: domain wall motion and gyromagnetic spin rotation. The domain wall motion contribution to the susceptibility can be studied through an equation of motion in which the pressure is proportional to the magnetic field [@greiner].
Assuming that the magnetic field has harmonic excitation $H = H_{0} e^{j\omega t}$, the contribution of domain walls to the susceptibility $\chi_{d}$ is: $$\label{eq:chid} \chi_{d}=\frac{\omega_{d}^{2}\;\chi_{d0}}{\omega_{d}^{2}-\omega^{2}-j\omega\beta}$$   Here, $\chi_{d}$ is the domain wall contribution to the magnetic susceptibility, $\omega_{d}$ is the resonance frequency of the domain wall contribution, $\chi_{d0}$ is the static magnetic susceptibility, $\beta$ is the damping factor, and $\omega$ is the frequency of the external magnetic field. The gyromagnetic spin contribution to the magnetic susceptibility can be studied through a magnetodynamic equation [@sohoo][@wohlfarth]. The magnetic susceptibility $\chi_{s}$ can be expressed as: $$\ \chi_{s}=\frac{\left(\omega_{s}-j\omega\alpha\right)\omega_{s}\chi_{s0}}{\left(\omega_{s}-j\omega\alpha\right)^{2}-\omega^{2}},$$ Here, $\chi_{s}$ is the gyromagnetic spin contribution to the magnetic susceptibility, $\omega_{s}$ is the resonance frequency of the spin contribution, $\chi_{s0}$ is the static magnetic susceptibility, $\alpha$ is the damping factor, and $\omega$ is the frequency of the external magnetic field.
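Both contributions are straightforward to evaluate as complex functions of frequency. The sketch below is ours, with hypothetical parameter values; both terms are normalized so that the static limits come out as $\chi_{d}(0) = \chi_{d0}$ and $\chi_{s}(0) = \chi_{s0}$:

```python
import numpy as np

def chi_d(w, chi_d0, w_d, beta):
    """Domain-wall contribution, normalized so that chi_d(0) = chi_d0."""
    return w_d**2 * chi_d0 / (w_d**2 - w**2 - 1j * w * beta)

def chi_s(w, chi_s0, w_s, alpha):
    """Gyromagnetic spin contribution, with chi_s(0) = chi_s0."""
    z = w_s - 1j * w * alpha
    return z * w_s * chi_s0 / (z**2 - w**2)

# Hypothetical parameters (angular frequencies in rad/s).
w = np.logspace(6, 10, 400)
mu = 1.0 + chi_d(w, chi_d0=5.0, w_d=2e8, beta=1e8) + chi_s(w, chi_s0=3.0, w_s=1e9, alpha=0.5)
mu_real, mu_imag = mu.real, mu.imag
```

With the convention $\mu = \mu' + j\mu"$ used here, the imaginary (loss) part of both terms is strictly positive at every finite frequency, as expected for a damped system.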
The total magnetic permeability is then [@PhysicaB]: $$\label{eq:modelo} \mu=1+\chi_{d}+\chi_{s}=1+\frac{\omega_{d}^{2}\;\chi_{d0}}{\omega_{d}^{2}-\omega^{2}-j\omega\beta}+\frac{\left(\omega_{s}-j\omega\alpha\right)\omega_{s}\chi_{s0}}{\left(\omega_{s}-j\omega\alpha\right)^{2}-\omega^{2}}$$   Separating the real and the imaginary parts of equation (\[eq:modelo\]) we get: $$\label{mureal} \mu'\left(\omega\right)=1+\frac{\omega_{d}^{2}\;\chi_{d0}\left(\omega_{d}^{2}-\omega^{2}\right)}{\left(\omega_{d}^{2}-\omega^{2}\right)^{2}+\omega^{2}\beta^{2}}+\frac{\omega_{s}^{2}\;\chi_{s0}\left(\omega_{s}^{2}-\omega^{2}+\omega^{2}\alpha^{2}\right)}{\left(\omega_{s}^{2}-\omega^{2}\left(1+\alpha^{2}\right)\right)^{2}+4\omega^{2}\omega_{s}^{2}\alpha^{2}}$$ $$\label{muimag} \mu"\left(\omega\right)=\frac{\omega_{d}^{2}\;\chi_{d0}\;\omega\;\beta}{\left(\omega_{d}^{2}-\omega^{2}\right)^{2}+\omega^{2}\beta^{2}}+\frac{\omega_{s}\;\chi_{s0}\;\omega\;\alpha\left(\omega_{s}^{2}+\omega^{2}\left(1+\alpha^{2}\right)\right)}{\left(\omega_{s}^{2}-\omega^{2}\left(1+\alpha^{2}\right)\right)^{2}+4\omega^{2}\omega_{s}^{2}\alpha^{2}},$$ Magnetic losses, represented by the imaginary part of the magnetic permeability, can be extremely small; however, they are always present unless we consider vacuum [@landau]. From a physics point of view, the relationship between $\mu'$ and $\mu"$ reflects that the mechanisms of energy storage and dissipation are two aspects of the same phenomenon [@boggi]. Genetic Algorithms ================== Genetic Algorithms (GA) are probabilistic search procedures which generate solutions to optimization problems using techniques inspired by natural evolution, such as inheritance, mutation, selection, and crossover. A GA allows a population composed of many individuals to evolve according to selection rules designed to maximize “fitness” or minimize a “cost function”.
A path through the components of a GA is shown as a flowchart in Figure (\[fig:diagramaflujo\]). ![Flowchart of a Genetic Algorithm.[]{data-label="fig:diagramaflujo"}](diagramaf.eps){width="70.00000%"} Selecting the Variables and the Cost Function --------------------------------------------- A cost function generates an output from a set of input variables (a chromosome). The objective is to modify the output in some desirable fashion by finding the appropriate values for the input variables. The cost function in this work is the difference between the experimental value of the permeability and the one calculated using the parameters obtained by the genetic algorithm. To begin, the GA randomly generates an initial population of chromosomes. This population is represented by a matrix in which each row is a chromosome that contains the variables to optimize, in this work, the parameters of the permeability model [@1]. Natural Selection ------------------ Survival of the fittest translates into discarding the chromosomes with the highest cost. First, the costs and associated chromosomes are ranked from lowest cost to highest cost. Then, only the best are selected to continue, while the rest are deleted. The selection rate is the fraction of chromosomes that survives for the next step of mating. Select mates ------------- Now two chromosomes are selected from the surviving set to produce two new offspring which contain traits from each parent. Chromosomes with lower cost are more likely to be selected from the chromosomes that survive natural selection. Offspring are born to replace the discarded chromosomes. Mating ------ The simplest methods choose one or more points in the chromosome to mark as the crossover points. Then the variables between these points are merely swapped between the two parents. Crossover points are randomly selected.
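The selection, mating and mutation steps described above can be sketched in a few lines; this is a minimal illustration and not the authors' actual implementation. The quadratic cost function is a stand-in for the permeability misfit used in this work, and all numerical settings (population size, rates, bounds) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(chrom, target=np.array([2.0, 1.0, 0.5, 3.0])):
    """Stand-in cost; in this work it would be the misfit between the
    measured permeability and the model evaluated with the chromosome."""
    return float(np.sum((chrom - target) ** 2))

def ga_minimize(n_pop=40, n_var=4, n_gen=200, sel_rate=0.5, mut_rate=0.05):
    pop = rng.uniform(0.0, 5.0, size=(n_pop, n_var))   # initial chromosomes
    n_keep = int(sel_rate * n_pop)
    for _ in range(n_gen):
        # Natural selection: rank by cost and keep the best fraction.
        pop = pop[np.argsort([cost(c) for c in pop])]
        survivors = pop[:n_keep]
        # Mating: single-point crossover between randomly paired survivors.
        children = []
        while len(children) < n_pop - n_keep:
            i, j = rng.choice(n_keep, size=2, replace=False)
            xp = int(rng.integers(1, n_var))           # random crossover point
            children.append(np.concatenate([survivors[i][:xp],
                                            survivors[j][xp:]]))
        pop = np.vstack([survivors, np.array(children)])
        # Mutation: redraw a few random genes, sparing the current best row.
        mask = rng.random(pop.shape) < mut_rate
        mask[0, :] = False
        pop = np.where(mask, rng.uniform(0.0, 5.0, size=pop.shape), pop)
    return pop[np.argmin([cost(c) for c in pop])]
```

Sparing the best row from mutation (elitism) guarantees that the lowest cost found so far never increases from one generation to the next.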
Mutation -------- If care is not taken, the GA can converge too quickly into one region of a local minimum of the cost function rather
--- abstract: 'Explicit formulas involving a generalized Ramanujan sum are derived. An analogue of the prime number theorem is obtained and equivalences of the Riemann hypothesis are shown. Finally, explicit formulas of Bartz are generalized.' address: 'Institut für Mathematik, Universität Zürich, Winterthurerstrasse 190, CH-8057 Zürich, Switzerland' author: - Patrick Kühn - Nicolas Robles title: Explicit formulas of a generalized Ramanujan sum --- Introduction ============ In [@ramanujan] Ramanujan introduced the following trigonometrical sum. The Ramanujan sum is defined by $$\begin{aligned} \label{ramanujansum} {c_q}(n) = \sum_{\substack{(h,q) = 1}} {{e^{2\pi inh/q}}},\end{aligned}$$ where $q$ and $n$ are in ${\mathbb{N}}$ and the summation is over a reduced residue system $\bmod \; q$. Many properties were derived in [@ramanujan] and elaborated in [@hardy]. Cohen [@cohen] generalized this arithmetical function in the following way. Let $\beta \in {\mathbb{N}}$. The $c_q^{(\beta)}(n)$ sum is defined by $$\begin{aligned} \label{cohendef} c_q^{(\beta )}(n) = \sum_{\substack{(h,{q^\beta })_\beta = 1}} {{e^{2\pi inh/{q^\beta }}}},\end{aligned}$$ where $h$ ranges over the non-negative integers less than $q^{\beta}$ such that $h$ and $q^{\beta}$ have no common $\beta$-th power divisors other than $1$. It follows immediately that when $\beta = 1$, (\[cohendef\]) becomes the Ramanujan sum (\[ramanujansum\]). Among the most important properties of $c_q^{(\beta )}(n)$ we mention that it is a multiplicative function of $q$, i.e. $$c_{pq}^{(\beta )}(n) = c_p^{(\beta )}(n)c_q^{(\beta )}(n),\quad (p,q) = 1.$$ The purpose of this paper is to derive explicit formulas involving $c_q^{(\beta)}(n)$ in terms of the non-trivial zeros $\rho$ of the Riemann zeta-function and establish arithmetic theorems. Let $z \in {\mathbb{C}}$.
The generalized divisor function $\sigma_z^{(\beta)}(n)$ is the sum of the $z^{\operatorname{th}}$ powers of those divisors of $n$ which are $\beta^{\operatorname{th}}$ powers of integers, i.e. $$\sigma_z^{(\beta)}(n) = \sum_{{d^\beta }|n} {{d^{\beta z}}}.$$ The object of study is the following. For $x \ge 1$, we define $$\mathfrak{C}^{(\beta )}(n,x) = \sum_{q \leqslant x} {c_q^{(\beta )}(n)}.$$ For technical reasons we set $$\mathfrak{C}^{\sharp,(\beta )}(n,x) = \begin{cases} \mathfrak{C}^{(\beta )}(n,x), & \mbox{ if } x \notin {\mathbb{N}},\\ \mathfrak{C}^{(\beta )}(n,x) - \tfrac{1}{2}c_x^{(\beta)}(n), & \mbox{ if } x \in {\mathbb{N}}. \end{cases}$$ The explicit formula for $\mathfrak{C}^{\sharp,(\beta )}(n,x)$ is then as follows. \[explicitcohenramanujan\] Let $\rho$ and $\rho_m$ denote non-trivial zeros of $\zeta(s)$ of multiplicity $1$ and $m \ge 2$ respectively. Fix integers $\beta$, $n$. There is an $1>\varepsilon > 0$ and a $T_0 = T_0(\varepsilon)$ such that [(\[br-a\])]{.nodecor} and [(\[br-b\])]{.nodecor} hold for a sequence $T_{\nu}$ and $$\mathfrak{C}^{\sharp,(\beta )}(n,x) = - 2\sigma_1^{(\beta )}(n) + \sum_{\substack{|\gamma | < T_{\nu}}} {\frac{{\sigma_{1 - \rho /\beta }^{(\beta )}(n)}}{{\zeta '(\rho )}}\frac{{{x^\rho }}}{\rho }} + {\rm K}_{T_{\nu}}(x) - \sum_{k = 1}^\infty {\frac{{{{( - 1)}^{k}}(2\pi /{x})^{2k}}}{{(2k)!k\zeta (2k + 1)}}\sigma_{1 + 2k/\beta }^{(\beta )}(n)} + E_{T_{\nu}}(x) ,$$ where the error term satisfies $$E_{T_{\nu}}(x) \ll \frac{x \log x}{T_{\nu}^{1-\varepsilon}} ,$$ and where for the zeros of multiplicity $m \ge 2$ we have $${\rm K}_{T_{\nu}}(x) = \sum_{m \geqslant 2} {\sum_{{|\gamma_m|<T_{\nu}}} {\kappa ({\rho _m},x)} } ,\quad \kappa ({\rho _m},x) = \frac{1}{{(m - 1)!}}\mathop {\lim }\limits_{s \to {\rho _m}} \frac{{{d^{m - 1}}}}{{d{s^{m - 1}}}}\bigg( {{{(s - {\rho _m})}^m}\frac{{\sigma_{1 - s/\beta }^{(\beta )}(n)}}{{\zeta (s)}}\frac{{{x^s}}}{s}} \bigg).$$ Moreover, in the limit $\nu \to \infty$ we have 
$$\mathfrak{C}^{\sharp,(\beta )}(n,x) = - 2\sigma_1^{(\beta )}(n) + \lim_{\nu \to \infty} \sum_{\substack{|\gamma | < T_{\nu}}} {\frac{{\sigma_{1 - \rho /\beta }^{(\beta )}(n)}}{{\zeta '(\rho )}}\frac{{{x^\rho }}}{\rho }} + \lim_{\nu \to \infty} {\rm K}_{T_{\nu}}(x) - \sum_{k = 1}^\infty {\frac{{{{( - 1)}^{k}}(2\pi /{x})^{2k}}}{{(2k)!k\zeta (2k + 1)}}\sigma_{1 + 2k/\beta }^{(\beta )}(n)}.$$ The next result is a generalization of a well-known theorem of Ramanujan which is of the same depth as the prime number theorem. \[line1theorem\] For fixed $\beta$ and $n$ in ${\mathbb{N}}$, we have $$\begin{aligned} \label{line1cohen} \frac{{\sigma_{1 - s/\beta }^{(\beta )}(n)}}{{\zeta (s)}} = \sum_{q = 1}^\infty {\frac{{c_q^{(\beta )}(n)}}{{{q^s}}}}\end{aligned}$$ at all points on the line ${\operatorname{Re}}(s)=1$. \[corollaryPNT1\] Let $\beta \in {\mathbb{N}}$. One has that $$\label{pnt_ramanujan} \sum\limits_{q = 1}^\infty {\frac{{c_q^{(\beta )}(n)}}{q}} = 0, \quad \beta \ge 1, \quad \textnormal{and} \quad \sum_{q = 1}^\infty {\frac{{c_q^{(\beta )}(n)}}{{{q^\beta }}}} = \begin{cases} \tfrac{{\sigma_0^{(\beta )}(n)}}{{\zeta (\beta )}} & \mbox{ if } \beta > 1,\\ 0 & \mbox{ if } \beta =1. \end{cases}$$ In particular $$\begin{aligned} \label{pnt_ramanujan2} \sum_{q = 1}^\infty {\frac{c_q(n)}{q}} = 0 \quad \textnormal{and} \quad \sum_{q = 1}^\infty {\frac{\mu(q)}{q}} = 0.\end{aligned}$$ It is possible to further extend the validity of (\[line1cohen\]) deeper into the critical strip, however, this is done at the cost of the Riemann hypothesis. \[equivalence1\] Let $\beta, n \in {\mathbb{N}}$. The Riemann hypothesis is true if and only if $$\begin{aligned} \label{RH_equivalent} \sum_{q = 1}^\infty {\frac{{c_q^{(\beta )}(n)}}{{{q^s}}}}\end{aligned}$$ is convergent and its sum is $\sigma_{1-s/\beta}^{(\beta)}(n) / \zeta(s)$, for every $s$ with $\sigma > \tfrac{1}{2}$.
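For $\beta = 1$ the quantities above are easy to experiment with numerically. The sketch below compares the defining exponential sum (\[ramanujansum\]) with Hölder's classical evaluation $c_q(n) = \mu(q/g)\,\varphi(q)/\varphi(q/g)$, $g = (q,n)$, a standard identity not derived in this paper, and checks the multiplicativity in $q$ stated in the introduction.

```python
from math import gcd
from cmath import exp, pi

def ramanujan_sum(q, n):
    """c_q(n) as the exponential sum over a reduced residue system mod q."""
    s = sum(exp(2j * pi * n * h / q) for h in range(q) if gcd(h, q) == 1)
    return round(s.real)  # c_q(n) is always a rational integer

def phi(q):
    """Euler totient by direct count."""
    return sum(1 for h in range(1, q + 1) if gcd(h, q) == 1)

def mu(q):
    """Moebius function by trial division."""
    k, d = 0, 2
    while d * d <= q:
        if q % d == 0:
            q //= d
            if q % d == 0:
                return 0  # squared prime factor
            k += 1
        else:
            d += 1
    if q > 1:
        k += 1
    return (-1) ** k

def hoelder(q, n):
    """Hoelder's evaluation: c_q(n) = mu(q/g) * phi(q) / phi(q/g), g=(q,n)."""
    g = gcd(q, n)
    return mu(q // g) * phi(q) // phi(q // g)
```

Both routines agree for all small $q$ and $n$, and `ramanujan_sum(q, 1)` recovers $\mu(q)$, the special case behind the second sum in (\[pnt\_ramanujan2\]).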
This is a generalization of a theorem proved by Littlewood (see [@littlewood] and $\mathsection$14.25 of [@titchmarsh]) for the special case where $n =1$. \[equivalence2\] A necessary and sufficient condition for the Riemann hypothesis is $$\mathfrak{C}^{(\beta)}(n,x) \ll_{n,\beta
--- abstract: 'We present the results of a deep study of the isolated dwarf galaxies Andromeda XXVIII and Andromeda XXIX with Gemini/GMOS and Keck/DEIMOS. Both galaxies are shown to host old, metal-poor stellar populations with no detectable recent star formation, conclusively identifying both of them as dwarf spheroidal galaxies (dSphs). And XXVIII exhibits a complex horizontal branch morphology, which is suggestive of metallicity enrichment and thus an extended period of star formation in the past. Decomposing the horizontal branch into blue (metal poor, assumed to be older) and red (relatively more metal rich, assumed to be younger) populations shows that the metal rich are also more spatially concentrated in the center of the galaxy. We use spectroscopic measurements of the Calcium triplet, combined with the improved precision of the Gemini photometry, to measure the metallicity of the galaxies, confirming the metallicity spread and showing that they both lie on the luminosity-metallicity relation for dwarf satellites. Taken together, the galaxies exhibit largely typical properties for dSphs despite their significant distances from M31. These dwarfs thus place particularly significant constraints on models of dSph formation involving environmental processes such as tidal or ram pressure stripping. Such models must be able to completely transform the two galaxies into dSphs in no more than two pericentric passages around M31, while maintaining a significant stellar populations gradient. Reproducing these features is a prime requirement for models of dSph formation to demonstrate not just the plausibility of environmental transformation but the capability of accurately recreating real dSphs.' author: - 'Colin T. Slater and Eric F. Bell' - 'Nicolas F. Martin' - 'Erik J. 
Tollerud and Nhung Ho' title: 'A Deep Study of the Dwarf Satellites Andromeda XXVIII & Andromeda XXIX' --- Introduction ============ The unique physical properties and environments of dwarf galaxies make them excellent test cases for improving our understanding of the processes that affect the structure, stellar populations, and evolution of galaxies. Because of their shallow potential wells, dwarf galaxies are particularly sensitive to a wide range of processes that may only weakly affect larger galaxies. These processes range from cosmological scales, such as heating by the UV background radiation [@gnedin00], to interactions at galaxy scales such as tidal stripping and tidal stirring [@mayer01; @klimentowski09; @kravtsov04], resonant stripping [@donghia09], and ram pressure stripping [@mayer06], to the effects of feedback from the dwarfs themselves [@dekel86; @maclow99; @gnedin02; @sawala10]. Many studies have focused on understanding the differences between the gas-rich, star forming dwarf irregular galaxies (dIrrs) and the gas-poor, non-star-forming dwarf spheroidals. While a number of processes could suitably recreate the broad properties of this differentiation, finding observational evidence in support of any specific theory has been difficult. One of the main clues in this effort is the spatial distribution of dwarfs; while dIrrs can be found throughout the Local Group, dSphs principally are only found within 200-300 kpc of a larger host galaxy such as the Milky Way or Andromeda [@einasto74; @vandenbergh94; @grebel03]. This trend is also reflected in the gas content of Local Group dwarfs [@blitz00; @grcevich09]. This spatial dependence seems to indicate that environmental effects such as tides and ram pressure stripping are likely to be responsible for creating dSphs. However, there are outliers from this trend, such as Cetus, Tucana, and Andromeda XV, which are dSphs that lie more than 700 kpc from either the Milky Way or Andromeda.
The existence of such distant dSphs may suggest that alternative channels for dSph formation exist [@kazantzidis11b], or it could be an incidental effect seen in galaxies that have passed through a larger host on very radial orbits [@teyssier12; @slater13]. The set of isolated dwarf galaxies was recently enlarged by the discovery of Andromeda XXVIII and XXIX, which by their position on the sky were known to be approximately 360 and 200 kpc from Andromeda, respectively [@slater11; @bell11]. While And XXIX was identified as a dSph by the images confirming it as a galaxy, there was no comparable data on And XXVIII (beyond the initial SDSS discovery data) with which to identify it as a dSph or dIrr. We thus sought to obtain deeper imaging of both galaxies down to the horizontal branch level which would enable a conclusive identification of the galaxies as dSphs or dIrrs by constraining any possible recent star formation. In addition, the deep photometry permits more precise determination of the spatial structure and enables the interpretation of the spectroscopic Calcium triplet data from @tollerud13 to obtain a metallicity measurement. As we will discuss, the information derived from these measurements along with dynamical considerations imposed by their position in the Local Group can together place significant constraints on plausible mechanisms for the origin of these two dSphs. This work is organized as follows: we discuss the imaging data and the reduction process in Section \[data\], and illustrate the general features of the color-magnitude diagram in Section \[obsCMD\]. Spectroscopic metallicities are presented in Section \[spectra\], and the structure and stellar populations of the dwarfs are discussed in Section \[structure\]. We discuss the implications of these results for theories of dSph formation in Section \[discussion\]. 
Imaging Observations & Data Reduction {#data} ===================================== Between 22 July 2012 and 13 August 2012 we obtained deep images of And XXVIII and XXIX with the GMOS instrument on Gemini-North (Gemini program GN-2012B-Q-40). The observations for each dwarf consisted of a total of 3150 seconds in SDSS-i band and 2925 seconds in r, centered on the dwarf. Because the dwarfs each nearly fill the field of view of the instrument, we also obtained a pair of flanking exposures for each dwarf to provide an “off-source” region for estimating the contamination from background sources. These exposures consisted of at least 1350 s in both r and i, though some fields received a small number of extra exposures. The images were all taken in 70th percentile image quality conditions or better, which yielded excellent results with the point source full width at half maximum ranging between $0.47\arcsec$ and $0.8\arcsec$. All of the images were bias subtracted, flat fielded, and coadded using the standard bias frames and twilight flats provided by Gemini. The reduced images can be seen in Figure \[images\]. Residual flat fielding and/or background subtraction uncertainty exists at the 1% level (0.01 magnitudes, roughly peak to valley). PSF photometry was performed using DAOPHOT [@stetson87], which enabled accurate measurements even in the somewhat crowded centers of the dwarfs. In many cases the seeing in one filter was much better than the other, such as for the core of And XXVIII where the seeing was $0.47\arcsec$ in i and $0.68\arcsec$ in r. In these cases we chose to first detect and measure the position of stars in the image with the best seeing, and then require the photometry of the other band to reuse the positions of stars detected in the better band. This significantly extends our detection limit, which would otherwise be set by the shallower band, but with limited color information at these faint magnitudes.
The images were calibrated to measurements from the Sloan Digital Sky Survey (SDSS), Data Release 9 [@DR9]. For each stacked image we cross-matched all objects from the SDSS catalog that overlapped our fields, with colors between $-0.2 < (r-i)_0 < 0.6$, and classified as stars both by SDSS and DAOPHOT. Star-galaxy separation was performed using the “sharp” parameter from DAOPHOT. From this we measured the weighted mean offset between the SDSS magnitudes and the instrumental magnitudes to determine the zeropoint for each field. Between the saturation limit of the Gemini data, mitigated by taking several exposures, and faint limits of the SDSS data (corresponding to approximately $19 < i < 22.5$ and $19.5 < r < 22.5$) there were of order 100 stars used for the calibration of each frame. Based on the calculated stellar measurement uncertainties the formal uncertainty on the calibration is at the millimagnitude level, but unaccounted systematic effects likely dominate the statistical uncertainty (e.g., precision reddening measurements). All magnitudes were dereddened with the extinction values from @schlafly11. The photometric completeness of each stacked image was estimated by artificial star tests. For each field we took the PSF used by DAOPHOT for that field and inserted a large grid of artificial stars, with all of the stars at the same magnitude but with Poisson noise on the actual pixel values added to each image. This was performed for both r and i band images simultaneously, and the resulting pair of images was then run through the same automated DAOPHOT pipeline that was used on the original image. Artificial stars were inserted over a grid of i band magnitudes and r-i colors, producing measurements of the recovery rate that cover the entire CMD. The 50% completeness limit for both dwarfs is at least $r_0 = 25.5$, with slightly deeper data in the i-band for And XXVIII. The observed CMDs suffer
[**The ALEPH Collaboration**]{}\ [**Abstract**]{} Searches for scalar top, scalar bottom and degenerate scalar quarks have been performed with data collected with the ALEPH detector at LEP. The data sample consists of 57 $\mathrm{pb}^{-1}$ taken at $\rts$ = 181–184 GeV. No evidence for scalar top, scalar bottom or degenerate scalar quarks was found in the channels $\stop \rightarrow \mathrm{c}\neu$, $\stop \rightarrow \mathrm{b}\ell\snu$, $\sbot \rightarrow \mathrm{b}\neu$, and $\mathrm{\tilde{q}} \rightarrow \mathrm{q}\neu$. From the channel $\stop \rightarrow \mathrm{c}\neu$ a limit of 74 $\gev$ has been set on the scalar top quark mass, independent of the mixing angle. This limit assumes a mass difference between the $\stop$ and the $\neu$ in the range 10–40 $\gev$. From the channel $\stop \rightarrow \mathrm{b}\ell\snu$ the mixing-angle-independent scalar top limit is 82 $\gev$, assuming $m_{\mathrm{\tilde{t}}}-m_{\tilde{\nu}}$ $>$ 10 $\gev$. From the channel $\sbot \rightarrow \mathrm{b}\neu$, a limit of 79 $\gev$ has been set on the mass of the supersymmetric partner of the left-handed state of the bottom quark. This limit is valid for $m_{\mathrm{\tilde{b}}}-m_{\neu}$ $>$ 10 $\gev$. From the channel $\mathrm{\tilde{q}} \rightarrow \mathrm{q}\neu$, a limit of 87 $\gev$ has been set on the mass of supersymmetric partners of light quarks assuming five degenerate flavours and the production of both “left-handed” and “right-handed” squarks. This limit is valid for $m_{\mathrm{\tilde{q}}}-m_{\neu}$ $>$ 5 $\gev$. Introduction ============ In the Minimal Supersymmetric extension of the Standard Model (MSSM) [@SUSY], each chirality state of the Standard Model fermions has a scalar supersymmetric partner. The scalar quarks (squarks) $\tilde{\rm{q}}_{\rm{L}}$ and $\tilde{\rm{q}}_{\rm{R}}$ are the supersymmetric partners of the left-handed and right-handed quarks, respectively.
They are weak interaction eigenstates which can mix to form the mass eigenstates. Since the size of the mixing is proportional to the mass of the Standard Model partner, the lighter scalar top (stop) could be the lightest supersymmetric charged particle. The stop mass eigenstates are obtained by a unitary transformation of the $\stopr$ and $\stopl$ fields, parametrised by the mixing angle $\thetamix$. The lighter stop is given by $\stop = \mathrm{\tilde{t}_L \cos{\thetamix}} + \mathrm{\tilde{t}_R \sin{\thetamix}}$, while the heavier stop is the orthogonal combination. The stop could be produced at LEP in pairs, $\rm{e^+ e^-} \to \stop \bar{\stop}$, via [*s*]{}-channel exchange of a virtual photon or a Z. The searches for stops described here assume that all supersymmetric particles except the lightest neutralino $\neu$ and possibly the sneutrino $\snu$ are heavier than the stop. The conservation of R-parity is also assumed; this implies that the lightest supersymmetric particle (LSP) is stable. Under these assumptions, the two dominant decay channels are $\stop \to \rm{c} \neu$ and $\stop \to\rm{b} \ell \tilde{\nu}$ [@Hikasa]. The first decay can only proceed via loops and thus has a very small width, of the order of 0.01–1 eV [@Hikasa]. The $\stop \to \rm{b} \ell \tilde{\nu}$ channel proceeds via a virtual chargino exchange and has a width of the order of 0.1–10 keV [@Hikasa]. The latter decay dominates when it is kinematically allowed. The phenomenology of the scalar bottom (sbottom), the supersymmetric partner of the bottom quark, is similar to the phenomenology of the stop. Assuming that the $\sbot$ is lighter than all supersymmetric particles except the $\neu$, the $\sbot$ will decay as $\mathrm{\tilde{b} \rightarrow b \neu}$. Compared to the $\stop$ decays, the $\sbot$ decay has a large width of the order of 10–100 MeV. 
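The mixing of $\stopl$ and $\stopr$ into mass eigenstates can be illustrated by diagonalizing a generic $2\times 2$ stop mass-squared matrix. The numerical entries below are hypothetical placeholders, not ALEPH values or MSSM fit results; only the structure (symmetric matrix, lighter eigenstate parametrised by $\thetamix$) is the point.

```python
import numpy as np

# Hypothetical stop mass-squared matrix in the (t_L, t_R) basis, in GeV^2;
# in the MSSM the off-diagonal entry is proportional to the top quark mass.
M2 = np.array([[200.0**2, -175.0 * 150.0],
               [-175.0 * 150.0, 180.0**2]])

# eigh returns eigenvalues in ascending order for a symmetric matrix
eigvals, eigvecs = np.linalg.eigh(M2)
m_stop1, m_stop2 = np.sqrt(eigvals)   # lighter and heavier stop masses (GeV)

# Lighter eigenstate: stop1 = cos(theta) t_L + sin(theta) t_R
cos_t, sin_t = eigvecs[:, 0]
theta_mix = np.arctan2(sin_t, cos_t)
```

With these placeholder entries the lighter eigenvalue is pushed well below both diagonal masses, which is why a large mixing can make the lighter stop the lightest charged supersymmetric particle.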
Direct searches for stops and sbottoms are performed in the stop decay channels $\stop \to \rm{c}\neu$ and $\stop \to\rm{b} \ell \tilde{\nu}$ and in the sbottom decay channel $\sbot \to \rm{b} \neu$. The results of these searches supersede the ALEPH results reported earlier for data collected at energies up to $\rts$ = 172 GeV [@ALEPH_stop]. The D0 experiment [@D0] has reported a lower limit on the stop mass of 85 $\mathrm{GeV}/c^2$ for the decay into $\rm{c} {\chi}$ and for a mass difference between the $\stop$ and the $\chi$ larger than about 40 $\gev$. Searches for $\stop \rightarrow \rm{c} {\chi}$, $\stop\rightarrow\mathrm{b} \ell \tilde{\nu}$ and $\sbot\rightarrow\rm{b}{\chi}$ using data collected at LEP at energies up to $\sqrt{s}$ = 172 $\mathrm{GeV}$ have also been performed by OPAL [@OPAL]. The supersymmetric partners of the light quarks are generally expected in the MSSM to be heavy, i.e., beyond the reach of LEP2, but their masses receive large negative corrections from gluino loops [@doni]. The dominant decay mode is assumed to be $\tilde{\rm{q}} \to \rm{q}\neu$. Limits are set on the production of the u, d, s, c, b squarks, under the assumption that they are mass degenerate. The D0 and CDF Collaborations have published limits on degenerate squarks [@d0ds; @cdfds]. These limits are outside the LEP2 kinematic range for the case of a light gluino; however limits from LEP2 are competitive with those from the Tevatron if the gluino is heavy. The ALEPH detector ================== A detailed description of the ALEPH detector can be found in Ref. [@Alnim], and an account of its performance as well as a description of the standard analysis algorithms can be found in Ref. [@Alperf]. Only a brief overview is given here.
Charged particles are detected in a magnetic spectrometer consisting of a silicon vertex detector (VDET), a drift chamber (ITC) and a time projection chamber (TPC), all immersed in a 1.5 T axial magnetic field provided by a superconducting solenoid. The VDET consists of two cylindrical layers of silicon microstrip detectors; it performs very precise measurements of the impact parameter in space thus allowing powerful short-lifetime particle tags, as described in Ref. [@Rb1]. Between the TPC and the coil, a highly granular electromagnetic calorimeter (ECAL) is used to identify electrons and photons and to measure their energies. Surrounding the ECAL is the return yoke for the magnet, which is instrumented with streamer tubes to form the hadron calorimeter (HCAL). Two layers of external streamer tubes are used together with the HCAL to identify muons. The region near the beam line is covered by two luminosity calorimeters, SICAL and LCAL, which provide coverage down to 34 mrad. The information obtained from the tracking system is combined with that from the calorimeters to form a list of “energy flow particles” [@Alperf]. These objects serve to calculate the variables that are used in the analyses described in Section 3. The Analyses ============ Data collected at $\sqrt{s}$ = 181, 182, 183, and 184 GeV have been analysed, corresponding to integrated luminosities of 0.2, 3.9, 51.0, and 1.9 $\rm{pb}^{-1}$, respectively. Three separate analyses are used to search for the processes , , and $\mathrm \stop \rightarrow b \ell \snu$. All of these channels are characterised by missing momentum and energy. The experimental topology depends largely on $\deltm$, the mass difference between the $\tilde{\rm{q}}$ and the $\neu$ or $\snu$. When $\deltm$ is large, there is a substantial amount of energy available for the visible system and the signal events tend to look like $\ww$, $\ewnu$, $\zz$, and $\qqg$ events. 
These processes are characterised by high multiplicity and high visible mass $M_{\mathrm{vis}}$. When $\deltm$ is small, the energy available for the visible system is small and the signal events are therefore similar to $\ggqq$ events. The process $\ggqq$ is characterised by low multiplicity, low $M_{\mathrm{vis}}$, low total transverse momentum $\pt$ and the
Introduction {#sec:Introduction} ============= Quantum theory (QT) may either be defined by a set of axioms or otherwise be ’derived’ from classical physics by using certain assumptions. Today, QT is frequently identified with a set of axioms defining a Hilbert space structure. This mathematical structure has been created (by von Neumann) by abstraction from the linear solution space of the central equation of QT, the Schr[ö]{}dinger equation. Thus, deriving Schr[ö]{}dinger’s equation is basically the same as deriving QT. To derive the most general version of the time-dependent Schr[ö]{}dinger equation, describing $N$ particles with spin in an external gauge field, means to derive essentially the whole of non-relativistic QT. The second way of proceeding is sometimes called ’quantization’. In the standard (canonical) quantization method one starts from a classical Hamiltonian whose basic variables are then ’transformed’, by means of well-known correspondence rules, $$\label{eq:DTR43QUPR} p\to\frac{\hbar}{\imath}\frac{\mathrm{d}}{\mathrm{d}x},\;\;\; E\to-\frac{\hbar}{\imath}\frac{\mathrm{d}}{\mathrm{d}t} \mbox{,}$$ into operators. Then, all relevant classical observables may be rewritten as operators acting on states of a Hilbert space, etc.; the details of the ’derivation’ of Schr[ö]{}dinger’s equation along these lines may be found in many textbooks. There are formal problems with this approach which have been identified many years ago, and can be expressed e.g. in terms of Groenewold’s theorem, see [@groenewold:principles], [@gotay:groenewold]. Even more seriously, there is no satisfactory *explanation* for this ’metamorphosis’ of observables into operators. This quantization method (as well as several other mathematically more sophisticated versions of it) is just a *recipe* or, depending on one’s taste, “black magic”, [@hall:exact_uncertainty].
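The content of the correspondence rules (\[eq:DTR43QUPR\]) can be made concrete on a plane wave $\psi = e^{i(kx-\omega t)}$: applying $p \to (\hbar/i)\,\mathrm{d}/\mathrm{d}x$ and $E \to -(\hbar/i)\,\mathrm{d}/\mathrm{d}t$ returns the eigenvalues $\hbar k$ and $\hbar\omega$. The sketch below checks this with central finite differences; the parameter values (and $\hbar = 1$) are arbitrary choices for illustration only.

```python
from cmath import exp

# Hypothetical plane-wave parameters; hbar set to 1 for simplicity
hbar, k, w = 1.0, 2.0, 3.0

def psi(x, t):
    """Plane wave exp(i(kx - wt)), an eigenstate of both rules."""
    return exp(1j * (k * x - w * t))

def d_dx(f, x, t, h=1e-6):
    return (f(x + h, t) - f(x - h, t)) / (2 * h)   # central difference in x

def d_dt(f, x, t, h=1e-6):
    return (f(x, t + h) - f(x, t - h)) / (2 * h)   # central difference in t

x0, t0 = 0.3, 0.7
p_eig = (hbar / 1j) * d_dx(psi, x0, t0) / psi(x0, t0)    # approaches hbar*k
E_eig = -(hbar / 1j) * d_dt(psi, x0, t0) / psi(x0, t0)   # approaches hbar*w
```

This is, of course, only a numerical restatement of the rules, not an explanation of them; the point of the paper is precisely that such an explanation is missing in the canonical approach.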
Note that the enormous success of this recipe in various contexts - including field quantization - is no substitute for an explanation. The choice of a particular quantization procedure will be strongly influenced by the preferred interpretation of the quantum theoretical formalism. If QT is interpreted as a theory describing individual events, then the Hamiltonian of classical mechanics becomes a natural starting point. This ’individuality assumption’ is an essential part of the dominating ’conventional’, or ’Copenhagen’, interpretation (CI) of QT. It is well-known, that QT becomes a source of mysteries and paradoxes[^1] whenever it is interpreted in the sense of CI, as a (complete) theory for individual events. Thus, the canonical quantization method and the CI are similar in two respects: both rely heavily on the concept of individual particles and both are rather mysterious. This situation confronts us with a fundamental alternative. Should we accept the mysteries and paradoxes as inherent attributes of reality or should we not, instead, critically reconsider our assumptions, in particular the ’individuality assumption’. As a matter of fact, the dynamical numerical output of quantum mechanics consists of *probabilities*. A probability is a “deterministic” prediction which can be verified in a statistical sense only, i.e. by performing experiments on a large number of identically prepared individual systems, see [@belinfante:individual], [@margenau:measurements]. Therefore, the very structure of QT tells us that it is a theory about statistical ensembles only, see [@ballentine:statistical]. If dogmatic or philosophical reasons ’force’ us to interpret QT as a theory about individual events, we have to create complicated intellectual constructs, which are not part of the physical formalism, and lead to unsolved problems and contradictions. The present author believes, like several other physicists \[see e.g. 
[@kemble:principles1; @einstein:physics_reality; @margenau:quantum-mechanical; @blokhintsev:quantum; @ballentine:statistical; @belinfante:measurements; @ross-bonney:dice; @young:quantum; @newton:probability; @pippard:interpretation; @tschudi:statistical; @toyozawa:measurement; @krueger:epr_debate; @ali:ensemble]\] that QT is a purely statistical theory whose predictions can only be used to describe the behavior of statistical ensembles and not of individual particles. This statistical interpretation (SI) of QT eliminates all mysteries and paradoxes - and this shows that the mysteries and paradoxes are not part of QT itself but rather the result of a particular (mis)interpretation of QT. In view of the similarity discussed above, we adopt the statistical point of view, not only for the interpretation of QT itself, but also in our search for *quantization conditions*. The general strategy is to find a new set of (as) simple (as possible) statistical assumptions which can be understood in physical terms and imply QT. Such an approach would also provide an explanation for the correspondence rules . The present paper belongs to a series of works aimed at such an explanation. Quite generally, the present work continues a long tradition of attempts, see [@schrodinger:quantisierung_I; @motz:quantization; @schiller:quasiclassical; @rosen:classical_quantum; @frieden:fisher_basis; @lee.zhu:principle; @hall.reginatto:quantum_heisenberg; @frieden:sciencefisher], to characterize QT by mathematical relations which can be understood in physical terms[^2] (in contrast to the axiomatic approach). More specifically, it continues previous attempts to derive Schr[ö]{}dinger’s equation with the help of statistical concepts, see [@hall.reginatto:schroedinger], [@reginatto:derivation; @syska:fisher], [@klein:schroedingers]. 
These works, being quite different in detail, share the common feature that a statistical ensemble and not a particle Hamiltonian is used as a starting point for quantization. Finally, in a previous work, [@klein:statistical], of the present author an attempt has been undertaken to construct a complete statistical approach to QT with the help of a small number of very simple (statistical) assumptions. This work will be referred to as I. The present paper is a continuation and extension of I. The quantization method reported in I is based on the following general ideas: - QT should be a probabilistic theory in configuration space (not in phase space). - QT should fulfil abstract versions of (i) a conservation law for probability (continuity equation), and (ii) Ehrenfest’s theorem. Such relations hold in all statistical theories no matter whether quantum or classical. - There are no laws for particle trajectories in QT anymore. This arbitrariness, which represents a crucial difference between QT and classical statistics, should be handled by a statistical principle analogous to the principle of maximal entropy in classical statistics. These general ideas lead to the mathematical assumptions which represent the basis for the treatment reported in I. This work was restricted to a one-dimensional configuration space (a single particle ensemble with a single spatial degree of freedom). The present work generalizes the treatment of I to a $3N-$dimensional configuration space (ensembles representing an arbitrary number $N$ of particles allowed to move in three-dimensional space), gauge-coupling, and spin. In a first step the generalization to three spatial dimensions is performed; the properly generalized basic relations are reported in section \[sec:calcwpt\]. This section contains also a review of the fundamental ideas.
In section \[sec:gaugecoupling\] we make use of a mathematical freedom, which is already contained in our basic assumptions, namely the multi-valuedness of the variable $S$. This leads to the appearance of potentials in statistical relations replacing the local forces of single-event (mechanical) theories. The mysterious non-local action of the vector potential (in effects of the Aharonov-Bohm type) is explained as a consequence of the statistical nature of QT. In section \[sec:stat-constr-macr\] we discuss a related question: Which constraints on admissible forces exist for the present class of statistical theories? The answer is that only macroscopic (elementary) forces of the form of the Lorentz force can occur in nature, because only these survive the transition to QT. These forces are statistically represented by potentials, i.e. by the familiar gauge coupling terms in matter field equations. The present statistical approach provides a natural explanation for the long-standing question of why potentials play an indispensable role in the field equations of physics. In section \[sec:fisher-information\] it is shown that among all statistical theories only the time-dependent Schrödinger equation follows from the logical requirement of maximal disorder or minimal Fisher information. Spin one-half is introduced, in section \[sec:spin\], as the property of a statistical ensemble to respond to an external gauge field in two different ways. A generalized calculation, reported in sections \[sec:spin\] and \[sec:spin-fish-inform\], leads to Pauli’s (single-particle) equation. In section \[sec:spin-as-gauge\] an alternative derivation, following [@arunsalam:hamiltonians] and [@gould:intrinsic], is reported, which is particularly convenient for the generalization to arbitrary $N$. The latter is performed in section \[sec:final-step-to\],
--- abstract: 'The longitudinal spin structure factor for the $XXZ$-chain at small wave-vector $q$ is obtained using Bethe Ansatz, field theory methods and the Density Matrix Renormalization Group. It consists of a peak with peculiar, non-Lorentzian shape and a high-frequency tail. We show that the width of the peak is proportional to $q^2$ for finite magnetic field compared to $q^3$ for zero field. For the tail we derive an analytic formula without any adjustable parameters and demonstrate that the integrability of the model directly affects the lineshape.' author: - 'R. G. Pereira' - 'J. Sirker' - 'J.-S. Caux' - 'R. Hagemans' - 'J. M. Maillet' - 'S. R. White' - 'I. Affleck' title: 'The dynamical spin structure factor for the anisotropic spin-$1/2$ Heisenberg chain' --- One of the seminal models in the field of strong correlation effects is the antiferromagnetic spin-$1/2$ $XXZ$-chain $$H=J\sum_{j=1}^{N}\left[S_{j}^{x}S_{j+1}^{x}+S_{j}^{y}S_{j+1}^{y}+\Delta S_{j}^{z}S_{j+1}^{z}-hS_{j}^{z}\right]\; , \label{XXZ}$$ where $J>0$ is the coupling constant and $h$ a magnetic field. The parameter $\Delta$ describes an exchange anisotropy and the model is critical for $-1<\Delta\leq 1$. Recently, much interest has focused on understanding its dynamics, in particular, the spin [@Zotos] and the heat conductivity [@KluemperSakai], both at wave-vector $q=0$. A related important question refers to dynamical correlation functions at small but nonzero $q$, in particular the dynamical spin structure factors $S^{\mu\mu}(q,\omega)$, $\mu=x,y,z$ [@Muller]. These quantities are in principle directly accessible by inelastic neutron scattering. Furthermore, they are important to resolve the question of ballistic versus diffusive transport raised by recent experiments [@Thurber] and would also be useful for studying Coulomb drag for two quantum wires [@Glazman]. 
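For readers who want to experiment, the Hamiltonian (\[XXZ\]) is easy to construct exactly for a short chain. The sketch below is purely illustrative: the tiny size $N=8$, the value $\Delta=0.5$, and the periodic boundary conditions are our choices, and the field term is multiplied by $J$ exactly as written in the equation above.

```python
import numpy as np

# spin-1/2 operators (hbar = 1)
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

def op(o, j, N):
    # embed the single-site operator o at site j of an N-site chain
    m = np.array([[1.0 + 0j]])
    for k in range(N):
        m = np.kron(m, o if k == j else I2)
    return m

def xxz(N, J=1.0, Delta=0.5, h=0.0):
    # dense XXZ Hamiltonian with periodic boundary conditions,
    # H = J sum_j [Sx Sx + Sy Sy + Delta Sz Sz - h Sz], as in Eq. (XXZ)
    H = np.zeros((2**N, 2**N), dtype=complex)
    for j in range(N):
        jp = (j + 1) % N
        H += J * (op(sx, j, N) @ op(sx, jp, N)
                  + op(sy, j, N) @ op(sy, jp, N)
                  + Delta * op(sz, j, N) @ op(sz, jp, N)
                  - h * op(sz, j, N))
    return H

E = np.linalg.eigvalsh(xxz(8))
print(round(E[0] / 8, 4))   # ground-state energy per site for N = 8, Delta = 0.5
```

Exact diagonalization of this kind gives all eigenstates, so it can also be used to evaluate the form factors entering the structure factor below, albeit only for very small $N$.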
In this letter we study the lineshape of the longitudinal structure factor $S^{zz}(q,\omega)$ at zero temperature in the limit of small $q$. Our main results can be summarized as follows: By calculating the form factors $F(q,\omega)\equiv\left\langle 0\left|S_{q}^{z}\right|\alpha\right\rangle $ (here $|0\rangle$ is the ground state and $|\alpha\rangle$ an excited state) for finite chains based on a numerical evaluation of exact Bethe Ansatz (BA) expressions [@KitanineMaillet; @CauxMaillet] we establish that $S^{zz}(q,\omega)$ consists of a peak with peculiar, non-Lorentzian shape centered at $\omega\sim vq$, where $v$ is the spin-wave velocity, and a high-frequency tail. We find that $|F(q,\omega)|$ is a rapidly decreasing function of the number of particles involved in the excitation. In particular, we find for all $\Delta$ that the peak is completely dominated by two-particle (single particle-hole) and the tail by four-particle states (denoted by 2$p$ and 4$p$ states, respectively). Including up to eight-particle as well as bound states we verify using Density Matrix Renormalization Group (DMRG) that the sum rules are fulfilled with high accuracy corroborating our numerical results. By solving the BA equations for small $\Delta$ and infinite system size analytically we show that the width of the peak scales like $q^2$ for $h\neq 0$. Furthermore, we calculate the high-frequency tail analytically based on a parameter-free effective bosonic Hamiltonian. We demonstrate that our analytical results for the linewidth and the tail are in excellent agreement with our numerical data. For a chain of length $N$ the longitudinal dynamical structure factor is defined by $$\begin{aligned} S^{zz}\left(q,\omega\right)&=&\frac{1}{N}\sum_{j,j^{\prime}=1}^{N}e^{-iq\left(j-j^{\prime}\right)}\int_{-\infty}^{+\infty}\!\!\!\!\!\!\!\! 
dt\, e^{i\omega t}\left\langle S_{j}^{z}\left(t\right)S_{j^{\prime}}^{z}\left(0\right)\right\rangle \nonumber \\ &=& \frac{2\pi}{N}\sum_{\alpha}\left|\left\langle 0\left|S_{q}^{z}\right|\alpha\right\rangle \right|^{2}\delta\left(\omega-E_{\alpha}\right) \; . \label{strucFac}\end{aligned}$$ Here $S_{q}^{z}=\sum_{j}S_{j}^{z}e^{-iqj}$ and $\left|\alpha\right\rangle $ is an eigenstate with energy $E_{\alpha}$ above the ground state energy. For a finite system, $S^{zz}\left(q,\omega\right)$ at fixed $q$ is a sum of $\delta$-peaks at the energies of the eigenstates. In the thermodynamic limit $N\rightarrow\infty$, the spectrum is continuous and $S^{zz}\left(q,\omega\right)$ becomes a smooth function of $\omega$ and $q$. By linearizing the dispersion around the Fermi points and representing the fermionic operators in terms of bosonic ones the Hamiltonian (\[XXZ\]) at low energies becomes equivalent to the Luttinger model [@GiamarchiBook]. For this free boson model $S^{zz}(q,\omega)$ can be easily calculated and is given by $$S^{zz}\left(q,\omega\right)=K\left|q\right|\delta\left(\omega-v\left|q\right|\right)\, , \label{Szz_freeBoson}$$ where $K$ is the Luttinger parameter. This result is a consequence of Lorentz invariance: a single boson with momentum $\left|q\right|$ always carries energy $\omega=v|q|$, leading to a $\delta$-function peak at this level of approximation. We expect the simple result (\[Szz\_freeBoson\]) to be modified in various ways. First of all, the peak at $\omega\sim vq$ should acquire a finite width $\gamma_q$. The latter can be easily calculated for the $XX$ point, $\Delta=0$, where the model is equivalent to non-interacting spinless fermions. In this case the only states that couple to the ground state via $S^z_q$ are those containing a single particle-hole excitation (2$p$ states). As a result, the exact $S^{zz}(q,\omega)$ is finite only within the boundaries of the 2$p$ continuum. 
For $h\neq 0$, one finds $\gamma_q\approx q^2/m$ for small $q$, where $m=(J\cos k_F)^{-1}$ is the effective mass at the Fermi momentum $k_F$. For $h=0$, $m^{-1}\to 0$ and the width becomes instead $\gamma_q\approx Jq^3/8$. In both cases the non-zero linewidth is associated with the band curvature at the Fermi level and sets a finite lifetime for the bosons in the Luttinger model. Different attempts to calculate $\gamma_q$ for $\Delta\neq0$ have focused on perturbation theory in the band curvature terms [@Samokhin] or in the interaction $\Delta$ [@Kopietz; @Teber] and contradictory results were found. All these approaches have to face the breakdown of perturbation theory near $\omega\sim vq$. Since perturbative approaches show divergences on shell, our discussion about the broadening of the peak is based on the BA solution. The BA allows us to calculate the energy of an eigenstate exactly from a system of coupled non-linear equations [@tak99]. For $\Delta=0$ these equations decouple, the structure factor is determined by 2$p$ states only and one recovers the free fermion solution. For $|\Delta|\ll 1$ the most important excitations are still of the 2$p$ type and one can obtain the energies of these eigenstates analytically in the thermodynamic limit by expanding the BA equations in lowest order in $\Delta$. For $h\neq0$ (*i.e.*, finite magnetization $s\equiv \left\langle S_j^z\right\rangle$) this leads to $$\label{BA7} \gamma_q = 4J \left(1+\frac{2\Delta}{\pi}\sin k_F\right)\cos k_F \sin^2\frac{q}{2} \approx \frac{q^2}{m^*}$$ for the 2$p$ type excitations. We therefore conclude that the interaction does not change the scaling of $\gamma_q$ compared to the free fermion case but rather induces a renormalization of the mass given by $m\rightarrow m^* = m/(1+2\Delta\sin k_F/\pi)$. We have verified our analytical small $\Delta$ result by calculating the form factors numerically [@CauxMaillet].
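At the XX point both scaling laws can be checked directly, since the 2$p$ continuum boundaries follow from the free-fermion dispersion $\epsilon(k)=-J\cos k$. The numerical sketch below is our own construction; it uses only the small-$q$ fact that for $0<q\ll k_F$ the allowed particle-hole pairs sit at the right Fermi point, $k\in(k_F-q,\,k_F]$.

```python
import numpy as np

def gamma_q(q, kF, J=1.0, nk=4001):
    # Width of the 2p (single particle-hole) continuum of the XX chain:
    # hole at k in (kF - q, kF], particle at k + q, with eps(k) = -J cos k,
    # so omega(k) = eps(k + q) - eps(k) = 2 J sin(q/2) sin(k + q/2).
    k = np.linspace(kF - q, kF, nk)
    omega = 2 * J * np.sin(q / 2) * np.sin(k + q / 2)
    return omega.max() - omega.min()

J = 1.0
qs = np.array([0.02, 0.04, 0.08])

# h != 0 (kF != pi/2): gamma_q ~ q^2 / m with 1/m = J cos kF
kF = np.pi / 3
g = np.array([gamma_q(q, kF) for q in qs])
print(np.polyfit(np.log(qs), np.log(g), 1)[0])            # slope ~ 2
print(np.allclose(g, J * np.cos(kF) * qs**2, rtol=1e-3))  # True

# h = 0 (kF = pi/2): band curvature vanishes at the Fermi level, gamma_q ~ J q^3 / 8
g0 = np.array([gamma_q(q, np.pi / 2) for q in qs])
print(np.polyfit(np.log(qs), np.log(g0), 1)[0])           # slope ~ 3
print(np.allclose(g0, J * qs**3 / 8, rtol=1e-2))          # True
```

For $\Delta\neq0$ the same $q^2$ scaling survives, with $m$ replaced by the renormalized mass $m^*$ given above.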
--- abstract: 'We study the affine quasi-Einstein Equation for homogeneous surfaces. This gives rise through the modified Riemannian extension to new half conformally flat generalized quasi-Einstein neutral signature $(2,2)$ manifolds, to conformally Einstein manifolds and also to new Einstein manifolds through a warped product construction.' address: - 'MBV: Universidade da Coruña, Differential Geometry and its Applications Research Group, Escola Politécnica Superior, 15403 Ferrol, Spain' - 'EGR-XVR: Faculty of Mathematics, University of Santiago de Compostela, 15782 Santiago de Compostela, Spain' - 'PBG: Mathematics Department, University of Oregon, Eugene OR 97403-1222, USA' author: - 'M. Brozos-Vázquez E. García-Río P. Gilkey, and X. Valle-Regueiro' title: 'The affine quasi-Einstein Equation for homogeneous surfaces' --- [^1] Introduction ============ The affine quasi-Einstein Equation (see Equation (\[E1.c\])) is a $0^{th}$ order perturbation of the Hessian. It is a natural linear differential equation in affine differential geometry. We showed (see [@BGGV17a]) that it gives rise to strong projective invariants of the affine structure. Moreover, this equation also appears in the classification of half conformally flat quasi-Einstein manifolds in signature $(2,2)$. In this paper, we will examine the solution space to the affine quasi-Einstein Equation in the context of homogeneous affine geometries. A description of locally homogeneous affine surfaces has been given by Opozda [@Op04] (see Theorem \[T1.7\] below). They fall into 3 families. The first family is given by the Levi-Civita connection of a surface of constant curvature (Type $\mathcal{C}$). There are two other families. The first (Type $\mathcal{A}$) generalizes the Euclidean connection and the second (Type $\mathcal{B}$) is a generalization of the hyperbolic plane. As the Type $\mathcal{C}$ geometries are very rigid, we shall focus on the other two geometries. 
There are many non-trivial solutions of the affine quasi-Einstein Equation for Type $\mathcal{A}$ geometries (see Section \[ss1.5\]) and for Type $\mathcal{B}$ geometries (see Section \[ss1.6\]). This leads (see Theorem \[T1.1\] and Remark \[R1.2\]) to new examples of half conformally flat and conformally Einstein isotropic quasi-Einstein manifolds of signature $(2,2)$. We also use results of [@KimKim] to construct new higher dimensional Einstein manifolds. Our present discussion illustrates many of the results of [@BGGV17a] and focusses on the dimension of the eigenspaces of the solutions to the affine quasi-Einstein Equation for homogeneous surfaces. Notational conventions ---------------------- Recall that a pair $\mathcal{M}=(M,\nabla)$ is said to be an [*affine manifold*]{} if $\nabla$ is a torsion free connection on the tangent bundle of a smooth manifold $M$ of dimension $m\ge2$. We shall be primarily interested in the case of affine surfaces ($m=2$) but it is convenient to work in greater generality for the moment. In a system of local coordinates, express $\nabla_{\partial_{x^i}}\partial_{x^j}=\Gamma_{ij}{}^k\partial_{x^k}$ where we adopt the Einstein convention and sum over repeated indices. The connection $\nabla$ is torsion free if and only if the Christoffel symbols $\Gamma=(\Gamma_{ij}{}^k)$ satisfy the symmetry $\Gamma_{ij}{}^k=\Gamma_{ji}{}^k$ or, equivalently, if given any point $P$ of $M$, there exists a coordinate system centered at $P$ so that in that coordinate system we have $\Gamma_{ij}{}^k(P)=0$. Let $f$ be a smooth function on $M$. The Hessian $$\label{E1.a} \mathcal{H}_\nabla f=\nabla^2f:=(\partial_{x^i}\partial_{x^j}f-\Gamma_{ij}{}^k\partial_{x^k}f)\,dx^i\otimes dx^j$$ is an invariantly defined symmetric $(0,2)$-tensor field; $\mathcal{H}_\nabla:C^\infty(M)\rightarrow C^\infty(S^2(M))$ is a second order partial differential operator which is natural in the context of affine geometry. 
The curvature operator $R_\nabla$ and the Ricci tensor $\rho_\nabla$ are defined by setting: $$R_\nabla(x,y):=\nabla_x\nabla_y-\nabla_y\nabla_x-\nabla_{[x,y]}\text{ and } \rho_\nabla(x,y):=\operatorname{Tr}\{z\rightarrow R_\nabla(z,x)y\}\,.$$ The Ricci tensor carries the geometry if $m=2$; an affine surface is flat if and only if $\rho_\nabla=0$ because $$\rho_{11}=R_{211}{}^2,\quad\rho_{12}=R_{212}{}^2,\quad\rho_{21}=R_{121}{}^1,\quad \rho_{22}=R_{122}{}^1\,.$$ In contrast to the situation in Riemannian geometry, $\rho_\nabla$ is not in general a symmetric $(0,2)$-tensor field. The symmetrization and anti-symmetrization of the Ricci tensor are defined by setting, respectively, $$\textstyle\rho_{s,\nabla}(x,y):=\frac12\{\rho_\nabla(x,y)+\rho_\nabla(y,x)\}\text{ and } \textstyle\rho_{a,\nabla}(x,y):=\frac12\{\rho_\nabla(x,y)-\rho_\nabla(y,x)\}\,.$$ We use $\rho_{s,\nabla}$ to define a $0^{\operatorname{th}}$ order perturbation of the Hessian. The [*affine quasi-Einstein operator*]{} $ \mathfrak{Q}_{\mu,\nabla}:C^\infty(M)\rightarrow C^\infty(S^2(M))$ is defined by setting: $$\label{E1.b} \mathfrak{Q}_{\mu,\nabla} f:=\mathcal{H}_\nabla f-\mu f\rho_{s,\nabla} \,.$$ The eigenvalue $\mu$ is a parameter of the theory; again, this operator is natural in the category of affine manifolds. The [*affine quasi-Einstein Equation*]{} is the equation: $$\label{E1.c} \mathfrak{Q}_{\mu,\nabla} f=0\text{ i.e. }\mathcal{H}_\nabla f=\mu f\rho_{s,\nabla}\,.$$ We introduce the associated eigenspaces by setting: $$E(\mu,\nabla):=\ker( \mathfrak{Q}_{\mu,\nabla})=\{f\in C^\infty(M):\mathcal{H}_\nabla f=\mu f\rho_{s,\nabla}\}\,.$$ Similarly, if $P$ is a point of $M$, we let $E(P,\mu,\nabla)$ be the space of germs of solutions to Equation (\[E1.c\]) which are defined near $P$. Note that $E(0,\nabla)=\ker(\mathcal{H}_\nabla)$ is the set of [*Yamabe solitons*]{}. Also note that $\rho_{s,\nabla}=0$ implies $E(\mu,\nabla)=E(0,\nabla)$ for any $\mu$. 
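A compact way to experiment with Equation (\[E1.c\]) is to implement $\mathcal{H}_\nabla$, $\rho_\nabla$ and $\mathfrak{Q}_{\mu,\nabla}$ symbolically. The sketch below is illustrative only: it uses the Levi-Civita connection of the hyperbolic half-plane (a Type $\mathcal{C}$ geometry, chosen because its Christoffel symbols and Ricci tensor are classical, not one of the Type $\mathcal{A}$/$\mathcal{B}$ families studied here), and the test function $f=1/y$ and eigenvalue $\mu=-1$ are our choices.

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
coords = (x, y)

# Christoffel symbols Gamma[i][j][k] = Gamma_{ij}^k for the Levi-Civita
# connection of the hyperbolic half-plane metric (dx^2 + dy^2)/y^2.
Gamma = [[[0, 1/y], [-1/y, 0]],
         [[-1/y, 0], [0, -1/y]]]

def hessian(f):
    # (H f)_{ij} = d_i d_j f - Gamma_{ij}^k d_k f, Equation (E1.a)
    return sp.Matrix(2, 2, lambda i, j: sp.diff(f, coords[i], coords[j])
                     - sum(Gamma[i][j][k] * sp.diff(f, coords[k]) for k in range(2)))

def R(i, j, k, l):
    # curvature component R_{ijk}^l of the torsion-free connection
    return (sp.diff(Gamma[j][k][l], coords[i]) - sp.diff(Gamma[i][k][l], coords[j])
            + sum(Gamma[i][m][l] * Gamma[j][k][m]
                  - Gamma[j][m][l] * Gamma[i][k][m] for m in range(2)))

rho = sp.Matrix(2, 2, lambda j, k: sp.simplify(sum(R(i, j, k, i) for i in range(2))))
rho_s = (rho + rho.T) / 2          # symmetrized Ricci tensor

def Q(f, mu):
    # affine quasi-Einstein operator, Equation (E1.b)
    return sp.simplify(hessian(f) - mu * f * rho_s)

print(rho)          # diagonal, both entries -1/y**2
print(Q(1/y, -1))   # zero matrix: f = 1/y lies in E(-1, nabla)
```

A direct hand computation confirms this: $\mathcal{H}_\nabla(1/y)=y^{-3}\,\delta$ while $\rho_s=-y^{-2}\,\delta$, so the equation holds exactly for $\mu=-1$; constants always lie in $E(0,\nabla)$.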
If $\mu\ne0$ and $f>0$, let $\hat f:=-2\mu^{-1}\log(f)$, i.e. $f=e^{-\frac12\mu\hat f}$. This transformation converts Equation (\[E1.c\]) into the equivalent non-linear equation: $$\label{E1.d} \mathcal{H}_\nabla\hat f+2\rho_{s,\nabla}-\textstyle\frac12\mu d\hat f\otimes d\hat f=0\,.$$ Half conformally flat 4-dimensional geometry -------------------------------------------- Equation (\[E1.d\]) plays an important role in the study of the quasi-Einstein Equation in neutral signature geometry [@BGGV17]. Let $\mathcal{N}=(N,g,F,\mu_N)$ be a quadruple where $(N,g)$ is a pseudo-Riemannian manifold of dimension $n$, $F\in {C}^\infty(N)$, and $\mu_N\in \mathbb{R}$. Let $\nabla^g$ be the Levi-Civita connection of $g$; the associated Ricci tensor $\rho_g$ is a symmetric $(0,2)$-tensor field. We say that $\mathcal{N}$ is a *quasi-Einstein manifold* if $$\mathcal{H}_{\nabla^g}F+\rho_g-\mu_NdF\otimes dF=\lambda\, g\text{ for }\lambda\in\mathbb{R}\,.$$ We say $\mathcal{N}$ is [*isotropic*]{} if $\|dF\|=0$. We restrict to the 4-dimensional setting where Walker geometry (see [@DeR; @Walker]) enters by means of the deformed Riemannian extension. If $(x^1,x^2)$ are local coordinates on an affine surface $\mathcal{M}=(M,\nabla)$, let $(y_1,y_2)$ be the corresponding dual coordinates on the cotangent bundle $T^*M$; if $\omega$ is a 1-form, then we can express $\
--- author: - 'Lillian J. Ratliff and Tanner Fiez' bibliography: - '2017TACLTrefs.bib' title: Adaptive Incentive Design --- Introduction {#sec:intro} ============ Problem Formulation {#sec:problemformulation} =================== Utility Learning Formulation {#sec:incent_util} ============================ Incentive Design Formulation {#sec:incent_incent} ============================ Convergence in the Noise Free Case {#sec:main} ================================== Convergence in the Presence of Noise {#sec:noise} ==================================== Numerical Examples {#sec:examples} ================== Conclusion {#sec:discussion} ========== We present a new method for adaptive incentive design when a planner faces competing agents of unknown type. Specifically, we provide an algorithm for learning the agents’ decision-making process and updating incentives. We provide convergence guarantees on the algorithm. We show that under reasonable assumptions, the agents’ true response is driven to the desired response and, under slightly more restrictive assumptions, the true preferences can be learned asymptotically. We provide several numerical examples that both verify the theory and demonstrate the performance when we relax the theoretical assumptions.
--- abstract: 'Location-based services (LBS) provide valuable services, with convenient features for mobile users. However, the location and other information disclosed through each query to the LBS erodes user privacy. This is a concern especially because LBS providers can be *honest-but-curious*, collecting queries and tracking users’ whereabouts and inferring sensitive user data. This motivated both *centralized* and *decentralized* location privacy protection schemes for LBS: anonymizing and obfuscating queries to not disclose exact information, while still getting useful responses. Decentralized schemes overcome disadvantages of centralized schemes, eliminating anonymizers, and enhancing users’ control over sensitive information. However, an insecure decentralized system could create serious risks beyond private information leakage. More so, attacking an improperly designed decentralized privacy protection scheme could be an effective and low-cost step to breach user privacy. We address exactly this problem by proposing security enhancements for mobile data sharing systems. We protect user privacy while preserving accountability of user activities, leveraging pseudonymous authentication with mainstream cryptography. We show our scheme can be deployed with off-the-shelf devices based on an experimental evaluation of an implementation in a static automotive testbed.' author: - Hongyu Jin - Panos Papadimitratos bibliography: - 'references.bib' title: 'Resilient Privacy Protection for Location-based Services through Decentralization' --- This work has been supported by the Swedish Foundation for Strategic Research (SSF) SURPRISE project and the KAW Academy Fellow Trustworthy IoT project.
--- abstract: 'We demonstrate that the origin of so called quantum probabilistic rule (which differs from the classical Bayes’ formula by the presence of $\cos \theta$-factor) might be explained in the framework of ensemble fluctuations which are induced by preparation procedures. In particular, quantum rule for probabilities (with nontrivial $\cos \theta$-factor) could be simulated for macroscopic physical systems via preparation procedures producing ensemble fluctuations of a special form. We discuss preparation and measurement procedures which may produce probabilistic rules which are neither classical nor quantum; in particular, hyperbolic ‘quantum theory.’' author: - | Andrei Khrennikov\ Department of Mathematics, Statistics and Computer Sciences\ University of Växjö, S-35195, Sweden title: 'On the mystery of quantum probabilistic rule: trigonometric and hyperbolic probabilistic behaviours' --- Introduction ============ It is well known that the classical probabilistic rule based on the Bayes’ formula for conditional probabilities cannot be applied to quantum formalism, see, for example, \[1\]-\[3\] for extended discussions. In fact, all special properties of quantum systems are just consequences of violations of the classical probability rule, Bayes’ theorem \[1\]. In this paper we restrict our investigations to the two dimensional case. Here Bayes’ formula has the form $(i=1,2):$ $$\label{*1} {\bf p}(A=a_i)={\bf p}(C=c_1){\bf p}(A=a_i/C=c_1)+ {\bf p}(C=c_2){\bf p}(A=a_i/C=c_2),$$ where $A$ and $C$ are physical variables which take, respectively, values $a_1, a_2$ and $c_1, c_2.$ Symbols ${\bf p}(A=a_i/C=c_j)$ denote conditional probabilities. There is a large diversity of opinions on the origin of violations of (\[\*1\]) in quantum mechanics. The common opinion is that violations of (\[\*1\]) are induced by special properties of quantum systems. Let $\phi$ be a quantum state. 
Let $\{\phi_i\}_{i=1}^2$ be an orthogonal basis consisting of eigenvectors of the operator $\hat{C}$ corresponding to the physical observable $C.$ The quantum theoretical rule has the form $(i=1,2):$ $$\label{*2} q_i = {\bf p}_1 {\bf p}_{1i} + {\bf p}_2 {\bf p}_{2i}\pm 2 \sqrt{{\bf p}_1{\bf p}_{1i} {\bf p}_2 {\bf p}_{2i}}\cos\theta,$$ where $q_i={\bf p}_\phi(A=a_i), {\bf p}_j={\bf p}_\phi(C=c_j), {\bf p}_{ij}={\bf p}_{\phi_i}(A=a_j), i,j=1,2.$ Here probabilities have indices corresponding to quantum states. The common opinion is that this quantum probabilistic rule must be considered as a peculiarity of nature. However, there exists an opposition to this general opinion, namely the probabilistic opposition. The main domain of activity of this probabilistic opposition is Bell’s inequality and the EPR paradox \[4\], see, for example, \[1\], \[5\]-\[11\]. The general idea supported by the probabilistic opposition is that special quantum behaviour can be understood on the basis of local realism, if we are careful with the probabilistic description of physical phenomena. It seems that the origin of all ‘quantum troubles’ is probabilistic rule (\[\*2\]). It seems that the violation of Bell’s inequality is just a new representation of the old contradiction between rules (\[\*1\]) and (\[\*2\]) (the papers of Accardi \[1\] and De Muynck, De Baere and Martens \[7\] contain extended discussions on this problem). Therefore, the main problem of the probabilistic justification of quantum mechanics is to find a clear probabilistic explanation of the origin of quantum probabilistic rule (\[\*2\]) and the violation of classical probabilistic rule (\[\*1\]) and explain why (\[\*2\]) is sometimes reduced to (\[\*1\]). L. Accardi \[5\] introduced a notion of the [*statistical invariant*]{} to investigate the relation between classical Kolmogorovean and quantum probabilistic models, see also Gudder and Zanghi in \[6\]. 
He was also the first who mentioned that Bayes’ postulate is a “hidden axiom of the Kolmogorovean model... which limits its applicability to the statistical description of the natural phenomena”, \[5\]. In fact, this investigation plays a crucial role in our analysis of classical and quantum probabilistic rules. An interesting investigation on this problem is contained in the paper of J. Summhammer \[11\]. He supports the idea that quantum probabilistic rule (\[\*2\]) is not a peculiarity of nature, but just a consequence of one special method of the probabilistic description of nature, the so called method of [*[maximum predictive power]{}*]{}. We do not directly support the idea of Summhammer. It seems that the origin of (\[\*2\]) is not only a consequence of the use of one special method for the description of nature, but merely a consequence of our manipulations with nature, ensembles of physical systems, in quantum preparation/measurement procedures. In this paper we provide a probabilistic analysis of quantum rule (\[\*2\]). In our analysis ‘probability’ has the meaning of the [*frequency probability,*]{} namely the limit of frequencies in a long sequence of trials (or for a large statistical ensemble). Hence, in fact, we follow R. von Mises’ approach to probability \[12\]. It seems that it would be impossible to find the roots of quantum rule (\[\*2\]) in the conventional probability framework, A. N. Kolmogorov, 1933, \[13\]. In the conventional measure-theoretical framework probabilities are defined as sets of real numbers having some special mathematical properties. Classical rule (\[\*1\]) is merely a consequence of the definition of conditional probabilities. In the Kolmogorov framework to analyse the transition from (\[\*1\]) to (\[\*2\]) is to analyse the transition from one definition to another. In the frequency framework we can analyse behaviour of trials which induce one or another property of probability. 
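Rule (\[\*2\]) itself is straightforward to verify numerically for a two-dimensional quantum state, which makes the target of the frequency analysis concrete. In the sketch below (all numerical values are our arbitrary illustrative choices), $\phi$ is expanded in the eigenbasis $\{\phi_1,\phi_2\}$ of $\hat{C}$ with relative phase $\theta$, and the Born probabilities for $A$ are compared with the right-hand side of (\[\*2\]).

```python
import numpy as np

p1, p2, theta = 0.3, 0.7, 0.7         # p(C=c_1), p(C=c_2), relative phase
phi1 = np.array([1.0, 0.0])           # eigenbasis of C
phi2 = np.array([0.0, 1.0])
phi = np.sqrt(p1) * phi1 + np.exp(1j * theta) * np.sqrt(p2) * phi2

alpha = 0.4                           # eigenbasis of A: a rotated real basis
psi = [np.array([np.cos(alpha), np.sin(alpha)]),
       np.array([-np.sin(alpha), np.cos(alpha)])]

for i in range(2):
    q_born = abs(np.vdot(psi[i], phi)) ** 2       # q_i = |<psi_i|phi>|^2
    p1i = abs(np.vdot(psi[i], phi1)) ** 2         # p_{1i}
    p2i = abs(np.vdot(psi[i], phi2)) ** 2         # p_{2i}
    # the +/- sign in (*2) is the sign of the product of transition amplitudes
    sign = np.sign((np.vdot(psi[i], phi1) * np.vdot(psi[i], phi2)).real)
    q_rule = p1 * p1i + p2 * p2i + sign * 2 * np.sqrt(p1 * p1i * p2 * p2i) * np.cos(theta)
    print(np.isclose(q_born, q_rule))   # True
```

Setting $\theta=\pi/2$ makes the interference term vanish, recovering classical rule (\[\*1\]); the question pursued in this paper is how such a $\cos\theta$ correction can arise from ensemble fluctuations alone.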
Our analysis shows that quantum probabilistic rule (\[\*2\]) can be explained on the basis of ensemble fluctuations (one possible source of ensemble fluctuations is so called ensemble nonreproducibility, see De Baere \[7\]; see also \[10\] for the statistical variant of nonreproducibility). Such fluctuations can generate (under special conditions) the $\cos \theta$-factor in (\[\*2\]). Thus trigonometric fluctuations of quantum probabilities can be explained without using wave arguments. An unexpected consequence of our analysis is that quantum probability rule (\[\*2\]) is just one of the possible perturbations (by ensemble fluctuations) of classical probability rule (\[\*1\]). In principle, there might exist experiments which would produce perturbations of classical probabilistic rule (\[\*1\]) which differ from quantum probabilistic rule (\[\*2\]). Quantum formalism and ensemble fluctuations =========================================== [**1. Frequency probability theory.**]{} The frequency definition of probability is more or less standard in quantum theory; especially in the approach based on preparation and measurement procedures, \[14\], \[3\]. Let us consider a sequence of physical systems $ \pi= (\pi_1,\pi_2,..., \pi_N,... )\;. $ Suppose that elements of $\pi$ have some property, for example, position, and this property can be described by natural numbers: $ L=\{1,2,...,m \},$ the set of labels. Thus, for each $\pi_j\in \pi,$ we have a number $x_j\in L.$ So $\pi$ induces a sequence $$\label{la1} x=(x_1,x_2,..., x_N,...), \; \; x_j \in L.$$ For each fixed $\alpha \in L,$ we have the relative frequency $\nu_N(\alpha)= n_N(\alpha)/N$ of the appearance of $\alpha$ in $(x_1,x_2,..., x_N).$ Here $n_N(\alpha)$ is the number of elements in $(x_1,x_2,..., x_N)$ with $x_j=\alpha.$ R. 
von Mises \[12\] said that $x$ satisfies the principle of the [*statistical stabilization*]{} of relative frequencies, if, for each fixed $\alpha \in L,$ there exists the limit $$\label{l0} {\bf p} (\alpha)=\lim_{N\to \infty} \nu_N(\alpha) .$$ This limit is said to be the probability of $\alpha.$ We shall not consider the so called principle of [*randomness,*]{} see \[12\] for the details. This principle, despite its importance for the foundations of probability theory, is not related to our frequency analysis. We shall be interested only in the statistical stabilization of relative frequencies. [**2. Preparation and measurement procedures and quantum formalism.**]{} We consider a statistical ensemble S of quantum particles described by a quantum state $\phi.$ This ensemble is produced by some preparation procedure ${\cal E},$ see, for example, \[14\], \[3\] for details. There are two discrete physical observables $C=c_1, c_2$ and $A=a_1, a_2.$ The total number of particles in S is equal to N. Suppose that $n_{i}^{c}, i=1,2,$ particles in $S$ would give the result $C=c_i$ and $n_{i}^
--- abstract: | In a software project, esp. in open-source, a contribution is a valuable piece of work made to the project: writing code, reporting bugs, translating, improving documentation, creating graphics, etc. We are now at the beginning of an exciting era where software bots will make contributions that are of a similar nature to those by humans. Dry contributions, with no explanation, are often ignored or rejected, because the contribution is not understandable per se, because they are not put into a larger context, because they are not grounded on idioms shared by the core community of developers. We have been operating a program repair bot called Repairnator for 2 years and noticed the problem of “dry patches”: a patch that does not say which bug it fixes, or that does not explain the effects of the patch on the system. We envision program repair systems that produce an “explainable bug fix”: an integrated package of at least 1) a patch, 2) its explanation in natural or controlled language, and 3) a highlight of the behavioral difference with examples. In this paper, we generalize and suggest that software bot contributions must be explainable, that they must be put into the context of the global software development conversation. author: | Martin Monperrus\ KTH Royal Institute of Technology\ `martin.monperrus@csc.kth.se` bibliography: - 'biblio-erc.bib' - 'biblio-software-repair.bib' title: | Explainable Software Bot Contributions:\ Case Study of Automated Bug Fixes --- To appear in “2019 IEEE/ACM 1st International Workshop on Bots in Software Engineering (BotSE)” Introduction ============ The landscape of software bots is immense [@lebeuf:18], and will slowly be explored far and wide by software engineering research. In this paper, we focus on software bots that contribute to software projects, with the most noble sense of contribution: an act with an outcome that is considered concrete and valuable by the community. 
The open-source world is deeply rooted in this notion of “contribution”: developers are called “contributors”. Indeed, “contributor” is both a better and more general term than developer for the following reasons. First, it emphasizes the role within the project (bringing something) as opposed to the nature of the task (programming). Second, it covers the wide range of activities required for a successful software project, way beyond programming: reporting bugs, translating, improving documentation, creating graphics are all essential, and all fall under the word “contribution”. Recently, we have explored one specific kind of contribution: bug fixes [@urli:hal-01691496; @arXiv-1810.05806]. A bug fix is a small change to the code so that a specific case that was not well-handled becomes correctly handled. Technically, it is a patch, a modification of a handful of source code lines in the program. The research area of automated program repair [@Monperrus2015] devises systems that automatically synthesize such patches. In the Repairnator project [@urli:hal-01691496; @arXiv-1810.05806], we went to the point of suggesting synthesized patches to real developers. Those suggestions were standard code changes on the collaborative development platform Github. In the rest of this paper, Repairnator is the name given to the program repair bot making those automated bug fixes. A bug fixing suggestion on Github basically contains three parts: the source code patch itself, a title, and a textual message explaining the patch. The bug fixing proposal is called a “pull-request”. From a purely technical perspective, only the code matters. However, there are plenty of human activities happening around pull requests: project developers triage them, integrators do code review, impacted users comment on them. For all those activities, the title and message of the pull requests are of utmost importance. Their clarity directly impacts the speed of merging in the main code base. 
In the first phase of the Repairnator project [@urli:hal-01691496; @arXiv-1810.05806], we exclusively focused on the code part of the pull-request: Repairnator only created a source code patch; for the pull-request title and explanation, we simply used a generic title like “Fix failing build” and a short human-written message. Now, we realize that bot-generated patches must be put into context, so as to smoothly integrate into the software development conversation. A program repair bot must not only synthesize a patch but also synthesize the explanation coming with it: Repairnator must create explainable patches. This is related to the research on explainable artificial intelligence, or “explainable AI” for short [@gunning2017explainable]. Explainable AI refers to decision systems in which every decision made by an algorithm comes with a rationale, an explanation of the reasons behind the decision. Explainable AI is a reaction to purely black-box decisions made, for instance, by a neural network. In this paper, we claim that contributions made by software bots must be explainable and contextualized. This is required for software bots to be successful, but more importantly, this is required to achieve a long-term smooth collaboration between humans and bots on software development. To sum up, we argue in this paper that: - Software bot contributions must be explainable. - Software bot contributions must be put in the context of a global development conversation. - Explainable contributions involve the generation of natural language explanations and conversational features. - Program repair bots should produce explainable patches. ![image](fig-overview-erc.pdf){width=".905\textwidth"} Section \[sec:converstion\] presents the software development conversation, Section \[sec:bots-as-communicating-agents\] discusses why and how software bots must communicate, and Section \[sec:explainable-patch-suggestion\] instantiates the concept in the realm of program repair bots.
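The three-part package of an explainable bug fix (patch, explanation, behavioral difference) can be made concrete with a small sketch. The function name, field layout, and example strings below are hypothetical illustrations of the concept, not Repairnator's actual implementation:

```python
def build_explainable_fix(patch: str, explanation: str, behavioral_diff: str) -> dict:
    """Assemble an 'explainable bug fix' as a pull-request payload:
    the patch itself, a natural-language explanation, and a highlight
    of the behavioral difference with examples."""
    body = (
        "## Why this patch\n" + explanation + "\n\n"
        "## Behavioral difference\n" + behavioral_diff + "\n"
    )
    return {"title": "Fix failing build", "body": body, "patch": patch}

# Hypothetical usage: the explanation and the behavioral diff travel
# together with the code, instead of a "dry patch".
pr = build_explainable_fix(
    patch="--- a/parser.py\n+++ b/parser.py\n@@ (diff body)",
    explanation="Guards against a None input that crashed the parser.",
    behavioral_diff="Before: parse(None) raised TypeError. After: it returns [].",
)
```

Such a payload could then be posted through a collaborative platform's pull-request API; the point is that the contribution is self-explanatory when it reaches the human reviewers.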
The Software Development Conversation {#sec:converstion} ===================================== Software developers work together on so-called “code repositories”, and software development is a highly collaborative activity. In small projects, 5-10 software engineers interact on the same code base. In large projects, 1000+ engineers work in a coordinated manner to write new features, to fix software bugs, to ensure security and performance, etc. In an active source code repository, changes are committed to the code base every hour, every minute, if not every second for some very hot and large software packages. *All these interactions between developers form the “software development conversation”.* Nature of the Conversation -------------------------- The software development conversation involves exchanging source code of course, but not only. When a developer proposes a change to the code, she has to explain to the other developers the intent and content of the change. Indeed, in mature projects with disciplined code review, *all code modifications come with a proper explanation of the change in natural language*. This concept of developers working and interacting together on the code repository is shown at the left-hand side of Figure \[fig-overview-vision\]. What is also depicted on Figure \[fig-overview-vision\] is the variety of focus in the software development conversation. The developers may discuss new features, bug fixes, etc. Depending on expertise and job title, developers may take part in only one specific conversation. On Figure \[fig-overview-vision\], developer Madeleine is the most senior engineer, taking part in all discussions in the project. Junior developer Sylvester tends to discuss only bug reports and the corresponding fixing pull requests. Scale of the Conversation ------------------------- In a typical software repository of a standard project in industry, 50+ developers work together.
In big open-source projects, as well as in giant repositories from big tech companies, the number of developers involved in the same repository goes into the thousands and more. For instance, the main repository of the Mozilla Firefox browser, [gecko-dev](https://github.com/mozilla/gecko-dev), has contributions from 3800+ people. Table \[tab-extraordinary-repos\] shows the scale of this massive collaboration for some of the biggest open-source repositories ever. Notably, the software development conversation is able to transcend traditional organizational boundaries: it works even when developers work for different companies, or even when they are only loosely coupled individuals, as in the case of open source. Channels -------- The software development conversation happens in different channels. *Oral channels* Historically, the software development conversation happens in meetings, office chats, coffee breaks, phone calls. This remains largely true in traditional organizations. *Online channels* We have witnessed in the past decades the rise of decentralized development, with engineers scattered across offices, organizations and countries. In those contexts, a significant part of the software development conversation now takes place in online written channels: mailing-lists, collaborative development platforms (Github, Gitlab, etc.), synchronous chats (IRC, Slack), online forums and Q&A sites (Stackoverflow), etc. Source code contributions only represent a small part of the software development conversation. Most of the exchanges between developers are interactive, involving natural language. In the case of collaborative platforms such as Github, the bulk of the software development conversation happens as comments on issues and pull-requests. Software bots will become new voices in this conversation.
--- abstract: | We discuss the information entropy for a general open pointer-based simultaneous measurement and show how it is bounded from below. This entropic uncertainty bound is a direct consequence of the structure of the entropy and can be obtained from the formal solution of the measurement dynamics. Furthermore, the structural properties of the entropy allow us to give an intuitive interpretation of the noisy influence of the pointers and the environmental heat bath on the measurement results. [*Keywords*]{}: simultaneous pointer-based measurement, noisy measurement, conjugate observables, entropy, uncertainty relation, quantum mechanics address: 'Institut f[ü]{}r Quantenphysik and Center for Integrated Quantum Science and Technology (IQ^ST^), Universit[ä]{}t Ulm, D-89069 Ulm, Germany' author: - Raoul Heese and Matthias Freyberger bibliography: - 'concept.bib' title: 'Entropic uncertainty bound for open pointer-based simultaneous measurements of conjugate observables' --- Introduction {#sec:Introduction} ============ The concept of pointer-based simultaneous measurements of conjugate observables is an indirect measurement model, which makes it possible to describe dynamically the properties of a simultaneous quantum mechanical measurement process. In addition to the system to be measured (hereafter just called the *system*), the model introduces two auxiliary systems called *pointers*, which are coupled to the system and act as commuting meters from which the initial system observables can be read out after a certain interaction time. In this sense, the pointers represent the measurement devices used to simultaneously determine the system observables. Pointer-based simultaneous measurements date back to Arthurs and Kelly [@arthurs1965] and are based on von Neumann’s idea of indirect observation [@vonneumann1932].
In principle, any pair of conjugate observables like position and momentum or quadratures of the electromagnetic field [@schleich2001], whose commutator is well-defined and proportional to the identity operator, can straightforwardly be measured within the scope of pointer-based simultaneous measurements. We limit ourselves to the measurement of position and momentum in the following. An *open* pointer-based simultaneous measurement [@heese2014] also takes environmental effects into consideration by utilizing an environmental heat bath in the sense of the Caldeira-Leggett model [@caldeira1981; @caldeira1983a; @caldeira1983ae; @caldeira1983b], which leads to a quantum Brownian motion [@chou2008; @fleming2011a; @fleming2011b; @martinez2013] of the system and the pointers, whereas a *closed* pointer-based simultaneous measurement [@arthurs1965; @wodkiewicz1984; @stenholm1992; @buzek1995; @appleby1998a; @appleby1998b; @appleby1998c; @busch2007; @busshardt2010; @busshardt2011; @heese2013] does not involve any environmental effects. A schematic open pointer-based simultaneous measurement procedure is shown in Fig. \[fig:model\]. In this contribution we calculate the information entropy of an open pointer-based simultaneous measurement and discuss its properties as a measurement uncertainty. In particular, we make use of recent results [@heese2013; @heese2014], which we extend and generalize. In Sec. \[sec:generalopenpointer-basedsimultaneousmeasurements\], we present the formal dynamics of open pointer-based simultaneous measurements and then use these results to discuss the entropic uncertainty in Sec. \[sec:entropy\]. In the end, we arrive at a generic lower bound of this entropic uncertainty. Note that we solely use rescaled dimensionless variables [@heese2014] so that $\hbar = 1$. ![Principles of an open pointer-based simultaneous measurement of two conjugate observables, e.g., the simultaneous measurement of position and momentum. 
The measurement apparatus consists of two quantum mechanical systems, called pointers, which are bilinearly coupled to the quantum mechanical system to be measured. Additionally, an environmental heat bath in the sense of the Caldeira-Leggett model can disturb both the system and the pointers. After the interaction process, one observable of each pointer is directly measured (e.g., the position of each pointer) while the system itself is not subject to any direct measurement. However, from these measurement results, information about the initial system observables can then be inferred. In other words, the final pointer observables act as commuting meters from which the initial non-commuting system observables can be simultaneously read out. The price to be paid for this simultaneity comes in the form of fundamental noise terms, which affect the inferred values. The corresponding uncertainties can be described by information entropies, which are bounded from below.[]{data-label="fig:model"}](fig1.pdf) Open pointer-based simultaneous measurements {#sec:generalopenpointer-basedsimultaneousmeasurements} ============================================ As indicated in the introduction, our model of open pointer-based simultaneous measurements consists of a system particle to be measured with mass $M_{\mathrm{S}}$, position observable $\hat{X}_{\mathrm{S}}$ and momentum observable $\hat{P}_{\mathrm{S}}$, which is coupled bilinearly to two pointer particles with masses $M_1$ and $M_2$, position observables $\hat{X}_1$ and $\hat{X}_2$, and momentum observables $\hat{P}_1$ and $\hat{P}_2$, respectively. Both the system and the pointers are bilinearly coupled to an environmental heat bath, which consists of a collection of $N$ harmonic oscillators with masses $m_1,\dots,m_N$, position observables $\hat{q}_1,\dots,\hat{q}_N$, and momentum observables $\hat{k}_1,\dots,\hat{k}_N$.
In this section, we first present the general Hamiltonian for this model and then briefly discuss the resulting dynamics. Hamiltonian {#sec:generalopenpointer-basedsimultaneousmeasurements:hamiltonian} ----------- The general Hamiltonian for our model reads $$\begin{aligned} \label{eq:H} \hat{\mathscr{H}}(t) \equiv \hat{H}_{\mathrm{free}} + \hat{H}_{\mathrm{int}}(t) + \hat{H}_{\mathrm{bath}}(t)\end{aligned}$$ and therefore consists of three parts. First, the free evolution Hamiltonian $$\begin{aligned} \label{eq:H:free} \hat{H}_{\mathrm{free}} \equiv \frac{\hat{P}_{\mathrm{S}}^2}{2 M_{\mathrm{S}}} + \frac{\hat{P}_1^2}{2 M_1} + \frac{\hat{P}_2^2}{2 M_2},\end{aligned}$$ which simply describes the dynamics of the undisturbed system and pointers. Second, the interaction Hamiltonian $$\begin{aligned} \label{eq:H:int} \hat{H}_{\mathrm{int}}(t) \equiv C_{\mathrm{S}}(t) \hat{X}_{\mathrm{S}}^2 + C_1(t) \hat{X}_1^2 + C_2(t) \hat{X}_2^2 + ( \hat{X}_{\mathrm{S}}, \hat{P}_{\mathrm{S}} ) \mathbf{C}(t) ( \hat{X}_1, \hat{X}_2, \hat{P}_1, \hat{P}_2 )^T,\end{aligned}$$ which describes possible quadratic potentials with the coupling strengths $C_{\mathrm{S}}(t)$, $C_1(t)$, and $C_2(t)$, respectively, as well as bilinear interactions between the system observables and the pointer observables via the $2\times4$ coupling matrix $\mathbf{C}(t)$. These interactions are necessary for an information transfer between system and pointers and are therefore a prerequisite of pointer-based simultaneous measurements. The existence of quadratic potentials is, on the other hand, not essential, but may be reasonable from a physical point of view when considering confined particles. One possible choice is the interaction Hamiltonian of the classic Arthurs and Kelly model [@arthurs1965], which can be written as $\hat{H}_{\mathrm{int}} = \kappa ( \hat{X}_{\mathrm{S}} \hat{P}_1 + \hat{P}_{\mathrm{S}} \hat{P}_2 )$ with an arbitrary coupling strength $\kappa \neq 0$. Lastly, Eq.
[(\[eq:H\])]{} contains the bath Hamiltonian [@caldeira1981; @caldeira1983a; @caldeira1983ae; @caldeira1983b] $$\begin{aligned} \label{eq:H:bath} \hat{H}_{\mathrm{bath}}(t) \equiv \frac{1}{2} \hat{\mathbf{k}}^{T} \mathbf{m}^{-1} \hat{\mathbf{k}}+ \frac{1}{2} \hat{\mathbf{q}}^{T} \mathbf{c} \hat{\mathbf{q}} + \hat{\mathbf{q}}^{T} \mathbf{g}(t) ( \hat{X}_{\mathrm{S}}, \hat{X}_1, \hat{X}_2 )^T,\end{aligned}$$ which describes the independent dynamics of the bath particles with the $N \times N$ diagonal mass matrix $\mathbf{m}$ containing $m_1,\dots,m_N$, and the $N \times N$ symmetric and positive definite bath-internal coupling matrix $\mathbf{c}$; as well as the coupling of system and pointer positions
--- abstract: 'We describe various studies relevant for top physics at future circular collider projects currently under discussion. We show how highly-massive top-antitop systems produced in proton-proton collisions at a center-of-mass energy of 100 TeV could be observed and employed for constraining top dipole moments, investigate the reach of future proton-proton and electron-positron machines to top flavor-changing neutral interactions, and discuss top parton densities.' address: - 'CERN, PH-TH, CH-1211 Geneva 23, Switzerland' - 'Institut Pluridisciplinaire Hubert Curien/Département Recherches Subatomiques, Université de Strasbourg/CNRS-IN2P3, 23 rue du Loess, F-67037 Strasbourg, France' author: - Benjamin Fuks bibliography: - 'fuks.bib' title: | Opportunities with top quarks at\ future circular colliders --- A future circular collider facility at CERN =========================================== The Large Hadron Collider (LHC) at CERN has delivered very high quality results during its first run in 2009-2013, including in particular the discovery of a Higgs boson with a mass of about 125 GeV in 2012. Unfortunately, no hint of the presence of particles beyond the Standard Model has been observed. Deviations from the Standard Model are however still allowed and expected to show up either through precision measurements of indirect probes, or directly at collider experiments. In this context, high precision would require pushing the intensity frontier further and further, whereas bringing the energy frontier beyond the LHC regime would provide handles on new kinematical thresholds. Along these lines, a design study for a new accelerator facility intended to operate at CERN in the post-LHC era has been undertaken. This study focuses on a machine that could collide protons at a center-of-mass energy of $\sqrt{s}=100$ TeV, that could be built in a tunnel of about 80-100 km in the area of Geneva, and that would benefit from the existing infrastructure at CERN [@fcchh].
A possible intermediate step in this project could include an electron-positron machine with a collision center-of-mass energy ranging from 90 GeV (the $Z$-pole) to 350 GeV (the top-antitop threshold), with additional working points at $\sqrt{s}=160$ GeV (the $W$-boson pair production threshold) and 240 GeV (a Higgs factory) [@fccee]. In parallel, highly-energetic lepton-hadron and heavy ion collisions are also under investigation. Both of the above-mentioned future circular collider (FCC) setups are expected to deliver a copious amount of top quarks. More precisely, one trillion of them are expected to be produced in 10 ab$^{-1}$ of proton-proton collisions at $\sqrt{s}=100$ TeV and five million of them in the same amount of electron-positron collisions at $\sqrt{s}=350$ GeV (which will in particular allow for top mass and width measurements at an accuracy of about 10 MeV [@Gomez-Ceballos:2013zzn]). This consequently opens the door to an exploration of the properties of the top quark, widely considered as a sensitive probe of new physics given its mass close to the electroweak scale, with an unprecedented accuracy. This is illustrated below with three selected examples. Top pair production in 100 TeV proton-proton collisions ======================================================= The top quark pair-production cross section for proton-proton collisions at $\sqrt{s}=100$ TeV reaches 29.4 nb at the next-to-leading order accuracy in QCD, as calculated with [[MadGraph5]{}a[MC@NLO]{}]{} [@Alwall:2014hca] and the NNPDF 2.3 set of parton densities [@Ball:2012cx]. A very large number of $t\bar t$ events are thus expected to be produced for integrated luminosities of several ab$^{-1}$, with a significant number of them featuring a top-antitop system whose invariant mass lies in the multi-TeV range.
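As a quick sanity check on these yields, the quoted cross section converts into an event count with standard unit factors. Only the 29.4 nb cross section and the 10 ab$^{-1}$ luminosity come from the text; the rest is unit bookkeeping:

```python
# Expected top-antitop pairs at sqrt(s) = 100 TeV: N = sigma * L.
sigma_nb = 29.4            # NLO pair-production cross section, in nanobarns
lumi_ab_inv = 10.0         # integrated luminosity, in inverse attobarns

nb_to_barn = 1e-9          # 1 nb = 1e-9 barn
ab_inv_to_barn_inv = 1e18  # 1 ab^-1 = 1e18 barn^-1

n_pairs = sigma_nb * nb_to_barn * lumi_ab_inv * ab_inv_to_barn_inv
print(f"{n_pairs:.2e}")    # about 2.9e11 pairs
```

This gives about $3\times10^{11}$ pairs, i.e. some $6\times10^{11}$ top quarks from pair production alone, consistent in order of magnitude with the trillion tops quoted above once other production modes are included.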
While kinematical regimes never probed up to now will become accessible, standard $t\bar t$ reconstruction techniques may not be sufficient to observe such highly boosted top quarks, whose transverse momentum ($p_T$) easily exceeds a few TeV. In addition, it is not clear how current boosted top tagging techniques, developed in the context of the LHC, could be applied. Consequently, it could be complicated to distinguish a signal made of a pair of highly boosted top quarks from the overwhelming multijet background. ![\[fig:ttbar\]*Left*: distributions of the $z$ variable of Eq. (\[eq:z\]) for proton-proton collisions at $\sqrt{s}=100$ TeV. We present predictions for top-antitop (red dashed) and multijet (blue solid) production, after selecting events as described in the text. We have fixed $M_{jj}^{\rm cut}$ to 6 TeV and normalized the results to 100 fb$^{-1}$. *Right*: constraints on the top dipole moments derived from measurements at the Tevatron and the LHC (gray), and from predictions at the LHC (red, $\sqrt{s}=14$ TeV) and the FCC (black, $M_{jj}^{\rm cut} = 10$ TeV).](zz "fig:"){width=".57\columnwidth"} ![\[fig:ttbar\]*Left*: distributions of the $z$ variable of Eq. (\[eq:z\]) for proton-proton collisions at $\sqrt{s}=100$ TeV. We present predictions for top-antitop (red dashed) and multijet (blue solid) production, after selecting events as described in the text. We have fixed $M_{jj}^{\rm cut}$ to 6 TeV and normalized the results to 100 fb$^{-1}$.
*Right*: constraints on the top dipole moments derived from measurements at the Tevatron and the LHC (gray), and from predictions at the LHC (red, $\sqrt{s}=14$ TeV) and the FCC (black, $M_{jj}^{\rm cut} = 10$ TeV).](dip "fig:"){width=".41\columnwidth"} To demonstrate that this task is already manageable with basic considerations [@inprep], we have analyzed, by means of the [MadAnalysis]{} 5 package [@Conte:2012fm], leading-order hard-scattering events simulated with [[MadGraph5]{}a[MC@NLO]{}]{} and matched to the parton showering and hadronization algorithms included in [Pythia]{} 8 [@Sjostrand:2007gs]. We have considered, in our analysis, jets with a $p_T > 1$ TeV that have been reconstructed with [FastJet]{} [@Cacciari:2011ma] and an anti-$k_T$ jet algorithm with a radius parameter $R=0.2$ [@Cacciari:2008gp]. We preselect events featuring at least two jets with a pseudorapidity $|\eta|<2$ and at least one muon lying within a cone of $R=0.2$ around any of the selected jets. The invariant mass of the system comprised of the two leading jets is additionally constrained to be larger than a threshold $M_{jj}^{\rm cut}$. We then investigate the properties of the selected muons relative to those of the associated jet. In this context, we present on Figure \[fig:ttbar\] (left) the distribution of a $z$ variable defined as the ratio of the muon transverse momentum $p_T(\mu_i)$ to the corresponding jet transverse momentum $p_T(j_i)$, maximized over the $n$ final-state muons of the event, $$z \equiv \max_{i=1,\dots,n} \frac{p_T(\mu_i)}{p_T(j_i)}\ . \label{eq:z}$$ Muons arising from multijet events are mostly found to carry a small fraction of the jet transverse momentum, which is explained by their production mechanism ($B$- and $D$-meson decays). This contrasts with muons induced by prompt decays of top quarks, which can carry a significant fraction of the top $p_T$.
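Per event, the $z$ variable is a one-liner to compute. The sketch below assumes each selected muon has already been paired with the jet whose cone contains it; the event representation and the numerical values are our own illustration, not taken from the analysis code:

```python
def z_variable(muon_jet_pts):
    """z = max over the event's final-state muons of pT(mu_i) / pT(j_i),
    given a list of (muon_pT, jet_pT) pairs in GeV."""
    return max(pt_mu / pt_jet for pt_mu, pt_jet in muon_jet_pts)

# A boosted-top-like event: the muon from t -> b mu nu carries a large
# fraction of the jet pT ...
signal_like = z_variable([(650.0, 1300.0)])     # z = 0.5
# ... unlike a soft muon from a B- or D-meson decay inside a QCD jet.
background_like = z_variable([(40.0, 1200.0)])  # z ~ 0.03
```

A cut $z \geq z^{\rm cut}$ then retains the signal-like configuration while rejecting the background-like one, which is the separation exploited in the selection.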
Requiring the $z$ variable to be larger than an optimized threshold $z^{\rm cut}$, it becomes possible to obtain signal over background ratios $S/B$ of order one and to extract the $t\bar t$ signal at the $5\sigma$ level (with the significance defined by $S/\sqrt{S+B}$). We study, in Table \[tab:ttbar\], the $z^{\rm cut}$ value for different invariant-mass thresholds $M_{jj}^{\rm cut}$, and present the associated $S/B$ ratio together with the luminosity necessary for a signal extraction at $5\sigma$. [llll]{} $M_{jj}^{\rm cut}$ & $z^{\rm cut}$ & $S/B$ & ${\cal L}_{5\sigma}$\ 6 TeV & 0.5 & 0.39 & 36.1 fb$^{-1}$\ 10 TeV & 0.5 & 0.74 & 202 fb$^{-1}$\ 15 TeV & 0.4 & 0.25 & 2.35 ab$^{-1}$\ On Figure \[fig:ttbar\] (right), we illustrate how a measurement of the fiducial cross section related to the above selection with $M_{jj}^{\rm cut} = 10$
--- abstract: 'We model the newly synthesized magic-angle twisted bilayer-graphene superconductor with two $p_{x,y}$-like Wannier orbitals on the superstructure honeycomb lattice, where the hopping integrals are constructed via the Slater-Koster formalism by symmetry analysis. The characteristics exhibited by this simple model are well consistent with both rigorous calculations and experimental observations. A van Hove singularity and Fermi-surface (FS) nesting are found at the doping levels relevant to the correlated insulator and unconventional superconductivity revealed experimentally, based on which we identify the two phases as weak-coupling FS instabilities. Then, with repulsive Hubbard interactions turned on, we perform random-phase-approximation (RPA) based calculations to identify the electron instabilities. As a result, we find chiral $d+id$ topological superconductivity bordering the correlated insulating state near half-filling, identified as a noncoplanar chiral spin-density wave (SDW) ordered state featuring the quantum anomalous Hall effect. The phase diagram obtained in our approach is qualitatively consistent with experiments.' author: - 'Cheng-Cheng Liu' - 'Li-Da Zhang' - 'Wei-Qiang Chen' - Fan Yang title: 'Chiral Spin Density Wave and $d+id$ Superconductivity in the Magic-Angle-Twisted Bilayer Graphene' --- ** The newly revealed “high-temperature superconductivity (SC)"[@SC] in the “magic-angle" twisted bilayer-graphene (MA-TBG) has attracted great research interest[@Volovik2018; @Roy2018; @Po2018; @Xu2018; @Yuan2018; @Baskaran2018; @Phillips2018; @Kivelson2018]. In such a system, the low energy electronic structure can be dramatically changed by the twist. It was shown that some low energy flat bands, which are well separated from the other high energy bands, appear when the twist angle is around $1.1^{\circ}$. A correlated insulating state is observed when the flat bands are near half-filled [@Mott].
Doping this correlated insulator leads to SC with a critical temperature $T_c$ of up to 1.7 K. This system looks similar to the cuprates in terms of the phase diagram and the high ratio of $T_c$ over the Fermi temperature $T_F$. In fact, it was argued that the insulating state was a Mott insulator, while the MA-TBG was an analog of the cuprate superconductors. Since the structure of the MA-TBG is in situ tunable, it was proposed that this system can serve as a good platform to study the pairing mechanism of high-$T_c$ SC, the biggest challenge of condensed-matter physics. However, the viewpoint that the SC in MA-TBG is induced by doping a Mott insulator suffers from the following three inconsistencies with experimental results. Firstly, the so-called “Mott-gap" extrapolated from the temperature-dependent conductance is just about 0.31 meV[@Mott], which is much lower than the bandwidth of the low energy emergent flat bands ($\sim$10 meV). Such a tiny “Mott-gap" can hardly be consistent with “Mott-physics". Secondly, the behavior upon doping into this insulating phase is different from that expected for a doped Mott insulator, as analyzed in the following for positive filling as an example. In the case of electron doping with respect to half-filling, the system has a small Fermi pocket with an area proportional to the doping, which is consistent with a doped Mott insulator[@SC]. However, in the hole doping case, slight doping leads to a large Fermi surface (FS) with an area proportional to the electron concentration of the whole bands instead of the hole concentration with respect to half-filling[@SC; @Mott]. Such behavior obviously conflicts with “Mott-physics". Thirdly, some samples which exhibit the so-called “Mott-insulating" behavior at high temperature become SC upon lowering the temperature[@Notice]. Such a behavior is more likely to be caused by the competition between SC and some other kind of order, such as density waves, than by “Mott physics".
In this Letter, we study the problem from the weak-coupling approach, wherein electrons on the FS acquire effective attractions through exchanging spin fluctuations, which leads to Cooper pairing. After analyzing the characteristics of the low energy emergent band structure, an effective $p_{x,y}$-orbital tight-binding model[@Yuan2018] on the emergent honeycomb lattice is adopted, but with the hopping integrals newly constructed via the Slater-Koster formalism[@Slater1954], which is re-derived based on the symmetry of the system (Supplementary Material I[@SupplMater]). The characteristics of the constructed band structure are qualitatively consistent with both the rigorous multi-band tight-binding results[@Nguyen2017; @Moon2012] and experiments[@SC; @Mott]. Moreover, the band degeneracy at high-symmetry points or lines is compatible with the corresponding irreducible representations[@Yuan2018]. Then, with the Hubbard-Hund interaction turned on, we perform RPA-based calculations to study the electron instabilities. Our results identify the correlated insulator near half-filling as a FS-nesting-induced noncoplanar chiral SDW insulator, featuring the quantum anomalous Hall effect (QAHE). Bordering this SDW insulator is a chiral $d+id$ topological superconducting state. The obtained phase diagram is qualitatively consistent with experiments. ** For the MA-TBG, the small twist angle between the two graphene layers causes a Moire pattern which results in a much enlarged unit cell; consequently, thousands of energy bands have to be taken into account[@Nguyen2017; @Moon2012], and the low-energy physics is dramatically changed[@Nguyen2017; @Moon2012; @Fang2015; @Santos2007; @Santos2012; @Shallcross2008; @Bistritzer2011; @Bistritzer2010; @Uchida2014; @Mele2011; @Mele2010; @Sboychakov2015; @Morell2010; @Trambly2010; @Latil2007; @Trambly2012; @Gonza2013; @Luis2017; @Cao2016; @Ohta2012; @Kim2017; @Huder2018; @Li2017].
Remarkably, four low energy nearly-flat bands with a total bandwidth of about 10 meV emerge, which are well isolated from the high energy bands. Since both the correlated insulating and the superconducting phases emerge when these low energy bands are partially filled, it is important to provide an effective model with the relevant degrees of freedom that captures the low energy band structure. By analyzing the degeneracy and representation of the flat bands at all three of the high symmetry points $\Gamma$, $K$ and $M$, a honeycomb lattice rather than a triangular one should be adopted to model the low-energy physics of MA-TBG[@Po2018; @Yuan2018]. The emergent honeycomb lattice consists of two sublattices originating from different layers. Further symmetry analysis shows that the relevant Wannier orbitals on each site have $p_x$ and $p_y$ symmetry[@Yuan2018]. Therefore, we can construct the hopping integrals between the $p_{x,y}$-like orbitals on the honeycomb lattice via the Slater-Koster formalism[@Slater1954] based on symmetry analysis[@SupplMater], which reflects coexisting $\sigma$ and $\pi$ bondings[@Wu2008a; @Wu2008b; @Zhang2014; @Liu2014; @Yang2015]. Our tight-binding (TB) model, thus obtained up to the next nearest neighbor (NNN) hopping, reads $$\label{tb} H_{tb}=\sum_{i\mu,j\nu,\sigma}t_{i\mu,j\nu}c_{i\mu\sigma}^{\dagger}c_{j\nu\sigma}-\mu_c\sum_{i\mu\sigma}c_{i\mu\sigma}^{\dagger}c_{i\mu\sigma}.$$ Here $\mu,\nu=x,y$ represent the $p_{x},p_{y}$ orbitals shown in Fig. \[band\](a), $i,j$ stand for the sites, and $\mu_c$ is the chemical potential determined by the filling $\delta\equiv n/n_s-1$ relative to charge neutrality. Here $n$ is the average electron number per unit cell, and $n_s=4$ is the value of $n$ at charge neutrality.
The hopping integral $t_{i\mu,j\nu}$ can be obtained as $$\label{slater_koster} t_{i\mu,j\nu}=t_{\sigma}^{ij}\cos\theta_{\mu,ij}\cos\theta_{\nu,ij}+t_{\pi}^{ij}\sin\theta_{\mu,ij}\sin\theta_{\nu,ij},$$ where $\theta_{\mu,ij}$ denotes the angle from the direction of $\mu$ to that of $\mathbf{r}_{j}-\mathbf{r}_{i}$. The Slater-Koster parameters $t_{\sigma/\pi}^{ij}$ represent the hopping integrals contributed by $\sigma/\pi$- bondings. More details about the band structure are introduced in Supplementary Materials II[@SupplMater]. ![(a) Schematic diagram for our model. The dashed rhombus labels the unit cell of the emergent honeycomb lattice with the $p_{x},p_{y}$-like Wannier orbitals on each site. (b) Band structure and (c) DOS of MA-TBG. The red, black and
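The Slater-Koster hopping rule above is easy to check numerically. The sketch below uses illustrative values for $t_\sigma$ and $t_\pi$ rather than fitted parameters from the paper, and verifies the familiar limiting cases for a bond along the $x$ axis:

```python
import math

def hopping(t_sigma, t_pi, bond_angle, orb_mu_angle, orb_nu_angle):
    """Slater-Koster hopping between two p-like orbitals:
    t = t_sigma cos(th_mu) cos(th_nu) + t_pi sin(th_mu) sin(th_nu),
    where th_mu is the angle from orbital direction mu to the bond
    direction r_j - r_i. All angles in radians, measured from x."""
    th_mu = bond_angle - orb_mu_angle
    th_nu = bond_angle - orb_nu_angle
    return (t_sigma * math.cos(th_mu) * math.cos(th_nu)
            + t_pi * math.sin(th_mu) * math.sin(th_nu))

# Bond along x: p_x-p_x is a pure sigma bond, p_y-p_y a pure pi bond,
# and p_x-p_y vanishes by symmetry.
t_xx = hopping(1.0, 0.3, 0.0, 0.0, 0.0)                  # -> t_sigma
t_yy = hopping(1.0, 0.3, 0.0, math.pi/2, math.pi/2)      # -> t_pi
t_xy = hopping(1.0, 0.3, 0.0, 0.0, math.pi/2)            # -> 0
```

For a general bond of the honeycomb superlattice, the same function supplies all four $t_{i\mu,j\nu}$ entries once the bond angle is known.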
--- abstract: 'A game-theoretic approach for studying power control in multiple-access networks with transmission delay constraints is proposed. A non-cooperative power control game is considered in which each user seeks to choose a transmit power that maximizes its own utility while satisfying the user’s delay requirements. The utility function measures the number of reliable bits transmitted per joule of energy and the user’s delay constraint is modeled as an upper bound on the delay outage probability. The Nash equilibrium for the proposed game is derived, and its existence and uniqueness are proved. Using a large-system analysis, explicit expressions for the utilities achieved at equilibrium are obtained for the matched filter, decorrelating and minimum mean square error multiuser detectors. The effects of delay constraints on the users’ utilities (in bits/Joule) and network capacity (i.e., the maximum number of users that can be supported) are quantified.' author: - '\' title: '[A Non-Cooperative Power Control Game in Delay-Constrained Multiple-Access Networks]{}' --- Introduction ============ In wireless networks, power control is used for resource allocation and interference management. In multiple-access CDMA systems such as the uplink of cdma2000, the purpose of power control is for each user terminal to transmit enough power so that it can achieve the desired quality of service (QoS) without causing unnecessary interference for other users in the network. Depending on the particular application, QoS can be expressed in terms of throughput, delay, battery life, etc. Since in many practical situations, the users’ terminals are battery-powered, an efficient power management scheme is required to prolong the battery life of the terminals. Hence, power control plays an even more important role in such scenarios. 
Consider a multiple-access DS-CDMA network where each user wishes to locally and selfishly choose its transmit power so as to maximize its utility and at the same time satisfy its delay requirements. The strategy chosen by each user affects the performance of other users through multiple-access interference. There are several questions to ask concerning this interaction. First of all, what is a reasonable choice of a utility function that measures energy efficiency and takes into account delay constraints? Secondly, given such a utility function, what strategy should a user choose in order to maximize its utility? If every user in the network selfishly and locally picks its utility-maximizing strategy, will there be a stable state at which no user can unilaterally improve its utility (Nash equilibrium)? If such an equilibrium exists, will it be unique? What will be the effect of delay constraints on the energy efficiency of the network? Game theory is the natural framework for modeling and studying such a power control problem. Recently, there has been a great deal of interest in applying game theory to resource allocation in wireless networks. Examples of game-theoretic approaches to power control are found in [@GoodmanMandayam00; @JiHuang98; @Saraydar02; @Xiao01; @Zhou01; @Alpcan; @Sung; @Meshkati_TCOMM]. In [@GoodmanMandayam00; @JiHuang98; @Saraydar02; @Xiao01; @Zhou01], power control is modeled as a non-cooperative game in which users choose their transmit powers in order to maximize their utilities. In [@Meshkati_TCOMM], the authors extend this approach to consider a game in which users can choose their uplink receivers as well as their transmit powers. All the power control games proposed so far assume that the traffic is not delay sensitive. Their focus is entirely on the trade-offs between throughput and energy consumption without taking into account any delay constraints.
In this work, we propose a non-cooperative power control game that does take into account a transmission delay constraint for each user. Our focus here is on energy efficiency. Our approach allows us to study networks with both delay tolerant and delay sensitive traffic/users and quantify the loss in energy efficiency due to the presence of users with stringent delay constraints. The organization of the paper is as follows. In Section \[system model\], we present the system model and define the users’ utility function as well as the model used for incorporating delay constraints. The proposed power control game is described in Section \[proposed game\], and the existence and uniqueness of Nash equilibrium for the proposed game are discussed in Section \[Nash equilibrium\]. In Section \[multiclass\], we extend the analysis to multi-class networks and derive explicit expressions for the utilities achieved at Nash equilibrium. Numerical results and conclusions are given in Sections \[Numerical results\] and \[conclusions\], respectively. System Model {#system model} ============ We consider a synchronous DS-CDMA network with $K$ users and processing gain $N$ (defined as the ratio of symbol duration to chip duration). We assume that all $K$ user terminals transmit to a receiver at a common concentration point, such as a cellular base station or any other network access point. The signal received by the uplink receiver (after chip-matched filtering) sampled at the chip rate over one symbol duration can be expressed as $$\label{eq1} {\mathbf{r}} = \sum_{k=1}^{K} \sqrt{p_k} h_k \ b_k {\mathbf{s}}_k + {\mathbf{w}} ,$$ where $p_k$, $h_k$, $b_k$ and ${\mathbf{s}}_k$ are the transmit power, channel gain, transmitted bit and spreading sequence of the $k^{th}$ user, respectively, and $\mathbf{w}$ is the noise vector which is assumed to be Gaussian with mean $\mathbf{0}$ and covariance $\sigma^2 \mathbf{I}$.
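As a purely illustrative sketch (ours, not taken from the paper), the received-signal model in (\[eq1\]) can be simulated directly; the Rayleigh fading model and every numerical value below are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
K, N = 8, 64                                   # users and processing gain
p = rng.uniform(0.1, 1.0, size=K)              # transmit powers p_k
h = rng.rayleigh(scale=1.0, size=K)            # channel gains h_k (assumed fading model)
b = rng.choice([-1, 1], size=K)                # transmitted bits b_k
S = rng.choice([-1, 1], size=(N, K)) / np.sqrt(N)  # random spreading sequences s_k
sigma2 = 0.1
w = rng.normal(0.0, np.sqrt(sigma2), size=N)   # Gaussian noise, covariance sigma^2 I

# r = sum_k sqrt(p_k) h_k b_k s_k + w          (Eq. eq1)
r = S @ (np.sqrt(p) * h * b) + w

# matched-filter statistic for each user: y_k = s_k^T r
y = S.T @ r
```

The statistics $y_k$ are what a matched-filter receiver would feed into bit decisions; their output SIR is the quantity $\gamma_k$ that enters the utility function.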
We assume random spreading sequences for all users, i.e., $ {\mathbf{s}}_k = \frac{1}{\sqrt{N}}[v_1 ... v_N]^T$, where the $v_i$’s are independent and identically distributed (i.i.d.) random variables taking values in $\{-1,+1\}$ with equal probabilities. Utility Function ---------------- To pose the power control problem as a non-cooperative game, we first need to define a suitable utility function. It is clear that a higher signal to interference plus noise ratio (SIR) level at the output of the receiver will result in a lower bit error rate and hence higher throughput. However, achieving a high SIR level requires the user terminal to transmit at a high power which in turn results in low battery life. This tradeoff can be quantified (as in [@GoodmanMandayam00]) by defining the utility function of a user to be the ratio of its throughput to its transmit power, i.e., $$\label{eq2} u_k = \frac{T_k}{p_k} \ .$$ Throughput is the net number of information bits that are transmitted without error per unit time (sometimes referred to as *goodput*). It can be expressed as $$\label{eq3} T_k = \frac{L}{M} R_k f(\gamma_k) ,$$ where $L$ and $M$ are the number of information bits and the total number of bits in a packet, respectively. $R_k$ and $\gamma_k$ are the transmission rate and the SIR for the $k^{th}$ user, respectively; and $f(\gamma_k)$ is the “efficiency function” which is assumed to be increasing and S-shaped (sigmoidal) with $f(\infty)=1$. We also require that $f(0)=0$ to ensure that $u_k=0$ when $p_k=0$. In general, the efficiency function depends on the modulation, coding and packet size. A more detailed discussion of the efficiency function can be found in [@Meshkati_TCOMM]. Note that for a sigmoidal efficiency function, the utility function in (\[eq2\]) is a quasiconcave function of the user’s transmit power.
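To make the quasiconcavity concrete, here is a single-user sketch with one commonly assumed sigmoidal efficiency function, $f(\gamma)=(1-e^{-\gamma/2})^M$; this choice of $f$ and all numerical values are our illustrative assumptions rather than the paper's:

```python
import numpy as np

L_info, M_total, R = 64, 80, 1e4       # information bits, packet bits, rate (b/s)
h2_over_sigma2 = 2e13                  # assumed h^2 / sigma^2 (1/W)

def f(gamma):
    """Example sigmoidal efficiency function with f(0) = 0 and f(inf) = 1."""
    return (1.0 - np.exp(-gamma / 2.0)) ** M_total

def utility(p):
    """u = (L/M) R f(gamma) / p, with gamma = p h^2 / sigma^2 (single user)."""
    return (L_info / M_total) * R * f(p * h2_over_sigma2) / p

p_grid = np.logspace(-14, -10, 2000)   # candidate transmit powers (W)
u = utility(p_grid)
i_star = int(np.argmax(u))             # quasiconcavity: one interior maximum
p_star = p_grid[i_star]
```

On this grid the utility rises to a single interior maximum and then falls off as $1/p$; the maximizer satisfies the first-order condition $f'(\gamma^*)\gamma^* = f(\gamma^*)$, so the best-response power is the one that drives the SIR to $\gamma^*$.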
The throughput $T_k$ in (\[eq3\]) could also be replaced with any increasing concave function such as the Shannon capacity formula as long as we make sure that $u_k = 0$ when $p_k=0$. Based on (\[eq2\]) and (\[eq3\]), the utility function for user $k$ can be written as $$\label{eq4} u_k = \frac{L}{M} R \frac{f(\gamma_k)}{p_k}\ .$$ For the sake of simplicity, we have assumed that the transmission rate is the same for all users, i.e., $R_1 = ... = R_K = R$. All the results obtained here can be easily generalized to the case of unequal rates. The utility function in (\[eq4\]), which has units of *bits/Joule*, captures very well the tradeoff between throughput and battery life and is particularly suitable for applications where energy efficiency is crucial. Delay Constraints {#delay constraint} ----------------- Let $X$ represent the (random) number of transmissions required for a packet to be received without any errors. The assumption is that if a packet has one or more errors, it will be retransmitted. We also assume that retransmissions are independent of each other. It is clear that the transmission delay for a packet is directly proportional to $X$. Therefore, any constraint on the transmission delay can be equivalently expressed as a constraint on the number of transmissions. Assuming that the packet success rate is given by the efficiency function $f(\gamma)$[^1], the probability that exactly $m$ transmissions are required for the successful transmission of
--- abstract: 'Deep learning approaches to cyclone intensity estimation have recently shown promising results. However, owing to the extreme scarcity of cyclone data at specific intensities, most existing deep learning methods fail to achieve satisfactory performance on cyclone intensity estimation, especially on classes with few instances. To avoid the degradation of recognition performance caused by scarce samples, we propose a context-aware CycleGAN which learns the latent evolution features from adjacent cyclone intensities and synthesizes CNN features of classes lacking samples from unpaired source classes. Specifically, our approach synthesizes features conditioned on the learned evolution features, while no extra information is required. Experimental results under several evaluation methods show the effectiveness of our approach, which can even predict unseen classes.' address: | School of Information and Communication Engineering,\ Beijing University of Post and Telecommunications, Beijing, China bibliography: - 'arixv\_cyclone.bib' title: 'CYCLONE INTENSITY ESTIMATE WITH CONTEXT-AWARE CYCLEGAN' --- =1 context-aware CycleGAN, cyclone intensity estimation, feature generation Introduction {#sec:intro} ============ Cyclone intensity estimation is an important task in meteorology for predicting the destructive potential of a cyclone. The intensity of a cyclone, which is defined as the maximum wind speed near the cyclone center, is the most critical parameter of a cyclone [@cyclone; @CycloneClassify]. The main assumption of estimation methods is that cyclones with similar intensity tend to have a similar pattern [@cyclone]. Early estimation approaches rely on human-constructed cyclone features, which are sparse and subjectively biased [@cyclone]. With the remarkable progress of deep learning, using Convolutional Neural Networks (CNN) to estimate the intensity of cyclones [@CycloneClassify; @CycloneRotation] has attracted increasing attention.
\[fig:example\] Existing deep learning approaches for cyclone intensity estimation can be roughly split into two categories: classification based methods [@CycloneClassify] and regression based methods [@CycloneRotation]. Classification approaches estimate cyclone intensities by treating each intensity label as an independent fixed class and use a cross-entropy loss to optimize the model. Regression methods estimate exact intensity values of cyclones by using mean squared error (MSE) as a loss function. However, a common intrinsic problem in existing approaches is that they all ignore the negative effects of the specific cyclone data distribution, as shown in Fig 1 (b), where some cyclone classes only contain few instances. ![image](WithCyc.jpg){width="80.00000%" height="4.6cm"} \[fig:pipline\] A natural approach to address this problem is synthesizing the required samples to supplement the training set. Recently, generating training data with generative adversarial networks (GAN) has attracted increasing attention due to the ability to generate samples conditioned on specific attributes or categories, e.g. synthesizing disgust or sad emotion images from the neutral class with Cycle-Consistent Adversarial Networks (CycleGAN) [@Emotion]. Most existing approaches [@featureGeneration; @featureGenerationCycle] rely on extra attribute information to train the GAN model and generate samples. However, in this work, each cyclone sample only has an intensity label and some cyclone classes extremely lack samples, which makes existing methods perform poorly. Evolution features, the differences of features between adjacent cyclone classes, have a similar pattern on each fixed cyclone intensity interval. As illustrated in Fig.1 (a), the highlighted points are evolution features between images of adjacent cyclone intensities, which are located on the eye of cyclones and the border between cyclones and sea.
In this paper, we propose a context-aware CycleGAN to synthesize CNN features of cyclone classes lacking samples in the absence of extra information. In particular, the generator of the context-aware CycleGAN is modified to learn cyclone evolution features conditioned on a given context intensity interval, which is the difference in intensity between any two samples involved in training. Moreover, the generator is regularized by a classification loss to regulate the intensity characteristic of synthetic features, and the adversarial loss is improved with the Wasserstein distance [@WGAN] for training stability. Based on the learned evolution features, our method is able to synthesize features of any cyclone intensity from a contextual cyclone sample. We summarize our contributions as follows: - Propose the concept of evolution features, which focus on the difference of features between adjacent classes instead of the normal features of classes. - Improve CycleGAN to synthesize features under the constraint of a context intensity interval, where a classification loss is optimized to regulate cyclone intensity. Our approach {#sec:format} ============ Different from conventional generative methods, our method is suited to synthesizing features for classes lacking samples and classes that are context-dependent, without extra information. Specific to cyclone intensity estimation, the context refers to the interval between adjacent cyclone intensities. Therefore, the key to our model is the ability to generate the required CNN features from unpaired source classes by relying on evolution features of a fixed context intensity interval, as shown in Fig 2. We begin by defining the problem of interest.
Let $X={(f_x,s_x,c_{x\rightarrow y})}$ and $Y={(f_y,s_y,c_{y\rightarrow x})}$, where $X$ and $Y$ are source and target classes, $f\in \mathbb{R}^{d_x}$ denotes the CNN features of a cyclone image extracted by the CNN, $s$ denotes the intensity labels in $S=\{s_1,...,s_K\}$ consisting of $K$ discrete speeds, and $c_{x\rightarrow y}$ is the context attribute vector of the label change from $s_x$ to $s_y$; $c_{y\rightarrow x}$ is defined similarly. To improve the generalization ability of our method, the CNN features $f$ are concatenated with random noise, as in [@featureGeneration; @featureGenerationCycle]. Context-aware CycleGAN {#sec:generator} ---------------------- In general, a GAN [@WGAN] consists of a generative network ($G$) and a discriminative network ($D$) that are iteratively trained in a two-player minimax game manner. CycleGAN is used to realize the unpaired translation between source classes ($X$) and target classes ($Y$). There are two generators $G_{X\rightarrow Y}$ and $G_{Y \rightarrow X}$ in CycleGAN. $G_{X\rightarrow Y}$ learns a mapping $X\rightarrow Y$ and $G_{Y \rightarrow X}$ learns a mapping $Y \rightarrow X$ simultaneously [@cycleGAN]. **Context-aware Transform.** Learning evolution features from adjacent cyclone classes is critical to our method. Hence, our generator consists of a single hidden layer and a context-aware transform layer conditioned on the context $c$, which focuses on the evolution features. Formally, the context-aware transform layer in the generator transforms the input $f_i$ to the output $f_o$ by relying on the context vector $c$. The transformation is denoted as $f_o=g(f_i,c)=r(W^c f_i+b)$, where $g()$ is the function of the transform layer, $r()$ is the ReLU function, $b$ is the bias of the layer, and $W^c$ is the weight parameter.
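For concreteness, a minimal NumPy sketch of the context-aware transform layer $f_o = r(W^c f + b)$, using the decomposition of $W^c$ specified next; the dimensions, random initialisation, and the stand-in for the context embedding $E(c)$ are all assumptions of the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
d_f, d_c = 128, 16                             # feature / context dims (assumed)

W_bar = rng.normal(0.0, 0.05, size=(d_f, d_f))     # context-independent weight
V_p = rng.normal(0.0, 0.05, size=(d_f, d_f, d_c))  # context -> auxiliary weight
bias = np.zeros(d_f)

def transform(f_in, E_c):
    """f_o = relu(W^c f + b), with W^c = W_bar + V_p E(c)."""
    W_c = W_bar + V_p @ E_c                    # contract V_p over the context axis
    return np.maximum(W_c @ f_in + bias, 0.0)

f_src = rng.normal(size=d_f)                   # CNN features of a source sample
E_c = rng.normal(size=d_c)                     # stand-in for the embedding E(c)
f_out = transform(f_src, E_c)
```

The context only enters through the weight matrix, so the same layer can realise a different feature translation for each intensity interval.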
In particular, the weight parameter $W^c$ is designed as the sum of the following two terms: $$W^c=\overline{W}_p+V_pE(c), \label{weight}$$ where $\overline{W}_p$ is the weight independent of the context attributes, $E(c)$ is the desired context representation, obtained from the context intensity interval by a function denoted $E(\cdot)$, and $V_p$ transforms the context attributes into the weight of the auxiliary transform layer. All of the parameters in (\[weight\]) are learnt during the training stage [@contextTranfer; @context-aware]. Moreover, the second term in (\[weight\]) is an auxiliary transform layer generated from the context attribute $c$, which focuses on learning evolution features. **Objective Function.** Using evolution features alone does not guarantee a label-preserving transformation. To generate features of the expected intensity, the classification loss of a pretrained classifier is minimized as a regularization term for the generator. Besides, adversarial losses in CycleGAN are applied to iteratively train the generators and discriminators [@cyclone]. In particular, the regularized classification loss $L_{cls}$ is defined as: $$\mathcal {L}_{cls}(s_y, \widetilde{f}_y;\theta)= -E_{\widetilde{f}_y\sim P_{\widetilde{f}_y}}[\log P(s_y|\widetilde{f}_y;\theta)], \vspace{-8pt} \label{classification_loss}$$ where $\widetilde{f}_y$ denotes the features synthesized by $G_{X\rightarrow Y}$ from the real features $f_x$, $s_y$ is the class label of $\widetilde{f}_y$, and $P(s_y|\widetilde{f}_y;\theta)$ denotes the probability that the synthetic features are predicted with their true label $s_y$, given by the output of a linear softmax with the parameter $\theta$ pretrained on real features. $L_{cls}(s_x, \widetilde{f}_x;\theta)$ is analogously defined with the same parameter $\theta$. Hence, the full objective is: $$\begin{aligned} \mathcal{L}(G_{X\rightarrow Y}, &G_{Y\rightarrow X}, D_X, D_Y) =\mathcal{L
--- abstract: | This article studies the structure of the automorphism groups of general graph products of groups. We give a complete characterisation of the automorphisms that preserve the set of conjugacy classes of vertex groups for arbitrary graph products. Under mild conditions on the underlying graph, this allows us to provide a simple set of generators for the automorphism groups of graph products of *arbitrary* groups. We also obtain information about the geometry of the automorphism groups of such graph products: lack of property (T), acylindrical hyperbolicity. The approach in this article is geometric and relies on the action of graph products of groups on certain complexes with a particularly rich combinatorial geometry. The first such complex is a particular Cayley graph of the graph product that has a *quasi-median* geometry, a combinatorial geometry reminiscent of (but more general than) CAT(0) cube complexes. The second (strongly related) complex used is the Davis complex of the graph product, a CAT(0) cube complex that also has a structure of right-angled building. author: - Anthony Genevois and Alexandre Martin title: Automorphisms of graph products of groups from a geometric perspective --- Introduction and main results ============================= Graph products of groups, which have been introduced by Green in [@GreenGP], define a class of group products that, loosely speaking, interpolates between free and direct products. For a simplicial graph $\Gamma$ and a collection of groups $\mathcal{G}=\{ G_v \mid v \in V(\Gamma) \}$ indexed by the vertex set $V(\Gamma)$ of $\Gamma$, the *graph product* $\Gamma \mathcal{G}$ is defined as the quotient $$\left( \underset{v \in V(\Gamma)}{\ast} G_v \right) / \langle \langle gh=hg, \ h \in G_u, g \in G_v, \{ u,v \} \in E(\Gamma) \rangle \rangle,$$ where $E(\Gamma)$ denotes the edge set of $\Gamma$. 
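As a concrete illustration (our example, not taken from the article): for $\Gamma$ the path graph on three vertices $u, v, w$ (edges $\{u,v\}$ and $\{v,w\}$) with all three vertex groups infinite cyclic, the graph product is the right-angled Artin group

```latex
\Gamma\mathcal{G} \;=\; \big\langle\, a, b, c \;\big|\; ab = ba,\ bc = cb \,\big\rangle,
```

in which the commutation relations run exactly along the edges of $\Gamma$, while $a$ and $c$ generate a free subgroup since $\{u,w\} \notin E(\Gamma)$.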
The two extreme situations where $\Gamma$ has no edge and where $\Gamma$ is a complete graph respectively correspond to the free product and the direct sum of the groups belonging to the collection $\mathcal{G}$. Graph products include two intensively studied families of groups: right-angled Artin groups and right-angled Coxeter groups. Many articles have been dedicated to the study of the automorphism groups of these particular examples of graph products. In particular, the automorphism groups of right-angled Coxeter groups have been intensively studied in relation to the famous rigidity problem for Coxeter groups, see for instance [@RACGrigidity]. Beyond these two cases, the automorphism groups of general graph products of groups are poorly understood. Most of the literature on this topic imposes very strong conditions on the graph products involved, either on the underlying graph (as in the case of the automorphism groups of free products [@OutSpaceFreeProduct; @HorbezHypGraphsForFreeProducts; @HorbezTitsAlt]) or on the vertex groups (in most cases, they are required to be abelian or even cyclic [@AutGPabelianSet; @AutGPabelian; @AutGPSIL; @RuaneWitzel]). Automorphism groups of graph products of more general groups (and their subgroups) are essentially uncharted territory. For instance, the following general problem is still unsolved:\ **General Problem.** Find a natural / simple generating set for the automorphism group of a general graph product of groups.\ The first result in that direction is the case of right-angled Artin groups or right-angled Coxeter groups, solved by Servatius [@RAAGServatius] and Laurence [@RAAGgenerators]. More recently, Corredor–Gutierrez described a generating set for automorphism groups of graph products of cyclic groups [@AutGPabelianSet], using previous work of Gutierrez–Piggott–Ruane [@AutGPabelian]. Beyond these cases however, virtually nothing is known about the automorphism group of a graph product.
Certain elements in the generating sets of right-angled Artin groups naturally generalise to more general graph products, and we take a moment to mention them as they play an important role in the present work: - For an element $g\in \Gamma{{\mathcal G}}$, the *inner automorphism* $\iota(g)$ is defined by $$\iota(g): \Gamma{{\mathcal G}}\rightarrow \Gamma{{\mathcal G}}, ~~x \mapsto gxg^{-1}.$$ - Given an isometry $\sigma : \Gamma \to \Gamma$ and a collection of isomorphisms $\Phi = \{ \varphi_u : G_u \to G_{\sigma(u)} \mid u \in V(\Gamma) \}$, the *local automorphism* $(\sigma, \Phi)$ is the automorphism of $\Gamma \mathcal{G}$ induced by $$\left\{ \begin{array}{ccc} \bigcup\limits_{u \in V(\Gamma)} G_u & \to & \Gamma \mathcal{G} \\ g & \mapsto & \text{$\varphi_u(g)$ if $g \in G_u$} \end{array} \right. .$$ For instance, in the specific case of right-angled Artin groups, graphic automorphisms (i.e. automorphisms of $\Gamma{{\mathcal G}}$ induced by a graph automorphism of $\Gamma$) and inversions [@ServatiusCent] are local automorphisms. - Given a vertex $u \in V(\Gamma)$, a connected component $\Lambda$ of $\Gamma \backslash \mathrm{star}(u)$ and an element $h \in G_u$, the *partial conjugation* $(u, \Lambda,h)$ is the automorphism of $\Gamma \mathcal{G}$ induced by $$\left\{ \begin{array}{ccc} \bigcup\limits_{u \in V(\Gamma)} G_u & \to & \Gamma \mathcal{G} \\ g & \mapsto & \left\{ \begin{array}{cl} g & \text{if $g \notin \langle \Lambda \rangle$} \\ hgh^{-1} & \text{if $g \in \langle \Lambda \rangle$} \end{array} \right. \end{array} \right. .$$ Notice that an inner automorphism of $\Gamma \mathcal{G}$ is always a product of partial conjugations. The goal of this article is to describe the structure (and provide a generating set) for much larger classes of graph products of groups by adopting a new geometric perspective.
In a nutshell, the strategy is to consider the action of graph products $\Gamma{{\mathcal G}}$ on an appropriate space $X$ and to show that this action can be extended to an action of $\mathrm{Aut}(\Gamma{{\mathcal G}})$ on $X$, in order to exploit the geometry of this action. Such a ‘rigidity’ phenomenon appeared for instance in the work of Ivanov on the action of mapping class groups of hyperbolic surfaces on their curve complexes: Ivanov showed that an automorphism of the mapping class group induces an automorphism of the underlying curve complex [@IvanovAut]. Another example is given by the Higman group: in [@MartinHigman], the second author computed the automorphism group of the Higman group $H_4$ by first extending the action of $H_4$ on a CAT(0) square complex naturally associated to its standard presentation to an action of $\mathrm{Aut}(H_4)$. In this article, we construct such rigid actions for large classes of graph products of groups. The results from this article vastly generalise earlier results obtained by the authors in a previous version (not intended for publication, but still available on the arXiv as [@GPcycle]).\ We now present the main results of this article. **For the rest of this introduction, we fix a finite simplicial graph $\Gamma$ and a collection ${{\mathcal G}}$ of groups indexed by $V(\Gamma)$.** #### The subgroup of conjugating automorphisms. When studying automorphisms of graph products of groups, an important subgroup consists of those automorphisms that send vertex groups to conjugates of vertex groups. This subgroup already appears for instance in the work of Tits [@TitsAutCoxeter], Corredor–Gutierrez [@AutGPabelianSet], and Charney–Gutierrez–Ruane [@AutGPabelian]. A description of this subgroup was only available for right-angled Artin groups and other graph products of *cyclic* groups by work of Laurence [@RAAGgenerators].
A central result of this paper is a complete characterisation of this subgroup under *no restriction on the vertex groups or the underlying graph*. More precisely, let us call a *conjugating automorphism* of $\Gamma \mathcal{G}$ an automorphism $\varphi$ of $\Gamma \mathcal{G}$ such that, for every vertex group $G_v \in \mathcal{G}$, there exists a vertex group $G_w \in \mathcal{G}$ and an element $g \in \Gamma \mathcal{G}$ such that $\varphi(G_v)=gG_wg^{-1}$. We prove the following: *The subgroup of conjugating automorphisms of $\Gamma \mathcal{G}$ is exactly the subgroup of $ \mathrm{Aut}(\Gamma{{\mathcal G}})$ generated by the local automorphisms and the partial conjugations.* #### Generating set and algebraic structure. With this characterisation of conjugating automorphisms at our disposal, we are able to completely describe the automorphism group of large classes of graph products, and in particular to give a generating set for such automorphism groups
--- abstract: 'Magnetic B-type stars exhibit photometric variability due to diverse causes, and consequently on a variety of timescales. In this paper we describe the interpretation of BRITE photometry and related ground-based observations of 4 magnetic B-type systems: $\epsilon$ Lupi, $\tau$ Sco, a Cen and $\epsilon$ CMa.' author: - 'G.A. Wade' - 'D.H. Cohen' - 'C. Fletcher' - 'G. Handler' - 'L. Huang' - 'J. Krticka' - 'C. Neiner' - 'E. Niemczura' - 'H. Pablo' - 'E. Paunzen' - 'V. Petit' - 'A. Pigulski' - 'Th. Rivinius' - 'J. Rowe' - 'M. Rybicka' - 'R. Townsend' - 'M. Shultz' - 'J. Silvester' - 'J. Sikora' - 'the BRITE-Constellation Executive Science Team (BEST)' bibliography: - 'wade.bib' title: 'Magnetic B stars observed with BRITE: Spots, magnetospheres, binarity, and pulsations' --- Introduction ============ Approximately 10% of mid- to early-B stars located on the main sequence show direct evidence of strong surface magnetism. Studying the photometric variability of such systems provides insight into their multiplicity and physical characteristics, rotation, surface and wind structures, and pulsation properties. For example, in late- and mid-B-type stars (below about spectral type B2) magnetic fields stabilise atmospheric motions and allow the accumulation of peculiar abundances (and abundance distributions) of various chemical elements. At earlier spectral types, magnetic fields channel radiatively-driven stellar winds, confining wind plasma to produce complex co-rotating magnetospheres. Some magnetic stars are located in close binary systems, where photometric variability may reveal eclipses, tidal interaction and (potentially) mass and energy transfer effects. Finally, some magnetic B stars are located in an instability strip, and exhibit $\beta$ Cep and SPB-type pulsations.
The bright magnetic B stars observed by the BRITE-Constellation have been preferential targets of spectropolarimetric monitoring within the context of the BRITEpol survey (see the paper by Neiner et al. in these proceedings). In this article we provide brief reports on analysis of BRITE photometry and complementary data for 4 magnetic B-type stars for which the BRITE observations detect or constrain variability due to these mechanisms. $\epsilon$ Lupi =============== ![Phased photometry (Upper panel - BRITE blue, middle panel - BRITE red) and radial velocities (lower panel) of $\epsilon$ Lupi, compared with the predictions of the heartbeat model (curves).[]{data-label="fig:epslupi"}](eps_lupi.pdf){width="\textwidth"} $\epsilon$ Lupi is a short-period ($\sim 4.6$ d) eccentric binary system containing two mid/early-B stars . It was observed by the BRITE UBr, BAb, BTr, BLb nano-satellites [see e.g. @2016PASP..128l5001P] during the Centaurus campaign from March to August 2014, and again by BLb during the Scorpius campaign from February to August 2015. [Magnetic fields associated with both the primary and secondary components were reported by @2015MNRAS.454L...1S, making $\epsilon$ Lupi the first known doubly-magnetic massive star binary]{}. The (variable) proximity of the two components led @2015MNRAS.454L...1S to speculate that their magnetospheres may undergo reconnection events during their orbit. Such events, as well as rotational modulation by surface structures and the suspected $\beta$ Cep pulsations of one or both components, could introduce brightness fluctuations potentially observable by BRITE. The periodogram of the BRITE photometry shows power at the known orbital period. When the data are phased accordingly, both the red (BTr+UBr) and blue (BAb) lightcurves exhibit a subtle, non-sinusoidal modulation with peak flux occurring at the same phase as the orbital RV extremum (i.e. periastron). We interpret this modulation as a “heartbeat" effect [e.g. 
@2012ApJ...753...86T], resulting from tidally-induced deformation of the stars during their close passage at periastron. Assuming this phenomenon, we have successfully modeled the lightcurves and RV variations using the PHOEBE code [version 1, @2005ApJ...628..426P see Fig. \[fig:epslupi\]]. ![BRITE red filter photometry of $\tau$ Sco, compared with the predictions of ADM models computed assuming pure scattering. The different colours correspond to source surface radii ranging from 2-5 $R_*$.[]{data-label="fig:tausco"}](tausco.pdf){width="\textwidth"} $\tau$ Sco ========== $\tau$ Sco is a hot main sequence B0.5V star that was observed by BAb, UBr, BLb, BHr during the Scorpius campaign from February to August 2015. @2006MNRAS.370..629D detected a magnetic field in the photosphere of this X-ray bright star, varying according to a rotational period of 41 d. They modeled the magnetic field topology, finding it to be remarkably complex. @2010ApJ...721.1412I acquired Suzaku X-ray measurements of $\tau$ Sco. They found that the very modest phase variation of the X-ray flux was at odds with the predicted variability according to the 3D force-free extrapolation of the magnetosphere reported by @2006MNRAS.370..629D. Petit et al. (in prep) have sought to explain this discrepancy by reconsidering the physical scale of the closed magnetospheric loops of $\tau$ Sco. New modeling of the system using the Analytic Dynamical Magnetosphere (ADM) formalism [@2016MNRAS.462.3830O] yields predictions of the X-ray variability as a function of the adopted mass-loss rate (as quantified by the “source surface” of the extrapolation). These same ADM models have been used in conjunction with BRITE photometry to constrain the distribution of cooler plasma surrounding the star. Adopting a pure electron scattering approximation, we have computed the expected brightness modulation as a function of source surface distance (Fig. \[fig:tausco\]).
The very high quality of the BRITE red photometry allows us to rule out models with source surface radii smaller than 3 $R_*$. a Cen ===== a Cen is a Bp star of intermediate spectral type ($T_{\rm eff}\sim 19$ kK) that exhibits extreme variations of its helium lines during its 8.82 d rotational cycle. It was observed during the Centaurus campaign from March to August 2014 by UBr, BAb, BTr, BLb. used high resolution spectra to compute Doppler Imaging maps of the distributions of He, Fe, N and O of a Cen, revealing in particular a more than two-order-of-magnitude contrast in the abundance of He in opposite stellar hemispheres. They also discovered that the He-poor hemisphere shows a high relative concentration of $^3$He. The BRITE photometry of a Cen exhibits clear variability according to the previously-known rotational period (Fig. \[fig:acen1\], left panel). It also reveals marginal variability at frequencies that may correspond to pulsations in the SPB range. Using a collection of 19 new ESPaDOnS and HARPSpol Stokes $V$ spectra, in addition to archival UVES spectra (e.g. Fig. \[fig:acen1\], right panel), new self-consistent Magnetic Doppler Imaging maps have been derived of the stellar magnetic field and the abundance distributions of various elements, including Si (Fig. \[fig:acen2\]). These maps will be used as basic input for modeling the two-colour BRITE lightcurves . ![[*Left panel -*]{} BRITE photometry of a Cen (upper curve - blue filter, lower curve - red filter) phased according to the stellar rotation period. [*Right panel -*]{} Dynamic spectrum of the rotational variability of the He [i]{} $\lambda 4388$ line, showing the extreme changes in line strength.[]{data-label="fig:acen1"}](acen_brite.pdf "fig:"){width="6.5cm"}![[*Left panel -*]{} BRITE photometry of a Cen (upper curve - blue filter, lower curve - red filter) phased according to the stellar rotation period. 
[*Right panel -*]{} Dynamic spectrum of the rotational variability of the He [i]{} $\lambda 4388$ line, showing the extreme changes in line strength.[]{data-label="fig:acen1"}](acen_he.pdf "fig:"){width="6.5cm"} ![Magnetic Doppler Imaging maps of a Cen, showing the surface Si distribution (upper row), and the magnetic field modulus and orientation (middle and bottom rows, respectively). []{data-label="fig:acen2"}](acen_map.pdf
--- abstract: 'After the development of a self-consistent quantum formalism nearly a century ago there began a quest for how to interpret the theoretical constructs of the formalism. In fact, the pursuit of new interpretations of quantum mechanics persists to this day. Most of these endeavors assume the validity of standard quantum formalism and proceed to ponder the ontic nature of wave functions, operators, and the Schrödinger equation. The present essay takes a different approach, more epistemological than ontological. I endeavor to give a heuristic account of how empirical principles lead us to a quantum mechanical description of the world. An outcome of this approach is the suggestion that the notion of discrete quanta leads to the wave nature and statistical behavior of matter rather than the other way around. Finally, the hope is to offer some solace to those of us who are older and still worry about such things and also to provide the neophyte student of quantum mechanics with physical insight into the mathematically abstract and often baffling aspects of the theory.' author: - Stephen Boughn - | Stephen Boughn[*[^1]*]{}\ *Department of Physics, Princeton University, Princeton NJ\ *Departments of Physics and Astronomy, Haverford College, Haverford PA** date: '[LaTeX-ed ]{}' title: A Quantum Story --- [*Keywords:*]{} Quantum theory $\cdot$ Canonical quantization $\cdot$ Foundations of quantum mechanics $\cdot$ Measurement problem Introduction {#INTRO} ============ From the very beginnings of quantum mechanics a century ago, it was clear that the concepts of classical physics were insufficient for describing many phenomena. In particular, the fact that electrons and light exhibited both the properties of particles and the properties of waves was anathema to classical physics. After the development of a self-consistent quantum formalism, there began a quest for just how to interpret the new theoretical constructs.
De Broglie and Schrödinger favored interpreting quantum waves as depicting a continuous distribution of matter while Einstein and Born suggested that they only provide a statistical measure of where a particle of matter or radiation might be. After 1930, the [*Copenhagen interpretation*]{} of Bohr and Heisenberg was generally accepted, although Bohr and Heisenberg often emphasized different aspects of the interpretation and there has never been complete agreement as to its meaning even among its proponents[@St1972]. The Copenhagen interpretation dealt with the incongruous dual wave and particle properties by embracing Bohr’s [*principle of complementarity*]{} in which complementary features of physical systems can only be accessed by experiments designed to observe one or the other but not both of these features. For example, one can observe either the particle behavior or wave behavior of electrons but not both at the same time. In addition, the waves implicit in Schrödinger’s equation were interpreted as probability amplitudes for the outcomes of experiments. Finally, in order to facilitate the communication of experimental results, the Copenhagen interpretation emphasized that experiments, which invariably involve macroscopic apparatus, must be described in classical terms. These aspects of quantum theory are familiar to all beginning students of quantum mechanics; however, many students harbor the uneasy feeling that something is missing. How can an electron in some circumstances exhibit the properties of a particle and at other times exhibit the properties of a wave? How is it that the primary theoretical constructs of quantum mechanics, the Schrödinger wave functions or Hilbert state vectors, only indicate the probability of events? Quantum mechanics itself does not seem to indicate that any event actually happens. Why is it that experiments are only to be described classically?
Where is the quantum/classical divide between the quantum system and the classical measurement and what governs interactions across this divide? In fact, these sorts of questions are raised not only by neophyte students of quantum mechanics but also by seasoned practitioners. In actuality, the question of how to interpret quantum theory has never been fully answered and new points of view are still being offered. Many of these interpretations involve novel mathematical formalisms that have proved to be useful additions to quantum theory. In fact, new formulations of quantum mechanics and quantum field theory, including axiomatic approaches, are often accompanied by new or modified interpretations. Such interpretive analyses are largely framed within the mathematical formalism of quantum theory and I will refrain from saying anything more about them. The purpose of this essay is to address a different, more epistemological question, “What is it about the physical world that leads us to a quantum theoretic model of it?" The intention is to in no way malign the more formal investigations of quantum mechanics. Such investigations have been extremely successful in furthering our understanding of quantum theory as well as increasing our ability to predict and make use of novel quantum phenomena. These treatments invariably begin with the assumption that standard quantum mechanics is a fundamental law of nature and then proceed with interpreting its consequences. In this essay I take the point of view that quantum mechanics is a model, a human invention, created to help us describe and understand our world and then proceed to address the more philosophical question posed above, a question that is still pondered by some physicists and philosophers and certainly by many physics students when they first encounter quantum mechanics. 
Most of the latter group eventually come to some understanding, perhaps via the ubiquitous Copenhagen Interpretation, and then proceed according to the maxim “Shut up and calculate!"[^2] One modest aim of this essay is to provide such students with a heuristic perspective on quantum mechanics that might enable them to proceed to calculations without first having to “shut up". What’s Quantized? ================= Let us begin by asking where the ‘quantum’ in quantum mechanics comes from. What is it that’s quantized? That matter is composed of discrete quanta, atoms, was contemplated by Greek philosophers in the $5^{th}$ century B.C.[@Be2011] and the idea continued to be espoused through the $18^{th}$ century. Even though it wasn’t until the $19^{th}$ and early $20^{th}$ centuries that the existence of atoms was placed on a firm empirical basis, it’s not difficult to imagine what led early philosophers to an atomistic model. Perhaps the primary motivation, an argument that still resonates today, was to address the puzzle of change, i.e., the transformation of matter. This was often expressed by the assertion that things cannot come from nothing nor can they ever return to nothing. Rather, creation, destruction, and change are most simply explained by the rearrangement of the atomic constituents of matter. In his epic poem [*De rerum natura*]{} (On the Nature of Things, circa 55 BC), Lucretius[^3] explained (translation by R. Melville[@Lu55]) > ...no single thing returns to nothing but at its dissolution everything returns to matter’s primal particles...they must for sure consist of changeless matter. For if the primal atoms could suffer change...then no more would certainty exist of what can be and what cannot...Nor could so oft the race of men repeat the nature, manners, habits of their parents. While it took nearly 2500 years, the conjectures of the atomists were largely justified. 
One might also reasonably ask, “Are there other aspects of nature that are quantized?" It’s no coincidence that during the same period that saw the confirmation of the atomic hypothesis, there appeared evidence for the discrete nature of atomic interactions. Perhaps the first clues were the early $19^{th}$ century observations by Wollaston and Fraunhofer of discrete absorption lines in the spectrum of the sun and the subsequent identification of emission lines in the spectra of elements in the laboratory by Kirchhoff and Bunsen in 1859. In 1888, Rydberg was able to relate the wavelengths of these discrete spectral lines to ratios of integers. Boltzmann introduced discrete energy as early as 1868 but only as a computational device in statistical mechanics. It was in 1900 that Planck found he must take such quantization more seriously in his derivation of the Planck black body formula[@Ba2009]. A decade later Jeans, Poincaré, and Ehrenfest demonstrated that the discreteness of energy states, which source black body radiation, follows from the general morphology of the spectrum and is not the consequence of precisely fitting the observed spectral data[@No1993]. In 1905 Einstein introduced the notion of quanta of light with energies that depended on frequency with precisely the same relation as introduced by Planck[^4], $E=h\nu$, and then used this relation to explain qualitative observations of the photoelectric effect[^5]. In 1907 it was again Einstein who demonstrated that energy quantization of harmonic oscillators explained why the heat capacities of solids decrease at low temperatures. Finally, Bohr’s 1913 model of discrete energy levels of electrons in atoms explained the spectral lines of Kirchhoff and Bunsen as well as resolved the conflict of Maxwell’s electrodynamics with the stability of Rutherford’s 1911 nuclear atomic model.
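Einstein's use of Planck's relation can be made quantitative with a one-line calculation. The sketch below (the 400 nm wavelength and the sodium work function are our illustrative assumptions, not values from the text) checks that a violet photon carries enough energy to eject an electron from a sodium surface:

```python
# Planck-Einstein relation E = h*nu = h*c/lambda, applied to the photoelectric effect.
h = 6.626e-34   # Planck constant (J s)
c = 2.998e8     # speed of light (m/s)
eV = 1.602e-19  # joules per electron-volt

wavelength = 400e-9                          # violet light (illustrative choice)
photon_energy_eV = h * c / wavelength / eV   # ~3.10 eV per photon
work_function_eV = 2.28                      # sodium (assumed textbook value)
K_max = photon_energy_eV - work_function_eV  # max kinetic energy of ejected electron
print(f"E = {photon_energy_eV:.2f} eV, K_max = {K_max:.2f} eV")
```

Since the photon energy exceeds the work function, electrons are ejected, and the surplus appears as kinetic energy, independent of the light's intensity.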
In a 1922 conversation with Heisenberg[@He1972], Bohr expressed an argument for the discreteness of atomic interactions that harkened back to the ancient Greeks’ arguments for atoms (and to the Lucretius quote above). Bohr based his argument on the stability of matter, but not in the sense just mentioned. Bohr explained, > By ‘stability’ I mean that the same substances always have the same properties, that the same crystals recur, the same chemical compounds, etc. In other words, even after a host of changes due to external influences, an iron atom will always remain an iron atom, with exactly the same properties as before. This cannot be explained by the principles of classical mechanics...according to which all effects have precisely determined causes, and according to which the present state of a phenomenon or process is fully determined by the one that immediately preceded it. In other words, in a world composed of Rutherford atoms, quantum discreteness is necessary in order to preserve the simplicity and regularity of nature. Bohr’
--- abstract: 'We propose a novel class of network models for temporal dyadic interaction data. Our objective is to capture important features often observed in social interactions: sparsity, degree heterogeneity, community structure and reciprocity. We use mutually-exciting Hawkes processes to model the interactions between each (directed) pair of individuals. The intensity of each process allows interactions to arise as responses to opposite interactions (reciprocity), or due to shared interests between individuals (community structure). For sparsity and degree heterogeneity, we build the non-time-dependent part of the intensity function on compound random measures following [@Todeschini2016]. We conduct experiments on real-world temporal interaction data and show that the proposed model outperforms competing approaches for link prediction, and leads to interpretable parameters.' author: - | Xenia Miscouridou$^{\mathbf{1}}$, François Caron$^{\mathbf{1}}$, Yee Whye Teh$^{\mathbf{1,2}}$\ $^{1}$Department of Statistics, University of Oxford\ $^{2}$DeepMind\ `{miscouri, caron, y.w.teh}@stats.ox.ac.uk`\ bibliography: - '../biblio.bib' title: 'Modelling sparsity, heterogeneity, reciprocity and community structure in temporal interaction data' --- Introduction ============ There is a growing interest in modelling and understanding temporal dyadic interaction data. Temporal interaction data take the form of time-stamped triples $(t,i,j)$ indicating that an interaction occurred between individuals $i$ and $j$ at time $t$. Interactions may be directed or undirected. Examples of such interaction data include commenting on a post on an online social network, exchanging an email, or meeting in a coffee shop. An important challenge is to understand the underlying structure that underpins these interactions. To do so, it is important to develop statistical network models with interpretable parameters, that capture the properties which are observed in real social interaction data.
One important aspect to capture is the *community structure* of the interactions. Individuals are often affiliated to some latent communities (e.g. work, sport, etc.), and their affiliations determine their interactions: they are more likely to interact with individuals sharing the same interests than with individuals affiliated with different communities. Another important aspect is *reciprocity*. Many events are responses to recent events of the opposite direction. For example, if Helen sends an email to Mary, then Mary is more likely to send an email to Helen shortly afterwards. A number of papers have proposed statistical models to capture both community structure and reciprocity in temporal interaction data [@Blundell2012; @Dubois2013; @Linderman2014]. They use models based on Hawkes processes for capturing reciprocity and stochastic block-models or latent feature models for capturing community structure. In addition to the above two properties, it is important to capture the global properties of the interaction data. Interaction data are often *sparse*: only a small fraction of the pairs of nodes actually interact. Additionally, they typically exhibit high degree (number of interactions per node) *heterogeneity*: some individuals have a large number of interactions, whereas most individuals have very few, therefore resulting in empirical degree distributions being heavy-tailed. As shown by @Karrer2011, @Gopalan2013 and @Todeschini2016, failing to account explicitly for degree heterogeneity in the model can have devastating consequences on the estimation of the latent structure. Recently, two classes of statistical models, based on random measures, have been proposed to capture sparsity and power-law degree distribution in network data. The first one is the class of models based on exchangeable random measures [@Caron2017; @Veitch2015; @Herlau2015; @Borgs2016; @Todeschini2016; @Palla2016; @Janson2017].
The second one is the class of edge-exchangeable models [@Crane2015; @Crane2017; @Cai2016; @Williamson2016; @Janson2017a; @Ng2017]. Both classes of models can handle both sparse and dense networks and, although the two constructions are different, connections have been highlighted between the two approaches [@Cai2016; @Janson2017a]. The objective of this paper is to propose a class of statistical models for temporal dyadic interaction data that can capture all the desired properties mentioned above, which are often found in real world interactions. These are *sparsity*, *degree heterogeneity*, *community structure* and *reciprocity*. Combining all the properties in a single model is non trivial and there is no such construction to our knowledge. The proposed model generalises existing reciprocating relationships models [@Blundell2012] to the sparse and power-law regime. Our model can also be seen as a natural extension of the classes of models based on exchangeable random measures and edge-exchangeable models and it shares properties of both families. The approach is shown to outperform alternative models for link prediction on a variety of temporal network datasets. The construction is based on Hawkes processes and the (static) model of @Todeschini2016 for sparse and modular graphs with overlapping community structure. In Section \[sec:background\], we present Hawkes processes and compound completely random measures which form the basis of our model’s construction. The statistical model for temporal dyadic data is presented in Section \[sec:model\] and its properties derived in Section \[sec:properties\]. The inference algorithm is described in Section \[sec:inference\]. Section \[sec:experiments\] presents experiments on four real-world temporal interaction datasets. 
Background material {#sec:background} =================== Hawkes processes ---------------- Let $(t_k)_{k\geq 1}$ be a sequence of event times with $t_k\geq 0$, and let $\mathcal H_t=(t_k|t_k\leq t )$ denote the subset of event times between time $0$ and time $t$. Let $N(t)=\sum_{k\geq 1}1_{t_k\leq t}$ denote the number of events between time $0$ and time $t$, where $1_{A}=1$ if $A$ is true, and 0 otherwise. Assume that $N(t)$ is a counting process with conditional intensity function $\lambda(t)$, that is for any $t\geq 0$ and any infinitesimal interval $dt$ $$\Pr(N(t+dt)-N(t)=1|\mathcal H_t)=\lambda(t)dt.\label{eq:counting}$$ Consider another counting process $\tilde{N}(t)$ with the corresponding $(\tilde{t}_k)_{k\geq 1}, \mathcal{\tilde{H}}_t, \tilde{\lambda}(t)$. Then, $N(t),\tilde{N}(t)$ are mutually-exciting Hawkes processes  [@self_exc_HP] if the conditional intensity functions $\lambda(t)$ and $\tilde{\lambda}(t)$ take the form $$\begin{aligned} \lambda(t)=\mu + \int_0^t g_\phi(t-u)\, d\tilde{N}(u)\label{eq:hawkesintensity}\qquad \tilde{\lambda}(t)=\tilde{\mu} + \int_0^t {g}_{\tilde{\phi}}(t-u)\, d{N}(u)\end{aligned}$$ where $\mu=\lambda(0)>0, \tilde{\mu}=\tilde{\lambda}(0)>0$ are the base intensities and $g_\phi,g_{\tilde{\phi}}$ are non-negative kernels parameterised by $\phi$ and $\tilde{\phi}$. This defines a pair of processes in which the current rate of events of each process depends on the occurrence of past events of the opposite process. Assume that $\mu=\tilde{\mu},\, \phi= \tilde{\phi}$ and $g_\phi(t) \geq 0$ for $t > 0$, $g_\phi(t)=0$ for $t<0$. If $g_\phi$ admits a form of fast decay then this results in strong local effects. However, if it prescribes a peak away from the origin then longer term effects are likely to occur. We consider here an exponential kernel $$g_\phi(t-u)= \eta e^{-\delta (t-u)}, t>u \label{expo_kernel}$$ where $\phi=(\eta,\delta)$.
$\eta\geq 0$ determines the sizes of the self-excited jumps and $\delta>0$ is the constant rate of exponential decay. The stationarity condition for the processes is $\eta < \delta$. Figure \[fig:process\_and\_inten\] gives an illustration of two mutually-exciting Hawkes processes with exponential kernel and their conditional intensities. Compound completely random measures ----------------------------------- A homogeneous completely random measure (CRM) [@Kingman1967; @Kingman1993] on $\mathbb R_+$ without fixed atoms nor deterministic component takes the form $$\begin{aligned} W=\sum_{i\geq 1}w_i \delta_{\theta_i}\end{aligned}$$ where $(w_i,\theta_i)_{i\geq 1}$ are the points of a Poisson process on $(0,\infty)\times\mathbb R_+$ with mean measure $\rho(dw)H(d\theta)$ where $\rho$ is a Lévy measure, $H$ is a locally bounded measure and $\delta_x$ is the Dirac delta mass at $x$. The homogeneous CRM is completely characterized by $\rho$ and $H$, and we write $W\sim \operatorname{CRM}(\rho
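The mutually-exciting pair with the exponential kernel above can be simulated by Ogata's thinning method, exploiting the fact that with this kernel both intensities only decay between events, so the current total intensity is a valid upper bound until the next accepted point. A minimal sketch (parameter values are illustrative; this is not the authors' inference code):

```python
import math
import random

def simulate_mutual_hawkes(mu, eta, delta, T, seed=0):
    """Ogata thinning for two mutually-exciting Hawkes processes.

    Each process has base rate mu; an event of one process adds a jump
    eta*exp(-delta*(t-u)) to the intensity of the *other* process.
    Stationarity requires eta < delta.
    """
    rng = random.Random(seed)
    events = ([], [])  # event times of process 0 and process 1

    def intensity(p, t):
        # mu plus decayed contributions of past events of the opposite process
        return mu + sum(eta * math.exp(-delta * (t - u)) for u in events[1 - p])

    t = 0.0
    while True:
        # Between events intensities only decay, so this bounds the total rate.
        lam_bar = intensity(0, t) + intensity(1, t)
        t += rng.expovariate(lam_bar)
        if t >= T:
            break
        lam0, lam1 = intensity(0, t), intensity(1, t)
        u = rng.random() * lam_bar
        if u < lam0:
            events[0].append(t)
        elif u < lam0 + lam1:
            events[1].append(t)
        # otherwise the candidate point is thinned (rejected)
    return events

ev0, ev1 = simulate_mutual_hawkes(mu=0.5, eta=0.8, delta=1.5, T=200.0)
```

With $\eta < \delta$ the simulated event counts stay close to the stationary rate $\mu/(1-\eta/\delta)$ per process; raising $\eta$ toward $\delta$ produces the bursty, reciprocated event clusters the model is designed to capture.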
--- abstract: 'Crucial problems of the quantum Internet are the derivation of stability properties of quantum repeaters and theory of entanglement rate maximization in an entangled network structure. The stability property of a quantum repeater entails that all incoming density matrices can be swapped with a target density matrix. The strong stability of a quantum repeater implies stable entanglement swapping with the boundedness of stored density matrices in the quantum memory and the boundedness of delays. Here, a theoretical framework of noise-scaled stability analysis and entanglement rate maximization is conceived for the quantum Internet. We define the term entanglement swapping set that models the status of quantum memory of a quantum repeater with the stored density matrices. We determine the optimal entanglement swapping method that maximizes the entanglement rate of the quantum repeaters at the different entanglement swapping sets as a function of the noise of the local memory and local operations. We prove the stability properties for non-complete entanglement swapping sets, complete entanglement swapping sets and perfect entanglement swapping sets. We prove the entanglement rates for the different entanglement swapping sets and noise levels. The results can be applied to the experimental quantum Internet.' author: - 'Laszlo Gyongyosi[^1]' - 'Sandor Imre[^2]' title: 'Theory of Noise-Scaled Stability Bounds and Entanglement Rate Maximization in the Quantum Internet' --- Introduction {#sec1} ============ The quantum Internet allows legal parties to perform networking based on the fundamentals of quantum mechanics [@ref1; @ref2; @ref3; @ref11; @ref13a; @ref13; @ref18; @refn7; @puj1; @puj2; @pqkd1; @np1].
The connections in the quantum Internet are formulated by a set of quantum repeaters and the legal parties have access to large-scale quantum devices [@ref5; @ref6; @ref7; @qmemuj; @ref4] such as quantum computers [@qc1; @qc2; @qc3; @qc4; @qc5; @qc6; @qcadd1; @qcadd2; @qcadd3; @qcadd4; @shor1; @refibm]. Quantum repeaters are physical devices with quantum memory and internal procedures [@ref5; @ref6; @ref7; @ref11; @ref13a; @ref13; @ref8; @ref9; @ref10; @add1; @add2; @add3; @refqirg; @ref18; @ref19; @ref20; @ref21; @add4; @refn7; @refn5; @refn3; @sat; @telep; @refn1; @refn2; @refn4; @refn6]. An aim of the quantum repeaters is to generate the entangled network structure of the quantum Internet via entanglement distribution [@ref23; @ref24; @ref25; @ref26; @ref27; @nadd1; @nadd2; @nadd3; @nadd4; @nadd5; @nadd6; @nadd7; @kris1; @kris2]. The entangled network structure can then serve as the core network of a global-scale quantum communication network with unlimited distances (due to the attributes of the entanglement distribution procedure). Quantum repeaters share entangled states over shorter distances; the distance can be extended by the entanglement swapping operation in the quantum repeaters [@refn7; @ref1; @ref5; @ref6; @ref7; @ref13; @ref13a]. The swapping operation takes an incoming density matrix and an outgoing density matrix; both density matrices are stored in the local quantum memory of the quantum repeater [@ref45; @ref46; @ref47; @ref48; @ref49; @ref50; @ref51; @ref52; @ref53; @ref54; @ref55; @ref56; @ref57; @ref58; @ref60; @ref61; @ref62]. The incoming density matrix is half of an entangled state such that the other half is stored in the distant source node, while the outgoing density matrix is half of an entangled state such that the other half is stored in the distant target node. 
The entanglement swapping operation, applied on the incoming and outgoing density matrices in a particular quantum repeater, entangles the distant source and target quantum nodes. Crucial problems here are the size and delay bounds connected to the local quantum memory of a quantum repeater and the optimization of the swapping procedure such that the entanglement rate of the quantum repeater (outgoing entanglement throughput measured in entangled density matrices per a preset time unit) is maximal. These questions lead us to the necessity of strictly defining the fundamental stability and performance criteria [@ref36; @ref37; @ref38; @ref39; @ref40; @ref41; @ref42; @ref43; @ref44] of quantum repeaters in the quantum Internet. Here, a theoretical framework of noise-scaled stability analysis and entanglement rate maximization is defined for the quantum Internet. By definition, the stability of a quantum repeater can be weak or strong. The strong stability implies weak stability, by some fundamentals of queueing theory [@refL1; @refL2; @refL3; @refL4; @refD1]. Weak stability of a quantum repeater entails that all incoming density matrices can be swapped with a target density matrix. Strong stability of a quantum repeater further guarantees the boundedness of the number of stored density matrices in the local quantum memory. The defined system model of a quantum repeater assumes that the incoming density matrices are stored in the local quantum memory of the quantum repeater. The stored density matrices formulate the set of incoming density matrices (input set). The quantum memory also consists of a separate set for the outgoing density matrices (output set). Without loss of generality, the cardinality of the input set (number of stored density matrices) is higher than the cardinality of the output set.
Specifically, the cardinality of the input set is determined by the entanglement throughput of the input connections, while the cardinality of the output set equals the number of output connections. Therefore, if, in a given swapping period, the number of incoming density matrices exceeds the cardinality of the output set, then several incoming density matrices must be stored in the input set (Note: The logical model of the storage mechanisms of entanglement swapping in a quantum repeater is therefore analogous to the logical model of an input-queued switch architecture [@refL1; @refL2; @refL3].). The aim of entanglement swapping is to select the density matrices from the input and output sets, such that the outgoing entanglement rate of the quantum repeater is maximized; this also entails the boundedness of delays. The maximization procedure characterizes the problem of optimal entanglement swapping in the quantum repeaters. Finding the optimal entanglement swapping means determining the entanglement swapping between the incoming and outgoing density matrices that maximizes the outgoing entanglement rate of the quantum repeaters. The problem of entanglement rate maximization must be solved for a particular noise level in the quantum repeater and with the presence of various entanglement swapping sets. The noise level in the proposed model is analogous to the lost density matrices in the quantum repeater due to imperfections in the local operations and errors in the quantum memory units. The entanglement swapping sets are logical sets that represent the actual state of the quantum memory in the quantum repeater. The entanglement swapping sets are formulated by the set of received density matrices stored in the local quantum memory and the set of outgoing density matrices, which are also stored in the local quantum memory.
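The input-queued analogy noted above can be illustrated with a toy discrete-time simulation. Everything here is our own simplification for intuition (binomial arrivals, i.i.d. per-period memory loss), not the paper's Lyapunov-based framework; it merely shows how the outgoing rate saturates at the output-set cardinality and how memory noise drains the input set:

```python
import random

def simulate_repeater(arrival_rate, n_outputs, loss_prob, periods, seed=0):
    """Toy model of the input/output-set storage of a quantum repeater.

    Per swapping period: new incoming density matrices join the input set,
    at most n_outputs swaps are performed (one per output-set slot), and
    each matrix left in memory is lost with probability loss_prob
    (modelling noise in local memory and operations).
    Returns the average outgoing entanglement rate (swaps per period).
    """
    rng = random.Random(seed)
    stored = 0  # cardinality of the input set
    swaps = 0
    for _ in range(periods):
        # arrivals this period, roughly Poisson(arrival_rate)
        stored += sum(rng.random() < arrival_rate / 10.0 for _ in range(10))
        done = min(stored, n_outputs)  # swap as many pairs as output slots allow
        swaps += done
        stored -= done
        # noise-induced loss of matrices held over in quantum memory
        stored = sum(rng.random() > loss_prob for _ in range(stored))
    return swaps / periods

rate = simulate_repeater(arrival_rate=3.0, n_outputs=2, loss_prob=0.2, periods=2000)
```

In this toy model the rate can never exceed `n_outputs`, matching the intuition that the output-set cardinality caps the outgoing entanglement throughput however fast entangled pairs arrive.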
Each incoming and outgoing density matrix represents half of an entangled system, such that the other half of an incoming density matrix is stored in the distant source quantum repeater, while the other half of an outgoing density matrix is stored in the distant target quantum repeater. The aim of determining the optimal entanglement swapping method is to apply the local entanglement swapping operation on the set of incoming and outgoing density matrices such that the outgoing entanglement rate of the quantum repeater is maximized at a particular noise level. As we prove, the entanglement rate maximization procedure depends on the type of entanglement swapping sets formulated by the stored density matrices in the quantum memory. We define the logical types of the entanglement swapping sets and characterize the main attributes of the swapping sets. We present the efficiency of the entanglement swapping procedure as a function of the local noise and its impacts on the entanglement rate. We prove that the entanglement swapping sets can be defined as a function of the noise, which allows us to define noise-scaled entanglement swapping and noise-scaled entanglement rate maximization. The proposed theoretical framework utilizes the fundamentals of queueing theory, such as the Lyapunov methodology [@refL1], which is an analytical tool used to assess the performance of queueing systems [@refL1; @refL2; @refL3; @refL4; @refD1; @refs3], and defines a fusion of queueing theory with quantum Shannon theory [@ref4; @ref22; @ref28; @ref29; @ref30; @ref32; @ref33; @ref34; @ref35] and the theory of quantum Internet. The novel contributions of our manuscript are as follows: 1. We define a theoretical framework of noise-scaled entanglement rate maximization for the quantum Internet. 2. We determine the optimal entanglement swapping method that maximizes the entanglement rate of a quantum repeater at the different entanglement swapping sets as
--- author: - 'Mrinal Kumar[^1]' - 'Shubhangi Saraf[^2]' bibliography: - 'refs.bib' title: Superpolynomial lower bounds for general homogeneous depth 4 arithmetic circuits --- Introduction ============ Proving lower bounds for explicit polynomials is one of the most important open problems in the area of algebraic complexity theory. Valiant [@Valiant79] defined the classes $\VP$ and $\VNP$ as the algebraic analog of the classes $\P$ and $\NP$, and showed that proving superpolynomial lower bounds for the Permanent would suffice in separating $\VP$ from $\VNP$. Despite the amount of attention received by the problem, we still do not know any superpolynomial (or even [*quadratic*]{}) lower bounds for general arithmetic circuits. This absence of progress on the general problem has led to a lot of attention on the problem of proving lower bounds for restricted classes of arithmetic circuits. The hope is that an understanding of restricted classes might lead to a better understanding of the nature of the more general problem, and the techniques developed in this process could possibly be adapted to understand general circuits better. Among the many restricted classes of arithmetic circuits that have been studied with this motivation, [*bounded depth*]{} circuits have received a lot of attention. In a striking result, Valiant et al [@VSBR83] showed that any $n$ variate polynomial of degree $\text{poly}(n)$ which can be computed by a polynomial sized arithmetic circuit of arbitrary depth can also be computed by an arithmetic circuit of depth $O(\log^2 n)$ and size poly$(n)$. Hence, proving superpolynomial lower bounds for circuits of depth $\log^2 n$ is as hard as proving lower bounds for general arithmetic circuits. In a series of recent works, Agrawal-Vinay [@AV08], Koiran [@koiran] and Tavenas [@Tavenas13] showed that the depth reduction techniques of Valiant et al [@VSBR83] can in fact be extended much further.
They essentially showed that in order to prove superpolynomial lower bounds for general arithmetic circuits, it suffices to prove strong enough lower bounds for just [*homogeneous depth 4*]{} circuits. In particular, to separate $\VNP$ from $\VP$, it would suffice to focus our attention on proving strong enough lower bounds for homogeneous depth 4 circuits. The first superpolynomial lower bounds for homogeneous circuits of depth 3 were proved by Nisan and Wigderson [@NW95]. Their main technical tool was the use of the [*dimension of partial derivatives*]{} of the underlying polynomials as a complexity measure. For many years thereafter, progress on the question of improved lower bounds stalled. In a recent breakthrough result on this problem, Gupta, Kamath, Kayal and Saptharishi [@GKKS12] proved the first superpolynomial ($2^{\Omega(\sqrt n)}$) lower bounds for homogeneous depth 4 circuits when the fan-in of the product gates at the bottom level is bounded (by $\sqrt n$). This result was all the more remarkable in light of the results by Koiran [@koiran] and Tavenas [@Tavenas13] which showed that $2^{\omega(\sqrt n\log n)}$ lower bounds for this model would suffice in separating $\VP$ from $\VNP$. The results of Gupta et al were further improved upon by Kayal, Saha and Saptharishi [@KSS13] who showed $2^{\Omega(\sqrt n\log n)}$ lower bounds for the model of homogeneous depth 4 circuits when the fan-in of the product gates at the bottom level is bounded (by $\sqrt n$). Thus even a slight asymptotic improvement in the exponent of either of these bounds would imply lower bounds for general arithmetic circuits! The main tool used in both the papers [@GKKS12] and [@KSS13] was the notion of the dimension of [*shifted partial derivatives*]{} as a complexity measure, a refinement of the Nisan-Wigderson complexity measure of dimension of partial derivatives.
In spite of all this exciting progress on homogeneous depth 4 circuits with bounded bottom fan-in (which suggests that we might be within reach of lower bounds for much more general classes of circuits), these results give almost no nontrivial (not even superlinear) lower bounds for general homogeneous depth 4 circuits (with no bound on the bottom fan-in). Indeed, the only lower bounds we know for general homogeneous depth 4 circuits are the slightly superlinear lower bounds by Raz using the notion of elusive functions [@Raz10b]. Thus nontrivial lower bounds for the class of general depth 4 homogeneous circuits seem like a natural and basic question left open by these works, and strong enough lower bounds for this model seem to be an important barrier to overcome before proving lower bounds for more general classes of circuits. In this direction, building upon the work in [@GKKS12; @KSS13], Kumar and Saraf [@KS-depth4; @KS-formula] proved superpolynomial lower bounds for depth 4 circuits with unbounded bottom fan-in but [*bounded top fan-in*]{}. For the case of [*multilinear*]{} depth 4 circuits, superpolynomial lower bounds were first proved by Raz and Yehudayoff [@RY08b]. These lower bounds were recently improved in a paper by Fournier, Limaye, Malod and Srinivasan [@FLMS13]. The main technical tool in the work of Fournier et al was the use of [*random restrictions*]{} before applying shifted partial derivatives as a complexity measure. By setting a large collection of variables to zero at random, all the product gates with high bottom fan-in get set to zero. The resulting circuit thus has bounded bottom fan-in, and the known techniques of shifted partial derivatives can then be applied.
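The restriction step described above admits a very short sketch. The helper names below are our own: in the multilinear setting a bottom product gate is just a set of distinct variables, it survives a restriction only if none of its variables is zeroed, and so a gate of support $s$ survives with probability $(1-p)^s$ when each variable is zeroed independently with probability $p$:

```python
def restrict(monomials, zeroed):
    """Apply a restriction: a multilinear monomial (a set of variable
    indices) vanishes as soon as it contains a zeroed variable."""
    return [m for m in monomials if not (m & zeroed)]

def expected_survivors(supports, p):
    """Expected number of surviving monomials when each variable is
    zeroed independently with probability p: sum of (1 - p)**s."""
    return sum((1 - p) ** s for s in supports)
```

A gate reading 40 distinct variables survives a $p = 1/2$ restriction with probability $2^{-40}$, so high-support gates are eliminated with overwhelming probability while low-support gates often remain.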
This idea of random restrictions crucially uses the multilinearity of the circuits: in multilinear circuits, high bottom fan-in means [*many*]{} distinct variables feeding into a gate, and thus if a large collection of variables is set to zero at random, then with high probability that gate is also set to zero. [**Our Results:** ]{} In this paper, we prove the first superpolynomial lower bounds for general homogeneous depth 4 circuits with no restriction on the fan-in, either top or bottom. The main ingredient in our proof is a new complexity measure of [*bounded support*]{} shifted partial derivatives. This measure allows us to prove exponential lower bounds for homogeneous depth 4 circuits in which all the monomials computed at the bottom layer involve only a few variables (but possibly have large degree/fan-in). This exponential lower bound, combined with a careful “random restriction” procedure that transforms general homogeneous depth 4 circuits into this form, gives us our final result. We now state our results formally. Our main theorem is the following.  \[thm:main\] There is an explicit family of homogeneous polynomials of degree $n$ in $n^2$ variables in $\VNP$ which requires homogeneous $\spsp$ circuits of size $n^{\Omega(\log\log n)}$ to compute it. We prove our lower bound for the family of Nisan-Wigderson polynomials $NW_d$, which is based upon the idea of Nisan-Wigderson designs. We give the formal definition in Section \[sec:prelims\]. As a first step in the proof of Theorem \[thm:main\], we prove an exponential lower bound on the top fan-in of any homogeneous $\spsp$ circuit where every product gate at the bottom level has at most $O(\log n)$ distinct variables feeding into it. Let homogeneous $\spsp^{\{s\}}$ circuits denote the class of homogeneous $\spsp$ circuits where every product gate at the bottom level has at most $s$ distinct variables feeding into it (i.e., has support at most $s$). 
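The design underlying $NW_d$ can be made concrete. The sketch below is our own toy version, assuming the standard design-based construction (the paper's formal definition is in Section \[sec:prelims\]): one multilinear monomial $\prod_i x_{i, f(i)}$ per univariate polynomial $f$ of degree $< k$ over $\mathbb{F}_q$, so that any two distinct monomials share at most $k-1$ variables, since two distinct such polynomials agree on at most $k-1$ points.

```python
from itertools import product

def nw_monomials(q, k):
    """Toy Nisan-Wigderson design over the prime field F_q: one monomial
    per univariate polynomial f of degree < k (given by its k coefficients),
    represented as a frozenset of variable indices (i, f(i)) for i in [q]."""
    monos = []
    for coeffs in product(range(q), repeat=k):
        mono = frozenset(
            (i, sum(c * pow(i, e, q) for e, c in enumerate(coeffs)) % q)
            for i in range(q)
        )
        monos.append(mono)
    return monos
```

For $q = 5$ and $k = 2$ this produces $q^k = 25$ distinct monomials whose pairwise support intersections have size at most $k - 1 = 1$.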
\[thm:main2\] There exists a constant $\beta > 0$, and an explicit family of homogeneous polynomials of degree $n$ in $n^2$ variables in $\VNP$, such that any homogeneous $\spsp^{\{\beta\log n\}}$ circuit computing it must have top fan-in at least $2^{\Omega(n)}$. Observe that since homogeneous $\spsp^{\{s\}}$ circuits are a more general class of circuits than homogeneous $\spsp$ circuits with bottom fan-in at most $s$, our result strengthens the results of Gupta et al and Kayal et al [@GKKS12; @KSS13] when $s = O(\log n)$. We prove Theorem \[thm:main\] by applying carefully chosen random restrictions to both the polynomial family and to any arbitrary homogeneous $\spsp$ circuit, and showing that with high probability the circuit simplifies into a homogeneous $\spsp$ circuit with bounded bottom support while the polynomial (even after the restriction) is still rich enough for Theorem \[thm:main2\] to hold. Our results hold over every field. [**Organization of the paper:**]{} The rest of the paper is organized as follows. In Section \[sec:overview\], we provide a high level overview of the proof. In Section \[sec:prelims\], we introduce some notations and preliminary notions used in the paper. In Section \[sec:small-support-lb\], we give a proof of Theorem \[thm:main2\]. In Section \[sec:rand-res\], we describe the random restriction procedure and analyze its effect on the circuit and the polynomial. In Section \[sec
--- abstract: 'Second order cone programs (SOCPs) are a class of structured convex optimization problems that generalize linear programs. We present a quantum algorithm for SOCPs based on a quantum variant of the interior point method. Our algorithm outputs a classical solution to the SOCP with objective value $\epsilon$ close to the optimal in time $\widetilde{O} \left( n\sqrt{r} \frac{\zeta \kappa}{\delta^2} \log \left(1/\epsilon\right) \right)$ where $r$ is the rank and $n$ the dimension of the SOCP, $\delta$ bounds the distance from strict feasibility for the intermediate solutions, $\zeta$ is a parameter bounded by $\sqrt{n}$, and $\kappa$ is an upper bound on the condition number of matrices arising in the classical interior point method for SOCPs. We present applications to the support vector machine (SVM) problem in machine learning, which reduces to an SOCP. We provide experimental evidence that the quantum algorithm achieves an asymptotic speedup over classical SVM algorithms, with a running time of $\widetilde{O}(n^{2.557})$ for random SVM instances. The best known classical algorithms for such instances have complexity $\widetilde{O} \left( n^{\omega+0.5}\log(1/\epsilon) \right)$, where $\omega$ is the matrix multiplication exponent, which has a theoretical value of around $2.373$ but is closer to $3$ in practice.' author: - Iordanis Kerenidis - Anupam Prakash - Dániel Szilágyi bibliography: - 'bibliography.bib' title: 'Quantum algorithms for Second-Order Cone Programming and Support Vector Machines' --- Introduction ============ Convex optimization is one of the central areas of study in computer science and mathematical optimization. The reason for the great importance of convex optimization is twofold. Firstly, starting with the seminal works of Khachiyan [@khachiyan1980polynomial] and Karmarkar [@karmarkar1984new], efficient algorithms have been developed for a large family of convex optimization problems over the last few decades.
Secondly, convex optimization has many real world applications and many optimization problems that arise in practice can be reduced to convex optimization [@boyd2004convex]. There are three main classes of structured convex optimization problems: linear programs (LP), semidefinite programs (SDP), and second-order conic programs (SOCP). The fastest (classical) algorithms for these problems belong to the family of interior-point methods (IPM). Interior point methods are iterative algorithms where the main computation in each step is the solution of a system of linear equations whose size depends on the dimension of the optimization problem. The size of structured optimization problems that can be solved in practice is therefore limited by the efficiency of linear system solvers – on a single computer, most open-source and commercial solvers can handle dense problems with up to tens of thousands of constraints and variables, or sparse problems with the same number of nonzero entries [@mittelmann2019lp; @mittelmann2019socp]. In recent years, there has been a tremendous interest in quantum linear algebra algorithms following the breakthrough algorithm of Harrow, Hassidim and Lloyd [@harrow2009quantum]. Quantum computers are known to offer significant, even exponential speedups [@harrow2009quantum] for problems related to linear algebra and machine learning, including applications to principal components analysis [@lloyd2013quantum], clustering [@lloyd2013quantum; @kerenidis2018q], classification [@li2019sublinear; @kerenidis2018sfa], least squares regression [@kerenidis2017quantum; @chakraborty2018power] and recommendation systems [@kerenidis2016quantum]. This raises the natural question of whether quantum linear algebra can be used to speed up interior point algorithms for convex optimization. 
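As noted above, the dominant cost in each interior point iteration is the solution of a dense system of linear equations. For concreteness, a minimal pure-Python sketch of such a solver (Gaussian elimination with partial pivoting; production interior point codes use optimized LAPACK routines instead):

```python
def solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting.
    A is a list of rows; b is the right-hand side vector."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for col in range(n):
        # swap in the row with the largest pivot for numerical stability
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    # back substitution
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x
```

Classical elimination costs $O(n^3)$ (or $O(n^\omega)$ with fast matrix multiplication) per iteration; it is exactly this step that quantum linear algebra algorithms aim to accelerate.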
The recent work [@kerenidis2018quantum] proposed a quantum interior point method for LPs and SDPs and developed a framework in which the classical linear system solvers in interior point methods can be replaced by quantum linear algebra algorithms. In this work, we extend the results of [@kerenidis2018quantum] and develop a quantum interior point method for second order cone programs (SOCPs). Second order cone programs are a family of structured optimization problems with complexity intermediate between LPs and SDPs. They offer an algorithmic advantage over SDPs, as the linear systems that arise in the interior point method for SOCPs are of smaller size than those for general SDPs. In fact, the classical complexity of SOCP algorithms is close to that of LP algorithms. Our results indicate that this remains true in the quantum setting, that is, the quantum linear systems arising in the interior point method for SOCPs are easier for quantum algorithms than the linear systems for general SDPs considered in [@kerenidis2018quantum]. SOCPs are also interesting from a theoretical perspective, as the interior point methods for LPs, SOCPs and SDPs can be analyzed in a unified manner using the machinery of Euclidean Jordan algebras [@faybusovich1997linear]. An important contribution of our work is to present a similar unified analysis for quantum interior point methods, which is equivalent to analyzing the classical interior point method when the linear systems are only solved approximately, with an $\ell_{2}$ norm error. Our analysis is not purely Jordan-algebraic like that of [@faybusovich1997linear]; however, it can still be adapted to analyze quantum interior point methods for LPs and SDPs. The main advantage of SOCPs from the applications perspective is their high expressive power. Many problems that are traditionally formulated as SDPs can in fact be reduced to SOCPs; an extensive list of such problems can be found in [@alizadeh2003second].
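Since all the variables in an SOCP live in products of second-order (Lorentz) cones $\{(x_0; \tilde{x}) : \|\tilde{x}\| \le x_0\}$, a useful concrete primitive is Euclidean projection onto one such cone, which has a well-known closed form. This is an illustrative sketch of the geometry, not a routine taken from the paper:

```python
import math

def project_soc(x0, xt):
    """Euclidean projection of (x0; xt) onto the second-order cone
    { (x0; xt) : ||xt|| <= x0 }, using the standard closed form."""
    nrm = math.sqrt(sum(v * v for v in xt))
    if nrm <= x0:                     # already inside the cone
        return x0, list(xt)
    if nrm <= -x0:                    # inside the polar cone: projects to 0
        return 0.0, [0.0] * len(xt)
    a = (x0 + nrm) / 2                # otherwise project to the boundary
    return a, [a * v / nrm for v in xt]
```

For instance, the point $(0; (1, 0))$, which sits outside the cone, projects to the boundary point $(1/2; (1/2, 0))$.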
An important special case of SOCPs is the convex quadratic programming (QP) problem, which in turn includes the support vector machine (SVM) problem in machine learning [@cortes1995support] and the portfolio optimization problem in computational finance [@markowitz1952portfolio; @boyd2004convex] as special cases. The SVM and portfolio optimization problems are important in practice and have recently been proposed as applications for quantum computers [@kerenidis2019quantum; @rebentrost2018quantum; @rebentrost2014quantum]. However, these applications consider the special cases of the $\ell_{2}$- (or least-squares-) SVM [@suykens1999least] and the unconstrained portfolio optimization problem, which reduce to a single system of linear equations. The $\ell_{1}$-regularized SVM algorithm is widely used in machine learning, as it finds a robust classifier that maximizes the margin. Similarly, constrained portfolio optimization is widely applicable in computational finance, as it is able to find optimal portfolios subject to complex budget constraints. The ($\ell_{1}$-)SVM and constrained portfolio optimization problems reduce to SOCPs; our algorithm can therefore be regarded as the first specialized quantum algorithm for these problems. We provide experimental evidence to demonstrate that our quantum SVM algorithm indeed achieves an asymptotic speedup. For suitably constructed 'random' SVM instances, our experiments indicate that the quantum SVM algorithm has running time $\widetilde{O}(n^{2.557})$, which is an asymptotic speedup over classical SVM algorithms, which have complexity $\Omega(n^{3})$ for such instances. The significance of these experimental results is two-fold.
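To make the soft-margin objective concrete: the paper's approach solves the $\ell_1$-SVM as an SOCP via an interior point method, whereas the sketch below (a toy baseline of our own, not the paper's algorithm) minimizes the same hinge-loss objective $\frac{\lambda}{2}\|w\|^2 + \sum_i \max(0, 1 - y_i(w \cdot x_i + b))$ by plain subgradient descent:

```python
def train_svm(data, lam=0.01, eta=0.1, epochs=200):
    """Toy primal soft-margin SVM trained by hinge-loss subgradient
    descent. data is a list of (feature tuple, label in {-1, +1})."""
    dim = len(data[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in data:
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            if margin < 1:
                # point inside the margin: hinge subgradient lam*w - y*x
                w = [wi - eta * (lam * wi - y * xi) for wi, xi in zip(w, x)]
                b += eta * y
            else:
                # only the regularizer pulls on w
                w = [wi - eta * lam * wi for wi in w]
    return w, b
```

On a small linearly separable data set the learned $(w, b)$ separates the two classes; an interior point solver would instead solve the equivalent constrained formulation to high accuracy in a few tens of iterations.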
First, they demonstrate that quantum interior point methods can achieve significant speedups for optimization problems arising in practice; this was not clear a priori from the results in [@kerenidis2018q], where the running time depends on the condition number of intermediate matrices arising in the interior point method, for which it is hard to prove worst-case upper bounds. It suggests that one may hope to obtain significant polynomial speedups for other real-world optimization problems using quantum optimization methods. Second, support vector machines are one of the most well studied classifiers in machine learning; it is therefore significant to have an asymptotically faster quantum SVM algorithm. Moreover, our SVM algorithm has the same inputs and outputs as classical SVMs and is therefore directly comparable to them. This can lead to further developments at the interface of quantum computing and machine learning, for example in quantum kernel methods. Our Results ----------- In order to state our results more precisely, we first introduce second order cone programs (SOCPs). An SOCP is an optimization problem over a product of Lorentz cones ${\mathcal{L}}^k$, which are defined as $${\mathcal{L}}^k = \left\lbrace\left. {\bm{x}} = (x_0; {\widetilde{{\bm{x}}}}) \in {\mathbb{R}}^{k} \;\right\rvert\; \norm{{\widetilde{{\bm{x}}}}} \leq x_0 \right\rbrace.$$ The SOCP in the standard form is the following optimization problem: $$\begin{array}{ll} \min\limits_{{\bm{x}}_1, \dots, {\bm{x}}_r} & {\bm{c}}_1^T {\bm{x}}_1 + \cdots + {\bm{c}}_r^T {\bm{x}}_r\\ \text{s.t.}& A^{(1)} {\bm{x}}_1 + \cdots + A^{(r)}{\bm{x}}_r = {\bm{b}} \\ & {\bm{x}}_i \in {\mathcal{L}}^{n_i},\; \forall i \in [r], \end{array} \label{prob:SOCP primal verbose}$$ The inputs for the problem are the $r$ constraint matrices $A^{(i)} \in {\mathbb{R}}^{m \times n_i}, i \in [r]$, vectors ${\bm{b}} \in {\mathbb{R}}^m$ and ${\bm