--- author: - Dafei Jin - Yang Xia - Thomas Christensen - Siqi Wang - King Yan Fong - Matthew Freeman - 'Geoffrey C. Gardner' - Saeed Fallahi - Qing Hu - Yuan Wang - Lloyd Engel - 'Michael J. Manfra' - 'Nicolas X. Fang' - Xiang Zhang bibliography: - 'References.bib' title: 'Magnetically-defined topological edge plasmons in edgeless electron gas' --- [**Topological materials bear gapped excitations in bulk yet protected gapless excitations at boundaries [@Qi2011RMP; @Lu2014NatPhoton]. Magnetoplasmons (MPs), as high-frequency density excitations of a two-dimensional electron gas (2DEG) in a perpendicular magnetic field [@Ando1982RMP; @Kushwaha2001SSR], embody a prototype of band topology for bosons [@Jin:2016; @Jin2017PRL]. The time-reversal-breaking magnetic field opens a topological gap for bulk MPs up to the cyclotron frequency [@Zudov2003PRL; @Gao2016NatCommun]; topologically-protected edge magnetoplasmons (EMPs) bridge the bulk gap and propagate unidirectionally along the system’s boundaries [@Mast1985PRL; @Glattli1985PRL; @Fetter:1986; @Volkov1988JETP]. However, all the EMPs known to date adhere to physical edges where the electron density terminates abruptly [@Ashoori1992PRB; @Balev1997PRB; @Kumada2014PRL]. This restriction has made device applications extremely difficult. Here we demonstrate a new class of topological edge plasmons – domain-boundary magnetoplasmons (DBMPs) – within a uniform edgeless 2DEG. Such DBMPs arise at the domain boundaries of an engineered sign-changing magnetic field and are protected by the difference of gap Chern numbers ($\pm1$) across the magnetic domains. They propagate unidirectionally along the domain boundaries and are immune to domain defects [@Jin:2016]. Moreover, they exhibit wide tunability in the microwave frequency range under an applied magnetic field or gate voltage. Our study opens a new direction to realize high-speed reconfigurable topological devices [@Mahoney2017PRX; @Fang2012NatPhoton; @Bahari2017Science].** ]{} In this work, we present the first experimental observation of a new class of topological edge plasmons, domain-boundary magnetoplasmons (DBMPs), at microwave frequencies in a high-mobility GaAs/AlGaAs heterojunction. In contrast to the conventional wisdom, in which edge magnetoplasmons (EMPs) must rely on a space-varying electron density $n({\mathbf{r}})$, in our scenario the DBMPs are defined by a space-varying magnetic field ${\mathbf{B}}({\mathbf{r}})=B({\mathbf{r}}){\hat{\mathbf{e}}_{z}}$ embedded into a uniform 2DEG [@Ye1995PRL; @Nogaret2000PRL; @Reijniers2000JPCM; @Yasuda2017Science]. A custom-shaped NdFeB strong permanent magnet, placed immediately above the heterojunction, produces a sign-changing magnetic field around $0.15$ T in magnitude, sufficient to gap bulk MPs in each magnetic domain. The high electron mobility of $10^7$ cm$^2$ V$^{-1}$ s$^{-1}$ in this system affords an ultra-long relaxation time of hundreds of picoseconds and an ultra-low damping rate of only a few gigahertz, superior to any other existing 2DEG system [@Mast1985PRL; @Bolotin2008PRL; @Ohtomo2004Nature]. By measuring microwave resonant spectra, we clearly verify the existence and nonreciprocal nature of DBMPs. Their excitation frequencies display a unique dependence on both an applied magnetic field and gate voltage, differing substantially from the conventional EMPs in several intriguing aspects. Our theoretical prediction and experimental observation show excellent mutual agreement.
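The quoted transport figures can be cross-checked with a simple Drude estimate, $\tau = \mu m_*/e$. The sketch below is our own consistency check, not code from the paper; it assumes the standard GaAs effective mass $m_* = 0.067\,m_e$, which is not stated in this excerpt.

```python
# Hedged consistency check (ours, not the authors'): Drude relation
# tau = mu * m_star / e for the quoted mobility of 10^7 cm^2 V^-1 s^-1,
# assuming the standard GaAs effective mass m_star = 0.067 m_e.
e, m_e = 1.602e-19, 9.109e-31          # SI units
m_star = 0.067 * m_e                   # assumption: GaAs conduction band
mu = 1.0e7 * 1e-4                      # 10^7 cm^2/(V s) -> m^2/(V s)

tau = mu * m_star / e                  # Drude relaxation time
print(f"tau = {tau * 1e12:.0f} ps")        # ~380 ps: "hundreds of picoseconds"
print(f"1/tau = {1 / tau / 1e9:.1f} GHz")  # ~2.6 GHz: "a few gigahertz" damping
```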
[**System design**]{} Figure \[fig:device\] illustrates the layout of our magnetoplasmonic device. Conceptually (Fig. \[fig:device\]a), a 2DEG in a GaAs/AlGaAs heterojunction (see Methods) is cladded above and below by a fused silica (glass) spacer and a GaAs substrate, respectively, of thicknesses ${d_{{\scriptscriptstyle}\text{A}}}= 100~\mu$m and ${d_{{\scriptscriptstyle}\text{B}}}= 150~\mu$m, and permittivities ${\epsilon_{{{\scriptscriptstyle}\text{A}}}}=3.8$ and ${\epsilon_{{{\scriptscriptstyle}\text{B}}}}=12.8$. This dielectric-2DEG-dielectric structure is enclosed in a metallic cavity along $z$, terminating at the spacer’s top and the substrate’s bottom. A holed NdFeB permanent magnet, atop the upper cavity wall, projects a circular magnetic field $\mathbf{B}_{\text{m}}({\mathbf{r}}) = B_{\text{m}}(r){\hat{\mathbf{e}}_{z}}$ onto the 2DEG. The sign of $B_{\text{m}}(r)$ changes abruptly across the projected circle of the hole, of radius $a = 0.75~$mm, producing adjacent oppositely-signed magnetic domains (see Methods). The entire 2DEG is additionally exposed to a tunable homogeneous magnetic field $\mathbf{B}_0({\mathbf{r}}) = B_0{\hat{\mathbf{e}}_{z}}$ from a superconducting coil, allowing an overall shift of the magnetic field profile. ![image](AllComponents.pdf) In practice (Fig. \[fig:device\]b), the heterojunction sample has a 12 mm $\times$ 6 mm rectangular footprint. A 9 mm $\times$ 3 mm Hall bar is fabricated atop it, allowing *in situ* measurements and control of the 2DEG electron concentration $n_0$. The fused silica spacer is topped by a 100 nm thick e-beam evaporated Cr-coating, serving simultaneously as upper cavity wall and gate electrode [@Hatke2015NatCommun; @Mi2017arXiv]. A gate voltage of $V_\text{g} \sim \pm 100~$V can be applied across the Cr-coating–Hall bar junction to tune the electron concentration. The sample-spacer-magnet assembly is glued with poly(methyl methacrylate) (PMMA) onto a customized Cu printed circuit board (PCB) with a 5 $\mu$m Ni and 200 nm Au surface finish. The PCB hosts a coplanar waveguide (CPW) connecting RF Ports 1 and 2 with mini-SMP connectors [@Hatke2015NatCommun]. By design, the CPW has a 50 $\Omega$ impedance with the sample-magnet assembly loaded. The CPW signal line is aligned tangentially to the projected circle from the hole of the magnet so as to maximize the microwave-DBMP coupling. [**Theoretical prediction**]{} The main physics of the MP system can be captured by the continuity equation and a constitutive equation containing the longitudinal Coulomb force and transverse Lorentz force: \[eqs:governing\] $$\begin{aligned} &\omega\rho({\mathbf{r}},\omega) = -{{\rm i}}\nabla\cdot {\mathbf{j}}({\mathbf{r}},\omega),\\ &\omega{\mathbf{j}}({\mathbf{r}},\omega) = -{{\rm i}}\frac{e^2}{m_*}n({\mathbf{r}})\nabla\Phi({\mathbf{r}},\omega) - {\omega_{\text{c}}}({\mathbf{r}}){\mathbf{j}}({\mathbf{r}},\omega)\times\hat{\mathbf{e}}_{z}. \end{aligned}$$ Here, ${\mathbf{j}}$ and $\rho$ are the surface current and charge densities, evaluated at frequencies $\omega$ and in-plane positions ${\mathbf{r}}$. $\Phi({\mathbf{r}},\omega) = \int V({\mathbf{r}}-{\mathbf{r}}')\rho({\mathbf{r}}',\omega)\,{{\rm d}}^2{{\mathbf{r}}'}$ is the self-consistent potential due to the (screened) Coulomb interaction $V$. ${\omega_{\text{c}}}({\mathbf{r}})=eB({\mathbf{r}})/(m_*c)$ is a space-varying cyclotron frequency, with $m_*$ the electron effective mass.
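For scale, the bulk gap in each domain extends up to the local cyclotron frequency. The following sketch is our own estimate of the gap frequency implied by the $\sim 0.15$ T domain field, again assuming $m_* = 0.067\,m_e$ for GaAs.

```python
import numpy as np

# Our rough estimate of the bulk-gap scale: f_c = e|B| / (2 pi m_star),
# the SI form of omega_c defined in the text (assumes GaAs m_star = 0.067 m_e).
e, m_e = 1.602e-19, 9.109e-31
m_star = 0.067 * m_e
B = 0.15                                   # T, magnitude of the domain field

f_c = e * B / (2 * np.pi * m_star)
print(f"f_c ~ {f_c / 1e9:.0f} GHz")        # ~63 GHz; DBMPs disperse below this gap
```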
As elaborated below, even with a constant electron density $n({\mathbf{r}})=n_0$, topologically-protected DBMPs can reside at boundaries of sign-changing magnetic domains solely defined by the spatial profile $B({\mathbf{r}})$ and ${\omega_{\text{c}}}({\mathbf{r}})$ [@Jin:2016]. The total magnetic field, $B(r) = B_0 + B_{\text{m}}(r)$, is the sum of a tunable, uniform field $B_0$ from the superconducting coil, and a fixed, $r$-dependent field $B_{\text{m}}(r)$ from the holed NdFeB permanent magnet. The latter is well-approximated by a step function, $$\label{eq:Bm} B_{\text{m}}(r) \simeq {\bar{B}_{\text{m}}}+ \operatorname{sgn}(r-a){\Delta B_{\text{m}}}.$$ Here, ${\Delta B_{\text{m}}}$ contributes an equal
{ "pile_set_name": "ArXiv" }
null
--- abstract: 'The studies of hyperon production performed at COSY-11 are summarized. The results of the experiments in the reaction channels $pp \rightarrow pK^+\Lambda$, $pp \rightarrow pK^+\Sigma^0$, and $pp \rightarrow nK^+\Sigma^+$ are shown. Excitation functions from threshold up to about 90 MeV excess energies have been evaluated with high precision for the $\Lambda$ and $\Sigma^0$ production. The $\Lambda p$ and $\Sigma^0 p$ final state interactions were extracted. The $\Sigma^+$ production was measured at 13 and 60 MeV excess energies.' author: - | D. Grzonka\ for the COSY-11 collaboration title: | Summary of the COSY-11 Measurements\ of Hyperon Production --- [ address=[Institut f[ü]{}r Kernphysik, Forschungszentrum Jülich, D-52425 Jülich, Germany]{}]{} Introduction ============ The hyperon-nucleon interaction is less well known than that of the nucleon-nucleon system due to the difficulties in performing scattering experiments with the unstable hyperons. The existing YN-scattering data are rather limited [@eng66; @sec68; @ale68; @eis71], and for a better understanding of the strong interaction in the nonperturbative region of QCD an extension of the database in the strangeness sector is very important. Besides hyperon-nucleon scattering, reactions into 3-body exit channels like $NN \rightarrow NKY$ can be used to extract detailed information on the NY-subsystem. The YN interaction is only one aspect covered by these kinds of experiments, which can be separated into three stages: the initial-state interaction of the incoming nucleons, the associated strangeness production process, and the final-state interaction. Final-state interactions occur between all exit particles, and by selecting a suitable kinematic region the KN and KY interactions can also be studied. Furthermore, information about the contributing reaction mechanisms is obtained, including the excitation of nucleon resonances, which in turn bears on the structure of these resonances. Most favourable for these studies are experiments close to the reaction threshold due to the low relative momenta and therefore long interaction times between the ejectiles. In order to get a detailed understanding of these elementary interactions involving strangeness, differential cross sections as a function of spin and isospin degrees of freedom are required. A significant contribution to this kind of physics has been made by the hyperon production experiments at COSY-11. The experimental setup for strangeness production at COSY-11 ============================================================ The internal COSY-11 installation [@bra96] at COSY [@mai97] was designed for near threshold meson production studies. It used a COSY machine dipole as a magnetic spectrometer and included scintillation detectors and drift chambers to reconstruct the tracks of positively charged particles and measure their velocities in order to determine their four-momentum components with high precision. A sketch of the setup is given in fig. \[c11setup\] and for more details see [@bra96]. ![ \[c11setup\] The COSY-11 detection system installed at a COSY machine dipole with the detector components relevant for the hyperon production studies. The left side shows a 3-d view of the arrangement and on the right side is a sketch of the detector components to illustrate the principle of operation. 
The S8 scintillator was only used for the $pp \rightarrow n K^+ \Sigma^+$ reaction.](c11detsw.ps "fig:"){width=".6\textwidth"} ![](c11det_sketch.eps "fig:"){width=".4\textwidth"} In the case of hyperon production via the reaction channels $pp \rightarrow pK^+ \Lambda / \Sigma^0$, the proton velocities are measured with the scintillator hodoscopes S1 and S3, but for the kaon the $\sim$ 9 m flight path to S3 is too long: most of the kaons would decay before reaching S3. Instead, the flight path from the target to S1 is used, where the start time is calculated from the measured proton momentum. The particle identification is worse than in the proton case due to the much shorter flight path, but it is still sufficient to separate most of the pions and protons from the kaons. The hyperon four-momentum $P_{hyperon}$ is determined by the missing mass technique: $P_{\Lambda} = P_{beam} + P_{target} - P_{p} - P_{K^+}$, with the known beam $P_{beam}$ and target $P_{target}$ four-momenta and the measured proton $P_{p}$ and kaon $P_{K^+}$ four-momenta. This method results in a rather clean separation of the hyperon production events, as can be seen from fig. \[hypexp\] left for an event sample of $\Lambda$ production at 7 MeV excess energy. ![\[hypexp\]Invariant mass of the second track as a function of the missing mass for $\Lambda$ production at 7 MeV excess energy (left). Missing mass squared distribution for $\Sigma^0$ production at 7 MeV excess energy (upper right). Missing mass squared distributions for $\Sigma^+$ production at 13 and 60 MeV excess energy (lower right) with the applied background subtraction and the expected distributions of $\Sigma^+$ production from Monte Carlo.](pkl_reac_idn2.eps){width="\textwidth"} ![](mmsigma.eps "fig:"){width="\textwidth"} ![](mm_nks.eps "fig:"){width="\textwidth"} In the case of the $\Sigma^0$ production it is similar, but the background level is higher, as can be seen from fig. \[hypexp\] upper right, which shows the missing mass squared distribution in the kaon band at an excess energy of 7 MeV for $pp \rightarrow pK^+ \Sigma^0$. In parallel to the $\Sigma^0$ production, $\Lambda$ production at about 80 MeV higher excess energy is also measured. 
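A minimal numerical sketch of this missing-mass reconstruction is given below. It is our illustration, not COSY-11 analysis code; the beam momentum and the collinear, threshold-like ejectile momenta are made-up values chosen only to land near the $\Lambda$ region.

```python
import numpy as np

def fourvec(m, p3):
    """Four-momentum [E, px, py, pz] in GeV for mass m and 3-momentum p3."""
    p3 = np.asarray(p3, dtype=float)
    return np.concatenate(([np.sqrt(m**2 + p3 @ p3)], p3))

def inv_mass(P):
    """Invariant mass sqrt(E^2 - |p|^2) with metric (+,-,-,-)."""
    return np.sqrt(max(P[0]**2 - P[1:] @ P[1:], 0.0))

m_p, m_K, m_L = 0.938272, 0.493677, 1.115683          # GeV

P_beam   = fourvec(m_p, [0, 0, 2.36])   # ~7 MeV above the pK+Lambda threshold
P_target = fourvec(m_p, [0, 0, 0])      # proton at rest (internal target)
# toy "measured" ejectiles, roughly collinear threshold kinematics:
P_p = fourvec(m_p, [0, 0, 0.867])
P_K = fourvec(m_K, [0, 0, 0.456])

P_miss = P_beam + P_target - P_p - P_K
print(f"missing mass = {inv_mass(P_miss):.3f} GeV")   # ~1.12, near m_Lambda = 1.116
```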
With the addition of a neutron detector, installed for studies at a deuteron target, another hyperon channel was accessible at COSY-11, namely the $pp \rightarrow nK^+ \Sigma^+$ reaction. Here the peak-to-background ratio was less favourable, see fig. \[hypexp\] lower right, because no proton is in the exit channel to produce a precise timing signal. The neutron detector provided the time and position of the point of the first neutron interaction producing a charged ejectile, from which the neutron momentum was calculated. The absolute time calibration was performed with $\gamma$’s by selecting $pp \rightarrow pp \pi^0$ events with the $\pi^0$ decaying within the target into two $\gamma$’s, from which the event start time was calculated. For the $K^+$ time-of-flight measurement the additional S8 scintillator was used, with a distance of only 1.9 m to S1. The $\Sigma^+$, with a $c\tau$ of 2.4 cm, could not be measured directly, but its four-momentum was determined by a missing mass analysis. Further hyperon channels are not feasible at COSY-11. In principle the ($n \Lambda$) and ($n \Sigma^0$) systems could also be studied by using a deuteron target, but the additional detection of the spectator proton would reduce the efficiency drastically. Hyperon decay products could also in principle be measured, but the efficiency was extremely low. In all measurements the luminosity was determined by elastic $pp$-scattering detected in parallel to the hyperon production. For the detection of the second proton a Si-pad detector combined with a scintillator ($Si_{mon} / S4$ in fig. \[c11setup\]) was installed. For studies with a polarized beam the polarization has to be determined. Two detection systems served this aim: in addition to the COSY polarimeter, a pair of wire chambers and scintillators was installed above and below the beam close to the target to measure elastic $pp$-scattering at $\phi = 90 ^{\circ}$, which is independent of the polarisation. Experimental results ==================== When COSY-11 went into operation in 1996 no data were available for $\Lambda$ and $\Sigma$ hyperon production close to the reaction threshold. For the reaction channel $pp \rightarrow pK^+ \Lambda$, data above 300 MeV excess energy existed, mostly from bubble chamber measurements at CERN [@bal88]. On the theoretical side, parametrizations of the excitation function were on the market which differ close
{ "pile_set_name": "ArXiv" }
null
--- abstract: 'The upgrade of the DA$\Phi$NE machine layout requires a modification of the size and position of the inner focusing quadrupoles of KLOE$^2$, thus calling for the realization of two new calorimeters covering the quadrupoles area. To improve the reconstruction of $K_L\to 2\pi^0$ events with photons hitting the quadrupoles, a tile calorimeter, QCALT, with high efficiency for low energy photons (20-300 MeV), time resolution of less than 1 ns and space resolution of a few cm, is needed. We propose a tile calorimeter with a high granularity readout corresponding to about 2500 silicon photomultipliers (SiPM) of $1\times 1$ mm$^2$ area. Moreover, the low polar angle regions require the realization of a dense crystal calorimeter with very high time resolution to extend the acceptance for multiphoton events. The best candidates for this calorimeter are LYSO crystals with APD readout or PbWO$_4$ crystals with large area SiPM readout.' address: - 'Laboratori Nazionali di Frascati dell’INFN' - 'Dipartimento di Energetica Univ. Roma La Sapienza' author: - 'F.Happacher, M.Martini$^{a,}$[^1], S.Miscetti$^{a}$, I.Sarra$^{a}$' title: Tile and crystal calorimeters for the KLOE$^2$ experiment --- The KLOE$^2$ proposal ===================== In the last decade a wide experimental program has been carried out at Da$\Phi$ne[@dafne], the $e^+e^-$ collider of the Frascati National Laboratories, running at a center of mass energy of 1020 MeV, the $\Phi$ resonance mass. During the KLOE run, Da$\Phi$ne delivered a peak luminosity of 1.5$\times$10$^{32}$ cm$^{-2}$s$^{-1}$, which granted about 1 fb$^{-1}$ per year in the last data taking period. A new machine scheme has recently been proposed by the Frascati accelerator group, aiming at increasing the luminosity of the machine by up to a factor of 5. This scheme has been successfully tested at Da$\Phi$ne, and these encouraging results push for a new data taking campaign for the KLOE experiment to complete its physics program and to perform a new interesting set of measurements. The new experiment, named KLOE$^2$, expects to collect 5 fb$^{-1}$/year. We are now working to improve the performance of our detector[@kloe_all] by adding: an inner tracker, a tagger system to study $\gamma\gamma$ physics, a new small angle calorimeter and a new quadrupole calorimeter. In this paper we explain the project and the R&D for the last two items. Quadrupole tile calorimeter, QCALT ================================== In Fig.\[figkloe\] we show a section of the KLOE detector in which the old position of the focusing quadrupoles and the surrounding calorimeters QCAL [@oldqcal], with a polar angle coverage of 0.94$\,<|\cos\theta|<\,$0.99, are visible. Each calorimeter consists of 16 azimuthal sectors composed of alternating layers of 2 mm lead and 1 mm BC408 scintillator tiles, for a total thickness of $\sim$5X$_0$. The readout is done by wavelength shifter fibers (Kuraray Y11-200) and photomultipliers. The fiber arrangement allows the measurement of the longitudinal coordinate by time differences. These calorimeters are characterized by a low light response ($\sim$3 pe/mip/tile at zero distance from the photomultiplier) due to the coupling in air, to the fiber length ($\sim$2 m for each tile) and to the quantum efficiency of the photomultipliers used (standard bialkali with $\sim$20% QE). The design of the new QCAL consists of a dodecagonal structure, 1 m long, covering the region of the quadrupoles. 
The structure consists of a sampling of 5 layers of 5 mm thick scintillator plates alternated with 3.5 mm thick tungsten plates, for a total depth of 4.75 cm (5.5 X$_0$). The active part of each plane is divided into twenty tiles of 5$\times$5 cm$^2$ area with 1 mm diameter WLS fibers embedded in circular grooves. Each fiber is then optically connected to a silicon photomultiplier of 1 mm$^2$ area, SiPM, for a total of 2400 channels. We have done some R&D studies on SiPMs, fibers and tiles to choose the combination which optimizes the response of our system. Test on MPPC ------------ We have compared the characteristics of two different SiPMs produced by Hamamatsu (multi pixel photon counter, MPPC): 100 pixels (S10362-11-100U) and 400 pixels (S10362-11-050U), both with 1$\times 1$ mm$^2$ active area. To manage the signals, the electronics service of the Frascati Laboratory (SELF) has developed custom electronics composed of a 1$\times$2 cm$^2$ card, containing the pre-amplifier, and a multifunction NIM board. For these tests, we have set the pre-amplifier gain to 50. The NIM board supplies the voltage to the photodetector (Vbias) with a precision of 2 mV and a stability at the level of 0.03 per mill. A low threshold discriminator and a fanout are also present in the board. To determine the gain, we have prepared a setup based on a blue light LED and a polaroid filter to change the light intensity. We have measured the gain and the dark rate variation as a function of the applied Vbias and the temperature of the photodetector. The readout electronics was based on CAMAC, with a charge sensitivity of 0.25 pC/count and a time sensitivity of 125 ps/count. Our tests confirm the performance declared by Hamamatsu and show a significant variation of the detector gain as a function of the temperature. The 400-pixel device shows a temperature dependence of the gain which is a factor of four smaller than that of the 100-pixel one (3% versus 12%), at the price of a gain reduction of a factor of five. Tests on fibers --------------- We have studied the light response of three different 1 mm$^2$ fibers optically connected to the MPPC: - Kuraray SCSF81 (blue scintillating) - Saint Gobain BCF92 single cladding (WLS from blue to green) - Saint Gobain BCF92 multi cladding (WLS from blue to green) The test is done by firing the fiber with a Sr$^{90}$ source. The trigger is provided by a NE110 scintillator finger (1$\times$5$\times$0.5 cm$^3$) connected to a bialkali photomultiplier positioned below the fiber. As expected, a large light yield is observed for SCSF81 while the WLS fibers have a reduced response. However, the BCF92 multi cladding has a reasonable light yield, as shown in Fig.\[figsgmc\]. For this fiber we have: maximum light yield, fast response (5 ns/pe) and a long attenuation length (3.5 m). Tests on tiles -------------- The light response and time resolution of the tiles have been measured using cosmic rays. The system was prepared by connecting the fiber to the MPPC and using two NE110 fingers to trigger the signal. We have prepared two different tiles: A : 3 mm thick tile with 400 pixels MPPC, B : 5 mm thick tile with 100 pixels MPPC. Using the ADC distributions we find: 14 pe/mip for tile “A” and 26 pe/mip for tile “B” (see Fig.\[figtile\]). These results are comparable, taking into account the thickness ratio between the tiles and the photon detection efficiency of the two detectors (40% for 400 pixels and 45% for 100 pixels). Correcting for the dependence of the timing on pulse height, we find a preliminary time resolution of 1000 ps (750 ps) for tile “A” (“B”). 
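The stated compatibility of the two yields can be verified with a one-line scaling; the sketch below is our own back-of-envelope check, not part of the reported analysis.

```python
# Our back-of-envelope check (not from the paper): normalize the measured
# photoelectron yields by tile thickness and photon detection efficiency (PDE).
pe_A, thick_A, pde_A = 14.0, 3.0, 0.40   # tile "A": 3 mm, 400-pixel MPPC
pe_B, thick_B, pde_B = 26.0, 5.0, 0.45   # tile "B": 5 mm, 100-pixel MPPC

print(pe_A / (thick_A * pde_A))   # ~11.7 pe per (mm of scintillator * PDE)
print(pe_B / (thick_B * pde_B))   # ~11.6 -> the two results are indeed compatible
```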
Next plans ---------- We are now assembling two small multi-tile prototypes of the QCAL, to study signal transportation and to measure the effective radiation length. In 2009 we also plan to construct a “module 0” consisting of a complete slice of the dodecagon (1/12 of one calorimeter) with final material and electronics. Crystal calorimeter with timing, CCALT ====================================== In the new design of the Da$\Phi$ne interaction region, the position of the quadrupoles increases the acceptance of the central calorimeter from 21$^{\circ}$ to 18$^{\circ}$. Below this limit we can safely insert a few crystals to improve the acceptance for photons coming from $\eta$ and $K_S$ decays. This detector could work as a veto detector for photons down to 8$^{\circ}$. The region in question is visible in Fig.\[figkloe\] and is delimited by the focusing quadrupoles and the spherical interaction region of the KLOE detector. The proposed solution is to insert a homogeneous calorimeter based on LYSO (Lu$_{1.8}$Y$_{0.2}$SiO$_5$:Ce) crystals. The most important characteristics of these crystals are a
{ "pile_set_name": "ArXiv" }
null
--- abstract: 'We study Lipschitz-free spaces over compact and uniformly discrete metric spaces enjoying certain high regularity properties, namely those having a group structure with a left-invariant metric. Using methods of harmonic analysis we show that, given a compact metrizable group $G$ equipped with an arbitrary compatible left-invariant metric $d$, the Lipschitz-free space over $G$, ${\mathcal{F}}(G,d)$, satisfies the metric approximation property. We show also that, given a finitely generated group $G$, with its word metric $d$, from a class of groups admitting a certain special type of combing, which includes all hyperbolic groups and Artin groups of large type, ${\mathcal{F}}(G,d)$ has a Schauder basis. Examples and applications are discussed. In particular, for any net $N$ in a real hyperbolic $n$-space $\mathbb{H}^n$, ${\mathcal{F}}(N)$ has a Schauder basis.' address: - | Institute of Mathematics\ Czech Academy of Sciences\ Žitná 25\ 115 67 Praha 1\ Czech Republic - 'Instituto de Ciência e Tecnologia da Universidade federal de São Paulo, Av. Cesare Giulio Lattes, 1201, ZIP 12247-014 São José dos Campos/SP, Brasil' author: - Michal Doucha - 'Pedro L. Kaufmann' bibliography: - 'references.bib' title: 'Approximation properties in Lipschitz-free spaces over groups' --- [^1] Introduction ============ Lipschitz-free spaces form by now a fundamental class of Banach spaces, whose study has been revitalized since the appearance of the seminal paper of Godefroy and Kalton ([@godefroy2003lipschitz]). There are two main properties that together characterize these spaces. First, they are free objects in the category of Banach spaces over metric spaces. Second, they are canonical isometric preduals of the Banach spaces of pointed real-valued Lipschitz functions. Another appealing feature is that their study connects Banach space theory to several other areas of mathematics, including optimal transport and geometry, and, as we demonstrate here, also harmonic analysis. We recall some basic facts about Lipschitz-free spaces in Section \[section:preliminaries\]. Approximation properties in Lipschitz-free spaces have been one of the main research directions since the publication of [@godefroy2003lipschitz]. It has become clear since then that there are metric spaces such that the corresponding Lipschitz-free space does not have the approximation property, since by [@godefroy2003lipschitz Theorem 5.3] a Banach space $X$ has the bounded approximation property if and only if the Lipschitz-free space ${\mathcal{F}}(X)$ does. The attention was therefore shifted to certain amenable classes of metric spaces, in particular compact metric spaces and, to some extent, also uniformly discrete metric spaces. The compact metric case is particularly important since it has been shown by Godefroy in [@God2015] that the bounded approximation property of the Lipschitz-free space over a compact metric space $M$ is equivalent to the existence of linear almost extension operators of Lipschitz functions over subsets of $M$, a subject currently receiving much attention in both geometry and computer science (see e.g. [@BruBru] and [@LN05]). The question whether the Lipschitz-free space over any uniformly discrete metric space has the bounded approximation property is perhaps the most serious one and is still open; we refer to [@godefroy2014free Question 1] for motivation and to [@Kalton] for the proof that such a space has the approximation property. 
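For orientation, the simplest instance of this free-space/predual duality (a standard worked example we add for the reader, not taken from the paper) is the two-point metric space $M=\{0,x\}$:
$$\mathrm{Lip}_0(M)\cong\mathbb{R}\ \text{via}\ f\mapsto f(x),\qquad \|f\|_{\mathrm{Lip}}=\frac{|f(x)|}{d(x,0)},\qquad {\mathcal{F}}(M)=\mathbb{R}\,\delta(x),\qquad \|\delta(x)\|=\sup_{\|f\|_{\mathrm{Lip}}\leqslant 1}|f(x)|=d(x,0).$$
In particular, the evaluation map $\delta$ embeds $M$ isometrically into ${\mathcal{F}}(M)$; this is the elementary mechanism behind the general universal property recalled in the preliminaries below.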
Regarding compact metric spaces, the first compact metric space such that the corresponding Lipschitz-free space fails the approximation property was found in [@godefroy2014free], and later even a compatible metric on the Cantor space was found for which the Lipschitz-free space lacks the approximation property (see [@HaLaPe]). Positive results have, however, been obtained in [@Dalet-compact] and [@dalet2015free] when one restricts to countable compact, resp. proper metric spaces. The goal of this paper is to consider certain fundamental classes of compact metric spaces, resp. uniformly discrete metric spaces, which are amenable to methods of harmonic analysis, resp. geometry. Namely, compact groups with left-invariant (or right-invariant) metrics, resp. finitely generated groups with word metrics. It turns out that harmonic analytic methods, resp. certain combinatorial and geometric methods, go hand in hand with our goal of showing approximation properties in Lipschitz-free spaces over compact metric groups, resp. finitely generated groups. In the compact group case we obtain a satisfactory definitive solution. \[thm:intro1\] Let $G$ be a compact metrizable group with an arbitrary compatible left-invariant (or right-invariant) metric $d$. Then ${\mathcal{F}}(G,d)$ has the metric approximation property. To show a modest application, we recall that there has been interest in the question for which compatible metrics on the Cantor space the corresponding Lipschitz-free space has some approximation property. Godefroy and Ozawa show in [@godefroy2014free] that for certain ‘small Cantor spaces’ the free space has the metric approximation property. On the other hand, Hájek, Lancien, and Pernecká show in [@HaLaPe] that there are ‘fat Cantor spaces’ for which the free space does not have the approximation property. We recall that there is a very large and thoroughly studied class of compact (metrizable) groups, the *profinite (metrizable) groups*, which are inverse limits of finite groups. So in the infinite metrizable case, they are topologically totally disconnected uncountable metrizable spaces without isolated points, and therefore homeomorphic to the Cantor space (see the monograph [@profinite] for more information on profinite groups). It turns out that for any compatible left-invariant metric on any such group structure on the Cantor space we get a free space with the metric approximation property. In the case of finitely generated groups, the metric approximation property follows from known results (see Section \[section:fingengrps\]), so we aim for a much stronger property, having a Schauder basis, at the cost of a less general result that applies only to a certain subclass of finitely generated groups. We state the result and postpone the definition of the new notions to the corresponding section. \[thm:intro2\] Let $G$ be a shortlex combable group with its word metric $d$. Then ${\mathcal{F}}(G,d)$ has a Schauder basis [*(see Theorem \[thm:shortlex\])*]{}. We mention that the theorem applies in particular to hyperbolic groups and Artin groups of large type. One of the applications (see Corollary \[cor:hyperbolicnet\]) is that the Lipschitz-free space over any net in the real hyperbolic $n$-space ${\mathbb{H}}^n$ has a Schauder basis.\ The paper is organized as follows. In Section \[section:preliminaries\] we present a characterization of the $\lambda$-bounded approximation property tailored for Lipschitz-free spaces (Proposition \[tool\]). 
In Section \[section:cpt\] we prove Theorem \[thm:intro1\]; first we tackle Lie groups using harmonic analysis tools in Subsection, then in Subsection \[subsection:generalCpt\] we prove the general case by approximating compact groups by compact Lie ones. In this last subsection we also generalize Theorem \[thm:intro1\] to compact homogeneous spaces equipped with quotient metrics (Theorem \[thm:homogeneousspace\]). Section \[section:fingengrps\] is dedicated to finitely generated groups; we prove Theorem \[thm:intro2\] and provide some examples and applications. We conclude with some remarks and questions in Section \[section:problems\], and present in Appendix \[appendSphere\] a generalization of Theorem \[thm:homogeneousspace\] for the specific case of the Euclidean sphere. Preliminaries {#section:preliminaries} ============= Lipschitz-free spaces --------------------- Let $M$ be a metric space and let $0$ be some distinguished point in $M$. Let $\mathrm{Lip}_0(M)$ denote the Banach space of real-valued Lipschitz functions defined on $M$ which vanish at $0$, equipped with the norm $\|\cdot\|_{\mathrm{Lip}}$ which assigns to each function its minimal Lipschitz constant. The Lipschitz-free space over $M$, denoted by ${\mathcal{F}}(M)$, is the canonical isometric predual of $\mathrm{Lip}_0(M)$ given by the closed linear span of $\{\delta(x):x\in M\}$ in $\mathrm{Lip}_0(M)^*$, where each $\delta(x)$ is the evaluation functional defined by $\delta(x)(f):=f(x)$. This gives $\mathrm{Lip}_0(M)$ a $w^*$ topology which coincides, on bounded sets of $\mathrm{Lip}_0(M)$, with the topology of pointwise convergence. ${\mathcal{F}}(M)$ satisfies a powerful universal property: given a Banach space $X$ and a Lipschitz function $F:M\to X$ vanishing at $0$, there exists a unique bounded linear operator $T:{\mathcal{F}}(M)\to X$ such that $T\circ \delta = F$. Its operator norm coincides with the Lipschitz constant of $F$. We refer to Weaver’s book [@weaver1999lipschitz] for a thorough introduction to the subject. There, Lipschitz-free spaces are denominated Arens
{ "pile_set_name": "ArXiv" }
null
--- abstract: 'We consider radiation from a uniformly accelerating harmonic oscillator whose minimal coupling to the scalar field changes suddenly. The exact time evolutions of the quantum operators are given in terms of a classical solution of a forced harmonic oscillator. After the jump of the coupling constant there occurs a fast absorption of energy into the oscillator, followed by a slow emission. The absorbed energy is independent of the acceleration and proportional to the log of a high momentum cutoff of the field. The emitted energy depends on the acceleration and is also proportional to the log of the cutoff. In particular, if the coupling is comparable to the natural frequency of the detector ($e^2/(4m) \sim \omega_0$), enormous energies are radiated away from the oscillator.' address: - ' Dept. of Physics, Sungkyunkwan Univ., SUWON 440-746, KOREA ' - ' Dept. of Physics, KAIST , Taejon 305-701, KOREA ' author: - 'Hyeong-Chan Kim[^1]' - 'Jae Kwan Kim[^2]' date: 'April 8, 1997' title: Radiation from a uniformly accelerating harmonic oscillator --- [ pacs numbers 04.60.+n, 03.70.+k ]{} Introduction ============ It is well known that the concept of a particle depends on the motion of an observer [@birrell]. In particular, the Minkowski vacuum is a canonical ensemble with the temperature $a/2\pi$ from the point of view of a uniformly accelerated observer with the acceleration $a$ (the thermalization theorem) [@unruh]. This observer dependence is most easily shown if one uses the particle detector model invented by Unruh [@unruh] and DeWitt [@dewitt79]. It consists of an idealized point particle with internal energy levels labeled by $E$, coupled via a monopole interaction with a scalar field $\phi$ (Unruh-DeWitt model). Following these, many works emerged in the literature. Letaw [@Letaw] exhibited the stationary world lines on which detectors in a vacuum have time-independent excitation spectra. Grove and Ottewill [@Grove:Otte] studied the problem of a non-extended detector, and clarified the radiation effect arising both from the walls of the detector and from the interaction with the field. Several authors [@hinton; @takagi] discussed the anisotropic nature of the thermal radiation of the accelerated detector. A full review of this thermal character was given by Takagi [@takagi86]. The vacuum noise seen by a uniformly accelerated observer in flat space-times of arbitrary dimensions was investigated and shown to exhibit the phenomenon of the apparent inversion of statistics in odd dimensions, which was discussed in detail by Unruh [@unruh86] and Fukazawa [@fukazawa]. A few years ago, the excitation rate associated with a uniformly accelerated finite-time detector interacting with the Minkowski vacuum was analyzed in an inertial frame by Svaiter and Svaiter [@svaiter]. They found logarithmic ultraviolet divergences in the transition amplitude, which were due to the instantaneous switching of the detector [@higuchi]. This UV divergence does not occur in lower dimensions. Grove argued that a macroscopic constantly accelerating object will emit negative energy radiation until equilibrium with the Minkowski vacuum is achieved [@grove]. Several years ago a new particle detector model–a harmonic oscillator coupled to a scalar field in $1+1$ dimensions–was introduced by Raine, Sciama, and Grove (RSG) [@raine]. Several aspects of this model were discussed in connection with the ‘open quantum system’ [@unruhZurek; @unruhWald; @anglin]. 
Hinterleitner [@hin] and Massar, Parentani, and Brout [@massar] showed that there is a polarization cloud which surrounds the detector at all times and that energy is exchanged with it locally. Audretsch and Müller [@aud] explored nonlocal pair correlations in accelerated detectors. Recently, stochastic aspects of this detector were discussed by Raval, Hu, and Anglin [@raval]. These works were mainly concerned with the asymptotic states for a time-independent coupling. In this paper we consider the intermediate regime, before equilibrium between the detector and the field is achieved. We show that this is not a simple energy-absorption process; rather, there are two main stages after the two systems come into contact. The first stage is a fast absorption of energy by the oscillator from the field. This occurs shortly after the change of the coupling, in a time which is much smaller than the inverse of the characteristic frequency of the oscillator. The total energy absorbed during this period is independent of the acceleration and depends on the log of a high momentum cutoff. The second stage is a slow emission of radiation which decreases exponentially on a time scale set by the coupling constant. The total radiated energy during this period depends on the acceleration of the oscillator. If the coupling constant is small then the total radiation is smaller than the inertial one. But if the coupling is comparable to the characteristic frequency, enormous energies are radiated away from the oscillator. In the case of a weakly coupled system, the energy absorbed during the first stage is larger than the energy emitted during the second stage. In Sec. II-A, we describe the model in Minkowski space and give the general form of the solution for the field and the oscillator. These evolutions of the operators are given by use of the inhomogeneous solution $G(\omega,t)$ of a forced harmonic oscillator. Similarly, all physical quantities, like the correlation function or the stress tensor, can be expressed with this single function. In Sec. II-B, the model is generalized to incorporate uniformly accelerating oscillators. Sec. III is devoted to presenting two solvable models. $G(\omega,t)$ is obtained in the asymptotic region. We obtain the stress tensor in Sec. IV when the detector is turned on suddenly. Sec. V is a summary and discussion. There are two appendices where we describe the details of the calculation of the stress tensor. Models for the particle detector ================================ Let us consider a minimally coupled system of a massless real scalar field $\phi(t,x)$ in two dimensions and a detector consisting of a harmonic oscillator $q(t)$ with mass $m$. The action for this system is $$\begin{aligned} \label{ac} S &=& \int \mbox{d}x \mbox{d}t \frac{1}{2}\left\{ \left(\frac{\partial}{\partial t} \phi(t,x) \right)^2 - \left( \frac{\partial}{\partial x} \phi(t,x)\right)^2 \right\} \\ &+& \int d\tau \left\{ \frac{1}{2} m \left(\frac{d q(\tau)}{d \tau} \right)^2 -\frac{1}{2} m \omega_0^2 q^2(\tau) -e(\tau)q(\tau) \frac{ d\phi}{d \tau} \left(t(\tau),x(\tau)\right) \right\} . \nonumber\end{aligned}$$ The oscillator follows the explicitly given path $(t(\tau),x(\tau))$ where $\tau$ is the proper time of the oscillator along the path. In this paper, we select two paths through which the oscillator moves: the inertial and the uniformly accelerated. Varying Eq. 
(\[ac\]) with respect to $\phi(t,x)$ and $q(\tau)$, we get the Heisenberg equations of motion for the field and the oscillator $$\begin{aligned} \Box \phi(t,x) &=& \frac{de(\tau)q(\tau)}{d \tau} \delta (\rho), \label{2}\\ m \left( \frac{d}{d \tau}\right)^2 q(\tau) &+& m \omega_0^2 q(\tau) = - e(\tau)\frac{d \phi}{d \tau}(t(\tau),x(\tau)), \label{3}\end{aligned}$$ where $\rho$ is an appropriate space coordinate orthogonal to $\tau$, such that the path of the oscillator is represented by $\rho=0$. Eq. (\[2\]) can be integrated to give $$\begin{aligned} \label{phi:uv} \phi(t,x) = \phi^0(t,x) + \frac{e(\tau_{ret})}{2} q(\tau_{ret}),\end{aligned}$$ where $\tau_{ret}$ is the value of $\tau$ at the intersection of the past lightcone of $(t,x)$ and the detector trajectory, and where we have used the explicit retarded propagator of a massless field $$\begin{aligned} G_{\mbox{ret}}(t,x;0,0) = \frac{1}{2} \theta(t+x) \theta(t-x).\end{aligned}$$ Substituting the solution (\[phi:uv\]) into Eq. (\[3\]), one gets $$\begin{aligned} \label{qeq} m \ddot{q}(\tau) + \frac{1}{2} e^2(\tau) \dot{q}(\tau) + m \left( \omega_0^2 + \frac{\dot{e}^2(\tau)}{4 m}\right) q(\tau) = -e(\tau) \dot{\phi}^0(t
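For constant coupling $e$, Eq. (\[qeq\]) is just a damped oscillator with friction coefficient $e^2/2$, so its amplitude decays at the rate $e^2/(4m)$ quoted in the abstract. The short sketch below is our own numerical illustration of this homogeneous part (the forcing by $\dot{\phi}^0$ and the $\dot{e}^2$ term are switched off); it is not the authors' computation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Our illustration of the homogeneous part of Eq. (qeq) with constant e:
#   m q'' + (e^2/2) q' + m w0^2 q = 0  ->  amplitude ~ exp(-e^2 t / (4 m)).
m, w0, e2 = 1.0, 1.0, 0.2          # weak coupling: e^2/(4m) << w0

def rhs(t, y):
    q, qdot = y
    return [qdot, -(e2 / (2.0 * m)) * qdot - w0**2 * q]

sol = solve_ivp(rhs, [0.0, 80.0], [1.0, 0.0], rtol=1e-9, dense_output=True)
for t in (0.0, 20.0, 40.0, 80.0):
    q = sol.sol(t)[0]
    print(f"t={t:5.1f}  q={q:+.4f}  envelope={np.exp(-e2 * t / (4 * m)):.4f}")
```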
{ "pile_set_name": "ArXiv" }
null
--- abstract: 'The time evolution of the local field in [*symmetric*]{} $Q$-Ising neural networks is studied for arbitrary $Q$. In particular, the structure of the noise and the appearance of gaps in the probability distribution are discussed. Results are presented for several values of $Q$ and compared with numerical simulations.' author: - | D. Bollé [^1] [^2]\ Instituut voor Theoretische Fysica, K.U. Leuven,\ B-3001 Leuven, Belgium\ \ and G. M. Shim [^3]\ Department of Physics, Chungnam National University\ Yuseong, Taejon 305-764, R.O. Korea title: 'Local field dynamics in symmetric $Q$-Ising neural networks' --- Symmetric networks; $Q$-Ising neurons; parallel dynamics; local field; probabilistic approach Introduction ============ In a number of papers in the nineties (cf. [@PZ]-[@BJS99] and references therein) the parallel dynamics of $Q$-Ising type neural networks has been discussed for several architectures –extremely diluted, layered feedforward, recurrent– using a probabilistic approach. For the asymmetric extremely diluted and layered architectures the dynamics can be solved exactly, and it is known that the local field only contains Gaussian noise. For networks with symmetric connections, however, things are quite different. Even for extremely diluted versions of these systems, feedback correlations become essential from the second time step onwards, complicating the dynamics in a nontrivial way. A complete solution for the parallel dynamics of symmetric $Q$-Ising networks at zero temperature, taking into account all feedback correlations, has been obtained only recently using a probabilistic signal-to-noise ratio analysis [@BJSF]-[@BJS99]. Thereby it is seen that, both for the fully connected and the extremely diluted symmetric architectures, the local field contains a discrete and a normally distributed noise part. The difference between the two architectures is that for the diluted model the discrete part at a certain time $t$ does not involve the spins at all previous times $t-1, t-2, \ldots$ up to $0$ but only the spins at time step $t-1$. Even so, this discrete part prevents a closed-form solution of the dynamics, but a recursive scheme can be developed in order to calculate the complete time evolution of the order parameters, i.e., the retrieval overlap and the activity. In the work above the focus has been on the non-equilibrium behavior of the order parameters of the network. But, since the local field itself is a basic ingredient in the development of the relevant recursive scheme, it is interesting to study also the non-equilibrium behavior of the local field distribution. All the more so since this distribution does not converge to a simple sum of Gaussians, as is frequently thought, but develops a gap structure. This is precisely one of the points studied in detail in the present communication. Moreover, the analogies and differences between the fully connected architecture and the symmetrically diluted one are highlighted. Finally, numerical simulations are presented confirming the analytic study and giving additional insight into the structure of these local field distributions. The model ========= Consider a neural network $\Lambda$ consisting of $N$ neurons which can take values $\sigma_i$ from a discrete set $ {\cal S} = \lbrace -1 = s_1 < s_2 < \ldots < s_Q = +1 \rbrace $. 
The $p$ patterns to be stored in this network are supposed to be a collection of independent and identically distributed random variables (i.i.d.r.v.), $\{{\xi}_i^\mu \in {\cal S}\}$, $\mu \in {\cal P}=\{1,\ldots,p\}$ and $i \in \Lambda$, with zero mean, $E[\xi_i^\mu]=0$, and variance $A=\Var[\xi_i^\mu]$. The latter is a measure of the activity of the patterns. Given the configuration ${\bsigma}_\Lambda(t)\equiv\{\sigma_j(t)\}, j\in\Lambda=\{1,\ldots,N\}$, the local field in neuron $i$ equals $$\label{eq:h} h_i({\bsigma}_{\Lambda}(t))= \sum_{j\in\Lambda} J_{ij}(t)\sigma_j(t)$$ with $J_{ij}$ the synaptic coupling from neuron $j$ to neuron $i$. In the sequel we write the shorthand notation $h_{\Lambda,i}(t) \equiv h_i({\bsigma}_{\Lambda}(t))$. For the extremely diluted symmetric (SED) and the fully connected (FC) architectures the couplings are given by the Hebb rule $$\begin{aligned} J_{ij}^{SED}&=&\frac{c_{ij}}{CA} \sum_{\mu \in {\cal P}} \xi_i^\mu \xi_j^\mu \quad \mbox{for} \quad i \not=j \,, \quad J_{ii}^{SED}=0 \, , \label{eq:JED} \\ J_{ij}^{FC}&=&\frac{1}{NA} \sum_{\mu \in {\cal P}} \xi_i^\mu \xi_j^\mu \quad \mbox{for} \quad i \not=j \,, \quad J_{ii}^{FC}=0 \, , \label{eq:JFC} \end{aligned}$$ with the $\{c_{ij}=0,1\}, i,j \in \Lambda$ chosen to be i.i.d.r.v. with distribution $\mbox{Pr}\{c_{ij}=x\}=(1-C/N)\delta_{x,0} + (C/N) \delta_{x,1}$ and satisfying $c_{ij}=c_{ji} $. For the diluted symmetric model the architecture is a local Cayley tree but, in contrast with the diluted asymmetric model, it is no longer directed, so that it causes feedback from $t \geq 2$ onwards. In the limit $N \rightarrow \infty$ the number of connections $|T_i|$, with $T_i=\{j\in \Lambda \,|\, c_{ij}=1\}$ the set of sites giving information to the site $i \in \Lambda$, still follows a Poisson distribution with mean $C=E[|T_i|]$. Thereby it is assumed that $ C \ll \log N$, and in order to get an infinite average connectivity, allowing the storage of infinitely many patterns, one also takes the limit $C \rightarrow \infty$ [@BJS99]. At zero temperature all neurons are updated in parallel according to the rule $$\begin{aligned} \label{eq:gain} \sigma_i(t+1) & = & \mbox{g}_b(h_{\Lambda,i}(t)) \nonumber \\ \mbox{g}_b(x) &\equiv& \sum_{k=1}^Qs_k \left[\theta\left[b(s_{k+1}+s_k)-x\right]- \theta\left[b(s_k+s_{k-1})-x\right] \right]\end{aligned}$$ with $s_0\equiv -\infty$ and $s_{Q+1}\equiv +\infty$. Here $\mbox{g}_b(\cdot)$ is the gain function and $b>0$ is the gain parameter of the system. For finite $Q$, this gain function is a step function. The gain parameter $b$ controls the average slope of $\mbox{g}_b(\cdot)$. Local field dynamics ==================== In order to measure the retrieval quality of the system one can use the Hamming distance between a stored pattern and the microscopic state of the network $$d({\bxi}^\mu,{\bsigma}_\Lambda(t))\equiv \frac{1}{N} \sum_{i\in \Lambda}[\xi_i^\mu-\sigma_i(t)]^2 \,.$$ This introduces the main overlap and the arithmetic mean of the neuron activities $$\label{eq:mdef} m_\Lambda^\mu(t)=\frac{1}{NA} \sum_{i\in\Lambda}\xi_i^\mu\sigma_i(t), \quad \mu \in {\cal P}\, ; \quad a_\Lambda(t)=\frac{1}{N}\sum_{i\in\Lambda}[\sigma_i(t)]^2 \,.$$ The key question is then how these quantities evolve in time under the parallel dynamics specified before. For a general time step we find from eq. 
(\[eq:gain\]) using the law of large numbers (LLN) that in the thermodynamic limit $$\begin{aligned} m^1(t+1) \ustr{Pr}{=} \frac{1}{A} \langle\!\langle \xi_i^1\mbox{g}_b(h_i(t)) \rangle\!\rangle , \quad a(t+1) \ustr{Pr}{=} \langle\!\langle \mbox{g}_b^2(h_i(t)) \rangle\!\rangle \, , \label{eq:a}\end{aligned}$$ where the convergence is in probability
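To make the structure of the local field concrete, the following toy simulation (ours, not the paper's code) samples $h_{\Lambda,i}(0)$ for a fully connected $Q=3$ network with the Hebb rule (eq. \[eq:JFC\]), initialized on the first stored pattern. At $t=0$ the field is the condensed signal $\xi_i^1$ plus Gaussian-like noise of width $\sqrt{pA/N}$; the gap structure discussed above only develops at later time steps.

```python
import numpy as np

# Toy sampling of the t=0 local fields in a fully connected Q=3 network
# with Hebb couplings (eq. JFC); our illustration, not the paper's code.
rng = np.random.default_rng(0)
N, p = 2000, 100                        # loading alpha = p/N = 0.05
S = np.array([-1.0, 0.0, 1.0])          # Q = 3 states
xi = rng.choice(S, size=(p, N))         # patterns: E[xi] = 0, A = Var[xi] = 2/3
A = 2.0 / 3.0

J = (xi.T @ xi) / (N * A)               # Hebb rule
np.fill_diagonal(J, 0.0)                # J_ii = 0

sigma = xi[0]                           # network condensed on pattern 1
h = J @ sigma                           # local fields at t = 0
noise = h - xi[0]                       # subtract the condensed signal term
print(noise.std(), np.sqrt(p * A / N))  # both ~ 0.18: Gaussian noise width
```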
{ "pile_set_name": "ArXiv" }
null
--- abstract: 'This is a brief survey of the research performed by Grandata Labs in collaboration with numerous academic groups around the world on the topic of human mobility. A driving theme in these projects is to use and improve Data Science techniques to understand mobility, as it can be observed through the lens of mobile phone datasets. We describe applications of mobility analyses to urban planning, prediction of data traffic usage, building delay tolerant networks, generating epidemiologic risk maps and measuring the predictability of human mobility.' author: - | Carlos Sarraute[^1,\*^]{} and Martin Minnoni[^1^]{}\ [[^1^]{}Grandata Labs, San Francisco, CA, USA]{}\ [[^^]{}Corresponding author: charles@grandata.com]{}\ bibliography: - '../GD\_works.bib' title: Brief survey of Mobility Analyses based on Mobile Phone Datasets --- Introduction ============ The mission of Grandata’s research lab is to improve our understanding of Human Dynamics through the analysis of massive datasets coming from mobile phone companies and other industries. This research has been performed in collaboration with numerous academic groups at MIT, INRIA and ENS Lyon, LNCC, UBA and many others. We provide here a brief review of our research, intended to serve as an introduction and guide to the research papers on mobility aspects. This brief survey focuses on the analysis of mobility in space, investigating the important locations in the users’ trajectories, and how they can be used to infer the users’ participation in large social events. The study of mobility has numerous applications, such as urban planning (Section \[urban-planning\]), data traffic usage prediction (Section \[data-traffic-usage\]), building delay tolerant networks (Section \[delay-tolerant-networks\]), epidemiology (Section \[epidemiology\]), and predictability of human mobility (Section \[mobility-predictability\]).
{ "pile_set_name": "ArXiv" }
null
--- abstract: 'We investigate the suppression of the baryon density fluctuations compared to the dark matter in the linear regime. Previous calculations predict that the suppression occurs up to a characteristic mass scale of $\sim 10^6$ M$_\odot$, which suggests that pressure has a central role in determining the properties of the first luminous objects at early times. We show that the expected characteristic mass scale is in fact substantially lower (by a factor of $\sim 3$–10, depending on redshift), and thus the effect of baryonic pressure on the formation of galaxies up to reionization is only moderate. This result is due to the influence on perturbation growth of the high pressure that prevailed in the period from cosmic recombination to $z\sim 200$, when the gas began to cool adiabatically and the pressure then dropped. At $z\sim10$ the suppression of the baryon fluctuations is still sensitive to the history of pressure in this high-redshift era. We calculate the fraction of the cosmic gas that is in minihalos and find that it is substantially higher than would be expected with the previously-estimated characteristic mass. Expanding our investigation to the non-linear regime, we calculate in detail the spherical collapse of high-redshift objects in a $\Lambda$CDM universe. We include the gravitational contributions of the baryons and radiation and the memory of their kinematic coupling before recombination. We use our results to predict a more accurate halo mass function as a function of redshift.' author: - | S. Naoz and R. Barkana$^{1}$ [^1]\ $^{1}$School of Physics and Astronomy, The Raymond and Beverly Sackler Faculty of Exact Sciences,\ Tel Aviv University, Tel Aviv 69978, ISRAEL title: The formation and gas content of high redshift galaxies and minihalos --- \[firstpage\] galaxies:high-redshift – cosmology:theory – galaxies:formation Introduction {#intro} ============ The detection of the cosmic microwave background (CMB) temperature anisotropies [@bennett] confirmed the notion that present-day galaxies and large-scale structure (LSS) evolved from the primordial inhomogeneities in the density distribution at very early times. After cosmic recombination, the gas decoupled from the mechanical drag of the CMB, and the baryons subsequently began to fall into the pre-existing gravitational potential wells of the dark matter. Regions that were denser than average collapsed and formed bound halos. First the smallest, least massive objects collapsed, and later, larger objects formed through a mixture of mergers and accretion. The formation and properties of early galaxies at high redshift, in particular, are being actively studied in anticipation of many expected observational probes [e.g., @BL01; @R06]. A well-known solution for the collapse of a halo that consists of dark matter *only* in an Einstein-de Sitter (EdS) universe was presented by @gg. This solution considers a spherical region initially with a small uniform overdensity compared to the background universe. As the universe expands, the overdense region expands more slowly than the background until it reaches a maximum radius, turns around, and collapses. In the corresponding linearly-extrapolated calculation, the critical overdensity that marks the collapse of a dark matter halo is in this case $\delta_c=1.686$, a value that does not depend on the halo mass or collapse redshift. 
The mathematical solution gives a singularity as the final state, but physically we know that even a small initial asymmetry will make the object stabilize at a finite size after reaching a virial equilibrium between motion and gravity. Extensive work has been done on spherical collapse models, especially models that include a cosmological constant or a dark energy background [e.g., @lahav; @desh; @hoffman; @H; @lahav2; @wang]; in particular, the cosmological constant $\Lambda$ changes the above value of the overdensity ($\delta_c$) by about $0.6\%$. In addition, many numerical simulations of the formation of primordial objects at $z\sim 20$–30 have been performed. However, the earliest stars formed at $z \sim 65$ [@NNB], and even for a halo that collapsed at $z\sim30$, $\delta$ must have started out significantly non-linear ($\sim 9\%$) in a simulation that begins as early as $z \sim 600$. With the $\Lambda$CDM cosmological parameters [@Spergel06], the contribution of the photons to the expansion of the universe cannot be neglected when considering the formation of the first objects [@NNB]. Moreover, the baryons have a non-negligible contribution compared to the dark matter, and their different evolution must be included in the collapse process. When considering the formation and properties of the first luminous objects, we must investigate the relation between the baryon and the dark matter fluctuations. @cs defined a fiducial “filtering mass” that describes the highest mass at which the baryonic pressure still manages to suppress the linear baryonic fluctuations significantly. @gnedin00 extended the usefulness of the filtering mass to the fully non-linear regime by showing that it is also related to another characteristic mass scale – the largest halo mass for which the gas content is substantially suppressed compared to the cosmic fraction. As we show below, if we follow previous calculations [@cs; @gnedin00; @gnedin03], we find a characteristic mass at high redshift of $\sim 10^6 M_\odot$, approximately constant at $z\ga 60$ and decreasing only slowly with time afterwards. This is somewhat larger than the mass scale of the first objects and suggests a potent effect on their formation. Here we present an improved calculation of the characteristic mass that is mainly based on the improved calculation of the baryon density and temperature fluctuations that we presented in @NB. We first review the basic equations of linear perturbation growth (Section \[sec:scales\]). We then divide the power spectrum into several different ranges of scales that are associated with large-scale structure (Section \[LS\]), the filtering scale (Section \[sec:TmTran\]), and small scales (Section \[small\]). Note that we define the filtering mass with a different normalization than in previous works, as explained in Section \[sec:TmTran\]. For completeness we compare our calculation to the older, inaccurate approximation of a spatially-uniform sound speed, along with other approximations (Sections \[sec:CS\] and \[sec:CSMF\]). We use our results for the filtering mass to estimate the gas fraction in minihalos (Section \[sec:gas\]). In Section \[non-linear\] we calculate in detail the critical overdensity for collapse of halos that form at very high redshifts, following the evolution of perturbations outside the horizon (Section \[sec:outH\]) and inside it (Section \[sec:delc\]). We also predict the halo abundance at different redshifts (Section \[mass\_a\]). 
Finally, we summarize and discuss our results in Section 4. Our calculations are made in a $\Lambda$CDM universe, including dark matter, baryons, radiation, and a cosmological constant. We assume cosmological parameters matching the three-year WMAP data together with weak lensing observations [@Spergel06], i.e., $\Omega_m=0.299$, $\Omega_\Lambda=0.701$, $\Omega_b=0.0478$, $h=0.687$, $n=0.953$ and $\sigma_8=0.826$. We also consider the effect of current uncertainties in the values of cosmological parameters on some of our results, by comparing to the results with a different cosmological parameter set specified by @Viel: $\Omega_m=0.253$, $\Omega_\Lambda=0.747$, $\Omega_b=0.0425$, $h=0.723$, $n=0.957$ and $\sigma_8=0.785$. These parameters represent typical 1-$\sigma$ errors, in terms of the parameter uncertainties given by @Spergel06. Linear Growth of Perturbations {#sec:linear} ============================== The Basic Equations {#sec:scales} ------------------- @NB showed that the baryonic sound speed varies spatially, so that the baryon temperature and density fluctuations must be tracked separately. Thus, the evolution of the linear density fluctuations of the dark matter ($\delta_{\mathrm {dm}}$) and the baryons ($\delta_{\mathrm {b}}$) is described by two coupled second-order differential equations: $$\begin{aligned} \label{g_T} \ddot{\delta}_{{\mathrm {dm}}} + 2H \dot {\delta}_{{\mathrm {dm}}} &= \frac{3}{2}H_0^2\frac{\Omega_{m}}{a^3} \left(f_{{\mathrm {b}}} \delta_{{\mathrm {b}}} + f_{{\mathrm {dm}}} \delta_{{\mathrm {dm}}}\right)\ , \\ \ddot{\delta}_{{\mathrm {b}}}+ 2H \dot {\delta}_{{\mathrm {b}}} &= \frac{3}{2}H_0^2\frac{\Omega_{m}}{a^3} \left(f_{{\mathrm {b}}} \delta_{{\mathrm {b}}} + f_{{\mathrm {dm}}} \delta_{{\mathrm {dm}}}\right)-\frac{k^2}{a^2}\frac{k_B\bar{T}}{\mu} \left(\delta_{{\mathrm {b}}}+\delta_{T}\right)\ ,\nonumber\end{aligned}$$ where $\Omega_m$ is the present matter density as a fraction of the critical density, $k$ is the comoving wavenumber,
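As a minimal illustration of how the coupled equations above can be integrated for a single comoving mode $k$, consider the following Python sketch. It is our own, with several loudly flagged simplifications: $\delta_T$ is neglected, $H(a)$ includes only matter and $\Lambda$, and the mean gas temperature is modeled crudely as CMB-coupled ($\bar{T}\propto 1/a$) until $z\sim 200$ and adiabatic ($\bar{T}\propto 1/a^2$) afterwards.

```python
# Sketch: integrate the coupled linear growth equations for one mode k,
# rewritten in x = ln(a), with the simplifications flagged in the text.
import numpy as np
from scipy.integrate import solve_ivp

Om, OL, Ob, h = 0.299, 0.701, 0.0478, 0.687
H0 = 100.0 * h                          # km s^-1 Mpc^-1
fb = Ob / Om
fdm = 1.0 - fb

def H(a):                               # matter + Lambda only (assumption)
    return H0 * np.sqrt(Om / a**3 + OL)

def dlnH_dlna(a):
    return -1.5 * (Om / a**3) / (Om / a**3 + OL)

def Tbar(a):                            # crude mean gas temperature in K
    a_dec = 1.0 / 201.0                 # thermal decoupling at z ~ 200
    return 2.725 / a if a < a_dec else 2.725 * a_dec / a**2

def rhs(x, y, k):                       # x = ln(a); k comoving, in Mpc^-1
    a = np.exp(x)
    d_dm, v_dm, d_b, v_b = y            # v = d(delta)/dln(a)
    Ha = H(a)
    grav = 1.5 * H0**2 * Om / (a**3 * Ha**2) * (fb * d_b + fdm * d_dm)
    fric = 2.0 + dlnH_dlna(a)
    cs2 = 6.77e-3 * Tbar(a)             # k_B*Tbar/(mu*m_H) in (km/s)^2, mu ~ 1.22
    press = k**2 * cs2 / (a**2 * Ha**2) * d_b
    return [v_dm, grav - fric * v_dm, v_b, grav - press - fric * v_b]

# Growing-mode initial conditions for the dark matter; baryons start smooth.
sol = solve_ivp(rhs, [np.log(1e-3), np.log(0.1)], [1.0, 1.0, 0.0, 0.0],
                args=(100.0,), rtol=1e-8)
print("delta_b/delta_dm at z = 9 for k = 100/Mpc:", sol.y[2, -1] / sol.y[0, -1])
```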
--- abstract: | Using a coupling for the weighted sum of independent random variables and the explicit expression of the transition semigroup of Ornstein-Uhlenbeck processes driven by compound Poisson processes, we establish the existence of a successful coupling and the Liouville theorem for general Ornstein-Uhlenbeck processes. Then we present the explicit coupling property of Ornstein-Uhlenbeck processes directly from the behaviour of the corresponding symbol or characteristic exponent. This approach allows us to derive gradient estimates for Ornstein-Uhlenbeck processes via the symbol. **Keywords:** Ornstein-Uhlenbeck processes; coupling property; Liouville theorem; gradient estimates. **MSC 2010:** 60J25; 60J75. author: - 'René L. Schilling and Jian Wang' title: '**On the Coupling Property and the Liouville Theorem for Ornstein-Uhlenbeck Processes**' --- [^1] [^2] Main Results {#section1} ============ Let $(X^x_t)_{t{\geqslant}0}$ be an $n$-dimensional Ornstein-Uhlenbeck process, which is defined as the unique strong solution of the following stochastic differential equation $$\label{ou1} dX_t = AX_t\,dt + B\,dZ_t,\qquad X_0=x\in{\mathds R}^n.$$ Here $A$ is a real $n\times n$ matrix, $B$ is a real $n\times d$ matrix and $Z_t$ is a Lévy process in ${\mathds R}^d$; note that we allow $Z_t$ to take values in a proper subspace of ${\mathds R}^d$. It is well known that $$X_t^x =e^{tA}x + \int_0^t e^{(t-s)A}B\,dZ_s.$$ The characteristic exponent or symbol $\Phi$ of $Z_t$, defined by $${\mathds E}\bigl(e^{i{\langle\xi,Z_t\rangle}}\bigr) =e^{-t\Phi(\xi)},\quad \xi\in{\mathds R}^d,$$ enjoys the following Lévy-Khintchine representation: $$\label{ou2} \Phi(\xi) =\frac{1}{2}{\langle Q\xi,\xi\rangle} +i{\langle b,\xi\rangle} +\int_{z\neq 0} \Bigl(1-e^{i{\langle\xi,z\rangle}}+i{\langle\xi,z\rangle}{\mathds 1}_{B(0,1)}(z)\Bigr)\nu(dz),$$ where $Q=(q_{j,k})_{j,k=1}^d$ is a positive semi-definite matrix, $b\in{{\mathds R^d}}$ is the drift vector and $\nu$ is the Lévy measure, i.e. a $\sigma$-finite measure on ${\mathds R}^d\setminus\{0\}$ such that $\int_{z\neq 0}(1\wedge |z|^2)\,\nu(dz)<\infty$. For every $\varepsilon>0$, define ${\nu}_\varepsilon$ on ${\mathds R}^d$ as follows: $${\nu}_\varepsilon(C) = \begin{cases} \nu(C), & \text{if\ \ } \nu({\mathds R}^d)<\infty;\\ \nu(C\setminus \{z: |z|<\varepsilon\}), & \text{if\ \ } \nu({\mathds R}^d)=\infty. \end{cases}$$ Let $(Y_t)_{t{\geqslant}0}$ be a Markov process on ${\mathds R}^n$ with transition function $P_t(x,\cdot)$. Then, according to [@Li; @T; @SW], we say that $(Y_t)_{t{\geqslant}0}$ admits a *successful coupling* (also: enjoys the *coupling property*) if for any $x,y\in{\mathds R}^n$, $$\label{prex1} \lim_{t\rightarrow\infty}\|P_t(x,\cdot)-P_t(y,\cdot)\|_{{\mathrm{Var}}}=0,$$ where $\|\cdot\|_{{\mathrm{Var}}}$ stands for the total variation norm. If a Markov process admits a successful coupling, then it also has the Liouville property, i.e. every bounded harmonic function is constant; in this context a function $f$ is harmonic if $Lf=0$, where $L$ is the generator of the Markov process. See [@CG; @CW] and the references therein for this result and more details on the coupling property. Let $A$ be an $n\times n$ matrix. We say that an eigenvalue $\lambda$ of $A$ is *semisimple* if the dimension of the corresponding eigenspace is equal to the algebraic multiplicity of $\lambda$ as a root of the characteristic polynomial of $A$. Note that for symmetric matrices $A$ all eigenvalues are real and semisimple.
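To make the objects above concrete, here is a small simulation sketch (our illustration, not part of the proofs): for a compound Poisson driver $Z$, the stochastic integral in the explicit solution reduces to a finite sum over jump times, so $X_t^x$ can be sampled exactly. The jump distribution and the matrices below are arbitrary choices made for the example.

```python
# Sample X_t^x = e^{tA} x + sum_i e^{(t-s_i)A} B J_i for a compound Poisson
# driver Z with rate lam and (here, arbitrarily) standard normal jumps J_i.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def ou_compound_poisson(x, A, B, t, lam=2.0):
    n_jumps = rng.poisson(lam * t)
    s = np.sort(rng.uniform(0.0, t, n_jumps))            # jump times in [0, t]
    jumps = rng.standard_normal((n_jumps, B.shape[1]))   # jump sizes in R^d
    X = expm(t * A) @ x
    for si, Ji in zip(s, jumps):
        X = X + expm((t - si) * A) @ B @ Ji
    return X

# Eigenvalues of A are +/- i: purely imaginary and semisimple, as in the theorem.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.eye(2)
X = np.array([ou_compound_poisson(np.zeros(2), A, B, 50.0) for _ in range(1000)])
Y = np.array([ou_compound_poisson(np.array([5.0, 0.0]), A, B, 50.0) for _ in range(1000)])
# Comparing the two empirical laws (e.g. via histograms) illustrates the decay
# in t of the total variation distance asserted below.
```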
Recall that for any two bounded measures $\mu$ and $\nu$ on $({\mathds R}^d,{\mathscr{B}}({\mathds R}^d))$, $\mu\wedge\nu:=\mu-(\mu-\nu)^+$, where $(\mu-\nu)^{\pm}$ refers to the Jordan-Hahn decomposition of the signed measure $\mu-\nu$. In particular, $\mu\wedge\nu=\nu\wedge\mu$, and $\mu\wedge \nu\,({\mathds R}^d)=\frac{1}{2}\big[\mu({\mathds R}^d)+\nu({\mathds R}^d)-\|\mu-\nu\|_{{\mathrm{Var}}}\big].$ One of our main results is the following \[th1\] Let $P_t(x,\cdot)$ be the transition probability of the Ornstein-Uhlenbeck process $\{X_t^x\}_{t{\geqslant}0}$ given by the SDE above. Assume that ${\operatorname{Rank}}(B)=n$ (which implies $n{\leqslant}d$), and that there exist $\varepsilon,\delta>0$ such that $$\label{th2233} \inf_{z\in{\mathds R}^d,|z|{\leqslant}\delta}\nu _\varepsilon\wedge (\delta_z*\nu_\varepsilon)({\mathds R}^d)>0.$$ If the real parts of all eigenvalues of $A$ are non-positive and if all purely imaginary eigenvalues are semisimple, then there exists a constant $C=C(\varepsilon,\delta,\nu,A,B)>0 $ such that for all $x,y\in{\mathds R}^n$ and $t>0$, $$\label{th21} \|P_t(x,\cdot)-P_t(y,\cdot)\|_{{\mathrm{Var}}} {\leqslant}\frac{C(1+|x-y|)}{\sqrt{t}}\wedge2.$$ As a consequence of Theorem \[th1\], we immediately obtain the following result, which partly answers the following question about Liouville theorems for non-local operators from [@PZ page 458]: *A challenging task would be to apply other probabilistic techniques, based on ... coupling to non-local operators*. Under the conditions of Theorem \[th1\], the Ornstein-Uhlenbeck process $\{X_t^x\}_{t{\geqslant}0}$ admits a successful coupling and has the Liouville property. \[remarkth1\] (1) If $A=0$, $d=n$ and $B={\operatorname{id}}_{{\mathds R}^n}$, then $X_t$ is just a Lévy process on ${\mathds R}^n$. The condition above is one way to guarantee sufficient jump activity such that the Lévy process $X_t$ admits a successful coupling. To see that this condition is sharp, we can use the example in [@SW Remark 1.2]. \(2) Let $Z_t$, $0<\alpha<2$, be a (rotationally symmetric) $\alpha$-stable Lévy process, and denote by $X_t$ the $n$-dimensional Ornstein-Uhlenbeck process driven by $Z_t$, i.e. $$dX_t = AX_t\,dt + dZ_t.$$ If at least one eigenvalue of $A$ has a positive real part, then $X_t$ does not have the coupling property. Indeed, according to [@PZ Example 3.4 and Theorem 3.5], we know that $X_t$ does not have the Liouville property, i.e. there exists a bounded harmonic function which is not constant. According to [@Li Theorem 21.12] or [@CG Theorem 1 and its second remark], $X_t$ then does not have the coupling property. This example indicates that the non-positivity of the real parts of the eigenvalues of $A$ is also necessary. In [@SW Theorem 4.1 and Corollary 4.2] we show that Lévy processes which have the strong Feller property admit the coupling property. A similar conclusion, however, does not hold for general Ornstein-Uhlenbeck processes. Consider, for instance, the one-dimensional Ornstein-Uhlenbeck process given by $$dX_t=X_t\,dt+dZ_t,\qquad X_0=x\in{\mathds R},$$ where $Z_t$ is an $\alpha$-stable Lévy process $
--- abstract: 'The design objectives for an automatic transcription system are to produce text readable by humans and to minimize the impact on manual post-editing. This study reports on a recognition system used for transcribing speeches in the Icelandic parliament - Althingi. It evaluates the system performance and its effect on manual post-editing. The results are compared against the original manual transcription process. In total, 239 speeches, comprising 11 hours and 33 minutes of audio, were processed, both manually and automatically, and the editing process was analysed. The dependence of edit time on word edit distance and the editing real-time factor have been estimated and compared to user evaluations of the transcription system. The main findings show that the word edit distance is positively correlated with edit time and that a system achieving a 12.6% edit distance would match the performance of manual transcribers. Producing perfect transcriptions would result in a real-time factor of 2.56. The study also shows that 99% of low error rate speeches received a medium or good grade in subjective evaluations. In contrast, 21% of high error rate speeches received a bad grade.' address: | Reykjavik University\ Language and Voice Lab\ Menntavegur 1, 101 Reykjavik, Iceland bibliography: - 'refs.bib' title: 'Manual Post-editing of Automatically Transcribed Speeches from the Icelandic Parliament - Althingi' --- speech recognition, parliamentary transcription, manual editing, human-computer interaction, Icelandic Introduction ============ In the last 5 years, automatic speech recognition (ASR) technology has advanced enough to be used in real-life applications. Recognition technology has been used extensively to transcribe speeches for large languages such as English, German or Spanish [@miro2015efficiency; @munteanu2008collaborative; @kolkhorst2012evaluation]. These systems are often composed of an ASR module to produce audio-to-text transcription and several natural language processing modules to improve text formatting. The main issue, however, is that neither module performs with perfect accuracy, so manual post-processing is needed to produce final transcriptions. The system for automatically transcribing university lectures in Spanish [@miro2015efficiency] compared three post-processing approaches: one involving automatic corrections, another using lecturer corrections, and a third using a mixture of both. The system was tested on twenty lectures, and the word error rate (WER) was compared to a real-time factor, the ratio of the post-editing time to the total duration of the lecture. The authors conclude that the edit time is directly correlated with the transcription accuracy, but the relationship between the real-time factor and the WER was weak, perhaps due to the low WER range produced by the ASR. In the English transcription system [@munteanu2008collaborative], the challenge of achieving a low error rate in transcribing university lectures was handled using collaborative editing. The authors conclude that correcting transcripts with $\mbox{\it WER}$ lower than 25% increases the editing effort. The transcription errors for lectures in German [@munteanu2008collaborative] were corrected using student edits, and this error-correction process was analysed. During the transcription process they noted that their ASR made errors caused by uncommon and non-German terms in the lectures.
Their analysis showed that the corrections of inexperienced editors tend to bring a high WER down to about 25%, corroborating the findings of [@munteanu2008collaborative]. Evaluation of post-editing transcribed speech was studied in [@sperber2017transcribing], where authors observe a strong variation in editing accuracy and speed among editors. Authors also note that low $\mbox{\it WER}$ transcripts require advanced editing strategies to achieve error rate improvements comparable to improvements for high $\mbox{\it WER}$ transcripts. Different transcription strategies were compared in [@Sperber2016]; namely a fully manual post-editing of ASR transcripts and confidence-enhanced post-editing of ASR transcripts. The authors conclude that post-editing automatic transcripts results in more accurate and faster transcripts, when compared to manually transcribing from scratch. This conclusion was further corroborated in [@Miro2018], which dealt with automatic subtitling of videos. This paper evaluates an ASR system in the context of transcribing speeches for the Icelandic parliament - Althingi. The system has only recently been developed for Icelandic [@gudhnason2012almannaromur; @Helgadottir2017corpus; @gudhnason2017building; @NIKULASDOTTIR2018], and is now being incorporated into the transcription process of Althingi. The current manual procedure is done in two stages: an initial manuscript is obtained from a contracted transcription service, which is then post-edited by in-house specialists. The main objective of the current project is to replace the initial manual transcription process with an automatic speech recognizer. This is the first time an ASR system is used as a core component in transcription for the Icelandic language, and the purpose of this paper is threefold; to introduce the evaluation procedure, to present the first measurements of the manual post-editing and to report on the performance of the system. Transcription System for Althingi ================================= The current transcription procedure for the Icelandic parliament is done in two stages, illustrated in Figure \[fig:althingi\_workflow\]. The speeches are first created in the Althingi document management system, Documentum, as XML documents, with only the speech meta-data and a link to the speech in the MP3 format. Then, in the manual transcription stage the transcribers listen to the audio and transcribe the speech into the XML document (Text A). The initial transcript is meant to reflect the spoken record as accurately as possible but the transcribers might also enrich the text with minor changes. For example, they might add in different formatting for poems and remove repetitions. Next, in the manual editing stage the XML speech document is sent to the editors who modify the speech to be fit for publication and record their editing time. It is common that an editor corrects transcription errors, fixes grammar or enriches the text with context to make the parliamentary record clearer. Finally, the speeches (Text C) are published to their website. ![Diagram of the transcription and editing process for Althingi. t(d) indicates editing time.[]{data-label="fig:althingi_workflow"}](Althingicurrentprocess.pdf){width="\linewidth"} The main objective of the current project is to replace the first stage, manual transcription, with an automatic speech recognizer. 
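For reference, the word error rate and word edit distance discussed throughout are based on the word-level Levenshtein distance, normalised by the reference length. A minimal self-contained sketch (ours, not the evaluation code of any cited system):

```python
# WER = (substitutions + insertions + deletions) / reference word count,
# computed by dynamic programming over word sequences.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("the cat sat", "the cat sat down"))  # 1 insertion / 3 words = 0.333
```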
Before the experiment, the in-house specialists gave suggestions regarding relevant data to gather and discussed the important differences between the ASR and manual transcriptions. For the experiment, the manual transcription and ASR transcriptions were done in parallel. With the intent of using Text A as reference material, only the ASR transcriptions received manual post-editing. The experiment was performed for a week; on the first day, only the first stage was tested, to ensure the integration worked as intended. For that week the Icelandic parliament was in session for four days. It is from the last three days that this data was gathered. The ASR system -------------- The details about the preparation of the ASR training data and the development of the ASR can be found in [@Helgadottir2017corpus]. The acoustic model is a deep neural network, based on a recipe developed for the Switchboard corpus[^1], using the Kaldi ASR toolkit [@povey2011kaldi]. It is a sequence-trained neural network based on lattice-free MMI [@povey2016purely]. It consists of seven time-delay deep neural network layers [@waibel1989phoneme] and three long short-term memory layers [@sak2014long]. The network takes 40-dimensional LDA feature vectors and a 100-dimensional i-vector as input. Two n-gram language models were trained using the KenLM toolkit [@heafield2011kenlm]. The first one is a pruned trigram model, used in the decoding. The other one is a 5-gram language model, trained on the total parliamentary text set, 55M tokens, and is used for re-scoring decoding results. The lexicon is based on the pronunciation dictionary from the Hjal project [@rognvaldsson2003icelandic], available at M[á]{}lf[ö]{}ng[^2]. We added words that appeared three or more times in the language model training data, with some constraints, resulting in a dictionary containing roughly 200k words. Inconsistencies in the pronunciation dictionary were also fixed. The WER of the ASR, before any post-processing is done, is $9.63\%$ on the test set, using 1500 hours of parliamentary speeches and corresponding text for training. In real life, this number is going to be higher, partly because of imperfect punctuation reconstruction and disparate casing of many words in our texts, and partly because the ASR test set had been manually cleaned to better match the audio. Automatic post-processing ------------------------- The ASR returns a stream of words with no punctuation or formatting. Since the purpose of the system is to publish parliamentary speeches, human readability needs to be factored into the final transcription. The OpenGrm Thrax Grammar Development tool [@tai2011thrax; @roark2012opengrm] was used to compile grammars into weighted finite-state transducers, in order to denormalize numbers and abbreviations, according to parliamentary conventions. The Punctuator toolkit [@tilk2016bidirectional] is used to restore punctuation in
--- author: - 'G. Ruffini - Jan 2017 — Starlab Technical Note, TN00344 (v1.0)' bibliography: - 'kolmogorov.bib' title: 'Lempel-Ziv complexity reference' --- Abstract {#abstract .unnumbered} ======== The aim of this note is to provide some reference facts for LZW—mostly from Thomas and Cover [@Cover:2006aa]—adapted to the needs of the Luminous project. LZW is an algorithm to compute a Kolmogorov Complexity estimate derived from a limited programming language that only allows copy and insertion in strings (not a Turing-complete set). Despite its delightful simplicity, it is rather powerful and fast. We then focus on definitions of LZW-derived complexity metrics consistent with the notion of descriptive length, and discuss different normalizations, which result in a set of metrics we call $\rho_0$, $\rho_1$ and $\rho_2$, in addition to the Description Length $l_{LZW}$ and the Entropy Rate. LZW compression: the main concept ================================= The main idea in LZW is to look for repeating patterns in the data, and instead of rewriting repeating sequences, refer to the last one seen [@Lempel:1976aa]. As Kaspar clearly states, LZW is the Kolmogorov Complexity computed with a limited set of programs that only allow copy and insertion in strings [@Kaspar:1987aa; @Ruffini:2016ad].\ “We do not profess to offer a new absolute measure for complexity which, as mentioned already, we believe to be nonexistent. Rather, we propose to evaluate the complexity of a finite sequence from the point of view of a simple self-delimiting learning machine which, as it scans a given n-digit sequence $S=s_{1}\cdot s_{2} \cdots s_{n}$, from left to right, adds a new word to its memory every time it discovers a substring of consecutive digits not previously encountered. The size of the vocabulary, and the rate at which new words are encountered along $S$, serve as the basic ingredients in the proposed evaluation of the complexity of $S$.”\ We consider a string of characters in an alphabet with $A$ symbols (typically binary) of length $n$. From Wikipedia, a high-level view of the encoding algorithm is shown here: 1. Initialize the dictionary to contain all strings of length one. 2. Find the longest string W in the dictionary that matches the current input. 3. Emit the dictionary index for W to output and remove W from the input. 4. Add W followed by the next symbol in the input to the dictionary. 5. Go to Step 2. After applying LZW, we will end up with a set of $c(n)$ words (or phrases, as they are sometimes called) that go into a dictionary. The length of the compressed string will be $l_{LZW} \leq n$ (the analog of Kolmogorov or algorithmic complexity).\ The [**description length of the sequence encoded by LZW**]{} is less than or equal to the number of phrases times the number of bits needed to identify a seen phrase plus the bits to specify a new symbol (to form a new phrase), hence[^1] $$l_{LZW} \le c(n) \log_{2} \left[ c(n)+ \log_{2} A \right] \approx c(n) \log_{2} \left[ c(n)\right]$$ The process of digitization =========================== When we digitize (e.g., binarize) a signal prior to LZW, we are creating a new string from the data, and we make an explicit choice on what aspects of the data we wish to compress. In this process we destroy information; we are going to do lossy compression.
Thus, the choice of digitization results in us having access to a subset of the features of the original string.\ A reasonable strategy is to preserve as much information as possible in the resulting transformed string. In this sense, using methods that maximize the entropy of the resulting series is recommended, such as using the median for thresholding (this is guaranteed to result in $H_{0}=1$)[^2].\ On the other hand, other methods that destroy more information may tap and highlight other, also relevant features of the data. At this stage, then, how to binarize or preprocess (e.g., filter) the original string is an empirical question. The same applies to the choice of compression method, of course, as LZW is just one framework for compression. LZW and entropy rate for stochastic processes ============================================= The main fact from Thomas and Cover [@Cover:2006aa] refers to stochastic random processes $\{X_{i}\}$. A key concept is the [**entropy rate**]{} of the stochastic process, given by $$\mathcal H(X)= \lim_{n\rightarrow \infty} {1\over n} H(X_1, ..., X_n),$$ when this limit exists, with $H$ denoting the usual multivariate entropy of $X$, $ H(X)=-E_{X}[\log P(X)] $. It is an important theorem that for stationary processes, $$\mathcal H(X) = \lim_{n\rightarrow \infty} H(X_n|X_{n-1}, X_{n-2}, ..., X_1).$$ Let also $$H_{0}(p) = -p\log p -(1-p)\log (1-p)$$ denote the [**univariate entropy**]{}, with $p$ the probability of a Bernoulli (binary) process (Markov chain[^3] of order zero).\ We note that the entropy rate of a stochastic process is non-increasing as a function of order, that is, $0\leq \mathcal H \leq \ldots \leq H_{q} \leq \ldots \leq H_{0} \leq 1$.\ The fundamental relation is that description length is closely related to entropy rate, $$l_{LZW}= c(n) \log_{2} \left[ c(n)+ \log_{2} A \right] \approx c(n) \log_{2} \left[ c(n)\right] \longrightarrow {n}{\mathcal H}$$ Another important result in what follows is that with probability 1 (Thomas and Cover Theorem 13.5.3) $$\limsup_{n\rightarrow \infty} \frac{l_{LZW}}{n} \leq \mathcal H$$ which we can rewrite as $$\limsup_{n\rightarrow \infty} \frac{c(n) \log_{2} c(n)}{n} \leq \mathcal H \leq H_{0}$$ and use to rewrite (in the limit above) $$c(n) \leq \frac{n \mathcal H}{\log_{2} c(n) } \leq \frac{n \mathcal H}{\log_{2} \frac{n \mathcal H}{\log_{2} c(n) }} \sim \frac{n \mathcal H}{\log_{2} n } \leq \frac{n H_{0}}{\log_{2} n }$$ which we use below for normalization purposes. Metrics ======= Two metrics are used in the field: one is $c(n)$ and the other is $l_{LZW}$. Of the two, the latter is more closely related to Kolmogorov complexity or description length. Both contain similar information (in fact one is a monotonic function of the other). Fundamental Normalization of LZW ================================ The purest way to normalize this metric is to normalize by the original string length $n$, $$\rho_{0} = l_{LZW} / n = \frac{c(n) \log_{2} \left[ c(n) + \log_{2} A \right] } {n} \rightarrow \mathcal H$$ with units of bits per character. This is the [**LZW compression ratio**]{}. Other normalizations or measures ================================ A typical normalization adopted by the literature is to “divide by entropy”. By this we mean $\rho_{1}= l_{LZW} / (n H_{0})$. In the literature this is usually defined through $c(n)$, $$\rho_{1} = \frac{c(n)}{\frac{n H_{0}}{\log_{2} n } } \sim \frac{\mathcal H}{H_{0}} \sim \frac{l_{LZW} }{nH_{0}} \rightarrow \frac{\mathcal H}{H_{0}}$$ (with units of bits per character).
Essentially the same can be computed from the randomly reshuffled data series, which with high probability forces $l_{LZW} \sim n H_{0}$ by destroying 2nd order interactions. Hence, $$\rho_{1} \approx \frac{l_{LZW}}{n H_{0}} \approx \frac{l_{LZW} }{l^{shuf}_{LZW}}$$ This ratio tells us how much information density is hidden in the 2nd and higher order entropy rate as compared to the first order one.\ We can think of this as being the comparison of “first order apparent complexity” (entropy) and an estimate
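A minimal Python sketch (ours, not part of the note) ties these pieces together: the phrase counting, the description length $l_{LZW}$, and the normalizations $\rho_0$ and $\rho_1$, the latter computed both via $H_0$ and via the reshuffled-series estimate above.

```python
# LZ76/Kaspar-style parsing: a new word is added every time a previously
# unseen substring of consecutive symbols is encountered.
import math
import numpy as np

def lzw_complexity(s, alphabet_size=2):
    phrases, current = set(), ""
    for ch in s:
        current += ch
        if current not in phrases:
            phrases.add(current)
            current = ""
    c = len(phrases) + (1 if current else 0)   # count a trailing partial phrase
    return c, c * math.log2(c + math.log2(alphabet_size))

def metrics(x, rng):
    bits = "".join("1" if v > np.median(x) else "0" for v in x)  # H0 near 1
    n = len(bits)
    _, l = lzw_complexity(bits)
    p = bits.count("1") / n
    H0 = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    shuffled = "".join(rng.permutation(list(bits)))
    _, l_shuf = lzw_complexity(shuffled)
    return l / n, l / (n * H0), l / l_shuf     # rho_0, rho_1, reshuffle estimate

rng = np.random.default_rng(1)
print(metrics(rng.standard_normal(10000), rng))  # all near 1 for white noise
```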
--- abstract: 'The magnetic structures and the magnetic phase transitions in the Mn-doped orthoferrite TbFeO$_3$ studied using neutron powder diffraction are reported. Magnetic phase transitions are identified at $T^\mathrm{Fe/Mn}_N \approx$ 295 K where a paramagnetic-to-antiferromagnetic transition occurs in the Fe/Mn sublattice, $T^\mathrm{Fe/Mn}_{SR} \approx$ 26 K where a spin-reorientation transition occurs in the Fe/Mn sublattice and $T^\mathrm{R}_N \approx$ 2 K where Tb-ordering starts to manifest. At 295 K, the magnetic structure of the Fe/Mn sublattice in TbFe$_{0.5}$Mn$_{0.5}$O$_3$ belongs to the irreducible representation $\Gamma_4$ ($G_xA_yF_z$ or $Pb''n''m$). A mixed-domain structure of ($\Gamma_1 + \Gamma_4$) is found at 250 K which remains stable down to the spin re-orientation transition at $T^\mathrm{Fe/Mn}_{SR}\approx$ 26 K. Below 26 K and above 250 K, the majority phase ($> 80\%$) is that of $\Gamma_4$. Below 10 K the high-temperature phase $\Gamma_4$ remains stable down to 2 K. At 2 K, Tb develops a magnetic moment value of 0.6(2) $\mu_\mathrm{B}/$f.u. and orders long-range in $F_z$ compatible with the $\Gamma_4$ representation. Our study confirms the magnetic phase transitions reported already in a single crystal of TbFe$_{0.5}$Mn$_{0.5}$O$_3$ and, in addition, reveals the presence of mixed magnetic domains. The ratio of these magnetic domains as a function of temperature is estimated from Rietveld refinement of neutron diffraction data. Indications of short-range magnetic correlations are present in the low-$Q$ region of the neutron diffraction patterns at $T < T^\mathrm{Fe/Mn}_{SR}$. These results should motivate further experimental work devoted to measuring the electric polarization and magnetocapacitance of TbFe$_{0.5}$Mn$_{0.5}$O$_3$.' author: - 'Harikrishnan S. Nair' - Tapan Chatterji - 'C. M. N. Kumar' - Thomas Hansen - Hariharan Nhalil - Suja Elizabeth - 'André M. Strydom' title: 'Magnetic structures and magnetic phase transitions in the Mn-doped orthoferrite TbFeO$_3$ studied by neutron powder diffraction' --- \[INTRO\]Introduction ===================== The orthoferrite \[$R$FeO$_3$; $R$ = rare earth\] oxides have been recently re-investigated experimentally and theoretically from the fascinating perspective of multiferroicity. [@mandal2011spin; @shang2013multiferroic; @deng2015magnetic; @pavlov2012optical; @zhao2014creating] Pursuing the recent line of multiferroics research, theoretical work on $R$FeO$_3$ thin films has identified that strain can convert the paraelectric phase of orthoferrites into a ferroelectric one, thus rendering them multiferroic [@zhao2014creating]. It has been found theoretically that for large values of strain on $R$FeO$_3$ with a large rare-earth ion, giant polarization is realized. In fact, with increasing strain, a new ferroelectric phase, not observed in any perovskite before, is realized. Multifunctional properties like large magnetoelectric coupling and ultrafast optical control of spins have been observed in the orthoferrites [@tokunaga2008magnetic; @yamaguchi2013terahertz; @mikhaylovskiy2014terahertz]. The $R$FeO$_3$ realize high Néel temperatures, $T_N \approx$ 623–740 K [@marezio1970crystal; @eibschutz1967mossbauer]; however, in bulk form they are paraelectric rather than ferroelectric, suggesting weak multiferroic effects. Weak ferroelectric polarization has been recently reported in Gd and Sm orthoferrites [@tokunaga2008magnetic; @lee2011spin] which are thought to have an “improper” origin induced by magnetic order.
In TbFeO$_3$, an unusual incommensurate magnetic phase was discovered through neutron diffraction [@artyukhin2012solitonic] – it was shown that the exchange of spin waves between extended topological defects could result in novel magnetic phases drawing parallels with the Yukawa forces that mediate between protons and neutrons in a nucleus. The Fe$^{3+}$ moments in TbFeO$_3$ exhibit $G_xA_yF_z$ ($Pb'n'm$) spin configuration at room temperature [@bouree1975mise; @bertaut1967structures; @tejada1995quantum] which is accompanied by a spin-reorientation to $F_xC_yG_z$ ($Pbn'm'$). At 3 K, another spin re-orientation occurs to revert to the $G_xA_yF_z$ ($Pb'n'm$) structure. It is considered that the Tb$^{3+}$ spins order in $F_xC_y$ structure in 10 - 3 K interval and in the $A_xG_y$ structure below 3 K. Doping the $R$-site in $R$FeO$_3$ with another rare earth is found to be profitable to realize electric field induced generation and reversal of ferromagnetic moments [@tokunaga2012electric; @tokunaga2014magnetic]. Chemical substitution at the Fe-site in $R$FeO$_3$ also brings about interesting multiferroic effects. For example, in the case of Mn-substituted YFeO$_3$, magnetoelectric and magnetodielectric effects at different temperatures were reported[@mandal2011spin]. First-order spin-reorientation effects were observed as a result of Mn-substitution however, the magnetodielectric effects were observed at lower temperatures than $T_N$ or $T_{SR}$. Giant magnetodielectric coupling is also observed in another doped-orthoferrite, DyMn$_{0.33}$Fe$_{0.67}$O$_3$ [@hong2012temperature]. Spin-reorientation effects and magnetic sublattice effects were also observed in doped-orthoferrites with large $R$[@nagata2001magnetic; @mihalik2013magnetic]. $G$-type magnetic ordering of Mn$^{3+}$ and Cr$^{3+}$ spins were observed below $T_N \approx$ 84 K in the case of TbMn$_{0.5}$Cr$_{0.5}$O$_3$[@staruch2014magnetic], in addition to signatures of short-range magnetic correlations observed below 40 K which was attributed to the ferromagnetic component from canting of magnetic moments along the $c$-axis. In the case of Mn-substituted compound TbFe$_{0.75}$Mn$_{0.25}$O$_3$, the $T_N$ was determined to be 550 K and the $T_{SR}$ as 180 K through magnetic studies and Mößbauer spectroscopy[@kim2011spin].\ In our previous investigation using magnetometry it was inferred that TbFe$_{0.5}$Mn$_{0.5}$O$_3$ orders in $A_xG_yC_z$ ($\Gamma_1$) structure at $T^\mathrm{Fe/Mn}_N \approx$ 286 K followed by a spin re-orientation at $T^\mathrm{Fe/Mn}_{SR} \approx$ 28 K to the structure $G_xA_yF_z$ ($\Gamma_4$)[@hariharan2015reorientation]. No signature of Tb ordering was obtained in the previous study. In the present manuscript, we make a detailed investigation of the magnetic structures and spin re-orientation transitions in TbFe$_{0.5}$Mn$_{0.5}$O$_3$ using neutron powder diffraction in order to confirm the magnetic structure arrived at through macroscopic magnetization earlier. We update the magnetic structures as a function of temperature and observe that they evolve between $\Gamma_1$ and $\Gamma_4$ through mixed-domains of ($\Gamma_1$ + $\Gamma_4$). \[EXP\]Experimental details =========================== Polycrystalline samples of TbFe$_{0.5}$Mn$_{0.5}$O$_3$ were prepared by conventional solid state reaction methods employing the oxides Tb$_2$O$_3$, FeO, MnO$_2$ (all from Sigma Aldrich, 99.9$\%$) as precursors. The thoroughly-mixed powder was heated at 1300$^{\circ}$ C for 4 days with intermediate grinding. 
The phase-purity of the black powder that resulted was checked using x-ray diffraction employing a Philips X’pert diffractometer with Cu-$K\alpha$ radiation. The chemical composition of the prepared sample was determined using the inductively coupled plasma atomic emission spectroscopy (ICP-AES) method. Magnetization measurements were performed on a sintered pellet of the sample in a magnetic property measurement system (MPMS, Quantum Design, San Diego). Neutron powder diffraction experiments were performed on 8 g of TbFe$_{0.5}$Mn$_{0.5}$O$_3$ powder on the instrument D1B at the ILL, Grenoble. A wavelength of 2
--- abstract: | This paper elaborates on relationalism about space and time as motivated by a minimalist ontology of the physical world: there are only matter points that are individuated by the distance relations among them, with these relations changing. We assess two strategies to combine this ontology with physics, using classical mechanics as example: the Humean strategy adopts the standard, non-relationalist physical theories as they stand and interprets their formal apparatus as the means of bookkeeping of the change of the distance relations instead of committing us to additional elements of the ontology. The alternative theory strategy seeks to combine the relationalist ontology with a relationalist physical theory that reproduces the predictions of the standard theory in the domain where these are empirically tested. We show that, as things stand, this strategy cannot be accomplished without compromising a minimalist relationalist ontology. *Keywords*: relationalism, parsimony, atomism, matter points, ontic structural realism, Humeanism, classical mechanics author: - 'Antonio Vassallo[^1], Dirk-André Deckert[^2], Michael Esfeld[^3]' bibliography: - 'references\_fundont.bib' title: Relationalism about mechanics based on a minimalist ontology of matter --- *Accepted for publication in European Journal for Philosophy of Science* From atomism to relationalism about space and time {#sec:motivation} ================================================== Atomism, going back to the pre-Socratic philosophers Leucippus and Democritus and turned into a precise physical theory by Newton, is the most successful paradigm in both classical physics and traditional natural philosophy. On the one hand, it is a proposal for a fundamental ontology that is most parsimonious and most general, applying to everything in the universe. On the other hand, it offers a clear and simple explanation of the realm of our experience. Macroscopic objects are composed of fundamental, indivisible particles. All the differences between the macroscopic objects – at a time as well as in time – are accounted for in terms of the spatial configuration of these particles and its change. However, there is no straightforward answer to the question of what are the atoms. Both Democritus and Newton adopt the view of the atoms being inserted into an absolute space. Consequently, they are committed to a dualism between on the one hand space and on the other hand matter in the guise of atoms filling space. But what is it that fills space? In other words, what makes up the difference between a location in space being occupied by an atom and its being empty? Physical properties taken to characterize matter – such as mass, charge, or spin, etc. – are introduced in physical theories in terms of their causal role for the evolution of the configuration of matter. Consequently, invoking these properties cannot answer the question of what it is that evolves in space as described or prescribed by these properties (see [@Blackburn:1990aa]). To put it differently, the parameters that figure in the equations of a physical theory *presuppose* a spatial configuration of matter to which they are applied. This is particularly evident in the case of the quantum state as represented by the wave function, which is defined on configuration space – that is, the mathematical space each of whose points represents a possible configuration of matter in physical space. 
Hence, the quantum state presupposes a configuration of objects in physical space to which it is applied. But also in classical mechanics, parameters such as mass and charge presuppose objects given in terms of their spatial location to which they are applied. If one reacts to this situation by taking the objects in space to be bare substrata (cf. [@Locke:1690aa], book II, chapter XXIII, § 2), one runs into the problem that a bare substratum or primitive stuff-essence of matter is mysterious. The same goes for a primitive thisness (haecceity) of the material objects. However, this impasse of not being able to come up with a characterization of matter that stands up to scrutiny arises only if one accepts a dualism of an absolute space and matter as that what fills space. If one abandons this dualism and conceives atomism in terms of relationalism about space, then the spatial relations are available to answer the question of what the atoms are. This is the idea that we shall pursue in this paper, making use of the Cartesian conception of matter in terms of (spatial) extension and the stance of ontic structural realism according to which objects are individuated by the relations in which they stand – i.e. distance relations in our case. In other words, our claim is that atomism, if set out in terms of point particles being inserted into an absolute space, fails to achieve the aim of being a parsimonious ontology. The consequence of this failure is that atomism, thus conceived, is unable to formulate a cogent answer to the question of what the atoms are. To meet the requirement of parsimony, one has to abandon points of space and retain only point particles (matter points), with these point particles standing in distance relations that individuate them. Atomism hence motivates relationalism about space and time instead of being tied to the commitment of an absolute space and time. Furthermore, relationalism is motivated by the fact that the commitment to absolute space and time introduces a surplus structure that is not needed to account for the empirical evidence. Thus, Leibniz points out in his famous objections to Newton’s substantivalism that there are many different possibilities to place or to transform the whole configuration of matter in an absolute space that leave the spatial relations among the material objects unchanged so that there is no physical difference between them (see notably Leibniz’ third letter, §§ 5-6, and fourth letter, § 15, in [@Leibniz:1890aa], pp. 363-364, 373-374, English translation [@Leibniz:2000aa]). However, Leibniz’ objection does not apply to all forms of substantivalism in classical physics, not to mention relativity physics. For instance, it can be circumvented in neo-Newtonian space-time (see e.g. [@Maudlin:1993aa], p. 192; see furthermore [@Pooley:2013aa], for a recent and comprehensive overview of the substantivalism/relationalism debate). Nonetheless, the general objection of introducing a commitment to surplus structure hits any form of space-time substantivalism: assume, as is well supported by all the available physical evidence, that the configuration of matter consists in finitely many discrete objects, such as point particles. 
If that configuration is embedded in an absolute space, then that space will stretch out to infinity, unless an arbitrary boundary is imposed (at least in a classical setting, since in general relativity the global matter distribution might determine a compact geometry); in any case, it will stretch out far beyond the actual particle configuration. However, all the experimental evidence is evidence of relative particle positions and change of particle positions, that is, motion. Thus, space is needed in physics only to describe the configuration of matter and, notably, the change in that configuration. Consequently, subscribing to the existence of an absolute space in which that configuration is embedded amounts to inflating the ontology. Against this background, our claim is that in order to accomplish the task of elaborating on a parsimonious ontology of the physical world – at least as far as the setting of classical, pre-relativistic physics is concerned –, only the following two axioms are required: \[a1\] There are distance relations that individuate objects, namely matter points. \[a2\] The matter points are permanent, with the distances between them changing. We submit that these two axioms are necessary and minimally sufficient to formulate an ontology of the physical world in the context of classical, pre-relativistic physics that is empirically adequate, given that all the empirical evidence comes down to relative particle positions and change of these positions. Why should one single out the distance relations? If there is a plurality of objects, there has to be a certain type of relations in virtue of which these objects make up a configuration that then is the world. Generally speaking, one can conceive different types of relations making up different sorts of worlds. For instance, one may imagine thinking relations that individuate mental substances making up a world of minds, etc. Lewis’s hypothetical basic relations of like-chargedness and opposite-chargedness, by contrast, would not pass the test, since, as Lewis notes himself, these relations fail to individuate the objects that stand in them as soon as there are at least three objects ([@Lewis:1986ab], p. 77). When it comes to the natural world, what is at issue are relations that qualify as providing for extension. That is the reason to single out distance relations. In a future theory of quantum gravity, these relations may be conceived in a different manner than in our current and past physical theories. Nonetheless, we submit that relations providing for extension – namely distances – are indispensable for an ontology of the natural world that is to be empirically adequate. Change in these relations then is sufficient to obtain empirical adequacy. That is the reason to pose the two above-mentioned axioms, and only these two. Accordingly, distances individuating point-objects that then are matter points and change of these distances are the primitives of a minimalist ontology of the physical world, again at least as far as classical, pre-relativistic physics is concerned. To convey what axiom \[a1\] means, we have to choose a representation. Let us consider a universe consisting of a finite number of $N\in\mathbb{N}$ matter points. Taking the number of matter points to be finite is sufficient for empirical adequacy and will make the following discussion
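As a purely illustrative toy (ours, not the authors' formalism), the idea that a configuration is nothing over and above its distance relations, and that these relations individuate the matter points, can be sketched in a few lines:

```python
# A "configuration" of N matter points represented only by its matrix of
# pairwise distances D; the coordinates below are auxiliary scaffolding used
# to generate D and carry no ontological weight in this picture.
import numpy as np

rng = np.random.default_rng(3)
pts = rng.uniform(size=(5, 3))
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

# Individuation check: no two points share the same multiset of distances
# to the rest of the configuration (true almost surely for generic D).
profiles = [tuple(sorted(np.delete(D[i], i))) for i in range(len(D))]
print(len(set(profiles)) == len(profiles))
```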
--- abstract: 'We present measurements of the fluctuation superconductivity in an underdoped thin film of La$_{1.905}$Sr$_{0.095}$CuO$_4$ using time-domain THz spectroscopy. We compare our results with measurements of diamagnetism in a similarly doped crystal of La$_{2-x}$Sr$_x$CuO$_4$. We show through a vortex-plasma model that if the fluctuation diamagnetism solely originates in vortices, then they must necessarily exhibit an anomalously large vortex diffusion constant, which is more than two orders of magnitude larger than the Bardeen-Stephen estimate. This points to either the extremely unusual properties of vortices in the under-doped $d$-wave cuprates or a contribution to the diamagnetic response that is not superconducting in origin.' author: - 'L.S. Bilbro' - 'R.Valdés Aguilar' - 'G. Logvenov' - 'I. Bozovic' - 'N.P. Armitage' title: 'On the possibility of fast vortices in the Cuprates: A vortex plasma model analysis of the THz conductivity and diamagnetism in La$_{2-x}$Sr$_{x}$CuO$_4$' --- Nearly 25 years after the demonstration of high-temperature superconductivity in the cuprate superconductors and more than 15 years since the discovery of the anomalous pseudogap in underdoped compounds, the microscopic physics of the superconducting phase and its relationship to the pseudogap remain hotly debated. Due to their low superfluid densities, it is generally agreed that superconducting fluctuations will be large and prominent in these materials [@Emery95a]. What is less agreed upon is the temperature range above $T_c$ in which superconducting correlations are truly significant and their contributions to the physics of the pseudogap. Experimental probes such as photoemission, tunneling, NMR spin relaxation, heat capacity, the Nernst effect, and diamagnetic susceptibility have shown evidence for a gaplike structure reminiscent of $d$-wave superconductivity in the density of states implying a strong connection of the pseudogap to superconductivity and/or superconducting correlations at temperatures well above $T_c$ [@Timusk99a; @Norman05a; @Xu00a; @Wang05a; @Li10a]. However, other mechanisms exist that can create such structures in the density of states [@Hlubina95a; @Chakravarty01a]. ![image](PRBratioFig1.pdf){width="2.1\columnwidth"} Interestingly, perhaps the most essential probe of the electronic properties – charge transport – does not show an extended range of superconducting fluctuations in temperature or field. [@Corson99a; @Miura02a; @Ando95a]. In La$_{2-x}$Sr$_x$CuO$_4$ the region of enhanced diamagnetism extends almost 100 K above $T_c$ [@Li10a] while the THz fluctuation conductivity has an extent limited to 10 - 20 K above $T_c$ [@Bilbro11a]. This is surprising as one might expect a close correspondence between these quantities [@Halperin79a]. Similarly, it has been argued from Nernst and diamagnetism measurements that $H_{c2}$ may be as high as 150 T [@Li10a], while the resistive transition is essentially complete in optimally and underdoped LSCO by 45 T [@Miura02a; @Ando95a]. In this Rapid Communication we present results of our detailed THz time-domain spectroscopy (TTDS) study of the fluctuation superconductivity in LSCO. The THz fluctuation conductivity shows an onset approximately only 10 K above $T_c$, which contrasts strongly with measurements like diamagnetism in which the onset is approximately 100K above $T_c$. 
We analyze our data in the context of a vortex plasma model and show, however, that it is not the functional dependences of these data that are in strongest contrast, but their overall scales. Conventional vortex dynamics would predict a much larger fluctuation conductivity given the size of diamagnetism. We demonstrate that if the regime of enhanced diamagnetism originates in vortices, then the vortex diffusion constant $D$ must be anomalously large and in the range of 10-30 cm$^2/$sec above $T_c$. This is more than two orders of magnitude larger than conventional benchmarks based on the Bardeen-Stephen model [@Stephen65a]. It is then a well-posed theoretical challenge to explain a $D$ this large. This points to either extremely unusual vortex properties in the underdoped $d$-wave cuprates or a contribution to the diamagnetic response that is not superconducting in origin. We begin with the observation that the ratio $\chi_{2D}/\mu_0 G$ of the two-dimensional (2D) susceptibility over the conductance has units of length squared over time, i.e., diffusion [@Torron94a]. One can show that in a diffusive vortex plasma this ratio gives a unique measure of the vortex diffusion constant [@Orenstein06a]. Using the notation of Halperin and Nelson [@Halperin79a], but in SI units, the 2D susceptibility and conductance of a conventional thin superconducting film at temperatures above a vortex unbinding transition are $$\begin{aligned} \chi_{2D} = - \frac{ c_2 \pi^2\mu_0 k_B T}{\phi_0^2} \xi^2 \label{Suscp}\\ G_S = \frac{1}{\phi_0^2 n_f \mu} \label{Cond}\end{aligned}$$ Here $\xi$ is a correlation length, $\phi_0$ is the flux quantum, and $\mu$ is the vortex mobility. $n_f$ is the areal density of thermally excited free vortices, which is related to the correlation length by the relation $n_f = 1/(2\pi c_1 \xi^{2})$. $c_1$ and $c_2$ are small dimensionless constants. It is reasonable to expect that very close to $T_c$ vortices are the principal degrees of freedom in even quasi-2D materials. Note that these are essentially model-free forms constrained only by dimensional analysis, Maxwell equations, and immutable properties of superfluid vortices like the Josephson relation. Using accepted values for $c_1$ and $c_2$ [@Halperin79a], and the Einstein relation $D = \mu k_B T$, the expression $$D(T) = - \frac{6}{ \mu_0 } \frac{\chi_{2D}}{G_S} \label{ratio}$$ follows [@Orenstein06a] and in principle may be used to give a determination of the vortex diffusion constant $D$ using only experimentally determined quantities. Interestingly, this treatment using the analogous equations within the Gaussian approximation and in the dirty limit gives the diffusion constant of the normal state *electrons*. This is potentially useful as a diagnostic considering that electronic diffusion is proportional to the normal-state conductance while vortex diffusion is conventionally proportional to the normal-state resistance. One may also heuristically motivate Eq. \[ratio\] through the fact that correlations in length (diamagnetism in 2D $\propto \xi^2$) probed by a thermodynamic measurement like susceptibility and the correlations in time ($1/\Omega$) probed by a dynamic measurement like conductivity are related within diffusive dynamics as $\xi^2 \propto D/ \Omega$, where $\Omega$ is the characteristic fluctuation rate. A problem with applying Eq. \[ratio\] to real type-II superconductors is that, in general, the motion of vortices is limited by both dissipative (viscous) flux-flow and pinning forces.
In 2D, the classical equation of motion for a single vortex is $ \dot{x} / \mu + k_p x = K_y \phi_0$ where $K_y$ is a driving sheet current, $x$ is the vortex displacement and $k_p$ is a pinning constant [@Gittleman66a]. Here the complex physics of pinning and flux-flow are represented by phenomenological parameters. This leads to an expression for the 2D resistance from moving vortices as $R_v = \phi_0^2 n_f \mu [ 1 / (1+ i \omega_{d}/\omega)]$ where $\omega_{d} = k_p \mu $ is the “depinning frequency". This expression shows that at frequencies well above $\omega_{d}$, viscous forces dominate and the motion of vortices becomes predominately dissipative. This is a considerable simplification. In this limit the expression for $R_v$ reduces to the inverse of Eq. 2 for the vortex conductance. In cuprate superconductors, $\omega_{d}$ is generally of the order of a few GHz [@Golosovsky94a]. This puts the appropriate frequency regime to probe purely dissipative vortex transport in the range of our TTDS measurements. We have measured the THz range optical conductivity of molecular beam epitaxy (MBE) grown LSCO films using a homebuilt transmission-based time-domain THz spectrometer. With this technique the complex transmission function can be directly inverted to get the complex conductivity [@Comment2]. In Fig. 1(a) and (b) we present the real ($\sigma_1$) and imaginary ($\sigma_2$) THz conductivity of one particular LSCO film (x=0.095, $T_c$=23.5K) out of
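For concreteness, the extraction of $D$ from Eq. \[ratio\] amounts to simple unit bookkeeping; the short sketch below (ours) uses placeholder magnitudes, not our measured values.

```python
# Effective vortex diffusion constant from a measured 2D susceptibility and
# sheet conductance via D = -(6/mu0) * chi_2D / G_S (SI units throughout).
import numpy as np

mu0 = 4e-7 * np.pi               # vacuum permeability, T m / A

def vortex_diffusion(chi_2d, G_s):
    """chi_2d: 2D susceptibility in m (diamagnetic, hence negative);
    G_s: sheet conductance in S; returns D in m^2/s."""
    return -6.0 * chi_2d / (mu0 * G_s)

# Placeholder magnitudes only, to illustrate the unit analysis:
chi_2d = -1e-12                  # m
G_s = 1e-2                       # S per square
print(vortex_diffusion(chi_2d, G_s) * 1e4, "cm^2/s")   # ~ a few cm^2/s
```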
--- author: - title: '**Comparison of potential ASKAP HI survey source finders** ' --- Introduction ============ Radio astronomy is facing a new era, acquiring extremely large data volumes with the coming of the Square Kilometre Array (SKA) [@dewdney2009] and precursors such as MeerKAT [@jonas2009] in South Africa, APERTIF [@verheijen2008] in the Netherlands and the Australian SKA Pathfinder (ASKAP) [@deboer2009] in Australia. Various continuum (2D) and spectral line (3D) surveys, which cover large fractions of the sky, will be conducted with these telescopes. The surveys are expected to detect millions of objects, accelerating the need for reliable automated source finders. A good source finder should have high [*completeness*]{} and high [*reliability*]{}, i.e., a low rate of false detections. Choosing a suitable trade-off between both parameters is necessary and depends on both the algorithm and the rms uniformity of the data. Detecting objects is relatively easy in the case of (strong) point sources, but becomes more complicated in the case of irregular shapes and diffuse or extended emission in one or more dimensions and at low signal-to-noise ratios. The work presented in this paper aims to highlight the strengths and weaknesses of potential 3D source finders for the Deep Investigations of Neutral Gas Origins (DINGO) survey [@meyer2009] and the Widefield ASKAP L-band Legacy All-sky Blind Survey (WALLABY) [@koribalski2009]. These are two of the large survey science projects for ASKAP [@johnston2008]. To achieve the respective science goals, we aim to develop source finding algorithms which reliably and efficiently recover 3D sources. We have identified five different source finders that will be subjected to testing and comparison: 1) the [Duchamp]{} source finder [@whiting2011a], 2) the [Gamma-finder]{} [@boyce2003], 3) the CNHI source finder [@jurek2011], 4) the 2D-1D Wavelet Reconstruction source finder [@floer2011] and 5) the [S+C finder]{} [@serra2011a]. Testing of each algorithm was done on the same set of data cubes. The first contains 961 point sources with varying peak fluxes and Gaussian velocity profiles; the second cube contains 1024 modelled galaxies with more realistic properties such as extended disks, inclinations and rotation profiles. Here we compare their performance in terms of completeness and reliability. In Section 2 we briefly summarise the main properties of the source finding algorithms and in Section 3 we describe the testing method and the two model cubes that have been used for the testing. The test results are presented in Section 4, followed by a discussion in Section 5. We compare in detail the performance and reliability of the source finders, to understand where the strong and weak points of the different source finders are and to highlight possible improvements. We finish with a short conclusion in the final section. Source Finders ============== Here we provide a short description of the five source finders compared throughout the paper. For a more extended review of the individual algorithms we refer to the reference papers describing each method in detail. Duchamp source finder --------------------- [Duchamp]{} [@whiting2011a] is a source finder designed for 3D data, although it can be used for 2D and even 1D datasets. The source finder has been developed by Matthew Whiting at CSIRO.[^1] [Duchamp]{} identifies sources by simply applying a specified flux or signal-to-noise threshold and searching for signals above that threshold.
In a second step, detections are merged or rejected based on several criteria specified by the user. To improve its performance, [Duchamp]{} offers several methods of preconditioning and filtering of the input data, including spatial and spectral smoothing as well as wavelet reconstruction of the entire image or cube. In a final step, [Duchamp]{} measures several basic parameters for each detected source, including position, radial velocity, size, line width, and integrated flux. The performance of the [Duchamp]{} source finder is tested in @westmeier2011. CNHI source finder ------------------ The Characterised Noise (CNHI) source finder [@jurek2011] is being developed as part of the WALLABY design study. The CNHI source finder treats spectral datacubes as a collection of spectra, using the Kuiper test, which is a variant of the Kolmogorov-Smirnov test, to identify regions in each spectrum that do not look like noise. The Kuiper test is used to calculate the probability that the test region and the rest of the spectrum come from the same distribution of voxel flux values. If the probability is sufficiently low, then the test region is flagged as an object section. The probability threshold is specified by the user. Once all of the spectra have been processed, the object sections are combined into objects. Object sections are combined using a variant of Lutz’s one-pass algorithm. There are two caveats to using the CNHI source finder. Firstly, the CNHI source finder assumes that each spectrum is dominated by noise. This is a safe assumption, as spectral datacubes are generally sparsely populated by sources. The presence of ripples, artifacts and continuum signal will potentially invalidate this assumption though. The second caveat is that the test region needs to be at least four channels wide for the Kuiper test to be reliable. This matches the smallest channel extent expected of WALLABY sources. Spectral datacubes with a poorer velocity resolution than WALLABY will be affected by this. For a more detailed description of the CNHI source finder see @jurek2011. Gamma-finder ------------ [Gamma-finder]{} is a [Java]{} application developed by [@boyce2003] which automatically searches for objects in 3-dimensional data cubes. The searching algorithm of [Gamma-finder]{} is based on the [Gamma-test]{} [@Stefansson1997], and a full description can be found in [@jones2002]. The [Gamma-test]{} is a near-neighbour data analysis routine which estimates the noise variance in a continuous dataset. This estimate is known as the [*Gamma Statistic*]{}, denoted by $\Gamma$. When using the [Gamma-finder]{}, a Gamma signal-to-noise ratio can be defined which is used as a clipping level for objects to qualify as a detection. The output of the [Gamma-finder]{} is limited compared to other source finders (e.g. [Duchamp]{} and CNHI), because it does not do any parametrisation, but only gives the three-dimensional position of a detection and the sigma level. 2D-1D Wavelet Reconstruction source finder ------------------------------------------ The 2D-1D Wavelet Reconstruction source finder is described in detail in @floer2011; the authors adapt a multi-dimensional wavelet denoising scheme first used by @starck2009. It takes into account that 3D data from spectroscopic surveys have two angular dimensions and one spectral dimension, in which the shape of the sources is vastly different from that in the angular dimensions.
The algorithm therefore performs a two-dimensional wavelet transform in all planes of the cube and a subsequent one-dimensional wavelet transform along each line of sight, i.e. each pixel. Once the data have been de-noised by thresholding of the wavelet coefficients, reconstructing the cube from only the significant coefficients yields a noise-free cube. The latter can be used to create a mask for the sources in the original data. Smooth plus clip (S+C) finder ----------------------------- @serra2011a developed a source finder which uses a limited number of filters in order to optimise the signal-to-noise ratio of objects present in a data cube. For each dataset, the finder looks for sources in the original cube and in the cubes obtained by smoothing the original cube either on the sky, or in velocity, or along all three axes. In this study we use a Gaussian filter of FWHM = 60 arcsec for smoothing on the sky, and box filters of width 2, 4, 8, 16, and 32 channels for smoothing in velocity. For each smoothed cube a mask is built including all voxels brighter (in absolute value) than a chosen threshold. The final mask is the union of all masks (i.e., a voxel is included in the total mask if it is included in at least one of the individual masks); a value of 1 is allocated to all masked voxels and 0 to all unmasked voxels. A size filter is applied to the final binary mask by convolving it with a Gaussian kernel of 30 arcsec on the sky (equal to the original angular resolution of the cube) and 3 channels in velocity. Subsequently the mask is shrunk again by keeping only voxels in the convolved mask brighter than 0.5. This procedure removes a large number of noise peaks included in the mask whose size is of the order of the cube resolution. Testing method ============== When comparing the five 3D source finders, we concentrate on two main parameters: the completeness and the reliability of a source finder. Completeness is defined as the number of detected sources divided by the total number of sources; reliability is the fraction of detections that correspond to real sources. While these numbers are known for simulated cubes, in reality we usually have a much harder problem: we neither know the number of detectable sources in a cube, nor their shape, size or velocity extent. There are a few examples of real datacubes where there is a much deeper datacube of the same region
{ "pile_set_name": "ArXiv" }
null
--- abstract: | In the context of abstract elementary classes (AECs) with a monster model, several possible definitions of superstability have appeared in the literature. Among them are no long splitting chains, uniqueness of limit models, and solvability. Under the assumption that the class is tame and stable, we show that (asymptotically) no long splitting chains implies solvability and uniqueness of limit models implies no long splitting chains. Using known implications, we can then conclude that all the previously-mentioned definitions (and more) are equivalent: \[abstract-thm\] Let ${\mathbf{K}}$ be a tame AEC with a monster model. Assume that ${\mathbf{K}}$ is stable in a proper class of cardinals. The following are equivalent: 1. \[abstract-1\] For all high-enough $\lambda$, ${\mathbf{K}}$ has no long splitting chains. 2. \[abstract-2\] For all high-enough $\lambda$, there exists a good $\lambda$-frame on a skeleton of ${\mathbf{K}}_\lambda$. 3. \[abstract-3\] For all high-enough $\lambda$, ${\mathbf{K}}$ has a unique limit model of cardinality $\lambda$. 4. \[abstract-4\] For all high-enough $\lambda$, ${\mathbf{K}}$ has a superlimit model of cardinality $\lambda$. 5. \[abstract-5\] For all high-enough $\lambda$, the union of any increasing chain of $\lambda$-saturated models is $\lambda$-saturated. 6. \[abstract-6\] There exists $\mu$ such that for all high-enough $\lambda$, ${\mathbf{K}}$ is $(\lambda, \mu)$-solvable. This gives evidence that there is a clear notion of superstability in the framework of tame AECs with a monster model. address: - 'Department of Mathematical Sciences, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA' - 'Department of Mathematical Sciences, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA' author: - Rami Grossberg - Sebastien Vasey bibliography: - 'superstab-defs.bib' date: | \ AMS 2010 Subject Classification: Primary 03C48. Secondary: 03C45, 03C52, 03C55, 03C75, 03E55. title: Equivalent definitions of superstability in tame abstract elementary classes --- Introduction ============ In the context of classification theory for abstract elementary classes (AECs), a notion analogous to the first-order notion of *stability* exists: let us say that an AEC ${\mathbf{K}}$ is *stable in $\lambda$* if ${\mathbf{K}}$ has at most $\lambda$-many Galois types over every model of cardinality $\lambda$ (a justification for this definition is Fact \[stab-spectrum\], showing that it is equivalent, under tameness, to failure of the order property). However, it has been unclear what a parallel notion to superstability might be. Recall that for first-order theories we have: \[fo-superstab\] Let $T$ be a first-order complete theory. The following are equivalent: 1. $T$ is stable in every cardinal $\lambda \ge 2^{|T|}$. 2. For all infinite cardinals $\lambda$, the union of an increasing chain of $\lambda$-saturated models is $\lambda$-saturated. 3. $\kappa(T)=\aleph_0$ and $T$ is stable. 4. $T$ has a saturated model of cardinality $\lambda$ for every $\lambda\geq 2^{|T|}$. 5. $T$ is stable and ${\operatorname{D}}^n[{\bar{x}}={\bar{x}},L (T),{\infty}]<{\infty}$. 6. There does not exist a set of formulas $\Phi=\{{\varphi}_n({\bar{x}};{\bar{y}}_n)\mid n<\omega\}$ such that $\Phi$ can be used to code the structure $(\omega^{\leq\omega},<,<_{lex})$. $(1) \implies (2)$ and $(1) \iff (\ell)$ for $\ell \in \{3, 4, 5, 6\}$ all appear in Shelah’s book [@shelahfobook]. Albert and Grossberg [@agchains 13.2] established $(2)\implies (6)$.
In the last 30 years, in the context of classification theory for non-elementary classes, several notions that generalize that of first-order superstability have been considered. See papers by Grossberg, Shelah, VanDieren, Vasey and Villaveces: [@grsh238; @gr88], [@sh394], [@shvi635], [@vandierennomax; @nomaxerrata], [@gvv-mlq], [@ss-tame-jsl; @indep-aec-apal]. Reasons for developing a superstability theory in the non-elementary setup include the aesthetic appeal (guided by motivation from the first-order case) and recent applications such as Shelah’s eventual categoricity conjecture in universal classes [@ap-universal-v10; @categ-universal-2-v3-toappear] or the fact that (in an AEC with a monster model) the model in a categoricity cardinal is saturated [@categ-saturated-v2]. In [@sh394 p. 267] Shelah states that part of the program of classification theory for AECs is to show that all the various notions of first-order saturation (limit, superlimit, or model-homogeneous, see Section \[sat-def-subsec\]) are equivalent under the assumption of superstability. A possible definition of superstability is *solvability* (see Definition \[solvability-def\]), which appears in the introduction to [@shelahaecbook] and is hailed as a true counterpart to first-order superstability. Full justification is delayed to [@sh842], but [@shelahaecbook Chapter IV] already uses it. Other definitions of superstability analogous to the ones in Fact \[fo-superstab\] can also be formulated. The main result of this paper is to show that, at least in tame AECs with a monster model, several definitions of superstability that previously appeared in the literature are equivalent (see the preliminaries for precise definitions of some of the concepts appearing below). Many of the implications have already been proven in earlier papers, but here we complete the loop by proving two theorems. Before stating them, some notation will be helpful: \[hanf-notation\] Given a fixed AEC ${\mathbf{K}}$, set $H_1 := {\beth_{\left(2^{{\operatorname{LS}}({\mathbf{K}})}\right)^+}}$. **Theorem \[ss-from-chainsat\].** Let ${\mathbf{K}}$ be an ${\operatorname{LS}}({\mathbf{K}})$-tame AEC with a monster model. There exists $\chi < H_1$ such that for any $\mu \ge \chi$, if ${\mathbf{K}}$ is stable in $\mu$, there is a saturated model of cardinality $\mu$, and every limit model of cardinality $\mu$ is $\chi$-saturated, then ${\mathbf{K}}$ has no long splitting chains in $\mu$. **Theorem \[strong-solvable-thm\].** Let ${\mathbf{K}}$ be an ${\operatorname{LS}}({\mathbf{K}})$-tame AEC with a monster model. There exists $\chi < H_1$ such that for any $\mu \ge \chi$, if ${\mathbf{K}}$ is stable in $\mu$ and has no long splitting chains in $\mu$ then ${\mathbf{K}}$ is uniformly $(\mu', \mu')$-solvable, where $\mu' := \left(\beth_{\omega + 2} (\mu)\right)^+$. These two theorems prove (\[sc0-3\]) implies (\[sc0-1\]) and (\[sc0-1\]) implies (\[sc0-6\]) of our main corollary, proven in detail after the proof of Corollary \[main-cor-unbounded\]. \[main-cor\] Let ${\mathbf{K}}$ be a ${\operatorname{LS}}({\mathbf{K}})$-tame AEC with a monster model. Assume that ${\mathbf{K}}$ is stable in some cardinal greater than or equal to ${\operatorname{LS}}({\mathbf{K}})$. The following are equivalent: 1. \[sc0-1\] There exists $\mu_1 < H_1$ such that for every $\lambda\geq\mu_1$, ${\mathbf{K}}$ has no long splitting chains in $\lambda$. 2.
\[sc0-2\] There exists $\mu_2 < H_1$ such that for every $\lambda\geq\mu_2$, there is a good $\lambda$-frame on a skeleton of ${\mathbf{K}}_\lambda$ (see Section \[skeleton-sec\]). 3. \[sc0-3\] There exists $\mu_3 < H_1$ such that for every $\lambda\geq\mu_3$, ${\mathbf{K}}$ has a unique limit model of cardinality $\lambda$. 4. \[sc0-4\] There exists $\mu_4 < H_1$ such that for every $\lambda\geq\mu_4$, ${\mathbf{K}}$ has a superlimit model of cardinality $\lambda$. 5
{ "pile_set_name": "ArXiv" }
null
--- abstract: 'We report experimental studies of the influence of symmetric dual-loop optical feedback on the RF linewidth and timing jitter of self-mode-locked two-section quantum dash lasers emitting at 1550 nm. Various feedback schemes were investigated and optimum levels determined for the narrowest RF linewidth and low timing jitter, for single-loop and symmetric dual-loop feedback. Two symmetric dual-loop configurations, with balanced and unbalanced feedback ratios, were studied. We demonstrate that unbalanced symmetric dual-loop feedback, with the inner cavity resonant and fine delay tuning of the outer loop, gives the narrowest RF linewidth and reduced timing jitter over a wide range of delay, unlike single-loop and balanced symmetric dual-loop configurations. This configuration, with feedback lengths of 80 and 140 m, narrows the RF linewidth by $\sim$4-67x and $\sim$10-100x, respectively, across the widest delay range, compared to free-running operation. For symmetric dual-loop feedback, the influence of different power split ratios through the feedback loops was determined. Our results show that symmetric dual-loop feedback is markedly more effective than single-loop feedback in reducing RF linewidth and timing jitter, and is much less sensitive to delay phase, making this technique ideal for applications where robustness and alignment tolerance are essential.' address: 'Department of Physics and Tyndall National Institute, University College Cork, Ireland T12 YN60' author: - 'Haroon Asghar, Wei Wei, Pramod Kumar, Ehsan Sooudi, and John G. McInerney' title: 'Stabilization of self-mode-locked quantum dash lasers by symmetric dual-loop optical feedback' ---
{ "pile_set_name": "ArXiv" }
null
--- abstract: 'We present a numerical study of dephasing of electron spin ensembles in a diffusive quasi-one-dimensional GaAs wire due to the D’yakonov-Perel’ spin-dephasing mechanism. For widths of the wire below the spin precession length and for equal strength of Rashba and linear Dresselhaus spin-orbit fields a strong suppression of spin-dephasing is found. This suppression of spin-dephasing shows a strong dependence on the wire orientation with respect to the crystal lattice. The relevance for realistic cases is evaluated by studying how this effect degrades for deviating strengths of the Rashba and linear Dresselhaus fields, and with the inclusion of the cubic Dresselhaus term.' author: - 'J. Liu' - 'T. Last' - 'E. J. Koop' - 'S. Denega' - 'B. J. van Wees' - 'C. H. van der Wal' title: 'Spin-dephasing anisotropy for electrons in a diffusive quasi-1D GaAs wire' --- \[sec:Introduction\]INTRODUCTION ================================ The ability to maximize the spin-dephasing time $T_{2}^*$ of an electron spin ensemble is one of the key issues for developing semiconductor-based spintronic devices [@review; @Kikkawa1998]. However, in all III-V semiconductor materials spin ensembles rapidly dephase due to the D’yakonov-Perel’ (DP) spin-dephasing mechanism [@D'yakonov1986; @Miller2003]. For the case of electron ensembles in a heterojunction two-dimensional electron gas (2DEG), two distinct contributions to DP spin-dephasing have to be considered: the inversion asymmetry of the confining potential (structural inversion asymmetry) and the bulk inversion asymmetry of the crystal lattice. The former results in an effective Rashba field and the latter in an effective Dresselhaus field, which includes linear and cubic contributions [@D'yakonov1986; @Bychkov1984; @Lommer1985; @Miller2003]: $$\begin{aligned} \vec{B}_{R} \ &=& \ C_{R}~(\hat{x}k_{y}-\hat{y}k_{x}), \\ \vec{B}_{D1} \ &=& \ C_{D1}(-\hat{x}k_{x}+\hat{y}k_{y}), \\ \vec{B}_{D3} \ &=& \ C_{D3}(\hat{x}k_xk_{y}^2-\hat{y}k_{x}^2k_y),\end{aligned}$$ where $\hat{x}$, $\hat{y}$ are the unit vectors along the \[100\] and \[010\] crystal directions, $k_{x}$, $k_{y}$ are the components of the in-plane wave vector, and $C_{R,D1,D3}$ are the spin-orbit coupling parameters. The total effective spin-orbit field $\vec{B}_{eff}$ is the vector sum of all three contributions. For 2D and quasi-1D electron systems, the direction and magnitude of these effective spin-orbit fields can be illustrated as arrows on the Fermi circle. Figure \[1\] presents this for selected points in the 2D momentum space, for the Rashba (a) and linear Dresselhaus (b) field alone, and their sum (c) for the case of equal strength of Rashba and linear Dresselhaus field. In contrast to the individual cases (Figure \[1\] (a), (b)), the magnitude of the vector sum shows a strong anisotropy in momentum space (Figure \[1\] (c)). ![A schematic representation of the direction and magnitude of the effective magnetic field for selected points in a two-dimensional $k$-space, sketched for (a) the Rashba field, (b) the linear Dresselhaus field and for (c) the symmetric case of the sum of equal Rashba and linear Dresselhaus field.
Both the magnitude and direction are depicted as arrows on the Fermi circle with radius $k_F$ in the ($k_x$, $k_y$)-plane.[]{data-label="1"}](fig1.png){width="8cm"} This already suggests that spin dephasing in very narrow wires in which electron motion is restricted to the \[110\] direction can be strongly reduced as compared to free 2DEG or to such wires oriented along other crystal directions. However, it is harder to analyze whether such a dephasing anisotropy also occurs for wider quasi-1D wires, where the motion in the 2DEG plane is still completely random and diffusive, but where the width of the wire is less than length scales such as the spin precession length or the mean free path. For the latter case the transport regime could be named quasi-ballistic, but we consider the case of a large ensemble where transport along the wire is still diffusive, and where the width of the wire in the 2DEG plane is much wider than the regime of quantum confinement. Initial studies of such spin-dephasing anisotropy include a recent experiment [@Folk2008] on wires, and theoretical work on drifting ensembles in free 2DEG [@Loss2007]. However, until now most emphasis was on work related to the spin field-effect transistor [@Datta1990], using InAs-based systems or highly asymmetrical heterojunctions, where structural inversion asymmetry dominates the spin-orbit interaction [@Kiselev2000; @Bruno2002; @Mireles2001; @Pramanik2003; @Holleitner2006].\ We report here how the D’yakonov-Perel’ spin dephasing mechanism can be strongly suppressed in diffusive quasi-1D electron systems based on GaAs heterojunction material, for which Rashba and linear Dresselhaus spin-orbit contributions can indeed be of comparable magnitude [@Miller2003; @Folk2008]. The dephasing is studied for spin ensembles initially aligned perpendicular to the plane of the wire (\[001\] direction). This situation reflects the method of preparing and interrogating a spin population via optical pump-probe techniques [@Kikkawa1998]. Our numerical calculation is first performed for conditions with equal Rashba and linear Dresselhaus contributions and the cubic Dresselhaus term set to zero. For widths of the diffusive quasi-1D wires smaller than the spin precession length the DP spin-dephasing mechanism can be strongly suppressed and the spin-dephasing time $T_2^*$ is considerably enhanced if the wire is aligned along the direction of zero effective spin-orbit field. Moreover, we want to point out that the value of our numerical tool lies in the opportunity to study such phenomena also for more realistic conditions. Thus, we can study how breaking the equality of the Rashba and linear Dresselhaus spin-orbit fields or adding the cubic Dresselhaus term leads to a degradation of the spin-dephasing anisotropy. \[Method\]Method ================ We apply a Monte Carlo method [@Koop2008] to study the temporal evolution of the normalized spin orientation (average spin expectation value) in an elongated quasi-1D wire. Our numerical tool is based on a semiclassical approach. We use a classical description for the electron motion, and a quantum mechanical description of the dynamics of the electron spin. The wire is treated as a rectangular box of dimensions 1 $\mu$m $\times$ 200 $\mu$m. The electron density and mobility are set to 4$\cdot10^{15}$ m$^{-2}$ and 100 m$^{2}$/Vs, which are typical values for a GaAs/AlGaAs heterojunction material. All electrons are assumed to have the same Fermi velocity $\upsilon_{F}$ of 2.7$\cdot10^{5}$ m/s.
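The effective field entering this procedure, and the precession it drives within one ballistic segment (see the precession formula below), can be sketched in a few lines of Python. The coupling constants and the dimensionless units are illustrative choices of ours, not the parameters of the actual simulation.

```python
import numpy as np

def b_eff(kx, ky, c_r=1.0, c_d1=1.0, c_d3=0.0):
    """In-plane effective spin-orbit field for wave vector (kx, ky),
    i.e. the sum of the Rashba and Dresselhaus terms given above."""
    bx = c_r * ky - c_d1 * kx + c_d3 * kx * ky**2
    by = -c_r * kx + c_d1 * ky - c_d3 * kx**2 * ky
    return np.array([bx, by, 0.0])

def precess(spin, b, t):
    """Rotate a spin vector about b; the factor g*mu_B/hbar of the
    precession angle is absorbed into b in these dimensionless units."""
    bnorm = np.linalg.norm(b)
    if bnorm == 0.0:
        return spin
    axis, phi = b / bnorm, bnorm * t
    # Rodrigues' rotation formula
    return (spin * np.cos(phi) + np.cross(axis, spin) * np.sin(phi)
            + axis * np.dot(axis, spin) * (1.0 - np.cos(phi)))

# For equal Rashba and linear Dresselhaus couplings the effective field
# vanishes for momenta along [110] (kx = ky), so a spin along z is frozen.
print(b_eff(1.0 / np.sqrt(2), 1.0 / np.sqrt(2)))      # ~ [0, 0, 0]
print(precess(np.array([0.0, 0.0, 1.0]), b_eff(1.0, 0.0), t=0.3))
```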
Assuming a single Fermi velocity is a valid approximation for $k_BT$, $\Delta E_{Z,SO}$ $\ll$ $E_F$ (with respect to the bottom of the conduction band), where $\Delta E_{Z,SO}$ is the Zeeman splitting due to the spin-orbit fields alone. Electron-electron interaction and inelastic scattering mechanisms are neglected.\ The electron is regarded as a point particle which moves on a classical trajectory between scatter events on impurities (randomly determined at a rate to obtain an average scatter time of 38 ps), yielding diffusive behavior in the ensemble (electron mean free path $L_{p}$ = $\upsilon_{F}\tau_{p}$ = 10 $\mu$m), and specular scattering on the edges of the wire. For each electron moving on such a ballistic trajectory we calculate the spin evolution in the effective spin-orbit fields quantum mechanically, and we then take the ensemble average on a set of electrons with random initial position and momentum direction.\ Within a straight ballistic segment of an electron trajectory the spin rotates around $\vec{B}_{eff}$ over a precession angle given by $$\begin{aligned} \phi_{prec} \ = \ \frac{g \mu_B |\vec{B}_{eff}|}{\hbar} \ t,\end{aligned}$$ where $\hbar$ is the reduced Planck’s constant and $t$ the time of traveling through the segment. The spin rotation operator $\widehat
{ "pile_set_name": "ArXiv" }
null
--- abstract: | The muon anomalous magnetic moment is one of the most precisely measured quantities in particle physics. In a recent experiment at Brookhaven it has been measured with a remarkable 14-fold improvement over the previous CERN experiment, reaching a precision of 0.54 ppm. Since the first results were published, a persisting “discrepancy” between theory and experiment of about 3 standard deviations has been observed. It is the largest “established” deviation from the Standard Model seen in a “clean” electroweak observable and thus could be a hint that New Physics is around the corner. This deviation triggered numerous speculations about the possible origin of the “missing piece”, and the increased experimental precision animated a multitude of new theoretical efforts which led to a substantial improvement of the prediction of the muon anomaly $a_\mu=(g_\mu-2)/2$. The dominating uncertainty of the prediction, caused by strong interaction effects, could be reduced substantially due to new hadronic cross section measurements in electron-positron annihilation at low energies. Also the recent electron $g-2$ measurement at Harvard contributes substantially to the progress in this field, as it allows for a much more precise determination of the fine structure constant $\alpha$ as well as a cross check of the status of our theoretical understanding. In this report we review the theory of the anomalous magnetic moments of the electron and the muon. After an introduction and a brief description of the principle of the muon $g-2$ experiment, we present a review of the status of the theoretical prediction and in particular discuss the role of the hadronic vacuum polarization effects and the hadronic light–by–light scattering correction, including a new evaluation of the dominant pion-exchange contribution. In the end, we find a 3.2 standard deviation discrepancy between experiment and Standard Model prediction. We also present a number of examples of how extensions of the electroweak Standard Model would change the theoretical prediction of the muon anomaly $a_\mu$. Perspectives for future developments in experiment and theory are briefly discussed and critically assessed. The muon $g-2$ will remain one of the hot topics for further investigations. address: - 'Humboldt-Universität zu Berlin, Institut für Physik, Newtonstrasse 15, D-12489 Berlin, Germany' - 'Institute of Physics, University of Silesia, ul. Uniwersytecka 4, PL-40007 Katowice, Poland' - | Regional Centre for Accelerator-based Particle Physics, Harish-Chandra Research Institute,\ Chhatnag Road, Jhusi, Allahabad - 211 019, India author: - Fred Jegerlehner - Andreas Nyffeler title: 'The Muon g-2' --- Preprint numbers: HU-EP-09/07, HRI-P-09-02-001, RECAPP-HRI-2009-003. Keywords: muon, anomalous magnetic moment, precision tests. PACS: 14.60.Ef, 13.40.Em
{ "pile_set_name": "ArXiv" }
null
--- author: - | Jin Guo[^1] and [Tongsuo Wu[^2]]{}\ [Department of Mathematics, Shanghai Jiaotong University]{} title: ' **Monomial ideals under ideal operations[^3]** ' --- [3mm]{}[**Abstract.**]{} [In this paper, we show for a monomial ideal $I$ of $K[x_1,x_2,\ldots,x_n]$ that the integral closure ${\overline}{I}$ is a monomial ideal of Borel type (Borel-fixed, strongly stable, lexsegment, or universal lexsegment respectively), if $I$ has the same property. We also show that the $k^{th}$ symbolic power $I^{(k)}$ of $I$ preserves the properties of Borel type, Borel-fixed and strongly stable, and $I^{(k)}$ is lexsegment if $I$ is stably lexsegment. For a monomial ideal $I$ and a monomial prime ideal $P$, a new ideal $J(I, P)$ is studied, which also gives a clear description of the primary decomposition of $I^{(k)}$. Then a new simplicial complex $_J\bigtriangleup$ of a monomial ideal $J$ is defined, and it is shown that $I_{_J\bigtriangleup^{\vee}} = \sqrt{J}$. Finally, we show under an additional weak assumption that a monomial ideal is universal lexsegment if and only if its polarization is a squarefree strongly stable ideal. ]{} [3mm]{}[Key Words:]{} [Borel type monomial ideal; $k^{th}$ symbolic power; integral closure; polarization; universal lexsegment monomial ideal]{} [4mm]{} Introduction ============= [3mm]{}Throughout the paper, $K$ is an infinite field and $S=K[x_1,\,x_2,\,\ldots,\,x_n]$ is the polynomial ring in $n$ indeterminates over $K$. If an ideal $I$ is generated by $u_1,\ldots,u_s$, then we denote it by $I=\langle u_1,\ldots,u_s\rangle$. For a monomial ideal $I$ of $S$, recall that $I$ is called [*strongly stable*]{} if for any monomial $u$ in $I$ and any $i<j\le n$, $x_j\mid u$ implies $x_i(u/x_j)\in I$. Recall that $I$ is called [*Borel-fixed*]{}, if ${\alpha}(u)\in I$ holds for any invertible upper triangular $n\times n$ matrix ${\alpha}$ over $K$. Recall that $I$ is called [*of Borel type*]{} if $$I:x_i^{\infty}=I: \langle x_1,\,x_2,\,\ldots,\,x_i \rangle^\infty\quad \quad (*)$$ holds for every $i=1,\ldots,n$. It is known that each strongly stable monomial ideal is Borel-fixed, and the converse holds under the additional assumption $char(K)=0$. Bayer and Stillman in [@BS] noted that Borel-fixed ideals satisfy condition $(*)$. Herzog et al. in [@HPV] gave the definition of a Borel type monomial ideal, and they proved among other things that a Borel type monomial ideal is sequentially Cohen-Macaulay, see also [@Popescu]. Furthermore, there are two other classes of strongly stable monomial ideals, namely, monomial ideals which are lexsegment or universal lexsegment, see [@AHH] or [@HH]. We have the following relations for conditions on a monomial ideal: [3mm]{} universal lexsegment${\Longrightarrow}$lexsegment${\Longrightarrow}$strongly stable${\Longrightarrow}$ Borel-fixed${\Longrightarrow}$ of Borel type. [2mm]{} The following is the fundamental characterization of Borel type monomial ideals: \[BT\] ([@HH Proposition 4.2.9]) For a monomial ideal $I$ of $S$, the following conditions are equivalent: $(1)$ $I$ is of Borel type. $(2)$ For each monomial $u\in I$ and all positive integers $i,j,s$ with $i<j\le n$ such that $x_j^s\mid u$, there exists an integer $t\ge 0$ such that $x_i^t(u/x_j^s)\in I$. $(3)$ Each associated prime ideal $P$ of $I$ has the form $\langle x_1,x_2,\ldots, x_r \rangle$ for some $r\le n$. In [@MC Proposition 1], Mircea Cimpoeas observed that the aforementioned property is preserved under several operations, such as sum, intersection, product, and colon.
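These notions are easy to experiment with computationally. The following Python sketch, our own illustration, encodes monomials as exponent tuples and checks the exchange condition of strong stability on a generating set; it suffices to check the generators, since the condition then propagates to all monomials of the ideal.

```python
def divides(g, m):
    """g divides m componentwise (monomials as exponent tuples)."""
    return all(a <= b for a, b in zip(g, m))

def in_ideal(m, gens):
    """A monomial lies in a monomial ideal iff some generator divides it."""
    return any(divides(g, m) for g in gens)

def is_strongly_stable(gens):
    """Check x_i * (u / x_j) in I for all generators u and all i < j with x_j | u."""
    n = len(gens[0])
    for u in gens:
        for j in range(1, n):
            if u[j] == 0:
                continue
            for i in range(j):
                v = list(u)
                v[j] -= 1
                v[i] += 1
                if not in_ideal(tuple(v), gens):
                    return False
    return True

# The lexsegment ideal of Example [product not lexsegment] below is strongly stable:
print(is_strongly_stable([(3, 0, 0), (2, 1, 0), (2, 0, 1), (1, 2, 0), (1, 1, 1)]))  # True
# The ideal of Example [colon not Borel-fixed] below is not (x1^2*x2 is missing):
print(is_strongly_stable([(3, 0, 0), (1, 2, 0)]))  # False
```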
For a monomial ideal $I$ of Borel type, note that $I:\mathfrak{m}^\infty=I:\mathfrak{m}^r$ holds for $r\gg 0$, thus the saturation $I:\mathfrak{m}^\infty$ is a monomial ideal of Borel type. The root ideal $\sqrt{I}$ is a prime ideal of the form $\langle x_1,x_2,\ldots,x_r \rangle$, and is thus universal lexsegment. Some parts of the following proposition are well known, and the others are direct to check, so we omit the verification. \[operation\] Let $I,J, L$ be monomial ideals of $S$. \(1) If further $I,J$ are of Borel type (strongly stable, respectively), then each of the following is a monomial ideal of Borel type (strongly stable, respectively): $$I\cap J ,\, I+J,\,\,I:L,\,\, IJ.$$ In particular, the saturation $I:\mathfrak{m}^\infty $ of $I$ is of Borel type (strongly stable, respectively) if $I$ has the same property. \(2) If further $I,J$ are Borel-fixed ideals, then each of $ I\cap J ,\, I+J,\,\, I:J,\,\, IJ$ is again Borel-fixed. In particular, the saturation $I:\mathfrak{m}^\infty $ of $I$ is Borel-fixed. \(3) If further $I,J$ are lexsegment (universal lexsegment, respectively) ideals, then each of $ I\cap J ,\, I+J,\,\, I:L$ is again lexsegment (universal lexsegment, respectively). [3mm]{} Let $I$ be a Borel-fixed monomial ideal, and $L$ a monomial ideal which need not be Borel-fixed. The following example shows that the colon $I : L$ may not be Borel-fixed. \[colon not Borel-fixed\] Let $K$ be a field with $char(K)=2$, and let $S=K[x_1, \ldots, x_n]$. Let $I= \langle x_1^3, x_1x_2^2 \rangle$. It is direct to check that $I$ is Borel-fixed. Set $L = \langle x_2 \rangle$. It is easy to see that $I : L = \langle x_1^3, x_1x_2 \rangle$, which is not Borel-fixed. [3mm]{} The following example shows that $IJ$ may not be lexsegment, even though $I, J$ are lexsegment. \[product not lexsegment\] Let $S=K[x_1, x_2, x_3]$, and let $I= \langle x_1^3, x_1^2x_2, x_1^2x_3, x_1x_2^2, x_1x_2x_3 \rangle$. It is easy to see that $I$ is lexsegment, and $u=x_1^2x_2^2x_3^2 \in I^2$. Note that $v= x_1^3x_3^3 \not\in I^2$ and $v >_{lex}u$, so $I^2$ is not lexsegment. [3mm]{}As an application of Proposition \[operation\], we now give an alternative proof of the following: \[regular\] ([@HH Proposition 4.3.3]) Let $I{\subseteq}S$ be a monomial ideal of Borel type. Then $x_n,\ldots,x_1$ is an almost regular sequence on $S/I$. In the proof of [@HH Lemma 4.3.1], let $M=S/I$. Then the corresponding $N$ (i.e., $0:_M\mathfrak{m}^\infty$) is identical with $(I:\mathfrak{m}^\infty)/I$. Note that $M/N\cong S/(I: \mathfrak{m}^\infty )$ holds. If $M=N$, then each element of $S_1$ is almost regular on $M$ since $M$ has finite length. Now assume $M\neq N$. Since $I: \mathfrak{m}^\infty $ is monomial of Borel type and $\mathfrak{m}\not\in Ass(M/N)$, as is shown in the proof of Lemma 4.3.1, it follows by [@HH Proposition 4.2.9(d)] that $x_
{ "pile_set_name": "ArXiv" }
null
--- abstract: 'We have used semi-numerical simulations of reionization to study the behaviour of the power spectrum of the EoR 21-cm signal in redshift space. We have considered two models of reionization, one which has homogeneous recombination (HR) and the other incorporating inhomogeneous recombination (IR). We have estimated the observable quantities, the quadrupole and monopole moments of the redshift-space power spectrum, from our simulated data. We find that the magnitude and nature of the ratio between the quadrupole and monopole moments of the power spectrum ($P^s_2 /P^s_0$) can be a possible probe for the epoch of reionization. We observe that this ratio becomes negative at large scales for $\xb \leq 0.7$ irrespective of the reionization model, which is a direct signature of an inside-out reionization at large scales. It is possible to qualitatively interpret the results of the simulations in terms of the fluctuations in the matter distribution and the fluctuations in the neutral fraction, which have power spectra and cross-correlation $P_{\Delta \Delta}(k)$, $P_{xx}(k)$ and $P_{\Delta x}(k)$ respectively. We find that at large scales the fluctuations in matter density and neutral fraction are exactly anti-correlated through all stages of reionization. This provides a simple picture where we are able to qualitatively interpret the behaviour of the redshift space power spectra at large scales with varying $\xb$ entirely in terms of just two quantities, namely $\xb$ and the ratio $P_{xx}/P_{\Delta \Delta}$. The nature of $P_{\Delta x}$ becomes different for HR and IR scenarios at intermediate and small scales. We further find that it is possible to distinguish between an inside-out and an outside-in reionization scenario from the nature of the ratio $P^s_2 /P^s_0$ at intermediate length scales.' author: - | Suman Majumdar$^{1,2}$[^1], Somnath Bharadwaj$^1$[^2] and T. Roy Choudhury$^{3}$[^3]\ $^1$Department of Physics and Meteorology & Centre for Theoretical Studies, IIT, Kharagpur 721302, India\ $^2$ Department of Astronomy & Oskar Klein Centre, AlbaNova, Stockholm University, SE-106 91 Stockholm, Sweden\ $^3$National Centre for Radio Astrophysics, TIFR, Post Bag 3, Ganeshkhind, Pune 411007, India title: 'The effect of peculiar velocities on the epoch of reionization (EoR) 21-cm signal' --- Keywords: methods: data analysis -- cosmology: theory -- diffuse radiation. Introduction ============ The epoch when the neutral hydrogen (HI) in the inter-galactic medium (IGM) was reionized by the first luminous sources is one of the least known periods in the history of our universe. Observations of the CMBR [@spergel03; @page07; @komatsu11; @larson11] and absorption spectra of high redshift quasars [@becker01; @fan03; @white03; @fan06; @willott07; @goto11] suggest that the epoch of reionization (EoR) probably extended over the redshift range $6 \leq z \leq 15$ [@fan06; @choudhury06a; @alvarez06; @mitra11]. However, these observations are limited in their ability to shed light on many important questions regarding EoR. What are the major sources of reionization? What are the typical sizes and the topology of the ionized regions at different stages? Observations of redshifted 21-cm radiation from neutral hydrogen hold the promise to answer some of these questions. The brightness temperature of the redshifted 21-cm radiation directly probes the HI distribution at the epoch where the radiation originated.
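Since much of what follows rests on the monopole and quadrupole moments of the redshift-space power spectrum, we recall that these are the Legendre moments $P^s_\ell(k) = \frac{2\ell+1}{2}\int_{-1}^{1} P^s(k,\mu)\, \mathcal{L}_\ell(\mu)\, d\mu$. A minimal Python estimator is sketched below; it is validated on a toy Kaiser-like spectrum whose form and grids are our own choices, not outputs of the simulations described in this paper.

```python
import numpy as np

def multipole(ps, mu, ell):
    """Legendre moment P_ell(k) of ps sampled on a uniform mu grid,
    with ps of shape (nk, nmu); only ell = 0 and 2 are implemented."""
    if ell == 0:
        leg = np.ones_like(mu)
    elif ell == 2:
        leg = 0.5 * (3.0 * mu**2 - 1.0)
    else:
        raise ValueError("only ell = 0 and ell = 2 are implemented here")
    w = ps * leg
    dmu = mu[1] - mu[0]
    # trapezoidal rule along the mu axis
    return (2 * ell + 1) / 2.0 * (w[:, :-1] + w[:, 1:]).sum(axis=1) * dmu / 2.0

# Toy check: for P^s(k, mu) = (1 + mu^2)^2 P(k) the quadrupole-to-monopole
# ratio is (40/21)/(28/15) = 50/49, independent of k.
mu = np.linspace(-1.0, 1.0, 2001)
k = np.logspace(-2.0, 0.0, 10)
ps = (1.0 + mu**2) ** 2 * k[:, None] ** -2.0
print(multipole(ps, mu, 2) / multipole(ps, mu, 0))   # ~1.0204 for every k
```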
The redshifted 21-cm signal thus makes it possible to track the entire reionization history as it gradually proceeds with redshift. The presently functioning low frequency radio telescopes [GMRT[^4]]{} [@swarup], LOFAR[^5] and [21CMA[^6]]{}, the upcoming [MWA[^7]]{} and the future [SKA[^8]]{} all cover the frequency range relevant for the EoR 21-cm signal, and this is one of the major goals for most of these telescopes. It is therefore very important to have a good picture of the expected signal in order to make forecasts for and correctly interpret the future observations of the redshifted 21-cm radiation. There has been a considerable amount of work towards simulating the expected EoR 21-cm signal. In particular, there have been numerical simulations which use ray-tracing to follow the propagation of ionization fronts in the IGM [@gnedin00; @ciardi01; @ricotti02; @razoumov02; @maselli03; @sokasian03; @iliev06; @mellema06; @mcquinn07; @trac07; @semelin07; @shin08; @iliev08; @shapiro08; @thomas09; @baek09]. Such simulations are computationally extremely expensive, and it is difficult to simulate large volumes, and to re-run the simulations considering different values of the simulation parameters. Semi-numerical simulations which consider the average photon density in place of a detailed ray-tracing analysis provide a computationally less expensive technique to simulate the EoR 21-cm signal [@furlanetto04; @mesinger07; @geil08a; @lidz09; @choudhury09; @alvarez09; @santos10; @mesinger11; @zahn11]. The fluctuations in the brightness temperature of the redshifted 21-cm radiation essentially trace the HI distribution during EoR. The redshift space distortion caused by peculiar velocities also plays an important role in shaping the redshifted 21-cm signal [@bharadwaj01; @bharadwaj04]. In fact, we expect the peculiar velocities to introduce an anisotropy in the three dimensional power spectrum of the EoR 21-cm signal [@barkana05; @bharadwaj05; @wang06], very similar to the characteristic anisotropy present in the galaxy power spectrum [@kaiser87]. @barkana05 have proposed that it may be possible to use this anisotropy to separate the effect of the peculiar velocities from the other astrophysical information present in the 21-cm power spectrum. Until recently, most simulations of the EoR 21-cm signal have not considered the effect of redshift space distortions. Some of the earlier works, e.g. @mellema06 and @thomas09, considered this effect while generating maps through their simulations, but did not study its implications for statistical properties like the power spectrum of the brightness temperature fluctuations. Recently, @santos10 and @mesinger11 have included the effect of redshift space distortions in an approximate, perturbative fashion in their semi-numerical simulations and used this to study its implications for the redshifted 21-cm brightness temperature power spectrum at different stages of the reionization. In a very recent work @mao12 discuss the methodology to implement redshift space distortion in numerical simulations of reionization, and used this to study the 21-cm brightness temperature power spectrum during EoR. Most of the earlier semi-numerical simulations ([*e.g.*]{} @furlanetto04 [@mesinger07; @geil08a; @lidz09; @alvarez09; @santos10; @mesinger11; @zahn11]) have assumed spatially homogeneous recombination, which predicts strictly inside-out reionization where the densest regions ionize first, the ionization subsequently propagating to lower densities.
However, there are observations which indicate exactly the opposite picture at the end of reionization, where the high density regions remain neutral (due to self-shielding) and the low density regions are highly ionized. @choudhury09 have attempted to make their semi-numerical simulation consistent with these observations by incorporating the fact that recombination occurs faster in high density regions. In these simulations reionization is inside-out only in the early stages. However, self-shielded, high density clumps remain neutral in the later stages of reionization when inhomogeneous recombination is taken into account. In this paper we follow @choudhury09 to develop a semi-numerical code to simulate reionization, with the further improvement that we incorporate the effect of redshift space distortion due to peculiar velocities. We have used these simulations to study the effect of peculiar velocities on the EoR 21-cm signal, both with homogeneous recombination and with inhomogeneous recombination. We have used semi-numerical simulations to determine the EoR 21-cm signal at different stages of reionization, and used the power spectrum to quantify the statistical properties of this signal. We have calculated $P^r_{\HI}(k)$, the power spectrum in real space, and its redshift space counterpart $P^s_{\HI}({\bf k})$, and compared these two to assess the effect of peculiar velocities. The anisotropy of the 21-cm signal, quantified through various angular multipoles of $P^s_{\HI}({\bf k})$, is a very useful tool to study the effect of redshift space distortion. In particular, we have studied the monopole and quadrupole moments of $P^s_{\HI}({\bf k})$ in order to identify the features characteristic of redshift space distortion at different stages of reionization. To our knowledge, this anisotropy has not
{ "pile_set_name": "ArXiv" }
null
--- abstract: 'Let $X, Y$ be complete, simply connected Riemannian surfaces with pinched negative curvature $-b^2 \leq K \leq -1$. We show that if $f : \partial X \to \partial Y$ is a Moebius homeomorphism between the boundaries at infinity of $X, Y$, then $f$ extends to an isometry $F : X \to Y$. This can be viewed as a generalization of Otal’s marked length spectrum rigidity theorem for closed, negatively curved surfaces, in the sense that Otal’s theorem asserts that if $X, Y$ admit properly discontinuous, cocompact, free actions by groups of isometries and the boundary map $f$ is Moebius and equivariant with respect to these actions, then it extends to an isometry. In our case there are no cocompactness or equivariance assumptions; indeed, the isometry groups of $X, Y$ may be trivial.' address: 'Indian Statistical Institute, Kolkata, India. Email: kingshook@isical.ac.in' author: - Kingshook Biswas bibliography: - 'moeb.bib' title: 'Moebius rigidity for simply connected, negatively curved surfaces' --- Introduction ============ We continue in this article the study of Moebius maps between boundaries of CAT(-1) spaces undertaken in [@biswas3], [@biswas4], [@biswas5], [@biswas6], [@biswas7]. The principal question is whether a Moebius homeomorphism between the boundaries at infinity of two CAT(-1) spaces extends to an isometry between the spaces. We recall that the boundary $\partial X$ of a CAT(-1) space comes equipped with a positive function on the set of quadruples of distinct points in $\partial X$, called the cross-ratio, and a map $f : \partial X \to \partial Y$ between boundaries is said to be Moebius if it preserves cross-ratios. Bourdon [@bourdon2] showed that if $X$ is a rank one symmetric space of noncompact type with the metric normalized so that the maximum of the sectional curvatures equals $-1$, and $Y$ is any CAT(-1) space, then any Moebius embedding $f : \partial X \to \partial Y$ extends to an isometric embedding $F : X \to Y$. In [@biswas3] it was shown that if $X, Y$ are proper, geodesically complete CAT(-1) spaces, then any Moebius homeomorphism $f : \partial X \to \partial Y$ extends to a $(1, \log 2)$-quasi-isometry $F : X \to Y$. This extension was shown in [@biswas5] to coincide with a certain geometrically defined extension of Moebius maps called the [*circumcenter extension*]{}. For $X, Y$ complete, simply connected Riemannian manifolds of pinched negative curvature $-b^2 \leq K \leq -1$, the main result of [@biswas3] was improved in [@biswas5] to show that the circumcenter extension $F : X \to Y$ of a Moebius homeomorphism $f : \partial X \to \partial Y$ is a $(1, (1 - 1/b)\log 2)$-quasi-isometry. The case of complete, simply connected Riemannian manifolds $X, Y$ of pinched negative curvature $-b^2 \leq K \leq -1$ was further studied in [@biswas6], where it was shown that if $f : \partial X \to \partial Y$ and $g : \partial Y \to \partial X$ are mutually inverse Moebius homeomorphisms, then their circumcenter extensions $F : X \to Y$ and $G : Y \to X$ are $\sqrt{b}$-bi-Lipschitz homeomorphisms which are inverses of each other. Another case which has been considered is that of compact deformations of a negatively curved manifold [@biswas4], [@biswas7].
Here, we consider a complete, simply connected Riemannian manifold $(X, g_0)$ of pinched negative curvature $-b^2 \leq K_{g_0} \leq -1$, and a Riemannian metric $g_1$ on $X$ such that $g_1 = g_0$ outside a compact set in $X$, and such that $g_1$ has sectional curvature bounded above by $-1$. The identity map $id : (X, g_0) \to (X, g_1)$ is bi-Lipschitz, and thus induces a homeomorphism $f : \partial_{g_0} X \to \partial_{g_1} X$ between the boundaries at infinity of $(X, g_0)$ and $(X, g_1)$. While some partial results were proved in [@biswas4], in [@biswas7] a complete solution to the problem in this case was obtained: if the boundary map $f : \partial_{g_0} X \to \partial_{g_1} X$ is Moebius, then its circumcenter extension $F : (X, g_0) \to (X, g_1)$ is an isometry. In the present article we obtain a complete solution to the problem of extending Moebius maps to isometries for the case of complete, simply connected Riemannian manifolds of pinched negative curvature in dimension two: \[mainthm\] Let $X, Y$ be complete, simply connected Riemannian surfaces of pinched negative curvature $-b^2 \leq K \leq -1$. If $f : \partial X \to \partial Y$ is a Moebius homeomorphism, then the circumcenter extension of $f$ is an isometry $F : X \to Y$. The above theorem may be viewed as a generalization of the well-known result of Otal on marked length spectrum rigidity for closed, negatively curved surfaces [@otal2]. This result states that if two closed, negatively curved surfaces have the same marked length spectrum, then they are isometric. It is well-known that two closed, negatively curved manifolds have the same marked length spectrum if and only if there is an equivariant Moebius map between the boundaries of their universal covers (see [@otal1] and section 5 of [@biswas3]). Thus Otal’s result is equivalent to the following: if $X, Y$ are complete, simply connected Riemannian surfaces with curvature bounded above by $-1$, admitting free, properly discontinuous, cocompact, isometric actions by a discrete group $\Gamma$, and $f : \partial X \to \partial Y$ is an equivariant Moebius map, then $f$ extends to an isometry $F : X \to Y$. We remark that the cocompactness of the actions is crucial to Otal’s proof, where a certain invariant is defined by integrating over the compact quotient $X/\Gamma$. In our case, we do not assume existence of any isometric group actions or equivariance of the Moebius map, indeed the isometry groups of $X, Y$ may well be trivial. The proof of Theorem \[mainthm\] relies on certain properties of the circumcenter extension proved in [@biswas7]. In section 2 we recall the necessary preliminaries on Moebius maps and the circumcenter extension, and then in section 3 we prove the main theorem. Preliminaries ============= For details and proofs of the assertions made in this section we refer to [@biswas3], [@biswas5], [@biswas6], [@biswas7]. Moebius metrics and visual metrics ---------------------------------- Let $(Z, \rho_0)$ be a compact metric space of diameter one. For a metric $\rho$ on $Z$, the cross-ratio with respect to the metric $\rho$ is the function of quadruples of distinct points in $Z$ defined by $$[\xi, \xi', \eta, \eta']_{\rho} := \frac{\rho(\xi, \eta)\rho(\xi', \eta')}{\rho(\xi, \eta')\rho(\xi', \eta)}$$ A metric $\rho$ on $Z$ is said to be antipodal if it has diameter one and for any $\xi \in Z$ there exists $\eta \in Z$ such that $\rho(\xi, \eta) = 1$. We assume that the metric $\rho_0$ is antipodal.
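As a quick numerical illustration of these definitions (our own toy example, with $Z$ the unit circle and $\rho_0$ the diameter-one chordal metric), pulling back $\rho_0$ by a Moebius automorphism of the disk produces another antipodal metric of diameter one with exactly the same cross-ratios, an instance of the equivalence defined next.

```python
import numpy as np

def cross_ratio(d, p, q, r, s):
    """[p, q, r, s]_d = d(p, r) d(q, s) / (d(p, s) d(q, r))."""
    return d(p, r) * d(q, s) / (d(p, s) * d(q, r))

chordal = lambda z, w: abs(z - w) / 2.0   # diameter-one metric on the unit circle

def disk_automorphism(a):
    """z -> (z - a)/(1 - conj(a) z), |a| < 1; maps the unit circle to itself."""
    return lambda z: (z - a) / (1.0 - np.conj(a) * z)

rng = np.random.default_rng(0)
xi = np.exp(2j * np.pi * rng.random(4))    # four distinct boundary points
f = disk_automorphism(0.3 + 0.4j)
rho1 = lambda z, w: chordal(f(z), f(w))    # pullback metric f^*(chordal)
print(cross_ratio(chordal, *xi))           # the two printed values agree,
print(cross_ratio(rho1, *xi))              # so chordal and rho1 share all cross-ratios
```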
We say that two metrics $\rho_1, \rho_2$ on $Z$ are Moebius equivalent if for all quadruples of distinct points in $Z$, the cross-ratios with respect to the two metrics are equal. We let $\mathcal{M}(Z, \rho_0)$ denote the set of all antipodal metrics on $Z$ which are Moebius equivalent to $\rho_0$. For any $\rho_1, \rho_2 \in \mathcal{M}(Z, \rho_0)$, there exists a positive continuous function on $Z$ called the derivative of $\rho_2$ with respect to $\rho_1$, denoted by $\frac{d\rho_2}{d\rho_1}$, such that $$\rho_2(\xi, \eta)^2 = \frac{d\rho_2}{d\rho_1}(\xi
{ "pile_set_name": "ArXiv" }
null
--- abstract: 'After almost 20 years of hunting, only about a dozen hot corinos, hot regions enriched in interstellar complex organic molecules (iCOMs), are known. Of them, many are binary systems with the two components showing drastically different molecular spectra. Two obvious questions arise. Why are hot corinos so difficult to find and why do their binary components seem chemically different? The answer to both questions could be a high dust opacity that would hide the molecular lines. To test this hypothesis, we observed methanol lines at centimeter wavelengths, where dust opacity is negligible, using the Very Large Array interferometer. We targeted the NGC 1333 IRAS 4A binary system, for which one of the two components, 4A1, has a spectrum deprived of iCOMs lines when observed at millimeter wavelengths, while the other component, 4A2, is very rich in iCOMs. We found that centimeter methanol lines are similarly bright toward 4A1 and 4A2. Their non-LTE analysis indicates gas density and temperature ($\geq2\times10^6$ cm$^{-3}$ and 100–190 K), methanol column density ($\sim10^{19}$ cm$^{-2}$) and extent ($\sim$35 au in radius) similar in 4A1 and 4A2, proving that both are hot corinos. Furthermore, the comparison with previous methanol line millimeter observations allows us to estimate the optical depth of the dust in front of 4A1 and 4A2, respectively. The obtained values explain the absence of iCOMs line emission toward 4A1 at millimeter wavelengths and indicate that the abundances toward 4A2 are underestimated by $\sim$30%. Therefore, centimeter observations are crucial for the correct study of hot corinos, their census, and their molecular abundances.' author: - Marta De Simone - Cecilia Ceccarelli - Claudio Codella - 'Brian E. Svoboda' - Claire Chandler - Mathilde Bouvier - Yamamoto Satoshi - Nami Sakai - Paola Caselli - Cecile Favre - Laurent Loinard - Bertrand Lefloch - Hauyu Baobab Liu - 'Ana López-Sepulcre' - 'Jaime E. Pineda' - Vianney Taquet - Leonardo Testi bibliography: - 'IRAS4A.bib' title: 'Hot Corinos chemical diversity: myth or reality? ' --- Introduction {#sec:intro} ============ Interstellar complex organic molecules (iCOMs) are molecules containing carbon and at least six atoms that are detected in the interstellar medium [@herbst_complex_2009; @ceccarelli_seeds_2017]. These molecules are of particular interest because they carry a substantial fraction of carbon that can be used for prebiotic chemistry [e.g. @caselli_our_2012]. In solar-like young Class 0 protostars, iCOMs are found in relatively large quantities toward the so-called hot corinos, which are compact ($\leq$100 au), hot ($\geq$100 K) and dense ($\geq10^7$ cm$^{-3}$) regions enriched in iCOMs at the center of the envelopes accreting the future star [@ceccarelli_hot_2004; @ceccarelli_extreme_2007; @caselli_our_2012]. The first hot corino was discovered in 2003 toward the Class 0 source IRAS 16293–2422 [e.g. @cazaux_hot_2003; @jorgensen_alma_2016; @manigand_alma-pils_2020]. Since then, other Class 0 hot corinos have been discovered: NGC 1333 IRAS 4A [hereafter IRAS 4A; e.g. @bottinelli_complex_2004; @taquet_constraining_2015; @lopez-sepulcre_complex_2017; @de_simone_glycolaldehyde_2017; @sahu_implications_2019], NGC 1333 IRAS 2A, NGC 1333 IRAS 4B [e.g.
@jorgensen_probing_2005; @bottinelli_hot_2007; @maury_first_2014; @de_simone_glycolaldehyde_2017], HH 212 [@codella_water_2016; @bianchi_deuterated_2017; @lee_formation_2017; @lee_first_2019], B335 [@imai_discovery_2016], L483 [@oya_l483_2017; @jacobsen_organic_2018], Barnard1b-S [@marcelino_alma_2018], Ser-emb 1 [@martin-domenech_new_2019], BHR71-IRS1 [@yang_constraining_2020]. Lately, a few more evolved Class I hot corinos were also discovered: NGC 1333 SVS13A [@de_simone_glycolaldehyde_2017; @bianchi_census_2019], B1a [@oberg_complex_2014] and Ser-emb 17 [@bergner_organic_2019]. Therefore, after almost 20 years, only about a dozen hot corinos are known. Recent surveys concluded that $\sim$30% of low-mass Class 0/I protostars show emission from at least three iCOMs [@de_simone_glycolaldehyde_2017; @belloche_questioning_2020]. Most of the hot corinos cited above turn out to be binary systems when imaged at high angular resolution. This is in agreement with previous surveys that found that 40-60% of protostars are multiple systems [@maury_first_2014; @tobin_vla_2016]. Interestingly, with the first hot corino maps it became clear that the two objects in a given binary system can substantially differ in molecular complexity. Illustrative examples are provided by IRAS 16293–2422 and IRAS 4A [@jorgensen_alma_2016; @lopez-sepulcre_complex_2017]. IRAS 16293–2422 is composed of two sources, A and B, separated by $5\farcs1$ ($\sim$ 720 au), where source A, weaker in millimeter continuum emission, is brighter in iCOMs lines than source B [e.g. @caux_timasss_2011; @pineda_first_2012; @jorgensen_alma_2016; @manigand_alma-pils_2020]. IRAS 4A, located in the NGC 1333 region in the Perseus cloud at a distance of $(299\pm15)$ pc [@zucker_mapping_2018], is also a binary system, composed of IRAS 4A1 and IRAS 4A2 (hereafter 4A1 and 4A2), separated by 1.8$''$ ($\sim$540 au): while 4A1 is brighter in the mm continuum than 4A2, only 4A2 shows bright iCOMs lines [@taquet_constraining_2015; @lopez-sepulcre_complex_2017; @de_simone_glycolaldehyde_2017]. However, the brightest millimeter continuum source in a binary system is not always the one weak in iCOMs emission [see e.g. @ospina-zamudio_first_2018]. In summary, despite two decades of hunting, only a dozen hot corinos are known so far. Of them, many are binary systems with the two components showing drastically different molecular spectra. Two related questions arise: (1) Why are hot corinos so difficult to find? While it is known that not all Class 0/I sources possess hot corinos [e.g. @sakai_warm_2013; @higuchi_chemical_2018; @bouvier_hunting_2020], observational biases might hamper their detection. (2) Why do coeval objects seem to differ so drastically in their chemical composition? Is this a real difference or is it only/mostly due to observational biases? A major observational bias could be caused by the dust opacity, which could be very high in Class 0/I sources, due to their high densities and, consequently, column densities [e.g. @miotello_grain_2014; @galvan-madrid_effects_2018; @galametz_low_2019].
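To see how strongly this bias can act, the sketch below evaluates the frequency dependence of the dust optical depth for a simple power-law dust opacity. The opacity normalization, spectral index, gas-to-dust ratio and column density are all illustrative round numbers, not values measured toward IRAS 4A.

```python
import numpy as np

def tau_dust(nu_ghz, n_h2, kappa0=0.9, nu0=230.0, beta=1.7, gas_to_dust=100.0):
    """Dust optical depth for a gas column density n_h2 [cm^-2], assuming
    kappa_nu = kappa0 * (nu/nu0)**beta in cm^2 per gram of dust (assumed values)."""
    mu_m_h = 2.8 * 1.6726e-24                  # mean molecular weight times m_H [g]
    sigma_dust = mu_m_h * n_h2 / gas_to_dust   # dust mass column density [g cm^-2]
    return kappa0 * (nu_ghz / nu0) ** beta * sigma_dust

# With a column density of order 1e25 cm^-2, millimeter lines are noticeably
# attenuated while centimeter lines pass through essentially untouched.
for nu in (25.0, 143.0, 230.0):                # 1.2 cm, 2.1 mm, 1.3 mm
    tau = tau_dust(nu, n_h2=2e25)
    print(f"{nu:6.1f} GHz: tau = {tau:4.2f}, attenuation exp(-tau) = {np.exp(-tau):.2f}")
```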
If the effect of dust absorption is not negligible, there are three major consequences: (1) hot corinos may be difficult to detect in the millimeter (also) because of the high dust absorption of the iCOMs lines; (2) the diversity in molecular complexity observed between the components of binary systems may reflect a difference in the foreground dust column density rather than a real chemical difference between the two objects; (3) the iCOMs abundances in hot corinos could have been underestimated so far. In order to test this hypothesis, we targeted the IRAS 4A binary system, where the two objects show extremely different iCOMs line spectra at mm wavelengths (see above), and carried out observations of several methanol lines, methanol being one of the simplest iCOMs, at centimeter wavelengths where the dust is optically thin. Observations {#sec:obs} ============ The IRAS 4A system was observed at 1.3 cm using K–band receivers (18-26.5 GHz) of the Very Large Array (VLA) in C-configuration (35–3400 m) on 2018 December 10 (project ID: VLA/18B-166). We targeted 10 CH$_3$OH lines, with frequencies from 24.9 to
[**Canonical dual theory applied to a Lennard-Jones potential minimization problem**]{}\ [Jiapu Zhang]{}\ Centre for Informatics and Applied Optimization, &\ Graduate School of Sciences, Information Technology and Engineering,\ The University of Ballarat, Mount Helen, VIC 3350, Australia.\ Emails: j.zhang@ballarat.edu.au, jiapu\_zhang@hotmail.com\ Telephones: 61-3-5327 9809 (office), 61-4 2348 7360 (mobile)\ [**Abstract**]{} The simplified Lennard-Jones (LJ) potential minimization problem is $$\mbox{minimize}~~~f(x)=4\sum_{i=1}^N \sum_{j=1,j<i}^N \left( \frac{1}{\tau_{ij}^6} -\frac{1}{\tau_{ij}^3} \right)~~~\mbox{subject to}~~~ x\in \mathbb{R}^n,$$ where $\tau_{ij}=(x_{3i-2}-x_{3j-2})^2 +(x_{3i-1}-x_{3j-1})^2 +(x_{3i} -x_{3j} )^2$, $(x_{3i-2},x_{3i-1},x_{3i})$ are the coordinates of atom $i$ in $\mathbb{R}^3$, $i,j=1,2,\dots,N$ ($N\geq 2$ an integer), $n=3N$ and $N$ is the total number of atoms. The nonconvexity of the objective function and the huge number of local minima, whose number grows exponentially with $N$, interest many mathematical optimization experts. In this paper, the canonical dual theory elegantly tackles this problem, illuminated by amyloid fibril molecular model building.\ [**Key words**]{} Mathematical Canonical Duality Theory $\cdot$ Mathematical Optimization $\cdot$ Lennard-Jones Potential Minimization Problem $\cdot$ Global Optimization. Introduction ============ Neutral atoms are subject to two distinct forces in the limits of large and short distance: a dispersion force (i.e. attractive van der Waals (vdw) force) at long ranges, and a repulsion force, the result of overlapping electron orbitals. The Lennard-Jones (L-J) potential represents this behavior ([http://en.wikipedia.org/wiki/Lennard-Jones\_potential]{}, or [@locatelli2008] and references therein). The L-J potential is of the form $$\label{LJ_r_form} V(r)=4\varepsilon \left[ (\frac{\sigma}{r})^{12} - (\frac{\sigma}{r})^6 \right],$$ where $r$ is the distance between two atoms, $\varepsilon$ is the depth of the potential well and $\sigma$ is the atom diameter; these parameters can be fitted to reproduce experimental data or deduced from results of accurate quantum chemistry calculations. The $(\frac{\sigma}{r})^{12}$ term describes repulsion and the $(\frac{\sigma}{r})^6$ term describes attraction (Fig. \[LJ\_potential\]). ![The Lennard-Jones potential (formulas (\[LJ\_r\_form\]) and (\[LJ\_AB\_form\])). (This figure can be found at [http://homepage.mac.com/swain/CMC/DDResources/mol\_interactions/molecular\_interactions.html]{}.)](LJ_potential.eps){width="4.2in"} \[LJ\_potential\] In Fig. \[LJ\_potential\] we may see two points: (I) $V(r)=0$ (but the value of $V(r)$ is not the minimal value) when $r=\sigma$ (i.e. the distance between two atoms equals the sum of the [*atom radii*]{} of the atoms); and (II) when $r=2^{1/6}\sigma$ (i.e. the distance between two atoms equals the sum of the [*vdw radii*]{} of the atoms), the value of $V(r)$ reaches its minimal value $-\varepsilon$ (i.e. the bottom of the potential well; the force between the atoms is zero at this point). This paper is written based on (II).
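The two points above are easy to verify numerically. The following minimal sketch (in the reduced units $\varepsilon=\sigma=1$, anticipating the convention adopted below) checks that $V(\sigma)=0$ and that the minimum sits at $r=2^{1/6}\sigma$ with value $-\varepsilon$:

```python
import numpy as np
from scipy.optimize import minimize_scalar

eps, sigma = 1.0, 1.0            # reduced units, as adopted below in the text

def V(r):
    """Lennard-Jones pair potential, Eq. (LJ_r_form)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

# Point (I): V(sigma) = 0, although this is not the minimum.
print(V(sigma))                  # -> 0.0

# Point (II): the minimum sits at r = 2**(1/6)*sigma with V = -eps.
res = minimize_scalar(V, bracket=(0.8, 1.1, 2.0))
print(res.x, 2 ** (1 / 6))       # -> ~1.12246 in both cases
print(res.fun)                   # -> -1.0 (= -eps)
```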
If we introduce the coordinates of the atoms, whose number is denoted by $N$, and let $\varepsilon = \sigma =1$ be the reduced units, the form (\[LJ\_r\_form\]) becomes $$\label{LJ_x_form} f(x)=4\sum_{i=1}^N \sum_{j=1,j<i}^N \left( \frac{1}{\tau_{ij}^6} -\frac{1}{\tau_{ij}^3} \right),$$ where $\tau_{ij}=(x_{3i-2}-x_{3j-2})^2 +(x_{3i-1}-x_{3j-1})^2 +(x_{3i} -x_{3j} )^2 =||X_i-X_j||^2_2$ and $(x_{3i-2},x_{3i-1},x_{3i})$ are the coordinates of atom $i$, $i,j=1,2,\dots, N (\geq 2)$. The minimization of the L-J potential $f(x)$ on $\mathbb{R}^n$ (where $n=3N$) is an optimization problem: $$\label{LJ_f_form} \min_{x\in \mathbb{R}^{3N}} f(x).$$ This optimization problem interests many optimization experts, for example, Pardalos [@pardalossx1994], Xue [@xuemr1992; @xue1993; @xue1994a; @xue1994b; @xue2002], Huang [@huang2002b; @huang2002a] et al.\ For (\[LJ\_f\_form\]), when its global optimization solution is reached, the value $r$ in (\[LJ\_r\_form\]) should be the sum of the two [*vdw radii*]{} of the two interacting atoms. The three-dimensional structure of a molecule with $N$ atoms can be described by specifying the 3-dimensional coordinate positions $X_1, X_2, \dots, X_N \in \mathbb{R}^3$ of all its atoms. Given bond lengths $r_{ij}$ between a subset $S$ of the atom pairs, the determination of the molecular structure is $$\begin{aligned} (\mathcal{P}_0 ) \quad \text{to find} \quad &X_1,X_2,\dots ,X_N \quad \text{s.t.} \quad ||X_i-X_j||=r_{ij}, (i,j)\in S, \label{orginal_problem}\end{aligned}$$ where $||\cdot ||$ denotes a norm in a real vector space; it is taken to be the Euclidean 2-norm in this paper. (\[orginal\_problem\]) can be reformulated as a mathematical global optimization problem (GOP) $$\begin{aligned} (\mathcal{P} ) \quad &\min P(X)=\sum_{(i,j)\in S} w_{ij} (||X_i-X_j||^2 -r_{ij}^2 )^2 \label{prime_problem}\end{aligned}$$ in terms of finding the global minimum of the function $P(X)$, where $w_{ij}, (i,j)\in S$ are positive weights, $X = (X_1, X_2, \dots, X_N)^T \in \mathbb{R}^{N\times 3}$ [@morew1997] and usually $S$ has many fewer than $N^2/2$ elements due to the error in the theoretical or experimental data [@grossols2009; @zoubs1997]. There may not even exist a solution $X_1, X_2, \dots, X_N$ satisfying the distance constraints in (\[orginal\_problem\]), for example when the data for atoms $i, j, k \in S$ violate the triangle inequality; in this case, we may add a perturbation term $-\epsilon^TX$ to $P(X)$: $$\begin{aligned} (\mathcal{P}_{\epsilon} ) \quad &\min P_{\epsilon}(X)=\sum_{(i,j)\in S} w_{ij} (||X_i-X_j||^2 -r_{ij}^2 )^2 -\epsilon^TX, \label{prime_approx_problem}\end{aligned}$$ where $\epsilon \geq 0$. Thus, the L-J potential optimization problem (\[LJ\_f\_form\]) can be rewritten as the optimization problem (\[prime\_approx\_problem\]).\ Problem (\[prime\_approx\_problem\]) is just the minimization of a sum of fourth-order polynomials, which can be elegantly solved by the canonical dual theory (CDT) in optimization [@gao-book2000; @gaorp2010; @gaow2012]. We apply the above theory to an amyloid fibril molecular model building problem. The rest of this paper is arranged as follows. In the next section, i.e. Section 2, the CDT will be briefly introduced and its effectiveness will be illuminated by applying the CDT-based optimization approach to the famous double-well problem. In Section 3,
--- abstract: 'Material recognition can help inform robots about how to properly interact with and manipulate real-world objects. In this paper, we present a multimodal sensing technique, leveraging near-infrared spectroscopy and close-range high resolution texture imaging, that enables robots to estimate the materials of household objects. We release a dataset of high resolution texture images and spectral measurements collected from a mobile manipulator that interacted with 144 household objects. We then present a neural network architecture that learns a compact multimodal representation of spectral measurements and texture images. When generalizing material classification to new objects, we show that this multimodal representation enables a robot to recognize materials with greater performance as compared to prior state-of-the-art approaches. Finally, we present how a robot can combine this high resolution local sensing with images from the robot’s head-mounted camera to achieve accurate material classification over a scene of objects on a table.' author: - 'Zackory Erickson, Eliot Xing, Bharat Srirangam, Sonia Chernova, and Charles C. Kemp[^1][^2][^3][^4]' bibliography: - 'bibliography.bib' title: | **Multimodal Material Classification for Robots using\ Spectroscopy and High Resolution Texture Imaging** --- Introduction {#sec:intro} ============ When interacting with everyday objects, people frequently use material properties to inform their interactions [@buckingham2009cueslifting]. We make sure not to place metal in the microwave, we take caution when carrying glass or ceramic objects, we look for styrofoam or paper cups to hold hot liquids, and we sort some paper, plastic, and metal objects into recycling bins. Robots can benefit from these same skills when operating in human environments. In this work, we demonstrate how robots can use a non-contact multimodal sensing technique, based on spectroscopy and close-range texture imaging, to accurately estimate the materials of household objects prior to manipulation. This sensing approach collects near-infrared spectral measurements from a handheld micro spectrometer with a narrow field-of view camera for high resolution texture imaging. Both sensors are small and can be held by or directly integrated into a robot’s end effector. Non-contact sensing can enable a robot to determine properties and use cases of objects without the intricacies of contact physics that can affect the performance of haptic touch-based sensing. To evaluate this multimodal sensing technique, we have assembled and released a dataset of 14,400 high resolution texture images and corresponding spectral measurements. We collected this data with a PR2 mobile manipulator that interacted with 144 household objects, shown in Fig. \[fig:intro\], which spanned eight material categories: ceramic, fabric, foam, glass, metal, paper, plastic, and wood. Using this dataset, we trained a neural network that learns a shared representation of spectral and visual sensory data. By learning a compact multimodal representation, our model achieves state-of-the-art material recognition performance of 80.0% when generalizing material classification to a new set of heldout objects across eight materials (12.5% baseline with a random classifier). We further investigate the role of texture image preprocessing by comparing several ImageNet-pretrained CNN models for generating lower-dimensional visual representations. 
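As a schematic illustration of the kind of two-branch architecture described above, the sketch below fuses a spectral measurement with CNN-derived texture features into a compact shared representation before classifying over the eight material categories. The input dimensions, layer widths, and fusion strategy here are illustrative assumptions for the sketch, not the exact architecture used in the paper:

```python
import torch
import torch.nn as nn

# Assumed sizes (illustrative): raw spectral channels, pretrained-CNN texture
# feature dimension, and the eight material classes.
SPEC_DIM, TEX_DIM, N_MATERIALS = 331, 512, 8

class MultimodalNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder for the near-infrared spectral measurement.
        self.spec_enc = nn.Sequential(nn.Linear(SPEC_DIM, 128), nn.ReLU(),
                                      nn.Linear(128, 64), nn.ReLU())
        # Encoder for texture features precomputed by a pretrained CNN.
        self.tex_enc = nn.Sequential(nn.Linear(TEX_DIM, 128), nn.ReLU(),
                                     nn.Linear(128, 64), nn.ReLU())
        # Fusion into a compact shared representation, then classification.
        self.fuse = nn.Sequential(nn.Linear(128, 64), nn.ReLU())
        self.head = nn.Linear(64, N_MATERIALS)

    def forward(self, spec, tex):
        z = torch.cat([self.spec_enc(spec), self.tex_enc(tex)], dim=1)
        return self.head(self.fuse(z))

net = MultimodalNet()
logits = net(torch.randn(4, SPEC_DIM), torch.randn(4, TEX_DIM))
print(logits.shape)   # torch.Size([4, 8])
```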
Finally, using this spectral and visual sensing approach, we demonstrate that a robot can reliably classify a scene of objects on a table without direct contact. In this work, we make the following contributions: - We introduce a near-infrared spectroscopy and high resolution texture imaging approach that surpasses prior state-of-the-art performance [@erickson2019spec] for material classification. - We release SpectroVision, a dataset of 14,400 high resolution texture images and spectral measurements collected from a PR2 mobile manipulator that interacted with 144 household objects from eight material categories. - We demonstrate that our multimodal approach surpasses the performance of models trained on each independent modality. - We show that a robot equipped only with our handheld sensors and an RGB-D camera can successfully use our approach to perform material classification on multiple objects casually arranged on a table. Related Work and Background {#sec:related_work} =========================== Material Recognition -------------------- Material recognition using haptic sensors, which require direct physical contact with objects, has been widely explored. Modalities such as force [@bhattacharjee2012tactile; @decherchi2011tactile], temperature [@kerr2013thermal; @bhattacharjee2015heat; @cho2018thermal], capacitance [@alagi2018capacitive], and vibration [@sinapov2011vibrotactile; @fang2019acoustic], have been used in haptic perception for material recognition. The BioTac fingertip, capable of sensing force, temperature, and vibration, has been studied for multimodal haptic perception [@chu2015hapticadjectives; @fishel2012biotac; @xu2013tactile; @kerr2018biotac]. Chin et al. introduced a compliant haptic sensor for robots to distinguish between plastic, metal, and paper during recycling [@chin2019automated]. Several works also use multimodal perception by combining data from multiple modalities for material recognition and outperforming single modality approaches [@chathuranga2013investigation; @sinapov2014grounding; @erickson2017semi; @bhattacharjee2018multimodal; @zhang2019multimodalcutting]. Similarly, we find that non-contact material recognition approaches also benefit from multiple sensing modalities, and we demonstrate that visual sensing couples well with spectroscopy. Several studies have evaluated visual features for material recognition [@liu2010exploring; @hu2011toward; @dimitrov2014vision; @bell2015material; @schwartz2019recognizing]. Extensive literature also exists for leveraging visual or depth imaging for vision-based tactile sensors, including the GelSight [@yuan2017shape; @li2013sensing], FingerVision [@yamaguchi2017implementing], and TacTip [@ward2018tactip; @cramphorn2018voronoi], to perform a variety of manipulation tasks [@chen2018tactile; @yamaguchi2016combining], texture recognition [@luo2018vitac; @lee2019touching], and estimation of material properties [@kampouris2016fine; @yuan2017gelsight]. Both [@yuan2017shape] and [@gao2016deep] have used visual and haptic features to estimate object properties, such as hardness or haptic adjectives. Overall, we find that multimodal approaches overcome weaknesses in the ability of any individual modality to classify materials. Spectroscopy ------------ Spectroscopy [@pasquini2018specreview] has found a number of practical applications such as for pharmaceutical manufacturing [@roggo2007pharma], food analysis [@bellon1994food], and recycled material separation [@masoumi2012plastic]. 
Recently, a number of handheld spectrometers have been developed for performing spectral analysis outside of lab and manufacturing settings [@crocombe2018portable; @rateni2017smartphonebased]. These portable micro spectrometers have been demonstrated for pharmaceutical quality control [@yan2018pharma] and food analysis [@das2016fruit; @lee2017nir; @kartakoullis2018meat]. Prior research has shown how a robot can use near-infrared spectroscopy with a commercial handheld SCiO spectrometer to recognize materials of household objects [@erickson2019spec]. Near-infrared spectroscopy has since been used by robots to recognize the materials of household objects for informing semantic grasp predictions and for tool construction [@liu2019cage; @nair2019autonomous; @shrivatsav2019tool]. In this paper, we demonstrate that robots can more accurately recognize common household materials by leveraging both spectroscopy and close-range texture imaging. Texture Representation ---------------------- Several techniques have been introduced for extracting or learning texture representations from visual images, including convolutional neural network (CNN) based texture analysis [@liu2019texturesurvey] and handcrafted descriptors [@paolo2017texturedescriptors]. Recent work in texture analysis has primarily investigated CNN-based texture representations [@liu2019texturesurvey]. This is due in part to a collection of works in texture and material classification tasks that have shown learned CNN feature descriptors frequently outperform alternative, handcrafted approaches [@paolo2017texturedescriptors; @liu2019bow; @schwartz2019recognizing; @bell2015material; @kalliatakis2017evaluating]. Research in texture synthesis [@gatys2015texture; @lin2016visualizing] has also provided insight into the ways in which CNNs capture and encode textures. Vision-based tactile sensing techniques for texture classification have frequently used texture features from pretrained ImageNet models [@yuan2018active]. The use of these models for extracting textural features is further supported by findings of Geirhos et al. [@geirhos2018imagenettexture] that ImageNet-trained CNNs are more biased towards recognizing and representing localized textures rather than global shape structure, similar to results by [@gatys2015texture; @long2018texturestat; @ballester2016cnnsketches]. Building on these prior findings, we leverage pretrained ImageNet CNNs to extract robust visual texture features for material classification. SpectroVision Dataset {#sec:dataset} ===================== Sensors ------- Our sensing approach consists of a micro handheld spectrometer for
--- abstract: | Following [@Visintin], we exploit the fractional perimeter of a set to give a definition of fractal dimension for its measure theoretic boundary. We calculate the fractal dimension of sets which can be defined in a recursive way and we give some examples of this kind of sets, explaining how to construct them starting from well-known self-similar fractals.\ In particular, we show that in the case of the von Koch snowflake $S\subset{\mathbb R}^2$ this fractal dimension coincides with the Minkowski dimension, namely $$P_s(S)<\infty\qquad\Longleftrightarrow\qquad s\in\Big(0,2-\frac{\log4}{\log3}\Big).$$ We also study the asymptotics as $s\to1^-$ of the fractional perimeter of a set having finite (classical) perimeter. author: - Luca Lombardini title: 'Fractional perimeter from a fractal perspective' --- [Introduction and main results]{} It is well known (see e.g. [@Gamma] and [@cafenr]) that sets with a regular boundary have finite $s$-fractional perimeter for every $s\in(0,1)$. In this paper we show that sets with an irregular, “fractal”, boundary can also have finite $s$-perimeter for every $s$ below some threshold $\sigma<1$.\ Actually, the $s$-perimeter can be used to define a “fractal dimension” for the measure theoretic boundary $$\partial^-E:=\{x\in{\mathbb R}^n\,|\,0<|E\cap B_r(x)|<\omega_nr^n\textrm{ for every }r>0\},$$ of a set $E\subset{\mathbb R}^n$. Indeed, in [@Visintin] the author suggested using the index $s$ of the seminorm $[\chi_E]_{W^{s,1}}$ as a way to measure the codimension of $\partial^-E$ and he proved that the fractal dimension obtained in this way is less than or equal to the (upper) Minkowski dimension. We give an example of a set, the von Koch snowflake, for which these two dimensions coincide. Moreover, exploiting the roto-translation invariance and the scaling property of the $s$-perimeter, we calculate the dimension of sets which can be defined in a recursive way similar to that of the von Koch snowflake. On the other hand, as remarked above, sets with a regular boundary have finite $s$-perimeter for every $s$ and actually their $s$-perimeter converges, as $s$ tends to 1, to the classical perimeter, both in the classical sense (see [@cafenr]) and in the $\Gamma$-convergence sense (see [@Gamma]).\ As a simple byproduct of the computations developed in this paper, we exploit Theorem 1 of [@Davila] to prove this asymptotic property for a set $E$ having finite classical perimeter in a bounded open set with Lipschitz boundary.\ This last result is probably well known to experts, though not explicitly stated in the literature (as far as we know).\ In particular, we remark that this lowers the regularity required in [@cafenr], where the authors assumed the boundary $\partial E$ to be $C^{1,\alpha}$.\ We begin by recalling the definition of $s$-perimeter. Let $s\in(0,1)$ and let $\Omega\subset\mathbb R^n$ be an open set. The $s$-fractional perimeter of a set $E\subset\mathbb R^n$ in $\Omega$ is defined as $$P_s(E,\Omega):=\mathcal L_s(E\cap\Omega,{\mathcal C}E\cap\Omega)+ \mathcal L_s(E\cap\Omega,{\mathcal C}E\setminus\Omega)+ \mathcal L_s(E\setminus\Omega,{\mathcal C}E\cap\Omega),$$ where $$\mathcal L_s(A,B):=\int_A\int_B\frac{1}{|x-y|^{n+s}}\,dx\,dy, $$ for every pair of disjoint sets $A,\,B\subset\mathbb R^n$.
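For intuition about the interaction functional ${\mathcal L}_s$, consider the simplest one-dimensional example (ours, not taken from the paper): $A=(-1,0)$ and $B=(0,1)$ in ${\mathbb R}$, for which direct integration of $|x-y|^{-(1+s)}$ gives the closed form $(2-2^{1-s})/\big(s(1-s)\big)$. This is finite for every $s\in(0,1)$, as expected for a set with smooth boundary. A short sketch compares a midpoint-rule estimate with this closed form:

```python
import numpy as np

# Midpoint-rule estimate of L_s(A,B) for A=(-1,0), B=(0,1) in R^1, compared
# with the closed form (2 - 2**(1-s)) / (s*(1-s)) obtained by integrating
# |x - y|**(-(1+s)) directly over the two intervals.
s, m = 0.5, 2000
x = -1.0 + (np.arange(m) + 0.5) / m          # midpoints of cells of A
y = (np.arange(m) + 0.5) / m                 # midpoints of cells of B
K = np.abs(x[:, None] - y[None, :]) ** (-(1.0 + s))
print("midpoint rule:", K.sum() / m**2)
print("closed form  :", (2 - 2 ** (1 - s)) / (s * (1 - s)))
```

The two numbers agree to within a few percent; the residual discrepancy comes from the integrable singularity of the kernel at the contact point $x=y=0$, which the midpoint rule resolves only slowly.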
We simply write $P_s(E)$ for $P_s(E,{\mathbb R}^n)$.\ We can also write the fractional perimeter as the sum $$P_s(E,\Omega)=P_s^L(E,\Omega)+P_s^{NL}(E,\Omega),$$ where $$\begin{split} &P_s^L(E,\Omega):=\mathcal L_s(E\cap\Omega,{\mathcal C}E\cap\Omega)=\frac{1}{2}[\chi_E]_{W^{s,1}(\Omega)},\\ & P_s^{NL}(E,\Omega):={\mathcal L}_s(E\cap\Omega,{\mathcal C}E\setminus\Omega)+{\mathcal L}_s(E\setminus\Omega,{\mathcal C}E\cap\Omega). \end{split}$$ We can think of $P^L_s(E,\Omega)$ as the local part of the fractional perimeter, in the sense that if $|(E\Delta F)\cap\Omega|=0$, then $P^L_s(F,\Omega)=P^L_s(E,\Omega)$. We say that a set $E$ has locally finite $s$-perimeter if it has finite $s$-perimeter in every bounded open set $\Omega\subset{\mathbb R}^n$.\ Now we give precise statements of the results obtained, starting with the fractional analysis of fractal dimensions. [Fractal boundaries]{} First of all, we prove in Section 3.1 that in some sense the measure theoretic boundary $\partial^-E$ is the “right definition” of boundary for working with the $s$-perimeter. To be more precise, we show that $$\partial^-E=\{x\in{\mathbb R}^n\,|\,P_s^L(E,B_r(x))>0,\,\forall\,r>0\},$$ and that if $\Omega$ is a connected open set, then $$P_s^L(E,\Omega)>0\quad\Longleftrightarrow\quad \partial^-E\cap\Omega\not=\emptyset.$$ This can be thought of as an analogue in the fractional framework of the fact that for a Caccioppoli set $E$ we have $\partial^-E=$ supp $|D\chi_E|$. Now the idea of the definition of the fractal dimension consists in using the index $s$ of $P_s^L(E,\Omega)$ to measure the codimension of $\partial^- E\cap\Omega$, $${\textrm{Dim}}_F(\partial^-E,\Omega):=n-\sup\{s\in(0,1)\,|\,P^L_s(E,\Omega)<\infty\}.$$ As shown in [@Visintin] (Proposition 11 and Proposition 13), the fractal dimension $\textrm{Dim}_F$ defined in this way is related to the (upper) Minkowski dimension by $$\label{intro_dim_ineq} {\textrm{Dim}}_F(\partial^-E,\Omega)\leq\overline{{\textrm{Dim}}}_\mathcal M(\partial^-E,\Omega),$$ (for the convenience of the reader we provide a proof in Proposition $\ref{vis_prop}$). If $\Omega$ is a bounded open set with Lipschitz boundary, this means that $$\label{intro_dim_ineq2} P_s(E,\Omega)<\infty\qquad\textrm{for every }s\in\big(0,n-\overline{{\textrm{Dim}}}_\mathcal M(\partial^-E,\Omega)\big),$$ since the nonlocal part of the $s$-perimeter of any set $E\subset{\mathbb R}^n$ is $$P_s^{NL}(E,\Omega)\leq2P_s(\Omega)<\infty,\qquad\textrm{for every }s\in(0,1).$$ We show that for the von Koch snowflake $(\ref{intro_dim_ineq})$ is actually an equality. ![[*The first three steps of the construction of the von Koch snowflake*]{}](fiocco){width="100mm"} Namely, we prove the following \[von\_koch\_snow\] Let $S\subset{\mathbb R}^2$ be the von Koch snowflake. Then $$\label{koch1} P_s(S)<\infty,\qquad\forall\,s\in\Big(0,2-\frac{\log4}{\log3}\Big),$$ and $$\label{koch2} P_s(S)=\infty,\qquad\forall\,s\in\Big[2-\frac{\log4}{\log3},1\Big).$$ Therefore $${\textrm{Dim}}_F(\partial S)={\textrm{Dim}}_\mathcal{M}(\partial S)=\frac{\log4}{\log3}.$$ Actually, exploiting the self-similarity of the von Koch curve, we have $${\textrm{Dim}}_F(\partial S,\Omega)=\frac{\log4}{\log3},$$ for every $\Omega$ s.t. $\partial S\cap\Omega\not=\emptyset$. In particular, this is true for every $\Omega=B_r(p)$ with $p\in S$ and $r>0$
--- author: - Seungchul Ryu title: 'Local Area Transform for Cross-Modality Correspondence Matching and Deep Scene Recognition' --- Thesis Supervisor: [**Kwanghoon Sohn**]{}\ Committee Member: [**Jaihie Kim**]{}\ Committee Member: [**Sangyoun Lee**]{}\ Committee Member: [**Bumsub Ham**]{}\ Committee Member: [**Dongbo Min**]{}\ *I would like to dedicate this dissertation to my loving family...*
--- author: - 'Nilay Bostan,' - and Vedat Nefer Şenoğuz bibliography: - 'quartic\_radiative\_v3.bib' title: 'Quartic inflation and radiative corrections with non-minimal coupling' --- Introduction {#sec:intro} ============ Inflation [@Guth:1980zm; @Linde:1981mu; @Albrecht:1982wi; @Linde:1983gd], which is an accelerated expansion era thought to occur in the early universe, both helps explaining general properties of the universe such as its flatness and large scale homogeneity, and it leads to the primordial density perturbations that evolve into the structures in the universe. Up to now many inflationary models have been introduced with most of them depending on a scalar field called the inflaton. Predictions of these models are being tested by the cosmic microwave background radiation temperature anisotropies and polarization observations that have become even more sensitive in recent years [@Aghanim:2018eyx; @Akrami:2018odb]. The observational parameter values predicted by different potentials that the inflaton may have were calculated in many articles, see for instance ref. [@Martin:2013tda]. A vast majority of these articles assume that the inflaton is coupled to gravitation solely through the metric. On the other hand the action in general also contains a coupling term $\xi \phi^2 R$ between the Ricci scalar and the inflaton (this is required by the renormalizability of the scalar field theory in curved space-time [@Callan:1970ze; @Freedman:1974ze; @Buchbinder:1992rb]), and inflationary predictions are significantly altered depending on the coefficient of this term [@Abbott:1981rg; @Spokoiny:1984bd; @Lucchin:1985ip; @Futamase:1987ua; @Fakir:1990eg; @Salopek:1988qh; @Amendola:1990nn; @Faraoni:1996rf; @Faraoni:2004pi]. In this work, we first review in [section \[sec:inf\]]{} how to calculate the main observables, namely the spectral index $n_s$ and the tensor-to-scalar ratio $r$, for an inflaton potential in the presence of non-minimal coupling. Next, in [section \[sec:quartic\]]{} we review the $\lambda\phi^4$ quartic potential, providing analytical approximations for $n_s$ and $r$, and showing that the model agrees with current data for $\xi\gtrsim0.005$. We also briefly discuss how and to what extent can the reheating stage affect the values of observables. [Section \[sec:radiative\]]{} introduces two prescriptions that can be used to calculate radiative corrections to the inflaton potential due to inflaton couplings to bosons or fermions. In prescription I, a conformal transformation is applied to express the action in the Einstein frame; and the field dependent mass terms in the one-loop Coleman-Weinberg potential are expressed in this frame. Whereas in prescription II, the field dependent mass terms are taken into account in the original Jordan frame. The next two sections, [section \[sec:p1\]]{} and [section \[sec:p2\]]{} contain a detailed numerical investigation of how the radiative corrections due to inflaton couplings to bosons or fermions modify the predictions of the non-minimal quartic potential, for each prescription. We summarize our results in [section \[sec:conc\]]{}. The effect of radiative corrections to the predictions of the non-minimal quartic potential has been discussed mostly in the context of standard model (SM) Higgs inflation [@Bezrukov:2007ep], see for instance refs. [@Bezrukov:2013fka; @Rubio:2018ogq] and the references within. In this context, since the self coupling $\lambda$ of the inflaton is known, $\xi\gg1$ is required [@Bezrukov:2009db]. 
In this limit, the observational parameters are given in terms of the e-fold number $N$ by $n_s\approx 1-2/N$ and $r\approx 12/N^2$ [@Komatsu:1999mt; @Tsujikawa:2004my] as in the Starobinsky model [@Starobinsky:1980te; @Kehagias:2013mya]. Radiative corrections lead to deviations from this so-called Starobinsky point in the $n_s$ and $r$ plane; however, the size of these deviations differs according to the prescription used for the calculation. As discussed in refs. [@Bezrukov:2008ej; @Bezrukov:2009db; @Bezrukov:2013fka], the plateau-type structure of the Einstein frame potential remains intact and the deviations in $n_s$ are rather insignificant according to prescription I. However, according to prescription II, radiative corrections lead to a linear term in the Einstein frame potential written in terms of a scalar field with a canonical kinetic term. If the inflaton couples dominantly to bosons, the coefficient of this term is positive, and as this coefficient is increased the inflationary predictions move towards the linear potential predictions $n_s\approx1-3/(2N)$ and $r\approx4/N$ [@Martin:2013tda]. If the inflaton couples dominantly to fermions, the coefficient of this term is negative, leading to a reduction in the values of $n_s$ and $r$ [@Okada:2010jf]. In this work we take the inflaton to be a SM singlet scalar field, and take the self-coupling $\lambda$ and $\xi$ to be free parameters, with $\xi\lesssim10^3$ as discussed in [section \[sec:quartic\]]{}.[^1] Radiative corrections for a SM singlet inflaton have been studied by refs. [@Lerner:2009xg; @Lerner:2011ge; @Kahlhoefer:2015jma]. Unlike these works, we focus on studying the effect of radiative corrections for general values of $\xi\lesssim10^3$, including the case of $\xi\ll1$. A related work which includes the case of $\xi\ll1$ is ref. [@Okada:2010jf]. In this work the inflaton is assumed to couple to fermions and prescription II is used. Ref. [@Racioppi:2018zoy] considers a potential which coincides with the potential discussed in [section \[sec:p2\]]{} for inflaton coupling to bosons.[^2] Here, we extend previous works by considering both prescriptions I and II, and inflaton coupling to bosons or fermions. For each case we calculate the regions in the plane of coupling parameter values for which the spectral index $n_s$ and the tensor-to-scalar ratio $r$ are in agreement with the current data. We also display how $n_s$ and $r$ change due to radiative corrections in these regions. Finally, we note that the non-minimal quartic inflation model given by [eq. (\[lagrangian\])]{} is a special case of the universal attractor models discussed in ref. [@Kallosh:2013tua]. In the strong coupling limit $\xi\to\infty$, the inflationary predictions of these models coincide with those of conformal attractor models [@Kallosh:2013hoa], which correspond to the $\alpha=1$ case of the $\alpha$-attractor models [@Kallosh:2013yoa]. The relation between these types of models is elucidated in ref. [@Galante:2014ifa]. The reheating phase of Higgs and $\alpha$-attractor-type inflation models due to inflaton couplings to additional fields has been discussed in a number of works, see e.g. refs. [@Bezrukov:2008ut; @GarciaBellido:2008ab; @Bezrukov:2011gp; @Ueno:2016dim; @Drewes:2017fmn; @Dimopoulos:2017tud].
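For orientation, the limiting predictions quoted above are easy to evaluate numerically. In the sketch below, the $\xi\gg1$ (Starobinsky) and linear-potential formulas are the ones given in the text, while the minimally coupled quartic values $n_s\approx1-3/N$, $r\approx16/N$ are the standard textbook results, included only for comparison:

```python
# Limiting slow-roll predictions evaluated at N = 50 and 60 e-folds.
cases = {
    "minimal quartic (xi=0)":  (lambda N: 1 - 3 / N,       lambda N: 16 / N),
    "xi >> 1 (Starobinsky)":   (lambda N: 1 - 2 / N,       lambda N: 12 / N**2),
    "linear-potential limit":  (lambda N: 1 - 3 / (2 * N), lambda N: 4 / N),
}
for name, (ns, r) in cases.items():
    for N in (50, 60):
        print(f"{name:24s} N={N}:  n_s = {ns(N):.4f}   r = {r(N):.4f}")
```

The output makes the qualitative picture explicit: the minimally coupled quartic point ($n_s=0.95$, $r\approx0.27$ at $N=60$) sits far from the attractor point ($n_s\approx0.967$, $r\approx0.003$), with the linear-potential point in between.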
While the observational parameter values also depend on the details of the reheating phase in general, for the special case of the non-minimal quartic inflation model and for the range of $\xi$ values that we consider, the average equation of state during reheating is given by $p\approx\rho/3$ as we discuss in [section \[sec:quartic\]]{}. The number of e-folds and the observational parameter values are then to a good approximation independent of the reheat temperature. Thus, in our case the main effect of inflaton couplings to additional fields on the observational parameters is not due to the reheating phase but rather due to the radiative corrections to the potential during inflation, on which we focus in this work. Inflation with non-minimal coupling {#sec:inf} =================================== Consider a non-minimally coupled scalar field $\phi$ with a canonical kinetic term and a potential $V_J(\phi)$: $$\label{vjphi} \frac{\mathcal{L}_J}{\sqrt{-g}}=\frac12F(\phi)R-\frac12g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi-V_J(\phi)\,,$$ where the subscript $J$ indicates that the Lagrangian is specified in the Jordan frame, and $F(\phi)=1+\xi\phi^2$. We are using units where the reduced Planck scale $m_P=1/\sqrt{8\pi G}\approx2.4\times10^{18}\text{ GeV}$ is set equal to unity, so we require $F(\phi)\to1
--- abstract: 'We derive the braid relations of the charged anyons interacting with a magnetic field on Riemann surfaces. The braid relations are used to calculate the quasiparticle’s spin in the fractional quantum Hall states on Riemann surfaces. The quasiparticle’s spin is found to be topologically independent and satisfies physical restrictions.' address: 'International School for Advanced Studies, SISSA, I-34014 Trieste, Italy ' author: - Dingping Li title: 'Intrinsic quasiparticle’s spin and fractional quantum Hall effect on Riemann surfaces' --- The possibility of fractional statistics on two dimensional surfaces was discovered in Ref.[@lei]. When two fractional-statistics particles (anyons) are interchanged, the wave function changes by a phase $\exp(i\theta)$, where $\theta$ is neither $\theta =0$ (Bose statistics) nor $\theta =\pi$ (Fermi statistics). The quasiparticles in fractional quantum Hall systems (for a review on the fractional quantum Hall effect (FQHE), see Ref.[@girvin]) are anyons [@arovas; @halp] and this picture has been used to construct the hierarchical wave function in the FQHE [@halp]. Anyons may also find applications in some condensed matter systems [@sup]. In higher-dimensional spaces ($D>2$), only fermions and bosons exist; the fermion’s spin is a half-integer, while the boson’s spin is an integer. It is therefore very interesting to know what the spin of anyons is and what spin-statistics relation they obey. The spin-statistics relation of anyons is the generalized one if the spin $s$ equals $s=\theta/2\pi$. Anyons in various models, for example, in non-linear sigma models, Chern-Simons field theories and relativistic quantum field theories in two-dimensional spaces, indeed have the generalized spin-statistics relation [@rdt; @fro]. Naturally, one asks what the spin-statistics relation of the quasiparticles in the FQHE is. The recent discussion about the quasiparticle’s spin (QPS) can be found in Refs.[@lidp; @wen; @kiv]. Reference [@lidp] calculated the QPS by analyzing the hierarchical wave function or by calculating the Berry phase of the quasiparticles moving in a closed path on the sphere. Reference [@wen] obtained the QPS by analyzing the Ginzburg-Landau-Chern-Simons (GLCS) theory of the FQHE on the sphere. On the other hand, Reference [@kiv] calculated the QPS based on the GLCS theory on the disc geometry. However, the results in Refs.[@lidp; @wen; @kiv] are different from each other. The ambiguity of the QPS in the literature is due to the lack of a good definition of the QPS. In this paper, we will calculate the QPS by using braid relations of anyons on Riemann surfaces [@wu; @thou; @ein; @wend; @eina; @imbo; @bla]. We will show that the QPS calculated in the following is consistent with physical restrictions and is intrinsic in the sense of its topological independence. The results of the present paper agree with those of Ref.[@lidp]. Let us consider $N$ spinning anyons on an oriented compact Riemann surface with $g$ handles[@eina; @imbo; @bla]. To define the anyon’s spin, we attach an oriented local frame to every particle. When a particle moves on a curved surface (the torus is a flat surface), the attached frame is parallel transported and a path-dependent frame-rotation is associated with the particle transport. Let us denote the clockwise $2\pi$ rotation of the frame attached to the particle by $R_{2\pi}$. The action of the operator $R_{2\pi}$ on the wave function will give a phase $\exp (i2\pi S)$.
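In the abelian representation considered below, the exchange operators reduce to a phase, $\sigma_i=\exp(i\theta){\bf 1}_M$, while the operators that carry a particle around the handles must obey the Weyl-type commutation relation $\tau_m\rho_m=\sigma^2\rho_m\tau_m$. A standard finite-dimensional realization of such a pair is the clock and shift matrices, which exist when $\theta=\pi p/M$; the sketch below (our illustrative choice $M=4$, $p=1$) simply verifies the commutation relation numerically:

```python
import numpy as np

# Weyl clock-shift realization of the handle operators: for theta = pi*p/M
# the pair tau = clock, rho = shift satisfies tau @ rho = omega * rho @ tau
# with omega = exp(2i*theta) = sigma**2, as required by the braid relations.
M, p = 4, 1
theta = np.pi * p / M
omega = np.exp(2j * theta)                    # = sigma^2

tau = np.diag(omega ** np.arange(M))          # clock matrix
rho = np.roll(np.eye(M), 1, axis=0)           # cyclic shift matrix

print(np.allclose(tau @ rho, omega * (rho @ tau)))   # True
```

The wrap-around column of the shift matrix forces $\omega^M=1$, i.e. $\theta=\pi p/M$: a rational statistics angle requires an $M$-dimensional multiplet, consistent with taking $\sigma_i$ proportional to the $M\times M$ identity below.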
We define $S$ as the spin of the particle. $S$ is equal to $1/2$ for the electron by this definition. The braiding operators are $\sigma_i ,\, \rho_{n,i} ,\, \tau_{n,i}$, where $\sigma_i$ interchanges (clockwise) particle $i$ and particle $i+1$, and $\rho_{n,i} ,\, \tau_{n,i}$ take particle $i$ around the noncontractible loops on the $n$-th handle. Here we use the same definition of the operators $\rho_{n,i} ,\, \tau_{n,i}$ as Ref. [@eina]. The braid relations for spinning anyons on the Riemann surface with $g$ handles are \[brarel\] $$\sigma_j\sigma_{j+1}\sigma_j =\sigma_{j+1}\sigma_j\sigma_{j+1} , \label{brarel:1}$$ $$\tau_{m,j+1}= \sigma^{-1}_j \tau_{m,j}\sigma_j , \rho_{m,j+1}=\sigma^{-1}_j\rho_{m,j}\sigma_j , \label{brarel:2}$$ $$\rho^{-1}_{m,j}\tau_{m,j+1}\sigma^{-2}_j\rho_{m,j} \sigma^2_j\tau^{-1}_{m,j+1}=\sigma^2_j , \label{brarel:3}$$ $$\sigma_1 \sigma_2 \cdots \sigma^2_{N-1}\cdots \sigma_2 \sigma_1=R^{2(g-1)}_{2\pi} \prod^g_n\rho^{-1}_{n,1}\tau^{-1}_{n,1}\rho_{n,1}\tau_{n,1}. \label{relation}$$ For the spinless anyons on the sphere, Equation (\[relation\]) becomes $\sigma_1 \sigma_2 \cdots \sigma^2_{N-1}\cdots \sigma_2 \sigma_1=1 $, which was derived in Ref. [@thou]. It expresses the fact that a closed (clockwise) loop of particle $1$ around all the other particles can be continuously shrunk to a point on the rear side of the sphere. Eq. (\[relation\]) is the generalization of the case of the spinless anyons on the sphere to spinning anyons on general Riemann surfaces[@eina; @imbo]. When we deform the loop of the left side of Eq. (\[relation\]) to the loop of the right side of Eq. (\[relation\]), described by $\prod^g_n\rho^{-1}_{n,1}\tau^{-1}_{n,1}\rho_{n,1}\tau_{n,1}$, the attached spin frame is rotated (a $4\pi (g-1)$ rotation), and we obtain a phase $R^{2(g-1)}_{2\pi}$. We need to include this phase on the right side of the equation. However, for charged anyons in a magnetic field, which is the case of the quasiparticles in the FQHE, the braid relation (\[relation\]) should be changed. We also need to include the Aharonov-Bohm phase $\exp (2\pi iq\Phi )$ on the right side of Eq. (\[relation\]) because the charged anyon interacts with the magnetic field, where $q$ is the anyon’s charge and $\Phi$ is the magnetic flux out of the surface. Thus, instead of Eq. (\[relation\]), for charged anyons in a magnetic field we have $$\begin{aligned} \sigma_1 \sigma_2 \cdots \sigma^2_{N-1}\cdots \sigma_2 \sigma_1 & = & \exp (2\pi iq\Phi ) R^{2(g-1)}_{2\pi} \nonumber \\ & \times & \prod^g_n\rho^{-1}_{n,1} \tau^{-1}_{n,1}\rho_{n,1}\tau_{n,1} . \label{relation1}\end{aligned}$$ We will only consider Abelian fractional statistics, which means that the representation of the operator $\sigma_i$ is given by $\sigma_i =\sigma =\exp(i\theta){\bf 1}_M$, where ${\bf 1}_M$ is the $M \times M$ identity matrix. Inserting $\sigma_i =\sigma =\exp(i\theta){\bf 1}_M$ in Eq. (\[brarel:2\]) and Eq. (\[brarel:3\]), one obtains that $\tau_{m,j}=\tau_m , \, \rho_{m,j}=\rho_m ,\, \tau_m \rho_m =\sigma^2\rho_m \tau_m $. These relations and Eq. (\[relation1\]) yield $$\begin{aligned} \exp [2i(N-1)\theta] & = & \exp [2\pi iq\Phi +4\pi i(g-1)S] \nonumber \\ & \times & \exp (-2ig\theta) . \label{brarel3}\end{aligned}$$ If there are several kinds of anyons, we need to introduce mutual statistics[@mutual]. The mutual statistics $\theta_{i,j
--- abstract: 'While internal space-time symmetries of relativistic particles are dictated by the little groups of the Poincaré group, it is possible to construct representations of the little group for massive particles starting from harmonic oscillator wave functions for the quark model. The resulting oscillator wave functions are covariant and can be Lorentz-boosted. It is thus possible to obtain the parton model by boosting the quark model. A review of Wigner’s theory of the little groups is given. It is shown that the covariant oscillator wave functions become squeezed as the system becomes boosted. It is shown also that the Lorentz-squeezed quark distribution exhibits the peculiarities of Feynman’s parton model including the lack of coherence in the calculation of cross sections. A historical review of the concept of covariance is given.' address: 'Department of Physics, University of Maryland, College Park, Maryland 20742' author: - 'Y. S. Kim' title: Covariant Model of Relativistic Extended Particles based on the Oscillator Representation of the Poincaré Group --- Introduction {#intro} ============ The present form of quantum mechanics works well in atomic systems where electrons are bound by the Coulomb force from the nucleus. Quantum mechanics works also for atomic nuclei, where the nucleus is a bound state of nucleons, even though it is difficult to perform exact calculations. In both atomic and nuclear physics, nucleons are regarded as point particles. However, it was found by Hofstadter in 1955 [@hofsta55] that the proton’s charge is not concentrated at a point, and its charge distribution has a non-zero space-time extension. This appears in the elastic scattering of electrons by a proton. The observed scattering cross sections deviate from those predicted by the Rutherford scattering formula based on a proton with a point charge. This deviation is commonly called the form factor. In spite of many laudable efforts to explain this form factor within the framework of quantum field theory, a workable model for the form factors did not emerge until after Gell-Mann’s formulation of the quark model, in which all hadrons are bound states of quarks and/or anti-quarks [@gell64]. The question then is whether we can use the existing models of quantum mechanics, such as the nuclear shell model, to explain hadronic mass spectra [@owg64]. For the mass spectra, one of the most effective models has been and still is the model based on harmonic oscillator wave functions [@owg64; @fkr71]. The basic advantage of the oscillator model is that its mathematics is transparent, and it does not bury physics in mathematics, even though it does not always produce the most accurate numerical results. In the quark model, the charge distribution within the proton comes from the distribution of the charged particles inside the hadron. The success of the oscillator model for static or slow-moving hadrons does not necessarily mean that the model can be extended to the relativistic regime. Indeed, the calculation of the form factor with Gaussian wave functions results in an exponential decrease for large momentum-transfer variables. However, this wrong behavior comes from the use of non-relativistic wave functions for relativistic problems. Indeed, Feynman [*et al.*]{} made an attempt to construct a covariant oscillator model [@fkr71].
Even though they did not achieve their goal in their paper, Feynman [*et al.*]{} quote the work of Fujimura [*et al.*]{} [@fuji70], who calculated the nucleon form factor by taking into account the effect of the Lorentz-squeeze on the oscillator wave functions. After studying these original papers, we can raise our level of abstraction. We observe first that the spherical harmonics can represent the three-dimensional rotation group, while serving as wave functions for the angular variables. Then, we can ask whether there are wave functions which can represent the Poincaré group. We can specifically ask whether it is possible to construct a set of normalizable harmonic oscillator wave functions to represent the Poincaré group. If YES, the wave functions can be Lorentz-boosted. These wave functions then have to go through another set of tests. Are they consistent with the existing laws of quantum mechanics? If YES, they then have to be exposed to the most cruel test in physics. Do they explain what we observe in high-energy laboratories? The purpose of this paper is to show that we can use the oscillator wave functions to answer the question of whether quarks are partons. While the quark model is valid for static hadrons, Feynman’s parton picture works only in the Lorentz frame where the hadronic speed is close to that of light [@fey69]. The quark model appears to be quite different from the parton model. On the other hand, they are valid in two different Lorentz frames. The basic question is whether the quark picture and the parton picture are two different manifestations of the same covariant entity. In this paper, we shall discuss first the internal space-time symmetries of relativistic particles in terms of appropriate representations of the Poincaré group [@wig39]. We then construct the oscillator wave functions satisfying the above-mentioned theoretical criteria. This oscillator formalism will explain both the quark and the parton pictures in two separate Lorentz frames. This formalism produces all the peculiarities of Feynman’s original form of the parton picture including the incoherence of parton cross sections. In Sec. \[littleg\], we present a brief history of applications of the little groups to internal space-time symmetries of relativistic particles. In Sec. \[covham\], we construct representations of the little group using harmonic oscillator wave functions. In Sec. \[parton\], it is shown that the Lorentz-boosted oscillator wave functions exhibit the peculiarities of Feynman’s parton model in the infinite-momentum limit. Much of the concept of the Lorentz-squeezed wave function is derived from elliptic deformations of a sphere, resulting in a mathematical technique called group contraction [@inonu53]. In Appendix \[o3e2\], we discuss the contraction of the three-dimensional rotation group to the two-dimensional Euclidean group. In Appendix \[contrac\], we discuss the little group for a massless particle as the infinite-momentum/zero-mass limit of the little group for a massive particle. In Appendix \[kant\], the author gives his confession about his educational and cultural backgrounds which led to the research program outlined in this paper. Little Groups of the Poincaré Group {#littleg} =================================== The Poincaré group is the group of inhomogeneous Lorentz transformations, namely Lorentz transformations preceded or followed by space-time translations.
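The defining property of a little group — that its generators, conjugated by a boost, annihilate the boosted four-momentum — can be checked with a few lines of linear algebra. The sketch below uses the $(x,y,z,t)$ component convention and the rotation generator $J_{1}$ written out later in this section; the mass and rapidity values are arbitrary illustrative choices:

```python
import numpy as np

# Check: the rotation generator J1 (about the x axis, components ordered
# (x, y, z, t)) annihilates the rest four-momentum (0, 0, 0, m), and its
# boost-conjugate annihilates the boosted four-momentum.
m, eta = 1.0, 0.8                      # mass and rapidity (illustrative)
J1 = np.array([[0, 0, 0, 0],
               [0, 0, -1j, 0],
               [0, 1j, 0, 0],
               [0, 0, 0, 0]])
c, s = np.cosh(eta), np.sinh(eta)
B = np.array([[1, 0, 0, 0],            # boost along z with rapidity eta
              [0, 1, 0, 0],
              [0, 0, c, s],
              [0, 0, s, c]])

p_rest = np.array([0, 0, 0, m])
p_boost = B @ p_rest
J1_boost = B @ J1 @ np.linalg.inv(B)   # conjugation by the boost operator

print(np.allclose(J1 @ p_rest, 0))         # True: rest-frame little group
print(np.allclose(J1_boost @ p_boost, 0))  # True: boosted little group
```

The conjugation leaves the Lie algebra unchanged, which is the statement in the text that the boosted little group is still $O(3)$-like.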
In order to study this group, we have to understand first the group of Lorentz transformations, the group of translations, and how these two groups are combined to form the Poincaré group. The Poincaré group is a semi-direct product of the Lorentz and translation groups. The two Casimir operators of this group correspond to the (mass)$^{2}$ and (spin)$^{2}$ of a given particle. Indeed, the particle mass and its spin magnitude are Lorentz-invariant quantities. The question then is how to construct the representations of the Lorentz group which are relevant to physics. For this purpose, Wigner in 1939 studied the subgroups of the Lorentz group whose transformations leave the four-momentum of a given free particle invariant [@wig39]. The maximal subgroup of the Lorentz group which leaves the four-momentum invariant is called the little group. Since the little group leaves the four-momentum invariant, it governs the internal space-time symmetries of relativistic particles. Wigner shows in his paper that the internal space-time symmetries of massive and massless particles are dictated by the $O(3)$-like and $E(2)$-like little groups respectively. The $O(3)$-like little group is locally isomorphic to the three-dimensional rotation group, which is very familiar to us. For instance, the group $SU(2)$ for the electron spin is an $O(3)$-like little group. The group $E(2)$ is the Euclidean group in a two-dimensional space, consisting of translations and rotations on a flat surface. We perform these transformations on ourselves every day when we move from home to school. The mathematics of these Euclidean transformations is also simple. However, the group of these transformations is not well known to us. In Appendix \[o3e2\], we give a matrix representation of the $E(2)$ group. The group of Lorentz transformations consists of three boosts and three rotations. The rotations therefore constitute a subgroup of the Lorentz group. If a massive particle is at rest, its four-momentum is invariant under rotations. Thus the little group for a massive particle at rest is the three-dimensional rotation group. Then what is affected by the rotation? The answer to this question is very simple. The particle in general has its spin. The spin orientation is going to be affected by the rotation! If the rest-particle is boosted along the $z$ direction, it will pick up a non-zero momentum component. The generators of the $O(3)$ group will then be boosted. The boost will take the form of conjugation by the boost operator. This boost will not change the Lie algebra of the rotation group, and the boosted little group will still leave the boosted four-momentum invariant. We call this the $O(3)$-like little group. If we use the four-vector coordinate $(x, y, z, t)$, the four-momentum vector for the particle at rest is $(0, 0, 0, m)$, and the three-dimensional rotation group leaves this four-momentum invariant. This little group is generated by $$J_{1} = \pmatrix{0&0&0&0\cr0&0&-i&0\cr0&i&0&0\cr0&0&0&0} , \qquad J_{2} = \pmatrix
--- author: - | M. Csanád$^{1}$, M. I. Nagy$^{1}$, S. Lökös$^{1}$\ $^1$Eötvös Loránd University, H-1117 Budapest, Pázmány P. s. 1/a bibliography: - '../../master.bib' title: Exact solutions of relativistic perfect fluid hydrodynamics for a QCD equation of state --- Introduction ============ The interest in relativistic hydrodynamics grew in past years mainly due to the discovery of the almost perfect fluidity of the experimentally created Quark-Gluon-Plasma [@Zajc:2007ey]. Hydrodynamical models aim to describe the space-time picture of heavy-ion collisions and infer the relation between experimental observables and the initial conditions. Besides numerical simulations there is also interest in models where exact solutions of the hydrodynamical equations are used. In this paper we generalize a previously known class of exact solutions of relativistic perfect fluid hydrodynamics [@Csorgo:2003ry] to the case of arbitrary, temperature dependent speed of sound. The mentioned class of solutions form the basis of the relativistic Buda-Lund hydrodynamical model [@Csanad:2003qa]. This model yields a successful description of hadronic observables at RHIC energies (such as the pseudorapidity and transverse momentum dependence of the azimuthal anisotropy of different hadrons as well as the HBT radii [@Csanad:2003qa]), and the reconstructed final state in this model corresponds to simple explicit scaling solutions of hydrodynamics. The same final state however can be achieved from many initial states, depending on the Equation of State [@Csanad:2009sk]. If one is given a temperature dependent speed of sound as Equation of State, the solution presented in this paper thus can be used to determine the initial state from the reconstructed final state of a heavy-ion collision. As an example, we describe the time dependence of the system if one assumes the Equation of State from lattice QCD. The solutions given in this paper are the first exact analytic solutions of 1+3 dimensional relativistic hydrodynamics, to utilize an arbitrary Equation of State.[^1] Basic equations =============== Let us adopt the following notational conventions: the fluid coordinates are $x^\mu = {\left({t, {\mathbf{r}}}\right)}$, where ${\mathbf{r}}={\left({r_x, r_y, r_z}\right)}$ is the spatial coordinate, and the metric tensor is $g_{\mu\nu}=diag{\left({1,-1,-1,-1}\right)}$. (We denote space-time indices by Greek letters, space indices by Latin letters, and assume the summation convention.) The fluid four-velocity is $u^\mu\equiv\gamma{\left({1,{\mathbf{v}}}\right)}$, where ${\mathbf{v}}$ is the three-velocity and $\gamma=1/\sqrt{1-v^2}$. The thermodynamical quantities are denoted as follows: $p$ is the pressure, $\varepsilon$ is the energy density, $\sigma$ is the entropy density, $T$ is the temperature. If the fluid consists of individual conserved particles, or if there is some conserved charge, then the conserved number density is denoted by $n$, and the corresponding chemical potential by $\mu$. (For more than one conserved number densities, we may use indices to distinguish them.) All these quantities have dependence on $x^\mu$, but mostly this will not be explicitly written out. 
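As a consistency check of this notation, the scaling flow field $u^\mu=x^\mu/\tau$ recalled below in Eq. (\[e:usol0\]) can be verified symbolically: it is properly normalized, $u_\mu u^\mu=1$, and non-accelerating, $u^\nu\partial_\nu u^\mu=0$. A short symbolic sketch (ours, using the metric convention above):

```python
import sympy as sp

# Symbolic check of the flow field u^mu = x^mu / tau (see Eq. (e:usol0)):
# with tau = sqrt(t**2 - x**2 - y**2 - z**2), it satisfies u_mu u^mu = 1
# and u^nu d_nu u^mu = 0, for the metric diag(1, -1, -1, -1).
t, x, y, z = sp.symbols('t x y z', positive=True)
X = [t, x, y, z]
eta = [1, -1, -1, -1]                       # metric signature
tau = sp.sqrt(t**2 - x**2 - y**2 - z**2)
u = [c / tau for c in X]                    # contravariant components u^mu

norm = sp.simplify(sum(eta[m] * u[m] * u[m] for m in range(4)))
print(norm)                                 # -> 1

acc = [sp.simplify(sum(u[n] * sp.diff(u[m], X[n]) for n in range(4)))
       for m in range(4)]
print(acc)                                  # -> [0, 0, 0, 0]
```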
The basic hydrodynamical equations are the continuity and energy-momentum-conservation equations: $$\begin{aligned} \partial_\mu{\left({n u^\mu}\right)} & = 0,\label{e:cont}\\ \partial_\nu T^{\mu \nu} & = 0\label{e:em}.\end{aligned}$$ The energy-momentum tensor of a perfect fluid is $$\begin{aligned} T^{\mu\nu} ={\left({\varepsilon+p}\right)}u^\mu u^\nu-pg^{\mu \nu} .\end{aligned}$$ [Eq. (\[e:em\])]{} can be then transformed to (by projecting it orthogonal and parallel to $u^\mu$, respectively): $$\begin{aligned} {\left({\varepsilon+p}\right)}u^{\nu}\partial_{\nu}u^{\mu} & ={\left({g^{\mu\nu}-u^{\mu}u^{\nu}}\right)}\partial_{\nu}p,\label{e:euler} \\ {\left({\varepsilon+p}\right)}\partial_{\nu}u^{\nu}+u^{\nu}\partial_{\nu}\varepsilon & = 0\label{e:energy}.\end{aligned}$$ [Eq. (\[e:euler\])]{} is the relativistic Euler equation, while [Eq. (\[e:energy\])]{} is the relativistic form of the energy conservation equation. In Appendix \[s:app:Teq\] we recall the well-known fact that [Eq. (\[e:energy\])]{} is equivalent to the entropy conservation equation: $$\begin{aligned} \label{e:scont} \partial_\mu{\left({\sigma u^\mu}\right)}=0 .\end{aligned}$$ An analytic hydrodynamical solution is a functional form of $\varepsilon$, $p$, $T$, $u^\mu$ (and, if dealt with, $n$), which solves [Eqs. (\[e:euler\]) and (\[e:energy\])]{}, and, if present, $n$ also solves [Eq. (\[e:cont\])]{}. The quantities $\varepsilon$, $p$, $T$, and also $\sigma$, and $n$ are subject to the Equation of State (EoS), which closes the set of equations. We investigate the following EoS: $$\begin{aligned} \label{e:eos} \varepsilon = \kappa{\left({T}\right)} p ,\end{aligned}$$ and for the case when there is a conserved $n$ number density, the additional assumption is $$\begin{aligned} \label{e:tdef} p=nT. \end{aligned}$$ For the case of $\kappa{\left({T}\right)}=\kappa$ constant, an ellipsoidally symmetric solution of the hydrodynamical equations is presented in Ref. [@Csorgo:2003ry]: $$\begin{aligned} \label{e:usol0} u^\mu = \frac{x^\mu}{\tau} ,\quad \tau=\sqrt{t^2-r^2}=\sqrt{x_\mu x^\mu} ,\end{aligned}$$ $$\begin{aligned} \label{e:tsol0} n = n_0\frac{V_0}{V}\nu{\left({s}\right)},\quad T = T_0{\left({\frac{V_0}{V}}\right)}^{{\frac{1}{\kappa}}}{\frac{1}{\nu{\left({s}\right)}}} ,\end{aligned}$$ with $\nu{\left({s}\right)}$ being an arbitrary function and $$\begin{aligned} \label{e:V0s} s = \frac{r_x^2}{X^2} + \frac{r_y^2}{Y^2} + \frac{r_z^2}{Z^2},\quad V=\tau^3 ,\end{aligned}$$ where $X$, $Y$, and $Z$ are the time ($t$) dependent principal axes of an expanding ellipsoid. They have the explicit time dependence as $$\begin{aligned} \label{e:Z0} X = \dot X_0 t,\quad Y = \dot Y_0 t, \quad Z = \dot Z_0 t\end{aligned}$$ with $\dot X_0$, $\dot Y_0$, $\dot Z_0$ constants. The quantity $s$ has ellipsoidal level surfaces, and obeys $u^\nu\partial_\nu s=0$. We call $s$ a *scaling variable*, and $V$ the effective volume of a characteristic ellipsoid[^2]. This solution is *non-accelerating*, ie. obeys $u^\nu\partial_\nu u^\mu=0$. In the next section we present a generalization of this class of solutions to more general EoS. The new solutions will be presented in Section \[s:sols\], while Section \[s:eoseqs\] details their derivation. General Equation of State {#s:eoseqs} ========================= In order to find more general solutions, where a temperature dependent EoS can be used (as in [Eq. 
(\[e:eos\])]{}), for a given $u^\mu$ velocity field we may *define* the $V$ and $s$ quantities by their properties that $$\begin{aligned} \label{e:V1} u^\mu\partial_\mu V = V \partial_\mu u^\mu,\quad u^\mu\partial_\mu s = 0 .\end{aligned}$$ With these quantities, [Eq. (\[e:cont\])]{} is automatically solved (for the case when there is a conserved charge present) if $$\begin{aligned} \label{e:nsol} n = n_0\frac{V_0}{V}\nu{\left({s}\right)} ,\end{aligned}$$ again, with arbitrary $\nu{\left({s}\right)}$ function. To solve the [(\[e:energy\])]{} energy equation, we must make a distinction between two possible cases. The first
--- author: - 'E. T. Seppälä and M. J. Alava' date: 'Received: December 12, 2000 / Revised version: April 11, 2001' title: 'Energy landscapes, lowest gaps, and susceptibility of elastic manifolds at zero temperature' --- Introduction {#intro} ============ In this paper we study zero-temperature or ground state elastic manifolds that are roughened by bulk disorder, in the presence of an external field. Such objects are relevant in many contexts of condensed matter and statistical physics [@Fis86; @Emig99]. The essential point here is a competition between the elasticity, which prefers flat manifolds, and the disorder, which induces wandering in order to take advantage of the low energy regions in the system. This leads to a glassy, complicated energy landscape and the fact that the quenched randomness dominates thermal effects at low temperatures. In two embedding dimensions (2D) such manifolds are under the name directed polymers (DP) [@KPZ; @HaH95; @Las98] particularly interesting through their connection to the celebrated Kardar-Parisi-Zhang (KPZ) equation of nonlinear surface growth. Directed polymers have an experimental realization as vortex-lines in granular superconductors [@blatter]. In higher dimensions elastic manifolds may be best considered as domain walls (DW) in ferromagnets with quenched impurities. Elastic manifolds have also other connections to charged density waves, the sine-Gordon model with disorder, random substrate problems, and vortex lattices, to name but a few [@blatter; @Cardy82; @Toner90]. Let us start by introducing the classical spin-half random Ising Hamiltonian $${\mathcal H} = - \sum_{\langle ij \rangle} J_{ij} S_i S_j - \sum_i H_i S_i, \label{RHamilton}$$ where $J_{ij}$ is the coupling constant between the nearest-neighbor spins $S_i$ and $S_j$, and $H_i$ is a field assigned to each spin. To this system we apply antiperiodic or domain wall -enforcing boundary conditions in one direction. The spins in the opposite boundaries, let us define in $z$-direction, $z=0$ and $z=L_z$ are forced to be up and down, respectively. In the case of ferromagnetic random bond (RB) Ising systems one has $J_{ij} \geq0$ and $H_i=0$. In the minimum energy state the spins prefer to align on each side of the induced domain wall. When $J_{ij} \lessgtr 0$ the spins become frustrated and the task to find the ground state (GS) structure is most often related to spin glass physics [@review; @droplet; @replica]. On the other hand, when for simplicity $J_{ij} = {\rm const.} = J > 0$, and $H_i \lessgtr 0$ one arrives at random field (RF) Ising systems. The random field Ising model (RFIM) has an experimental realization as a diluted antiferromagnet in a field. In the RFIM the ferromagnetic couplings compete with the random field contribution, which prefers in the ground state to have the spins to be oriented towards the field assigned to them. Here we concentrate mainly on the random bond Ising Hamiltonian, i.e., $J_{ij}>0$ and $H=0$, and in some special cases extend the discussion to RF domain walls as well. In the simplest case the spins are located in a square lattice, in $d=2$, or a cube in $d=3$, so that the lattice orientation is in the {10} and {100} directions, respectively. The elastic manifold is the interface, with the dimension $D=d-1$, which divides the system in two parts of up and down spins. 
At $T=0$ the problem of finding the ground state domain wall, which minimizes the energy of the path consisting of unsatisfied bonds between the spins on opposite sides of the domain wall, becomes one of “global optimization”. In our case the displacement field is one-dimensional, $n=1$, but one can certainly think about generalizations so that the total dimension of a system is $d=(D+n)$, where $n \geq 1$ is the dimension in which the manifold is able to fluctuate. The continuum Hamiltonian of elastic manifolds with an external field and $n=1$ may be written as $${\mathcal H} = \int \left[ \frac{\Gamma}{2} \{ \nabla z({\bf x}) \}^2 + V_r \{ {\bf x},z({\bf x})\} + h\{z({\bf x})\} \right] \, {\rm d}^D{\bf x}. \label{H}$$ The elastic energy is proportional to the area of the interface given by the first term, and $\Gamma$ is the surface stiffness of the interface. The second term of the integrand comes from the random potential, and the last term accounts for the potential caused by the external field. The use of random bond disorder means that the random potential is delta-point correlated, i.e., $\langle V_r ({\bf x},z)V_r ({\bf x'},z') \rangle =2 {\mathcal D} \delta ({\bf x}- {\bf x'})\delta (z-z')$. In the random field case $\langle V_r ({\bf x},z)V_r ({\bf x'},z') \rangle \sim \delta ({\bf x}- {\bf x'}) (z-z')$. The Hamiltonian (\[H\]) is also applicable to wetting in a three-phase system, where two of the phases are separated by an interface in a random bulk [@Lip86; @Wuttke91; @Dole91]. In that case $h(z)$ is equivalent to the chemical potential, which tries to bind the interface to the wall, and competes with the random potential, in the presence of which the interface tends to wander in the low energy regions of the system. Below the upper critical dimension the geometric behavior of elastic manifolds is characterized by the spatial fluctuations. For the mean-square fluctuations one has $$w^2 = \left \langle \left[z({\bf x}) - \overline{z({\bf x})} \right ]^2 \right \rangle \sim L^{2 \zeta}, \label{w}$$ where $z({\bf x})$ is the height of the interface with the mean $\overline{z}$, ${\bf x}$ is the $D=d-n$ dimensional internal coordinate of the manifold, $L$ is the linear size of the system, and $\zeta$ is the corresponding roughness exponent. At low temperatures in $(D+n)=(1+1)$ dimensions with RB disorder, i.e., when one actually considers a directed polymer in a random medium [@KPZ; @HaH95], the roughness exponent is calculated exactly via the KPZ formalism to be $\zeta=2/3$. In higher manifold dimensions $D$ with $n=1$ the functional renormalization group (FRG) calculations give the approximate values $\zeta \simeq 0.208(4-D)$ for RB disorder and $\zeta = (4-D)/3$ for RF disorder [@Fis86; @new]. The expression for $\zeta$ also tells us that the upper critical dimension for the elastic manifold is $D_u=4$. For manifolds with varying $n$ and $D$, Balents and Fisher have derived, using FRG, $\zeta \simeq [(4-D)/(n+4)]\{1+(1/4e)2^{-[(n+2)/2]}[(n+2)^2/(n+4)] [1-\ldots]\}$ [@Balents]. At zero temperature the total average energy $\overline{E}$ of an elastic manifold equals its free energy and grows linearly with the system size $L^D$, and its fluctuations scale for all $n$ as $\Delta E = \left \langle ( E- \overline{E} )^2 \right \rangle^{1/2} \sim L^\theta$, where [@HuHe] $$\theta = 2 \zeta +D -2 \label{HuHe}$$ is the first non-analytic correction to the energy. The same hyperscaling law holds for RF manifolds, too, in $D>1$ [@Seppala98].
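As a concrete illustration of this zero-temperature global optimization (our own minimal sketch, not the algorithm used in this work), the $(1+1)$-dimensional directed polymer with on-site disorder can be solved exactly in polynomial time by a transfer-matrix recursion. The lattice, disorder distribution and sizes below are arbitrary choices; the printed sample-to-sample fluctuations should grow roughly as $L^{\theta}$ with $\theta=1/3$ from Eq. (\[HuHe\]).

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_ground_state(L, W):
    """Ground-state energy of a (1+1)D directed polymer by transfer matrix:
    E[t, x] = V[t, x] + min over the three allowed predecessors at step t-1."""
    V = rng.normal(size=(L, W))                      # delta-correlated site energies
    E = V[0].copy()
    for t in range(1, L):
        left, right = np.roll(E, 1), np.roll(E, -1)  # periodic transverse b.c.
        E = V[t] + np.minimum(E, np.minimum(left, right))
    return E.min()

# Sample-to-sample fluctuations of the ground-state energy: Delta E ~ L^(1/3).
for L in (16, 32, 64):
    energies = [dp_ground_state(L, 4 * L) for _ in range(200)]
    print(L, np.std(energies))
```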
Having a positive $\theta$ implies that the temperature is an irrelevant variable in the renormalization group (RG) sense and the $T=0$ fixed point dominates. For $D=1$ and $n > 2$ there exists a finite $T_c$, and the $T=0$ fixed point dominates only below $T_c$; for $n\leq 2$ it dominates at all temperatures [@Fish91]. In the randomness-dominated pinned phase the temperature is “dangerously” irrelevant, which means that in RG calculations the interesting correlation functions cannot be obtained by setting $T$ to zero. Above $T_c$ the fluctuations become random-walk-like with $\zeta=1/2$ and $\theta =0$. For $D=1$ with $n>1$ no exact result exists for the roughness exponent below $T_c$, and hence it is still an open question whether $\zeta \to 1/2$ and $\theta \to 0$ at a finite $n_c$, that is, what the upper critical dimension of the KPZ equation is. Elastic manifolds self-average in the sense that the intensive fluctuations of the roughness and
{ "pile_set_name": "ArXiv" }
null
--- abstract: 'In the present contribution, we investigate first-order nonlinear systems of partial differential equations which consist of two parts: a system of conservation laws and non-conservative first-order terms. Whereas the theory of first-order systems of conservation laws is well established, and the conditions for the existence of supplementary conservation laws, and more specifically of an entropy supplementary conservation law for smooth solutions, are well known, there exists so far no general extension to obtain such supplementary conservation laws when non-conservative terms are present. We propose a framework in order to extend the existing theory and show that the presence of non-conservative terms somewhat complicates the problem, since numerous combinations of the conservative and non-conservative terms can lead to a supplementary conservation law. We then identify a restricted framework in order to design and analyze physical models of complex fluid flows by means of computer algebra, and thus obtain the entire ensemble of possible combinations of conservative and non-conservative terms with the objective of obtaining specifically an entropy supplementary conservation law. The theory as well as the developed computer algebra tool are then applied to a Baer-Nunziato two-phase flow model and to a multicomponent plasma fluid model. The first one is a first-order fluid model with non-conservative terms impacting the linearly degenerate field; it requires a closure, since there is no way to derive interfacial quantities from averaging principles, and we need guidance in order to close the pressure and velocity of the interface and the thermodynamics of the mixture. The second one involves first-order terms for the heavy species coupled to second-order terms for the electrons; the non-conservative terms impact the genuinely nonlinear fields, and the model can be rigorously derived from kinetic theory. We show how the theory allows us to recover the whole spectrum of closures obtained so far in the literature for the two-phase flow system, as well as conditions when one aims at extending the thermodynamics; it also applies to the plasma case, where we recover the usual entropy supplementary equation, thus assessing the effectiveness and scope of the proposed theory.' author: - 'Pierre CORDESSE[^1]' - 'Marc MASSOT[^2]' bibliography: - '../biblatex/jabref\_bdd.bib' title: 'Entropy supplementary conservation law for non-linear systems of PDEs with non-conservative terms: application to the modelling and analysis of complex fluid flows using computer algebra[^3]' --- Nonlinear PDEs with non-conservative terms, supplementary conservation law, entropy, computer algebra, two-phase flow, Baer-Nunziato model, multicomponent plasma fluid model 35L60; 68W30; 76N15; 76T10; 82D10 Introduction ============ First-order nonlinear systems of partial differential equations, and more specifically systems of conservation laws, have been the subject of a vast literature since the second half of the twentieth century because they are ubiquitous in the mathematical modelling of fluid flows and are used extensively for numerical simulation in applications and industrial contexts [[@Bissuel_2018; @Gaillard_2016]]{}.
Such systems of equations can either be rigorously derived from the kinetic theory of gases through various expansion techniques [[@Ferziger_1972; @Woods_1975]]{}, or can be derived using rational thermodynamics and fluid mechanics including the stationary action principle (SAP) [[@Serrin_1959; @Landau_1976; @Truesdell_1969]]{}. As far as the Euler or Navier-Stokes equations are concerned for a gaseous flow field, the outcomes of both approaches are similar and the mathematical properties of these systems have been thoroughly investigated for the past decades. An interesting related problem is the quest for supplementary conservation laws. Noether’s theorem [[@Olver_1986]]{} leads, within the framework of SAP, to the derivation of supplementary conservation laws based on symmetry transformations of the variational problem under investigation[^4]. Examples of such derivations on two-phase flow modelling can be found in [[@Gavrilyuk_Saurel_2002; @Drui_JFM_2019]]{}. However, to the authors’ knowledge, no symmetry transformation has been identified yielding a conservation law for the entropy of the system. In fact, SAP does not allow one to reach a closed system of equations, and one has to provide a closure for the entropy (see [[@Gouin_2009]]{} for example). A specific type of supplementary conservation equation for smooth solutions is especially important, namely the *entropy equation*, derived through the theory developed in [[@Godunov_1961; @Friedrichs_1971]]{} for systems of conservation laws. Such systems of PDEs are hyperbolic at any point where a locally convex entropy function exists [[@Mock_1980]]{}, and when they are equipped with a strictly convex entropy, they can be symmetrized [[@Friedrichs_1971]]{} [[@Harten_Hyman_1983]]{} and thus are hyperbolic. These properties have been at the heart of the mathematical theory of existence and uniqueness of smooth solutions [[@Kawashima_1988]]{} [[@Giovangigli_1998]]{}, but they are also a cornerstone for the study of weak solutions, for which the work of [[@Kruzkov_1970]]{} proves the well-posedness of the Cauchy problem for one-dimensional systems. Nonetheless, for a number of applications, where reduced-order fluid models have to be used for tractable mathematical modelling and numerical simulations, be it in the industry or in other disciplines, micro-macro kinetic-theory-like approaches as well as rational thermodynamics and SAP approaches often lead to systems of conservation laws involving *non-conservative terms*. Among the large spectrum of applications, we focus on two types of models, which exemplify the two approaches: 1- two-phase flow models, which rely on a hierarchy of diffuse interface models among which stands the Baer-Nunziato [[@Baer_Nunziato_1986]]{} model, used when full disequilibrium of the phases must be taken into account. Since this model is derived through rational thermodynamics, the macroscopic set of equations cannot be derived from the small-scale physics of interface dynamics and thus requires closures for the interfacial pressure and velocity; 2- multicomponent fluid modelling of plasma flows out of thermal equilibrium, where the equations can be derived rigorously from kinetic theory using a multi-scale Chapman-Enskog expansion mixing a hyperbolic scaling for the heavy species and a parabolic scaling for the electrons [[@Graille_2007]]{}. Concerning the thermodynamics, whereas for the first model it has to be postulated and requires assumptions, it can be obtained from kinetic theory in the second model.
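To fix ideas, the classical compatibility condition recalled above can be checked mechanically with computer algebra. The sketch below is our own minimal illustration on the purely conservative one-dimensional isentropic Euler system, not on the Baer-Nunziato or plasma models of the paper; the pressure law and the value of $\gamma$ are example choices.

```python
import sympy as sp

rho, m, kappa = sp.symbols('rho m kappa', positive=True)
gamma = sp.Rational(7, 5)                     # example adiabatic exponent, gamma != 1

p = kappa * rho**gamma                        # barotropic pressure law
e = kappa * rho**(gamma - 1) / (gamma - 1)    # specific internal energy: e'(rho) = p/rho**2

U = sp.Matrix([rho, m])                       # conservative variables (density, momentum)
F = sp.Matrix([m, m**2 / rho + p])            # flux of the 1D isentropic Euler system

eta = m**2 / (2 * rho) + rho * e              # strictly convex entropy (total energy)
q = (eta + p) * m / rho                       # associated entropy flux

grad_eta = sp.Matrix([[sp.diff(eta, v) for v in U]])
grad_q = sp.Matrix([[sp.diff(q, v) for v in U]])
A = F.jacobian(U)

# Compatibility condition for a supplementary conservation law on smooth solutions:
# grad(eta) . dF/dU = grad(q), which implies d_t eta + d_x q = 0 along the system.
print((grad_eta * A - grad_q).applyfunc(sp.simplify))   # -> Matrix([[0, 0]])
```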
In both cases, the models involve non-conservative terms, but these terms do not act on the same fields; a linearly degenerate field is impacted for the two-phase flow model, whereas they act on the genuinely nonlinear fields in the second [[@Wargnier_2018]]{}. Whereas hyperbolicity depends on the closure and is not guaranteed for the first class of models [[@Gallouet_2004]]{}, the second is naturally hyperbolic [[@Graille_2007]]{} and also involves second-order terms and eventually source terms [[@Magin_2009]]{}. Thus, the presence of *non-conservative terms* encompasses several situations and requires a general theoretical framework. While Noether’s theorem can still be applied to obtain some supplementary conservation laws, it does not permit one to exhibit all of them, and especially not an entropy supplementary conservation law. A unifying theory extending the standard approach for systems of conservation laws (entropy supplementary conservation law, entropic symmetrization, Godunov-Mock theorem, hyperbolicity) is still missing for such systems, even if some key advances exist. The system has been shown by [[@Coquel_2013]]{} to be symmetrizable – not in the sense of Godunov-Mock – far from the resonance condition for which hyperbolicity degenerates. In [[@Forestier_2011]]{}, the model is proved to be partially symmetrizable in the sense of Godunov-Mock. The present paper first proposes an extension of the theory of supplementary conservation laws for systems of conservation laws to first-order nonlinear systems of partial differential equations which consist of two parts: a system of conservation laws and *non-conservative first order terms*. We emphasize how the presence of non-conservative terms somewhat complicates the problem, since numerous combinations of the conservative and non-conservative terms can lead to supplementary conservation laws. We then identify a restricted framework in order to design and analyze physical models of complex fluid flows by means of computer algebra and thus obtain the entire ensemble of possible combinations of conservative and non-conservative terms to obtain an entropy supplementary conservation law. The proposed theoretical approach is then applied to the two systems identified so far for their diversity of behaviour. Even if the whole theory is valid for any supplementary conservation law, we focus on obtaining an *entropy* supplementary conservation law. For the two-phase flow model, assuming a thermodynamics of non-miscible phases, we derive conditions to obtain an entropy supplementary conservation equation together with a compatible thermodynamics and closures for the non-conservative terms. Interestingly enough, all the closures proposed so far in the literature are recovered [[@Baer_Nunziato_1986; @Kapila_1997; @Bdzil_1999; @Lochon_PhdThesis_2016; @Saurel_Gavrilyuk_2003]]{}. The strength of the formalism also lies in the capacity to derive such conditions for some level of mixing of the phases. By introducing a mixing term in the definition of the entropy, the new theory brings out constraints on the form of the added mixing term. We recover not only the closure proposed to account for a configuration energy as in the context of deflagration-to-detonation [[@Baer_N
{ "pile_set_name": "ArXiv" }
null
--- abstract: 'Let $G$ be a finite group, and $\a$ a nontrivial character of $G$. The McKay graph $\MC(G,\a)$ has the irreducible characters of $G$ as vertices, with an edge from $\c_1$ to $\c_2$ if $\c_2$ is a constituent of $\a\c_1$. We study the diameters of McKay graphs for finite simple groups $G$. For alternating groups, we prove a conjecture made in [@LST]: there is an absolute constant $C$ such that $\hbox{diam}\,{\mathcal M}(G,\a) \le C\frac{\log |\AAA_n|}{\log \a(1)}$ for all nontrivial irreducible characters $\a$ of $\AAA_n$. Also for classical groups of symplectic or orthogonal type of rank $r$, we establish a linear upper bound $Cr$ on the diameters of all nontrivial McKay graphs.' address: - 'M.W. Liebeck, Department of Mathematics, Imperial College, London SW7 2BZ, UK' - 'A. Shalev, Institute of Mathematics, Hebrew University, Jerusalem 91904, Israel' - 'P.H. Tiep, Department of Mathematics, Rutgers University, Piscataway, NJ 08854, USA' author: - 'Martin W. Liebeck' - Aner Shalev - Pham Huu Tiep title: McKay graphs for alternating and classical groups --- [^1] Introduction ============ For a finite group $G$, and a (complex) character $\a$ of $G$, the [*McKay graph*]{} $\MC(G,\a)$ is defined to be the directed graph with vertex set ${\rm Irr}(G)$, there being an edge from $\c_1$ to $\c_2$ if and only if $\c_2$ is a constituent of $\a\c_1$. A classical result of Burnside and Brauer [@Br] shows that $\MC(G,\a)$ is connected if and only if $\a$ is faithful. The study of McKay graphs for finite simple groups $G$ was initiated in [@LST], with a particular focus on the diameters of these graphs. Theorem 2 of [@LST] establishes a quadratic upper bound $\hbox{diam}\,{\mathcal M}(G,\a) \le Cr^2$ for any simple group $G$ of Lie type of rank $r$ and any nontrivial $\a \in {\rm Irr}(G)$. Notice that the smallest (resp. largest) nontrivial irreducible character degrees of $G$ are at most $q^{cr}$ (resp. at least $q^{c'r^2}$), where $c,c'$ are constants, and hence the maximal diameter of a McKay graph ${\mathcal M}(G,\a)$ is at least a linear function of $r$. Theorem 3 of [@LST] implies a linear upper bound on these diameters for the classical groups $G=\PSL_n^\e(q)$, provided $q$ is large compared to $n$. Our first main result establishes a linear upper bound for the remaining classical groups. \[main1\] Let $G$ be a quasisimple classical group $Sp_n(q)$ or $\O_n^\e(q)$, and let $\a$ be a nontrivial irreducible character of $G$. Then $\hbox{diam}\,{\mathcal M}(G,\a) \le Cn$, where $C=16$ or $32$, respectively. An obvious lower bound for $\hbox{diam}\,{\mathcal M}(G,\a)$ (when $\a(1)>1$) is given by $\frac{\log \bmax(G)}{\log \a(1)}$, where $\bmax(G)$ is the largest degree of an irreducible character of $G$. In [@LST Conjecture 1] we conjectured that for simple groups $G$, this bound is tight up to a multiplicative constant. This conjecture was proved in [@LST Theorem 3] for the simple groups $\PSL_n^\e(q)$, provided $q$ is large compared to $n$. Recently it has also been established for the symmetric groups in [@S]. Deducing it for the alternating groups is not entirely trivial, and this is the content of our next result. \[main2\] There is an effective absolute constant $C$ such that, for all $n \geq 5$ and for all nontrivial irreducible characters $\a$ of $G:=\AAA_n$, $$\hbox{diam}\,{\mathcal M}(G,\a) \le C\frac{\log |G|}{\log \a(1)}.$$ In our final result, we consider covering ${\rm Irr}(G)$ by products of arbitrary irreducible characters, instead of powers of a fixed character.
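As a concrete illustration of these definitions (a toy computation of ours, not part of the results), the McKay graph of a very small group can be built directly from its character table and its diameter computed by breadth-first search. The sketch below does this for $\AAA_5$ with $\a$ the irreducible character of degree $4$; the character table values are the standard ones.

```python
import numpy as np

# Character table of A5; rows are irreducibles, columns are the classes
# 1, (12)(34), (123), (12345), (13524) with sizes 1, 15, 20, 12, 12.
phi = (1 + 5**0.5) / 2
table = np.array([
    [1,  1,  1,       1,       1],     # trivial
    [3, -1,  0,     phi, 1 - phi],     # degree 3
    [3, -1,  0, 1 - phi,     phi],     # degree 3'
    [4,  0,  1,      -1,      -1],     # degree 4
    [5,  1, -1,       0,       0],     # degree 5
])
sizes = np.array([1, 15, 20, 12, 12])

def mult(chi, psi):
    """Multiplicity <chi, psi> = (1/|G|) sum over classes (A5 characters are real)."""
    return round(float(np.sum(sizes * chi * psi)) / 60)

alpha = table[3]   # the degree-4 character
adj = np.array([[mult(alpha * table[i], table[j]) > 0 for j in range(5)]
                for i in range(5)])

def diameter(adj):
    """Directed diameter via BFS from every vertex (graph is connected: alpha is faithful)."""
    n, worst = len(adj), 0
    for s in range(n):
        dist = [None] * n
        dist[s], queue = 0, [s]
        while queue:
            u = queue.pop(0)
            for v in range(n):
                if adj[u][v] and dist[v] is None:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        worst = max(worst, max(dist))
    return worst

print(diameter(adj))   # -> 2 for this choice of alpha
```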
This idea was suggested by Gill [@G], inspired by an analogous result of Rodgers and Saxl [@RS] for conjugacy classes in $G=\SL_n(q)$: this states that if a collection of conjugacy classes of $G$ satisfies the condition that the product of the class sizes is at least $|G|^{12}$, then the product of the classes is equal to $G$. As a piece of notation, for characters $\c_1,\ldots,\c_l$ of $G$, we write $\c_1\c_2\cdots \c_l \supseteq \Irr(G)$ to mean that every irreducible character of $G$ appears as a constituent of $\c_1\c_2\cdots \c_l$. Also, let $g: \N\to \N$ be the function appearing in [@LST Theorem 3]. \[rodsax\] - Let $G$ be a simple group of Lie type of rank $r$, let $l \ge 489r^2$, and let $\c_1,\ldots,\c_l \in \Irr(G) \setminus 1_G$. Then $\c_1\c_2\cdots \c_l \supseteq \Irr(G)$. - Let $G = \PSL_n^\e(q)$ with $q>g(n)$, let $l \in \N$, and let $\c_1,\ldots,\c_l \in \Irr(G)$ satisfy $\prod_1^l \c_i(1) > |G|^{10}$. Then $\c_1\c_2\cdots \c_l \supseteq \Irr(G)$. Gill [@G] has conjectured that part (ii) of the theorem holds for all simple groups (with the constant 10 possibly replaced by a different constant). As a stepping stone in the spirit of the linear bound given by Theorem \[main1\], let us pose the following more modest conjecture. \[rsax\] There is an absolute constant $C>0$ such that the following holds. Let $G=\Cl_n(q)$, a classical simple group of dimension $n$, or $\AAA_n$, an alternating group of degree $n\ge 5$. Let $l \ge Cn$, and let $\c_1,\ldots,\c_l \in \Irr(G) \setminus 1_G$. Then $\c_1\c_2\cdots \c_l \supseteq \Irr(G)$. See Proposition \[rs2-an\] for a partial result on Conjecture \[rsax\] in the case of $\AAA_n$. The layout of the paper is as follows. Section \[prel1\] contains a substantial amount of character theory for symplectic and orthogonal groups that is required for the proof of Theorem \[main1\], which is completed in Section \[pfth1\]. The remaining sections \[pfth2\] and \[pfth3\] contain the proofs of Theorems \[main2\] and \[rodsax\], respectively. Some character theory for symplectic and orthogonal groups {#prel1} ========================================================== Let $V = \F_q^d$ be endowed with a non-degenerate alternating form, or a quadratic form of type $\e = \pm$, and let $G$ denote the derived subgroup of the full isometry group of the form. Assume that $G$ is quasisimple, so that $G = \Sp(V) = \Sp_d(q)$ or $\O(V) = \O^\e_d(q)$. This section contains a detailed study of some specific irreducible characters $\c$ of $G$ – namely, the constituents of the permutation character $\Ind^G_{[P,P]}(1_{[P,P]})$, where $P$ is the maximal parabolic subgroup of $G$ stabilizing a singular 1-space. Two of the main results of the section are Propositions \[rat-so21\] and \[rat-sp-so22\], which give upper bounds for the character ratios $|\c(g)/\c(1)|$ for $g\in G$. These will be used in Section \[pfth1\] to prove Theorem \[main1\]. Reduction lemmas {#red} ---------------- It is well known that the permutation action of $G$ on the set of singular $1$-spaces of $V$ is primitive of rank $3$, and thus its character is $\rho = 1_G + \a + \b$, with $\a, \b \in \Irr(G)$. Let (the parabolic subgroup) $P=QL$ denote a point stabilizer in this action, with
{ "pile_set_name": "ArXiv" }
null
--- abstract: 'We extend the Higgs triplet model so as to include dark matter candidates and a simple suppression mechanism for the vacuum expectation value ($v_\Delta$) of the triplet scalar field. The smallness of neutrino masses can be naturally explained with the suppressed value of $v_\Delta$ even when the triplet fields are at the TeV scale. The Higgs sector is extended by introducing $Z_2$-odd scalars (an ${{\text{SU}}}(2)_L$ doublet $\eta$ and a real singlet $s_2^0$) in addition to a $Z_2$-even complex singlet scalar $s_1^0$ whose vacuum expectation value violates lepton number conservation by one unit. In our model, $v_\Delta$ is generated by the one-loop diagram to which $Z_2$-odd particles contribute. The lightest $Z_2$-odd scalar boson can be a candidate for the dark matter. We briefly discuss a characteristic signal of our model at the LHC.' author: - Shinya Kanemura - Hiroaki Sugiyama title: | Dark matter and a suppression mechanism for neutrino masses\ in the Higgs triplet model --- Introduction {#sec:intro} ============ The existence of dark matter (DM) has been established, and its thermal relic abundance has been determined by the WMAP experiment [@WMAP; @Komatsu:2008hk]. If the essence of DM is an elementary particle, a weakly interacting massive particle (WIMP) would be a promising candidate. It is desirable to have a viable candidate for the dark matter in models beyond the standard model (SM). A WIMP dark matter candidate can be accommodated economically by introducing only an inert scalar field [@Silveira:1985rk; @Deshpande:1977rw; @i-nplet], where we use “inert” for the $Z_2$-odd property. The imposed $Z_2$ parity ensures the stability of the DM candidate. Phenomenology in such models has been studied in, e.g., Refs. [@c-i-singlet; @r-i-singlet; @i-doublet; @Lundstrom:2008ai; @Araki:2011hm; @THDM-iSDM; @HTM-iSDM]. On the other hand, it has been confirmed by neutrino oscillation measurements that neutrinos have nonzero but tiny masses as compared to the electroweak scale [@solar; @atm; @acc; @reactor-S; @reactor-L]. The flavor structure of neutrinos, which differs from that of quarks and charged leptons, may indicate that neutrino masses are of Majorana type. In order to explain tiny neutrino masses, many models have been proposed. The seesaw mechanism is the simplest way to explain tiny neutrino masses, in which right-handed neutrinos are introduced with large Majorana masses [@seesaw; @Mohapatra:1979ia]. Another simple model for generating neutrino masses is the Higgs Triplet Model (HTM) [@Mohapatra:1979ia; @HTM]. However, these scenarios do not contain a dark matter candidate in themselves. In a class of models where tiny neutrino masses are generated by higher orders of perturbation, a DM candidate can naturally be contained [@KNT; @Ma; @AKX; @Aoki:2011yk; @Kanemura:2010bq; @Gu:2007ug]. In models in Refs. [@KNT; @Ma; @AKX; @Aoki:2011yk; @Kanemura:2010bq], the Yukawa couplings of neutrinos with the SM Higgs boson are forbidden at the tree level by imposing a $Z_2$ parity. The same $Z_2$ parity also guarantees the stability of the lightest $Z_2$-odd particle in the model, which can be the candidate for the DM as long as it is electrically neutral. In this paper, we consider an extension of the HTM in which, by introducing the $Z_2$ parity, $m_\nu$ is generated at the one-loop level and a DM candidate appears.
In the HTM, Majorana masses for neutrinos are generated via the Yukawa interaction $h_{\ell{{\ell^\prime}}} \overline{L_\ell^c}\, i\sigma_2 \Delta L_{{\ell^\prime}}$ with a nonzero vacuum expectation value (VEV) of an ${{\text{SU}}}(2)_L$ triplet scalar field $\Delta$ with hypercharge $Y=1$. The VEV of $\Delta$ is described by $v_\Delta \sim \sqrt{2} \mu v^2/(2 M_\Delta^2)$, where $v$ is the VEV of the Higgs doublet field $\Phi$ and $M_\Delta$ is the typical mass scale of the triplet field; the dimensionful parameter $\mu$ breaks lepton number conservation via the trilinear term $\mu\,\Phi^T i\sigma_2 \Delta^\dagger \Phi$, which we refer to as the $\mu$-term. As the simplest explanation for the smallness of neutrino masses, the mass of the triplet field is assumed to be much larger than the electroweak scale. On the other hand, a characteristic feature of the HTM is the fact that the structure of the neutrino mass matrix $(m_\nu)_{\ell{{\ell^\prime}}}$ is given by that of the Yukawa matrix, $h_{\ell{{\ell^\prime}}} \propto (m_\nu)_{\ell{{\ell^\prime}}}$. Direct information on $(m_\nu)_{\ell{{\ell^\prime}}}$ would be extracted from the decay $H^{\pm\pm} \to \ell^\pm \ell^{\prime\pm}$ [@nu-LHC] if $H^{++}$ is light enough to be produced at collider experiments, where $H^{++}$ is the doubly charged component of the triplet field $\Delta$. At hadron colliders, the $H^{\pm\pm}$ can be produced via ${{q\overline{q} \to Z^\ast (\gamma^\ast) \to H^{++}H^{--}}}$ [@HppHmm] and ${{q^\prime\overline{q} \to W^{\pm\ast} \to H^{\pm\pm}H^\mp}}$ [@HppHm]. The $H^{\pm\pm}$ searches at the LHC put a lower bound on its mass as $m_{H^{\pm\pm}}^{}\gtrsim 300\,{{\text{GeV}}}$ [@Hpp-CMS; @Hpp-ATLAS], assuming that the main decay mode is $H^{\pm\pm} \to \ell^\pm \ell^{\prime\pm}$. Phenomenological analyses for $H^{\pm\pm}$ in the HTM at the LHC have also been performed in Ref. [@Hpp]. Triplet scalars can contribute to lepton flavor violation (LFV) in decays of charged leptons, e.g., $\mu \to \bar{e}ee$ and $\tau \to \bar{\ell}{{\ell^\prime}}\ell^{{{\prime\prime}}}$ at the tree level and $\ell \to {{\ell^\prime}}\gamma$ at the one-loop level. The relation between these LFV decays and the neutrino mass matrix constrained by oscillation data was discussed in Refs. [@Chun:2003ej; @LFV-HTM]. In order to explain the small $v_\Delta$ with such a detectably light $H^{++}$, the $\mu$ parameter has to be taken unnaturally far below the electroweak scale. Therefore, it would be interesting to extend the HTM in order to include a natural suppression mechanism for the $\mu$ parameter (and therefore for $v_\Delta$) in addition to the DM candidate. In our model, lepton number conservation is imposed on the Lagrangian in order to forbid the $\mu$-term of the HTM at the tree level, while the triplet Yukawa term $h_{\ell{{\ell^\prime}}}^{} \overline{L_\ell^c}\, i\sigma_2 \Delta L_{{\ell^\prime}}$ exists. The VEV of a $Z_2$-even complex singlet scalar $s_1^0$ breaks lepton number conservation by one unit. An ${{\text{SU}}}(2)_L$ doublet $\eta$ and a real singlet $s_2^0$ are also introduced as $Z_2$-odd scalars in order to accommodate the DM candidate. Then, the $\mu$-term is generated at the one-loop level by the diagram in which the $Z_2$-odd scalars are in the loop. By this mechanism, the smallness of $v_\Delta \ll v$ is realized, and the tiny neutrino masses are naturally explained without assuming the triplet fields to be heavy.
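The size of the required suppression can be made explicit with a back-of-envelope estimate (our own illustration, not from the paper), using the standard HTM relation $m_\nu \simeq \sqrt{2}\, h\, v_\Delta$ together with the expression for $v_\Delta$ quoted above; $h=0.1$ and $M_\Delta = 1$ TeV are example values.

```python
# How small must the mu-term be in the plain HTM for a TeV-scale triplet?
v = 246.0          # GeV, doublet VEV
M_Delta = 1.0e3    # GeV, triplet mass scale accessible at colliders (assumption)
h = 0.1            # representative triplet Yukawa coupling (assumption)
m_nu = 0.1e-9      # GeV, i.e. a 0.1 eV neutrino mass

v_Delta = m_nu / (2**0.5 * h)                    # required triplet VEV (~7e-10 GeV)
mu = 2 * M_Delta**2 * v_Delta / (2**0.5 * v**2)  # invert v_Delta ~ sqrt(2) mu v^2/(2 M^2)

print(f"v_Delta ~ {v_Delta:.1e} GeV, mu ~ {mu:.1e} GeV")
# -> mu of order 1e-8 GeV (tens of eV), dramatically below the electroweak scale,
#    which is the naturalness problem the one-loop mechanism is designed to cure.
```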
The Yukawa sector is then the same as the one in the HTM, so that its predictions for the LFV processes are not changed. See Refs. [@Babu:2001ex; @Chun:2003ej] for some discussions about two-loop realizations of the $\mu$-term[^1]. This paper is organized as follows. In Sec. \[sec:HTM\], we give a quick review of the HTM to define notation. In Sec. \[sec:1-loop\], the model for radiatively generating the $\mu$ parameter with the dark matter candidate is presented. Some phenomenological implications are discussed in Sec. \[sec:pheno\], and the conclusion is given in Sec. \[sec:concl\]. The full expressions of the Higgs potential and the mass formulae for the scalar bosons in our model are given in the Appendix. Higgs Triplet Model {#sec
{ "pile_set_name": "ArXiv" }
null
--- abstract: 'We demonstrate rotational and vibrational cooling of cesium dimers by optical pumping techniques. We use two laser sources exciting all the populated rovibrational states, except a target state that thus behaves like a dark state where molecules pile up thanks to absorption-spontaneous emission cycles. We are able to accumulate photoassociated cold Cs$_{2}$ molecules in their absolute ground state ($v=0,J=0$) with up to $40\%$ efficiency. Given its simplicity, the method could be extended to other molecules and molecular beams. It also opens up general perspectives in laser cooling the external degrees of freedom of molecules.' author: - 'I. Manai' - 'R. Horchani' - 'A. Fioretti' - 'M. Allegrini' - 'H. Lignier' - 'P. Pillet' - 'D. Comparat' title: Rovibrational cooling of molecules by optical pumping --- It is commonly accepted that optical manipulation of molecules comes up against strong limitations. The difficulty originates from the large number of internal states accessible by spontaneous emission events [@2008_PRL_Ye_direct_cooling]. However, a few recent experiments succeeded in implementing molecular optical pumping: for example, the vibrational cooling of Cs$_2$ [@MatthieuViteau07112008], the optical cooling of SrF [@shuman2010laser], or the rotational cooling of molecular ions [@2010NatPhDrewsen_rotational_cooling]. These methods, like the one we present in this article, are fundamentally different from coherent optical manipulations, such as STIRAP, that affect the population of a single quantum state [@JohannDanzl07102008; @2010JinPRL_HyperfinePreparation]. Optical cooling of both the vibration and the rotation of Cs$_2$ molecules is complicated because these two degrees of freedom cannot be manipulated independently. The pumping of one of them tends to impair the other one, i.e. the vibrational pumping is likely to modify the rotational quantum number, just as the rotational pumping is likely to modify the vibrational quantum number. As a consequence, a global rovibrational cooling can only be achieved through an interplay between both processes. Our vibrational pumping, already demonstrated in [@MatthieuViteau07112008; @2009_JMOp_Cooling; @2009NJPh...11e5037S; @2009PhRvA..80e1401S; @2012_PRA_Horchani_Conversion], makes use of a broadband laser whose spectrum has been specifically shaped to excite all the vibrational levels but one, where molecules accumulate. The frequency resolution is limited to $\sim 0.1$ cm$^{-1}$. For rovibrational pumping, a requirement is the control of the light spectrum with a resolution on the order of magnitude of the rotational constants [@2009_JMOp_Cooling; @2010MolPh.108..795S]. As the Cs$_2$ molecules have a rotational constant of about $0.01$ cm$^{-1}$, which is exceptionally small compared with the great majority of diatomic molecules, we have designed a method improving the spectral resolution of molecular optical pumping with respect to the grating technique. It consists in scanning a narrow-band laser diode in appropriate regions of the rotational spectrum, which allows one to modify the population distribution among the rotational levels characterized by their $J$ quantum numbers. Our experimental setup is based on a caesium magneto-optical trap (MOT). All the manipulations performed on this source last a total of $100$ ms. During the first 20 ms, a cw Ti:Sa laser is focused on the MOT to form molecules in the electronic ground state by photoassociation (PA). The PA scheme, detailed in Ref. 
[@2011PCCP...1318910L], consists in exciting a $0^-_g (6s + 6p_{1/2})(v'=26,J')$ rovibrational level, where $v'$ is the vibrational quantum number counted from the dissociation limit, and $J'$ the rotational quantum number. The rotational and vibrational cooling lasers are switched on at the same time as the PA laser, but are respectively applied for $25$ ms and $28$ ms. The vibrational populations are then probed by converting molecules into ions via a resonance-enhanced two-photon ionization (RE2PI) scheme. For this purpose, a pulsed dye laser is turned on $1$ ms after switching off the vibrational cooling laser. It allows us to scan the vibrational transitions between the X$^1\Upsigma_{\rm g}^{+}$ state and the C$^1\Uppi_u$ or D$^1\Upsigma^+_u$ intermediate states [@2011PCCP...1318910L]. Cs$_2^+$ ions are finally detected by micro-channel plates. To access the rotational distributions, we insert a 50 $\mu$s pulse of a narrow-band laser $0.5$ ms before the application of the ionizing pulsed laser. Vibrational cooling is performed according to Ref. [@MatthieuViteau07112008], i.e. with a femtosecond laser ($200$ mW final power, $120$ fs pulse duration, $1$ mm$^2$ beam size) tuned to rovibrational transitions between the ground state and the B$^1\Uppi_u$ excited state, as shown in Fig. \[fig:figure1\]. This allows us to accumulate molecules in $v=0$. It is conceivable to extend the shaping technique of the femtosecond laser to rotational cooling provided that the molecular species under study has larger rotational constants [@2011PCCP...1318825L; @2009_JMOp_Cooling] than the laser shaping resolution. We stress the fact that this requirement is not fulfilled in the case of Cs$_2$. To manipulate the rotational populations of Cs$_2$, we employ a cw DFB diode laser ($\sim 2$ MHz linewidth, $\sim 6$ mW power, $3.5$ mm$^2$ beam size) scanned across the B$^{1}\Uppi_{\rm u}(v_B=3,J_B)\leftarrow $X$^{1}\Upsigma_{\rm g}^{+}(v=0,J)$ transitions through the diode current (see right part of Fig. \[fig:figure1\]). The choice of these transitions originates from the wavelength accessibility of the diode laser. We can explore the $P$, $Q$, or $R$ rotational branches, schematically shown in Fig. \[fig:figure3\], and thus change the $J$ value as proposed in [@1996_JCP_BahnsLaserCooling]. The trend in the evolution of $J$ is governed by the branch selected at the excitation. The rotational spectroscopy of the population in $v=0$ cannot be achieved with the pulsed dye laser alone. Its resolution ($0.1$ cm$^{-1}$) is not good enough to probe the rotational levels. Therefore, we use an additional narrow-band cw DFB laser scanning the B$^{1}\Uppi_{\rm u}(v_B=3,J_B)\leftarrow$X$^{1}\Upsigma_{\rm g}^{+}(v=0,J)$ transitions. After excitation and spontaneous emission, the rovibrational populations in $v=0$ are transferred to several vibrational levels, and mostly to those having a good Franck-Condon (FC) factor with the vibrational excited level [@2010PhRvL.105t3001A]. For our experiment, the FC factor between $v=7$ and $v_B=3$ is the largest available one ($\sim0.15$). The $v=7$ population is probed by the pulsed dye laser through the $9\leftarrow 7$ vibrational transition shown in Fig. \[fig:figure2\]. Examples of rotational spectra obtained by scanning the DFB laser diode frequency are given in Figs. \[fig:figure3\] and \[fig:figure4\]. 
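The pile-up mechanism behind this pumping scheme can be caricatured by a simple rate-equation model. The sketch below is purely illustrative: the number of levels, the pump rate and the random branching ratios are invented, not the measured Cs$_2$ Franck-Condon factors.

```python
import numpy as np

n_levels = 10                 # ground vibrational levels v = 0 ... 9
dark = 0                      # v = 0 is left unexcited by the shaped spectrum
pump = 1.0                    # excitation rate of all bright levels (arb. units)
rng = np.random.default_rng(1)

branch = rng.random((n_levels, n_levels))     # branch[e, v]: decay probability e -> v
branch /= branch.sum(axis=1, keepdims=True)

pop = np.full(n_levels, 1.0 / n_levels)       # initially spread-out population
dt = 0.01
for _ in range(20000):
    exc = pump * pop * dt
    exc[dark] = 0.0                           # the dark state is never pumped
    pop = pop - exc + branch.T @ exc          # spontaneous emission redistributes
print(pop[dark])                              # -> approaches 1: molecules pile up in v = 0
```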
In these figures, there is a systematic, irrelevant offset of $\sim7$ ions due to a measurement bias, and the real number of molecules in a specific rotational level is found by dividing the number of detected ions by the FC factor between $v_B=3$ and $v=7$ and by the MCP efficiency ($\sim 30\%$). In a first experiment, we perform a simple PA on $0^-_g (6s + 6p_{1/2})(v'=26,J')$. We then find vibrational spectra that hardly depend on the $J'$ value and look like the one shown in Fig. \[fig:figure2\](a). In accordance with the experimental and theoretical study reported in Ref. [@2011PCCP...1318910L], we find a distribution spread over more than 60 vibrational levels of the ground state. At this stage, as very few molecules are produced in $v=0$, we are unable to record any valuable rotational spectrum. Next, by adding the femtosecond laser, we obtain the typical vibrational spectrum displayed in Fig. \[fig:figure2\](b), where we see that a substantial fraction ($\sim 25$%) of molecules is pumped into $v=0$ [@2011PCCP...1318910L]. The large population in $v=0$ enables us to study its rotational distribution. We find that rotational spectra depend on the $J'$ value: for $J'=1$, we find the upper spectrum shown in Fig. \[fig:figure3\], and for $J'=2$, the upper spectrum shown in Fig. \[fig:figure4\](a). In both cases, the rotational distributions are spread over
{ "pile_set_name": "ArXiv" }
null
--- abstract: 'We investigate theoretically the Seebeck effect in materials close to a ferromagnetic quantum critical point to explain anomalous behaviour at low temperatures. It is found that the main effect of spin fluctuations is to enhance the coefficient of the leading $T$-linear term, and a quantum critical behaviour characterized by a spin-fluctuation temperature appears in the temperature dependence of correction terms as in the specific heat.' address: ' Faculty of Engineering, Shizuoka University, 3-5-1 Johoku, Hamamatsu 432-8561, Japan' author: - Takuya Okabe bibliography: - 'main.bib' title: 'Spin-fluctuation drag thermopower of nearly ferromagnetic metals ' --- Introduction ============ Experiments on clean materials near a ferromagnetic quantum critical point (QCP) have revealed unusual properties, including non-Fermi-liquid transport and unconventional superconductivity.[@saxena00; @nblkstbbf05] The effects caused by the quantum critical dynamics of spin fluctuations on the specific heat coefficient, the spin susceptibility, the resistivity, and so on, have been elucidated analytically at low temperatures.[@de66; @bmf68; @mathon68; @moriya85; @lrvw07] In most such theoretical analyses made so far, critical spin fluctuations are regarded as staying in thermal equilibrium. On the other hand, one may also conceive of nonequilibrium counterparts of these anomalous behaviours, which would be of fundamental interest too and deserve due theoretical attention. As representatives of such phenomena, there are observations suggesting a spin-fluctuation (or paramagnon) drag thermopower. In the Seebeck coefficient $S(T)$ of ${\rm UAl_2}$, for example, there has remained a structure at low temperature, which is observed experimentally,[@afss79; @po97] but left unexplained theoretically.[@ijc78; @cij78] Among others, the most typical clear-cut experimental evidence would be that reported by Gratz $et$ $al$.,[@grbbmg95; @gratz97; @gm01] where the pronounced low-temperature minimum in $S(T)$ of the strong paramagnets $\rm RCo_2$ (R$=$Sc, Y and Lu) was attributed to the paramagnon drag effect. Recently, Matsuoka $et$ $al$.[@mhittm05] found a similar structure for $\rm A Fe_4Sb_{12}$ (A$=$Ca, Sr and Ba). In effect, Takabatake $et$ $al$.[@tmnhmsus06] made it clear that the anomaly in $S(T)$ is indeed caused by the ferromagnetic spin fluctuations prevalent in the materials, by showing that this structure is completely suppressed by applying a uniform magnetic field. In contrast with the accumulating experimental evidence, there seems to be no theory available so far to compare with the experiments, with the exception of a brief account of a qualitative effect expected for [*localized*]{} spin fluctuations around impurity sites of alloys.[@kaiser76] In this paper, we discuss an effect of uniform spin fluctuations in a translationally invariant system, and intend to provide a more solid footing on which to discuss the phenomenon. In section \[sec:model\], we give an outline of a two-band model, which we adopt as a relevant model, along with approximations and assumptions conventionally made. In section \[sec:sfd\], we introduce a function $\Phi_k^d$ to represent the nonequilibrium displacement of spin fluctuations. In section \[sec:ltc\], we discuss that the leading effect of spin fluctuations appears in the $T$-linear term of $S(T)$. 
In effect, in section \[subsec:ee\], we discuss that the leading term contribution follows a universal relation to the specific heat, that is, $ q\equiv e S/ C \simeq \pm 1$, revealed by Behnia $et$ $al$.[@BJF04] In the higher-order terms, we have to consider not only a critical effect originating from equilibrium quantities, but also a genuinely non-equilibrium effect which has not been investigated before. In section \[sec:std\], we investigate the latter contributions to find a characteristic temperature dependence, and the results are summarized in the last subsection \[sec:summary\]. In section \[sec:disscs\], we discuss the results and comparison is made with experiment. \[sec:model\]Model ================== Let us introduce a two-band paramagnon model, which is conventionally employed to explain an enhanced resistivity of transition metals at low temperature.[@mathon68; @ml66; @rice68] The model has been applied successfully to explain, e.g., a saturation behaviour at elevated temperatures by taking into account a proper temperature dependence of the spin susceptibility.[@jbc74; @um75] The model comprises two types of electrons, i.e., wide-band conduction electrons and narrow-band itinerant electrons on the border of ferromagnetism. We denote the former as the $s$ electrons and the latter as the $d$ electrons, respectively. The Hamiltonian consists of three parts, $$H=H_s + H_{sd} + H_d.$$ The free Hamiltonian of the $s$ electrons is given by $$H_s =\sum_{k\sigma} \varepsilon_s (k) c^\dagger_{k\sigma} c_{k\sigma},$$ where $c^\dagger_{k\sigma}$ and $c_{k\sigma}$ are the creation and annihilation operators for the electron with momentum $k$ and spin $\sigma$. For simplicity, it is often assumed that the $s$ electrons make a parabolic band with mass $m_s$, i.e., $$\varepsilon_s (k) =\frac{k^2}{2m_s}. \label{varepsk}$$ At each site $i$, they are scattered by the spin ${\bi S}_i$ of the $d$ electron at the same site through the Kondo $s$-$d$ coupling, $$H_{sd} = J \sum_i {\bi s}_i\cdot {\bi S}_i, \label{Hsd}$$ where $J$ denotes a coupling constant, and ${\bi s}_i = \frac{1}{2}\sum_{\sigma \sigma'} c^{\dagger}_{i\sigma} {\bi \tau}_{\sigma\sigma'}c_{i\sigma'}$ is the spin of the $s$ electron at the site $i$ expressed in terms of the Pauli matrix vector ${\bi \tau}_{\sigma\sigma'}$. Similarly, the $d$ electron spin at the site $i$ is given by ${\bi S}_i = \frac{1}{2}\sum_{\sigma \sigma'} d^{\dagger}_{i\sigma} {\bi \tau}_{\sigma\sigma'}d_{i\sigma'}$ in terms of the creation and annihilation operators $d^\dagger_{i\sigma}$ and $d_{i\sigma}$ for the $d$ electron. Spin dynamics of the $d$ electrons is described by the Hubbard Hamiltonian, $$H_d =\sum_{k\sigma} \varepsilon_d (k) d^\dagger_{k\sigma} d_{k\sigma} + U \sum_i n_{i\uparrow}n_{i\downarrow}, \label{Hd}$$ where $n_{i\sigma}=d^\dagger_{i\sigma} d_{i\sigma}$ $(\sigma =\uparrow,\downarrow)$ is the number operator of the $d$ electron at the site $i$. The on-site repulsion $U$ is fixed such that the $d$ band is nearly ferromagnetic. To make analytical evaluation feasible, it is often assumed further that the $d$ electrons are also parabolic with a different mass $m_d$ heavier than $m_s$, i.e., $$\varepsilon_d (k) =\frac{k^2}{2m_d}, \label{varepdk}$$ and $m_d \gg m_s$. The latter inequality is regarded as the basic ingredient of the model. Hence the $d$ electrons act as heavy and fluctuating scatterers against the $s$ electrons through the coupling of (\[Hsd\]). 
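Because $U$ places the $d$ band close to the Stoner instability, the spin response discussed in the following is strongly enhanced at small $q$. The sketch below is our own numerical illustration (not from the paper): it evaluates the standard zero-temperature static Lindhard function of a parabolic band and the corresponding RPA enhancement factor; the value of $UN(0)$ is an arbitrary example.

```python
import numpy as np

def lindhard(x):
    """Static Lindhard function F(x) = chi0(q, 0) / chi0(0, 0) of a 3D
    parabolic band at T = 0, with x = q / (2 k_F) (standard textbook form)."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.ones_like(x)                    # F(0) = 1
    m = (x > 0) & (x != 1.0)
    out[m] = 0.5 + (1 - x[m]**2) / (4 * x[m]) * np.log(np.abs((1 + x[m]) / (1 - x[m])))
    out[x == 1.0] = 0.5
    return out

UN0 = 0.95                                   # Stoner factor close to, but below, 1
x = np.linspace(0.05, 2.0, 5)
print(lindhard(x))                           # bare response, O(1) at all q
print(lindhard(x) / (1 - UN0 * lindhard(x))) # RPA response, strongly peaked at small q
```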
In effect, this is taken into account as a second-order effect with respect to the coupling $J$, i.e., through the Born approximation.[@ml66] Then, the $d$ electrons come into play through the (transverse) spin susceptibility $\chi(q, \omega)$. In the random phase approximation, it is given by $$\chi(q,\omega) =\frac{ \chi_{0}(q,\omega) }{ 1-U \chi_{0}(q,\omega)}, \label{chi+-qom}$$ where $$\chi_{0}(q,\omega) = \sum_k \frac{f^0_k -f^0_{k+q}}{\varepsilon_d(k+q) -\varepsilon_d(k) - \omega -{\rm i}\delta }. \label{chi0qomega}$$ Here, $f_k^0 \equiv f^0(\varepsilon_d (k)) =1/({\rm e}^{(\varepsilon_d (k)-\mu)/T}+1) $ is the Fermi distribution function, and $\delta$ is a positive infinitesimal. To investigate critical properties at low temperatures,[@ikk63] (\[chi0qomega\]) is expanded for small $q$ and $\omega/q$ as $$\chi
{ "pile_set_name": "ArXiv" }
null
--- abstract: 'Quantum feedback control is a technology which can be used to drive a quantum system into a predetermined eigenstate. In this article, sufficient conditions on the experimental parameters of a quantum feedback control process based on a homodyne QND measurement are given which guarantee feedback control of a spin-1/2 quantum system in the case of imperfect detection efficiency. For the case of pure states and perfect detection efficiency, time scales of feedback control processes are calculated.' author: - Andreas de Vries title: '[Global stability criterion for a quantum feedback control process on a single qubit and exponential stability in case of perfect detection efficiency]{}' --- Introduction ============ In classical control theory, feedback control describes processes in which a closed-loop controller is used to steer the states or outputs of a dynamical system, which in turn affect the inputs of the controller into the system. A remarkable approach to feedback control of quantum spin systems has recently been elaborated in [@van-Handel-et-al-2005]. Here QND measurements are utilized to let a quantum system collapse deterministically onto a predetermined eigenstate. In the present article, the stability and the time scale of quantum feedback control processes are studied. As a result (Theorem \[satz-asymptotically-stable\]), sufficient limits for the experimental control parameters are derived which guarantee asymptotically stable quantum feedback control processes on a spin-$\frac12$ quantum system. The result is proved by applying Lyapunov’s method to the stochastic differential equation governing the quantum state evolution, and thus differs from the similar result in [@van-Handel-et-al-2005], which proposes numerical methods of semialgebraic geometry and aims at applicability to higher spin systems, where an efficient search for Lyapunov functions is practically impossible. For the special case of pure states and perfect detection efficiency, the quantum feedback control process is proved to converge even exponentially fast in time. The article is organized as follows. First, the notions of QND measurements and quantum feedback control for a spin-$\frac12$ system are briefly reviewed, before the stochastic stability of quantum feedback control processes with imperfect and perfect detection efficiency is studied, and the results are briefly discussed. QND measurements ================ In contrast to a measurement in classical physics, a quantum measurement inevitably changes, or even destroys, the measured quantum system itself [@Goswami-1997; @Nielsen-Chuang-2000]. 
Intensive theoretical as well as experimental investigations have been made of processes in which quantum measurements are utilized constructively, for instance theoretical considerations of measurement determination by the quantum register [@Dusek-Buzek-2002], quantum feedback control by continuous measurements [@Belavkin-1992; @Belavkin-1992b; @Belavkin-1994; @Wiseman-1994], especially in quantum optics [@Armen-et-al-2002; @Stockton-et-al-2002; @Stockton-et-al-2004; @van-Handel-et-al-2005], stabilization and purification of two-level systems [@Wiseman-et-al-2002; @Wiseman-Ralph-2006], conditional measurements of coupled quantum dots by a point contact detector [@Goan-Milburn-2001; @Fujisawa-et-al-2004] or by a SET [@Shnirman-Schoen-1998; @Gurvitz-2003; @Gurvitz-Berman-2005], and the conditional measurement approach due to Sherman and Kurizki [@Sherman-Kurizki-1992] to prepare predetermined field states of atoms trapped in optical QED cavities [@Harel-et-al-1996; @Fortunato-et-al-1996; @Fortunato-et-al-1999], as well as a similar approach analyzed for spin squeezing in Cs clocks [@Oblak-et-al-2005]. Although these approaches differ considerably in detail, most of them utilize repeated quantum nondemolition (QND) measurements [@Joos-et-al-2003 §3.3], i.e., measurements of an observable $Y$ satisfying the *self-nondemolition condition* $ [Y(t), Y(t')] = 0 $ for all times $t$, $t'$, as well as the *back action evasion condition* $ [Y,H_{\mathrm{int}}] = 0, $ where $H_{\mathrm{int}} = \sum_j |j\rangle \langle j| \otimes B_j$ denotes the interaction Hamiltonian between the considered system (the projections $|j\rangle \langle j|$) and the measuring apparatus ($B_j$). The QND observable $Y$ may correspond, for instance, to a Hermitian Lindblad operator $L$, or to a conserved quantity, such as a constant of motion of the considered system like polarization or momentum. Quantum feedback control ======================== Due to ideas of Belavkin [@Belavkin-1992; @Belavkin-1992b; @Belavkin-1994] as well as Wiseman and coworkers [@Wiseman-1994; @Thomsen-et-al-2002], an approach to quantum feedback control of spin systems has recently been developed by van Handel, Stockton, and Mabuchi [@van-Handel-et-al-2005]. In this approach repeated quantum nondemolition measurements are engineered to let quantum spin systems collapse deterministically onto a previously chosen eigenstate. This idea, surprising from a traditional physics perspective, is based on the fact that realistic measurements are not instantaneous but take some finite time. If these reduction time scales are of an order attainable by modern digital electronics, a quantum filter and a controller can respond to the spin system state, feeding back the intermediate nondemolition measurement results to a Hamiltonian parameter. In this way it is possible, for instance, to deterministically prepare highly entangled Dicke states [@Stockton-et-al-2004], to generate and utilize squeezed quantum states of trapped atoms in an optical cavity [@Oblak-et-al-2005], or to improve quantum error correction [@Ahn-et-al-2002]. The quantum stochastic control formalism of van Handel, Stockton, and Mabuchi [@van-Handel-et-al-2005] can be considered as an extension of probability theory, and the traditional formulation of quantum mechanics can be directly recovered from it. In [@van-Handel-et-al-2005 §IV.C] a stabilizing controller is given for a quantum system of spin $j=\frac12$, schematically depicted in Figure \[fig-control\]. 
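It may help to see how the conditional state is propagated in practice. The following Euler-Maruyama sketch integrates the stochastic master equation quoted below (Eq. (\[eq-SME\])) for spin-$\frac12$ with $L=\sigma_z/2$, $\eta=1$ and no feedback ($H(t)=0$); an actual controller would recompute $H(t)$ from $\rho_t$ at every step. The step size, duration and initial state are illustrative choices of ours.

```python
import numpy as np

L = np.diag([0.5, -0.5]).astype(complex)      # L = F_z = sigma_z / 2
rho = np.full((2, 2), 0.5, dtype=complex)     # x-polarized pure initial state
dt = 1e-3
rng = np.random.default_rng(0)

for _ in range(50000):
    # Lindblad (measurement back-action) part of the generator, with H(t) = 0.
    lind = L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)
    # Innovation superoperator H[L] rho = L rho + rho L* - Tr[rho (L + L*)] rho.
    ex = np.real(np.trace(rho @ (L + L.conj().T)))
    innov = L @ rho + rho @ L.conj().T - ex * rho
    rho = rho + lind * dt + innov * rng.normal(0.0, np.sqrt(dt))
    rho = 0.5 * (rho + rho.conj().T)          # guard hermiticity against round-off
    rho /= np.real(np.trace(rho))

print(np.real(np.diag(rho)))   # ~ (1, 0) or (0, 1): collapse onto a sigma_z eigenstate
```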
![\[fig-control\] (Color online) Schema of a quantum feedback control process. The QND measurement output $y_t$ from the quantum system is used to propagate the conditional state of the filter, via the feedback signal $H(t)$. The dashed line indicates (classical) digital processing; the filter is determined by Eq. (\[eq-SME\]). ](./control "fig:"){width="50ex"} The conditional evolution of the density operator $\rho$ describing the quantum system depends on the probe measurement rate $M>0$ (in Hz) and the detection efficiency $\eta \in [0,1]$ (a pure number). More precisely, the conditional evolution of $\rho$ is determined by the stochastic master equation $${\,\mathrm{d}}\rho_t = \mathscr{G}^*[H(t), L] \rho_t {\,\mathrm{d}}t +\sqrt\eta\, \mathscr{H}[L] \rho_t {\,\mathrm{d}}W_t , \label{eq-SME}$$ where $H(t)$ is the control Hamiltonian (with $H(t)=0$ in case of no feedback), $L$ is an observable one of whose eigenstates is the desired final state of the system, $\mathscr{G}^* = \mathscr{G}^*[H(t),L]$ is the adjoint generator $$\mathscr{G}^* \rho_t = -{\mathrm{i}}[H(t), \rho_t] + L \rho_t L^* - {\textstyle \frac12} (L^* L \rho_t + \rho_t L^* L) ,$$ $\mathscr{H}$ is the superoperator $$\begin{aligned} \mathscr{H}[L] \rho_t & \hspace*{-1.0ex} = \hspace*{-1.0ex} & L \rho_t + \rho_t L^* - \mathrm{Tr}[\rho_t (L+L^*)]\,\rho_t,\end{aligned}$$ and ${\,\mathrm{d}}W_t$ is the innovations process $${\,\mathrm{d}}W_t = 2 \sqrt{M\eta}\, y_t {\,\mathrm{d}}t - \sqrt\eta \, \mathrm{Tr}[\rho_t (L+L^*)]\, {\,\
{ "pile_set_name": "ArXiv" }
null
--- abstract: 'In recent years, on-policy reinforcement learning (RL) has been successfully applied to many different continuous control tasks. While RL algorithms are often conceptually simple, their state-of-the-art implementations take numerous low- and high-level design decisions that strongly affect the performance of the resulting agents. Those choices are usually not extensively discussed in the literature, leading to a discrepancy between published descriptions of algorithms and their implementations. This makes it hard to attribute progress in RL and slows down overall progress [@implementation_matters]. As a step towards filling that gap, we implement >50 such “choices” in a unified on-policy RL framework, allowing us to investigate their impact in a large-scale empirical study. We train over 250’000 agents in five continuous control environments of different complexity and provide insights and practical recommendations for on-policy training of RL agents.' author: | Marcin Andrychowicz, Anton Raichuk, Piotr Stańczyk, Manu Orsini,\ **Sertan Girgin, Raphael Marinier, Léonard Hussenot, Matthieu Geist,**\ **Olivier Pietquin, Marcin Michalski, Sylvain Gelly, Olivier Bachem**\ \ Google Research, Brain Team bibliography: - 'main.bib' title: 'What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study' --- Introduction ============ Deep reinforcement learning (RL) has seen increased interest in recent years due to its ability to have neural-network-based agents learn to act in environments through interactions. For continuous control tasks, on-policy algorithms such as REINFORCE [@pg], TRPO [@trpo], A3C [@a3c], PPO [@ppo] and off-policy algorithms such as DDPG [@ddpg] and SAC [@sac] have enabled successful applications such as quadrupedal locomotion [@sac2], self-driving [@kendall2019learning] or dexterous in-hand manipulation [@sac2; @openai2018learning; @openai2019solving]. Many of these papers investigate in depth different loss functions and learning paradigms. Yet, it is less visible that behind successful experiments in deep RL there are complicated code bases that contain a large number of low- and high-level design decisions that are usually not discussed in research papers. While one may assume that such “choices” do not matter, there is some evidence that they are in fact crucial for, or even driving, good performance [@implementation_matters]. While there are open-source implementations available that can be used by practitioners, this is still unsatisfactory: In research publications, often different algorithms implemented in different code bases are compared one-to-one. This makes it impossible to assess whether improvements are due to the algorithms or due to their implementations. Furthermore, without an understanding of lower-level choices, it is hard to assess the performance of high-level algorithmic choices as performance may strongly depend on the tuning of hyperparameters and implementation-level details. Overall, this makes it hard to attribute progress in RL and slows down further research [@henderson2018deep; @implementation_matters; @islam2017reproducibility]. **Our contributions.** Our key goal in this paper is to investigate such lower-level choices in depth and to understand their impact on final agent performance. 
Hence, as our key contributions, we (1) implement >50 choices in a unified on-policy algorithm implementation, (2) conduct a large-scale (more than 250’000 agents trained) experimental study that covers different aspects of the training process, and (3) analyze the experimental results to provide insights and practical recommendations for the on-policy training of RL agents. **Most surprising finding.** While many of our experimental findings confirm common RL practices, some of them are quite surprising, e.g. the policy initialization scheme significantly influences the performance while it is rarely even mentioned in RL publications. In particular, we have found that initializing the network so that the initial action distribution has zero mean, a rather low standard deviation and is independent of the observation significantly improves the training speed (Sec. \[sec:results-arch\]). The rest of this paper is structured as follows: We describe our experimental setup and performance metrics used in Sec. \[sec:performance\]. Then, in Sec. \[sec:results\] we present and analyze the experimental results and finish with related work in Sec. \[sec:related\] and conclusions in Sec. \[sec:conclusions\]. The appendices contain the detailed description of all design choices we experiment with (App. \[sec:choices\]), default hyperparameters (App. \[sec:default-settings\]) and the raw experimental results (App. \[exp\_final\_losses\] - \[exp\_final\_regularizer\]). Study design {#sec:performance} ============ #### Considered setting. In this paper, we consider the setting of *on-policy reinforcement learning for continuous control*. We define on-policy learning in the following loose sense: We consider policy iteration algorithms that iterate between generating experience using the current policy and using the experience to improve the policy. This is the standard *modus operandi* of algorithms usually considered on-policy such as PPO [@ppo]. However, we note that algorithms often perform several model updates and thus may technically operate on off-policy data within a single policy improvement iteration. As benchmark environments, we consider five widely used continuous control environments from OpenAI Gym [@gym] of varying complexity: Hopper-v1, Walker2d-v1, HalfCheetah-v1, Ant-v1, and Humanoid-v1 [^1]. #### Unified on-policy learning algorithm. We took the following approach to create a highly configurable unified on-policy learning algorithm with as many choices as possible: 1. We researched prior work and popular code bases to make a list of commonly used choices, i.e., different loss functions (both for value functions and policies), architectural choices such as initialization methods, heuristic tricks such as gradient clipping, and all their corresponding hyperparameters. 2. Based on this, we implemented a single, unified on-policy agent and corresponding training protocol starting from the SEED RL code base [@seed]. Whenever we were faced with implementation decisions that could not be clearly motivated or had alternative solutions, we further added such decisions as additional choices. 3. We verified that when all choices are selected as in the PPO implementation from OpenAI baselines, we obtain similar performance as reported in the PPO paper [@ppo]. We chose PPO because it is probably the most commonly used on-policy RL algorithm at the moment. The resulting agent implementation is detailed in Appendix \[sec:choices\]. 
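To make the initialization finding above concrete, the sketch below shows one way (ours, not necessarily the exact code used in the study) to realize a zero-mean, low-standard-deviation, observation-independent initial action distribution for a Gaussian policy; the $0.01$ scale factor and the $-1$ initial log-std are example values.

```python
import torch
import torch.nn as nn

class GaussianPolicy(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.mean_head = nn.Linear(hidden, act_dim)
        # State-independent log-std, started low -> small initial exploration noise.
        self.log_std = nn.Parameter(torch.full((act_dim,), -1.0))
        # Shrink the last layer so the initial mean is ~0 for every observation.
        with torch.no_grad():
            self.mean_head.weight.mul_(0.01)
            self.mean_head.bias.zero_()

    def forward(self, obs):
        mean = self.mean_head(self.body(obs))
        return torch.distributions.Normal(mean, self.log_std.exp())

dist = GaussianPolicy(11, 3)(torch.randn(5, 11))
print(dist.mean.abs().max(), dist.stddev[0])   # near-zero means, std ~ exp(-1)
```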
The key property is that the implementation exposes all choices as configuration options in a unified manner. For convenience, we mark each of the choices in this paper with a number (e.g., ) and a fixed name (e.g., ) that can be easily used to find a description of the choice in Appendix \[sec:choices\]. #### Difficulty of investigating choices. The primary goal of this paper is to understand how the different choices affect the final performance of an agent and to derive recommendations for these choices. There are two key reasons why this is challenging: First, we are mainly interested in insights on choices for good hyperparameter configurations. Yet, if all choices are sampled randomly, the performance is very bad and little (if any) training progress is made. This may be explained by the presence of sub-optimal settings (e.g., hyperparameters of the wrong scale) that prevent learning altogether. If there are many choices, the probability of such failure increases exponentially. Second, many choices may have strong interactions with other related choices, for example the learning rate and the minibatch size. This means that such choices need to be tuned together, and experiments where only a single choice is varied but interacting choices are kept fixed may be misleading. #### Basic experimental design. To address these issues, we design a series of experiments as follows: We organize the choices into thematic groups where we suspect interactions between them; for example, we group together all choices related to neural network architecture. We also include in all of the groups as we suspect that it may interact with many other choices. Then, in each experiment, we train a large number of models where we randomly sample the choices within the corresponding group [^2]. All other settings (for choices not in the group) are set to the settings of a competitive base configuration (detailed in Appendix \[sec:default-settings\]) that is close to the default PPOv2 configuration[^3] scaled up to $256$ parallel environments. This has two effects: First, it ensures that our set of trained models contains good models (as verified by performance statistics in the corresponding results). Second, it guarantees that we have models that have different combinations of potentially interacting choices. We then consider two different analyses for each choice (e.g., for ): *Conditional 95th percentile*: For each potential value of that choice (e.g., = `N-Step`), we look at the performance distribution of sampled configurations with that value. We report the 95th percentile of the performance as well as a confidence interval based on a binomial approximation [^4]. Intuitively, this corresponds to a robust estimate of the performance one can expect if all other choices in the group were tuned with random search and a limited budget of roughly 20
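A minimal sketch of this conditional-percentile analysis (our own reconstruction from the description above; the order-statistic interval based on the binomial distribution is one standard way to attach a confidence interval to an empirical percentile):

```python
import numpy as np
from scipy.stats import binom

def conditional_percentile(scores, q=0.95, alpha=0.05):
    """95th percentile of the scores of all sampled configurations sharing a
    given choice value, with a binomial order-statistic confidence interval."""
    x = np.sort(np.asarray(scores, dtype=float))
    n = x.size
    point = np.percentile(x, 100 * q)
    # Order-statistic ranks bounding the q-quantile at confidence 1 - alpha.
    lo = int(binom.ppf(alpha / 2, n, q))
    hi = int(binom.ppf(1 - alpha / 2, n, q))
    lo, hi = max(lo, 0), min(hi, n - 1)
    return point, (x[lo], x[hi])

# Toy usage: performance scores of all configurations with one choice fixed.
rng = np.random.default_rng(0)
print(conditional_percentile(rng.normal(size=200)))
```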
{ "pile_set_name": "ArXiv" }
null
--- author: - '**[Sarbari Guha and Samarjit Chakraborty]{}**' title: '[**On the gravitational entropy of accelerating black holes** ]{}' --- Abstract {#abstract .unnumbered} ======== In this paper we have examined the validity of a proposed definition of gravitational entropy in the context of accelerating black hole solutions of the Einstein field equations, which represent realistic black hole solutions. We have adopted a phenomenological approach proposed in Rudjord et al \[20\] and expanded by Romero et al \[21\], in which the Weyl curvature hypothesis is tested against the expressions for the gravitational entropy. Considering the $C$-metric for the accelerating black holes, we have evaluated the gravitational entropy and the corresponding entropy density for four different types of black holes, namely, the non-rotating black hole, non-rotating charged black hole, rotating black hole and rotating charged black hole. We conclude by discussing the merits of such an analysis and the possible reason for its failure in the particular case of the rotating charged black hole, and comment on the possible resolution of the problem. KEYWORDS: Gravitational entropy, Accelerating Black holes. Introduction ============ The $C$-metric was independently discovered by Levi-Civita [@Levi] and Weyl [@Weyl] in 1917. Ehlers and Kundt [@EK], while working on the classification of the degenerate static vacuum fields, constructed a table in which this metric was placed in the slot “$C$”, leading to the name ‘$C$-metric’. Kinnersley and Walker [@KW] pointed out that this metric is an exact solution of Einstein’s equations which describes the combined electromagnetic and gravitational field of a uniformly accelerating object having mass $m$ and charge $e$, and is an example of “almost everything”. It is for this reason that the $C$-metric is the focus of our attention in this paper. Dray and Walker [@DW] showed that this spacetime represents the gravitational field of a pair of uniformly accelerating black holes. Letelier and Oliveira [@LO] studied the static and stationary $C$-metric and sought its interpretation in detail, in particular in those cases characterized by two event horizons, one for the black hole and another for the acceleration. For spacetimes with vanishing or positive cosmological constant, the $C$-metric represents two accelerated black holes in asymptotically flat or de Sitter (dS) spacetime, and for a negative $\Lambda$ term, depending on the magnitude of acceleration [@DL], it may represent a single accelerated black hole or a pair of causally separated black holes which accelerate away from each other [@Krtous]. The acceleration $A$ is due to forces represented by conical singularities arising out of a strut between the two black holes or because of two semi-infinite strings connecting them to infinity [@Podolsky; @GKP]. The second law of thermodynamics is one of the most fundamental laws of physics. We know that for an ensemble of ideal gas molecules confined to a closed chamber, the gas spreads out to fill the entire space once the chamber is opened, thereby reaching a state of maximum entropy. However, in the case of the universe with its matter content modelled as a fluid (or gas), this is not exactly true. The universe was born from a very homogeneous state and later on, small density fluctuations appeared due to the effect of gravity, which ultimately led to the formation of structures in the universe. 
This evolution is contrary to our expectations from the thermodynamic point of view, since the “gas” condenses into clumps of matter, instead of spreading out. Moreover, in the past the universe was much hotter and at some point of time, matter and radiation were in thermal equilibrium, and the entropy was maximum. So, how can the entropy increase if it was maximum in the past? It appears that if the evolution of the universe is dominated solely by gravity, then we may encounter a violation of the second law of thermodynamics, if we are considering the contribution of the thermodynamic entropy only. To resolve this problem and to provide a proper sequence to the occurrence of gravitational processes, Penrose [@Penrose1] proposed that we must assign an entropy function to the gravitational field itself. He suggested that the Weyl curvature tensor could be used as a measure of the gravitational entropy. The Weyl tensor $C_{\alpha\beta\gamma\delta}$ in $ n $ dimensions is expressed as [@Chandra] $$\label{decom} C_{\alpha\beta\gamma\delta}= R_{\alpha\beta\gamma\delta} - \dfrac{1}{(n-2)}(g_{\alpha\gamma}R_{\beta\delta} + g_{\beta\delta}R_{\alpha\gamma} - g_{\beta\gamma}R_{\alpha\delta} - g_{\alpha\delta}R_{\beta\gamma}) + \dfrac{1}{(n-1)(n-2)}R(g_{\alpha\gamma}g_{\beta\delta}-g_{\alpha\delta}g_{\beta\gamma}),$$ where $R_{\alpha\beta\gamma\delta}$ is the covariant Riemann tensor, $R_{\alpha\beta}$ is the Ricci tensor and $R$ is the Ricci scalar. According to Penrose, initially after the ‘big bang’, when the universe started evolving, the Weyl tensor component was much smaller than the Ricci tensor component of the spacetime curvature. This hypothesis sounds credible because the Weyl tensor is independent of the local energy–momentum tensor. Moreover, the universe was in a nearly homogeneous state before structure formation began, and the FRW models successfully describe this homogeneous phase of the evolution. Further, the Weyl curvature is zero in the FRW models. However, the Weyl curvature is large in the Schwarzschild spacetime. Thus we need a description of gravitational entropy which should increase throughout the history of the universe on account of the formation of more and more structures, leading to the growth of inhomogeneity [@Penrose2; @Bolejko], and thus preserve the second law of thermodynamics. But there is still doubt regarding the definition of gravitational entropy in a way analogous to the thermodynamic entropy, which would be applicable to all gravitational systems [@CET]. The definition of gravitational entropy as the ratio of the Weyl curvature and the Ricci curvature faces problems with radiation [@Bonnor]. After Senovilla showed that the Bel-Robinson tensor is suitable for constructing a measure of the “energy” of the gravitational field [@Senovilla], several attempts were made to define the gravitational entropy based on the Bel-Robinson tensor and also in terms of the Riemann tensor and its covariant derivatives [@PL; @PC]. Many efforts have been made to explain the entropy of black holes using quantized theories of gravity, such as string theory and loop quantum gravity. However, in this paper we will handle the problem using a phenomenological approach proposed in [@entropy1] and expanded in [@entropy2], in which the Weyl curvature hypothesis is tested against the expressions for the entropy of cosmological models and black holes. 
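All the spacetimes analysed below are four-dimensional, so it may help to record the $n=4$ specialization of Eq. (\[decom\]) explicitly (a routine substitution, stated here for the reader's convenience): $$C_{\alpha\beta\gamma\delta}= R_{\alpha\beta\gamma\delta} - \frac{1}{2}\left(g_{\alpha\gamma}R_{\beta\delta} + g_{\beta\delta}R_{\alpha\gamma} - g_{\beta\gamma}R_{\alpha\delta} - g_{\alpha\delta}R_{\beta\gamma}\right) + \frac{R}{6}\left(g_{\alpha\gamma}g_{\beta\delta}-g_{\alpha\delta}g_{\beta\gamma}\right).$$ In particular, in any vacuum region ($R_{\alpha\beta}=0$) the Weyl tensor coincides with the Riemann tensor, which is why the Weyl curvature can be large in the Schwarzschild spacetime even though the Ricci curvature vanishes there.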
They considered a measure of gravitational entropy in terms of a scalar derived from the contraction of the Weyl tensor and the Riemann tensor, and matched it with the Bekenstein-Hawking entropy [@SWH1; @Bekenstein]. In our current work we will consider only accelerating black holes, which represent more realistic black holes for several reasons. For instance, the collision of galaxies is a rather common phenomenon occurring in the universe, and it inevitably leads to black hole mergers with the associated production of gravitational waves [@POK]. In such situations, we may imagine that the black holes at the centre of these galaxies are accelerating towards each other, although we can always think of any black hole as accelerating since no black hole is gravitationally isolated from the neighboring massive systems. Moreover, a static black hole may be considered as the limiting case of an accelerating black hole. Thus the study of accelerating black holes is very important. Here we will investigate whether the calculations for gravitational entropy proposed in [@entropy1] and [@entropy2] can be applied in this context. The organization of our paper is as follows: Sec. II deals with the definition of gravitational entropy and Sec. III lists the metrics of the accelerating black holes considered by us. Sec. IV provides the main analysis of our paper, where we evaluate the gravitational entropy and the corresponding entropy density for these black holes. We discuss our results in Sec. V and present the conclusions in Sec. VI. Gravitational Entropy ===================== The entropy of a black hole can be described by the surface integral [@entropy1] $$S_{\sigma}=k_{s}\int_{\sigma}\mathbf{\Psi}.\mathbf{d\sigma},$$ where $ \sigma $ is the surface of the horizon of the black hole and the vector field $\mathbf{\Psi}$ is given by $$\mathbf{\Psi}=P \mathbf{e_{r}},$$ with $ \mathbf{e_{r}} $ as a unit radial vector. The scalar $ P $ is defined in terms of the Weyl scalar ($ W $) and the Kretschmann scalar ($ K $) in the form $$\label{P_sq} P^2=\dfrac{W}{K}=\dfrac{C_{abcd}C^{abcd}}{R_{abcd}R^{abcd}}.$$ In order to find the gravitational entropy, we need to do our computations in a 3-space. Therefore, we consider the spatial metric which is defined as $$\label{sm} h_{ij}=g_{ij}-\dfrac{g_{i0}g_{j0}}{g_{00}},$$ where $ g_{\mu\nu} $ is the concerned 4-dimensional space-time metric and the Latin indices denote spatial components, $i, j = 1, 2, 3$. So the infinitesimal surface element is given by $$d
{ "pile_set_name": "ArXiv" }
null
--- bibliography: - 'reion.bib' --- Introduction ============ The argument for an extended period of reionization is as follows. WMAP has detected the correlation between temperature and polarization on large angular scales [@kogut03] that has an amplitude proportional to the total optical depth of CMB photons to Thomson scattering, $\tau$ [@sunyaev80; @zaldarriaga97a; @kaplinghat02]. Modeling reionization with a single sharp transition at $z_{ri}$, a multi–parameter fit to the WMAP data gives $z_{ri} = 17 \pm 5$ [@spergel03]. On the other hand, the evolution of quasar spectra from $z=6.3$ and $z=6.4$ to $z = 6$ shows a rapid decrease in the amount of neutral Hydrogen, indicating the end of reionization [@fan03]. A simple interpretation to explain these two very different datasets is that reionization started early, $z_{ri} \sim 20$, but did not conclude until much later ($z \sim 6$). An extended period of reionization has its effect at low $\ell$ in the polarization (as detected by WMAP), and also at higher $\ell$ in temperature. In Section II we discuss the low $\ell$ effect and in Section III the high $\ell$ effect. In Section IV we discuss them in combination. All the work I present here is described in more detail in @haiman03 (reionization models), @kaplinghat03a [@holder03] (low $\ell$ signal) and @santos03 (high $\ell$). The Reionization Bumps ====================== Quadrupole radiation incident upon an electron leads to linear polarization. For a review of CMB polarization, see [@hu97a]. As drawn in Fig. \[fig:bump\], a free electron at high redshift sees a quadrupole from free–streaming of the monopole on its last–scattering surface. A monopole with wavenumber $k$ free streams primarily into $l=k(\eta_{ri} - \eta_{lss})$, so the quadrupole is given primarily by $k=2/(\eta_{ri} - \eta_{lss})$. The linear polarization thus generated at $\eta_{ri}$ projects to $l = k(\eta_0-\eta_{ri}) = 2(\eta_0-\eta_{ri})/(\eta_{ri} - \eta_{lss})$ today. Thus, the feature appears at low $\ell$. The polarization signature is proportional to the amount of scattering and hence the optical depth $\tau$. Therefore $C_l^{EE} \propto \tau^2$ and $C_l^{TE} \propto \tau$. See @zaldarriaga97a and @kaplinghat03a. The same considerations lead to a reionization bump in the tensor B mode with $C_l^{BB} \propto \tau^2$, so higher $\tau$ can improve the detectability of gravitational waves as quantified in @knox02 [@kaplinghat03b]. The polarization angular power spectra are shown in Fig. \[fig:foregrounds\]. The low-$\ell$ polarization has already been detected by WMAP through the correlation of this effect with the temperature map [@kogut03]. From the WMAP measurements only one number can be inferred: a joint fit of the TT and TE power spectra to a six-parameter model results in $\tau = 0.17 \pm 0.06$ [@spergel03]. If the EE power spectrum is measured with cosmic variance precision, there are 5 uncorrelated numbers (the amplitudes of 5 eigenmodes of the ionization history) that can be measured with signal-to-noise greater than 1 [@hu03]. These 5 numbers will provide strong constraints on models of the first generation of stars, since these are presumably what cause the reionization of the inter-galactic medium. Models of foreground polarization indicate that this low-$\ell$ signal can, in principle, be measured with near cosmic variance precision, as seen in Fig. \[fig:foregrounds\], although these models are highly uncertain. We will know more soon from further releases of WMAP data. 
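To make the projection argument concrete, here is a back-of-the-envelope evaluation of $l = 2(\eta_0-\eta_{ri})/(\eta_{ri} - \eta_{lss})$; the conformal times below are rough fiducial values assumed purely for illustration, not numbers taken from this work:

```python
# Rough fiducial comoving conformal times in Mpc (assumed for illustration):
eta_0 = 14100.0   # today
eta_lss = 280.0   # last scattering
eta_ri = 4300.0   # reionization at z_ri ~ 17

k = 2.0 / (eta_ri - eta_lss)      # wavenumber sourcing the quadrupole
l_bump = k * (eta_0 - eta_ri)     # multipole it projects to today
print(f"reionization bump expected near l ~ {l_bump:.1f}")  # a few, i.e. low l
```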
To quantify how much improvement is possible beyond WMAP, we show forecasted constraints on $\tau$ and the initial reionization redshift $z_e$ assuming statistical errors only for WMAP, Planck/LFI and Planck/HFI. Model B has sudden and complete reionization at $z_e=15$. Model A has a two–stage reionization, first increasing $X_e$ suddenly to 0.42 at $z_e=25$ and then suddenly completely ionizing at $z=6.3$. The ionization fraction, $X_e\equiv n_e/n_p$, cannot be greater than 1.08 if Helium does not doubly ionize. Thus for a given $z_e$ there is a minimum value of $\tau$, which gives rise to the straight lines in the lower portions of the contours around model A. The kSZ power spectrum ====================== An extended period of reionization (as suggested by the combination of WMAP data and high-redshift quasar spectra) is likely to be a period with a highly inhomogeneous reionization fraction. Prior to percolation of the ionized regions, the ionization fraction will be near unity in the vicinity of the sources of the reionizing radiation and zero far away from any sources. If the sources are in high-mass halos, as semi-analytic models suggest, then the patches of high ionization fraction will be highly correlated. These correlated patches can lead to a kinetic SZ power spectrum with an amplitude of about $10^{-12}$ at $l$ values of 1500 and higher. At $l > 3000$ this kinetic SZ power spectrum may be the dominant source of fluctuation power on the sky, for components with the same spectral signature as thermal fluctuations about a 2.7 K black body. Here we reproduce a figure from @santos03 showing a range of the kSZ power spectra that come from the reionization models of @haiman03. The error bars on one of the curves are for an observation of 1% of the sky with 0.9’ resolution. They are due to the “noise” from primary + lensing CMB and residual point sources. Instrument noise is assumed to be subdominant. We assume thermal SZ has been removed by taking advantage of its unique spectral dependence. We see that the models do not show much variation in shape, but only in amplitude. There is a degeneracy between the clustering bias of the dark matter halos hosting the reionizing sources and the optical depth. The low-$\ell$ measurements can break this degeneracy, allowing us to place some constraint on the clustering bias, and therefore the halo masses. Although predicting the amplitude of the signal is difficult, the shape appears to be robust, particularly at $l < 3000$ where the kSZ power spectrum might contaminate attempts to measure cosmological parameters and reconstruct the gravitational lensing potential. We can therefore model this contaminant with one free parameter: an amplitude. Doing parameter estimation without such modeling (i.e., ignoring the kSZ power spectrum) can lead to significant parameter biases for Planck and higher–resolution observations [@knox98; @santos03]. But @santos03 find that modeling the kSZ power spectrum as this robust shape times a floating amplitude removes all significant parameter estimation biases even for an experiment that is cosmic–variance limited out to $l=3,000$. The simple prescription for removing the bias works because there is so little variation in the shape of the kSZ power spectrum in our reionization models. This shape is merely a projection of the matter power spectrum from high redshift. The small shape dependence that remains comes from the different mean angular diameter distances in the different models. 
If necessary, the modeling could be extended to include this as a free parameter also. If the real kSZ power spectrum shape (at $l < 3,000$) were significantly different from the shapes we calculate then our simple modeling (with one or maybe two parameters) may not be sufficient. However, this could only be the case if the density of free electrons does not trace the density of matter on scales larger than about 3 Mpc. Note that even if $X_e$ were completely homogeneous, the density of free electrons traces the density of matter. Even in this homogeneous $X_e$ case the shape would still be a projection of the matter power spectrum, although this time from a smaller mean angular-diameter distance. Our calculations have relied on semi–analytic models of reionization. For a more numerical approach, see the contribution from Naoshi Sugiyama. For more on patchy reionization, see @aghanim96 [@gruzinov98; @knox98; @valageas01] and other papers already mentioned above.
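As a sketch of the one-amplitude modeling advocated above, the floating amplitude in $C_l^{\rm kSZ} = A\,C_l^{\rm template}$ has a closed-form weighted least-squares solution (a generic statistical device; the numbers below are made up for illustration):

```python
import numpy as np

def fit_amplitude(cl_obs, cl_template, sigma):
    """Best-fit A minimizing sum_l (cl_obs - A*cl_template)^2 / sigma_l^2."""
    w = 1.0 / np.asarray(sigma) ** 2
    return np.sum(w * cl_obs * cl_template) / np.sum(w * cl_template**2)

rng = np.random.default_rng(0)
ells = np.arange(1500, 3000, 50)
template = 1e-12 * np.ones(ells.size)          # toy flat kSZ template shape
sigma = 2e-13 * np.ones(ells.size)             # per-band "noise" errors
cl_obs = 0.8 * template + sigma * rng.standard_normal(ells.size)
print(fit_amplitude(cl_obs, template, sigma))  # recovers A ~ 0.8
```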
{ "pile_set_name": "ArXiv" }
null
--- abstract: 'This paper provides a sample of a LaTeX document which conforms, somewhat loosely, to the formatting guidelines for ACM SIG Proceedings.[^1]' author: - Ben Trovato - 'G.K.M. Tobin' - 'Lars Th[ø]{}rv[ä]{}ld' - Valerie Béranger - Aparna Patel - Huifen Chan - Charles Palmer - John Smith - 'Julius P. Kumquat' bibliography: - 'sample-bibliography.bib' subtitle: Extended Abstract title: SIG Proceedings Paper in LaTeX Format --- <ccs2012> <concept> <concept_id>10010520.10010553.10010562</concept_id> <concept_desc>Computer systems organization Embedded systems</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10010520.10010575.10010755</concept_id> <concept_desc>Computer systems organization Redundancy</concept_desc> <concept_significance>300</concept_significance> </concept> <concept> <concept_id>10010520.10010553.10010554</concept_id> <concept_desc>Computer systems organization Robotics</concept_desc> <concept_significance>100</concept_significance> </concept> <concept> <concept_id>10003033.10003083.10003095</concept_id> <concept_desc>Networks Network reliability</concept_desc> <concept_significance>100</concept_significance> </concept> </ccs2012> [^1]: This is an abstract footnote
{ "pile_set_name": "ArXiv" }
null
--- author: - 'V. M. Passegger$^{1}$, A. Reiners$^{1}$, S. V. Jeffers$^{1}$, S. Wende$^{1}$, P. Schöfer$^{1}$, P. J. Amado$^{2}$, J. A. Caballero$^{3}$, D. Montes$^{4}$, R. Mundt$^{5}$, I. Ribas$^{6}$, A. Quirrenbach$^{3}$, and the CARMENES Consortium' bibliography: - 'passegger.bib' title: 'Spectroscopic characterisation of CARMENES target candidates from FEROS, CAFE and HRS high-resolution spectra' --- Introduction ============ The new CARMENES instrument is mounted at the 3.5 m telescope at Calar Alto Observatory, located in the Sierra de los Filabres in southern Spain. It consists of two fibre-fed high-resolution spectrographs, operating in the visible wavelength range from 0.52 to 0.96 $\mu$m and in the near-infrared from 0.96 to 1.71 $\mu$m, having a spectral resolution of R > 80,000 [@Quirrenbach2010; @Quirrenbach2012; @Quirrenbach2014]. Both spectrographs will simultaneously perform high-accuracy radial-velocity measurements of about 300 M dwarfs during three years of guaranteed observing time. The aim is to detect low-mass planets within the habitable zones of these stars.\ For science preparation, over 1500 high-resolution spectra have been observed with FEROS, CAFE and HRS to determine effective temperature, surface gravity and metallicity. These parameters are fundamental for characterising star-planet systems. The spectra of M dwarfs are very complex, with molecular lines forming due to the low temperatures. This makes it difficult to use a line-by-line approach and requires a full spectral synthesis, which in turn necessitates accurate models that take into account the formation of molecules. We use the latest generation PHOENIX model grid, the PHOENIX ACES models [@Husser2013]. These models are especially designed for low temperature stellar atmospheres and use a new equation of state to accurately reproduce molecular lines. Methods and Data ================ Name Resolution Coverage \[nm\] No. Spectra No. Stars Observing Period ------- ------------ ----------------- ------------- ----------- -------------------------- CAFE \~65,000 396-950 623 236 2013-01-21 to 2014-09-26 FEROS 48,000 350-920 455 217 2012-12-31 to 2014-07-11 HRS 60,000 420-1100 93 29 2011-09-29 to 2013-06-18 Table \[tab:obs\] summarizes the properties of the spectrographs used for observation and the data taken. Some observed spectra could not be used for analysis because of different issues, e.g., very low signal-to-noise, observation of the wrong target, or polluting light from close companions. The method we use was described in detail in [@Passegger2016a]. We fit PHOENIX ACES model spectra to our observed spectra. This is done for different spectral ranges, including the $\gamma$- and $\epsilon$-TiO bands (sensitive to temperature and metallicity), the K- and Na-doublets around 768 nm and 819 nm (sensitive to surface gravity and metallicity) and two CaII-lines. Rotational velocities determined by [@Jeffers2016] are included to account for line broadening due to stellar rotation. In contrast to [@Passegger2016a], a downhill simplex is implemented for linear interpolation between the model grid points and a $\chi^2$-minimization determines the best fit to the data. Figure \[fig:fit\] shows an example fit to CARMENES data. ![image](fit.jpg){width="0.85\linewidth"}
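The fitting procedure just described (linear interpolation between model grid points, driven by a downhill simplex minimizing $\chi^2$) can be sketched as follows; this is an illustrative mock-up rather than the actual pipeline, with a random stand-in model grid and made-up grid ranges and wavelength sampling:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Stand-in for a PHOENIX-ACES-like grid: flux as a function of
# (Teff, logg, [Fe/H]) sampled on a coarse rectangular grid.
teff = np.linspace(2800, 4000, 7)
logg = np.linspace(4.0, 5.5, 4)
feh = np.linspace(-1.0, 0.5, 4)
fluxes = rng.random((teff.size, logg.size, feh.size, 500))

# Linear interpolation between grid points (mild extrapolation at the edges).
interp = RegularGridInterpolator((teff, logg, feh), fluxes,
                                 bounds_error=False, fill_value=None)

def chi2(params, obs, err):
    """Chi-square between the observed spectrum and the interpolated model."""
    model = interp(np.atleast_2d(params))[0]
    return np.sum(((obs - model) / err) ** 2)

# Mock observation near a grid node, then a downhill-simplex (Nelder-Mead) fit.
obs = fluxes[3, 2, 2] + 0.01 * rng.standard_normal(500)
res = minimize(chi2, x0=[3300.0, 4.6, -0.2], args=(obs, 0.01),
               method="Nelder-Mead")
print(res.x)  # best-fit (Teff, logg, [Fe/H])
```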
Results and Discussion ====================== We obtained stellar parameters for 351 stars from 977 spectra. We find that most stars lie within 3200-3900 K, corresponding to spectral types M1V-M5V, as shown in the upper left panel of Figure \[fig:results\]. The higher the metallicity, the higher the temperature for each spectral type (Figure \[fig:results\], lower left panel). This is consistent with results by [@Mann2015]. They showed that with increasing metallicity the radius increases, for fixed temperature. The spectral types have been calculated using spectral indices [@Schoefer2015]. The green squares correspond to a literature computation by [@PecautMamajek2013] for solar metallicity. A literature comparison with [@RojasAyala2012], [@GaidosMann2014] and [@Maldonado2015] shows that our values for metallicity turn out to be higher than the published ones (Figure \[fig:results\], upper right). One possible explanation for this is that PHOENIX ACES models still cannot reproduce the full depths of some lines (see Figure \[fig:fit\], 4th wavelength range), which might cause the algorithm to choose higher metallicity models to fit the lines. On the other hand, it seems that the signal-to-noise ratio is also very important for parameter determination. 75 percent of the stars with \[Fe/H\] higher than 0.6 have SNRs lower than 50. We find good agreement with expected \[Fe/H\] values for SNR > 50 (Figure \[fig:results\], lower right). For the first four months of CARMENES data, which have better SNRs, we find that the parameters show better agreement with the literature. ![image](results.jpg){width="0.85\linewidth"} Acknowledgments {#acknowledgments .unnumbered} =============== [CARMENES is an instrument for the Centro Astronómico Hispano-Alemán de Calar Alto (CAHA). CARMENES was funded by the German Max-Planck-Gesellschaft (MPG), the Spanish Consejo Superior de Investigaciones Científicas (CSIC), the European Union through the European Regional Fund (FEDER/ERF), the Spanish Ministry of Economy and Competitiveness, the state of Baden-Württemberg, the German Science Foundation (DFG), the Junta de Andalucía, and by the Klaus Tschira Stiftung, with additional contributions by the members of the CARMENES Consortium (Max-Planck-Institut für Astronomie, Instituto de Astrofísica de Andalucía, Landessternwarte Königstuhl, Institut de Ciències de l’Espai, Institut für Astrophysik Göttingen, Universidad Complutense de Madrid, Thüringer Landessternwarte Tautenburg, Instituto de Astrofísica de Canarias, Hamburger Sternwarte, Centro de Astrobiología, and the Centro Astronómico Hispano-Alemán).]{}
{ "pile_set_name": "ArXiv" }
null
--- abstract: | Given a flat injective ring epimorphism $u\colon R\to U$ between commutative rings, we consider the Gabriel topology ${\mathcal{G}}$ associated to $u$ and the class ${\mathcal{D}}_{\mathcal{G}}$ of ${\mathcal{G}}$-divisible modules. We prove that ${\mathcal{D}}_{\mathcal{G}}$ is an enveloping class if and only if it is the tilting class corresponding to the $1$-tilting module $U\oplus U/R$ and for every ideal $J\in {\mathcal{G}}$ the quotient rings $R/J$ are perfect rings. Equivalently, ${\operatorname{p.dim}}U\leq 1$ and the discrete quotient rings $\mathfrak R/\mathfrak RJ$ of the topological ring $\mathfrak R={\operatorname{End}}(U/R)$ are perfect rings. Moreover, we show that every enveloping $1$-tilting class over a commutative ring arises from a flat injective ring epimorphism. address: - | Dipartimento di Matematica “Tullio Levi-Civita”\ Università di Padova\ Via Trieste 63, 35121 Padova (Italy) - | Dipartimento di Matematica “Tullio Levi-Civita”\ Università di Padova\ Via Trieste 63, 35121 Padova (Italy) author: - Silvana Bazzoni - Giovanna Le Gros title: Enveloping classes over commutative rings --- Introduction ============ The classification problem for classes of modules over arbitrary rings is in general very difficult, or even hopeless. Nonetheless, approximation theory was developed as a tool to approximate arbitrary modules by modules in classes where the classification is more manageable. Left and right approximations were first defined in the case of modules over finite dimensional algebras by work of Auslander, Reiten, and Smalø and generalised later by Enochs and Xu for modules over arbitrary rings using the terminology of preenvelopes and precovers. An important problem in approximation theory is when minimal approximations, that is covers or envelopes, over certain classes exist. In other words, for a certain class ${\mathcal{C}}$, the aim is to characterise the rings over which every module has a minimal approximation in ${\mathcal{C}}$ and furthermore to characterise the class ${\mathcal{C}}$ itself. The most famous positive result on the existence of minimal approximations is the construction of an injective envelope for every module. Instead, Bass proved in [@Bass] that projective covers rarely exist. In his paper, Bass introduced and characterised the class of perfect rings, which are exactly the rings over which every module admits a projective cover. Among the many characterisations of perfect rings, the most important from the homological point of view is the closure under direct limits of the class of projective modules. A comparison between the existence of injective envelopes and projective covers shows that the existence of minimal left or right approximations is not a symmetric phenomenon in general. A class ${\mathcal{C}}$ of modules is called covering, respectively enveloping, if every module admits a ${\mathcal{C}}$-cover, respectively a ${\mathcal{C}}$-envelope. A cotorsion pair $({\mathcal{A}}, {\mathcal{B}})$ admits (special) ${\mathcal{A}}$-precovers if and only if it admits (special) ${\mathcal{B}}$-preenvelopes. This observation led to the notion of complete cotorsion pairs, that is cotorsion pairs admitting approximations. Results by Enochs and Xu ([@Xu Theorem 2.2.6 and 2.2.8]) show that a complete cotorsion pair $({\mathcal{A}}, {\mathcal{B}})$ such that ${\mathcal{A}}$ is closed under direct limits admits both ${\mathcal{A}}$-covers and ${\mathcal{B}}$-envelopes. 
Note that in the case of the cotorsion pair $({\mathcal{P}}_0, {\mathrm{Mod}\textrm{-}{R}})$, where ${\mathcal{P}}_0$ is the class of projective modules, Bass’s results state that ${\mathcal{P}}_0$ is a covering class if and only if ${\mathcal{P}}_0$ is closed under direct limits. In this paper we are interested in the conditions under which a class ${\mathcal{C}}$ is enveloping. We will deal with classes of modules over commutative rings and in particular with $1$-tilting classes. An important starting point is the bijective correspondence between faithful finitely generated Gabriel topologies ${\mathcal{G}}$ and $1$-tilting classes over commutative rings established by Hrbek in [@H]. The tilting class can then be characterised as the class ${\mathcal{D}}_{\mathcal{G}}$ of ${\mathcal{G}}$-divisible modules, that is, the modules $M$ such that $JM=M$ for every $J\in {\mathcal{G}}$. We prove that if a $1$-tilting class is enveloping, then $R_{\mathcal{G}}$, the ring of quotients with respect to the Gabriel topology ${\mathcal{G}}$, is ${\mathcal{G}}$-divisible, so that $R\to R_{\mathcal{G}}$ is a flat injective ring epimorphism. It is well known that every flat ring epimorphism $u\colon R\to U$ gives rise to a finitely generated Gabriel topology. We will consider the case of a flat injective ring epimorphism $u\colon R\to U$ between commutative rings and show that if the module $R$ has a ${\mathcal{D}}_{\mathcal{G}}$-envelope, then $U$ has projective dimension at most one. From results by Angeleri Hügel and S[á]{}nchez [@AS], we infer that the module $U\oplus K$, where $K$ is the cokernel of $u$, is a $1$-tilting module with ${\mathcal{D}}_{\mathcal{G}}$ as associated tilting class. In other words, ${\mathcal{D}}_{\mathcal{G}}$ coincides with the class of modules generated by $U$, that is, epimorphic images of direct sums of copies of $U$, or equivalently with $K^\perp$, the right Ext-orthogonal of $K$. Assuming furthermore that the class ${\mathcal{D}}_{\mathcal{G}}$ is enveloping, we prove that all the quotient rings $R/J$, for $J\in {\mathcal{G}}$, are perfect rings and so are all the discrete quotient rings of the topological ring $\mathfrak R={\operatorname{End}}(K)$ (Theorems  \[T:rjperfect\] and  \[T:EndK-properfect\]). In the terminology of [@BP2] this means that $\mathfrak R$ is a pro-perfect topological ring. Moreover, the converse holds, that is, if $\mathfrak R={\operatorname{End}}(K)$ is a pro-perfect topological ring and the projective dimension of $U$ is at most one, then the class of ${\mathcal{G}}$-divisible modules is enveloping (Theorem \[T:characterisation\]). Consequently, applying results from [@BP2 Section 19], we obtain that ${\mathrm{Add}}(K)$, the class of direct summands of direct sums of copies of $K$, is closed under direct limits. Since ${\mathcal{D}}_{\mathcal{G}}$ coincides with the right Ext-orthogonal of ${\mathrm{Add}}(K)$, we have an instance of the necessity of the closure under direct limits of a class whose right Ext-orthogonal admits envelopes. Therefore, in our situation we prove a converse of the result by Enochs and Xu ([@Xu Theorem 2.2.6]) which states that if a class ${\mathcal{A}}$ of modules is closed under direct limits and extensions and its right Ext-orthogonal ${\mathcal{A}}^\perp$ admits special preenvelopes with cokernel in ${\mathcal{A}}$, then ${\mathcal{A}}^\perp$ is enveloping. 
The case of a non-injective flat ring epimorphism $u\colon R\to U$ is easily reduced to the injective case, since the class of ${\mathcal{G}}$-divisible modules is annihilated by the kernel $I$ of $u$, so all the results proved for $R$ apply to the ring $R/I$ and to the cokernel $K$ of $u$. As a byproduct we obtain that a $1$-tilting torsion class over a commutative ring is enveloping if and only if it arises from a flat injective ring epimorphism with associated Gabriel topology ${\mathcal{G}}$ such that the factor rings $R/J$ are perfect rings for every $J\in {\mathcal{G}}$ (Theorem \[T: tilting-envelope\]). This provides a partial answer to Problem 1 of [@GT12 Section 13.5] and generalises the result proved in [@B] for the case of commutative domains and divisible modules. The paper is organised as follows. After the necessary preliminaries, in Section \[S:envelope\] we state some general results concerning properties of envelopes with respect to classes of modules. In Section \[S:gab-top\] we recall the notion of a Gabriel topology and outline the properties of the related ring of quotients. In Section \[S:tilting-enveloping\], we consider a $1$-tilting class over a commutative ring and its associated Gabriel topology via Hrbek’s results  [@H]. We prove that if the $1$-tilting class is enveloping
{ "pile_set_name": "ArXiv" }
null
--- author: - 'Chee Sheng Fong$^{1}$' - 'Hisakazu Minakata$^{2,3}$' - 'Hiroshi Nunokawa$^{4}$' title: 'Non-unitary evolution of neutrinos in matter and the leptonic unitarity test\' --- IFT-UAM/CSIC-17-117 Introduction {#sec:introduction} ============ Studies of neutrino oscillation entered a “mature phase” after the structure of the three-flavor lepton mixing [@Maki:1962mu] was elucidated. The long-lasting discovery phase of neutrino oscillation has been unambiguously concluded by the Super-Kamiokande (Super-K) atmospheric neutrino observation which discovered neutrino oscillation and hence neutrino mass [@Fukuda:1998mi]. It was followed by the KamLAND reactor and the solar neutrino experiments which uncovered the three-flavor nature of the mixing by observing oscillation and/or nonadiabatic flavor conversion of neutrinos in matter [@Mikheev:1986gs; @Wolfenstein:1977ue] in the 1-2 sector [@Eguchi:2002dm; @Ahmad:2002jz].[^1] The last step of understanding the three-flavor structure of neutrino oscillation was carried out by the reactor [@An:2016ses; @RENO:2015ksa; @Schoppmann:2016iww] and the accelerator [@Abe:2017vif; @Adamson:2016tbq] measurement of $\theta_{13}$. This leaves only two remaining unknowns in the standard three-flavor mixing paradigm, that is, the measurement of the CP violating phase and the determination of the neutrino mass ordering.[^2] The completion of the theory of three-flavor neutrino mixing, however, necessitates a test of the paradigm. A well-known example of such efforts is to verify the unitarity of the quark CKM matrix [@Olive:2016xmw]. We have argued in ref. [@Fong:2016yyh] that we may need a different strategy to test leptonic unitarity. That is, first prepare a generic framework which describes unitarity violation at a certain energy scale, and then confront it with experimental data. We contrasted the two typical alternatives, unitarity violation by new physics at high ($E \gg m_{W}$) and low ($E \ll m_{W}$) energy scales, which are dubbed high-scale and low-scale unitarity violation, respectively.[^3] They differ in certain characteristic features, such as the absence (low-scale) or presence (high-scale) of violation of flavor universality and zero-distance flavor transition. In a previous paper, we have proposed a model-independent framework for testing leptonic unitarity assuming low-scale unitarity violation [@Fong:2016yyh]. Our framework is based on the three active plus $N$ sterile lepton (called neutrino) system, which is unitary in the whole $(3+N)$ dimensional state space, but the restriction to observables in the active neutrino subspace renders the theory non-unitary in that subspace. It is referred to as the “$(3+N)$ space unitary model”. We have shown that the restriction of sterile state masses to $0.1 \,\text{eV}^2 \lsim m^2_{J} \lsim 1 \,\text{MeV}^2$ is sufficient to make the observables sterile-sector model independent. That is, the neutrino oscillation probability can be written in such a way that it is independent of details of the sterile mass spectrum and mixing with active neutrinos. The model-independent nature of the framework will be translated into that of the constraints obtained, thereby making the leptonic unitarity test more powerful. See ref. [@Parke:2015goa; @Blennow:2016jkn] for a comprehensive analysis of the currently available neutrino data with the active plus sterile framework, and [@Blennow:2016jkn; @Tang:2017khg] for analyses of the future experiments. Can one distinguish between high-scale and low-scale unitarity violation? 
The answer is in principle yes, given the above differences between them, but this can only be done by detecting violation of flavor universality or zero-distance flavor transition. In ref. [@Fong:2016yyh] we have pointed out another way of distinguishing them by observing the probability leaking term in the oscillation probability, which signals the existence of energetically accessible sterile states. Unfortunately, this important trace of the hidden sterile sector has neither been mentioned in the context of unitarity tests, nor included in the foregoing analyses before our analysis for a JUNO [@JUNO]-like setting. In this paper we will give a comprehensive treatment of the $(3+N)$ space unitary model. Our particular emphasis in this paper is twofold: - We formulate a novel perturbative framework for the $(3+N)$ space unitary model, which we call “[*small unitarity-violation perturbation theory*]{}”. It allows us to calculate the oscillation probability in the presence of a matter effect comparable in size to the vacuum effect. - We show that the framework can be used in dual modes: It serves (1) as a suitable framework for leptonic unitarity tests in neutrino oscillation experiments, and (2) as a hunting tool for unitarity violation effects, which could provide another way of distinguishing low-scale unitarity violation from high-scale one. It must be remarked that the sterile sector model-independent nature of the $(3+N)$ space unitary model is demonstrated in ref. [@Fong:2016yyh] only in vacuum and in matter to first order in matter perturbation theory. It is the first goal of this paper to formulate the appropriate framework to address this question, and to show that the model-independence holds after inclusion of a sizeable matter effect. In fact, we will observe that the same condition on the sterile neutrino masses guarantees this property. It may be useful in the application of our framework to some of the ongoing and next generation neutrino oscillation experiments [@Abe:2017vif; @Adamson:2016tbq; @Abe:2017aap; @Collaboration:2011ym; @Abe:2015zbg; @Acciarri:2015uup; @TheIceCube-Gen2:2016cap; @Adrian-Martinez:2016zzs]. The discussion of the second point above, our second goal, will follow once the formulation of perturbation theory is completed. It may be worth noting that there exist differences between low- and high-scale unitarity violation with regard to the need for, and implications for, future neutrino experiments. The scenario of high-scale unitarity violation has been studied extensively in the literature [@Antusch:2006vwa; @FernandezMartinez:2007ms; @Antusch:2009pm; @Antusch:2009gn; @Antusch:2014woa; @Escrihuela:2015wra; @Fernandez-Martinez:2016lgt; @Blennow:2016jkn; @Escrihuela:2016ube].[^4] In this case, due to preserved $SU(2) \times U(1)$ symmetry at high scales, it is conceivable that the constraints from measurements using probes in the charged lepton sector play a dominant role. On the other hand, in the case of low-scale unitarity violation, neutrino oscillation experiments will play a key role in constraining unitarity violation. Essence of the present and the previous papers {#sec:essence} ============================================== In this section, we present the essence of the present and the previous [@Fong:2016yyh] papers, in which we try to construct an adequate formulation to describe unitarity violation at low energies, $E \ll m_{W}$. 
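Before setting up the formalism, the following toy calculation illustrates the decoherence averaging invoked below: once the sterile-driven oscillation phase varies by much more than $2\pi$ across the experimental energy resolution, the oscillatory term averages to $1/2$. The baseline, energy, and smearing width are made-up illustrative numbers, not experimental parameters from the text:

```python
import numpy as np

# Phase convention: sin^2(1.27 * dm2[eV^2] * L[km] / E[GeV]).
L, E0 = 1.0, 0.004                     # reactor-like baseline and energy
E = E0 * np.linspace(0.9, 1.1, 4001)   # ~10% energy smearing (assumed)

for dm2 in [1e-3, 0.1, 10.0]:          # eV^2
    avg = np.mean(np.sin(1.27 * dm2 * L / E) ** 2)
    print(f"dm2 = {dm2:6g} eV^2 -> <sin^2> = {avg:.3f}")
# Small dm2 barely oscillates; large dm2 averages to ~1/2, i.e. the fast
# active-sterile oscillations leave only an averaged, phase-free imprint.
```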
Unitary 3 active + $N$ sterile neutrino system {#sec:3+N-system} ---------------------------------------------- We have shown in the $(3+N)$ space unitary model that the active neutrino oscillation probability can be written in a sterile sector model-independent way under the constraint on the sterile state masses $0.1\, \text{eV}^2 \lsim m^2_{J} \lsim 1\, \text{MeV}^2$, both in vacuum and to first order in the matter effect [@Fong:2016yyh]. The lower limit of the sterile mass range comes from the requirement that fast oscillations due to active-sterile and sterile-sterile mass differences are averaged out due to decoherence. The upper limit, in turn, is required for sterile neutrinos to take part in reactor neutrino experiments; it may be relaxed to $m^2_{J} \lsim (1 - 10) \,\text{GeV}^2$ for accelerator neutrinos. Then, the obvious question is whether the conclusion remains the same when the matter effect is taken into account to all orders. We will answer this question in the affirmative in this paper. Non-unitary evolution of neutrinos in vacuum {#sec:nonunitarity-vacuum} -------------------------------------------- Despite the large state space of $(3+N)$ dimensions, the resultant expression for the oscillation probability has a simple form under the above-stated restriction on the sterile neutrino mass spectrum. In vacuum it has the form $$\begin{aligned} P(\nu_\beta \rightarrow \nu_\alpha) &=& \mathcal{C}_{\alpha \beta} + \left| \sum_{j=1}^{3} U_{\alpha j} U^{*}_{\beta j} \right|^2 - 2 \sum_{j \neq k} \mbox{Re} \left( U_{\alpha j} U_{\beta j}^* U_{\alpha k}^* U_{\beta k} \right) \
{ "pile_set_name": "ArXiv" }
null
--- abstract: 'The simple linear model $$Y_i = \alpha + \beta \, x_i + \epsilon_i \qquad i=1,2, \ldots,N \geq 2$$ is considered, where the $x_i$’s are given constants and $\epsilon_1, \epsilon_2 , \ldots, \epsilon_N$ are iid with continuous distribution function $F$. An estimator of $\beta$ is proposed, based on the stochastic process in (\[due\]) and defined as $\tilde{\beta} = \frac 12 \, \left\{ \sup (b: G(\underline y;b) >0) + \right. $ $ \left. \inf (b: G(\underline y;b) <0) \right\}.$ The properties of $\tilde{\beta}$ and of the related confidence interval are studied. Some comparisons are given, in terms of asymptotic relative efficiency, with other estimators of $\beta$ including that obtained with the method of least squares.' author: - 'D. Michele Cifarelli' date: '[**Translation from Italian of the paper:**]{} Cifarelli, D.M. (1978). “La stima del coefficiente di regressione mediante l’indice di cograduazione di Gini", [*Rivista di matematica per le scienze economiche e sociali*]{} (now: [*Decisions in economics and finance*]{}), 1, 7–38.' title: 'Estimation of the regression slope by means of Gini’s cograduation index' --- Introduction and summary ======================== Consider the simple linear model $$Y(x_i) = Y_i = \alpha + \beta \, x_i + \epsilon_i \qquad i=1,2, \ldots,N,$$ where 1. $x_1, x_2, \ldots , x_N$ are known constants, supposed to be all distinct and increasingly ordered 2. $\epsilon_1, \epsilon_2, \ldots, \epsilon_N$ are mutually independent random variables with the same distribution function $F$ 3. $\alpha$ and $\beta$ are unknown parameters. The usual estimators of $\alpha$ and $\beta$ are those derived from the least squares method. As is well known, if the $\epsilon_i$’s have finite variance, such estimators possess some good properties. More specifically, they are unbiased and have minimum variance in the class of linear estimators (BLUE). When, in addition, the $\epsilon_i$’s are assumed to be normal, the above estimators coincide with the ones obtained by the maximum likelihood method and, besides being unbiased, they have minimum variance in the class of all unbiased estimators (MVUE) and they are normally distributed. Consider then the least squares estimator of $\beta$: $$\label{uno} \hat{\beta} = \frac{\displaystyle{\sum_{1\leq i \leq N}} (x_i - \bar x) (Y_i - \bar Y)}{\displaystyle{\sum_{1\leq i \leq N}} (x_i - \bar x)^2} \qquad \bar Y = \frac 1N \sum_{1\leq i \leq N} Y_i, \quad \bar x = \frac 1N \sum_{1\leq i \leq N} x_i.$$ As the corresponding estimate of $\beta$ strongly depends on the observed values $y_1, y_2, \ldots , y_N,$ the occurrence of outliers, that is, of observations deviating from the main core of the data, will likely influence such a procedure. This situation will often arise when the distribution of the disturbances $\epsilon_i$ has heavy tails, like in the case of the Cauchy, the double-exponential and other distributions. This is quite a serious drawback of the estimator $\hat{\beta},$ and attempts are occasionally made to remedy it by unconventionally deleting the most extreme observations. 
Another completely different problem of least squares concerns the interval estimation of $\beta.$ The possibility of producing a confidence interval for $\beta,$ or equivalently of testing the hypothesis $\beta = \beta_0,$ rests indeed on the assumption of normality for the variables $\epsilon_i$’s, so that, at least for limited values of $N,$ the whole procedure proves to be fairly “unrobust" when such an assumption is not met (even if the asymptotic normality of $\hat{\beta}$ is assumed). Besides, the asymptotic theory for such intervals cannot always be invoked, for such a theory rests on the asymptotic normality of $\hat{\beta},$ which is not always ensured (\[1\]). Two distinct methods can be used to solve the first of the problems above: one can decide to delete outliers or, alternatively, to base the estimation of $\beta$ on suitable functions of ranks, which are possibly unaffected by the extreme observations. Common thinking is that the deletion of outliers must follow rules that are clearly stated before, and not after, data are available; this task cannot then rely on a subjective judgment, which would deprive the researcher of any foundation to study the related procedure. The papers by Brown and Mood (\[2\]), Adichie (\[3\]), Theil (\[4\]) and Sen (\[5\]) are framed, instead, in the logic of ranks, which proved to be able to overcome both the drawbacks outlined above. To introduce such kinds of procedures, notice that the estimator (\[uno\]) can be rewritten so that the slopes $$P_{ij} = \frac{Y_j-Y_i}{x_j-x_i}, \qquad i<j,$$ are explicitly shown. Indeed, $$\hat{\beta} = \frac{\displaystyle{\sum_{i<j}} P_{ij} \, (x_j - x_i)^2}{\displaystyle{\sum_{i<j}} (x_j-x_i)^2} = \frac{\displaystyle{\sum_{i<j}} (Y_j-Y_i) \, (x_j - x_i)}{\displaystyle{\sum_{i<j}} (x_j-x_i)^2}.$$ The above equality shows that $\hat{\beta}$ can be regarded as a mean of the $P_{ij}$’s with weights $(x_j-x_i)^2.$ To solve the problem of outliers, one can then obviously substitute such a weighted mean with a suitable function of the slopes $P_{ij},$ so as to be unaffected (or at least less affected) by the extreme observations. This approach is substantially the one used by Theil, who proposed, as an estimator of $\beta,$ the median of the slopes $P_{ij}$, or the central value of the median interval when dealing with an even number of slopes. Theil’s procedure is related to the one by Sen, who derived an estimator of $\beta$ by using a measure of concordance, which is essentially Kendall’s $\tau,$ between the ranks of $Y_i-bx_i$ and those of $x_i,$ $i=1, 2, \ldots , N.$ The obtained estimator is the same as the one proposed by Theil, but it can also be applied when the $x_i$’s are not all distinct. It is interesting to note that the same result can be obtained by starting from a completely different point of view, namely by using the minimax estimator with a non-quadratic loss function (\[6\]). The study of the asymptotic properties of both the point and the interval estimators is due to Sen as well, along with the determination of the asymptotic relative efficiency of the proposed estimator with respect to the one of least squares and to other estimators, proposed by Adichie (\[3\]), which were later generalized, in a somewhat more general framework, by Koul (\[7\]). 
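A compact numerical sketch of the two estimators discussed above, using toy data with heavy-tailed (Cauchy) errors; this is our own illustration, not an example from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: y = alpha + beta*x + Cauchy noise, so the error distribution has
# heavy tails and the least-squares slope is easily dragged by outliers.
alpha_true, beta_true = 1.0, 2.0
x = np.arange(1.0, 51.0)
y = alpha_true + beta_true * x + rng.standard_cauchy(x.size)

# Theil-Sen: median of all pairwise slopes P_ij = (y_j - y_i)/(x_j - x_i), i<j.
i, j = np.triu_indices(x.size, k=1)
beta_ts = np.median((y[j] - y[i]) / (x[j] - x[i]))

# Least-squares slope for comparison.
beta_ls = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)

print(f"Theil-Sen: {beta_ts:.3f}   least squares: {beta_ls:.3f}")
```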
To have an idea of the efficiency gained by the Theil-Sen estimator, $\beta^*,$ with respect to that of least squares, $\hat{\beta},$ it suffices to notice that there are cases where $$\lim_{N \rightarrow +\infty} \frac{{{\rm Var}}(\hat{\beta})}{{{\rm Var}}(\beta^*)} = \alpha > 1$$ and that, even in the normal case, if the constants $x_i$’s are suitably chosen, $$\lim_{N \rightarrow +\infty} \frac{{{\rm Var}}(\hat{\beta})}{{{\rm Var}}(\beta^*)} = \frac{3}{\pi} \simeq 0.95.$$ Instead of measuring the concordance between the residuals $Y_i-bx_i$ and $x_i,$ $i=1,2,\ldots, N,$ by means of $\tau$ or other indices, as later proposed (\[8\]), one can obviously consider Gini’s cograduation index $G.$ This procedure is quite different from the one proposed by Adichie, who used a class of indices which are functions of the ranks of the residuals $Y_i-bx_i$ and of the values $x_i,$ while $G$ is based, as is well known, on the ranks of $Y_i-bx_i$ and on the [*ranks*]{} of $x_i,$ $i=1, 2, \ldots, N.$ In addition, the results gained using $G$ are likely to be structurally different from the ones obtained from $\tau$ or Spearman’s $R,$ because $G$ is believed
{ "pile_set_name": "ArXiv" }
null
--- abstract: 'We consider operators arising from regular Dirichlet forms with vanishing killing term. We give bounds for the bottom of the (essential) spectrum in terms of exponential volume growth with respect to an intrinsic metric. As special cases we discuss operators on graphs. When the volume growth is measured in the natural graph distance (which is not an intrinsic metric), we discuss the threshold for positivity of the bottom of the spectrum and finiteness of the bottom of the essential spectrum of the (unbounded) graph Laplacian. This threshold is shown to lie at cubic polynomial growth.' address: - | Mathematisches Institut\ Friedrich Schiller Universit[ä]{}t Jena\ 07743 Jena, Germany - | Mathematisches Institut\ Friedrich Schiller Universit[ä]{}t Jena\ 07743 Jena, Germany - | York College of the City University of New York\ Jamaica, NY 11451\ USA author: - Sebastian Haeseler - Matthias Keller - 'Rados[ł]{}aw K. Wojciechowski' title: Volume growth and bounds for the essential spectrum for Dirichlet forms --- Introduction and Main Results ============================= In 1981 Brooks proved that the bottom of the essential spectrum of the Laplace Beltrami operator on a complete non compact Riemannian manifold with infinite measure can be bounded by the exponential volume growth rate of the manifold [@Br]. Following this, similar results were proven in various contexts, see [@DK; @Fuj; @Hi; @Hi2; @OU; @Stu]. Very recently, it was shown in [@KLW] that such a result fails to be true in the case of the (non-normalized) graph Laplacian when the volume is measured with respect to the natural graph distance. Indeed, there are graphs of cubic polynomial volume growth that have positive bottom of the spectrum, and slightly more than cubic growth already allows for purely discrete spectrum. This suggests that one should look for other candidates for a metric on a graph. In this work we use the context of regular Dirichlet forms (without killing term) and the corresponding concept of intrinsic metrics, see [@Stu] and [@FLW], to prove a Brooks-type theorem. The purpose of this approach is threefold. First, we provide a setup which includes all known examples (and various others, e.g., quantum graphs) and give a unified treatment. Additionally, our estimates are slightly better than most of the previous results. Secondly, our method of proof seems to be much clearer and simpler than most of the previous works. Finally, graph Laplacians are now included and the disparity discussed above is resolved by considering suitable metrics. As an application, we can now prove that the examples found in [@KLW] for Laplacians on graphs do indeed give the borderline for positive bottom of the spectrum. In particular, for the natural graph distance the threshold for zero bottom of the essential spectrum and the discreteness of the spectrum lies at cubic growth. Let $X$ be a locally compact separable metric space and $m$ a positive Radon measure of full support. Let ${{\mathcal E}}$ be a closed, symmetric, non-negative form on the Hilbert space $L^{2}(X,m)$ of real-valued square integrable functions with domain $D$. We assume that ${{\mathcal E}}$ is a regular Dirichlet form without killing term (for background on Dirichlet forms see [@Fuk]; more details are given in Section \[s:DF\]). Let $L$ be the positive self adjoint operator arising from ${{\mathcal E}}$. 
Define $$\begin{aligned} {{\lambda}}_{0}(L):=\inf{{\sigma}}(L)\quad\mbox{and}\quad {{\lambda}}_{0}^{\mathrm{ess}}(L):=\inf{\sigma_{\!\mathrm{ess}}}(L)\end{aligned}$$ where ${\sigma_{\!\mathrm{ess}}}(L)$ denotes the essential spectrum of $L$. We let $\rho$ be an intrinsic pseudo metric in the sense of [@FLW]. For $x_{0}\in X$ and $r\geq 0$, we define the distance ball $B_{r}=B_{r}(x_{0})=\{x\in X\mid \rho(x,x_{0})\leq r\}$. Let the *exponential volume growth* be defined as $$\begin{aligned} \mu=\liminf_{r\to\infty}\frac{1}{r}\log m(B_{r}(x_{0})).\end{aligned}$$ Note that, in contrast to previous works on manifolds [@Br], graphs [@Fuj] and strongly local forms [@Stu], we consider a $\liminf$ here, rather than a $\limsup$. If $\rho$ takes values in $[0,\infty)$, then $X=\bigcup_{r} B_{r}(x_{0})$. In this case $\mu$ does not depend on the particular choice of $x_{0}$. There is another constant first introduced in [@Stu] which we call the *minimal exponential volume growth* and which is defined as $$\begin{aligned} {\widetilde{ \mu}}=\liminf_{r\to\infty}\frac{1}{r}\inf_{x\in X}\log \frac{m(B_{r}(x))}{m(B_{1}(x))}.\end{aligned}$$ In this paper we prove the following theorem. \[t:main\] Let $L$ be the positive self adjoint operator arising from a regular Dirichlet form ${{\mathcal E}}$ without killing term and let $\rho$ be an intrinsic metric such that all distance balls are compact. Then, $$\begin{aligned} {{\lambda}}_{0}(L)\leq \frac{{\widetilde{ \mu}}^{2}}{4}.\end{aligned}$$ If additionally $m(\bigcup_{r}B_{r}(x_{0}))=\infty$ for some $x_{0}$, then $$\begin{aligned} {{\lambda}}_{0}^{\mathrm{ess}}(L)\leq \frac{\mu^{2}}{4}.\end{aligned}$$ This has the following immediate corollary. The corollary has various consequences, for example, the exponential instability of the semigroup $(e^{-tL})_{t\geq0}$ on $L^{p}(X,m)$, $p\in [1,\infty]$, see [@Stu Corollary 2]. Suppose that $(X,d)$ is of subexponential growth, i.e., ${\widetilde{ \mu}}=0$ (respectively, $\mu=0$). Then, ${{\lambda}}_{0}(L)=0$ (respectively, ${{\lambda}}_{0}^{\mathrm{ess}}(L)=0$). \(a) Let us discuss Theorem \[t:main\] in the perspective of the present literature: For the Laplace Beltrami operator on a Riemannian manifolds an estimate for ${{\lambda}}_{0}^{\mathrm{ess}}$ can be found in [@Br], see also [@Hi2]. In [@Stu] the statement for ${{\lambda}}_{0}$ is proven for strongly local Dirichlet forms. For non-local operators such results were known only for normalized Laplacians on graphs, see [@DK; @Fuj; @Hi; @OU]. These operators are of a very special form, in particular, they are always bounded. For unbounded Laplacians on graphs the conclusions of the theorem do not hold if one considers volume with respect to the natural graph metric, see [@KLW]. However, by [@FLW] (see also [@GHM]), there is now a suitable notion of intrinsic metric for non-local forms. Let us stress that our result covers the results in [@Br; @DK; @Fuj; @OU; @Stu]. The results of type [@Hi; @Hi2] could certainly also be obtained with slightly more technical effort which we avoid here for clarity of presentation. \(b) Despite the fact that our result is much more general, we have a unified method of proof for the bounds on the spectrum and the essential spectrum. Moreover, for the essential spectrum, the proof is significantly simpler than the one of [@Br; @Fuj] as we use test functions that converge weakly to zero and, therefore, avoid a cut-off procedure. 
\(c) Indeed, we prove a slightly more general result than above for non-local forms in Section \[s:nonlocal\]. In particular, for some special cases we prove much better estimates and recover the results of [@DK; @Fuj; @OU] in Corollary \[c:normalized\] in Section \[s:graph\]. \(d) If we assume that $\rho$ takes values in $[0,\infty)$, then we can clearly replace the assumption that $m(\bigcup_{r}B_{r}(x_{0}))=\infty$ with $m(X)=\infty$. The case when $m(X) < \infty$ is notably different, see [@HKLW2] for more details. \(e) If $\inf_{x\in X}m(B_{1}(x))>0$, then one can also show that $ {{\lambda}}_{0}^{\mathrm{ess}}(L)\leq {{\widetilde{ \mu}}^{2}}/{4}$. \(f) Our result deals exclusively with Dirichlet forms with vanishing killing term. The major challenge in the case of a non-vanishing killing term is to give a proper definition of volume which incorporates the killing term. We briefly discuss a strategy of how one could approach this case: We need a positive generalized harmonic function $u$, i.e., ${{\mathcal E}}(u,{{\varphi}})=0$ for all ${{\varphi}}\in D$, where $u$ is
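To make the exponential volume growth $\mu$ concrete, the following is a minimal numerical sketch (not from the paper): it computes $\frac{1}{r}\log m(B_{r})$ for a $3$-regular tree with unit vertex measure and the natural graph distance. As discussed above, this distance is not an intrinsic metric for the unbounded graph Laplacian, so the sketch only illustrates the volume-growth quantity itself.

```python
import math

def ball_volume_regular_tree(d, r):
    """m(B_r): number of vertices within graph distance r of the root of
    an infinite d-regular tree, with unit measure per vertex."""
    vol, shell = 1, d                # the root, then its d neighbours
    for _ in range(r):
        vol += shell
        shell *= (d - 1)             # each further generation branches (d-1)-fold
    return vol

d = 3
for r in (5, 10, 20, 40):
    mu_r = math.log(ball_volume_regular_tree(d, r)) / r
    print(f"r = {r:2d}   log m(B_r)/r = {mu_r:.4f}")
print(f"expected limit log(d-1) = {math.log(d - 1):.4f}")
```

The printed ratios approach $\log(d-1)$, i.e., distance balls of a regular tree grow exponentially in this metric, so $\mu>0$.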
--- author: - Ulrich Langer - 'Stephen E. Moore' bibliography: - 'DGIGASurface\_MooreLanger.bib' title: Discontinuous Galerkin Isogeometric Analysis of Elliptic PDEs on Surfaces --- Introduction {#sec1:Introduction} ============ Isogeometric Analysis (IGA), introduced by [@HughesCottrellBazilevs:2005a] and intensively developed since (see also the monograph [@CottrellHughesBazilevs:2009a]), is a very suitable framework for representing and discretizing Partial Differential Equations (PDEs) on surfaces. We refer the reader to the survey paper by [@DziukElliot:2013a] where different finite element approaches to the numerical solution of PDEs on surfaces are discussed. Very recently, [@DednerMadhavanStinner:2013a] have used and analyzed the Discontinuous Galerkin (DG) finite element method for solving elliptic problems on surfaces. The IGA of second-order PDEs on surfaces, which avoids errors arising from the approximation of the surface, has been introduced and numerically studied by [@DedeQuarteroni:2012a]. [@Brunero:2012a] presented some discretization error analysis of the DG-IGA applied to plane (2d) diffusion problems that carries over to plane linear elasticity problems, which have recently been studied numerically in [@ApostolatosSchmidtWuencherBletzinger:2013a]. The efficient generation of the IGA equations, their fast solution, and the implementation of adaptive IGA schemes are currently hot research topics. The use of DG technologies will certainly facilitate the handling of the multi-patch case. In this paper, we use the DG method to handle the IGA of diffusion problems on closed or open, multi-patch NURBS surfaces. The DG technology easily allows us to handle non-homogeneous Dirichlet boundary conditions, as in the Nitsche method, and multi-patch NURBS spaces which can be discontinuous across the patch boundaries. We also derive discretization error estimates in the DG- and $L_{2}$-norms. Finally, we present some numerical results confirming our theoretical estimates. Surface Diffusion Model Problem {#sec2:SurfaceDiffusionModelProblem} =============================== Let us assume that the physical (computational) domain $\Omega$, where we are going to solve our diffusion problem, is a sufficiently smooth, two-dimensional generic (Riemannian) manifold (surface) defined in the physical space $\mathbb{R}^{3}$ by means of a smooth multi-patch NURBS mapping that is defined as follows. 
Let $\mathcal{T}_{H}= \{\Omega^{(i)}\}_{i=1}^{N}$ be a partition of our physical computational domain $\Omega$ into non-overlapping patches (subdomains) $\Omega^{(i)}$ such that $\overline{\Omega}= \bigcup_{i=1}^{N} \overline{\Omega}^{(i)} $ and $\Omega^{(i)} \cap \Omega^{(j)}= \emptyset $ for $i \neq j$, and let each patch $\Omega^{(i)}$ be the image of the parameter domain $\widehat{\Omega} = (0,1)^2 \subset \mathbb{R}^{2}$ by some NURBS mapping $G^{(i)} : \widehat{\Omega} \rightarrow \Omega^{(i)} \subset \mathbb{R}^{3}, \mathbf{\xi} = (\mathbf{\xi}_1,\mathbf{\xi}_2) \mapsto \mathbf{x} = (\mathbf{x}_1,\mathbf{x}_2,\mathbf{x}_3)=G^{(i)}(\mathbf{\xi})$, which can be represented in the form $$\label{sec2:GeometricalMappingRepresentation} G^{(i)}(\xi_{1},\xi_{2}) = \sum_{k_{1}=1}^{n_{1}} \sum_{k_{2}=1}^{n_{2}} \mathbf{P}^{(i)}_{(k_{1},k_{2})} \widehat{R}^{(i)}_{(k_{1},k_{2})}(\xi_{1},\xi_{2})$$ where $\{ \widehat{R}^{(i)}_{(k_{1},k_{2})} \}$ are the bivariate NURBS basis functions, and $\{\mathbf{P}^{(i)}_{(k_{1},k_{2})} \}$ are the control points, see [@CottrellHughesBazilevs:2009a] for a detailed description. Let us now consider a diffusion problem on the surface $\Omega$, the weak formulation of which can be written as follows: find $u \in V_g$ such that $$\label{sec2:VariationalFormulation} a(u,v) = \langle F,v \rangle \quad \forall v \in V_0,$$ where the bilinear and linear forms are given by the relations $$a(u,v) = \int_\Omega \alpha \, \nabla_\Omega u \cdot \nabla_\Omega v \, d \Omega \quad \mbox{and} \quad \langle F,v \rangle = \int_\Omega f v \, d \Omega + \int_{\Gamma_N} g_N v \,d \Gamma,$$ respectively, where $\nabla_\Omega$ denotes the so-called tangential or surface gradient, see e.g. Definition 2.3 in [@DziukElliot:2013a] for its precise description. The hyperplane $V_g$ and the test space $V_0$ are given by $V_g=\{v \in V = H^1(\Omega): v=g_D \;\mbox{on}\; \Gamma_D\}$ and $V_0=\{v \in V: v=0 \;\mbox{on}\; \Gamma_D\}$ for the case of an open surface $\Omega$ with the boundary $\Gamma = \overline{\Gamma}_D \cup \overline{\Gamma}_N$ such that $\mbox{meas}_1(\Gamma_D) > 0$, whereas $V_g=V_0=\{v \in V: \int_\Omega v \, d \Omega =0\}$ in the case of a pure Neumann problem ($\Gamma_N = \Gamma$) as well as in the case of closed surfaces unless there is a reaction term. In the case of closed surfaces, there is of course no integral over $\Gamma_N$ in the linear functional on the right-hand side of (\[sec2:VariationalFormulation\]). In the remainder of the paper, we will mainly discuss the case of mixed boundary value problems on an open surface under appropriate assumptions (e.g., $\mbox{meas}_1(\Gamma_D) > 0$, $\alpha$ - uniformly positive and bounded, $f\in L_2(\Omega)$, $g_D \in H^{\frac{1}{2}}(\Gamma_{D})$ and $g_{N} \in L_{2}(\Gamma_{N})$ ) ensuring existence and uniqueness of the solution of (\[sec2:VariationalFormulation\]). For simplicity, we assume that the diffusion coefficient $\alpha$ is patch-wise constant, i.e. $\alpha = \alpha_i$ on $\Omega^{(i)}$ for $i=1,2,\ldots,N$. The other cases, including the reaction-diffusion case, can be treated in the same way and yield the same results as presented below. 
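To make the patch mapping (\[sec2:GeometricalMappingRepresentation\]) concrete, the following is a minimal single-patch evaluation sketch. The Cox-de Boor recursion is standard; the open knot vectors, the degrees $p=q=2$, and the control net below are illustrative assumptions, not data from this paper.

```python
import numpy as np

def bspline_basis(i, p, t, knots):
    """Cox-de Boor recursion: value at t of the i-th B-spline of degree p."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:
        left = (t - knots[i]) / (knots[i + p] - knots[i]) \
               * bspline_basis(i, p - 1, t, knots)
    if knots[i + p + 1] > knots[i + 1]:
        right = (knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1]) \
                * bspline_basis(i + 1, p - 1, t, knots)
    return left + right

def nurbs_point(xi1, xi2, P, w, U, V, p=2, q=2):
    """G(xi1,xi2) = sum_{k1,k2} P_{k1,k2} R_{k1,k2}(xi1,xi2) for one patch;
    P has shape (n1,n2,3) and w holds the NURBS weights."""
    num, den = np.zeros(3), 0.0
    for k1 in range(P.shape[0]):
        B1 = bspline_basis(k1, p, xi1, U)
        if B1 == 0.0:
            continue
        for k2 in range(P.shape[1]):
            c = w[k1, k2] * B1 * bspline_basis(k2, q, xi2, V)
            num, den = num + c * P[k1, k2], den + c
    return num / den

# toy patch: 3x3 control net, all weights 1 (evaluate at xi < 1; the
# half-open convention above excludes the right endpoint of the knot span)
U = V = [0, 0, 0, 1, 1, 1]
P = np.array([[[0, 0, 0], [1, 0, 0], [2, 0, 0]],
              [[0, 1, 0], [1, 1, 1], [2, 1, 0]],
              [[0, 2, 0], [1, 2, 0], [2, 2, 0]]], dtype=float)
w = np.ones((3, 3))
print(nurbs_point(0.5, 0.5, P, w, U, V))   # -> [1.   1.   0.25]
```

With non-unit weights the same routine evaluates a genuinely rational patch, which is what allows NURBS geometries to represent conic sections exactly.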
DG-IGA Schemes and their Properties {#sec3:DGIGASchemesAndProperties} =================================== The DG-IGA variational identity $$\label{sec3:DG-VariationalIdentity} a_{DG}(u,v) = \langle F_{DG},v \rangle \quad \forall v \in \mathcal{V} = H^{1+s}(\mathcal{T}_{H}),$$ which corresponds to (\[sec2:VariationalFormulation\]), can be derived in the same way as their FE counterpart, where $H^{1+s}(\mathcal{T}_{H}) =\{v \in L_{2}(\Omega): v|_{\Omega^{(i)}} \in H^{1+s}(\Omega^{(i)}), \; \forall \, i = 1,\ldots,N\}$ with some $s > 1/2$. The DG bilinear and linear forms in the *Symmetric Interior Penalty Galerkin* (SIPG) version, that is considered throughout this paper for definiteness, are defined by the relationships $$\begin{aligned} \label{sec3:DG-BilinearForm} \nonumber a_{DG}(u,v) &=&\sum_{i=1}^{N} \int_{\Omega^{(i)}} \alpha_{i} \nabla_{\Omega} u \cdot \nabla_{\Omega} v \, d\Omega\\ \nonumber && -\sum_{\gamma \in \mathcal{E}_{I} \cup \mathcal{E}_{D}} \int_{\gamma} \left( \{ \alpha \nabla_{\Omega} u \cdot \mathbf{n}\} [v] + \{\alpha \nabla_{\Omega} v \cdot \mathbf{n}\} [u] \right)\,d\Gamma\\ && + \sum_{ \gamma \in \mathcal{E}_{I} \cup \mathcal{E}_{D}} \frac{\delta}{ h_{\gamma} } \int_{\gamma} \alpha_{\gamma} [u][v]\,d\Gamma\end{aligned}$$ and $$\
--- abstract: 'The response of ultracold atomic Bose gases in time-dependent optical lattices is discussed based on direct simulations of the time-evolution of the many-body state in the framework of the Bose-Hubbard model. We focus on small-amplitude modulations of the lattice potential as implemented in several recent experiments and study different observables in the region of the first resonance in the Mott-insulator phase. In addition to the energy transfer we investigate the quasimomentum structure of the system, which is accessible via the matter-wave interference pattern after a prompt release. We identify characteristic correlations between the excitation frequency and the quasimomentum distribution and study their structure in the presence of a superlattice potential.' author: - Markus Hild - Felix Schmitt - Ilona Türschmann - Robert Roth title: 'Ultracold Bose gases in time-dependent 1D superlattices: response and quasimomentum structure' --- Ultracold atomic gases in optical lattices are a versatile laboratory for the study of fundamental quantum phenomena. The accurate control of the important physical parameters over a wide range has been utilized for detailed experimental investigations of quantum phase transitions, e.g. the superfluid to Mott insulator phase transition (SF-MI) [@JaBr98; @GrMa02a; @GrMa02b]. The primary experimental observable is the matter-wave interference pattern of the atoms after release from the confining potentials and ballistic expansion. The interference pattern obtained after a sudden release of the atoms provides direct information on the (quasi)momentum distribution of the system before the release. It thus helps with the identification of different quantum phases, such as the SF and MI phases [@GrMa02a; @GrMa02b]. In recent experiments it was also used to study the response of the system in a two-photon Bragg-spectroscopy scheme based on a temporal modulation of the lattice potential. The broadening of the central interference peak after re-thermalization in a shallow lattice was used as a measure for the response [@StMo04; @FaLy07]. These techniques can also be employed in the context of more complicated irregular lattices, as produced, e.g., by speckle patterns [@LyFa05; @ClVa05; @FoFa05; @ScDr05] or two-color superlattice potentials [@RoBu03b; @RoBu03c; @FaLy07]. In pioneering experiments the impact of a two-color superlattice on the response has been investigated [@FaLy07]. In agreement with theoretical predictions a broadening of the response as a function of modulation frequency was observed with increasing superlattice amplitude. Motivated by these experiments we study the response of a bosonic system to the modulation of the superlattice potential deep in the Mott-insulating regime. Going beyond the observables used in experiments, we study the quasimomentum structure of the system as it is revealed by the interference pattern after a prompt release of the atoms without any re-thermalization phase. To this end we perform an explicit time-evolution of the many-body state in the presence of a time-dependent lattice based on the Bose-Hubbard Hamiltonian [@HiSc06]. Similar studies have been done using the time-dependent density matrix renormalization group method (tDMRG) [@KoIu06]. Our simulations reveal subtle correlations between the modulation frequency within the resonance region and the quasimomentum distribution, which should be accessible to experiment. 
A system of $N$ bosons in a one-dimensional superlattice potential with $I$ sites at zero temperature is well described by the single-band Bose-Hubbard model (BHM) [@JaBr98]. The Bose-Hubbard Hamiltonian, formulated in second quantization in the basis of localized Wannier states of the lowest band, reads $$\begin{aligned} \label{eq_sec2_hamiltonian} {{\hat{\mathrm{H}}}}_0= -J_0\!\sum_{\langle{}i,j\rangle}{{\hat{\mathrm{a}}}^{\dagger}}_i{{\hat{\mathrm{a}}}^{\phantom{\dagger}}}_j +\frac{U_0}{2}\!\sum_{i}{\hat{\mathrm{n}}}_i({\hat{\mathrm{n}}}_i\!-\!1) +\Delta_0\!\!\sum_{i}\epsilon_i{\hat{\mathrm{n}}}_i,\end{aligned}$$ where the first sum runs over adjacent sites including a term connecting the first and last site of the lattice (cyclic boundary conditions). The Hamiltonian with the bosonic creation (annihilation) operators ${{\hat{\mathrm{a}}}^{\dagger}}_i$ (${{\hat{\mathrm{a}}}^{\phantom{\dagger}}}_i$) and the occupation number operator ${\hat{\mathrm{n}}}_i$ consists of three terms: The first term describes the tunneling between adjacent sites, the second term accounts for the on-site interaction of the atoms, and the third term represents a site-dependent external potential. We introduce the superlattice potential via the latter term and describe its spatial structure by the reduced on-site energies $\epsilon_i \in [-1,0]$. The physics of the BHM is governed by the competition between the relative strengths of these three terms, i.e. the tunneling strength $J_0$, the interaction strength $U_0$, and the superlattice amplitude $\Delta_0$. The basis of the BHM for $N$ bosons and $I$ sites is spanned by the occupation number states ${\,\big|{\{n_1,n_2,\dots,n_I\}_{\alpha}}\big> }$ for all possible sets of occupation numbers $n_i$ with $\sum_{i} n_i = N$. An arbitrary state can be expanded in this number-state basis leading to ${\,\big|{\psi}\big> }=\sum_{\alpha=1}^D C_{\alpha}{\,\big|{\{n_1,n_2,\dots,n_I\}_{\alpha}}\big> }\label{eq_sec2_state}$ with complex coefficients $C_{\alpha}$. The coefficients of the eigenstates ${\,\big|{\nu}\big> }$ can be obtained by the numerical solution of the eigenvalue problem of the Hamiltonian matrix. ![image](fig1.eps){width="80.00000%"} The basis dimension $D$ grows factorially with $I$ and $N$, thus limiting any calculation using the full basis to small systems. However, in the strongly interacting regime ($U_0\gg{}J_0$) only a few basis states contribute to the low-lying eigenstates. This allows for a physically motivated truncation [@HiSc06; @ScHi07] of the many-body basis. The relevant basis states are identified using the expectation value of the Hamiltonian ${\big<{\{n_1,n_2,\dots,n_I\}}\big|\,{{{\hat{\mathrm{H}}}}_0}\big|\,{\{n_1,n_2,\dots,n_I\}}\big> } \leq E_\text{trunc}$, and only states below the truncation energy $E_\text{trunc}$ are included. By varying $E_\text{trunc}$ one can explicitly assess and control the impact of the truncation on observables. For regular lattices ($\Delta_0\!=\!0$) and filling factor $N/I\!=\!1$ this truncation allows for all relevant $n$-particle–$n$-hole states with $n \leq E_\text{trunc}/U_0$ with respect to the reference state ${\,\big|{1,1,\dots,1}\big> }$. We investigate the dynamics and response of the system induced by external time-dependent perturbations based on the explicit time evolution of the many-body state. Our calculations are motivated by recent experiments [@StMo04; @FaLy07] using a sinusoidal modulation of the lattice depth to perform two-photon Bragg spectroscopy. 
Unlike our calculations, these experiments include a rethermalization phase in the superfluid regime before the time-of-flight measurement in order to assess the energy transfer to the system. We assume a prompt release without rethermalization to directly access the quasimomentum distribution after a certain modulation time. Formally, the time-dependent lattice potential is written as $V(x,t)=[1+F\sin(\omega t)]\,V(x)$, where $V(x)$ is the static spatial lattice, $\omega$ the frequency, and $F$ the relative amplitude of the modulation. All simulations are performed with a small amplitude $F=0.1$ in accord with experiment. The time-dependence enters the Bose-Hubbard Hamiltonian ${{\hat{\mathrm{H}}}}(t)$ via the time-dependent parameters $J(t)$, $U(t)$, and $\Delta(t)$, which are obtained within a Gaussian approximation for the localized Wannier functions [@KBM05; @HiSc06]. The parameters oscillate around their initial values $J_0$, $U_0$, and $\Delta_0$ at $t\!=\!0$. We investigate the response of the system deep inside the Mott-insulating regime for fixed $U_0/J_0=40$. The modulation frequency $\omega$ is varied in the range $\omega/J_0=32$ to $52$, which covers the so-called 1U resonance at $\omega/J_0\approx U_0/J_0 = 40$. The ground state obtained for the initial Bose-Hubbard Hamiltonian ${{\hat{\mathrm{H}}}}_0$
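To make the energy-truncated basis of Eq. (\[eq_sec2_hamiltonian\]) described above concrete, here is a minimal enumeration sketch; the sizes $N=I=6$, the regular lattice $\Delta_0=0$, and the choice $E_\text{trunc}=2U_0$ (keeping up to 2-particle-2-hole states) are illustrative, not the parameters of the production runs.

```python
from itertools import combinations

def number_states(N, I):
    """All occupation tuples (n_1,...,n_I) with sum n_i = N (stars and bars)."""
    for bars in combinations(range(N + I - 1), I - 1):
        state, prev = [], -1
        for b in bars:
            state.append(b - prev - 1)
            prev = b
        state.append(N + I - 2 - prev)
        yield tuple(state)

def diagonal_energy(state, U0, D0, eps):
    """<n|H_0|n>: the hopping term is purely off-diagonal, so only the
    interaction and superlattice terms contribute to the diagonal."""
    return sum(U0 / 2 * n * (n - 1) + D0 * e * n for n, e in zip(state, eps))

N = I = 6
U0, D0, eps = 40.0, 0.0, [0.0] * 6      # deep Mott regime, regular lattice
E_trunc = 2 * U0
basis = [s for s in number_states(N, I)
         if diagonal_energy(s, U0, D0, eps) <= E_trunc]
print(len(basis), "of", sum(1 for _ in number_states(N, I)), "states kept")
```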
--- abstract: 'Simulations of vortex tube dynamics reveal that the non-Gaussian nature of turbulent fluctuation originates in the effect of random advection. A similar non-Gaussian distribution is found numerically in a simplified statistical model of random advection. An analytical solution is obtained in the mean-field case.' author: - 'Y-h. Taguchi' - Hideki Takayasu --- Non-Gaussian distribution in Random advection dynamics Institut für Festkörperforschung, Forschungszentrum Jülich,\ D-5170 Jülich, Germany\ and\ Department of Physics, Tokyo Institute of Technology,\ Oh-okayama, Meguro-ku, Tokyo 152, Japan[@present] Department of Earth Science, Kobe University, Kobe 657, Japan. In the study of statistical physics for non-equilibrium systems the deviation from the Gaussian distribution has been the central issue. We are expecting the existence of universal mechanisms of producing non-Gaussian distributions as in the case of thermal equilibrium systems; however, our knowledge is still at an elementary level. The non-Gaussian probability distribution function (NGPDF) is especially important in fluid turbulence. Turbulence is a typical far-from-equilibrium phenomenon, as it is characterized by the energy cascade which violates both the detailed balance and equi-partition. The appearance of NGPDF is so common that we cannot construct any theory without taking the non-Gaussian nature into account[@Batchelor; @Monin]. The latest techniques of direct observation of turbulence[@Goldburg] and direct numerical integration of the Navier-Stokes equation[@Kida] are clarifying the details of NGPDF from the experimental viewpoint. Recently, theoretical investigation of NGPDF in turbulence is attracting much attention. One approach is to assume multifractality in the geometry of the turbulent velocity field as an ideal limit[@Benzi], and another one, called the mapping closure, conjectures the existence of a kind of smooth map which describes the time evolution of statistical quantities[@She91; @Kraichnan]. Yakhot et al.[@Yakhot] also present an excellent phenomenological approach to derive a NGPDF for vorticities in fluid turbulence. Although it is fairly reasonable, the physical meaning of their assumptions is not clear. In this paper we first introduce a numerical model of vortex dynamics on lattice and show that the vorticity distribution in a turbulent steady state is actually far from Gaussian but closer to a symmetric exponential distribution. In order to clarify the origin of this NGPDF we modify the dynamics so that vortex tubes move randomly by passive advection. It is shown that this modification does not seem to affect the vorticity distribution, which implies that the random advection is playing an important role. Then, we focus our attention to the effect of random advection and introduce a random scalar advection model on lattice. Surprisingly the scalar also follows a distribution which is almost identical to the former NGPDFs. A mean-field version of the scalar model is then solved analytically showing a clear evidence of NGPDF. Let us first introduce our lattice vortex model[@Tag] which is equivalent to the following set of vorticity equations of an incompressible fluid in the continuum limit $$\begin{aligned} \frac{\partial \w}{\partial t} + (\v \cdot \nabla ) \w &=& (\w \cdot \nabla)\v + \nu \Delta \w, \label{vor}\\ \rot \v &=& \w, \label{rot}\\ \div \v &=& 0, \label{div}\\ \nonumber\end{aligned}$$ where $\v, \w$ and $\nu$ are the velocity, vorticity and viscosity, respectively. 
We assign vorticities on a simple cubic lattice. Every bond has only one vorticity component along its direction. This means that an $x$-bond has only the $x$ component of vorticity, i.e., a bond can be viewed as a vortex tube. For a given configuration of vorticities the velocity field is calculated by using the Biot-Savart law (equivalent to eqs. (\[rot\]) and (\[div\])). The fluid velocity of a bond at $\r,\v(\r)$, is estimated at the mid-point of the bond. Dynamics is defined so as to satisfy the vorticity equation (\[vor\]). Due to the advection term $(\v \cdot \nabla) \w$ in eq. (\[vor\]) the vorticity flows with the fluid, as expressed by Kelvin's theorem[@Landau], and the vorticity $\omega_z$ at a $z$-bond is transported to its six neighboring $z$-bonds. By this effect the value of $\omega_z$ at $\r \pm \hi$ is increased by $\pm J_{iz} \Delta t / 2$, where $\hi$ is a unit vector directing either $x$, $y$ or $z$ direction, and $J_{ij}$ is the flux of vorticity, $\tvi \wj$. The coefficient $1/2$ of the vorticity flux and the signs $\pm$ are introduced to keep the spatial symmetry. The first term on the right hand side of eq.(\[vor\]) is the vortex stretching term, so we have to modify the vorticities around the advectively transported bonds so that the modified vorticities make loops as shown in Fig.\[dyna\]. In this procedure we add $\mp J_{iz} \Delta t/2$ to the bonds at $\r \pm \hi/2 \pm \hz/2$, namely, $x$ and $y$ components appear due to the elongation of the vortex tube. The diffusion term in eq. (\[vor\]) is discretized by the usual finite difference method. It has been shown that this dynamics is equivalent to eq.(\[vor\]) in the continuum limit and numerical simulation of this vortex tube dynamics on a periodic $24 \times 24 \times 24$ lattice has reproduced many basic properties of fluid turbulence[@Tag]. As for the probability distributions we obtained the following results: 1. The velocity components (for example $v_x$) follow a nearly Gaussian distribution. 2. The distributions of differentiated quantities, such as $\partial v_x /\partial x, \partial v_x / \partial y $ and $\omega_x$, are non-Gaussian and close to exponential (see Fig.\[wx\]). 3. By removing lower wave number components, the distribution of $v_x$ also becomes closer to an exponential, as first found by She et al[@She88]. We also observe a kind of conditional distribution for velocities on bonds whose vorticities are in a fixed range. The distributions are nearly identical both for large $\omega$ and for small $\omega$, indicating that the velocity on a bond is nearly independent of its vorticity strength. We now modify our model to seek the origin of the NGPDF. The modification is to randomize the velocity field at each time step keeping its distribution. Namely, we do not use the Biot-Savart law, but the vorticities are transported and elongated by random advection. The probability distribution of $\omega$ after several hundred time steps is quite similar to that of the original lattice vortex model (see Fig.\[wx\]). This result clearly shows that the appearance of NGPDF is independent of the details of the velocity field. As suggested by Sinai and Yakhot[@Sinai; @Yakhot], NGPDF may be caused by random advection, and if so, we can expect a large universality class including another exponential-like distribution in thermal convection flow[@Siggia; @Yanagita]. In order to see the effect of random advection more clearly we introduce a scalar model on lattice. 
The model is governed by the following stochastic equation $$\omega (r_0,t + \Delta t) = \omega (r_0,t) - v(r_0,t) \omega (r_0,t) \Delta t + \sum_{r} P(r_0,r,t) v(r,t) \omega (r,t) \Delta t, \label{eq:adv}$$ where $v(r,t)$ and $P(r_0,r,t)$ are independent random variables. $v(r,t)$ takes non-negative values, and $P(r_0,r,t)=1$ when a flow from site $r$ to $r_0$ exists at time $t$, otherwise $P(r_0,r,t)=0$ and it is normalized as $\sum_r P(r_0,r,t)=1$. The one-dimensional version of eq.(\[eq:adv\]) is defined by the special case that either $P(r_0,r_0-1,t)$ or $P(r_0,r_0+1,t)$ is equal to 1 with probability $1/2$. We perform the simulation on a 1-dimensional lattice of size $10000$ with the periodic boundary condition. The random number $v(r,t)$ is in the range of $[0,0.5]$ and $\Delta t =0.5$. For a given random initial condition of ${\omega (r,0)}$ we observe the distribution of $\omega(r,t)$ at sufficiently large $t$. The result is also plotted in Fig.\[wx\]. The distribution is far from Gaussian and is very close to those of former cases. Now it seems obvious that the origin of NGPDF is in the simple random advection transport
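The one-dimensional version of eq. (\[eq:adv\]) is only a few lines of code. The following numpy transcription is a sketch: the lattice size, $v\in[0,0.5]$ and $\Delta t=0.5$ follow the text, while the Gaussian initial field, the random seed, and the printed histogram are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
L, dt, steps = 10_000, 0.5, 500
omega = rng.normal(size=L)                    # random initial condition

for _ in range(steps):
    v = rng.uniform(0.0, 0.5, size=L)         # v(r,t), non-negative
    flux = v * omega * dt                     # amount leaving each site
    s = rng.integers(0, 2, size=L) * 2 - 1    # each site receives from r0+s, s = +-1
    gain = np.roll(flux, -1) * (s == 1) + np.roll(flux, 1) * (s == -1)
    omega += gain - flux                      # update rule of eq. (eq:adv), periodic chain

# the tails of the late-time histogram are far from Gaussian (close to exponential)
hist, edges = np.histogram(omega / omega.std(), bins=13, density=True)
for c, h in zip(0.5 * (edges[1:] + edges[:-1]), hist):
    print(f"{c:+6.2f}  {h:.3e}")
```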
--- abstract: | The high-frequency Raman-active phonon modes of metallic single-walled carbon nanotubes (SWNTs) are thought to be characterized by Kohn anomalies (KAs) resulting from the combination of SWNTs intrinsic one-dimensional nature and a significant electron-phonon coupling (EPC). KAs are expected to be modified by the doping-induced tuning of the Fermi energy level $\epsilon_F$, obtained through the intercalation of SWNTs with alkali atoms or by the application of a gate potential. We present a Density-Functional Theory (DFT) study of the phonon properties of a (9,9) metallic SWNT as a function of electronic doping. For such study, we use, as in standard DFT calculations of vibrational properties, the Born-Oppenheimer (BO) approximation. We also develop an analytical model capable of reproducing and interpreting our DFT results. Both DFT calculations and this model predict, for increasing doping levels, a series of EPC-induced KAs in the vibrational mode parallel to the tube axis at the $\mathbf\Gamma$ point of the Brillouin zone, usually indicated in Raman spectroscopy as the $G^-$ peak. Such KAs would arise each time a new conduction band is populated. However, we show that they are an artifact of the BO approximation. The inclusion of non-adiabatic (NA) effects dramatically affects the results, predicting KAs at $\mathbf\Gamma$ only when $\epsilon_F$ is close to a band crossing $E_{X}$. For each band crossing a double KA occurs for $\epsilon_F=E_{X}\pm \hbar\omega/2$, where $\hbar\omega$ is the phonon energy. In particular, for a 1.2 $nm$ metallic nanotube, we predict a KA to occur in the so-called $G^-$ peak at a doping level of about $N_{el}/C=\pm 0.0015$ atom ($\epsilon_F\approx \pm 0.1 ~eV$) and, possibly, close to the saturation doping level ($N_{el}$/$C$$\sim$ 0.125), where an interlayer band crosses the $\pi^*$ nanotube bands. Furthermore, we predict that the Raman linewidth of the $G^-$ peak significantly decreases for $|\epsilon_F| \geq \hbar\omega/2$. Thus our results provide a tool to determine experimentally the doping level from the value of the KA-induced frequency shift and from the linewidth of the $G^-$ peak. Finally, we predict KAs to occur in phonons with finite momentum **q** not only in proximity of a band crossing, but also each time a new band is populated. Such KAs should be observable in the double-resonant Raman peaks, such as the defect-activated $D$ and peak, and the second-order peaks $2D$ and $2G$. author: - Nicolas Caudal - 'A. Marco Saitta' - Michele Lazzeri - Francesco Mauri title: 'Kohn anomalies and non-adiabaticity in doped carbon nanotubes' --- Introduction ============ Since their discovery in 1991 [@Iijima91], carbon nanotubes have raised an enormous interest both from the academic and the technological points of view. They exhibit in fact a variety of exciting features: their quasi-one-dimensional nature, due to a diameter of 1-2 $nm$ and a length of up to several micrometers, makes them sharp probes for scanning tunneling microscopes and an excellent model for one-dimensional physics. Their mechanical and tensile strength make them of great interest in composite materials, and, most significantly of all, they have very unusual and extremely promising electronic properties, displaying metallic or semiconducting behavior according to their structure and helicity [@Saito; @Reich]. 
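For orientation on the structure dependence just mentioned, the standard tight-binding zone-folding criterion for whether a $(n,m)$ tube is metallic fits in a few lines. The example chiralities below are illustrative; curvature-induced minigaps, which slightly gap non-armchair "metallic" tubes, are neglected.

```python
def is_metallic(n, m):
    """Zone-folding rule: a (n,m) nanotube has bands crossing the Fermi
    level iff (n - m) is a multiple of 3 (armchair tubes n = m always do)."""
    return (n - m) % 3 == 0

for chirality in [(9, 9), (12, 0), (10, 0), (10, 5), (6, 5)]:
    print(chirality, "metallic" if is_metallic(*chirality) else "semiconducting")
```

By this rule the (9,9) armchair tube studied in this paper is metallic.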
The electronic properties of SWNTs, already particularly interesting from the technological point of view, promise to be the future of nano-electronics due to their *tunability*, achieved by doping the nanotubes through intercalation with alkali atoms [@Ye03; @Bendiab01; @Bendiab01b; @Bendiab01c; @BendiabThesis; @Furtado05; @Meunier02; @Liu03; @Rauf04; @Bantignies05] or application of a gate potential [@Corio04; @Cronin04; @Rafailov05; @Wang06]. However, there are still a number of experimental challenges to be solved in order to fully develop SWNT-based nano-electronic technology, in particular the low-cost industrial-scale synthesis of nanotubes of good chemical purity, crystalline quality and given helicity. An experimental tool largely used to characterize SWNTs is Raman spectroscopy [@Bendiab01; @Bendiab01b; @Bendiab01c; @BendiabThesis; @Ye03; @Furtado05; @Maultzsch02; @Maultzsch03; @Cronin04; @Rafailov05; @Maultzsch05; @Son06; @Wang06; @Jorio02; @Jorio05]. Typical Raman spectra of carbon nanotubes display a peak around 150-300 $cm^{-1}$, due to the radial breathing mode (RBM), which has been recently used as a tool to infer the size and chirality of the nanotubes [@Maultzsch05; @Jorio05]. Other important features of SWNT Raman spectra include a peak around 1350 $cm^{-1}$, activated by defects and impurities, and known in the literature as $D$ peak, and a large structure around 1570 $cm^{-1}$, due to modes tangential to the nanotube and known as $G$ peak. This last feature is thought to have two components, usually referred to as $G^+$ and $G^-$, originating from the $E_{2g}$ in-plane modes of graphite. In refs. [@Lazzeri06; @Piscanec06; @Popov06] it has been shown that in metallic SWNTs the $G^+$ component corresponds to the tangential vibrational mode *perpendicular* to the nanotube axial direction, while the $G^-$ component corresponds to the tangential vibrational mode *parallel* to the nanotube axial direction. In the following, we will refer to the former as “the nanotube TO tangential mode”, and to the latter as “the nanotube LO axial mode”. Some Raman studies [@Bendiab01; @Bendiab01b; @Bendiab01c; @BendiabThesis] show that the frequency of the $G$ peak increases up to 1600 $cm^{-1}$ at low doping levels, and then suddenly drops to about 1550 $cm^{-1}$ at the saturation threshold of alkali intercalation, estimated around a number of electrons per carbon atom $N_{el}/C=0.12$ (MC$_{\rm 8}$). Other experimental [@Maultzsch02; @Rafailov05; @Furtado05] and theoretical [@Dubay02; @Dubay03; @Akdim05] works on the effect of doping on SWNTs report a similar $G$-peak softening or even a Luttinger-Fermi liquid transition [@Rauf04]. Since the LO axial and TO tangential modes are particularly sensitive to the electronic structure of SWNTs around the Fermi energy  [@Dubay02; @bohnen04; @connetable05; @Lazzeri06; @Popov06] $\epsilon_F$, and $\epsilon_F$ directly depends on the charge doping level, a profound understanding of the interplay between the vibrational and the electronic properties of nanotubes looks to be crucial for technological development. In this work we report our theoretical study of the electronic and vibrational properties of doped SWNTs, based on DFT first-principles calculations and analytical results. 
We will show in the following that: *i)* the vibrational properties of SWNTs can be obtained from the so-called *electronic zone-folding* of a graphene sheet; *ii)* their behavior as a function of the charge doping level can be determined by the knowledge of electron-phonon coupling (EPC) in graphene; *iii)* ordinary quantum-mechanics calculations relying on the adiabatic Born-Oppenheimer approximation fail when applied to SWNTs, where non-adiabatic effects are enhanced by their intrinsic one-dimensional nature. Our paper is organized as follows: in section \[theo\] we will describe our theoretical framework, and in particular how the Raman-active modes of a nanotube can be accurately obtained through an appropriate electronic sampling of the graphene Brillouin Zone (BZ). We will then show in section \[model\] that the DFT results can be almost perfectly reproduced by an integral model, that uses the graphite EPC and the slope of the electronic bands around the (undoped) Fermi level $\epsilon^0_F$ (usually referred to as the $\pi$ and $\pi^*$ bands) as external inputs, and that becomes analytical in the limit of vanishing temperature. The results on the DFT and the model-derived LO axial and TO tangential modes of a SWNT will be presented, showing that Kohn anomalies (KA) would occur, within the adiabatic approximation, each time a new electronic band is populated by the electrons at $\epsilon_F$. In section \[nonadia\] we show that when the Born-Oppenheimer approximation is lifted the outcome is dramatically different, and that a drop in the frequency of the
--- abstract: | Photoproduction of $ K\Sigma^{*}(1385)$ on the nucleon is investigated within the Regge framework and the reaction mechanism is analyzed based on the data existing in the channels $\gamma p\to K^+\Sigma^{*0}$ and $\gamma n\to K^+\Sigma^{*-}$. The Reggeization of the $t$-channel meson exchanges $K(494)+K^*(892)+K_2^*(1430)$ is employed to construct the photoproduction amplitude. The Rarita-Schwinger formalism is applied for the spin-3/2$^+$ strangeness-baryon $\Sigma^*$ with a special gauge prescription utilized for the convergence of these reaction processes. Within a set of coupling constants determined from the symmetry argument for the $K$ and $K^*$ and from the duality and vector dominance for the $K_2^*$, the data of both processes are reproduced to a good degree. The production mechanism of these processes is characterized by the dominance of the contact term plus the $K$ exchange, with the $K_2^*$, rather than the $K^*$, playing the next most important role. author: - 'Byung-Geel Yu' - 'Kook-Jin Kong' title: ' Photoproduction of $\gamma N\to K^+ \Sigma^*(1385)$ in the Reggeized framework ' --- Introduction ============ Kaon photoproduction off the nucleon target has been a useful tool to investigate strangeness production with data on a clean background from the electromagnetic probe. The experimental studies of the reactions involving $\Lambda(1116)$ and $\Sigma(1190)$ hyperons, or their resonances, in the final state have been conducted extensively up to recent times at electron/photon accelerator facilities for hadron physics [@mc; @glander; @bradford; @moriya]. Among recent experimental achievements on these reactions, the measurements of reaction cross sections for the $\gamma p\to K^+\Sigma^{*0}(1385)$ process from the CLAS [@moriya; @mattione] and LEPS [@niiyama], and the $\gamma n\to K^+\Sigma^{*-}(1385)$ process from the LEPS [@hicks] Collaboration draw our attention. One reason for our interest in these reactions is the opportunity they offer to study baryon resonances whose existence has been predicted by the quark model but which are still missing or remain in an indefinite state. On the other hand, these reactions have their own issues of how to deal with the spin-3/2 baryon resonance in describing the reaction, because the propagation of the spin-3/2 resonance would give rise to a divergence as the reaction energy increases [@bgyu-pi-delta; @bgyu-rho-delta]. Theoretical investigation of baryon resonances in the $\gamma p\to K^+\Sigma^{*0}$ process was carried out in Ref. [@ysoh], where a set of $\Delta$ and $N^*$ resonances was considered in the effective Lagrangian approach. In this pioneering work the role of the baryon resonances was analyzed up to the spin-5/2 states in the $s$- and $u$-channel contributions to the reaction process. Meanwhile, as an extension to the high-energy realm, a Regge plus resonance approach was applied to the $\gamma p\to K^+\Sigma^{*0}$ and $\gamma n\to K^+\Sigma^{*-}$ processes in Refs. [@junhe; @wang], with the empirical data updated by the recent experiments. However, in these works, the description of the reactions was complicated by using a hybrid-type propagation which mixed the pure Regge-pole and the Feynman propagator in the $t$-channel, apart from the cutoff functions to suppress the divergence at high energies, as in Ref. [@ysoh]. 
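For orientation, the Regge-pole replacement of the $t$-channel Feynman propagator discussed above can be sketched numerically. The linear kaon trajectory $\alpha_K(t)=\alpha'(t-m_K^2)$ with slope $\alpha'\approx 0.7$ GeV$^{-2}$, the scale $s_0=1$ GeV$^2$, and the rotating-phase signature choice are typical literature values, assumed here for illustration rather than taken from the authors' fitted inputs.

```python
import math, cmath

def alpha_K(t, slope=0.7, mK=0.4937):
    """Linear kaon trajectory alpha(t) = alpha' (t - mK^2), GeV units."""
    return slope * (t - mK ** 2)

def regge_propagator(s, t, s0=1.0, slope=0.7, rotating=True):
    """Regge-pole replacement of 1/(t - mK^2); as t -> mK^2 the factor
    pi*alpha' / sin(pi*alpha) reproduces the Feynman pole (away from the
    other zeros of sin)."""
    a = alpha_K(t, slope)
    phase = cmath.exp(-1j * math.pi * a) if rotating else 1.0
    return (s / s0) ** a * math.pi * slope / math.sin(math.pi * a) \
           * phase / math.gamma(1.0 + a)

s = 10.0  # GeV^2
for t in (-0.1, -0.5, -1.0):
    print(f"t = {t:5.2f}  Regge: {regge_propagator(s, t):.3f}"
          f"  Feynman: {1.0 / (t - 0.4937**2):+.3f}")
```

The energy dependence $(s/s_0)^{\alpha(t)}$ is what tames the amplitude at forward angles and high energies without ad hoc cutoff functions.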
In this paper, we investigate photoproduction of $K\Sigma^*$ in two different isospin channels, $\gamma p \to K^+ \Sigma^{*0}$ and $\gamma n\to K^+\Sigma^{*-}$, where the Reggeization of the $t$-channel meson exchange is exploited for the photoproduction amplitude at forward angles and high energies. Our focus here is to describe these reaction processes up to high energy without fit parameters, rather than to search for baryon resonances, because their roles in these reactions are found to be less important, as discussed in Ref. [@ysoh]. Avoiding such complications as mentioned above, we will utilize the model of the $\gamma N\to \pi^\pm\Delta$ in Ref. [@bgyu-pi-delta] and apply it to the present processes with the coupling constant $f_{KN\Sigma^*}$ determined from SU(3) symmetry. Since the $\Sigma^*$ of $3/2^+$ is the lowest mass hyperon in the baryon decuplet, this will be a valuable test of flavor SU(3) symmetry, with the expectation that the production mechanism of $K\Sigma^*$ is essentially identical to the $\pi\Delta$ case. For the analysis of a process involving the spin-3/2 baryon resonance, in particular, it is worth asking how to describe the process without cutoff functions, because they could sometimes hide pieces of the reaction mechanism that are missing, or malfunctioning, through the adjustment of the cutoff masses. From the previous studies on photoproduction of $\pi\Delta$ [@bgyu-pi-delta] we have learned two important things as to the dynamical features of spin-3/2 baryon photoproduction: one is that the minimal gauge prescription is requisite for convergence of the reaction cross section, and the other is that the tensor meson $a_2(1320)$ plays a significant role in the high-energy region. Therefore, as a natural extension of the model in Ref. [@bgyu-pi-delta] to the strangeness sector, we here consider the $K(494)+K^*(892)+K_2^*(1430)$  exchanges in the $t$-channel to analyze the production mechanism of the $\gamma p\to K^+\Sigma^{*0}$ and $\gamma n\to K^+\Sigma^{*-}$ processes. This paper is organized as follows. In Sec. II, we discuss the construction of the photoproduction amplitude in association with the gauge-invariant $K$ exchange in the $t$-channel. This will include a brief introduction of the minimal gauge, and the new coupling vertex for the tensor meson interaction $K_2^* N \Sigma^*$ which has been missed in previous works. Numerical results for the total and differential cross sections as well as the beam polarization asymmetry are presented for both reactions in Sec. III. We give a summary and discussion in Sec. IV. The SU(3) coefficients for the octet and decuplet baryons coupling to octet mesons are given in the Appendix. formalism ========= For a description of the reaction, $$\begin{aligned} &&\gamma(k)+N(p)\to K(q)+\Sigma^{*}(p'),\end{aligned}$$ with the momenta of the initial photon, nucleon and the final $K$ and $\Sigma^*$ denoted by $k$, $p$, $q$, and $p'$, respectively, we first construct the photoproduction amplitude which is gauge invariant with respect to the coupling of the photon to the particles in the reaction process. Then, the Reggeization of the $t$-channel meson-pole follows as has been done before. ![Feynman diagrams for $\gamma N \to K^+ \Sigma^{*}$. The exchange of $K$ in the $t$-channel $(a)$, the proton-pole in the $s$-channel $(b)$, the $\Sigma^*$-pole in the $u$-channel $(c)$, and the contact term $(d)$ are the basic ingredients for gauge invariance of the reaction. 
The $K^*$ and $K_2^*$ exchanged in the $t$-channel $(a)$ are themselves gauge-invariant.[]{data-label="fig1"}](fig1.eps){width="0.6\hsize"} Photoproduction amplitude ------------------------- Viewed from the $t$-channel meson exchange, the Born amplitudes in the four different isospin channels read $$\begin{aligned} &&{M}_{\gamma p\to K^{+}\Sigma^{*0}}={M}_K+{M}_{K^*}+{M}_{K_2^*},\label{p+}\\ % &&{M}_{\gamma n\to K^{+}\Sigma^{*-}}=\sqrt{2}\left({M}_K+{M}_{K^*}+{M}_{K_2^*}\right),\label{n+}\\ % &&{M}_{\gamma p\to K^{0}\Sigma^{*+}}=-\sqrt{2}\left({ M}_{K^*}+{M}_{K_2^*}\right),\label{p0}\\ % &&{M}_{\gamma n\to K^{0}\Sigma^{*0}}=-\left({M}_{K^*}+{M}_{K_2^*}\right),\label{n0}\end{aligned}$$ where the $\sqrt{2}$ factors and signs result from our convention of the meson-baryon-decuplet coupling of the $10-8-8$ type presented in the Appendix. Hereafter, we call the reaction process in Eq. (\[p+\]) the $\gamma p$ process, and the process in Eq. (\[n+\]) the $\gamma n$ process, respectively. On the experimental side, the total and differential cross sections were recently measured for the charged state Eq. (\[p+\]) at the CLAS [@moriya] and LEPS [@niiyama] Collaborations, and the differential
--- abstract: 'We report in this paper proof that pulse shape analysis can be used in some bolometers to identify the nature of the interacting particle. Indeed, while detailed analyses of the signal time development in purely thermal detectors have so far not produced interesting results, similar analyses on bolometers built with scintillating crystals seem to show that it is possible to distinguish between an electron or $\gamma$-ray and an $\alpha$ particle interaction. This information can be used to eliminate background events from the recorded data in many rare process studies, especially Neutrinoless Double Beta decay searches. Results of pulse shape analysis of signals from a number of bolometers with absorbers of different composition (CaMoO$_4$, ZnMoO$_4$, MgMoO$_4$ and ZnSe) are presented and the pulse shape discrimination capability of such detectors is discussed.' address: - 'INFN - Milano Bicocca, Italy' - 'Dipartimento di Fisica - Università di Milano Bicocca, Italy' author: - 'C.Arnaboldi' - 'C.Brofferio' - 'O.Cremonesi' - 'L.Gironi' - 'M.Pavan' - 'G.Pessina' - 'S.Pirro' - 'E.Previtali' title: A novel technique of particle identification with bolometric detectors --- Bolometers, Scintillators, Pulse shape discrimination (PSD) 23.40B, 95.35.+d, 07.57.K, 29.40M, 66.70.-f Rare event searches {#RES} =================== Rare event studies, such as the search for Neutrinoless Double Beta decay (DBD) [@BBreview] or the identification of Weakly Interacting Massive Particle (WIMP) interactions with ordinary matter [@DMreview], are of extreme interest in astroparticle physics, since they would imply new physics beyond the Standard Model. In both cases, as in all rare event studies, spurious events are a limiting factor to the reachable sensitivity of the experiment. Unfortunately natural radioactive background is often present in the detector itself or in the materials surrounding it, no matter how much one tries to reduce it with shieldings, selection of materials and complicated purification techniques. In order to handle the residual unavoidable background, all the envisaged approaches require both a good energy resolution (which always helps in the comprehension of the different structures of an energy spectrum) and the capability to identify the nature of the projectile that interacted with the detector. Indeed, the searched-for event always has well defined signatures helping to distinguish it from background, for instance two electrons with a fixed sum energy in the case of the DBD. Bolometers [@bolometers] are based on the detection of phonons produced after an energy release by an interacting particle and can have both an excellent energy resolution and an extremely low energy threshold with respect to conventional detectors. They can be fabricated from a wide variety of materials, provided they have a low enough heat capacity at low temperatures, which is the only truly unavoidable requirement to build a working bolometer. This flexibility is a priceless feature for experiments that aim at detectors containing particular atomic or nuclear species to optimize the detection efficiency. If other excitations (such as ionization charge carriers or scintillation photons) are collected in addition to phonons, bolometers have already been shown to be able to discriminate nuclear recoils from electron recoils, or $\alpha$ particles from $\beta$ particles and $\gamma$-rays. 
In this paper we will report on the possibility to obtain similar results just by pulse shape analysis, without the requirement of a double readout for phonons and ionization or scintillation light. Bolometric Technique and Scintillating Bolometers {#BolTec} ================================================= Bolometers can be essentially sketched as a two-component object: an energy absorber in which the energy deposited by a particle is converted into phonons, and a sensor that converts thermal excitations into a readable signal. The absorber must be coupled to a constant temperature bath by means of a weak thermal conductance. Denoting by C the heat capacity of the bolometer, the temperature variation induced by an energy release E in the absorber can be written as $$\label{eq:temperature} \Delta T = \frac{E}{C}$$ The accumulated heat then flows to the heat sink through the thermal link and the absorber returns to the base temperature with a time constant $\tau$ = C/G, where G is the thermal conductance of the link: $$\label{eq:signal} \Delta T(t) = \frac{E}{C} e^{ - \frac{t}{\tau}}$$ In order to obtain a measurable temperature rise the heat capacity of the absorber must be very small: this is the reason why bolometers need to be operated at cryogenic temperatures (of the order of 10-100 mK). A real bolometer is somewhat more complicated than the naive description presented above. It is made of different elements and it is therefore represented by more than one heat capacity and heat conductance. As such, the time development of the thermal pulse is characterized by various time constants. In principle, if the bolometer performs as an ideal calorimeter and if the conversion into heat of the energy deposited by the particle is instantaneous (as assumed in equation \[eq:signal\]), then the device is insensitive to the nature of the interacting particle. Although this situation is generally very far from reality, it is however true that the small differences are difficult to detect and the goal has so far been achieved only by relying on more complicated solutions. Among these are scintillating bolometers. The concept of a scintillating bolometer is very simple: a bolometer coupled to a light detector [@CaF2]. The former must consist of a scintillating absorber thermally linked to a phonon sensor while the latter can be any device able to measure the emitted photons. The driving idea of this hybrid detector is to combine the two available pieces of information, the heat and the scintillation light, to distinguish the nature of the interacting particles, exploiting the different scintillation yields of $\beta$/$\gamma$, $\alpha$ and neutrons. Dark Matter as well as DBD searches can benefit from this capability of tagging the different particles, and more generally this technique can be exploited in any research where background suppression or identification is important. Dark matter experiments look for a very rare signal generally hidden in a huge background. The signal is a nuclear recoil with an energy of a few keV (or less) induced by the scattering of a WIMP off a target nucleus. Experiments like CDMS [@CDMS], Edelweiss [@Edelweiss] or CRESST [@CRESST] clearly show that in such an energy region the background is dominated by $\beta$/$\gamma$ interactions. A second source of background is $\alpha$ decays, contributing through energy-degraded $\alpha$'s and nuclear recoils. 
The capability to distinguish a nuclear recoil (candidate for a WIMP interaction) from an $\alpha$ or $\beta$/$\gamma$ event clearly allows one to drastically improve the experimental sensitivity. A similar approach was proposed also for applications in DBD searches [@CaF2]. More recently, such a possibility has been demonstrated to be viable for a number of candidate nuclei [@Pirr06]. In this case the major interest is the identification of $\alpha$ interactions. Indeed the other important source of background, namely $\gamma$-rays, is virtually indistinguishable from the signal. The suggested way to eliminate the problem of the $\gamma$-ray contribution is to study isotopes with a transition energy above 2615 keV. This corresponds in fact to the highest energy $\gamma$-ray line from natural radioactivity and is due to $^{208}$Tl. Above this energy there are only extremely rare high energy $\gamma$'s from $^{214}$Bi (all the DBD active isotopes with Q$_{\beta\beta}>$2615 keV are listed in Tab. \[tab:isotopes\]). Once $\gamma$-rays are no longer a worrisome source of background, what is left, on the side of radioactivity, are $\alpha$ emissions. Indeed $\alpha$ surface contaminations can not only represent the dominant background source for DBD searches based on high transition energy isotopes, but they are already recognized as the most relevant background source in the bolometric experiment CUORICINO [@CUORICINO; @CUOpotential; @ArtChambery] and as a limiting factor for the experiment CUORE [@CUOpotential; @CUORE]. Both experiments search for the DBD of $^{130}$Te, whose transition energy is at 2527 keV, therefore in a region where the $\gamma$ background (mainly due to Compton events produced by 2615 keV photons) can still be important. [ccc]{} Isotope & Q$_{\beta\beta}$ \[MeV\] & natural abundance\ $^{116}$Cd & 2.80 & 7.5 %\ $^{82}$Se & 3.00 & 9.2 %\ $^{100}$Mo & 3.03 & 9.6 %\ $^{96}$Zr & 3.35 & 2.8 %\ $^{150}$Nd & 3.37 & 5.6 %\ $^{48}$Ca & 4.27 & 0.19 %\ \[tab:isotopes\] The $\alpha$ contribution to the background in the DBD region (i.e. at about 3 MeV) is the following. In the natural chains we have various nuclei that decay emitting an $\alpha$ particle with an energy between 4 and 8 MeV; their energy is considerably higher than most DBD Q-values. However, if the radioactive nucleus is located
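Going back to the thermal pulse of eq. (\[eq:signal\]), the kind of shape analysis advocated in this paper can be caricatured in a few lines. The finite rise time, the two decay constants, and the noise level below are invented for illustration; the real $\alpha$ versus $\beta$/$\gamma$ shape differences exploited in the measurements are far subtler.

```python
import numpy as np

def pulse(t, E, C, tau_r, tau_d):
    """Thermal pulse with a finite rise: (E/C)(exp(-t/tau_d) - exp(-t/tau_r));
    eq. (eq:signal) of the text is recovered in the limit tau_r -> 0."""
    return (E / C) * (np.exp(-t / tau_d) - np.exp(-t / tau_r))

def fall_time(trace, t):
    """Crude shape parameter: time for the pulse to fall from 90% to 30%."""
    k = trace.argmax()
    tail = trace[k:]
    t90 = t[k + np.argmax(tail <= 0.9 * tail[0])]
    t30 = t[k + np.argmax(tail <= 0.3 * tail[0])]
    return t30 - t90

rng = np.random.default_rng(0)
t = np.linspace(0.0, 60.0, 2000)                            # ms
for label, tau_d in (("beta/gamma", 7.0), ("alpha", 6.5)):  # assumed decay times
    trace = pulse(t, E=1.0, C=1.0, tau_r=0.5, tau_d=tau_d)
    trace += rng.normal(0.0, 1e-3, t.size)                  # readout noise
    print(f"{label:10s}  90%->30% fall time: {fall_time(trace, t):.2f} ms")
```

Even this crude estimator separates the two assumed pulse classes by roughly half a millisecond, well above the noise-induced jitter, which is the essence of a pulse-shape discrimination parameter.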
[Nordita, KTH Royal Institute of Technology and Stockholm University,\ Roslagstullsbacken 23, SE-106 91 Stockholm, Sweden]{}\ [Email: yenchin.ong@nordita.org]{}\ Christodoulou and Rovelli have shown that black holes have large interiors that grow asymptotically linearly in advanced time, and speculated that this may be relevant to the information loss paradox. We show that there is no simple relation between the interior volume of an arbitrary black hole and its horizon area. That is, the volume enclosed is not necessarily a monotonically increasing function of the surface area. {#section .unnumbered} An asymptotically flat Schwarzschild black hole has a spherical event horizon. The usual metric of this geometry in 4-dimensions, in the units $G=c=1$, is $$g[\text{Sch}]=-\left(1-\frac{2M}{r}\right)dt^2 + \left(1-\frac{2M}{r}\right)^{-1} dr^2 + r^2\left(d\theta^2 + \sin^2\theta d\phi^2 \right),$$ where $M$ is the ADM mass, and $r$ is the areal radius. That is, the area of the event horizon $r_h$ is the area of the 2-sphere: $4\pi r_h^2$. Unlike its surface area, the concept of “the volume” of a black hole is a tricky one. The reason is that volume depends on the choice of 3-dimensional spacelike hypersurface. As such, no unique volume can be prescribed to a black hole. Furthermore, the interior of a static black hole is nevertheless dynamical, so one should definitely *not* think of a black hole as a black box that bounds a certain amount of volume that can be easily estimated from knowing the size of its area. This is well-known: a maximally extended Schwarzschild \[Kruskal-Szekeres\] geometry has an infinitely large asymptotically flat region on the “other side”, connected via the Einstein-Rosen bridge. Similarly, one could attach a closed FLRW universe to the interior of a black hole via the Einstein-Rosen bridge, resulting in Wheeler's so-called “bag-of-gold” geometry [@Wheeler]. Even non-black hole configurations can have interiors arbitrarily larger than their areas might suggest [@monsters]. What about black holes that were formed from gravitational collapse and have no second asymptotic region? One important motivation to study the interiors of *generic* black holes is of course the information loss paradox. As matter falls into a black hole and the black hole gradually Hawking evaporates away, it seems that when the black hole disappears, the information of the in-fallen matter will be lost forever, thereby threatening the unitarity of quantum mechanics. One proposal to resolve this paradox is the idea that black holes do not completely evaporate away, but instead settle down to a \[meta-\]stable remnant. \[See [@1412.8366] for a recent review of black hole remnants.\] An obvious shortcoming of such a proposal is that, as the black hole shrinks in size, it would seem that room is running out for storing a large amount of information. This objection presumably does not arise if the interior of a rather small black hole can remain large[^1]. However, since “bag-of-gold” geometry is unlikely to be generic, and Kruskal-Szekeres geometry only holds for eternal black holes, we have to look elsewhere to resolve the information loss paradox, which is most important in the case of black holes formed from gravitational collapse. Fortunately, it turns out that even such black holes have large interiors. 
\[Whether or not such interior volumes remain sufficiently large even as the black hole shrinks to Planck scale is, of course, a relevant and important question.\] In a recent paper, Christodoulou and Rovelli [@1411.2854] pointed out that, while there is no unique volume that can be prescribed to the interior of a black hole, one could look at the volume of the *largest* spacelike spherically-symmetric surface bounded by the event horizon of the black hole. Such a volume is a geometrical property that is coordinate-independent. Christodoulou and Rovelli \[hereinafter, CR\] showed that most of the volume contribution comes from a region which is not causally connected with matter that has fallen far into the black hole \[see the work of Bengtsson and Jakobsson [@1502.01907] for an explicit and nice illustration of this fact, as well as their generalization of the work of CR to the case of asymptotically flat Kerr black holes.\] We will henceforth refer to such a volume measure as the “CR-volume”. For an asymptotically flat Schwarzschild black hole in 4-dimensions, most of the CR-volume contribution is given by the integral [@1411.2854; @1502.01907] $$\label{int1} \text{Vol.} \sim \int^v \int_{S^2} \max\left[r^2 \sqrt{\frac{2M}{r}-1}\right] \sin\theta ~d\theta d\phi dv,$$ where $v$ is the advanced time defined by $$v := t + \int \frac{dr}{f(r)} = t + r + 2M \ln |\frac{r}{2M}-1|; ~~f(r):=1-\frac{2M}{r}.$$ Following [@1502.01907], we omitted the lower limit in the integral with respect to $v$, which only contributes a negligible finite value, whereas the integral will be dominated by its upper limit $v$. The coefficient of $v$ can be maximized by maximizing the function $$\mathcal{F}(r) := r^2 \sqrt{\frac{2M}{r}-1}.$$ Elementary calculus shows that $r=3M/2$ maximizes $\mathcal{F}(r)$. Indeed, most of the volume comes from the contribution of this constant $r$ slice. \[Note that this slice is rather “close” to the event horizon $r=2M$.\] This leads to $$\text{Vol.} \sim 3\sqrt{3} \pi M^2 v.$$ That is, the CR-volume grows asymptotically linearly in $v$. In other words, even though a static black hole looks the same to the exterior observer no matter how long one waits \[this is a classical statement without taking into account Hawking radiation\], its interior gets larger with time \[c.f. another proposal of black hole volume in [@0508108]\]. The estimate given by CR is that the supermassive black hole at the center of our galaxy, Sagittarius A$^*$, contains sufficient space to fit a *million* solar systems, even though its areal radius is only a factor of 10 or so larger than the Earth-Moon distance. Taking into account the rotation of the black hole does not change this result by much, even though the rotation rate of Sagittarius A$^*$ is about 90% of the extremal limit. In other words, the CR-volume for asymptotically flat black holes seems to be robust against rotational effects, as long as the rotation stays below $\sim 99\%$ of the extremal limit [@1502.01907]. In view of the potentially important role the CR-volume may play in the resolution of the information loss paradox \[as suggested by CR [@1411.2854]\], the properties of the CR-volume of other black hole solutions should be investigated. For example, one may wish to consider black holes with other asymptotic geometries. One especially interesting arena to explore is the anti-de Sitter \[AdS\] space: in the presence of a negative cosmological constant, a large class of black hole solutions are allowed. 
Unlike their asymptotically flat cousins, these black holes can have non-trivial horizon topologies, and hence are often referred to as “topological black holes” [@9705004; @9705012; @9709039; @9808032]. The metric tensor of a static topological black hole in $(n+2)$-dimensions \[$n\geqslant 2$\] is given by: $$\label{metric} g = -f(r)\,dt^2 + f(r)^{-1}\,dr^2 + r^2\, d\Omega^2[X^k_n], \qquad f(r) = \frac{r^2}{L^2} + k - \frac{\mu}{r^{n-1}},$$ where $\mu$ is a mass parameter proportional to the ADM mass, $L$ is the AdS curvature length scale, and $d\Omega^2[X^k_n]$ is a metric of constant curvature $k=\left\{-1,0,+1\right\}$ on the $n$-dimensional Riemannian manifold $X^k_n$, with \[dimensionless\] area $\Gamma[X^k_n]$. For example, if $X^k_n$ is a 2-sphere, then $\Gamma[X^k_n]=\Gamma[S^2] = 4\pi$. Note that in general, the geometry is only asymptotically *locally* AdS. In the $k=0$ case, the metric Eq.(\[metric\]) reduces to the simpler form: $$\label{flatmetric} g = -\left(\frac{r^2}{L^2} - \frac{\mu}{r^{n-1}}\right) dt^2 + \left(\frac{r^2}{L^2} - \frac{\mu}{r^{n-1}}\right)^{-1} dr^2 + r^2 \sum_{i=1}^{n} d\zeta_i^2,$$ where the $\zeta_i$’s are dimensionless coordinates on a flat space. If the black hole has a compact horizon with toral topology \[
{ "pile_set_name": "ArXiv" }
null
[FERMILAB–Pub–94/XXX-A\ March 1994]{} [**Quantum Cosmology and Higher-Order Lagrangian Theories**]{}\ Henk van Elst$^{1a}$, James E. Lidsey$^{2b}$ & Reza Tavakol$^{1c}$\ $^1$[*School of Mathematical Sciences\ Queen Mary & Westfield College\ Mile End Road\ London E1 4NS, UK\ *]{} $^2$[*NASA/Fermilab Astrophysics Center\ Fermi National Accelerator Laboratory\ Batavia IL 60510, USA*]{} > In this paper the quantum cosmological consequences of introducing a term cubic in the Ricci curvature scalar $R$ into the Einstein–Hilbert action are investigated. It is argued that this term represents a more generic perturbation to the action than the quadratic correction usually considered. A qualitative argument suggests that there exists a region of parameter space in which neither the tunneling nor the no-boundary boundary conditions predict an epoch of inflation that can solve the horizon and flatness problems of the big bang model. This is in contrast to the $R^2$–theory. > > e-mail: $^a$hve@maths.qmw.ac.uk; $^b$jim@fnas09.fnal.gov; $^c$reza@maths.qmw.ac.uk > > PACS number(s): 98.80.Hw; 04.50.+h; 04.60.Kz; 98.80.Cq Introduction ============ An important motivation for the development of the quantum cosmology programme has been to explain the initial conditions for the emergence of the Universe as a classical outcome. In principle one must find the form of the wave function $\Psi$ satisfying the Wheeler–DeWitt equation [@wdw]. This equation describes the annihilation of the wave function by the Hamiltonian operator and, since it admits an infinite number of solutions, one must also choose the boundary conditions in order to specify the wave function uniquely. Such boundary conditions must be viewed as an additional physical law since, by definition, there is nothing external to the Universe. In practice one assumes, at least implicitly, that a finite subset of all possible boundary conditions is favoured by cosmological observations, in the sense that the wave functions corresponding to such boundary conditions predict outcomes which are compatible with observations. For example, if one believes in the inflationary scenario, the requirement that sufficient inflation occurred, in order to solve the assorted problems of the standard big bang model, can, in principle, restrict the number of plausible boundary conditions. Among the set of all possible choices, the Vilenkin, or [*tunneling from nothing*]{}, boundary condition [@v1; @v2] and the Hartle–Hawking, or [*no-boundary*]{}, boundary condition [@HH83] have been the subject of intense discussion. Given the non-uniqueness of such conditions, the question arises as to the consequences of choosing different boundary conditions for the resulting wave function of the Universe and its corresponding probability measures. An important study in this regard is due to Vilenkin [@v2], who considered the effects of the above boundary conditions within the context of Einstein gravity minimally coupled to a self-interacting scalar field. He restricted his analysis to the minisuperspace corresponding to the spatially closed, isotropic and homogeneous Friedmann–Lemaître–Robertson–Walker (FLRW) Universe and showed that the tunneling wave function predicts initial states that are likely to lead to sufficient inflation, whereas the Hartle–Hawking wave function does not. It is sometimes argued that this result indicates that observations favour the tunneling as opposed to the no-boundary boundary condition.
However, the precise relation between the boundary conditions and the observations is determined by the specific models employed and since such models always involve idealisations in the form of a set of simplifying assumptions, it follows that the above conclusion can not be made [*a priori*]{}. Indeed it only makes sense in general if the correspondence between the observations and the boundary conditions is robust under physically motivated perturbations to the underlying quantum cosmological model. Consequently, it is important to consider the ‘stability’ of the above conclusions. In particular, are the conclusions robust under higher-order perturbations to the Einstein–Hilbert action? Quadratic and higher-order terms in the Riemann curvature tensor and its traces appear in the low-energy limit of superstrings [@canetal85] and they also arise when the usual perturbation expansion is applied to General Relativity [@barchr83; @anttom86]. Such terms diverge as the initial singularity is approached, but can in principle be eliminated if higher-order corrections are included in the action. In four-dimensional space-times the Hirzebrucht signature and Euler number imply that the most general, four-dimensional gravitational action to quadratic order is S = d\^4x    , where $R$ is the Ricci curvature scalar of the space-time with metric tensor $g_{\mu\nu}$, $g={\rm det}\,g_{\mu\nu}$, $C_{\alpha\beta\gamma\delta}$ is the Weyl conformal curvature tensor, $\kappa^{2}$ is the gravitational coupling constant and $\epsilon_1$ and $\gamma$ are coupling constants of dimension $(\mbox{length})^{2}$. The action simplifies further for spatially homogeneous and isotropic four-geometries, since the conformal flatness of these space-times implies that the Weyl tensor vanishes. The effects of including quadratic terms have been investigated in Refs. [@bg93; @hawlut84; @mijetal89]. In particular Mijić et al [@mijetal89] studied the effects of such perturbations on Vilenkin’s result [@v2] and found that those results remain robust in the sense that the inflationary scenario still favours the tunneling boundary condition in the presence of quadratic terms in the action. On the other hand Biswas and Guha have recently arrived at the opposite conclusion [@bg93]. The renormalisation of higher loop contributions introduces terms into the effective action that are higher than quadratic order. Consequently it is important to also study the effects of these additional terms. In this paper we shall investigate what happens to the wave function if an $R^3$-contribution is present. By employing the conformal equivalence of higher-order gravity theories with Einstein gravity coupled to matter fields, we argue that this term represents a more general perturbation to the Einstein–Hilbert action than the $R^2$-correction, at least within the context of four-dimensional FLRW space-times. We then consider the conditional probability that an inflationary epoch of sufficient duration can occur. We estimate how the qualitative behaviour of this quantity changes when higher-order perturbations to the action are included. Our main result is that for the $R^3$–theory there exists a finite region of parameter space in which neither of the boundary conditions discussed above predict an epoch of inflationary expansion that leads to the observed Universe. We use (dimensionless) Planckian units defined by $\hbar = c = G = 1$ throughout and define $\kappa^2 = 8\pi$. 
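Before turning to the details, the conformal-equivalence machinery invoked above can be checked explicitly in the simpler quadratic case. The following SymPy sketch (ours; it uses the standard $D=4$ Einstein-frame formulas quoted in the next section, so normalization conventions may differ slightly from the paper's) recovers the familiar plateau-shaped potential of the $R^2$-theory:

```python
import sympy as sp

R, eps1, kappa, phi = sp.symbols('R epsilon_1 kappa phi', positive=True)

# f(R) = (R + eps1*R^2)/(2*kappa^2); work with 2*kappa^2*f'(R) = 1 + 2*eps1*R
Fp = 1 + 2*eps1*R
f = (R + eps1*R**2)/(2*kappa**2)

# Einstein-frame scalar for D = 4: phi = sqrt(3/2)/kappa * ln(2*kappa^2*f'(R))
R_of_phi = sp.solve(sp.Eq(phi, sp.sqrt(sp.Rational(3, 2))/kappa*sp.log(Fp)), R)[0]

# Potential U = (2*kappa^2*f')^(-2) * (R*f' - f), rewritten in terms of phi
U = (Fp**-2*(R*Fp/(2*kappa**2) - f)).subs(R, R_of_phi)
print(sp.simplify(U))
# -> (1 - exp(-sqrt(6)*kappa*phi/3))**2/(8*epsilon_1*kappa**2):
#    flat at large phi, i.e. the well-known inflationary plateau
```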
Higher-Order Lagrangians as Einstein Gravity plus Matter {#Lagrange} ======================================================== The wave function of the Universe in higher-order Lagrangian theories can be determined in one of two ways. It is well known that theories with a Lagrangian given by a differentiable function of the Ricci curvature scalar are conformally equivalent to Einstein gravity with a matter sector containing a minimally coupled, self-interacting scalar field [@whitt84; @m]. The precise form of the self-interaction is uniquely determined by the higher-derivative metric terms in the field equations. It follows that one can start either from the original action or the conformal action and derive the corresponding Wheeler–DeWitt equation [@hall91]. One takes the related Lagrangian as the defining feature of the theory and then applies the canonical quantisation rules. The advantage of the conformal transformation is that it allows the known results from Einstein gravity to be carried over to the higher-order examples and we shall follow such an approach in this paper. Consider the general, $D$-dimensional, vacuum theory $$S = \int d^D x \sqrt{-g_D}\, f(R),$$ where the Lagrangian $f(R)$ is some arbitrary differentiable function of the Ricci curvature scalar satisfying $\{ f(R), df(R)/dR \} > 0$ and $g_D$ is the determinant of the $D$-dimensional space-time metric $g_{D\,\mu\nu}$. If we perform the conformal transformation [@m] $$\label{conf} \tilde{g}_{D\,\mu\nu} = \Omega^2 g_{D\,\mu\nu}, \qquad \Omega^2 = \left( 2\kappa^2 \frac{df}{dR} \right)^{2/(D-2)},$$ and define a new scalar field $$\label{phi} \bar{\phi} \equiv \frac{1}{\kappa}\left( \frac{D-1}{D-2} \right)^{1/2} \ln \left( 2\kappa^2 \frac{df}{dR} \right),$$ the conformally transformed action takes the Einstein–Hilbert form $$\label{conformal} S = \int d^D x \sqrt{-\tilde{g}_D}\, \left[ \frac{\tilde{R}}{2\kappa^2} - \frac{1}{2}\left(\tilde{\nabla}\bar{\phi}\right)^2 - U(\bar{\phi}) \right],$$ where the self-interaction potential is given by $$\label{*} U(\bar{\phi}) \equiv \left( 2\kappa^2 \frac{df}{dR} \right)^{-D/(D-2)} \left( R(\bar{\phi})\frac{df}{dR} - f[R(\bar{\phi})] \right).$$ Definition (\[phi\]) yields a correspondence between the values of the Ricci curvature scalar $R$ and the values of the scalar field $\bar{\phi}$. We shall consider the quadratic and cubic Lagrangians $$\label{lagr} f_2(R) = \frac{1}{2\kappa^2}\left(R + \epsilon_1 R^2\right), \qquad f_3(R) = \frac{1}{2\kappa^2}\left(R + \epsilon_1 R^2 + \epsilon_2 R^{3}\right),$$ in four dimensions, where the parameters $\epsilon_{1}$ and $\epsilon_{2}$ have dimensions
{ "pile_set_name": "ArXiv" }
null
--- abstract: 'We present results from a Partial-Wave Analysis (PWA) of diffractive dissociation of 190 $\text{GeV}/c$ $\pi^-$ into $\pi^-\pi^+\pi^-$ final states on nuclear targets. A PWA of the data sample taken during a COMPASS pilot run in 2004 on a $\text{Pb}$ target showed a significant spin-exotic $J^{PC} = 1^{-+}$ resonance consistent with the controversial $\pi_1(1600)$, which is considered to be a candidate for a non-$q\overline{q}$ mesonic state. In 2008 COMPASS collected a large diffractive $\pi^-\pi^+\pi^-$ data sample using a hydrogen target. A first comparison with the 2004 data shows a strong target dependence of the production strength of states with spin projections $M = 0$ and $1$.' address: | Excellence Cluster Universe, Technische Universität München,\ Boltzmannstr. 2, 85748 Garching, Germany.\ bgrube@ph.tum.de author: - BORIS GRUBE for the COMPASS Collaboration title: | DIFFRACTIVE DISSOCIATION OF 190 $\text{GeV}/c$ $\pi^-$\ INTO $\pi^-\pi^+\pi^-$ FINAL STATES AT COMPASS --- The COmmon Muon and Proton Apparatus for Structure and Spectroscopy (COMPASS)[@compass] is a fixed-target experiment at the CERN Super Proton Synchrotron. It is a two-stage spectrometer that covers a wide range of scattering angles and particle momenta with high angular resolution. The target is surrounded by a Recoil Proton Detector (RPD) that measures the time of flight of the recoil protons. COMPASS uses the M2 beamline, which can deliver secondary hadron beams with a momentum of up to 300 $\text{GeV}/c$ and a maximum intensity of [ ]{} $\text{sec}^{-1}$. The negative hadron beam consists of 96.0% $\pi^-$ and 3.5% $K^-$. Two ChErenkov Differential counters with Achromatic Ring focus (CEDAR) upstream of the target are used to identify the incoming beam particles. During a pilot run in 2004 and subsequent data taking periods in 2008 and 2009, COMPASS has acquired large data sets of diffractive dissociation of 190 $\text{GeV}/c$ $\pi^-$ on H$_2$, Ni, W, and $\text{Pb}$ targets. In these events the beam pion is excited to some resonance $X^-$ via $t$-channel Reggeon exchange with the target. At 190 $\text{GeV}/c$ the process is dominated by Pomeron exchange. Diffractive reactions are known to exhibit a rich spectrum of produced states and are characterized by two kinematic variables: the square of the total center-of-mass energy and the squared four-momentum transfer from the incoming beam particle to the target, $t = (p_\text{beam} - p_X)^2$. It is customary to use the variable $t' = |t| - |t|_\text{min}$ instead of $t$, where $|t|_\text{min}$ is the minimum value of $|t|$ allowed by kinematics. In 2004 the trigger selected one incoming and at least two outgoing charged particles, whereas in 2008 a signal from the recoil proton was required in the RPD. In the offline event selection diffractive events were enriched by an exclusivity cut of $\pm 4$ GeV around the nominal beam energy. The $t'$ region between 0.1 and 1.0 $(\text{GeV}/c)^2$ was selected for the analysis (see Fig. \[fig:tPrime\]). ![$t'$ distribution of the selected events.[]{data-label="fig:tPrime"}](tprime_zoom){width="\textwidth"}
![$\pi^-\pi^+\pi^-$ invariant mass distribution of the selected data sample for $t' \in [0.1, 1.0]~(\text{GeV}/c)^2$.[]{data-label="fig:threePiMass"}](Invariant_Mass_of_2008_data){width="\textwidth"} Figure \[fig:threePiMass\] shows the $\pi^-\pi^+\pi^-$ invariant mass distribution of the selected 2008 data sample. It exhibits clear structures in the mass regions of the well-known resonances $a_1(1260)$, $a_2(1320)$, and $\pi_2(1670)$. In order to find and disentangle the various resonances in the data, a PWA was performed, in which the total cross section was assumed to factorize into a resonance and a recoil vertex. The isobar model[@isobar] is used to decompose the decay $X^- \to \pi^-\pi^+\pi^-$ into a chain of successive two-body decays: the $X^-$ with quantum numbers $J^{PC}$ and spin projection $M^\epsilon$ decays into a di-pion resonance, the so-called isobar, and a bachelor pion. The isobar has spin $S$ and a relative orbital angular momentum $L$ with respect to $\pi^-_\text{bachelor}$. A partial wave is thus defined by $J^{PC}M^\epsilon[\text{isobar}]L$, where $\epsilon = \pm 1$ is the reflectivity[@reflectivity]. The production amplitudes are determined by extended maximum likelihood fits performed in 40 $\text{MeV}/c^2$ wide bins of the three-pion invariant mass $m_X$. In these fits no assumption is made on the produced resonances $X^-$ other than that their production strengths are constant within an $m_X$ bin. The PWA model includes five $\pi^+\pi^-$ isobars[@compassExotic]: $\pi\pi$ $s$-wave, $\rho(770)$, $f_0(980)$, $f_2(1270)$, and $\rho_3(1690)$. It consists of 41 partial waves with $J \leq 4$ and $M \leq 1$ plus one incoherent isotropic background wave. In order to describe the data, mostly positive reflectivity waves are needed. This corresponds to production with natural parity exchange. The three most dominant waves $1^{++}\,0^{+}\,[\rho\pi]\,S$, $2^{++}\,1^{+}\,[\rho\pi]\,D$, and $2^{-+}\,0^{+}\,[f_2\pi]\,S$ contain resonant structures that correspond to the $a_1(1260)$, $a_2(1320)$, and $\pi_2(1670)$, respectively. The resonance parameters extracted from the 2004 data are in good agreement with the PDG values[@compassExotic]. In addition the 2004 data exhibit a resonant peak around 1660 $\text{MeV}/c^2$ in the spin-exotic $1^{-+}\,1^{+}\,[\rho\pi]\,P$ wave consistent with the disputed $\pi_1(1600)$[@compassExotic]. A first comparison of the 2008 H$_2$ data with the 2004 Pb data without acceptance corrections shows a surprisingly large dependence on the target material. The data — normalized to the narrow $a_2(1320)$ resonance in the $2^{++}\,1^{+}\,[\rho\pi]\,D$ wave — exhibit a strong suppression of $M = 1$ waves on the H$_2$ target, whereas the corresponding $M = 0$ waves are enhanced such that the intensity sum over $M$ remains about the same.
As an example, Fig. \[fig:MDep\] shows this effect for the $a_1(1260)$ peak in the $J^{PC} = 1^{++}$ waves. ![Normalized intensity sums of the $J^{PC} = 1^{++}$ partial waves for different spin projection quantum numbers, $M = 1$ on the left and $M = 0$ on the right hand side. The top row shows data from the Pb, the bottom row data from the H$_2$ target. The wave intensities are dominated by a broad structure around 1.2 $\text{GeV}/c^2$ which is the $a_1(1260)$.[]{data-label="fig:MDep"}](a1_M1_2004 "fig:"){width="\textwidth"}
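The basic ingredient of such a fit can be sketched compactly (ours, not COMPASS software; the resonance parameters and production strengths below are illustrative placeholders). Waves within one reflectivity class add coherently, and the complex production strengths, which the extended likelihood fit would determine independently in every 40 $\text{MeV}/c^2$ wide $m_X$ bin, set their interference:

```python
import numpy as np

def bw(m, m0, gamma0):
    """Breit-Wigner amplitude with constant width (illustrative)."""
    return m0*gamma0/(m0**2 - m**2 - 1j*m0*gamma0)

# Illustrative (mass, width) in GeV/c^2 and complex production strengths
waves = [
    (1.23, 0.42, 1.0 + 0.0j),          # a1(1260)-like wave
    (1.67, 0.26, 0.5*np.exp(1j*0.7)),  # pi2(1670)-like wave
]

m = np.linspace(0.8, 2.4, 800)         # three-pion invariant mass, GeV/c^2

# Coherent sum within one reflectivity class; |...|^2 gives the intensity
intensity = np.abs(sum(c*bw(m, m0, g0) for m0, g0, c in waves))**2
print(m[np.argmax(intensity)])         # dominant peak position
```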
{ "pile_set_name": "ArXiv" }
null
--- abstract: 'For cosmologies including scale dependence of both the cosmological and the gravitational constant, an additional consistency condition dictated by the Bianchi identities emerges, even if the energy-momentum tensor of ordinary matter stays individually conserved. For renormalization-group (RG) approaches it is shown that such a consistency relation ineluctably fixes the RG scale (which may have an explicit as well as an implicit time dependence), provided that the solutions of the RG equation for both quantities are known. Hence, contrary to the procedures employed in the recent literature, we argue that there is no more freedom in identification of the RG scale in terms of the cosmic time in such cosmologies. We carefully set the RG scale for the RG evolution phrased in a quantum gravity framework based on the hypothetical existence of an infrared (IR) fixed point, for the perturbative regime within the same framework, as well as for an evolution within quantum field theory (QFT) in a curved background. In the latter case, the implications of the scale setting for the particle spectrum are also briefly discussed.' author: - 'A. Babić' - 'B. Guberina' - 'R. Horvat' - 'H. Štefančić [^1]' title: | Renormalization-group running cosmologies\ - a scale-setting procedure --- Recently, indisputable evidence has been mounting to suggest that the expansion of our universe is accelerating owing to the nonvanishing value of unclustered dark energy with negative pressure, see [@1]. The crucial evidence for the existence of dark energy \[or the cosmological constant (CC)\] relies on the CMB observations [@2], which strongly support a spatially flat universe as predicted by inflationary models. By combining all data, a current picture of the universe emerges, in which about 2/3 of the critical energy density of the present universe is made up by a background dark energy with the parameter of the equation of state $-1.38 \leq w \leq -0.82$ at $95\%$ confidence level [@3]. Pressed by these data, theorists now need to explain not only why the CC is small, but also why dark-energy domination over ordinary matter density has occurred for redshifts $z \lesssim 1$ (the coincidence problem). Although models with a truly static CC fit these data well, they have two additional drawbacks (besides the coincidence problem): (1) they cannot theoretically explain why the CC is today small but nonvanishing; (2) they have a problem ensuring a phase of inflation, an epoch in the early universe when the CC dominated other forms of energy density. To assess the possibility of having dynamical dark energy, rolling scalar field models (quintessence fields) [@35] with generic attractor properties [@4], which make the present dark energy density insensitive to a broad range of unknown initial conditions, have been aimed at dealing with the coincidence problem. Still, a quintessence potential has to be fine-tuned to yield the present ratio of ordinary matter to quintessence energy density, at the same time allowing a phase dominated by matter so that structure can form; therefore these models cannot address the coincidence problem. In addition, such models may have difficulties in achieving the current quintessence equation of state with its parameter below -0.8 [@5].
Although extended models with spatially homogeneous light scalar fields based on a nonlinear kinetic energy (k-essence) [@6] have been put forward, it seems that today a trustworthy solution to the coincidence problem, beyond invoking an anthropic principle [@7], is still lacking. Recently, motivated by the observational data, models of dark energy with the supernegative equation of state ($w < -1$) have been introduced [@Cald]. This form of dark energy, named phantom energy, leads to many interesting phenomena, the most striking being the possibility of divergence of the scale factor in finite time, the so-called “Big Rip" effect [@Cald2]. Another class of variable dark-energy models, which could successfully mimic quintessence models and may also shed some light on the coincidence problem, has been put forward recently. They are based on the observation [@8; @9] that even a “true" CC (with the equation of state being precisely -1) cannot be fixed to any definite constant (including zero) owing to renormalization-group (RG) running effects. In [@9], the variation of the CC arises solely from particle field fluctuations, without introducing any quintessence-like scalar fields. Particle contributions to the RG running of $\Lambda$ due to vacuum fluctuations of massive fields have been properly derived in [@10], with a somewhat unexpected outcome that more massive fields do play a dominant role in the running at any scale. In the model [@11], the RG running is due to non-perturbative quantum gravity effects and the hypothesis of the existence of an IR attractive RG fixed point. Both models [@9; @11] also promote the gravitational constant to an RG running quantity [^2], with a prominent scaling behavior found in [@11]. In both models the presence of quintessence-like scalar fields is redundant and not required for consistency with observational data. It should be noted that in the above RG running models the amount of running will depend not only on the parameters of an underlying physical theory, but also on the characteristic RG scale, which must be correctly identified. Several different scenarios for RG-scale adoption have been contemplated in the literature. In [@12] the RG scale was identified with the Hubble parameter, which for the present time is $H_0 \sim 10^{-33} \;\mbox{\rm eV}$. This leads to extremely slow running of the CC and the gravitational constant. Another choice, given by the fourth root of the total energy density [@9; @10], produces much faster running of the CC. The advantage of such a choice is that it entails a direct association with particle momenta (and therefore with the temperature of interacting particle species) in the past radiation epoch. In the models [@8; @85; @11] the RG scale (cutoff) was identified with the inverse of the cosmological time, which is essentially equivalent to the previous case of the Hubble parameter for cosmologies with $a \propto t^n$. Another choice for the relevant cutoff with the implicit time dependence in the form of the inverse scale factor was also analyzed [@85]. With the above choices, it was found that no consistent solution existed in the former case for curved universes (aside from the radiation-dominated era) and even for flat universes in the latter case.
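How a Bianchi-identity constraint removes this freedom can be made concrete with a toy example. In the sketch below (ours; the running laws $\rho_\Lambda(\mu)$ and $G(\mu)$ are hypothetical stand-ins for actual RG solutions), separate conservation of matter reduces the Bianchi identity to $(\rho+\rho_\Lambda)\,\mathrm{d}G + G\,\mathrm{d}\rho_\Lambda = 0$, which fixes the scale $\mu$ algebraically in terms of the matter density $\rho$:

```python
import sympy as sp

mu, rho = sp.symbols('mu rho', positive=True)
c0, c2, G0, nu = sp.symbols('c0 c2 G0 nu', positive=True)

# Hypothetical running laws, standing in for actual RG solutions
rho_L = c0 + c2*mu**2
G = G0/(1 + nu*sp.log(mu**2))

# Bianchi identity with separately conserved matter:
# (rho + rho_Lambda)*dG/dmu + G*drho_Lambda/dmu = 0 along the evolution
expr = (rho + rho_L)*sp.diff(G, mu) + G*sp.diff(rho_L, mu)

# The condition is algebraic in mu: once rho is given, the scale is pinned
vals = {c0: 0.7, c2: 0.1, G0: 1.0, nu: 0.01, rho: 0.3}
print(sp.nsolve(expr.subs(vals), mu, 1.0))  # unique mu(rho), no freedom left
```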
In the present paper, we argue that the consistency relation as dictated by the Bianchi identity, which relates the time dependencies of the CC, the gravitational constant and the ordinary matter density, does unambiguously set the RG scale once the RG solutions are known. Hence, any conclusion about the consistency of cosmological solutions with an arbitrary choice of the RG scale may be quite misleading. Our scale-setting procedure works well for both flat and curved universes as well as for the matter background with an arbitrary equation of state. We note that if the ordinary energy-momentum tensor is not separately conserved (for instance, owing to the interaction between matter and the CC, which causes a transfer of energy from matter to the CC or vice versa), then the consistency relation becomes more complicated and the scale-setting procedure is no longer straightforward. In the following, we explain the scale-setting procedure in detail and apply it to several relevant RG solutions obtained in quantum gravity and in QFT in a curved background. [**Scale-setting procedure: importance and limitations**]{} {#sec2} =========================================================== In many cosmological approaches based on the renormalization-group running of the fundamental quantities such as $\rho_{\Lambda}$ and $G$, the fundamental procedure of the underlying quantum (field) theory specifies the running of the aforementioned quantities in terms of the appropriate running scale. From the viewpoint of quantum field theory, the running scale has a natural interpretation in the form of the scaling factor of external momenta or the energy cutoff. The formulation of the consistent cosmological model comprising running quantities requires translation of the running in terms of the running scale to the evolution of these quantities in terms of cosmological variables. This identification of the running scale with a specific function of cosmological variables is a crucial step in all approaches relying on the RGE approach for the dynamics of quantities such as $\rho_{\Lambda}$ and $G$. All efforts that have been undertaken in this direction so far have concentrated on the argumentation in favor of some seemingly “natural” choices for the RGE scale, without a proper procedure of their derivation. What is meant by “natural” choices is to a great extent a matter of taste and depends strongly on the outcome one wishes to achieve. Thus, one should not be surprised that the literature contains choices for the RG scale which, at present, differ by up to 30 orders of magnitude, see, e.g., Refs. \[13, 14, 16\]. Needless to say, with different choices for the RG scale one automatically selects different cosmologies, thus making RG approaches devoid of any firm and generic prediction. In addition to noting that even a phenomenological setting of the RG scale is not an obvious matter in cosmological setups, one also finds for both RG cosmologies discussed in this paper (for detailed discussions, see the subsequent chapters below) that the situation with the scale setting is intrinsically not free from ambiguities. For instance, for the RG evolution in a conventional field-theoretical model in the classical curved background (see
{ "pile_set_name": "ArXiv" }
null
--- abstract: 'The complex transmission of $\rm Y_{1-x}Pr_xBa_2Cu_3O_7$ single crystal thin films has been measured in the range 0.2-1.0 THz using time domain spectroscopy. The complex conductivity is calculated without using a Kramers-Kronig analysis. All of the superconducting samples show a peak in $\sigma_1(T)$ below $T_c$. The underdoped samples show a deviation from $1/(\alpha+\beta T)$ behavior above $T_c$ that may be linked with the onset of a spin gap.' address: 'Max-Planck-Institut für Festkörperforschung, Postfach 800665, 70506 Stuttgart, Germany' author: - 'J.O. White[^1], R. Buhleier, S.D. Brorson[^2], I.E. Trofimov[^3], H.-U. Habermeier, and J. Kuhl' title: 'Complex conductivity of $\rm Y_{1-x}Pr_xBa_2Cu_3O_7$ thin films measured by coherent terahertz spectroscopy' --- INTRODUCTION ============ The spectroscopic data on the high-$T_c$ superconducting compound $\rm Y_1Ba_2Cu_3O_7$ measured in the far infrared (FIR) as well as the GHz-region have revealed many interesting features. The new technique of THz-time-domain spectroscopy bridges the gap between the two regimes. Here we report results of a THz investigation into the $\rm Y_{1-x}Pr_xBa_2Cu_3O_7$ (YPrBCO) system. We investigated five YPrBCO samples having Pr composition $x$ = 0.0, 0.2, 0.3, 0.4, and 1.0. The films are grown to a thickness of 150 nm by pulsed laser deposition [@habermeier91] onto NdGaO$_3$ substrates. Our spectroscopic technique involves a time-domain measurement of the electric field of a microwave pulse transmitted through the sample [@brorson94]. A Fourier transform yields the complex transmission spectrum. To calculate the conductivity, we make use of a multiple reflection formula for the field transmitted through the YBCO layer. RESULTS ======= The conductivity spectra at 50 K are shown in Fig. \[sigma\_nu\]. The addition of Pr to YBCO should have at least two interrelated effects: a) The suppression of $T_c$ changes the partitioning between normal and superconducting carriers. b) The total number of carriers $N$ (or their mobility) may be reduced. Both factors a) and b) cause $\sigma_2$ to decrease with \[Pr\]. Only normal carriers contribute to $\sigma_1$ for $\omega \neq 0$, but now factors a) and b) compete. At 50 K, $\sigma_1$ decreases with \[Pr\], therefore the effect of a reduction in $N$ dominates the effect of the shift in $T_c$. For 30% and 40% Pr, we observe a frequency-dependent $\sigma_1$ and can directly measure $\tau(T)$, the quasi-particle scattering time. Pure PrBCO is a dielectric at 50 K, as evidenced by a conductivity proportional to frequency. Examining $\sigma_2(T)$ at a fixed frequency (Fig. \[sigma\_T\]a), we see that it is close to zero at high temperature, but rises sharply at the onset of superconductivity, thus providing an ac measurement of $T_c$ (Table 1). In all of the superconducting alloys, $\sigma_1$ displays a peak below $T_c$ which has been seen previously only in pure YBCO. It has been attributed to a sharp rise in the scattering time dominating the effect of a decrease in the number of normal carriers [@nuss91]. The normal state behavior of our samples is particularly interesting because other [*underdoped*]{} materials such as oxygen-deprived (123)YBCO undergo a phase transition associated with the opening of a spin gap at $T_D>T_c$ [@ito93].
If the normal carriers couple strongly to spin fluctuations, the opening of a spin gap should be accompanied by an [*increase*]{} in the scattering time $\tau$, giving rise to an [*enhancement*]{} in $\sigma_1$ below $T_D$ for $\omega<1/\tau$. For pure (optimally doped) YBCO, at 480 GHz, $\sigma_1$ shows only a single transition at $T_c$. For the (underdoped) alloys, $\sigma_1$ has two transitions, one at $T_c$, the other at a higher temperature which increases with \[Pr\]. To accentuate the two transitions, Fig. \[sigma\_T\]b is shaded in the region bounded by $T_c$, the experimental curve, and a dashed line representing $1 / (\alpha+\beta T)$ behavior. The higher transition temperature seen at 480 GHz matches that of a transition also observed in the dc resistivity. We evaluate the penetration depth $\lambda_L$ and the plasma frequency $\omega_p$ by fitting the data to a two-fluid model of the form: $$\sigma(\omega) = {\epsilon_0 \omega_p^2 \tau \over {1-i\omega\tau}}x_n + {1 \over {\mu_0\lambda_L^2}}\left(- \pi\delta(\omega) + {i \over \omega} \right) x_s, \label{2fluid}$$ where $x_n$ and $x_s$ are the fractions of normal and superconducting carriers. The results are shown in Table 1. The decrease in $\omega_p$ with \[Pr\] supports the theory that Pr suppresses superconductivity by reducing the population of mobile holes in the $\rm CuO_2$ planes.

  $x$   $d$ (nm)   $\lambda_L$ (nm)   $\omega_p$ (cm$^{-1}$)   $T_c^{\rm ac}$ (K)
  ----- ---------- ------------------ ------------------------ --------------------
  0.0   155        170                9500                     92
  0.2   134        350                4600                     72
  0.3   170        380                4200                     59
  0.4   170        590                2700                     41

[9]{} H. U. Habermeier [*et al.*]{}, Physica C [**180**]{}, 17 (1991). S. D. Brorson [*et al.*]{}, Phys. Rev. B [**49**]{}, 6185 (1994). M. C. Nuss [*et al.*]{}, Phys. Rev. Lett. [**66**]{}, 3305 (1991). T. Ito, K. Takenaka, and S. Uchida, Phys. Rev. Lett. [**70**]{}, 3995 (1993). [^1]: on leave from: Hughes Research Laboratories, Malibu. [^2]: present address: Teledanmark Laboratories, Copenhagen. [^3]: present address: Rutgers University, Piscataway, NJ.
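The numbers in Table 1 can be plugged directly into the two-fluid form above. A minimal numerical sketch (ours; the scattering time and normal fraction are illustrative values, not fitted ones):

```python
import numpy as np

eps0, mu0 = 8.854e-12, 4e-7*np.pi  # SI vacuum permittivity and permeability

def sigma_two_fluid(omega, omega_p, tau, lam, x_n):
    """Two-fluid conductivity for omega > 0 (the delta-function term drops)."""
    drude = eps0*omega_p**2*tau/(1 - 1j*omega*tau)*x_n
    superfluid = 1j/(mu0*lam**2*omega)*(1.0 - x_n)
    return drude + superfluid

# x = 0 row of Table 1: omega_p = 9500 cm^-1, lambda_L = 170 nm
omega_p = 2*np.pi*3e10*9500     # cm^-1 converted to rad/s
lam = 170e-9
omega = 2*np.pi*0.48e12         # 480 GHz
tau = 5e-13                     # illustrative scattering time, s

print(sigma_two_fluid(omega, omega_p, tau, lam, x_n=0.2))
# sigma_1 set by the normal fluid; sigma_2 dominated by the superfluid term
```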
{ "pile_set_name": "ArXiv" }
null
--- abstract: 'We have used an extension of our slow light technique to provide a method for inducing small density defects in a Bose-Einstein condensate. These sub-resolution, micron-sized defects evolve into large amplitude sound waves. We present an experimental observation and theoretical investigation of the resulting breakdown of superfluidity. We observe directly the decay of the narrow density defects into solitons, the onset of the ‘snake’ instability, and the subsequent nucleation of vortices.' author: - | title: 'Observation of Quantum Shock Waves Created with Ultra Compressed Slow Light Pulses in a Bose-Einstein Condensate' --- Superfluidity in Bose-condensed systems represents conditions where frictionless flow occurs because it is energetically impossible to create excitations. When these conditions are not satisfied, various excitations develop, and experiments on superfluid helium, for example, have provided evidence that the nucleation of vortex rings occurs when ions move through the fluid faster than a critical speed ([*1,2*]{}). Under similar conditions, shock waves would occur in a normal fluid ([*3*]{}). Such discontinuities are not allowed in a superfluid and instead topological defects, such as quantized vortices and solitons, are nucleated when the spatial scale of density variations becomes on the order of the healing length. This is the length scale at which the kinetic energy, associated with spatial variations of the macroscopic condensate wave function, becomes on the order of the atom-atom interaction energy ([*2,4*]{}). It is therefore also the minimum length over which the density of a condensate can adjust. Bose-Einstein condensates (BECs) of alkali atoms ([*5*]{}) have provided a system for the study of superfluidity, which is theoretically more tractable than liquid helium and allows greater experimental control. An exciting recent development is the production of solitons and vortices. Experiments have employed techniques that manipulated the phase of the BEC ([*6-9*]{}), or provided the system with a high angular momentum which makes vortex formation energetically favorable ([*10,11*]{}). However, a direct observation of the formation of vortices via the breakdown of superfluidity has been lacking. Rather, rapid heating from ’stirring’ with a focused laser beam above a critical velocity was observed as indirect evidence of this process ([*12,13*]{}). In this context, it is natural to ask what would happen if one were to impose a sharp density feature in a BEC with a spatial scale on the order of the healing length. Optical resolution limits have prevented direct creation of this kind of excitation. In this paper we present an experimental demonstration of the creation of such defects in sodium Bose-Einstein condensates, using a novel extension of the method of ultra slow light pulse propagation ([*14*]{}) via electromagnetically induced transparency (EIT) ([*15,16*]{}). Our slow-light setup is described in ([*14*]{}). By illuminating a BEC with a ‘coupling’ laser, we create transparency and low group velocity for a ‘probe’ laser pulse subsequently sent into the cloud. In a geometry where the coupling and probe laser beams propagate at right angles, we can control the propagation of the probe pulse from the side. By introducing a spatial variation of the coupling field, along the pulse propagation direction, we vary the group velocity of the probe pulse across the cloud.
Here we accomplish this by blocking half of the coupling beam so it illuminates only the left hand side ($z<0$) of the condensate, setting up a light ‘roadblock’. In this way, we compress the probe pulse to a small spatial region at the center of the BEC, while bypassing the usual bandwidth requirements for slow light ([*17*]{}). The technique produces a short wavelength excitation by suddenly removing a narrow disk of the condensate, with the width of the disk determined by the width of the compressed probe pulse. We find that this excitation results in short wavelength, large amplitude sound waves that shed off ‘gray’ solitons ([*18-20*]{}), and we make the connection to the formation of shock waves in classical fluids. The ‘snake’ (Kadomtsev-Petviashvili) instability is predicted to cause solitons to decay into vortices ([*21-24*]{}). This has been observed with optical solitons ([*25*]{}) and recently the JILA group predicted and observed that solitons in a BEC decay into vortex rings ([*9*]{}). Here we present a direct observation of the dynamics of the snake instability in a BEC and the subsequent nucleation of vortices. The images of the evolution are compared to numerical propagation of the Gross-Pitaevskii equation in two dimensions. Details of our Bose-Einstein condensation apparatus can be found in ([*26*]{}). We use condensates with about 1.5 million sodium atoms in the state $|1 \rangle \equiv |3S_{1/2},F=1,M_F=-1 \rangle$ and trapped in a 4-Dee magnet. For the experiments presented here, the magnetic trap has an oscillator frequency $\omega_z=(2\pi) 21$ Hz along its symmetry direction and a frequency $\omega_x = \omega_y = 3.8 \omega_z$ in the transverse directions. The peak density of the condensates is then $9.1 \times 10^{13}~\mathrm{cm}^{-3}$. The temperature is $T \sim 0.5\,T_c$, where $T_c=300$ nK is the critical temperature for condensation, and so the vast majority ($\sim 90 \%$) of the atoms occupy the ground state. To create slow and spatially localized light pulses, the coupling beam propagates along the $x$ axis (Fig. 2B), is resonant with the $D_1$ transition from the unoccupied hyperfine state $|2 \rangle \equiv |3S_{1/2},F=2,M_F=-2 \rangle$ to the excited level $ |3 \rangle \equiv |3P_{1/2}, F=2, M_F=-2 \rangle$, and has a Rabi frequency $\Omega_c = (2\pi) 15$ MHz ([*27*]{}). We inject probe pulses along the $z$ axis, resonant with the transition and with peak Rabi frequency $\Omega_p = (2\pi) 2.5$ MHz. The pulses are Gaussian shaped with a $1/e$ half-width of $\tau = 1.3\;\mu$s. With the entire BEC illuminated by the coupling beam, we observe probe pulse delays of $4\;\mu$s for propagation through the condensates, corresponding to group velocities of 18 m/s at the center of the clouds. A pulse with a temporal half-width $\tau$ is spatially compressed from a length $2 \, c \tau$ in vacuum to ([*14,17,28,29*]{}) $$L=2 \tau V_g = 2 \tau \frac{|\Omega_c|^2}{\Gamma f_{13} \sigma_0 n_c}$$ inside the cloud, where $\Gamma = (2 \pi) 10$ MHz is the decay rate of state $|3 \rangle$, $n_c$ is the cloud density, $\sigma_0 = 1.65 \times 10^{-9}\;\mathrm{cm}^2$ is the absorption cross-section for light resonant with a two-level atom, and $f_{13}=1/2$ is the oscillator strength of the $|1 \rangle \rightarrow |3 \rangle$ transition. The atoms are constantly being driven by the light fields into a dark state, a coherent superposition of the two hyperfine states $|1 \rangle$ and $ |2 \rangle$ ([*15*]{}). 
In the dark state, the ratio of the two population amplitudes is varying in space and time with the electric field amplitude of the probe pulse as $$\psi_2 = - \frac{\Omega_p}{\Omega_c}\,\psi_1,$$ where $\psi_1,\;\psi_2$ are the macroscopic condensate wave functions associated with the two states $|1 \rangle$ and $ |2 \rangle$. For the parameters listed above, the probe pulse is spatially compressed from 0.8 km in free space to only 50 $\mu$m at the center of the cloud, at which point it is completely contained within the atomic medium. The corresponding peak density of atoms in $|2 \rangle$, proportional to $|\psi_2|^2$, is $1/34$ of the total atom density. The $|1 \rangle$ atoms have a corresponding density depression. From Eqs. 1 and 2, it is clear that to minimize the spatial scale of the density defect, we need to use short pulse widths and low coupling intensities. However, for all the frequency components of the probe pulse to be contained within the transmission window for propagation through the BEC ([*17*]{}), we need a pulse with a temporal width $\tau$ of at least $2 \sqrt{D} \Gamma / {\Omega_c}^2 \approx 0.3~\mu$s (here $D \approx 520$ is the optical density of a condensate for on-resonance two level transitions) to avoid severe attenuation and distortion. Furthermore, we see from Eq. 2 that to maximize the amplitude of the density depression would favor use of a peak Rabi frequency for the probe of $\Omega_p \sim \Omega_c$. This also severely distorts the pulse. Both of these distortion effects accumulate as the pulse propagates through large optical densities.
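The compression and bandwidth figures quoted above follow directly from Eq. 1 and the stated parameters. A minimal numerical check (ours; all values are taken from the text):

```python
import numpy as np

two_pi = 2*np.pi
Gamma = two_pi*10e6        # decay rate of |3>, rad/s
Omega_c = two_pi*15e6      # coupling Rabi frequency, rad/s
f13, sigma0 = 0.5, 1.65e-13   # oscillator strength; cross-section in m^2
n_c = 9.1e13*1e6           # peak density converted to m^-3
tau = 1.3e-6               # probe pulse 1/e half-width, s
D = 520                    # on-resonance optical density

# Group velocity and compressed pulse length, Eq. 1
V_g = Omega_c**2/(Gamma*f13*sigma0*n_c)
L = 2*tau*V_g
print(V_g, L)    # ~19 m/s and ~5e-5 m, i.e. the quoted 18 m/s and 50 um

# Minimum pulse width set by the EIT transparency window
tau_min = 2*np.sqrt(D)*Gamma/Omega_c**2
print(tau_min)   # ~3e-7 s, the quoted 0.3 us
```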
{ "pile_set_name": "ArXiv" }
null
--- abstract: 'Nanopore-based sequencing has demonstrated significant potential for the development of fast, accurate, and cost-efficient fingerprinting techniques for next generation molecular detection and sequencing. We propose a specific multi-layered graphene-based nanopore device architecture for the recognition of single DNA bases. Molecular detection and analysis can be accomplished through the detection of transverse currents as the molecule or DNA base translocates through the nanopore. To increase the overall signal-to-noise ratio and the accuracy, we implement a new ”multi-point cross-correlation” technique for identification of DNA bases or other molecules on the molecular level. We demonstrate that the cross-correlations between the individual nanopore signals will greatly enhance the transverse current signal for each molecule. We implement first-principles transport calculations for DNA bases surveyed across a multi-layered graphene nanopore system to illustrate the advantages of the proposed geometry. A time-series analysis of the cross-correlation functions illustrates the potential of this method for enhancing the signal-to-noise ratio. This work constitutes a significant step forward in facilitating fingerprinting of single biomolecules using solid state technology.' author: - Towfiq Ahmed - 'Jason T. Haraldsen' - 'John J. Rehr' - Massimiliano Di Ventra - Ivan Schuller - 'Alexander V. Balatsky' title: Correlation dynamics and enhanced signals for serial DNA sequencing --- Introduction ============ ![**(a)** Illustration of the three-layer graphene-based nanopore as a possible multilayered sequencing device. **(b)** Schematic of transmission currents through two graphene layers where isolated DNA bases pass through the nanopores. The current vs. time spectra are recorded for each layer independently. A cross-correlation between the current data from multiple pores reveals useful information by increasing the signal-to-noise ratio, as described in the text. **(c)** Hydrogen-capped graphene nanoribbons and the DNA bases inside the pore. Here, only the flat orientation of the DNA bases is shown; the other angular orientations are shown in the supplementary section.[]{data-label="f1"}](fig_1.eps){width="3.5in"} With applications ranging from explosives and drug detection to DNA sequencing and biomolecular identification, the ability to detect specific molecules and/or molecular series presents many challenges for scientists. With a specific need for timely and accurate measurements and evaluation, it is essential that researchers investigate both the manner of detection and new and improved computational methods for analysis, to keep up with the growing pace of the individual fields. The field of DNA sequencing is rapidly evolving due to increasing support and technology. As this occurs, sequencing techniques are challenged by the need for a rapid increase of accuracy, speed, and resolution for smaller amounts of material [@xprize]. Nanopore-based sequencing [@Zwolak2008; @Branton2008] and serial methods [@towfiq_dna; @Kilina2007] provide promising alternatives to the well-established Sanger method [@sanger], particularly for identifying single DNA bases using transverse conductance [@ventra_2005; @ventra_2006]. Such an approach relies on the ability to resolve the electronic fingerprints of DNA one relevant unit at a time (‘serial’) as DNA translocates through a nanochannel.
It has been established that experimental methods are capable of achieving single-base resolution, which has prompted investigations into the local electrical properties of single DNA bases [@tanaka; @Yarotski2009]. Concurrently, the theoretical underpinnings of this approach have been continuously developing [@towfiq_dna; @Kilina2011; @Kilina2007; @ventra_2005; @ventra_2006]. The single-molecule sensitivity of nanopore sequencing has been recently demonstrated by Kawai [*et al.*]{} [@kawai] and Lindsay [*et al.*]{} [@Chang2010]. The sequencing of DNA/RNA oligomers and microRNA by tunneling has also been demonstrated [@kawai2012]. Despite such high-quality experimental methods, the most pressing challenge in serial sequencing lies in overcoming effects of noise that lead to a small signal-to-noise (S/N) ratio in the measured current $I$. The signal fluctuations generally originate from thermal agitation and bond formation between base and nanopore/electrode walls or interactions with a substrate. In an effort to avoid these limitations, we propose the sequential measurement of transverse current cross-correlations, as obtained from multiple pairs of electrodes. The experimental setup for such a nanopore arrangement is schematically shown in \[f1\]. To be specific, we focus on graphene as the porous material, because it is atomically thick and exhibits extraordinary thermal and electronic properties. Besides these geometric advantages and good conductivity, graphene also possesses high tensile strength and can endure a high transmembrane pressure environment [@graphene_mechanical]. Consequently, graphene has been proposed as an effective substrate and conducting medium for nanopore sequencing by numerous groups [@branton; @merchant; @schneider; @prezhdo; @tanaka; @scheicher]. We emphasize, however, that this method may be useful in any other approach in which serial measurements (e.g., time series) are made to ascertain individual properties (resistivity here) of the bases. Although this challenge is much more severe for protein-based or solid-state nanopores, the nature of an atomically thick graphene nanopore wall cannot completely rule out the $\pi-\pi$ stacking between carbon and DNA bases. In addition, vibration and other electronic fluctuations present in the graphene membrane can significantly mask the conductance signals, making it difficult to differentiate the individual DNA bases. Previous theoretical [@Kilina2007; @Kilina2011] and experimental [@tanaka] studies of the interactions between DNA bases and graphene derivatives have revealed the local electronic structure of single bases. The experimental realization of a single layer graphene-based nanopore device is made possible by combining several state-of-the-art techniques, e.g., mechanical exfoliation from graphite on a SiO$_2$ substrate. Transverse tunneling current (conductance) measurements, as the single-strand (ss)DNA translocates through a monolayer graphene nanopore, were previously reported by Schneider [*et al.*]{} [@schneider]. AFM studies [@Yarotski2009] and theoretical simulations of scanning tunneling spectroscopy (STS) [@towfiq_dna] support the identification of electronic features with varying spatial extent and intensity near the HOMO-LUMO band. To make nanopore sequencing and detection a viable method for determining translocating molecules, one must overcome this signal-to-noise problem.
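The noise-suppression idea behind measuring cross-correlations from multiple electrode pairs can be illustrated with a synthetic example (ours; the pulse shape, delay, and noise level are illustrative). Two layers record the same translocation pulse separated by a fixed transit delay but with independent noise; averaging the cross-correlation over repeated events suppresses the uncorrelated noise while the common signal survives at the transit lag:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)
pulse = np.exp(-0.5*((t - 0.45)/0.02)**2)   # current signature of one base
delay = 30                                   # layer-to-layer transit, samples

xcorr = np.zeros(2*len(t) - 1)
for _ in range(500):                         # repeated translocation events
    i1 = pulse + 2.0*rng.standard_normal(len(t))             # buried in noise
    i2 = np.roll(pulse, delay) + 2.0*rng.standard_normal(len(t))
    xcorr += np.correlate(i2, i1, mode='full')
xcorr /= 500

lag = np.argmax(xcorr) - (len(t) - 1)
print(lag)  # ~ +30: the common pulse survives; uncorrelated noise averages out
```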
Therefore, we propose a multilayered graphene device in which the transverse conductance is measured through each nanopore independently, as a series of DNA bases or other molecules translocates through them (see \[f1\]). As molecules translocate, they create a time-dependent sequence of translocation currents through each of the layers. One then monitors the translocation currents at different pores and acquires a record of the sequential current of the same base as it arrives and moves through the individual pores (shown in \[f2\]). The time series of the cross-correlation currents can then be used to reduce the uncorrelated, independent noise sources, and hence enhance the signal-to-noise ratio and improve the differentiation between bases. While our device is discussed in the context of DNA sequencing, the general method and device setup can be used for any molecule small enough to fit through a nanopore. While we are focusing on the area of DNA sequencing and biomolecules, this cross-correlation method for data analysis of the transverse currents can be utilized for the analysis of any molecular series given the proper understanding of the molecule’s electronic properties. Results and Discussion ====================== We first discuss our first-principles calculations of transmittance for individual DNA bases inside the graphene nanopore, as presented in \[f3\]. Then in \[f4\], we show the partial signal recovery using our time-simulation model with three-layer graphene nanopores and the cross-correlation between the corresponding signals. In our first-principles approach, for each DNA base, we have taken three random angular orientations relative to the graphene membrane, while calculating the transmittance between the two electrodes with a 0.7 V bias voltage. The configuration-averaged transmittance for A, C, G, and T is shown by the solid blue curves in \[f3\](a)-(d). The conductance of a pure graphene nanoribbon with a hydrogenated nanopore is shown by the solid red curve in \[f3\] for comparison. The transmittance curve is analogous to the non-equilibrium density of states in the presence of the bias voltage, where the zero of energy is the Fermi energy of the central graphene region. The vertical dashed lines are at -0.35 eV and +0.35 eV, which are the chemical potentials of the left and right electrodes, respectively. For each base (\[f3\](a)-(d)), the transmittance curve (solid blue line) in between the left and right electrode chemical potentials is significantly enhanced compared to the pure graphene membrane with a nanopore (solid red line). The features in this region are characteristic of the four bases. For example, a comparison of the Guanine transmittance (\[f3\](c)) with that of Thymine (\[f3\](d)) shows the presence of a characteristic broad peak. For a systematic study of the differences in transmittance among the four bases, we also plotted the difference curves (the top three) in \[f3\](a)-(d). If the signatures of one or more of the DNA
{ "pile_set_name": "ArXiv" }
null
**THE CONSTITUTIVE RELATIONS AND THE** **MAGNETOELECTRIC EFFECT FOR MOVING MEDIA** Tomislav Ivezić *Ruđer Bošković Institute, P.O.B. 180, 10002 Zagreb, Croatia* E-mail: ivezic@irb.hr In this paper the constitutive relations for moving media with homogeneous and isotropic electric and magnetic properties are presented as the connections between the generalized magnetization-polarization bivector $\mathcal{M}$ and the electromagnetic field $F$. Using the decompositions of $F$ and $\mathcal{M}$, it is shown how the polarization vector $P(x)$ and the magnetization vector $M(x)$ depend on $E$, $B$ and two different velocity vectors, $u$ - the bulk velocity vector of the medium, and $v$ - the velocity vector of the observers who measure $E$ and $B$ fields. These constitutive relations with four-dimensional geometric quantities, which correctly transform under the Lorentz transformations (LT), are compared with Minkowski’s constitutive relations with the 3-vectors and several essential differences are pointed out. They are caused by the fact that, contrary to the general opinion, the usual transformations of the 3-vectors $\mathbf{E}$, $\mathbf{B}$, $\mathbf{P}$, $\mathbf{M}$, etc. are not the LT. The physical explanation is presented for the existence of the magnetoelectric effect in moving media that essentially differs from the traditional one. *Keywords:* Constitutive relations; moving media; magnetoelectric effect. **1. Introduction** In this paper the constitutive relations for moving media with homogeneous and isotropic electric and magnetic properties are presented using the abstract four-dimensional (4D) geometric quantities and their representations in the standard basis. These results are compared with Minkowski’s constitutive relations$^{1}$ with the 3-vectors and with Minkowski’s constitutive relations that are obtained by means of exterior forms.$^{2}$ The paper is organized as follows. First, in Sec. 2, we present a review of some previous results that are important for the theory presented here. In the recent paper,$^{3}$ a formulation of the field equations for moving media is developed by the generalization of an axiomatic geometric formulation of the electromagnetism in vacuum.$^{4}$ As mentioned in Ref. 3, almost the entire physics literature deals with the electromagnetic excitation tensor $\mathcal{H}$, i.e. with the electric $D$ and magnetic $H$ excitations (see Eq. (\[h1\])) and the constitutive relations refer to the connections between them and $F$, i.e. $E$ and $B$, respectively. But, as shown in Ref. 3, it is physically better founded to formulate the field equations for moving media in terms of $F$ and the generalized magnetization-polarization bivector $\mathcal{M}$ instead of, as usual, $F$ and $\mathcal{H}$. In Sec. 2, the decompositions of $F$ (\[E2\]) and $\mathcal{M}$ (\[M1\]) are presented. $F$ is decomposed into vectors $E$, $B$ and $v$ - the velocity vector of the observers who measure $E$ and $B$ fields. $\mathcal{M}$ is decomposed into the polarization vector $P$, the magnetization vector $M$ and $u$ - the bulk velocity vector of the medium. In Ref. 3, the field equations are written in terms of $F$ and $\mathcal{M}$ and also in terms of vectors $E$, $B$, $P$, $M$ and the velocity vectors $u$ and $v$. These field equations are also quoted in Sec. 2. Furthermore, in Sec.
2, the usual transformations (UT) of the 3-vectors $\mathbf{E}$ and $\mathbf{B}$, (\[JCB\]), and of $\mathbf{P}$ and $\mathbf{M}$, (\[ps\]), are written and their difference relative to the Lorentz transformations (LT) of vectors, as 4D geometric quantities, e.g., the electric field vector $E$, (\[T1\]), is pointed out. In Sec. 3, we formulate the constitutive relations as the relations between $\mathcal{M}$ and $F$, Eqs. (\[cr1\]) and (\[cr2\]). Then, using the decompositions of $F$ (\[E2\]) and $\mathcal{M}$ (\[M1\]), we get from (\[cr1\]) and (\[cr2\]) how $P(x)$, (\[P\]), and $M(x)$, (\[M\]), depend on $E$, $B$ and two different velocity vectors, $u$ and $v$. The constitutive relations (\[P\]) and (\[M\]) are the basic relations that are obtained in this paper and they are not reported in previous approaches. The last term in (\[P\]) and (\[M\]) describes the magnetoelectric effect in a moving dielectric in a new way. In Sec. 4, we have represented all 4D geometric quantities from (\[P\]) and (\[M\]) in the standard basis in order to compare them with some usual formulations. This procedure yields Eqs. (\[po\]) and (\[ma\]). In Sec. 5, Minkowski’s constitutive relations with the 3-vectors (\[de\]) are quoted. The equations (\[de\]) are considered by the whole physics community to be the fundamental constitutive equations for moving media. In Sec. 5.1, the relations (\[de\]) are written in equivalent forms as constitutive equations which explicitly express the 3-vectors $\mathbf{P}$ and $\mathbf{M}$ as functions of the 3-vectors $\mathbf{E}$ and $\mathbf{B}$, (\[pl\]) and (\[mg\]), or (\[mp\]). These forms of Minkowski’s constitutive relations are compared with our relations (\[po\]) and (\[ma\]), i.e., with Eqs. (\[pc\]) and (\[ma1\]), and several essential differences are pointed out. It is argued that these differences appear since Minkowski’s constitutive relations are with the 3-vectors and they are derived using the UT and not the LT. In Sec. 5.2, it is shown that the same differences remain in the low velocity limit. In Sec. 5.3, the comparison is presented with Minkowski’s constitutive relations that are obtained by means of exterior forms.$^{2}$ It is shown that the constitutive relations with exterior forms from Ref. 2 are completely equivalent to Minkowski’s constitutive relations with the 3-vectors and, accordingly, they also differ from the relations obtained in this paper. In Sec. 6, Eq. (\[i\]) represents the interaction term in the Lagrangian for the interaction between the electromagnetic field $F$ and the dipole moment bivector $D$, whereas Eq. (\[1\]) is its low velocity limit. The last two terms in (\[i\]), or (\[1\]), contain the direct interaction of $E$ with the magnetic dipole moment vector $m$ and of $B$ with the electric dipole moment vector $d$. These terms give the physical explanation for the existence of the magnetoelectric effect in moving media. That explanation markedly differs from the traditional one. In Sec. 7, some remarks are given which refer to the general constitutive relations. In Sec. 8, the conclusions are presented. **2. A Short Review of Some Previous Results** We shall deal with 4D geometric quantities, i.e. in the geometric algebra formalism. For the exposition of the geometric algebra see Ref. 5. The generators of the spacetime algebra are four basis vectors $\left\{ \gamma _{\mu }\right\}$, $\mu =0...3$, satisfying $\gamma _{\mu }\cdot \gamma _{\nu }=\eta _{\mu \nu }=\mathrm{diag}(+---)$.
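As a quick concrete check of this defining relation, the abstract basis vectors can be represented by Dirac matrices, for which the symmetrized product reproduces $\eta_{\mu\nu}=\mathrm{diag}(+---)$. A minimal sketch (ours; the Dirac representation is just one convenient matrix realization of the abstract algebra):

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def block(a, b, c, d):
    return np.block([[a, b], [c, d]])

# Dirac representation: gamma^0 = diag(I, -I); gamma^k off-diagonal Pauli
gammas = [block(I2, 0*I2, 0*I2, -I2)] + \
         [block(0*I2, s, -s, 0*I2) for s in (sx, sy, sz)]

# gamma_mu . gamma_nu := (gamma_mu gamma_nu + gamma_nu gamma_mu)/2 = eta_mu_nu
# (the anticommutator is proportional to the identity, so read off [0, 0])
eta = np.array([[(g1 @ g2 + g2 @ g1)[0, 0].real/2 for g2 in gammas]
                for g1 in gammas])
print(eta)  # diag(+1, -1, -1, -1)
```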
This basis, the standard basis, is a right-handed orthonormal frame of vectors in the Minkowski spacetime $M^{4}$ with $\gamma _{0}$ in the forward light cone, $\gamma _{0}^{2}=1$ and $\gamma _{k}^{2}=-1$ ($k=1,2,3$). The standard basis $\left\{ \gamma _{\mu }\right\} $ corresponds to Einstein’s system of coordinates in which the Einstein synchronization of distant clocks$^{6}$ and Cartesian space coordinates $x^{i}$ are used in the chosen inertial frame of reference. First, we briefly expose the formulation of the field equations from Ref. 3. As shown in Ref. 3, the field equations (5), $\partial (\varepsilon _{0}F+\mathcal{M})=j^{(C)}/c$; $\partial \cdot (\varepsilon _{0}F+\mathcal{M})=j^{(C)}/c$, $\partial \wedge F=0$, can be taken as the primary equations for the electromagnetism in moving media. The bivector $F=F(x)$ represents the electromagnetic field and, as shown in Ref. 4, it can be taken as the primary quantity for the whole electromagnetism. $j^{(C)}$ is the conduction current density of the *free* charges. $\mathcal{M}$ is the generalized magnetization-polarization bivector $\mathcal{M}=\mathcal{M}(x)$, which is connected with the magnetization-polarization current density of the *bound* charges $j^{(\mathcal{M})}=-c\partial \mathcal{M}=-c\partial \cdot \mathcal{M}$. (According to Eq. (3),$^{3}$ the total current density vector $j$ can be decomposed as $j=j^{(C)}+j^{(\mathcal{M})}$.) The field equation with sources is written in the ‘source representation’ in Eq. (7)$^{3}$ $\partial \
{ "pile_set_name": "ArXiv" }
null
--- abstract: 'An approximation model based on convolutional neural networks (CNNs) is proposed for flow field predictions. The CNN is used to predict the velocity and pressure field in unseen flow conditions and geometries given the pixelated shape of the object. In particular, we consider Reynolds Averaged Navier-Stokes (RANS) flow solutions over airfoil shapes. The CNN can automatically detect essential features with minimal human supervision and is shown to effectively estimate the velocity and pressure field orders of magnitude faster than the RANS solver, making it possible to study the impact of the airfoil shape and operating conditions on the aerodynamic forces and the flow field in near-real time. The use of specific convolution operations, parameter sharing, and robustness to noise are shown to enhance the predictive capabilities of the CNN. We explore the network architecture and its effectiveness in predicting the flow field for different airfoil shapes, angles of attack, and Reynolds numbers.' author: - Saakaar Bhatnagar - Yaser Afshar - Shaowu Pan - Karthik Duraisamy - Shailendra Kaushik date: 'Received: date / Revised version: date' title: Prediction of Aerodynamic Flow Fields Using Convolutional Neural Networks --- Introduction {#intro} ============ With advances in computing power and computational algorithms, simulation-based design and optimization has matured to a level where it plays a significant role in an industrial setting. In many practical engineering applications, however, the analysis of the flow field tends to be the most computationally intensive and time-consuming part of the process. These drawbacks make the design process tedious, time consuming, and costly, requiring a significant amount of user intervention in design explorations, thus proving to be a barrier between designers and the engineering process. Data-driven methods have the potential to augment [@Duraisamy:2019] or replace [@Guo:2016] these expensive high-fidelity analyses with less expensive approximations. Learning representations from data, especially in the presence of spatial and temporal dependencies, has traditionally been limited to hand-crafting of features by domain experts. Over the past few years, deep learning approaches [@Bengio:2009; @Schmidhuber:2015] have shown significant successes in learning from data, and have been successfully used in the development of novel computational approaches [@Raissi:2018; @Raissi:2018b; @Raissi:2019]. Deep learning presents a fast alternative solution as an efficient function approximation technique in high-dimensional spaces. Deep learning architectures such as deep neural networks (DNNs), routinely used in data mining, are well-suited for application to big, high-dimensional data sets, to extract multi-scale features. Deep convolutional neural networks (CNNs) belong to a class of DNNs most commonly applied to the analysis of visual imagery. Previous works [@Lecun:1998; @Taylor:2010; @Zuo:2015] have illustrated the promise of CNNs to learn high-level features even when the data has strong spatial and temporal correlations. The increasing attention being received by CNNs in fluid mechanics partly originates from their potential benefit of flexibility in shape representation and scalability for 3D and transient problems. Figure \[fig:fig1\] illustrates the simplified layout of a typical CNN, LeNet-5 [@Lecun:1998], applied to the handwritten digit recognition task.
The main advantage of a CNN is that it exploits the low-dimensional high-level abstraction by convolution. The key idea of a CNN is to learn the representation and then to use a fully connected standard layer to fit the relationship between the high-level representation and the output. State of the art in application of CNNs in fluid dynamics {#State_of_the_art} --------------------------------------------------------- The use of deep neural networks in computational fluid dynamics has recently been explored in some rudimentary contexts. [@Guo:2016] reported the analysis and prediction of non-uniform steady laminar flow fields around bluff body objects by employing a convolutional neural network (CNN). The authors reported a computational cost lower than that required for numerical simulations by a GPU-accelerated CFD solver. Though this work was pioneering in the sense that it demonstrated generalization capabilities, and that CNNs can enable a rapid estimation of the flow field, the emphasis was on qualitative estimates of the velocity field, rather than on precise aerodynamic characteristics. [@Miyanawala:2017] used a CNN to predict aerodynamic force coefficients of bluff bodies at a low Reynolds number for different bluff body shapes. They presented a data-driven method using a CNN and stochastic gradient descent for the model reduction of the Navier-Stokes equations in unsteady flow problems. [@Lee:2017; @Lee:2018] used a generative adversarial network (GAN) to predict unsteady laminar vortex shedding over a circular cylinder. They presented the capability of successfully learning and predicting both spatial and temporal characteristics of the laminar vortex shedding phenomenon. [@hennigh:2017b] presented an approach that uses a DNN to compress both the computation time and memory usage of Lattice Boltzmann flow simulations. The author employed convolutional autoencoders and residual connections in an entirely differentiable scheme to shorten the state size of the simulation and learn the dynamics of this compressed form. [@Tompson:2016] proposed a data-driven approach for calculating numerical solutions to the inviscid Euler equations for fluid flow. In this approach, approximate inference is performed on the sparse linear system used to enforce the Navier-Stokes incompressibility condition, the “pressure projection” step. This approach cannot guarantee an exact solution to the pressure projection step, but they showed that it empirically produces very stable divergence-free velocity fields whose runtime and accuracy are better than those of the Jacobi method while being orders of magnitude faster. [@Zhang:2017] employed a CNN as a feature extractor for low-dimensional surrogate modeling. They presented the potential of learning and predicting lift coefficients using the geometric information of the airfoil and operating parameters like the Reynolds number, Mach number, and angle of attack. However, the output is not the flow field around the airfoil but the pressure coefficients at several locations. It is unclear whether this model would perform well in predicting the drag and pressure coefficients while producing the flow field at the same time. The primary contribution of the present work is a framework that can be used to predict the flow field around different geometries under variable flow conditions.
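To fix ideas, the following is a minimal sketch of the kind of shared encoder-decoder CNN surrogate discussed above, written against the libtorch (PyTorch C++) API. The layer widths, the $128\times128$ input resolution, and the single decoder head producing three output channels $(u,v,p)$ are illustrative assumptions, not the architecture used in this work.

```cpp
#include <torch/torch.h>
#include <iostream>

// Shared encoder-decoder surrogate: pixelated geometry in, flow fields out.
struct FlowNetImpl : torch::nn::Module {
  torch::nn::Sequential encoder{nullptr}, decoder{nullptr};

  FlowNetImpl() {
    // Encoder: three strided convolutions compress the geometry image
    // (e.g. a binary mask or signed distance field) into a feature map.
    encoder = torch::nn::Sequential(
        torch::nn::Conv2d(torch::nn::Conv2dOptions(1, 64, 4).stride(2).padding(1)),
        torch::nn::ReLU(),
        torch::nn::Conv2d(torch::nn::Conv2dOptions(64, 128, 4).stride(2).padding(1)),
        torch::nn::ReLU(),
        torch::nn::Conv2d(torch::nn::Conv2dOptions(128, 256, 4).stride(2).padding(1)),
        torch::nn::ReLU());
    // Shared decoder: transposed convolutions upsample back to the input
    // resolution; the final layer emits 3 channels (u, v velocity, pressure p).
    decoder = torch::nn::Sequential(
        torch::nn::ConvTranspose2d(torch::nn::ConvTranspose2dOptions(256, 128, 4).stride(2).padding(1)),
        torch::nn::ReLU(),
        torch::nn::ConvTranspose2d(torch::nn::ConvTranspose2dOptions(128, 64, 4).stride(2).padding(1)),
        torch::nn::ReLU(),
        torch::nn::ConvTranspose2d(torch::nn::ConvTranspose2dOptions(64, 3, 4).stride(2).padding(1)));
    register_module("encoder", encoder);
    register_module("decoder", decoder);
  }

  torch::Tensor forward(torch::Tensor x) {
    return decoder->forward(encoder->forward(x));
  }
};
TORCH_MODULE(FlowNet);

int main() {
  FlowNet net;
  auto geometry = torch::rand({8, 1, 128, 128});  // batch of pixelated shapes
  auto field = net->forward(geometry);            // predicted (u, v, p) fields
  std::cout << field.sizes() << std::endl;        // expect [8, 3, 128, 128]
}
```

A separated-decoder variant in the spirit of [@Guo:2016] would replicate the decoder once per output variable; sharing one decoder head across the three fields, as above, reuses most of the parameters and is the cheaper of the two options.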
Towards this goal and following [@Guo:2016], we propose a framework with a general and flexible approximation model for near real-time prediction of non-uniform steady RANS flow in a domain, based on convolutional neural networks. In this framework, the flow field can be extracted from simulation data by learning the relationship between an input feature extracted from the geometry and the ground truth from a RANS simulation. The standard convergence requirements of the RANS solver, its number of iterations, and its runtime are then irrelevant to the prediction process, and we can directly predict the flow behavior in a fraction of the time. In contrast to previous studies, the present work is focused on a more rigorous characterization of aerodynamic characteristics. The present study also improves on computational aspects. For instance, [@Guo:2016] use a separated decoder, whereas the present work employs shared encoding and decoding layers, which are computationally efficient compared to the separated alternatives. Methodology {#Methodology} ============ CFD Simulation {#CFD} -------------- In this work, flow computations and analyses are performed using the OVERTURNS CFD code [@Duraisamy:2005; @Lakshminarayan:2010]. This code solves the compressible RANS equations using a preconditioned dual-time scheme [@Pandya:2003]. Iterative solutions are pursued using the implicit approximate factorization method [@Pulliam:1981]. Low Mach preconditioning [@Turkel:1999] is used to improve both the convergence properties and the accuracy of the spatial discretization. A third order Monotonic Upwind Scheme for Conservation Laws (MUSCL) [@VanLeer:1979] with Koren’s limiter [@Koren:1993] and Roe’s flux difference splitting [@Roe:1986] is used to compute the inviscid terms. Second order accurate central differencing is used for the viscous terms. The RANS closure is the SA [@Spalart:1992] turbulence model, and the $\gamma - \overline{Re_{\theta t}}$ model [@Medida:2011] is used to capture the effect of flow transition. No-slip boundary conditions are imposed on the airfoil surface. The governing equations are provided in the Appendix. Simulations are performed over the S805 [@Somers:1997a], S809 [@Somers:1997b], and S814 [@Somers:2004] airfoils. S809 and S814 are among a family of airfoils which contain a region of pressure recovery along the upper surface that induces a smooth transition from laminar to turbulent flow (a so-called “transition ramp”). These airfoils are utilized in wind turbines [@Aranake:2012]. Computations are performed using structured C-meshes with dimensions $394 \times 124$ in the wrap-around and normal directions respectively. Figure \[fig:fig2\] shows the airfoils and their near-body meshes. Simulations are performed at Reynolds numbers $0.5,~1,~2,~\text{and}~3 \times 10^6$ and a low Mach number of $0.2$
{ "pile_set_name": "ArXiv" }
null
--- abstract: 'We analyse the proposal of sliding phases (SP) in layers hosting global U(1) symmetric variables with finite inter-layer Josephson coupling. Based on the Kosterlitz-Thouless renormalization group (RG) approach, such phases were predicted to exist in various layered (or 1D quantum coupled) systems. The key in the RG argument is treating the coupling as though the variables are non-compact. Large scale Monte Carlo simulations of a layered model, where the SP is supposed to exist, find no indication of such a phase. Instead, 3D behavior is observed. This result is consistent with the asymptotically exact analytical solution. A generic argument against SP in translationally invariant systems with short range interactions is provided. We have also suggested an alternative model for the SP – adding long-range interactions to the inter-layer Josephson term.' author: - 'S. Vayl' - 'A.B. Kuklov' - 'V. Oganesyan' title: 'Sliding phases in U(1) symmetric systems – mirage of the renormalization group' --- Introduction ============ The idea behind the SP was put forward by Efetov in 1979 in the context of a layered superconductor with parallel magnetic field [@Efetov]. It was suggested that the field can suppress the inter-layer Josephson coupling so that the low energy properties of this 3D system can be described as being essentially of 2D character. Later, in Ref. [@Korshunov] it was shown that the frustration due to the magnetic field is not sufficient to fully suppress the coupling. In the context of quantum 1D chains (equivalent to 2D classical layers by virtue of the quantum to classical mapping) the possibility of decoupling between chains has also been explored [@PWA; @Wen]. The main argument for such a decoupling is based on the scaling dimension of the Josephson coupling determined with respect to the Luttinger liquid parameter in each chain: if it is larger than 2, the coupling should become irrelevant [@Wen]. These proposals have been criticized in Refs. [@Castellani; @Fabrizio] where it was shown that the inter-chain tunneling is always relevant. Further argumentation in favor of the SP and, actually, the name [*Sliding Phases*]{} have been proposed in Refs. [@Lubensky; @Toner] where inter-layer gradient couplings between classical XY variables in each layer have been considered in addition to the Josephson one. Such gradient terms can independently control the scaling dimensions of the Josephson coupling and of the vortex fugacity in each layer, so that the first one can become irrelevant above some temperature $T_d$ (of the [*dimensional reduction*]{} [@Sondhi]) which is below the temperature of the Berezinskii-Kosterlitz-Thouless (BKT) transition in the layers. Thus, there is a range of temperatures where the SP is supposed to exist. This approach was also developed for the case of quantum 1D Luttinger liquids coupled by both the Josephson and the gradient terms [@Kane_2001; @Ashwin_2001], which are the analog of the Andreev-Bashkin drag effect [@AB_effect]. It is important to note that the proposal of SP is based on applying the RG logic to compact variables characterized by global U(1) symmetry. While these early suggestions were more of a purely academic interest, the expanding capabilities of ultra-cold-atom techniques in recent years emphasize the importance of these suggestions, especially in the context of possible new phases in composite lattices [@Cazalila] and in the presence of disorder [@Demler].
In more general terms, the question is whether it is possible to realize a phase transition from a low- to higher-dimensional behavior. Here we analyze one of the simplest classical XY systems characterized by gradient interactions and the Josephson coupling $u$. The gradient terms are chosen in such a way that the SP is supposed to exist in some range where the renormalized value $u_r$ of $u$ scales to zero as the layer size $L$ grows. We will present the results of large scale Monte Carlo simulations of this system. Our analysis is based on the dual formulation of the model – in terms of closed loops. The main result is that no SP state exists in such a system. Instead, the value of $u_r$ is always finite. This behavior will be compared with the standard asymmetric XY layered model where no SP is expected to occur. We will also derive the analytical result for $u_r$ in the asymptotic limit when the intra-layer stiffness is much larger than $u$. The numerical results have been found to be consistent with the analytical solutions for both models. Our paper is organized as follows. In Sec. \[Sec:I\] we introduce the bilayer model and provide the RG solution for SP. Then, we construct the dual representation in Sec. \[sec:dual\]. Using the duality we have found the asymptotic analytical solution for the renormalized Josephson coupling $u_r$ in Sec. \[sec:AS\]. The Monte Carlo simulations of the bilayer model are presented in Sec. \[sec:num\]. Then, in Sec. \[Sec:Nz\] we present the results on a stack of bilayers along the same lines as for the bilayer. Finally, in Sec. \[sec:dis\] we discuss the implications of our analytical and numerical results and also provide an alternative model for the SP. Bilayer model of SP {#Sec:I} =================== Here we introduce a model of two asymmetric parallel layers, each being a square lattice of linear size $L=1,2,3,...$ (in terms of the inter-site shortest distance), characterized by two fields $\psi_1=\exp(i\phi_{1})$ and $\psi_2=\exp(i\phi_{2})$ on the layers $z=1,2$, respectively. The action can be written as $$\label{2N} H = -\sum_{\langle ij\rangle}\Big[t_1\cos\big(\nabla_{ij} \phi_1 - A_{ij}\big) + t_2\cos\big(\nabla_{ij} \phi_2 - g_2 A_{ij}\big) - \frac{g}{2}A^2_{ij}\Big] - u\sum_i \cos\big(\phi_2(i)-\phi_1(i)\big),$$ where $t_1>0, t_2>0, g>0$ and $g_2$ are parameters; $\langle ij\rangle$ denotes summation over nearest neighbor sites within each layer; $\nabla_{ij} \phi_a \equiv \phi_a(i) - \phi_a(j)$; $A_{ij}$ is a bond vector field (that is, $A_{ij}=-A_{ji}$) oriented along the bond $\langle ij\rangle$. It is introduced in order to generate the “current-current” interaction (cf. [@Lubensky; @Toner; @Sondhi; @Kane_2001; @Ashwin_2001]) consistent with the compact nature of the fields $\phi_{1,2}$. This action is to be used in the partition function $$\label{ZZ2} Z=\int \mathcal{D}A\, \mathcal{D}\phi_{1}\, \mathcal{D}\phi_{2}\, \exp(- H),$$ where the temperature is absorbed into the parameters $t_1,t_2,u,g$. Our focus is on verifying the applicability of the RG analysis to the renormalization of the Josephson coupling $u$. Hence, we will not discuss physical origins of the variables and the parameters. The RG solution for bilayer {#sec:RG} --------------------------- In the approximation ignoring the compact nature of the variables, the terms $-\cos(\nabla_{ij} \phi_1 - A_{ij})$ and $-\cos(\nabla_{ij} \phi_2 -g_2 A_{ij})$ are replaced by $(\nabla_{ij} \phi_1 - A_{ij})^2/2$ and $(\nabla_{ij} \phi_2 -g_2 A_{ij})^2/2$, respectively.
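To make the Gaussian step that follows explicit (this is just a completion-of-the-square exercise, spelled out here for convenience): in the quadratic approximation each bond carries the action
$$S_{ij} = \frac{t_1}{2}\big(\nabla_{ij}\phi_1 - A_{ij}\big)^2 + \frac{t_2}{2}\big(\nabla_{ij}\phi_2 - g_2 A_{ij}\big)^2 + \frac{g}{2}A_{ij}^2,$$
which is minimized by
$$A_{ij}^{*} = \frac{t_1\nabla_{ij}\phi_1 + g_2 t_2\,\nabla_{ij}\phi_2}{t_1 + g_2^2 t_2 + g}.$$
Substituting $A_{ij}^{*}$ back (the Gaussian fluctuations around it contribute only a field-independent normalization) leaves a quadratic form in $\nabla_{ij}\phi_1$ and $\nabla_{ij}\phi_2$ whose coefficients are exactly the $K_{ab}$ quoted below.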
Then, the Gaussian integration over $A_{ij}$ can be carried out explicitly in Eq. (\[ZZ2\]), so that (\[2N\]) in terms of the remaining variables becomes $$\label{2N0} H_0= \frac{1}{2}\sum_{\langle ij\rangle} K_{ab}\, \nabla_{ij} \phi_a \nabla_{ij} \phi_b - u\sum_i \cos(\phi_2-\phi_1),$$ where summation over the repeated indices $a,b =1,2$ is implied and the matrix $K_{ab}$ is related to the original parameters as $$\label{Ktg} K_{11}= \frac{t_1(g_2^2 t_2+g)}{t_1+g_2^2 t_2+g}, \qquad K_{22}= \frac{t_2(t_1+g)}{t_1+g_2^2 t_2+g}, \qquad K_{12} = -\frac{t_1 t_2 g_2}{t_1+g_2^2 t_2+g}.$$ The stability of $H_0$ is guaranteed by $$\label{stab} K_{11}K_{22} - K_{12}^2= \frac{t_1 t_2\, g}{t_1+g_2^2 t_2+g} >0.$$ The condition for SP can be obtained along the lines of the logic of [@Wen; @Lubensky; @Toner; @Sondhi; @Kane_2001; @Ashwin_2001] which ignores the compactness of $\phi_{1,2}$. Specifically, introducing the variables $\varphi =\phi_1 + \phi_2$ and $\theta= \phi_2 -\phi_1$ and then integrating out $\varphi$, the resulting partition function becomes $$\label{Z_0} Z_0=\int \mathcal{D}\theta\, {\rm e}^{-H_\theta}, \qquad H_\theta = \int d^2 x \left[\frac{K}{2}(\nabla\theta)^2 - u\cos\theta\right],$$ where the notation $$\label{K} K= \frac{K_{11}K_{22}-K_{12}^2}{K_{11}+K_{22}+2K_{12}}$$ is introduced and the long wave limit is considered – so that the summation along the layers is replaced by the integration $\int d^2 x ...$. As long as the compactness of $\theta$ is ignored, Eqs. (\[Z\_0\]) represent the standard sine-Gordon model in 2D. The RG analysis predicts (see, e.g., [@Lubensky_book]) that at $
{ "pile_set_name": "ArXiv" }
null
--- abstract: | This paper focuses on reinforcement learning (RL) with limited prior knowledge. In the domain of swarm robotics for instance, the expert can hardly design a reward function or demonstrate the target behavior, forbidding the use of both standard RL and inverse reinforcement learning. Although with limited expertise, the human expert is still often able to express preferences and rank the agent demonstrations. Earlier work has presented an iterative preference-based RL framework: expert preferences are exploited to learn an approximate policy return, thus enabling the agent to achieve direct policy search. Iteratively, the agent selects a new candidate policy and demonstrates it; the expert ranks the new demonstration comparatively to the previous best one; the expert’s ranking feedback enables the agent to refine the approximate policy return, and the process is iterated.\ In this paper, preference-based reinforcement learning is combined with active ranking in order to decrease the number of ranking queries to the expert needed to yield a satisfactory policy. Experiments on the mountain car and the cancer treatment testbeds witness that a couple of dozen rankings are enough to learn a competent policy. author: - Riad Akrour - Marc Schoenauer - Michèle Sebag bibliography: - 'ourbib.bib' title: 'APRIL: Active Preference-learning based Reinforcement Learning' --- \ **Acknowledgments**. The first author is funded by FP7 European Project [*Symbrion*]{}, FET IP 216342, <http://symbrion.eu/>. This work is also partly funded by ANR Franco-Japanese project Sydinmalas ANR-08-BLAN-0178-01.
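To make the iterative protocol in the abstract concrete, here is a minimal, self-contained C++ sketch of the loop structure. The policy representation, the perturbation-based candidate selection, and the scripted expert are deliberate simplifications for illustration: APRIL's actual candidate selection uses the learned policy-return model together with an active ranking criterion, which is precisely what keeps the number of expert queries small.

```cpp
#include <functional>
#include <iostream>
#include <random>
#include <utility>
#include <vector>

// A policy is reduced here to a parameter vector; any parameterization works.
using Policy = std::vector<double>;

int main() {
  std::mt19937 rng(42);
  std::normal_distribution<double> noise(0.0, 0.1);

  // Stub expert: prefers policies whose parameters are close to a target.
  // In the real setting this is a human ranking two demonstrations.
  auto expert_prefers = [](const Policy& a, const Policy& b) {
    auto score = [](const Policy& p) {
      double s = 0;
      for (double x : p) s -= (x - 1.0) * (x - 1.0);
      return s;
    };
    return score(a) > score(b);
  };

  Policy best(5, 0.0);                              // current best policy
  std::vector<std::pair<Policy, Policy>> rankings;  // (winner, loser) pairs

  for (int iter = 0; iter < 20; ++iter) {
    // 1. Select a new candidate policy (here: a random perturbation of the
    //    incumbent; APRIL instead maximizes an active-ranking utility).
    Policy candidate = best;
    for (double& x : candidate) x += noise(rng);

    // 2. Demonstrate both policies and query the expert's preference.
    if (expert_prefers(candidate, best)) {
      rankings.emplace_back(candidate, best);
      best = candidate;
    } else {
      rankings.emplace_back(best, candidate);
    }
    // 3. The accumulated rankings would refine the surrogate policy return
    //    here, before the next candidate is chosen.
  }
  std::cout << "ranking queries used: " << rankings.size() << "\n";
}
```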
{ "pile_set_name": "ArXiv" }
null
--- abstract: 'We review the status of non-perturbative analyses of multi-jet event shape distributions and mean values, highlighting the physical insight into QCD dynamics they can provide.' author: - 'A. Banfi' title: 'Why multi-jet studies?' --- INTRODUCTION ============ In every QCD observable, perturbative (PT) and non-perturbative (NP) dynamics are inseparable, the reason being that the observed degrees of freedom are not quarks and gluons, the elementary particles entering the QCD bare Lagrangian, but hadrons, whose description in terms of partons is well beyond the domain of PT theory. Fortunately, at least for sufficiently inclusive observables, the difference between parton and hadron level predictions is suppressed by inverse powers of the hard scale of the process. As long as the latter is short-distance dominated, one can safely compute QCD observables using the PT parton language, and interpret the discrepancy with experimental data, in case any is seen, as the need for NP hadronisation corrections. One can then adopt two complementary approaches. One is to consider observables whose hadronisation corrections are almost negligible, for instance total cross sections or inclusive non-QCD particle distributions. These observables can be computed in PT QCD and exploited to determine the value of the coupling ${\alpha_s}$ [@alpha-exp]. The other approach is to consider observables which are very sensitive to NP physics, in order to gain insight into the hadronisation mechanism. The best known examples are event-shape distributions and mean values. These variables are constructed by combining final state momenta to obtain a number that gives an idea of the geometrical properties of the hadron energy-momentum flow. The value of an event shape $V$ is related to the scale at which hadrons are probed, so that measuring event shape distributions makes it possible to study physics at very different scales, which range from the domain of PT QCD ($V\sim 1$) down to the confinement region ($V \sim \Lambda_\mathrm{QCD}$) where the quark/gluon language is scarcely applicable. For this reason, although shape variables have been used to measure ${\alpha_s}$, they are ideal for investigating properties of QCD dynamics. So far, both experimental and theoretical investigations have been restricted to two-jet event shapes (see [@DSreview] for a review). Here we will discuss how the extension of such studies to multi-jet event shapes can shed further light on the interplay between PT and NP physics in QCD observables. TWO-JET STUDIES =============== NP correction to event shapes {#sec:NPcorr} ----------------------------- Experimental data clearly indicate that PT QCD alone is not enough to predict event-shape distributions and mean values [@evshape-exp]. Let us consider the mother of all event shapes, the thrust [@thrustdef] $$\label{eq:thrust} T = \max_{\vec n_T}\frac{ \sum_h|\vec p_h \cdot \vec n_T|}{\sum_h |\vec p_h|}\>, \qquad \tau \equiv 1-T\>.$$ The thrust is a measure of particle alignment, and is a typical two-jet variable, since $\tau=1-T$ vanishes in the limit of two narrow jets. As one can see from fig.
\[fig:meanT\], the dependence of the mean value of $1-T$ on the $e^+e^-$ centre-of-mass energy $Q$ is correctly described only after adding to the QCD fixed order prediction a NP $1/Q$-suppressed correction $$\label{eq:tau-mean} {\left\langle \tau \right\rangle}={\left\langle \tau \right\rangle}_{\mathrm{PT}}+ {\left\langle \tau \right\rangle}_{\mathrm{NP}}\>,\qquad {\left\langle \tau \right\rangle}_{\mathrm{PT}}= {\alpha_s}(Q) \>\tau_1+{\alpha_s}^2(Q)\> \tau_2+\ldots \>,\qquad {\left\langle \tau \right\rangle}_{\mathrm{NP}}\simeq \frac{C_\tau}{Q}\>,$$ where $C_\tau\simeq 1~\mathrm{GeV}$ when ${\left\langle \tau \right\rangle}_{\mathrm{PT}}$ is evaluated at next-to-leading order (NLO). ![image](mean_thrust.eps){width=".5\textwidth"} One might think that such a discrepancy could be removed by including higher orders in the PT expansion. Actually, Sterman observed that a term $18 \>\alpha_s^3$ can already mimic a $1/Q$ behaviour [@Sterman]. However, theoretical analyses show that the PT series is an intrinsically ill-defined object, since it is doomed to diverge factorially (for a review, see [@Beneke]). Attempts to regularise such a divergence give rise to a power-suppressed ambiguity, known as an infrared (IR) renormalon. Looking more closely at the origin of the divergence, one can see that it arises when resumming [*renormalon chain*]{} graphs containing an arbitrary number of linked quark or gluon bubbles [@renormalon]. A renormalon resummation for the thrust mean value yields $$\label{eq:tau-PT} {\left\langle \tau \right\rangle}_{{\mathrm{PT}}} \simeq \frac{4C_F{\alpha_s}}{\pi}\sum_{n=0}^\infty n!\left(\frac{2\beta_0{\alpha_s}}{4\pi}\right)^n n^{\beta_1} \simeq 2 C_F \int\frac{dk_t}{k_t}d\eta \> \frac{{\alpha_s}(k_t)}{\pi} \>\frac{k_t}{Q}e^{-|\eta|}\>,$$ where $\beta_0$ and $\beta_1$ are the first two coefficients of the QCD beta function. After a renormalon analysis, the series in eq. (\[eq:tau-PT\]) gives a $1/Q$ ambiguity. The last equality in eq. (\[eq:tau-PT\]) results from the fact that the series can be seen as the PT expansion of the integral of the running coupling ${\alpha_s}(k_t)$ down to the infrared. Since the PT coupling develops a Landau singularity at low momenta, one may be tempted to ascribe the divergence to the presence of the Landau pole in the $k_t$ integration contour. However, eq. (\[eq:tau-PT\]) shows that the divergence of ${\left\langle \tau \right\rangle}_{\mathrm{PT}}$ is determined only by $\beta_0$ and $\beta_1$, i.e. it is independent of the particular coupling adopted. This naive observation is confirmed by more refined theoretical analyses, which show that IR renormalons are always present whatever the behaviour of the coupling at low scales [@DU]. The main message of this discussion is that, in order to obtain a satisfactory [*theoretical*]{} understanding of event shape observables, PT theory must always be supplemented with information on QCD dynamics at low scales. In particular, a size of the power correction of around $1~\mathrm{GeV}$ suggests that NP effects arise from partons which have started the blanching process that leads to the formation of hadrons. Two-jet shapes and the Feynman tube model {#sec:Feynman} ----------------------------------------- Consider now the specific case of an event shape $V$ in $e^+ e^-$ annihilation that vanishes in the two-jet limit (thrust, $C$-parameter, heavy-jet mass, etc.).
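As a simple orientation for what follows (this is nothing more than the integral of the rapidity weight already displayed in eq. (\[eq:tau-PT\])): for the thrust, the weight $e^{-|\eta|}$ multiplying $k_t/Q$ integrates to
$$\int_{-\infty}^{\infty} d\eta \; e^{-|\eta|} = 2\>,$$
so that soft emissions distributed uniformly in rapidity contribute an amount $\simeq 2\,{\left\langle k_t \right\rangle}/Q$ to ${\left\langle \tau \right\rangle}$; this is exactly the structure derived in general terms below.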
The mean value ${\left\langle V \right\rangle}$ will receive its main contribution from hard particles whose transverse momenta (with respect to the thrust axis) are of the order of the hard scale $Q$. There are however soft hadrons, with transverse momenta up to about $1\mathrm{GeV}$, whose contribution to ${\left\langle V \right\rangle}$ cannot be fully computed with PT techniques. The Feynman tube model [@tube] gives a phenomenological description of such hadrons by assuming that their distribution is uniform in rapidity: $$\label{eq:dnh} \frac{dn_h}{d\ln k_t d\eta} = \Phi_h(k_t)\>.$$ Observing also that their contribution $\delta V$ to $V$ is additive $$\label{eq:deltaV} \delta V \simeq \sum_i \frac{k_{ti}}{Q} f_V(\eta_i)\>,$$ one obtains a $1/Q$-suppressed NP correction ${\left\langle V \right\rangle}_{\mathrm{NP}}$ to ${\left\langle V \right\rangle}$, given by $$\label{eq:VNP} {\left\langle V \right\rangle}_{\mathrm{NP}}= \sum_h \int \frac{dk_t}{k_t}d\eta \>\Phi_h(k_t) \frac{k_t}{Q} f_V(\eta) = c_V\frac{{\left\langle k_t \right\rangle}_{\mathrm{NP}}}{Q}\>,\qquad c_V = \int_{-\!\infty}^\infty\!\!\! d\eta \>f_V(\eta)\>,$$ where $c_V$ is a calculable coefficient and $$\label{eq:ktnp} {\left\langle k_t \right\rangle}_{\mathrm{NP}}=
{ "pile_set_name": "ArXiv" }
null
--- abstract: 'Recent evidence for pentaquark baryons is critically reviewed in the light of new high statistics data. The search of the WA89 experiment for the $\Xi^{--}(1860)$ is presented in detail and consequences of its non-observation are discussed.' author: - | Josef Pochodzalla\ [*Institut für Kernphysik, Universität Mainz*]{}\ title: | PENTAQUARKS - FACTS AND MYSTERIES\ or SISYPHUS AT WORK --- The Myth of Sisyphus ==================== Giving a talk on pentaquarks these days or - even worse - writing afterwards a report for the proceedings reminds one very much of Sisyphus, a man eternally condemned to roll a rock to the top of a mountain, whence the stone would fall back of its own weight. Hardly are the transparencies for the talk finished when the next paper with a new – positive or negative – result appears. In that sense, the present manuscript, written during June 2004, represents an updated version of the talk given at the PANDA workshop in March 2004. But maybe there is an even deeper link between the pentaquark search and the destiny of Sisyphus. Since its advent in 1964 the quark model [@QUARK] has been very much appreciated for describing the vast amount of strongly interacting particles, the so-called hadron zoo. Experimentally there is no doubt of the existence of baryons, made up of three quarks, and mesons, consisting of a quark anti-quark pair. A priori the quark model imposes no upper limit on the number of quarks/anti-quarks a hadron can be built of. However, it is widely agreed that the colour quantum numbers of the constituents should add up to a colour neutral state. As a consequence physicists desperately search for exotic quark and gluon structures which differ from the well known meson and baryon structure. Narrow resonances with exotic quark content would of course be particularly welcome because the theoretical interpretation would be very much simplified. In the past many new particles have been spotted, like the tetraquark $U$(3100) [@WA62_U; @BIS2_U], the $f_J$(2230) seen first by the MARK III collaboration [@fj2230_1] and the $S$(1936) [@S1936]. Unfortunately none of these narrow resonances survived detailed studies with high statistics. So here we go again... The Experimental Situation of the $\Theta^+(1530)$ ================================================== At present twelve experimental groups have reported evidence for a narrow baryonic resonance in the KN channel at a mass of about 1530 MeV/$c^2$ (see Refs. 6-17) (for an updated list of references see [@Theta:Lit]). Based on previous predictions [@Theta:Diakonov] (for some earlier references see also [@Theta:Walliser]) this resonance was - because of its exotic quark content - interpreted as a pentaquark state. As a consequence already the first observations triggered a flood of theoretical papers which is still growing at a rate of about one new paper every second day (top part of Fig. \[fig:WA8901\]). Figure \[fig:WA8902\] shows the first nine published results which gave evidence for the existence of the so-called $\Theta^+(1530)$. Unlike the original publications, I prefer to show here the data points including the statistical error bars. Obviously a common drawback of the individual observations is the limited statistics and hence the limited confidence [@Bit00] of the peaks. A little bit disturbing is also the fact that the magnitude of the effect is nearly independent of the experimental situation.
Because of the low statistics it is important to note that any cuts applied during the search process can modify the statistical significance of an a priori unknown peak, unless the cuts are determined with an independent data sample or Monte Carlo data (see e.g. [@Neeb]). The low statistics of the experiments shown in Fig. \[fig:WA8902\] usually did not allow the data to be separated into two distinct samples. It is furthermore interesting that the positions of the various peaks are not fully consistent. Indeed, doubts were raised quite early because of possible experimental artifacts [@Theta:Dzierba; @Theta:Zavertyaev]. A recent analysis by the HYPER-CP collaboration also underlines the necessity to remove so-called ghost tracks, i.e. near-duplicate tracks, during the analysis [@Theta:HYPERCP]. Using the positive track from a $\Lambda$ decay twice, as a $\pi^+$ and as a proton, produces a peak near 1.54 GeV/$c^2$ (cf. also the discussion on Fig. \[fig:WA8907\] below). Finally, even if the observed peaks were real, more conventional processes cannot be excluded completely at the moment [@Theta:Nussinov; @Theta:Kahana; @Theta:Kishimoto; @Theta:Bicudo] (see however Ref. [@Theta:Llanes]). Since the beginning of this year quite a number of negative results have also become available (see lower part of Fig. \[fig:WA8901\]). No signals of the $\Theta^+(1530)$ could be found by BES [@Theta:BES], HERA-B [@Xi:HERAB], OPAL [@Theta:OPAL], PHENIX [@Theta:PHENIX], DELPHI [@Theta:DELPHI], ALEPH [@Theta:ALEPH], HYPER-CP [@Theta:HYPERCP], E690 [@Theta:E690], CDF [@Theta:CDF] and BABAR [@Theta:BABAR]. Although a direct comparison of the positive and negative results is quite difficult, the discovery potential of the various experiments can be judged by the observed yield of known resonances. Whereas the experiments with a positive result have – if mentioned in the publications at all – typical $\Lambda(1520)$ yields of at most a few hundred, the experiments with negative outcome report in several cases a few thousand identified $\Lambda(1520)$ events. So while, naively counting just the number of reported results, the situation is presently near balance (see Fig. \[fig:WA8901\]), it seems that the critics have already gained an advantage. It is therefore indisputable that further high-statistics experiments are needed to establish the observed resonance beyond any doubt. Once this has been achieved – preliminary high statistics data of the LEPS collaboration seem to confirm their first observation [@Theta:LEPSD] – the observation and non-observation of this resonance in different reactions may help to shed some light on the production mechanism and possibly also on the internal structure of these exotic states. ![*$x_F$ distribution of positive (open symbols) and negative (closed symbols) $\Sigma$ resonances studied by WA89.*[]{data-label="fig:WA8904"}](WA89_sigma.eps){width="5.2cm"} The $\Xi(1860)$ - Another Stone for Sisyphus? ============================================= The interpretation of the observed peaks in terms of a five-quark state was significantly strengthened by the subsequent observation of another member of the anticipated antidecuplet of pentaquarks.
Based on 1640 $\Xi^-$ candidates produced in p+p interactions at 160 GeV/$c$ beam momentum, narrow peak structures at an invariant mass of 1.860 GeV/$c^2$ were observed by the NA49 collaboration in both the $\Xi^-\pi^+$ and the $\Xi^-\pi^-$ channels [@Xi:NA49]. Possible signals of a $\Xi^*$ resonance at 1.860 GeV/$c^2$ decaying into ${\Xi^-}\pi^+$ and $Y\overline{K}$ were reported already in 1977 for K$^-$p interactions at 2.87 GeV/$c$ [@Bri77]. However, no corresponding signals have been seen in other K$^-$ induced reactions (for a compilation and a discussion of these data see Ref. [@Xi:Fischer]). A preliminary analysis of proton-nucleus interactions at 920 GeV/$c$ by the HERA-B collaboration, using a total of 19000 reconstructed $\Xi^-$ and $\overline{\Xi}^+$ events, shows no indication for the $\Xi^{--}$ nor the $\Theta^+$ resonances [@Xi:HERAB]. Searches for the $\Xi(1860)$ resonances are also being performed by the ZEUS, CDF, ALEPH, E690 and BABAR collaborations. The ZEUS data comprise 1361 $\Xi^-$ and 1303 $\overline{\Xi}^+$ events, the CDF sample contains 19150 $\Xi^-$ and 16736 $\overline{\Xi}^+$, and the ALEPH collaboration collected about 1800 $\Xi^-$. Negative – though still preliminary – results have been reported by all three collaborations at the DIS04 conference [@Xi:CDFZEUS]. The E690 [@Theta:E690] and BABAR [@Xi:BABAR] experiments could not find a significant signal despite large data samples of 512000 and 258000 observed $\Xi^-$, respectively. First preliminary results of the WA89 collaboration were presented at the HYP03 conference already in October 2003 [@HYP03]. The final results presented in the following section are available in Ref. [@Xi:WA89]. ![*Upper histogram: $x_F$ distribution of the observed $\Xi^-$ events within a $\pm$2$\sigma$ mass window. Lower histogram: $x_F$ distribution
{ "pile_set_name": "ArXiv" }
null
--- abstract: | Let $H_c$ be the rational Cherednik algebra of type $A_{n-1}$ with spherical subalgebra $U_c=eH_ce$. Then $U_c$ is filtered by order of differential operators, with associated graded ring $\operatorname{gr}U_c={\mathbb{C}}[{\mathfrak{h}}\oplus{\mathfrak{h}}^*]^{{W}}$ where ${{W}}$ is the $n$-th symmetric group. We construct a filtered ${\mathbb{Z}}$-algebra $B$ such that, under mild conditions on $c$: - the category $B\text{-}\mathsf{qgr}$ of graded noetherian $B$-modules modulo torsion is equivalent to $U_c\text{-}\mathsf{mod}$; - the associated graded ${\mathbb{Z}}$-algebra $\operatorname{gr}B$ has $\operatorname{gr}B\text{-}\mathsf{qgr}\simeq \mathsf{Coh}\,\operatorname{Hilb(n)}$, the category of coherent sheaves on the Hilbert scheme of points in the plane. This can be regarded as saying that $U_c$ simultaneously gives a noncommutative deformation of ${\mathfrak{h}}\oplus{\mathfrak{h}}^*/{{W}}$ and of its resolution of singularities $\operatorname{Hilb(n)}\to {\mathfrak{h}}\oplus{\mathfrak{h}}^*/{{W}}$. As the companion paper [@GS2] shows, this result is a powerful tool for studying the representation theory of $H_c$ and its relationship to $\operatorname{Hilb(n)}$. address: - 'Department of Mathematics, Glasgow University, Glasgow G12 8QW, Scotland' - 'Department of Mathematics, University of Michigan, Ann Arbor, MI 48109-1109, USA.' author: - 'I. Gordon' - 'J. T. Stafford' title: Rational Cherednik algebras and Hilbert schemes --- Introduction ============ {#sec101} This is the first of two closely related papers on rational Cherednik algebras. In their short history, Cherednik algebras have been influential in a surprising range of subjects: for example they have been used to answer questions in integrable systems, combinatorics, and symplectic quotient singularities (see [@BEGqi; @gordc; @BFG; @GK]). In this paper we strengthen the connections between Cherednik algebras and geometry by showing that they can be regarded as noncommutative deformations of Hilbert schemes of points in the plane. In the sequel [@GS2] this will be used to show the close relationship between modules over the Cherednik algebra and sheaves on the Hilbert scheme as well as to answer various open problems about these modules. {#intro-1.2} Fix $c\in {\mathbb{C}}$. We assume throughout the paper that $c\notin \frac{1}{2} + {\mathbb{Z}}$ and, for simplicity, we will also assume that $c\not\in \mathbb{R}_{\leq 0}$ in this introduction, see and for the more general case. Let $H_c= H_{1,c}$ be the rational Cherednik algebra of type $A_{n-1}$ with spherical subalgebra $U_c = eH_ce$. The formal definition of $H_c$ is given in but one may regard it as a deformation of the twisted group ring $D({\mathfrak{h}})\ast {{W}}$, where $D({\mathfrak{h}})$ is the ring of differential operators on ${\mathfrak{h}}\cong {\mathbb{C}}^{n-1}$ with the natural action of the symmetric group ${{W}}= \mathfrak{S}_n$. The algebra $U_c$ is then the corresponding deformation of the fixed ring $D({\mathfrak{h}})^{{W}}$. The algebras $U_c$ and $H_c$ have a natural filtration by order of differential operators with associated graded rings $\operatorname{gr}{U_c} \cong {\mathbb{C}}[{\mathfrak{h}}\oplus {\mathfrak{h}}^\ast]^{{{W}}}$ and $\operatorname{gr}{H_c} \cong {\mathbb{C}}[{\mathfrak{h}}\oplus {\mathfrak{h}}^\ast]\ast {{{W}}}$. Thus we may also regard $U_c$ as a deformation of ${\mathbb{C}}[{\mathfrak{h}}\oplus {\mathfrak{h}}^\ast]^{{{W}}}$.
In this introduction we will mostly be concerned with $U_c$, but since $U_c$ and $H_c$ are Morita equivalent (Corollary \[morrat-cor\]) the results we prove for $U_c$ also apply to $H_c$. It is well-known that ${\mathfrak{h}}\oplus{\mathfrak{h}}^*/{{W}}$ has a crepant resolution $\operatorname{Hilb(n)}\to {\mathfrak{h}}\oplus{\mathfrak{h}}^*/{{W}}$, where $\operatorname{Hilb(n)}$ is a variant on the Hilbert scheme of $n$ points in the plane (see for the formal definition). The ring $U_c$ has finite global homological dimension (see Corollary \[gldim\]) and so one should expect that it has the properties of a smooth deformation of ${{\mathbb{C}}[{\mathfrak{h}}\oplus {\mathfrak{h}}^*]}^{{W}}$; in other words its properties should be more closely related to those of $\operatorname{Hilb(n)}$ than to ${\mathfrak{h}}\oplus{\mathfrak{h}}^*/{{W}}$. Hints of this have been reported in [@gordc] and [@BEGfd]: finite dimensional $H_c$-modules deform the sections of some remarkable sheaves on $\operatorname{Hilb(n)}$. The main aim of this paper is to formalise this idea by showing that there is a second way of passing to associated graded objects that maps $U_c\text{-}\mathsf{mod}$ precisely to $\mathsf{Coh}(\operatorname{Hilb(n)})$. {#intro-1.3} We take our cue from the theory of semisimple Lie algebras. When $n=2$, $U_c$ is isomorphic to a factor of $U(\mathfrak{sl}_2)$ [@EG Section 8] and, for all $n$, the properties of $U_c$ are similar to those of $U(\mathfrak{g})/P$, where $P$ is a minimal primitive ideal in the enveloping algebra of a complex semisimple Lie algebra $\mathfrak{g}$ (see, for example [@ginz; @GGOR; @guay]). The intuition from the last paragraph not only holds for enveloping algebras but can also be formalised through the Beilinson-Bernstein equivalences of categories. This gives a diagram $$\begin{CD} D_{\mathcal{B}} @< \sim << U(\mathfrak{g})/P \\ @V \operatorname{gr}VV @VV \operatorname{gr}V \\ {\mathcal{O}}_{T^*\mathcal{B}} @<\tau << {\mathcal{O}}(\mathcal{N}) \end{CD}$$ where $\mathcal{B}=G/B$ is the flag variety, the primitive ideal $P$ has trivial central character and $\tau: T^*\mathcal{B}\to \mathcal{N}$ is the Springer resolution of the nullcone $\mathcal{N}$. The Morita equivalence from the sheaf of differential operators $D_{\mathcal{B}}$ to $ U(\mathfrak{g})/P$ is obtained by taking global sections under the identification $U(\mathfrak{g})/P\cong D(\mathcal{B})$. Ginzburg has raised the question of whether a similar phenomenon holds for Cherednik algebras (see [@GK Conjecture 1.6] for a variant on this conjecture). In other words, can one complete the following diagram? $$\begin{CD} ? @< \sim << U_c \\ @V \operatorname{gr}VV @VV \operatorname{gr}V \\ {\mathcal{O}}_{\operatorname{Hilb(n)}} @< \tau<<{\mathcal{O}}({\mathfrak{h}}\oplus {\mathfrak{h}}^*/{{W}}) \end{CD}$$ The main result of the paper gives a positive answer to this question. Given a graded ring $R$, we write $R\text{-}\mathsf{qgr}$ for the quotient category of noetherian graded $R$-modules modulo those of finite length. Main Theorem {#mainthm-intro} ------------ *There exists a graded ring $B$, filtered by order of differential operators, such that* 1. there is an equivalence of categories $U_c\text{-}\mathsf{mod}\simeq B\text{-}\mathsf{qgr}$; 2. there is an equivalence of categories $\operatorname{gr}B\text{-}\mathsf{qgr}\simeq \mathsf{Coh}(\operatorname{Hilb(n)})$. {#intro-1.4} The construction of $B$ needs some explanation.
For $n>2$, it can be shown that the Hilbert scheme $\operatorname{Hilb(n)}$ is not a cotangent bundle, so we cannot use sheaves of differential operators as a non-commutative model. Instead we take as our starting point Haiman’s description of $\operatorname{H
{ "pile_set_name": "ArXiv" }
null
--- author: - 'Akash Kumar, Anand Sengupta, and Shashikiran Ganesh' title: Autonomous Dome for Robotic Telescope --- Introduction {#sec:intro} ============ Physical Research Laboratory operates a robotic 50cm telescope, the Autonomous Telescope for Variability Studies (ATVS), at its observatory at Mount Abu (latitude: $24^\circ 39^m 9^s$ North, longitude: $72^\circ 46^m 47^s$ East, altitude: $1680 m$), India. The operating conditions are quite tough in terms of the range in humidity, temperature and wind. The telescope is protected from the environment by a fibre glass dome manufactured by Sirius Observatories.\ Dome controller for autonomous operations ========================================= The telescope operates in the robotic mode [@Ganesh2013] and efforts are under way to fully automate it using the Remote Telescope System (RTS2, <http://rts2.org/>). The observatory has over 200 clear nights in a year, but the dome needs to be completely closed and sealed during the Indian monsoon season (generally mid June to September) every year. Apart from this there are occasions when the skies become cloudy and/or the wind increases drastically. For autonomous operations to succeed, therefore, the dome controller needs to be very reliable, with independent inputs/monitoring of the weather conditions, so that the shutters can be closed automatically in case of bad weather.\ Humidity being a major cause of repeated failures of electronic boards, it has not been found feasible to use commercially available solutions for controlling the dome. Hence we decided to go for a locally engineered solution using cheap, general-purpose electronic boards available locally. The objective of this work was to make an autonomous dome controller for a robotic telescope using easily replaceable electronics. The controller is in charge of the dome functions like closing and opening of the dome’s shutter and clockwise and counter-clockwise motion of the dome to track the telescope. It also keeps a record of the position of the dome and can be controlled by a Windows or Linux based computer using appropriate drivers.\ We built the dome controller using the ubiquitous Arduino boards. An Arduino is an open-source physical computing platform based on a simple programmable micro-controller board. It can be programmed via the USB port of a computer using the Arduino IDE (Integrated Development Environment) built with the Processing platform. The Arduino IDE includes support for various electronic components such as encoders and other sensors, relay boards etc. We used one Arduino board to control the dome’s shutter movement (opening and closing) and another one to control the dome rotation (clockwise or counter-clockwise). The two boards communicate with each other using RF transceivers. This is necessitated because the dome rotation controller is connected to the PC and the shutter controller is connected via the dome controller. The shutter controller, powered by a battery, is mounted in a box on the dome and rotates with the dome. Thus wireless communication is a must. For monitoring the orientation of the dome we use an incremental rotary encoder which converts the angular motion of the dome into a series of digital pulses that encode its movement. To control the motion of the shutter and dome motors we have used semiconductor relay boards with physical limit switches, as sketched below.
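As an illustration of this control logic, a minimal Arduino-style (C++) sketch for the shutter might look as follows. The pin assignments and the one-character serial command set are assumptions made for the example, not the controller's actual protocol; the real firmware additionally handles the RF transceiver traffic to the rotation board.

```cpp
// Minimal shutter control sketch: relays drive the motor, limit switches
// unconditionally stop it at the end of travel.
const int RELAY_OPEN  = 7;   // drives the "open" motor relay
const int RELAY_CLOSE = 8;   // drives the "close" motor relay
const int LIMIT_OPEN  = 2;   // limit switch: shutter fully open
const int LIMIT_CLOSE = 3;   // limit switch: shutter fully closed

void setup() {
  pinMode(RELAY_OPEN, OUTPUT);
  pinMode(RELAY_CLOSE, OUTPUT);
  pinMode(LIMIT_OPEN, INPUT_PULLUP);   // switches pull the line LOW when hit
  pinMode(LIMIT_CLOSE, INPUT_PULLUP);
  Serial.begin(9600);
}

void stopMotor() {
  digitalWrite(RELAY_OPEN, LOW);
  digitalWrite(RELAY_CLOSE, LOW);
}

void loop() {
  if (Serial.available()) {
    char cmd = Serial.read();
    if (cmd == 'O') digitalWrite(RELAY_OPEN, HIGH);   // start opening
    if (cmd == 'C') digitalWrite(RELAY_CLOSE, HIGH);  // start closing
    if (cmd == 'S') stopMotor();                      // intermediate stop
  }
  // Physical limit switches always override the running command.
  if (digitalRead(LIMIT_OPEN) == LOW)  digitalWrite(RELAY_OPEN, LOW);
  if (digitalRead(LIMIT_CLOSE) == LOW) digitalWrite(RELAY_CLOSE, LOW);
}
```

The same pattern (a command energizes a relay, and a limit switch unconditionally de-energizes it) carries over to the clockwise and counter-clockwise rotation relays.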
These semiconductor relay boards are operated using digital pulses from the Arduino microcontroller board.\ Drive Development Phases {#sec:errors} ======================== ![View of Sirius Dome housing the PRL 50cm telescope at Mount Abu (left panel). Block diagram of dome and shutter control logic (right panel).[]{data-label="fig:simple"}](Dome_pic.PNG){width="\columnwidth"} The work was divided into different phases; the work of each phase was accomplished one by one and finally integrated to make the drive. The different phases were\ (i) Making a programmable controller for the dome’s shutter\ (ii) Making a programmable controller for the dome’s rotation\ (iii) Conversion of the dome’s motion into digital code\ (iv) Setting up a wireless connection between the dome’s rotation controller and the dome’s shutter\ (v) Making it compatible with RTS2.\ In the first phase, a mechanical controller circuit for the dome’s shutter was made using limit switches and was successfully tested. After that a programmable control circuit for the dome’s shutter was made using an Arduino board and semiconductor relays to control the motors, and was coded for the opening, intermediate stop and closing operations of the shutter. Work on the position encoder was also started to record the dome’s position.\ Simultaneously, work on the rotation part of the dome was started: a programmable control circuit was made using another Arduino board and was coded for clockwise and counter-clockwise motion and for sensing the home position. Then the position encoder was integrated with this controller and the code was modified accordingly. Work on the wireless communication between the rotation and shutter controllers was started using radio frequency transceivers, and a communication protocol was established and encoded for transferring commands to the controller boards.\ The electronic circuits have been tested at the observatory and software has been developed for synchronizing the dome orientation with the azimuth being pointed to by the telescope. We plan to integrate the shutter control board with independent cloud, rain and wind sensors to allow quick closure of the shutters independent of any computer control / manual intervention.\ [*Acknowledgement*]{} [**The first author would like to acknowledge support received from the organizers towards local hospitality which enabled his participation in the workshop. His travel to Malaga was funded by IIT Gandhinagar. Work at PRL is funded by the Dept of Space, Govt. of India. We thank colleagues at PRL for their support of this work.**]{} Ganesh, S., Baliyan, K. S., Chandra, S., Joshi, U. C., Kalyaan, A., Mathur, S. N., Automated telescope for variability studies, 31st ASI Meeting, ASI Conference Series, 2013, Vol. 9, pp. 99. Edited by Pushpa Khare & C. H. Ishwara-Chandra
{ "pile_set_name": "ArXiv" }
null
--- abstract: 'We consider the pair production of color triplet spin–$\frac{3}{2}$ quarks and their subsequent decays at the LHC. This particle, if produced, will most likely decay into top quark and gluon, bottom quark and gluon, or a light quark jet and gluon, depending on the quantum number of the spin–$\frac{3}{2}$ particle. This would lead to signals with $t\bar{t}jj$, $b\bar{b}jj$, or $4j$ in the final states. We present a detailed analysis of the signals and backgrounds at $\sqrt{s}= 7$, $8$ and $14$ TeV and show the reach for such particles by solving for observable mass values for the spin–$\frac{3}{2}$ quarks through their decay products.' author: - 'Duane A. Dicus$^{1,}$[^1], Durmus Karabacak$^{2,}$[^2], S. Nandi$^{2,}$[^3], and Santosh Kumar Rai$^{2,}$[^4]' title: --- \[sec:intro\]Introduction ========================= The Standard Model (SM) of particle physics has been extensively tested by many independent experiments and the results are in agreement with the predictions of the SM. The Large Hadron Collider (LHC) at CERN is designed to explore the energy and intensity frontier which could show physics beyond the SM. The initial results released by the ATLAS and CMS experiments not only confirm the predictions of the SM, including the discovery of the Higgs boson [@cms:2012gu; @atlas:2012gk], but have also started pushing up the energy scale required for new physics models including exotic fermions and gauge bosons which are not present in the SM. Among exotic fermions, one possible new particle is a spin–$\frac{3}{2}$ excitation of quarks. We will assume this spin–$\frac{3}{2}$ particle to be a color triplet like an ordinary quark and consider the pair production and the decay of such an exotic particle at the LHC. It is not outside the realm of possibility that a spin–$\frac{3}{2}$ quark could exist as a fundamental particle. We could also have spin–$\frac{3}{2}$ bound states of ordinary quarks with gluons or the Higgs boson. There are also theoretical models in which spin–$\frac{3}{2}$ quarks arise as bound states of three heavy quarks for sufficiently strong Yukawa couplings [@tay]. The masses of these bound states are typically expected to be a few TeV. A heavy spin–$\frac{3}{2}$ quark could also exist as the lightest Regge recurrence of the light spin–$\frac{1}{2}$ quarks, or as Kaluza-Klein modes in string theory if one or more of the compactification radii is of the order of the weak scale rather than the Planck scale; such weak-scale compactification has been popular in the framework of both string theory and field theory [@sure]. In this work we restrict ourselves to the collider production of point-like spin–$\frac{3}{2}$ color triplet quarks. The production of spin–$\frac{3}{2}$ quarks by hadronic collisions has been previously considered by Moussallam and Soni [@mous] and by Dicus, Gibbons, and Nandi [@Dicus:1998yc]. There are several studies on the production of spin–$\frac{3}{2}$ fermions at lepton colliders [@Walsh:1999pb; @Almeida:1995yp; @Cakir:2007wn] and also on the virtual effects of such particles on $t\bar{t}$ production [@Stirling:2011ya]. Our paper is organized as follows. In Section \[sec:frules\], we give the Feynman rules relevant for the production of spin–$\frac{3}{2}$ quarks. In Section \[sec:cross\], we give the explicit analytic formulae for the squares of the amplitudes, the various subprocess cross sections and the total production cross sections.
In Section \[sec:signals\], we present the analysis of the signal of a spin–$\frac{3}{2}$ particle decaying into light jets or into heavy flavor modes. Here we analyze the relevant backgrounds and signal for three different decay scenarios. Section \[sec:summary\] contains a summary. \[sec:frules\]Feynman Rules for Spin–$\frac{3}{2}$ Particles ============================================================ The Lagrangian and the equations of motion for a free spin–$\frac{3}{2}$ particle of mass $M$ can be written as [@rari; @mold] $$\label{eq:lag} \mathcal{L} = \bar{\psi}_{\alpha} \Lambda^{\alpha\beta}\psi_\beta$$ $$\Lambda^{\alpha\beta}\psi_\beta=0$$ where $$\Lambda_{\alpha\beta} =(i\slashed \partial- M)g_{\alpha\beta} + iA(\gamma_\alpha \partial_\beta+\gamma_\beta \partial_\alpha) +\frac{iB}{2}\gamma_\alpha \slashed \partial \gamma_\beta + CM\gamma_\alpha \gamma_\beta $$ with $B \equiv 3A^2+2A+1$ and $C \equiv 3A^2+3A+1$. The parameter $A$ is arbitrary except that $A \not=-\frac{1}{2}$. The field $\psi_\alpha$ satisfies the subsidiary conditions $$\begin{aligned} \gamma^\alpha \psi_\alpha &= 0 \label{R1}\\ \partial^\alpha \psi_\alpha & =0. \label{R2}\end{aligned}$$ The propagator $S_{\alpha\beta}$ is given by $$\begin{split} S_{\alpha\beta}(p)=&\frac{1}{\slashed p -M} \bigg[g_{\alpha\beta} - \frac{1}{3}\gamma_\alpha \gamma_\beta - \frac{2}{3 M^2}p_\alpha p_\beta + \frac{1}{3M}(p_\alpha \gamma_\beta - p_\beta \gamma_\alpha)\bigg]\\ & + \Bigg\{\frac{a^2}{6M^2} \slashed p \gamma_\alpha \gamma_\beta - \frac{ab}{3M}\gamma_\alpha \gamma_\beta + \frac{a}{3M^2}\gamma_\alpha p_\beta + \frac{ab}{3M^2}\gamma_\beta p_\alpha \Bigg\} \end{split}$$ where $$\begin{aligned} a =\frac{A+1}{2A+1} \quad \mbox{and} \quad b =\frac{A}{2A+1} ~~ .\end{aligned}$$ From Eq.(\[R1\]) and Eq.(\[R2\]) the terms depending on the parameter $A$ in the propagator vanish on the mass shell. A redefinition of the spin–$\frac{3}{2}$ field $\psi_\alpha$ allows one to remove the $A$ dependent terms in the propagator [@pasc]. However, in our analysis we have kept the $A$ dependence in the propagator and in the interaction vertices and used the disappearance of $A$ as a check on our calculations. The minimal substitution in Eq.(\[eq:lag\]) gives the interaction of spin–$\frac{3}{2}$ quarks with gluon and photon fields, $$\mathcal{L}_I = g \bar{\psi}_\alpha \bigg( \frac{B}{2} \gamma^\alpha \gamma^\mu \gamma^\beta + A g^{\alpha\mu}\gamma^\beta + A \gamma^\alpha g^{\mu\beta} + g^{\beta\alpha}\gamma^\mu \bigg) T_a \psi_\beta A_\mu^a\,\,,$$ where $g$ is the coupling constant, $T_a$’s are the group generators and $A_\mu^a$ are the gauge fields. For on-shell particles only the last term is nonzero. \[sec:cross\]Calculation of Cross Sections ========================================== In this section we provide the expressions necessary for the process, $$\label{eq:ppQQ} p p \rightarrow Q_{3/2} \bar{Q}_{3/2} + X\,\,$$ where $Q_{3/2}$ is the spin–$\frac{3}{2}$ quark. There are two subprocesses which contribute: $q\bar{q}$ annihilation and gluon fusion. The Feynman diagrams are shown in Fig.\[feyndiag\] where *(a)* represents the $q\bar{q}$ annihilation while *(b)–(d)* represent the $t$,$u$ and $s$-channel contributions of the gluon fusion subprocess respectively. Just as for top quark production, the largest contribution to the production of spin–$\frac{3}{2}$ quarks at LHC energies is through gluon fusion.
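Before turning to the amplitudes, it is worth verifying explicitly the statement above that only the last term of the interaction vertex survives on-shell (this is a one-line consequence of the subsidiary conditions, spelled out here for convenience). Contracting the vertex with on-shell Rarita-Schwinger spinors and using (\[R1\]) in the forms $\gamma^\beta\psi_\beta=0$ and $\bar{\psi}_\alpha\gamma^\alpha=0$ gives
$$\bar{\psi}_\alpha \bigg( \frac{B}{2}\gamma^\alpha\gamma^\mu\gamma^\beta + A g^{\alpha\mu}\gamma^\beta + A\gamma^\alpha g^{\mu\beta} + g^{\beta\alpha}\gamma^\mu \bigg)\psi_\beta = \bar{\psi}_\alpha\, g^{\beta\alpha}\gamma^\mu \psi_\beta = \bar{\psi}^{\beta}\gamma^\mu\psi_\beta\,,$$
since each of the first three terms contains either $\gamma^\beta\psi_\beta$ or $\bar{\psi}_\alpha\gamma^\alpha$. The parameter $A$ can therefore enter only through internal (off-shell) lines, and its cancellation there provides the consistency check on the calculation mentioned above.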
![*The leading order (LO) Feynman diagrams for the pair production of spin–$\sp$ quarks through (a) $q\bar{q}$ and $gg$ initial states in (b) $t$-channel, (c) $u$-channel and (d) $s$-channel.*[]{data-label="feyndiag"}](Fig1.eps){width="6.6in" height="1.1in"} The t-channel amplitude shown in Fig.\[feyndiag\] is given by $$\label{eq:mt} \begin{split} \mathcal{M}_t =~ & g_s^2\bar{
--- abstract: 'The mid infrared emission of early type galaxies traces the presence of intermediate age stellar populations as well as even tiny amounts of ongoing star formation. Here we discuss high S/N [*Spitzer*]{} IRS spectra of a sample of Virgo early type galaxies, with particular reference to NGC 4435. We show that, by combining mid infrared spectroscopic observations with existing broad band fluxes, it is possible to obtain a very clean picture of the nuclear activity in this galaxy.' date: '?? and in revised form ??' --- Introduction ============ With the advent of the [*Spitzer Space Telescope*]{} new frontiers have been opened in the study of the stellar population content of early-type galaxies (ETGs), and in particular in the ability to quantify the occurrence and strength of [*rejuvenation*]{} episodes. By means of mid infrared (MIR) observations it is possible to detect the presence of intermediate age stellar populations in passively evolving galaxies, and to measure even tiny amounts of ongoing star formation activity. Bressan, Granato & Silva (1998) suggested that the MIR spectral region of old and intermediate age stellar populations should be affected by the presence of mass-losing oxygen-rich AGB giants. Their integrated emission around 10$\mu$m should be clearly seen in passively evolving galaxies; its analysis, in combination with UV, optical and NIR observations, should provide an accurate age-metallicity ranking, unbiased by the age-metallicity degeneracy. Moreover, ongoing star formation can be easily detected in the MIR, from the presence of prominent emission features such as PAHs and atomic or molecular emission lines (e.g. Kaneda et al. 2005, Bressan et al. 2006a,b, Panuzzo et al. 2007). Last but not least, MIR nebular lines constitute a strong diagnostic to disentangle star formation and AGN activity, and they also allow a [*direct and perhaps unique*]{} determination of the chemical abundance of the surrounding gas (Panuzzo et al. 2007). For the above reasons we began a systematic study of the properties of ETGs in the mid infrared spectral region with the [*Spitzer Space Telescope*]{}. Here we report on the results obtained with [*Spitzer*]{} IRS (Houck et al. 2004) MIR spectroscopic observations of a sample of ETGs in the Virgo cluster (Bressan et al 2006a,b; Bressan et al 2007). Early-type galaxies in the mid infrared. ======================================== Eighteen ETGs among those that define the colour-magnitude relation of the Virgo cluster (Bower, Lucey & Ellis 1992) were observed in standard staring mode with the low resolution IRS modules between 5 and 20$\mu$m, in January and July 2005. The calibration and spectral extraction procedures are discussed in detail in Bressan et al. (2006a). The spectra of these galaxies are shown in Bressan et al. (2006a) and Bressan et al. (2007). For thirteen galaxies (76%) of our sample, the MIR spectrum is characterized by the presence of a broad emission feature above 10$\mu$m, [*without any other narrow emission feature*]{}. The analysis of the IRS spectra indicates that the [*10$\mu$m feature*]{} has an extended spatial distribution; moreover, its spatial distribution is consistent with that obtained below 8$\mu$m, where the spectra are dominated by stellar photospheres. This result has been confirmed by the analysis of [*Spitzer*]{} IRS Peak-Up imaging observations in the blue (16$\mu$m) filter of selected galaxies (Annibali et al. in preparation). 
It is also in agreement with previous ISOCAM observations that indicated spatially resolved emission at both 6.7 and 15 $\mu$m (Athey et al. 2002, Ferrari et al 2002, Xilouris et al. 2004). In view of these considerations and based on preliminary fits with our models of passively evolving old simple stellar populations, we argued that we have detected the 10$\mu$m features, due to silicate emission from the circumstellar envelopes of mass losing AGB stars, as predicted by Bressan et al. (1998). Bressan et al. (2007) have recently shown that the 10$\mu$m feature observed in early type galaxies is similar in shape but about a factor four larger than the [*semi empirical*]{} one obtained for the globular cluster 47 Tuc, consistent with a metallicity variation of the same order. We are now computing new isochrones and SSP models that account for a more realistic description of the AGB phase and of their dusty envelopes. ![Comparison between the observed SED of the central region of NGC 4435 and the GRASIL model. The thick solid line represents the model for the total SED, i.e. the starburst component plus the old stellar component; the three dots-dashed dark green line represents the contribution from the old stellar population, and the dashed blue line represents the total contribution from the burst of star formation, the dotted red line represents the emission from molecular clouds, the dot-dashed green line represents the diffuse medium emission and the thin solid cyan line denotes the emission from stars of the starburst component without applying the extinction from dust. The filled red circles are the broad band data. *Left*: Comparison from 0.1$\mu$m to 100 MHz. *Right*: Comparison for the MIR wavelengths. The thickest solid blue line represents the IRS *Spitzer* spectrum. []{data-label="active"}](bressan_fig1.eps "fig:"){width="50.00000%"} ![Comparison between the observed SED of the central region of NGC 4435 and the GRASIL model. The thick solid line represents the model for the total SED, i.e. the starburst component plus the old stellar component; the three dots-dashed dark green line represents the contribution from the old stellar population, and the dashed blue line represents the total contribution from the burst of star formation, the dotted red line represents the emission from molecular clouds, the dot-dashed green line represents the diffuse medium emission and the thin solid cyan line denotes the emission from stars of the starburst component without applying the extinction from dust. The filled red circles are the broad band data. *Left*: Comparison from 0.1$\mu$m to 100 MHz. *Right*: Comparison for the MIR wavelengths. The thickest solid blue line represents the IRS *Spitzer* spectrum. []{data-label="active"}](bressan_fig2.eps "fig:"){width="50.00000%"} Among bright Virgo cluster ETGs observed by our team, four galaxies (24%) show various levels of activity. NGC 4636 (optically classified as a LINER) shows low ionization emission lines (\[ArII\]7$\mu$m, \[NeII\]12.8$\mu$m, \[NeIII\]15.5$\mu$m and \[SIII\]18.7$\mu$m) on a continuum similar to other passive galaxies. NGC 4486 (M87) shows the same emission lines on a continuum dominated by the AGN emission above 8$\mu$m. The broad continuum feature above 10$\mu$m in M87 could be caused by silicate emission from the dusty torus (Siebenmorgen et al. 2005, Hao et al. 2005). NGC 4550 shows some PAH emission features while the MIR SED of NGC 4435 is characteristic of a star forming object. 
The panchromatic SED of NGC 4435 ================================ NGC 4435 is an S0 galaxy interacting with NGC 4438 and it hosts a circumnuclear disk. Panuzzo et al. (2007) combined the *Spitzer* IRS spectra of NGC 4435 with IRAC and MIPS archival data and existing broad band measurements from X-ray to radio wavelengths to obtain an accurate panchromatic spectral energy distribution (SED) of this galaxy. The SED was analysed with GRASIL (Silva et al. 1998) and well reproduced at all wavelengths. The analysis shows that the circumnuclear disk experienced a burst of star formation activity which is now fading. The IRS data themselves provide precise answers on important questions such as the nature of the nuclear activity suspected from optical (Ho et al. 1997) and X-ray (Machacek et al. 2004) observations, and the metallicity of the gas in the circumnuclear disk. We fail to detect any high excitation nebular emission lines in the IRS spectrum; the \[NeIII\]15.5/\[NeII\]12.8 ratio constrains the contribution of a possible AGN to the ionizing flux to be less than 2%. The upper limit on the temperature derived from H$_2$ S(1) and S(2) rotational lines is lower than expected for AGN excitation and PAH features are well reproduced by star formation models. Moreover, the X-ray emission is within the range expected from X-ray binaries in an advanced phase of the starburst. As for the metallicity of the nuclear disk, the comparison of observed MIR nebular lines with those predicted by the GRASIL model (Panuzzo et al 2003) indicates that it is almost solar. This is one of the first accurate [*direct*]{} estimates of the gas metallicity in ETGs. The age of the starburst, $\sim$180 Myr, corresponds to the epoch of the onset of the interaction with NGC 4438 derived from dynamical simulations (Combes et al. 1988). The mass of stars born during the starburst ($\sim 1.22\times10^8~M_\odot$) amounts to about 1.5% of the stellar mass sampled by the central 5 arcsec aperture. Conclusions =========== We have obtained with
--- abstract: 'In this paper, we present a method of applying integral action to enhance the robustness of energy shaping controllers for underactuated mechanical systems with matched disturbances. Previous works on this problem have required a number of technical assumptions to be satisfied, restricting the class of systems for which the proposed solution applies. The design proposed in this paper relaxes some of these technical assumptions.' author: - 'Joel Ferguson$^{1}$, Alejandro Donaire$^{2}$, Romeo Ortega$^{3}$ and Richard H. Middleton$^{1}$[^1][^2][^3]' bibliography: - 'libraryURLRemoved.bib' title: '**Matched disturbance rejection for energy-shaping controlled underactuated mechanical systems** ' --- Introduction ============ Interconnection and damping assignment passivity-based control (IDA-PBC) is a nonlinear control method whereby the closed-loop system is a passive port-Hamiltonian (pH) system with desired characteristics to comply with the control objectives [@Ortega2004]. Many systematic solutions have been proposed for the stabilization of nonlinear systems using IDA-PBC, but the general procedure is still limited by the designer's ability to solve the so-called *matching equations*. Although the matching equations are difficult to solve in some cases, IDA-PBC has been successfully applied to a variety of nonlinear systems such as electrical machines [@Petrovic2001; @Gonzalez2008], power converters [@Rodriguez2000; @Rodriguez2001] and underactuated mechanical systems [@Acosta2005]-[@Donaire2016a]. In general, the equilibrium of a mechanical system stabilised with IDA-PBC will be shifted when an external disturbance acts on the system. In this paper we are interested in robustifying IDA-PBC [*vis-à-vis*]{} constant external disturbances. A general design for the addition of integral action to pH systems with the objective of rejecting disturbances was first presented in [@Donaire2009] and further discussed in [@Ortega2012]. The approach relies on a (possibly implicit) change of coordinates to satisfy the matching equations. The integral action scheme was tailored to fully actuated mechanical systems in [@Romero2013a] and to underactuated mechanical systems in [@Donaire2016]. While in both cases the required change of coordinates to satisfy the matching equations was given explicitly, a number of technical assumptions were imposed to do so. In both cases, the proposed integral action controllers were shown to preserve the desired equilibrium of the system, rejecting the effects of an unknown matched disturbance. More recently, an alternative method for the addition of integral action to pH systems was presented in [@Ferguson2015], [@Ferguson]. In these works, the controller is constructed from the open-loop dynamics of the plant. The energy function of the controller is chosen such that it couples the plant and controller states, which allows the matching equations to be satisfied by construction. In addition, the control system studied in [@Ferguson] has a physical interpretation and is shown to be equivalent to a control by interconnection (CbI) scheme, another PBC technique [@Ortega2007]. The method in [@Ferguson2015] was shown to be applicable to mechanical systems with constant mass matrix. In this paper, we extend the integral action design proposed in [@Ferguson2015] to underactuated mechanical systems subject to matched disturbances. The assumption of a constant mass matrix is relaxed, and general mechanical systems are considered. 
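Before detailing the design, the equilibrium shift that motivates integral action can be seen numerically. The following sketch simulates a hypothetical single-degree-of-freedom closed-loop system (all choices illustrative and not taken from the cited designs: $n=m=1$, $M=\mathbf{M}_d=1$, $G=1$, $\mathbf{J}_2=0$, shaped potential $V_d(q)=1-\cos q$ with desired equilibrium $q^\star=0$) under a constant matched disturbance; the trajectory settles where $\nabla V_d(q)=-d$, not at $q^\star$:

```python
import numpy as np
from scipy.integrate import solve_ivp

kp, d = 0.5, 0.3                       # damping gain and disturbance (illustrative)
dV_d = np.sin                          # gradient of V_d(q) = 1 - cos(q)

def rhs(t, x):
    q, p = x
    return [p, -dV_d(q) - kp * p - d]  # qdot = dH_d/dp, pdot = -dH_d/dq - kp*p - d

sol = solve_ivp(rhs, (0.0, 80.0), [0.0, 0.0], rtol=1e-10, atol=1e-10)
print(sol.y[0, -1], -np.arcsin(d))     # settles near q = -0.3047, shifted from 0
```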
The method proposed in this paper is constructed to directly satisfy the matching equations without the need for the technical assumptions previously used in [@Donaire2016]. Specifically, the presented scheme allows the open-loop mass matrix, shaped mass matrix and input mapping matrix to be state dependent. [**Notation.**]{} In this paper we use the following notation: Let $x \in\mathbb{R}^n$, $x_1\in\mathbb{R}^m$, $x_2\in\mathbb{R}^s$. For a real valued function $\mathcal{H}(x)$, $\nabla\mathcal{H}\triangleq \left(\frac{\partial \mathcal{H}}{\partial x}\right)^\top$. For functions $\mathcal{G}(x_1,x_2)\in\mathbb{R}$, $\nabla_{x_i}\mathcal{G}\triangleq \left(\frac{\partial \mathcal{G}}{\partial x_i}\right)^\top$ where $i \in\{1,2\}$. For fixed elements $x^\star\in \mathbb{R}^n$, we denote $\nabla \mathcal{H}^\star\triangleq \nabla \mathcal{H}(x)|_{x=x^\star}$. For vector valued functions $\mathcal{C}(x)\in\mathbb{R}^m$, $\nabla_x \mathcal{C}$ denotes the transposed Jacobian matrix $\left(\frac{\partial \mathcal{C}}{\partial x}\right)^\top$. Problem Formulation {#ProbForm} =================== In this paper, we consider mechanical systems that have been stabilised using IDA-PBC. This class of systems can be expressed as[^4]: $$\label{mecdist} \begin{split} \begin{bmatrix} \dot{q} \\ \dot{\bp} \end{bmatrix} &= \underbrace{ \begin{bmatrix} 0_{n\times n} & M^{-1}(q)\mathbf{M}_d(q) \\ -\mathbf{M}_d(q)M^{-1}(q) & \mathbf{J}_2(q,\bp)-R_d(q) \end{bmatrix}}_{F_m(q,\mathbf{p})} \begin{bmatrix} \nabla_q \mathbf{H}_d \\ \nabla_\mathbf{p} \mathbf{H}_d \end{bmatrix} \\ &\phantom{---}+ \underbrace{ \begin{bmatrix} 0_{m\times n} & G^\top(q) \end{bmatrix}^\top}_{G_m(q)} (u-d) \\ \mathbf{y} &= G^\top(q)\nabla_\mathbf{p} \mathbf{H}_d, \end{split}$$ with Hamiltonian $$\label{mechOLHam} \mathbf{H}_d(q,\mathbf{p})= \frac 12 \bp^\top \mathbf{M}_d^{-1}(q) \bp + V_d(q),$$ where $q,\mathbf{p} \in \mathbb{R}^n$ are the generalised configuration and momentum vectors respectively, $n$ is the number of degrees of freedom of the system, $u\in\mathbb{R}^m$ is the input, $y\in\mathbb{R}^m$ is the output, $d\in\mathbb{R}^m$ is a constant disturbance, $M(q) > 0$ and $\mathbf{M}_d(q) >0$ are the open-loop and shaped mass matrices of the system respectively, $V_d(q)$ is the shaped potential energy, $G(q)$ is the full-rank input matrix, $R_d (q)= G(q)K_p(q)G^\top(q)$ for some $K_p(q) \geq 0$ is the damping matrix and $\mathbf{J}_2(q,\mathbf{p}) = -\mathbf{J}_2^\top(q,\mathbf{p})$ is a skew-symmetric matrix. We assume that the Hamiltonian has a strict minimum at the desired operating point $(q,\mathbf{p}) = (q^\star,0_{n\times 1})$. For the remainder of the paper, the explicit state dependency of terms and of the various mappings is assumed and omitted. The control objective is to develop a dynamic controller $u=\beta(q,\bp,\zeta)$, where $\zeta \in \mathbb{R}^m$ is the state of the controller, that ensures asymptotic stability of the desired equilibrium $(q,\bp,\zeta) = (q^{\star},0,\zeta^\star)$, for some $\zeta^\star \in \mathbb{R}^m$, even under the action of constant disturbances $d$. Previous Work {#PrevSol} ============= A nonlinear PID controller was proposed in [@Donaire2016] as a solution to the matched disturbance rejection problem. Under the assumptions: - $G$ and $\mathbf{M}_d$ are constant - $G^\perp\nabla_q(\bp^\top M^{-1}\bp) = 0_{(n-m)\times 1}$, the control law was proposed to be $$\begin{split} u = &-\left[ K_pG^\top \mathbf{M}_d^{-1}GK_1G^\top M^{-1} + K_1G^\top \dot{M}^{-1} + K_2K_I \right. \\ &\left. 
\times(K_2^\top + K_3^\top G^\top \mathbf{M}_d^{-1}GK_1)G^\top M^{-1} \right]\nabla V_d \\ & -\left[K_1G^\top M^{-1}\nabla^2V_dM^{-1} + (G^\top G)^{-1}G^\top J_2\mathbf{M}_d^{-1} \right. \\
--- abstract: 'Stationary whirling of slender and homogeneous (continuous) elastic shafts rotating around their axis, with pin-pin boundary condition at the ends, is revisited by considering the complete deformations in the cross section of the shaft. The stability against a synchronous sinusoidal disturbance of any wave length is investigated and the analytic expression of the buckling amplitude is derived in the weakly non-linear regime by considering both geometric and material (hyper-elastic) non-linearities. The bifurcation is super-critical in the long wave length domain for any elastic constitutive law, and sub-critical in the short wave length limit for a limited range of non-linear material parameters.' author: - Serge Mora title: | Synchronous whirling of spinning homogeneous elastic cylinders:\ linear and weakly non-linear analyses --- Introduction ============ A homogeneous and balanced elastic cylinder rotating around its axis is unstable beyond a critical angular velocity, leading to transverse deformations and whirling if the ends of the cylinder are constrained, for instance with bearings. This instability results from the competition between the destabilizing effect of the centrifugal force, which tends to drive the cylinder away from the axis of rotation, and the elastic forces opposed to the deformation. The whirling of rotating cylinders, as well as the propagation of vibrations in the neighborhood of the critical angular velocity, have been extensively investigated in the context of rotor-dynamics [@Kramer1993; @Genta2005] because of their damaging effects on the smooth running of rotating machinery such as compressors, pumps, turbines, turbochargers and jet engines [@Chen2005]. Understanding the stability of spinning shafts and their post-buckling behavior is crucial for the successful design of such rotating systems. While most of the studies have dealt with small deformations linearized at leading order [@Ehrich1964], few studies have considered non-linear effects [@Noah1995; @Yamamoto2012]. The non-linear dynamic behaviour of a uniform, slender rotating shaft made of a viscoelastic material with an external damping mechanism has been studied by considering geometric non-linearities resulting from large transverse displacements [@Shaw1989; @Kurnik1994; @Hosseini2013]. Using the center manifold technique [@Henry1981] and the normal form method, the effects of external and internal damping on the whirling of rotating shafts have been investigated in terms of Hopf or double-eigenvalue bifurcations. By pushing expansions up to order 2 in terms of the characteristic magnitude of the infinitesimal strain, $\varepsilon$, but retaining Hookean elasticity for the strain-stress relation, the whirling amplitude in steady state configurations has been computed as the radius of a limit cycle in phase portraits [@Hosseini2013]. However, the intrinsic non-linear features of the material constitutive law have been neglected in these studies. Indeed, order $\varepsilon^2$ in the expansion of the governing equations arises both from geometrical non-linearities (entering through the expression of the local curvature of the center line of the cylinder) and from non-linearities in the constitutive law of the elastic material. These last non-linearities are essential in order to fulfil the requirement of material objectivity [@Ogden1984]. 
An expansion of the bending energy based on a scalar non-linear constitutive law [@Haslach1985] has been proposed in order to calculate the non-synchronous whirling of rotating shafts [@Cveticanin1998]. Because of the scalar nature of the constitutive law used by the author, this approach is limited to deformations with large wave lengths (compared with the radius of the shaft), and the issues related to the Poisson effect are ignored. In addition, the rotating shaft was assumed to be inextensible, which is not appropriate for pin-pin ends, since the extension of the center-line with pin-pin ends is of order $\varepsilon^2$ and cannot be neglected. A linear analysis of the whirling bifurcation of infinite rotating cylinders under axial tension has been developed in [@Ogden1980a], based on non-linear constitutive equations in three dimensions, so that this analysis is relevant for any wave length of the deformation, but the non-linear analysis is still missing. In previous papers [@Richard2018; @Richard2019], the bifurcations of spinning undeformable shafts, surrounded by a compliant elastic layer, have been investigated both in the linear and the non-linear regimes, under the plane strain assumption.\ In this paper, a non-linear analysis of the [*stationary*]{} whirling of [*homogeneous*]{} rotating cylinders is developed, based on the hypothesis of negligible external damping [@Ehrich1964], so that the system is conservative. The steady states are reached once transient vibrations are damped thanks to dissipative processes (internal damping) occurring inside the elastic material. The cylinders are supposed to be slender, their length $L$ being far larger than the radius $r_0$. The elastic material is assumed to be isotropic and incompressible. The buckling amplitude of synchronous and steady sinusoidal perturbations of any wave length is calculated without any further assumption for the constitutive law of the elastic material. The analysis relies on the complete three dimensional equations, so that the results are relevant for any wave length of the whirling, including wave lengths of the same order of magnitude as the radius of the shaft. The complete (non-linear) equations governing the equilibrium steady states are derived in Section \[sec : base equations\]. A Lagrange multiplier accounts for the incompressibility constraint and the equations for the three components of the displacement field are established in strong form. Section \[sec : linear\] is devoted to the linear stability analysis. The critical angular velocity is found to depend on the shear modulus of the elastic material, its mass density, the radius of the rotating cylinder, and in a non trivial manner on the ratio of the wave length of the deformation to the radius of the cylinder. The weakly non-linear analysis of the bifurcation is carried out in Section \[sec : non linear\]. The bifurcation is found to be super-critical for neo-Hookean materials, and can be sub-critical at small wave length for particular constitutive laws. Predictions of Sections \[sec : linear\]-\[sec : non linear\] are checked in Section \[sec : FEM\] by means of numerical simulations based on the Finite Element Method. The last part (Section \[sec : conclusion\]) of the paper is devoted to a conclusion. 
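As a concrete check of the constitutive ingredients introduced in Section \[sec : base equations\] below (the shifted invariants and the normalization condition on $W$), the following sympy sketch verifies the tangent-modulus condition for the Mooney-Rivlin density and evaluates the shifted invariants for an incompressible simple shear; the shear example is illustrative only and is not taken from the paper:

```python
import sympy as sp

I1, I2, beta, gamma = sp.symbols('I1 I2 beta gamma', real=True)
W = (beta * I1 + (1 - beta) * I2) / 2          # Mooney-Rivlin, shifted invariants
assert sp.simplify(W.diff(I1) + W.diff(I2) - sp.Rational(1, 2)) == 0

F = sp.Matrix([[1, gamma, 0], [0, 1, 0], [0, 0, 1]])   # simple shear, det F = 1
C = F.T * F                                            # Green's deformation tensor
i1 = C.trace() - 3
i2 = (C.trace()**2 - (C**2).trace()) / 2 - 3
print(sp.expand(i1), sp.expand(i2))            # both reduce to gamma**2
```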
Equilibrium equations based on a finite strain theory {#sec : base equations} ===================================================== In this section the non-linear equations governing the equilibrium (steady) configurations of a rotating elastic cylinder are derived, considering an arbitrary hyper-elastic incompressible isotropic material. Let $r_{0}$ denote the radius of the undeformed cylinder, $\rho$ its mass density and $\mu$ its initial shear modulus, [*i.e.*]{} the shear modulus for infinitesimal strain. The cylinder is spun with an angular velocity $\omega$ about its axis, as sketched in Figure \[fig : scheme\]. ![Sketches of an elastic cylinder of length $L$ and radius $r_0$ rotating around its axis with the angular velocity $\omega$. The two ends of the cylinder are pinned at the axis. (a) Three dimensional view of the reference (unbuckled) configuration. (b) Side view of this reference configuration. (c) Side view of a perturbation of characteristic wave vector $k$ parallel to the axis.[]{data-label="fig : scheme"}](scheme.pdf){width="55.00000%"} In the co-rotating frame, both the elastic force and the centrifugal force are conservative. The equilibrium can therefore be derived from the condition that the total potential energy is stationary. The position $\mathbf{R}$ of a material point in the deformed configuration is given as a map ${\mathbf{R}}(\mathbf{r})$ in terms of the position $\mathbf{r}$ in the undeformed configuration. For an isotropic and incompressible elastic material, the strain energy density is a function of the first two invariants, $I_1$ and $I_2$, of Green’s deformation tensor $\mathbf{C}=\mathbf{F}^T\cdot\mathbf{F}$, where $\mathbf{F}=\partial \mathbf{R}/\partial \mathbf{r}$ is the deformation gradient: $$\begin{array}{lll} I_1 & = & \mathrm{tr}~ \mathbf{C} - 3,\\ I_2 & = & \frac{1}{2}\left(\left(\mathrm{tr}~ \mathbf{C}\right)^2 - \mathrm{tr}~\left(\mathbf{C}^2\right) \right) - 3. \end{array} \label{eq:invariants}$$ The strain energy density is then written as $\mu\, W(I_1,I_2)$ where $W$ is the dimensionless strain energy density. For the strain energy $\mu\, W(I_1,I_2)$ to be consistent with the initial shear modulus $\mu$, the following normalization condition must be enforced: $$\frac{\partial W}{\partial I_1}(0,0) + \frac{\partial W}{\partial I_2}(0,0)=\frac{1}{2}. \label{eqn : tangent modulus}$$ For an incompressible neo-Hookean solid [@Ogden1984; @Macosko94] and for an incompressible Mooney-Rivlin solid [@Mooney1940; @Rivlin1948], the dimensionless strain energy densities, written in terms of the shifted invariants defined above, are respectively $W=\frac{1}{2}I_1$ and $W=\frac{1}{2}\left(\beta I_1+(1-\beta)I_2\right)$, with $\beta$ a material constant in the range
--- abstract: 'We calculate the ratio $R_{\ell\ell}$ of same sign (SS) to opposite sign (OS) dileptons in type I and generalized inverse seesaw models and show that it can be anywhere between 0 and 1 depending on the detailed texture of the right-handed neutrino mass matrix. Measurement of $R_{\ell\ell}$ in hadron colliders can therefore provide a way to probe the nature of the seesaw mechanism and also to distinguish between the two types of seesaw mechanisms. We work within the framework of the left-right symmetric model as an example. We emphasize that coherence of the final states in the $W_R$ decay is crucial for this discussion and it requires the right-handed neutrinos to be highly degenerate. We isolate the range of parameters in the model where this effect is observable at the LHC and future colliders.' author: - Arindam Das - 'P. S. Bhupal Dev' - 'Rabindra N. Mohapatra' title: Same Sign versus Opposite Sign Dileptons as a Probe of Low Scale Seesaw Mechanisms --- Introduction ============ Different kinds of seesaw mechanism have been proposed as ultraviolet (UV)-complete theories that lead to the dimension-5 Weinberg operator [@weinberg] for understanding small neutrino masses. Two of them are the so-called type-I [@seesaw1; @seesaw2; @seesaw3; @seesaw4; @seesaw5] and inverse seesaw [@ISS1; @ISS2], which have been widely discussed in the literature. The type-I seesaw involves adding SM-singlet heavy fermions $N$ with Majorana masses that violate lepton number maximally, whereas in the inverse seesaw, one adds two SM-singlet heavy neutrinos $N$ and $S$ and a small $L$-violating mass for one set of the new singlet fermions. A simple UV-complete extension of the Standard Model (SM) that incorporates all the key ingredients of both type-I seesaw and inverse seesaw, and leads to them naturally, is the left-right symmetric model [@LR1; @LR2; @LR3]. No extra symmetries need to be added to generate the right texture for getting tiny neutrino masses. The right-handed neutrino (RHN), predicted by anomaly considerations in this theory, couples to the right-handed (RH) gauge boson $W_R$ and is the source of the lepton number violating (LNV) signal [@KS] we will discuss. In this paper we will work within the framework of the minimal left-right model and assume that $W_R$ is kinematically accessible to the colliders. In other words, for the $\sqrt s=14$ TeV LHC, we assume the mass of the $W_R$ boson to be less than 5 TeV or so [@Ferrari:2000sp; @Deppisch:2015qwa]. A key prediction of the TeV-scale left-right type-I seesaw model is that it leads to a spectacular LNV signal in hadron colliders in the form of two same-sign leptons and two jets with no missing energy [@KS]. This arises from the production and decay of heavy RHNs, both mediated by the $W_R$ gauge boson in the $s$-channel. The Majorana nature of the RHN dictates that the final states with same-sign (SS) dileptons ($\ell^\pm\ell^\pm$) appear in equal number with opposite-sign (OS) dilepton states ($\ell^\pm\ell^\mp$). In other words, the minimal left-right type-I seesaw prediction is that the ratio of the number of events in the two final states is $R_{\ell\ell}\equiv N_{\rm SS}/N_{\rm OS}=1$. 
This in fact is considered a ‘smoking gun’ signal for TeV-scale type-I seesaw in general[^1] and, more specifically, for the left-right seesaw model, and has been extensively studied in the literature, both for the LHC [@KS; @Ferrari:2000sp; @Deppisch:2015qwa; @Nemevsek:2011hz; @Das:2012ii; @AguilarSaavedra:2012gf; @Han:2012vk; @Chen:2013fna; @Khachatryan:2014dka; @Ng:2015hba; @Dev:2015kca; @Gluza:2016qqv; @Mitra:2016kov; @Ruiz:2017nip; @Aad:2015xaa; @Dev:2016dja; @Roitgrund:2017byx] and for other future colliders [@Lindner:2016lxq; @Mondal:2015zba; @Biswal:2017nfl; @Golling:2016gvc]. On the other hand, in the inverse seesaw mechanism, lepton number breaking is very small, because the heavy singlet neutrino ($N$) is paired with another singlet fermion ($S$) to form a pseudo-Dirac pair, and the Majorana nature of the neutrino emerges from a keV-scale Majorana mass $\mu_S$ of the $S$ fermion (for TeV-scale seesaw). This model, when embedded into the TeV-scale left-right framework, exhibits some interesting features. There are two versions of this model: the minimal version, where there is no Majorana mass for the $N$ [@Dev:2009aw; @An:2011uq; @Chen:2011hc], and a second, more general one, where there is a Majorana mass $\mu_R$ for $N$ [@Dev:2012sg; @Awasthi:2013ff; @Dev:2015pga]. In the minimal version, the leading order prediction for the collider signal is that final states will approximately conserve lepton number, implying that $R_{\ell\ell}\simeq 0$ [@Chen:2011hc]. In the more general inverse seesaw, which can also arise from left-right seesaw models [@Dev:2015pga], the neutrino mass formula remains unaffected at the tree-level, although there is an unavoidable one-loop contribution from electroweak radiative corrections [@Dev:2012sg]; however, the $N$ fermion has a potentially large Majorana mass that breaks lepton number by two units. The question remains as to how the dilepton final states look in this general case, i.e. is $R_{\ell\ell}=1$ or different? This question has been recently studied in some special cases [@Dev:2015pga; @Anamiati:2016uxp; @Antusch:2017ebe], and it was shown that, due to interference between two heavy Majorana neutrino mass eigenstates, one could in principle realize a scenario with $R_{\ell\ell}$ anywhere between 0 and 1. The goal of this study is to perform a more general analysis and discuss whether, by analyzing dilepton states in a hadron collider via production of a $W_R$ boson, one can probe the details of the RHN mass matrix and distinguish between the type-I and general inverse seesaw mechanisms. The rest of the paper is organized as follows. In Section \[sec:coh\] we discuss the coherence condition for interference between two heavy Majorana neutrino mass eigenstates, which plays a crucial role in our discussion. In Section \[sec:typeI\], we apply the coherence conditions to discuss the nature of dilepton final states in type-I seesaw. In Section \[sec:inv\] we explain the general inverse seesaw model. In Section \[sec:inv2\] we apply the coherence conditions for the inverse seesaw case to get $R_{\ell\ell}$ as a function of the parameters of the inverse seesaw model. We give our conclusions in Section \[sec:con\]. Some useful three-body decay widths for the RHN are listed in Appendix \[sec:app\]. 
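To see how the interpolation between the two limits works quantitatively, note that for two quasi-degenerate heavy Majorana neutrinos the interference result of [@Anamiati:2016uxp] can be written as $R_{\ell\ell} = \Delta M^2/(2\Gamma^2 + \Delta M^2)$, with $\Delta M$ the mass splitting and $\Gamma$ the common decay width. A small numerical sketch of the limiting behaviour (illustrative only):

```python
import numpy as np

# R_ll goes to 0 in the Dirac limit (dM << Gamma) and to 1 in the
# Majorana limit (dM >> Gamma); cf. [@Anamiati:2016uxp].
R_ll = lambda dM, Gamma: dM**2 / (2.0 * Gamma**2 + dM**2)

for x in [0.0, 0.1, 1.0, 10.0]:      # x = dM / Gamma
    print(f"dM/Gamma = {x:5.1f}  ->  R_ll = {R_ll(x, 1.0):.3f}")
```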
Coherence Conditions for Interference {#sec:coh} ===================================== When a $W_R$ gauge boson is produced in proton-proton collisions, it decays into flavor eigenstates of the RHNs $N_{\ell}$ along with the corresponding charged lepton $\ell_R$ (where $\ell=e,\mu,\tau$). For simplicity, let us consider two RHNs, say $N_e$ and $N_\mu$. When these flavor eigenstates evolve, they do so as linear combinations of the mass eigenstates $N_{1,2}$. The $N_{1,2}$ are linear combinations of $N_e$ and $N_\mu$ in the type-I seesaw case and of $N$ and $S$ in the inverse seesaw case. The $N_{1,2}$ are Majorana fermions and they will evolve and interfere as they produce the charged leptons (along with two jets) in their final state. Only if the coherence condition (discussed below) is satisfied will they interfere; otherwise they will simply give equal numbers of SS and OS dilepton final states. The coherence conditions for light neutrinos have been discussed in Refs. [@Kayser:1981ye; @Akhmedov:2007fk]. There are two conditions that must be satisfied for interference between the two states to take place: (i) coherence in emission, and (ii) the coherence must be maintained until the RHNs decay, i.e. over their full decay length. The results imply that the first condition is satisfied when the uncertainty in their squared masses exceeds their actual squared-mass difference. We now transplant their argument to the case of the two RHNs at hand. Denoting by $\sigma_{m^2
--- abstract: 'The effective evolution of an inhomogeneous cosmological model may be described in terms of spatially averaged variables. We point out that in this context, quite naturally, a measure arises which is identical to a fluid model of the ‘Kullback–Leibler Relative Information Entropy’, expressing the distinguishability of the local inhomogeneous mass density field from its spatial average on arbitrary compact domains. We discuss the time–evolution of ‘effective information’ and explore some implications. We conjecture that the information content of the Universe – measured by Relative Information Entropy of a cosmological model containing dust matter – is increasing.' author: - 'Akio Hosoya[^1]' - 'Thomas Buchert[^2]' - 'Masaaki Morita[^3]' title: 'Information Entropy in Cosmology[^4]' --- A Measure of Inhomogeneity in the Universe ========================================== Cosmology is based on the hypothesis of simplicity called the cosmological principle, i.e. homogeneity and isotropy. The departure of the actual mass distribution from the homogeneous universe model is quantified in terms of the density contrast or of a statistical quantity like the two–point correlation function, both of which have been studied either by perturbation theory or by numerical simulations. Behind these investigations there is a belief that the Universe is homogeneous on some large enough scale. This belief has to be quantitatively confronted with observation, which requires explicitly introducing a measure of inhomogeneity for a domain of the Universe. In this [*Letter*]{} we propose a measure which quantifies the distinguishability of the actual mass distribution from its spatial average, borrowing a well–known concept from standard information theory. Suppose we are told that the probability distribution is $\{q_{i}\}$ and would like to examine how close this distribution is to the actual one $\{p_{i}\}$ by carrying out observations or coin tossing; the relevant quantity in information theory is the [*relative entropy*]{}, $${\cal S} \lbrace p || q\rbrace = \sum_{i} p_{i}\ln \frac{p_{i}}{q_{i}}\;\;,$$ which is positive for ${q_{i}}\neq {p_{i}}$, and zero if the actual distribution $\{p_{i}\}$ agrees with the presumed one $\{q_{i}\}$. Note that this relative entropy is not symmetric in the two distributions $\{p_{i}\}$ and $\{q_{i}\}$. It is known that this measure always decreases or stays the same under Markovian stochastic processes (i.e., under a [*linear*]{} positive map). Namely, the actual distribution becomes less and less distinguishable from the presumed distribution due to the random process. In cosmology we are interested in how the real matter distribution differs from its spatial average. For a continuum the relevant quantity would be $$\label{entropy} \frac{{\cal S} \lbrace \varrho || \langle\varrho\rangle_{\cal D}\rbrace}{V_{\cal D}} \;=\; \Bigl\langle \varrho \ln \frac{\varrho}{\langle\varrho\rangle_{\cal D}}\Bigr\rangle_{\cal D}\;\;,$$ where $\varrho$ is the actual distribution and $\langle \cdots \rangle_{\cal D}$ its spatial average in the volume $V_{\cal D}$ on the compact domain $\cal D$ of the manifold $\Sigma$. We shall conjecture that the measure ${\cal S} \lbrace \varrho || \langle\varrho\rangle_{\Sigma}\rbrace$ continues to grow indefinitely, if $\Sigma$ is compact. 
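The functional (\[entropy\]) is straightforward to evaluate for a sampled density field, and its non-negativity is just Gibbs’ inequality. A minimal numerical sketch (the lognormal mock field and the uniform-volume sampling are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
rho = rng.lognormal(sigma=0.8, size=100_000)   # mock density samples on D
avg = rho.mean()                               # <rho>_D (equal-volume cells)
s_per_vol = np.mean(rho * np.log(rho / avg))   # S{rho || <rho>_D} / V_D
print(s_per_vol, s_per_vol >= 0.0)             # strictly positive unless rho is homogeneous
```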
The resolution of the apparent discrepancy between the gravitational system and the ordinary stochastic system is the following: (i) in cosmology we are considering a non–isolated system, defined by a comoving region $\cal D$, in contrast to the isolated system of an ordinary stochastic process, and (ii) the time evolution dictated by Einstein’s equations induces a negative feed–back due to the attractive nature of the gravitational force, which tends to make the matter distribution more and more inhomogeneous. Deduction of the Measure ======================== To begin with let us emphasize that the functional (\[entropy\]), known as the ‘Kullback–Leibler Relative Information Entropy’ ([*cf*]{} [@kullback], [@kullback_leibler], [@cover:entropy]), is not assumed as a measure [*a priori*]{}; rather, it can be [*deduced*]{} from a fundamental kinematical relation that refers to the [*non–commutativity*]{} of two operations: spatially averaging and evolving the material mass density field. The specific form of the information measure is, thus, inherently determined by the physical problem at hand, and does not need to be justified empirically or axiomatically, as is the common status of information measures in the literature. We define the averaging operation in terms of Riemannian volume integration, restricting attention to scalar functions $\Psi (t,X^i)$, $$\label{average} \langle \Psi (t, X^i)\rangle_{\cal D}: = \frac{1}{V_{\cal D}}\int_{\cal D} \sqrt{g} d^3 X \;\;\Psi (t, X^i) \;\;\;,$$ with the Riemannian volume element $d\mu_g := \sqrt{g} d^3 X$, $g:=\det(g_{ij})$, and the volume of an arbitrary compact domain, $V_{\cal D}(t) : = \int_{\cal D} \sqrt{g} d^3 X$; $X^i$ are coordinates in a $t=const.$ hypersurface (with $3-$metric $g_{ij}$) that are comoving with fluid elements of dust: $$ds^2 = -dt^2 + g_{ij}dX^idX^j \;\;.$$ It is evident from the above setting that we predefine a simple time–orthogonal foliation (which restricts the matter to an irrotational dust continuum) in order to simplify the framework in which we discuss our measure as a concept of a [*spatial*]{} average. We wish to emphasize that the formalism below could be carried over to more general settings (e.g. to perfect fluids or scalar fields, [*cf.*]{} [@buchert:grgfluid]) with possibly further interesting implications. The above–mentioned ‘non–commutativity’ has been fruitfully exploited in previous work on the averaging problem of inhomogeneous cosmologies [@buchert:average; @buchert:grgdust; @buchert:grgfluid; @buchert:onaverage; @buchertcarforaPRL], and can be compactly written in terms of a [*commutation rule*]{} for the averaging of a scalar field $\Psi$: $$\begin{aligned} \label{commutationrule} \langle \Psi{\dot \rangle}_{\cal D} - \langle{\dot \Psi}\rangle_{\cal D} = \langle \Psi\theta\rangle_{\cal D} - \langle \Psi\rangle_{\cal D}\langle\theta\rangle_{\cal D} \nonumber \\ =\langle\Psi\delta\theta\rangle_{\cal D} =\langle\theta\delta\Psi\rangle_{\cal D} =\langle\delta\Psi\delta\theta\rangle_{\cal D}\;\;\;,\end{aligned}$$ where $\theta$ denotes the local expansion rate (as minus the trace of the extrinsic curvature of the hypersurfaces $t=const.$). We have rewritten the r.h.s. of the first equality in terms of the deviations of the local fields from their spatial averages, $\delta\Psi := \Psi - \langle\Psi\rangle_{\cal D}$ and $\delta\theta := \theta - \langle\theta\rangle_{\cal D}$. 
The key–statement of the [*commutation rule*]{} (\[commutationrule\]) is that the operations [*spatial averaging*]{} and [*time evolution*]{} do not commute. In cosmology we may think of initial conditions at the epoch of last scattering, when the fluctuations imprinted on the Cosmic Microwave Background are considered to be averaged–out on a restframe of a standard Friedmann–Lemaître–Robertson–Walker (FLRW) cosmology. In this picture the evolution of the Universe is described by first averaging–out (or ignoring) inhomogeneities and then evolving the average distribution by a homogeneous (in the above case homogeneous–isotropic) universe model. A realistic model would first evolve the inhomogeneous fields and, at the present epoch, the resulting fields would have to be evaluated by spatial averaging to obtain the final values of, e.g., the averaged density field. In particular, this comment applies to all cosmological parameters (see, e.g., [@buchert:grgdust] and [@buchertcarforaPRL]). Let us illustrate this statement for the mass density field. Setting $\Psi = \varrho$, Eq. (\[commutationrule\]) reads: $$\label{commutationdensity1} \langle \varrho{\dot\rangle}_{\cal D} + \langle\theta\rangle_{\cal D}\langle \varrho\rangle_{\cal D} = \langle{\dot \varrho} + \theta\varrho\rangle_{\cal D} \;\;\;.$$ Since the r.h.s. vanishes due to the continuity equation, we also have a continuity equation for the averages: $$\label{continuity2} \langle \varrho{\dot\rangle}_{\cal D} + \langle\theta\rangle_{\cal D}\langle \varrho\rangle_{\cal D} = 0\;\;\;,$$ which simply expresses the conservation of the total material mass, $M_{\cal D} = \int_{\cal D} \sqrt{g} d^3 X\;\varrho$, in our comoving and synchronous gauge
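The commutation rule (\[commutationrule\]) can be verified directly in a discretized toy model, in which the role of $\sqrt{g}$ is played by a positive volume weight $J(t,X)$ with $\theta = \dot J / J$ (so that $\dot J = \theta J$ holds pointwise, as for dust in the synchronous gauge). A minimal sketch with purely illustrative choices of $J$ and $\Psi$:

```python
import numpy as np

X = np.linspace(0.0, 1.0, 2001)
J   = lambda t: 1.0 + t + 0.3 * t * np.sin(2 * np.pi * X)  # sqrt(g) analogue
Jt  = lambda t: 1.0 + 0.3 * np.sin(2 * np.pi * X)          # dJ/dt
Psi  = lambda t: (1.0 + t**2) * np.cos(2 * np.pi * X)      # test scalar field
Psit = lambda t: 2.0 * t * np.cos(2 * np.pi * X)           # dPsi/dt

def avg(f, t):  # domain average weighted by the volume element
    w = J(t)
    return (f * w).mean() / w.mean()

t, dt = 0.5, 1e-5
theta = Jt(t) / J(t)
lhs = (avg(Psi(t + dt), t + dt) - avg(Psi(t - dt), t - dt)) / (2 * dt) - avg(Psit(t), t)
rhs = avg(Psi(t) * theta, t) - avg(Psi(t), t) * avg(theta, t)
print(abs(lhs - rhs))   # ~ O(dt^2), i.e. numerically zero
```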
--- abstract: 'The influence of Rashba spin-orbit coupling on zero conductance resonances appearing in one-dimensional conducting rings asymmetrically coupled to two leads is investigated. For this purpose, the transmission function of the corresponding one-electron scattering problem is derived analytically and analyzed in the complex energy plane with focus on the zero-pole structure characteristic of transmission (anti)resonances. The lifting of real conductance zeros due to spin-orbit coupling in the asymmetric Aharonov-Casher ring is related to the breaking of spin reversal symmetry in analogy to the time-reversal symmetry breaking in the asymmetric Aharonov-Bohm ring.' author: - Urs Aeberhard - Katsunori Wakabayashi - Manfred Sigrist bibliography: - 'paper.bib' title: 'Effect of spin-orbit coupling on zero-conductance resonances in asymmetrically coupled one-dimensional rings' --- Introduction\[sec:1\] ===================== An important feature of one-dimensional ring shaped conductors or electronic devices is the appearance of quantum interference effects under the influence of electromagnetic potentials, known as Aharonov-Bohm[@ab:59] (AB) and Aharonov-Casher[@ac:84] (AC) effect. In numerous investigations, the transmission properties of mesoscopic AB and AC-rings coupled to current leads were studied under various aspects such as AB-flux and coupling dependence of resonances[@buettiker:84], geometric (Berry) phases[@loss:90; @ady:92; @alg:93; @quian:94; @hentschel:03] and spin flip, precession and interference effects[@yi:97; @nitta:99; @hentschel2:03; @frustaglia:04; @molnar:04]. Most of the investigated models use symmetrically coupled rings. There are however mesoscopic systems like nanographite ribbons showing conductance properties that are based on asymmetric configurations [@wakabayashi:01], giving rise to a specific dip structure of anti-resonances (zero-conductance resonances) in the model transmission. The effects of asymmetry on the transmission were considered mainly in quantum network models[@wu:91; @yi:03; @bercioux:04]. In quasi 1d systems, real conductance zeros appear under the condition of conserved time reversal symmetry[@lee:99; @lee:01] (TRS). The (anti)resonances in the transmission due to local quasi-bound states correspond to a specific zero-pole structure in the complex energy plane[@porod:93; @porod:94; @deo:94; @price:93]. The application of an external magnetic field modifies this zero-pole structure, shifting the transmission zeros away from the real axis, with the shift as a function of the AB-phase[@kim:02]. Thus, the lifting of conductance zeros is related to the breaking of TRS. In this paper, the influence of spin-orbit coupling (SOC) on zero-conductance resonances in asymmetrically coupled rings is investigated by means of an AC-ring where an effective in-plane magnetic field results from the *Rashba* effect[@rashba:60] of moving electrons in the presence of an electric field perpendicular to the ring plane, as considered in Ref. and . This means that the role of time reversal symmetry is now transfered to inversion symmetry (parity). We will show that parity connected with the Rashba spin orbit coupling can be viewed in an analogous way as the case of time reversal symmetry for spinless particles. This paper is organized as follows. In Sec. \[sec:2\], a single-particle description of the one-dimensional ring subject to Rashba-SOC in terms of Hamiltonian, eigenstates and eigenenergies is given, following Ref. . 
The section concludes with the results for the transmission of the asymmetric AC-ring in the one-electron scattering picture which is derived in the appendix. The analytic expression for the transmission function is analyzed in Sec. \[sec:3\] with focus on geometry and SOC dependence of the transmission zeros. Sec. \[sec:4\] contains a symmetry argument which establishes an analogy between formation and lifting of the zeros due to Rashba-SOC in the AC-ring and the corresponding effects on spinless electrons due to the magnetic field in the AB-ring. The main results are summarized in the conclusions of Sec. \[sec:5\]. AC-ring in single particle picture \[sec:2\] ============================================ The coupling of electron spin and orbital degrees of freedom is due to the magnetic field generated in the reference frame of a moving electron by an electric field in the reference frame of the laboratory. In two dimensional systems (e.g. due to the presence of a confinement potential along a specific direction), an important contribution of electric fields is the Rashba effect, a consequence of lack of inversion symmetry, that causes a spin band-splitting proportional to the momentum. In the ring system under consideration, the Rashba field results from the asymmetric confinement along the direction perpendicular to the ring plane. Hamiltonian ----------- In the following investigation of one-dimensional rings, $z$ is chosen as the direction of confinement, perpendicular to the plane of motion. The various SO-coupling mechanisms are accounted for using the following model Hamiltonian: $$\hat{H}_{SO}=\frac{\alpha}{\hbar}(\hat{\vec{\sigma}}\times\hat{\vec{p}})_{z}=i\alpha\left(\hat{\sigma}_{y}{\frac{\partial}{\partial x}}-\hat{\sigma}_{x}{\frac{\partial}{\partial y}}\right),$$ where $\frac{\hbar}{2}\hat{\vec{\sigma}}$ is the spin operator in terms of the Pauli spin matrices, $\hat{\vec{\sigma}}=(\sigma_{x},\sigma_{y},\sigma_{z})$ and $\alpha$ is the Rashba parameter characterizing the strength of the SOC corresponding to an electric field $\vec E_{R}=\left(0,0,E_{z}\right)$ in $z$-direction, arising from a potential $V(z)$ due to structural or confinement asymmetry. In polar coordinates $x=r\cos\varphi$ and $y=r\sin\varphi$ the total Hamiltonian in effective mass approximation reads[@meijer:02] $$\begin{aligned} \hat{H}(r,\varphi)=&-\frac{\hbar^{2}}{2m^{*}}\left[{\partial^{2}_{ r}}+\frac{1}{r}{\frac{\partial}{\partial r}}+\frac{1}{r^{2}}{\partial^{2}_{ \varphi}}\right]-\frac{i\alpha}{r}(\cos\varphi\sigma_{x}\nonumber\\&+\sin\varphi\sigma_{y}){\frac{\partial}{\partial \varphi}}+i\alpha(\cos\varphi\sigma_{y}-\sin\varphi\sigma_{x}){\frac{\partial}{\partial r}}, \end{aligned}$$ with the effective mass $m^{*}$. In the case of a one-dimensional ring, a confining potential $V(r)$ needs to be added in order to force the electron wave functions to be localized on the ring in the radial direction. It is shown in Ref. that the exact form of the confining potential is not essential. A simple possibility is the harmonic potential centered around $r=\r$, $V(r)=\frac{1}{2}K(r-\r)^{2}$ where $ \r $ is the radius of the ring. ![(a) Momentum dependent in-plane Rashba field $\vec{B}_{R}$, (b) Up and down spin eigenstates do not generally align with the Rashba field $\vec{B}_{R}$, but make a tilt angle $\theta$ with the electric field $\vec{E}_{R}$ perpendicular to the ring plane ($\vec{E}_{R}$, $\vec{B}_{R}$ and $\vec{v}_{g}$ form an orthogonal coordinate system). 
[]{data-label="fig:tiltangle1"}](fig1a.eps "fig:"){width="2.5in"}![(a) Momentum dependent in-plane Rashba field $\vec{B}_{R}$, (b) Up and down spin eigenstates do not generally align with the Rashba field $\vec{B}_{R}$, but make a tilt angle $\theta$ with the electric field $\vec{E}_{R}$ perpendicular to the ring plane ($\vec{E}_{R}$, $\vec{B}_{R}$ and $\vec{v}_{g}$ form an orthogonal coordinate system). []{data-label="fig:tiltangle1"}](fig1b.eps "fig:"){width="2.6in"} Considering only the lowest radial mode, the resulting one-dimensional Hamiltonian for fixed radius $\r$ is (see Ref. for a complete derivation) $$\begin{aligned} \hat{H}_{1D}(\varphi)=&\langle R_{0}(r)|\hat{H}(r,\varphi)|R_{0}(r)\rangle\nonumber\\ =&-\frac{\hbar^2}{2m^{*}\r^{2}}\frac{\partial^{2}}{\partial\varphi^{2}}-\frac{i\alpha}{\r}(\cos\varphi\sigma_{x}\nonumber\\&+\sin\varphi\sigma_{y})\frac{\partial}{\partial\varphi}-\frac{i\alpha}{2\r}(\cos\varphi\sigma_{y}-\sin\varphi\sigma_{x}). \label{eq:1dham} \end{aligned}$$ The last term in the above expression for the 1D-Hamiltonian encodes the correction due to the radial confinement. The Hamiltonian in Eq.(\[eq:1dham\])
--- abstract: | We introduce a new class of non-standard variable-length codes, called *adaptive codes*. This class of codes associates a variable-length codeword to the symbol being encoded depending on the previous symbols in the input data string. An efficient algorithm for constructing adaptive codes of order one is presented. Then, we introduce a natural generalization of adaptive codes, called *GA codes*. [**Keywords:**]{} adaptive mechanisms, compression rate, data compression, entropy, prefix codes, variable-length codes. title: '****' --- Introduction ============ The theory of variable-length codes [@bp1] originated in concrete problems of information transmission. Especially by its language theoretic branch, the field has produced a great number of results, most of them with multiple applications in engineering and computer science. Intuitively, a *variable-length code* is a set of strings such that any concatenation of these strings can be uniquely decoded. We introduce a new class of non-standard variable-length codes, called *adaptive codes*, which associate a variable-length codeword to the symbol being encoded depending on the previous symbols in the input data string. The paper is organized into six sections. After this introductory section, the definition of adaptive codes and several theoretical remarks are given in Section 2, as well as some characterization results for adaptive codes. The main results of this paper are presented in Section 3, where we focus on designing an algorithm for constructing adaptive codes of order one. In Section 4, we compute the entropy bounds for this algorithm. A natural generalization of adaptive codes is presented in Section 5. Finally, the last section contains a few concluding remarks. Before ending this introductory section, let us present some useful notation used throughout the paper [@rs1; @as1], and then review some basic concepts. We denote by $|S|$ the *cardinality* of a set $S$; if $x$ is a string of finite length, then $|x|$ denotes the length of $x$. The *empty string* is denoted by $\lambda$. For an alphabet $\Sigma$, we denote by $\Sigma^{*}$ the set $\bigcup_{n=0}^{\infty}\Sigma^{n}$, and by $\Sigma^{+}$ the set $\bigcup_{n=1}^{\infty}\Sigma^{n}$, where $\Sigma^{0}$ is defined by $\{\lambda\}$. Let us denote by $\Sigma^{\leq n}$ the set $\bigcup_{i=0}^{n}\Sigma^{i}$ and by $\Sigma^{\geq n}$ the set $\bigcup_{i=n}^{\infty}\Sigma^{i}$. Let $X$ be a finite and nonempty subset of $\Sigma^{+}$, and $w\in\Sigma^{+}$. A *decomposition of w* over $X$ is any sequence of words $u_{1}, u_{2}, \ldots, u_{h}$ with $u_{i}\in X$, $1\leq i\leq h$, such that $w=u_{1}u_{2}\ldots u_{h}$. A *code* over $\Sigma$ is any nonempty set $C\subseteq\Sigma^{+}$ such that each word $w\in\Sigma^{+}$ has at most one decomposition over $C$. A *prefix code* over $\Sigma$ is any code $C$ over $\Sigma$ such that no word in $C$ is proper prefix of another word in $C$. Adaptive Codes ============== In this section we introduce a new class of non-standard variable-length codes, called adaptive codes. These codes are based on adaptive mechanisms, that is, the variable-length codeword associated to the symbol being encoded depends on the previous symbols in the input data string. Let $\Sigma$ and $\Delta$ be two alphabets. 
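To fix intuition before the formal definition, the encoding mechanism can be stated operationally: each symbol is encoded with a codeword selected by the context of (at most $n$) previously read symbols. The following Python sketch implements the encoding map that Definition 2.1 below formalizes; the four entries of the map $c$ used here are exactly those appearing in the worked example that follows the definition (the rest of the table is left unspecified):

```python
def encode(x, c, n):
    """Encode string x with an order-n adaptive code.

    c maps (symbol, context) -> codeword, where the context consists of
    the (at most n) previously read symbols, cf. Definition 2.1.
    """
    out = []
    for i, sigma in enumerate(x):
        context = x[max(0, i - n):i]   # the empty string plays the role of lambda
        out.append(c[(sigma, context)])
    return "".join(out)

# Order n = 2, Sigma = {a, b}, Delta = {0, 1}; entries taken from the example:
c = {("a", ""): "0", ("b", "a"): "1", ("a", "ab"): "0", ("a", "ba"): "1"}
print(encode("abaa", c, 2))   # -> "0101"
```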
A function ${c:\Sigma\times\Sigma^{\leq{n}}\rightarrow\Delta^{+}}$, with $n\geq{1}$, is called an *adaptive code of order $n$* if its unique homomorphic extension ${\overline{c}:\Sigma^{*}\rightarrow\Delta^{*}}$ given by: - $\overline{c}(\lambda)=\lambda$, - $\overline{c}({\sigma_{1}\sigma_{2}\ldots\sigma_{m}})=$ $c(\sigma_{1},\lambda)$ $c(\sigma_{2},\sigma_{1})$ $\ldots$ $c(\sigma_{n-1},{\sigma_{1}\sigma_{2}\ldots\sigma_{n-2}})$ $c(\sigma_{n},{\sigma_{1}\sigma_{2}\ldots\sigma_{n-1}})$ $c(\sigma_{n+1},{\sigma_{1}\sigma_{2}\ldots\sigma_{n}})$ $c(\sigma_{n+2},\sigma_{2}\sigma_{3}\ldots\sigma_{n+1})$ $c(\sigma_{n+3},\sigma_{3}\sigma_{4}\ldots\sigma_{n+2})\ldots$ $c(\sigma_{m},\sigma_{m-n}\sigma_{m-n+1}\ldots\sigma_{m-1})$, for all ${\sigma_{1}\sigma_{2}\ldots\sigma_{m}}\in\Sigma^{+}$, is injective. Let us take an example in order to better understand the adaptive mechanisms presented in the definition above. Let $\Sigma=\{{\texttt{\textup{a}}},{\texttt{\textup{b}}}\}$, $\Delta=\{0,1\}$ be alphabets, and ${c:\Sigma\times\Sigma^{\leq{2}}\rightarrow\Delta^{+}}$ a function given by the table below. One can easily verify that the function $\overline{c}$ is injective, and according to Definition 2.1, $c$ is an adaptive code of order two. Let $x={\texttt{\textup{abaa}}}\in\Sigma^{+}$. Using the definition above, we encode $x$ by $\overline{c}(x)=c({\texttt{\textup{a}}},\lambda)c({\texttt{\textup{b}}},{\texttt{\textup{a}}})c({\texttt{\textup{a}}},{\texttt{\textup{ab}}})c({\texttt{\textup{a}}},{\texttt{\textup{ba}}})=0101$. Let ${c:\Sigma\times\Sigma^{\leq{n}}\rightarrow\Delta^{+}}$ be an adaptive code of order $n$, $n\geq{1}$. We denote by $C_{c, \sigma_{1}\sigma_{2}\ldots\sigma_{h}}$ the set $\{c(\sigma,\sigma_{1}\sigma_{2}\ldots\sigma_{h}) \mid \sigma\in\Sigma\}$, for all $\sigma_{1}\sigma_{2}\ldots\sigma_{h}\in\Sigma^{\leq{n}}-\{\lambda\}$, and by $C_{c, \lambda}$ the set $\{c(\sigma,\lambda) \mid \sigma\in\Sigma\}$. We write $C_{\sigma_{1}\sigma_{2}\ldots\sigma_{h}}$ instead of $C_{c, \sigma_{1}\sigma_{2}\ldots\sigma_{h}}$, and $C_{\lambda}$ instead of $C_{c, \lambda}$ whenever there is no confusion. If $w\in\Sigma^{+}$ then we denote by $w(i)$ the $i$-th symbol of $w$. In the rest of this paper we denote by ${\it AC}(\Sigma,\Delta,n)$ the set $\{{c:\Sigma\times\Sigma^{\leq{n}}\rightarrow\Delta^{+}} \mid$ $c$ is an adaptive code of order $n\}$. Let $\Sigma$ and $\Delta$ be two alphabets, and let ${c:\Sigma\times\Sigma^{\leq{n}}\rightarrow\Delta^{+}}$ be a function. If $C_{{\sigma_{1}\sigma_{2}\ldots\sigma_{h}}}$ is a prefix code, for all ${\sigma_{1}\sigma_{2}\ldots\sigma_{h}}\in\Sigma^{\leq{n}}$, then $c\in{{\it AC}(\Sigma,\Delta,n)}$. **Proof** Let us assume that $C_{{\sigma_{1}\sigma_{2}\ldots\sigma_{h}}}$ is a prefix code, for all ${\sigma_{1}\sigma_{2}\ldots\sigma_{h}}\in\Sigma^{\leq{n}}$, but $c\notin{{\it AC}(\Sigma,\Delta,n)}$. By Definition 2.1, the unique homomorphic extension of $c$, denoted by $\overline{c}$, is not injective. This implies that $\exists$ $u\sigma u', u\sigma'u''\in\Sigma^{+}$, with $\sigma,\sigma '\in\Sigma$ and $u,u',u''\in\Sigma^{*}$, such that $\sigma\neq\sigma'$ and $(*)$ $ $ $\overline{c}(u\sigma u')=\overline{c}(u\sigma'u'')$. We can rewrite $(*)$ by $(**)$ $ $ $\overline{c}(u)c(\sigma,P_{n}(u))\overline{c}(u')=$ $\overline{c}(u)c(\sigma',P_{n}(u))\overline{c}(u'')$, where $P_{n}(u)$ is given by $$P_{n}(u)= \left\{ \begin{array}{ll} \lambda & \textrm{if $u=\lambda$,} \\ u_{1}\ldots u_{q} & \textrm{if $u=u_{1}u_{2}\ldots u_{q}$ and $u_{1},u_{2},\ldots,u_{q}\in\Sigma$ and
--- abstract: 'We study the convexity of the entropy functional along particular interpolating curves defined on the space of finitely supported probability measures on a graph.' author: - 'Erwan Hillion [^1]' title: 'Entropy along $W_{1,+}$-geodesics on graphs' --- Introduction ============ The Wasserstein distance $W_p(\mu_0,\mu_1)$ between two finitely supported probability measures on a metric space $(X,d)$ with its Borel $\sigma$-algebra is defined for $p \geq 1$ by $$\label{eq:WassersteinDef} W_p(\mu_0,\mu_1)^p := \inf_{\pi \in \Pi(\mu_0,\mu_1)} \int_{X \times X} d(x_0,x_1)^p d\pi(x_0,x_1),$$ where $\Pi(\mu_0,\mu_1)$ is the (non-empty) set of couplings between $\mu_0$ and $\mu_1$, i.e. the set of probability measures on $X \times X$ having $\mu_0$ and $\mu_1$ as marginals. The optimization problem defined by equation (\[eq:WassersteinDef\]) is called the Monge-Kantorovitch problem and any minimizer for (\[eq:WassersteinDef\]) is called an optimal coupling between $\mu_0$ and $\mu_1$. For a comprehensive study of optimal transportation theory, the reader is referred to the textbooks [@VillaniBook1] and [@VillaniBook2] by Villani. Under mild conditions, it is possible to show that the set $\Pi_p(\mu_0,\mu_1)$ of optimal couplings between $\mu_0$ and $\mu_1$ is non-empty. Furthermore, under the additional assumptions that $p>1$, $(X,d)$ is the Euclidean space $(\mathbb{R}^d,|.|)$ and $\mu_0$ is absolutely continuous with respect to the Lebesgue measure, one can prove the existence of a measurable map $T : \mathbb{R}^d \rightarrow \mathbb{R}^d$ such that the coupling $\pi := (Id \times T)_*\mu_0$ is a minimizer for (\[eq:WassersteinDef\]). In particular, $\mu_1$ is the pushforward of $\mu_0$ by the application $T$: $\mu_1 := T_*\mu_0$ and equation (\[eq:WassersteinDef\]) can be rewritten $$W_p(\mu_0,\mu_1)^p = \int_{\mathbb{R}^d} |x-T(x)|^p d\mu_0(x).$$ It is possible to go further by considering, for $0 \leq t \leq 1$, the measure $\mu_t := (T_t)_*\mu_0$, where the application $T_t : \mathbb{R}^d \rightarrow \mathbb{R}^d$ is defined as the barycenter $T_t(x) := (1-t)x + t T(x)$. 
One can then show that the family $(\mu_t)_{t \in [0,1]}$ is a geodesic for the Wasserstein distance $W_p$, in the sense that $$W_p(\mu_0,\mu_1) = \sup_{0 = t_0 \leq t_1 \cdots \leq t_{n} = 1} \sum_{i=0}^{n-1} W_p(\mu_{t_i},\mu_{t_{i+1}}).$$ Moreover, a fundamental property of optimal couplings asserts that $T_t$ is injective, which allows us to define unambiguously a velocity field $(v_t)_{t \in [0,1]}$ by $$v_t(T_t(x)) := T(x)-x.$$ The terminology ’velocity field’ comes from the fact that, if we write $d\mu_t(x) = f_t(x) dx$, then the density $f_t(x)$ satisfy, at least formally, the transport equation $$\label{eq:velocitytransport} \frac{\partial}{\partial t} f_t(x) + \operatorname{div}(v_t(x) f_t(x)) = 0.$$ Moreover, the velocity field $v_t(x)$ satisfies the Hamilton-Jacobi-type equation $$\label{eq:velocityoptimalHJ} \frac{\partial}{\partial t} v_t(x) + \frac{1}{2} \operatorname{grad}|v_t(x)|^2 = 0,$$ which can be simplified into $$\label{eq:velocityoptimal} \frac{\partial}{\partial t} v_t(x) = - \operatorname{div}(v_t(x)) v_t(x).$$ In [@BenamouBrenier], Benamou and Brenier proved that both equations  and  can be used to give a characterization of $W_p$-geodesics, more precisely we have: \[th:BenamouBrenier\] Given two finitely supported probability measures $d\mu_0(x) := f_0(x) dx$ and $d\mu_1(x) := f_1(x) dx$, we have $$\label{eq:MKBenamou} W_p(\mu_0,\mu_1)^p = \inf \int_0^1 \int_{\mathbb{R}^d} |v_t(x)|^p d\mu_t(x),$$ where the infimum is taken over the set of curves $(\mu_t)_{t \in [0,1]} = (f_t(x) dx)_{t \in [0,1]}$ joining the prescribed measures $\mu_0$ and $\mu_1$, and where $(v_t(x))_{t \in [0,1]}$ is a velocity field such that equation  holds. Moreover, the formal optimality condition for the optimization problem  is given by equation . Theorem \[th:BenamouBrenier\] is also true for families of probability measures defined on a Riemannian manifold, having smooth enough densities with respect to the Riemannian volume measure. However, in this framework, equations and are no longer equivalent. The optimality condition  is the starting point of the article [@HillionGeodesic] by the author. The main idea is the following: given two distinct probability measures $f_0,f_1$ on a graph $G$, there is no interpolating curve $(f_t)_{t \in [0,1]}$ with a finite length for the Wasserstein $W_p$, for any $p>1$. However, in generic cases there are infinitely many geodesics $(f_t)_{t \in [0,1]}$ for the $W_1$ distance. The aim of [@HillionGeodesic] is to choose among this set a particular $W_1$-geodesic satisfying a discrete version of equation . These interpolating curves are called $W_{1,+}$-geodesics on $G$; we recall their basic properties in Section 2. The purpose of this article is to study the behaviour of the entropy functional along a $W_{1,+}$-geodesic $(f_t(x))_{t \in [0,1], x \in G}$ on a graph $G$. More precisely, we will study the convexity of the function $t \mapsto H(t)$ defined by $$H(t) := \sum_{x \in G} f_t(x) \log(f_t(x)),$$ where by convention $0 \log 0 = 0$. The methods used to prove such convexity properties are adapted from the previous article [@HillionContraction] by the author, and use the first-order-calculus formalism introduced in [@HillionGeodesic]. The motivation behind this research work comes from Sturm-Lott-Villani theory, developed in the articles [@SturmRicci01], [@SturmRicci02] and  [@LottVillani]. 
The main idea of this theory is the following: it is possible to obtain some information about the geometry of a measured length space $(X,d,\nu)$ by studying the behaviour of entropy functionals along $W_2$-geodesics on the space of probability measures over $(X,d)$. A major result asserts that a compact Riemannian manifold $(M,g)$ satisfies the Ricci curvature bound $\operatorname{Ric}\geq K g$ if and only if each pair of absolutely continuous probability measures $\mu_0, \mu_1$ can be joined by a Wasserstein $W_2$-geodesic $(\mu_t)_{t \in [0,1]}$ such that $$\label{eq:LVSconvexity} H(\mu_t) \leq (1-t) H(\mu_0) + t H(\mu_1) - K \frac{t(1-t)}{2} W_2(\mu_0,\mu_1)^2,$$ where the relative entropy $H(\mu)$ is defined by $H(\mu): = \int_{M} \rho \log(\rho) d\operatorname{vol}$ if $d\mu = \rho . d\operatorname{vol}$ and by $H(\mu)=\infty$ if $\mu$ is not absolutely continuous with respect to the Riemannian volume measure. It is then possible to define the curvature condition ’
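For finitely supported measures, the Monge-Kantorovich problem (\[eq:WassersteinDef\]) is a finite linear program, which gives a quick way to experiment with the quantities above. The sketch below (our illustration with arbitrary toy data, not a computation from this paper) computes $W_p(\mu_0,\mu_1)^p$ and an optimal coupling for two measures on the real line using `scipy`.

```python
import numpy as np
from scipy.optimize import linprog

# Toy finitely supported measures mu_0, mu_1 on R, and p = 2.
x0, mu0 = np.array([0.0, 1.0, 2.0]), np.array([0.5, 0.3, 0.2])
x1, mu1 = np.array([0.5, 2.5]),      np.array([0.6, 0.4])

p = 2
cost = np.abs(x0[:, None] - x1[None, :]) ** p   # d(x_i, y_j)^p

m, n = cost.shape
# Marginal constraints: rows of the coupling pi sum to mu0,
# columns sum to mu1.
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0
for j in range(n):
    A_eq[m + j, j::n] = 1.0
b_eq = np.concatenate([mu0, mu1])

res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print("W_p(mu0, mu1)^p =", res.fun)
pi = res.x.reshape(m, n)        # an optimal coupling in Pi_p(mu0, mu1)
```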
--- abstract: | Hyperfine structures of the triplet $n^3S-$states in the four-electron Be-atom(s) and Be-like ions are considered. It is shown that to determine the hyperfine structure splitting in such atomic systems one needs to know the triplet electron density at the central atomic nucleus $\rho_T(0)$. We have developed a procedure which allows one to determine such an electron density $\rho_T(0)$ for arbitrary four-electron atoms and ions. PACS number(s): 31.30.Gs, 31.15.vj and 32.15.Fn author: - 'Alexei M. Frolov' title: 'On the hyperfine structure of the triplet $n^{3}S-$states of the four-electron atoms and ions' --- Introduction ============ In this communication we develop a new ab initio method which can be applied for accurate evaluation of the hyperfine structure splittings in the triplet ${}^3S-$states of the four-electron atoms and/or ions. As follows from experiments, such triplet $S(L = 0)-$states have an interesting hyperfine structure. For simplicity, let us consider the triplet $2^3S-$state of the four-electron beryllium atom(s) (${}^{7}$Be, ${}^{9}$Be and ${}^{\infty}$Be). In general, if $F$ is the total electron-nuclear spin of the $2^3S-$state of an atom and $I_N$ is the spin of atomic nucleus (Be) and $I_N \ge 1$, then in experiments one can observe splitting of this state into a triplet of states. The total spin of these states equals $F = I_N + 1, I_N$ and $I_N - 1$, respectively. If $I_N = \frac12$, then we can see only a doublet of states with $F = \frac12$ and $\frac32$. This method can also be used to determine the hyperfine structure splitting for an arbitrary bound (triplet) $n^3S-$state in four-electron atoms and ions. This includes the triplet $2^3S-$state of the four-electron Be atoms (different isotopes). Our analysis of the hyperfine structure of the four-electron atoms and ions is based on the generalization of the method developed earlier by Fermi [@Fermi] for the doublet $n^{2}S-$states of three-electron Li-atom(s) and Li-like ions (see also [@Fro2016]). First, we need to introduce the triplet electron density in a few-electron atom/ion. Formally, the triplet electron density is the spatial two-electron density distribution of the two atomic electrons which form one triplet pair. If we have a number of such pairs in an atom/ion, then we need to take into account all triplet electron pairs. Singlet electron pairs do not contribute to the triplet electron density. In reality, for accurate evaluations of the hyperfine structure splitting in the triplet states of four-electron atoms/ions one needs to know the triplet electron density at the central atomic nucleus which has non-zero electric charge $Q e$. The general definition of the electron density at the central atomic nucleus is written in the form $\rho(0) = \sum^{N_e}_{i=1} \langle \delta({\bf r}_{iA}) \rangle$, where $i$ is the electron’s index, while index $A$ means the central atomic nucleus. For instance, for the singlet ground $1^1S-$state in the two-electron helium atom one finds $\rho_{S}(0) \approx$ 1.8104293185013928, while for the triplet $2^3S-$state of the helium-3 atom we have $\rho_{T}(0) \approx$ 1.31963500836957 [@Fro2007]. The last numerical value leads to the following hyperfine structure splitting in the $2^3S-$state of the ${}^{3}$He atom: $\Delta E_{hss}$ = 6740.452154 $MHz$ [@Fro2007] (see also [@BS]). The corresponding experimental value is $\Delta E_{hss}$ = 6739.701171(16) $MHz$ [@PRA70]. 
However, such a definition of the electron density cannot be used for the triplet states in atoms/ions, if the total number of bound electrons exceeds two. The reason is obvious, since all atoms/ions with more than two bound electrons always have the shell electronic structure. This means that the internal electrons form a number of closed electron shells which have zero spin, i.e. singlet electron shells. The electrons from the outer-most shell(s) can interact with the nuclear spin $I_N$, if its numerical value differs from zero. In general, this leads to the appearance of hyperfine structure splitting in $N_e-$electron atoms and ions, where $N_e \ge 3$, whenever the total spin of the outer-most electrons exceeds zero. It is clear that in actual atoms and ions we have small interactions between electrons from internal and outermost electron shells, e.g., electronic correlations, spin-spin interactions, etc. As follows from this picture, the analysis and numerical computations of the hyperfine structure splitting are significantly more complicated than for the two-electron helium atom. In the lowest-order approximation, therefore, the triplet electron density at the atomic nucleus must be defined in a different manner. An alternative definition of the triplet electron density at the atomic nucleus ($A$) can be written in the form (see, e.g., [@Fermi], [@Fro2016], [@Lars]) $$\rho_T(0) = \sum^{N_e}_{i=1} \langle \delta({\bf r}_{iA}) (\sigma_{z})_i \rangle = \langle \Psi \mid \sum^{N_e}_{i=1} \delta({\bf r}_{iA}) (\sigma_{z})_i \mid \Psi \rangle \; \; \; \label{eq1}$$ where $\delta({\bf r}_{iA})$ is the electron-nucleus delta-function (the symbol $A$ designates the atomic nucleus) and $(\sigma_{z})_i$ is the $\sigma_z$ matrix of the $i$-th atomic electron, i.e. $(\sigma_{z})_i \alpha(i) = \alpha(i)$ and $(\sigma_{z})_i \beta(i) = - \beta(i)$ (see, e.g., [@LLQ], [@Dirac]). In Eq.(\[eq1\]) and everywhere below we assume that the wave function of the bound $2^{3}S-$state of the four-electron atom/ion has unit norm. As follows from Eq.(\[eq1\]) the triplet electron density $\rho_T(0)$ equals zero identically for an arbitrary singlet state in a few-electron atom, including the two-electron He atom. For the triplet states in the helium-3 atom the new definition of the electron density, Eq.(\[eq1\]), leads to the same hyperfine structure splitting as mentioned above. For the ground $2^{2}S-$state in the three-electron Li atoms and analogous ions, such a definition of the doublet electron density at the central atomic nucleus, Eq.(\[eq1\]), allows one to evaluate the correct numerical values of the hyperfine structure splittings (see, e.g., [@Fro2016]) which are in good agreement with the known experimental values [@Liatom]. 
By using this definition of the triplet electron density at the atomic nucleus, Eq.(\[eq1\]), we can write the following formula (Fermi-Segré formula (see, e.g., [@LLQ])) for the hyperfine structure splitting of the $2^3S-$states in the Be atom (see, e.g., [@LLQ]) $$\Delta E_{hf} = \frac{8 \pi \alpha^2}{3} \mu_B \mu_N g_e g_N \; \; \rho_T(0) \; \; \frac12 [F(F + 1) - I_N(I_N + 1) - S_e(S_e + 1)] \; \; \label{eq2}$$ where ${\bf S}_{e}$ is the total electron spin of the atom, ${\bf I}_{N}$ is the spin of the nucleus in those isotopes of the Be atom(s) for which $\mid {\bf I}_{N} \mid \ne 0$ and ${\bf F}$ is the total angular momentum operator ${\bf F} = {\bf L} + {\bf S} = {\bf S}_e + {\bf I}_N$ of the four-electron atom/ion. For the triplet $S-$states in the four-electron atoms/ions the vector-operator ${\bf F} = {\bf S}_e + {\bf I}_N$ can be considered as the total spin of the atom, i.e. the sum of the electron and nuclear spins. Also, in this formula $\alpha \approx 7.297352568 \cdot 10^{-3} (\approx \frac{1}{137})$ is the dimensionless fine structure constant, $\mu_B$ is the Bohr magneton ($\mu_B = \frac12$ in atomic units) and $\mu_N = \mu_B \frac{m_e}{m_p}$, where $\frac{
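The angular content of the Fermi-Segré formula, Eq. (\[eq2\]), can be checked independently of the prefactor containing $\rho_T(0)$. The sketch below is our arithmetic, not a result from this paper: the overall hyperfine constant is left as a unit, and we evaluate $\frac12[F(F+1)-I_N(I_N+1)-S_e(S_e+1)]$ for the triplet state ($S_e=1$) of $^{9}$Be ($I_N=3/2$), verifying the Landé interval rule $E_F - E_{F-1} \propto F$.

```python
from fractions import Fraction as Fr

# Levels E(F) = (A/2) * [F(F+1) - I(I+1) - S(S+1)], with the hyperfine
# constant A (which contains rho_T(0)) set to 1.  Exact arithmetic only.
I, S = Fr(3, 2), Fr(1)            # 9Be nucleus (I_N = 3/2), triplet S_e = 1

def K(F):                          # F(F+1) - I(I+1) - S(S+1)
    return F * (F + 1) - I * (I + 1) - S * (S + 1)

Fs = [I + S, I, abs(I - S)]        # allowed F = 5/2, 3/2, 1/2
levels = {F: K(F) / 2 for F in Fs}
print(levels)                      # {5/2: 3/2, 3/2: -1, 1/2: -5/2}

# Lande interval rule: E(F) - E(F-1) = A * F.
for F in (Fr(5, 2), Fr(3, 2)):
    assert levels[F] - levels[F - 1] == F
```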
--- abstract: 'Spectroscopic studies of electronic phenomena in graphene are reviewed. A variety of methods and techniques are surveyed, from quasiparticle spectroscopies (tunneling, photoemission) to methods probing density and current response (infrared optics, Raman) to scanning probe nanoscopy and ultrafast pump-probe experiments. Vast complimentary information derived from these investigations is shown to highlight unusual properties of Dirac quasiparticles and many-body interaction effects in the physics of graphene.' author: - 'D. N. Basov' - 'M. M. Fogler' - 'A. Lanzara' - Feng Wang - 'Yuanbo Zhang ()' title: '*Colloquium*: Graphene spectroscopy' --- [UTF8]{}[gbsn]{} Introduction {#sec:Introduction} ============ Scope of this review -------------------- ![image](Basov_Fig1){width="6.00in"} Graphene is a single atomic layer of $sp^2$-hybridized carbon atoms arranged in a honeycomb lattice. This two-dimensional (2D) allotrope of carbon is characterized by a number of superlative virtues [@Geim2009gsa], e.g., a record-high electronic mobility at ambient conditions [@Morozov2008gic], exceptional mechanical strength [@Lee2008mot], and thermal conductivity [@Balandin2008stc; @Ghosh2008eht] Remarkable properties of graphene have ignited tremendous interest that resulted in approximately 50,000 publications at the time of writing. A number of authoritative reviews[^1] have been written to survey this body of literature but no single review can any longer cover the entire topic. The purpose of this Colloquium is to overview specifically the spectroscopic experiments that have helped to shape the modern understanding of the physical properties of graphene. While selected topics in graphene spectroscopy have been discussed,[^2] here we aim to present a panoramic view of physical phenomena in graphene emerging from both spectroscopy and imaging (Fig. \[fig:1.2.1\]C). Spectroscopic observables can be formally categorized as either quasiparticle or current/density response functions. The former are fermionic, the latter are bosonic. The former is traditionally measured by photoemission and tunneling spectroscopy, while the latter can be investigated by, e.g., optical spectroscopy. Yet it may be possible to infer both quasiparticle and collective properties from the same type of measurements. For example, fine anomalies of the quasiparticle spectra seen in photoemission can give information about interactions between quasiparticles and collective modes (Sec. \[sec:e-ph-pl\]) Conversely, optical conductivity, which is a collective response, enables one to infer, with some approximation, the parameters of a quasiparticle band-structure (Secs. \[sec:direct\], \[sec:renormalization\], \[sec:Landau\], and \[sec:BLG\]). Finding such connections is facilitated by spectacular tunability of graphene. For example, with photoemission or tunneling techniques one can monitor the chemical potential $\mu$ of graphene as a function of the electron concentration $N$ and thereby extract the thermodynamic density of states. The same physical quantity can be measured by a very different technique, the scanning single-electron transistor microscopy. In our analysis of such complementary data we focus on what we believe are the most pressing topics in the physics of graphene, e.g., many-body effects. Additionally, our review covers information obtained by scanned probes and out-of-equilibrium methods that greatly expand available means to study graphene in space and time domains. 
Finally, we briefly address phenomena that arise when physical properties of graphene are altered via its environment and nanostructuring. Graphene morphology {#sec:forms} ------------------- Graphene can be isolated or fabricated in a number of different forms, which is an important consideration in spectroscopy. Effectiveness of a given spectroscopic tool depends on the accessibility of the sample surface to the incident radiation. The size of the accessible area must normally be larger than the wavelength of the incident beam unless near-field probes are employed (Sec. \[sec:plasmons\]) Mosaic structure and defects may affect momentum and energy resolution of the measurement. Graphene differs widely in terms of these parameters depending on preparation method. Mechanical exfoliation of graphite typically produces single, bi-, and multi-layer graphene (SLG, BLG, and MLG, respectively) of a few $\mu\mathrm{m}$ in size, although occasionally samples of dimensions of hundreds of $\mu\mathrm{m}$ can be obtained. Exfoliated samples can be transferred onto insulating substrates, after which they can be gated and subject to transport measurements. The sign and the magnitude of carrier concentration $N$ in gated samples can be precisely controlled over a wide range. The lower bound on $|N| \sim 10^{10}\,\mathrm{cm}^{-2}$ is set by inhomogeneities (Sec. \[sec:Inhomogeneities\]). The upper bound $|N| \sim 10^{13}\,\mathrm{cm}^{-2}$ is limited by the dielectric breakdown strength of the substrate, although still higher $|N|$ are achievable by electrolytic gating .[^3] The carrier concentration can also be controlled by doping [@Chen2008cis]. Morphologically, exfoliated samples are single crystals. They hold the record for transport mobility $\mu_{tr}$ although it varies much with the type of the substrate. Currently, high-quality hexagonal boron nitride (hBN) substrates enable one to achieve $\mu_{tr} \sim 10^5\,\mathrm{cm}^2/\mathrm{Vs}$, which is about an order of magnitude higher than what is typical for graphene on SiO$_2$ and corresponds to $\mu\mathrm{m}$-scale mean-free path [@Dean2010bns; @Mayorov2011msb]. The highest mobility $\sim 10^6\,\mathrm{cm}^2/\mathrm{Vs}$ is demonstrated by exfoliated graphene that is suspended off a substrate and subject to current annealing [@Du2008abt; @Bolotin2008uem; @Elias2011dcr]. Mechanical instabilities limit the size of suspended devices to $1$–$2\,\mu\mathrm{m}$ and restrict the maximum $|N|$ to a few times $10^{11}\,\mathrm{cm}^{-2}$. Large-area graphene can be made by another method: epitaxial growth on SiC by thermal desorption of Si [@vanBommel1975laa]. Epitaxial graphene may contain a single or many dozens of layers. The initial layer (layer number $L = 0$) has strong covalent bonds to the SiC substrate and is electronically different from the ideal SLG [@deHeer2007eg]. The morphology and electron properties of the subsequent layers, $L > 0$, depend on which SiC crystal face it is grown: the Si-terminated $(0001)$ face or the C-terminated $(000\bar{1})$ face.[^4] According to @deHeer2011laa, the Si-face grown graphene is orientationally ordered and has the Bernal stacking (as in graphite). The structure of the C-face epitaxial graphene is consistent with a stacking where every other layer is rotated by approximately $\pm 7^\circ$ with respect to a certain average orientation. The rotations inhibit interlayer tunneling so that the band structure of each layer is similar to SLG (see also Sec. \[sec:substrate\]). 
The morphology of the epitaxial graphene after annealing resembles a carpet draping over the staircase [@Emtsev2009tws]. It is characterized by domains a few $\mu\mathrm{m}$ wide and up to $50\,\mu\mathrm{m}$ long that mirror the underlying SiC terraces [@Emtsev2009tws; @deHeer2011laa]. The graphene/SiC interface is charged, inducing the $n$-type doping of about $10^{13}\,\mathrm{cm}^{-2}$ in the first ($L = 1$) graphene layer. Other layers have much smaller carrier concentration because of screening. The screening length of about one layer was measured by ultrafast infrared (IR) spectroscopy [@Sun2010smo]. The doping of the surface layers can be altered by depositing charged impurities [@Ohta2006ces; @Zhou2008mti]. Relatively low mobility $\mu_{tr} = 500$–$10,000\,\mathrm{cm}^2/\mathrm{Vs}$, the inhomogeneity of the doping profile, and the lack of its *in situ* control can be seen as drawbacks of (the first generation of) epitaxial compared to exfoliated graphene. On the other hand, the much larger surface area of the epitaxial graphene is advantageous for spectroscopic studies and applications [@deHeer2007eg]. An important recent breakthrough is epitaxial growth of graphene on high-quality hBN substrates [@Yang2013egs]. Graphene samples of strikingly large 30-in width [@Bae2010rtr] can be produced by the chemical vapor deposition (CVD) on metallic surfaces, e.g., Ru,Ni or Cu that act as catalysts. CVD graphene can be transferred to insulating substrates making it amenable to gating and transport experiments [@Kim2009lsp; @Bae2010rtr]. The microstructure of CVD graphene sensitively depends on the roughness of the metallic substrate and the growth conditions. Typical structural defects of CVD graphene are wrinkles and folds induced by transfer process and also by thermal expansion of graphene upon cooling. Grain boundaries are other common defects that have been directly imaged by micro-Raman [@Li2010gfw], transmission electron microscopy [@Huang2011gag], scanning tunneling microscopy [@Tapaszto2012mep;
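As a quick consistency check of the mobility figures quoted in this section (our arithmetic, not a calculation from the Colloquium), a transport mobility can be converted to a mean free path via the standard graphene relation $\ell = \hbar k_F \mu_{tr}/e$ with $k_F=\sqrt{\pi N}$, using the common convention $\sigma = (2e^2/h)k_F\ell$; the carrier concentration $N = 10^{12}\,\mathrm{cm}^{-2}$ below is an assumed representative value.

```python
import numpy as np

# Mean free path implied by mu_tr ~ 1e5 cm^2/Vs on hBN, via
# l = hbar * k_F * mu / e with k_F = sqrt(pi * N).
hbar, e = 1.054571817e-34, 1.602176634e-19   # SI units
mu = 1e5 * 1e-4                               # cm^2/Vs -> m^2/Vs
N = 1e12 * 1e4                                # cm^-2   -> m^-2

k_F = np.sqrt(np.pi * N)
l = hbar * k_F * mu / e
print(f"mean free path ~ {l * 1e6:.2f} um")   # ~1.2 um, i.e. um-scale
```

This reproduces the $\mu$m-scale mean free path stated above for hBN-supported exfoliated graphene.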
--- abstract: 'Stackelberg Games have been gaining importance in recent years due to the rise of Adversarial Machine Learning (AML). Within this context, a new paradigm must be faced: in classical game theory, intervening agents were humans whose decisions are generally discrete and low dimensional. In AML, decisions are made by algorithms and are usually continuous and high dimensional, e.g. choosing the weights of a neural network. As closed form solutions for Stackelberg games generally do not exist, it is mandatory to have efficient algorithms to search for numerical solutions. We study two different procedures for solving this type of game using gradient methods. We study time and space scalability of both approaches and discuss in which situation it is more appropriate to use each of them. Finally, we illustrate their use in an adversarial prediction problem.' author: - Roi Naveiro - David Ríos Insua title: 'Gradient Methods for Solving Stackelberg Games[^1]' --- Introduction {#sec:intro} ============ Over the last decade, the introduction of machine learning applications in numerous fields has grown tremendously. In particular, applications in security settings have grown substantially [@mcdaniel2016machine]. In this domain, it is frequently the case that the data distribution at application time is different from the training data distribution, thus violating one of the key assumptions in machine learning. This difference between training and test distributions generally comes from the presence of adaptive adversaries who deliberately manipulate data to avoid being detected. The field of Adversarial Machine Learning (AML) studies, among other things, how to guarantee the security of machine learning algorithms against adversarial perturbations [@biggio2018wild]. A possible approach consists of modelling the interaction between the learning algorithm and the adversary as a game in which one agent controls the predictive model parameters while the other manipulates input data. Several different game theoretic models of this problem have been proposed, as reviewed in [@voro2018]. In particular, [@bruckner2011stackelberg] view adversarial learning as a Stackelberg game in which a *leader* (she), the defender in the security jargon, makes her decision about the parameters of a learning model, and then the *follower* or attacker (he), after having observed the leader's decision, chooses an optimal data transformation. Mathematically, finding Nash equilibria of such Stackelberg games requires solving a bilevel optimization problem, which, in general, cannot be solved analytically [@sinha2018review], so numerical approaches are required. However, standard techniques are not able to deal with continuous and high dimensional decision spaces, such as those appearing in AML applications. In this paper, we propose two procedures to solve Stackelberg games in the new paradigm of AML and study their time and space scalability. In particular, one of the proposed solutions scales efficiently in time with the dimension of the decision space, at the cost of more memory requirements. The other scales well in space, but requires more time. The paper is organized as follows: in Section \[sec:stack\_games\] we define Stackelberg games. Section \[sec:solution\_method\] presents the proposed solution methods as well as a discussion of the scalability of both approaches. The proposed solutions are illustrated with an AML experiment in Section \[sec:experiments\]. 
Finally, we conclude and present some lines for future research. Stackelberg games {#sec:stack_games} ================= We consider a class of sequential games between two agents: the first one makes her decision, and then, after having observed the decision, the second one implements his response. These games have received various names in the literature including sequential Defend-Attack [@Brown:2006] or Stackelberg [@Gibbons:1992; @tambe2011security] games. As an example, consider adversarial prediction problems [@bruckner2011stackelberg]. In them, the first agent chooses the parameters of a certain predictive model; the second agent, after having observed such parameters, chooses an optimal data transformation to fool the first agent as much as possible, so as to obtain some benefit. As we focus on applications of Stackelberg games to AML, we restrict ourselves to the case in which the Defender ($D$) chooses her defense $\alpha \in \mathbb{R}^n$ and, then, the Attacker ($A$) chooses his attack $\beta \in \mathbb{R}^m$, after having observed $\alpha$. The corresponding bi-agent influence diagram [@BAIDS] is shown in Fig. \[fig:baid1\]. The dashed arc between nodes $D$ and $A$ reflects that the Defender choice is observed by the Attacker. The utility function of the Defender, $u_D(\alpha, \beta)$, depends on both her decision and the attacker's decision. Similarly, the Attacker's utility function has the form $u_A(\alpha, \beta)$. In this type of game, it is assumed that the Defender knows $u_A(\alpha, \beta)$. This assumption is known as the common knowledge hypothesis. [Fig. \[fig:baid1\]: bi-agent influence diagram with decision nodes $D$ and $A$ and utility nodes $U_D$ and $U_A$; arcs run from $D$ and $A$ into both utility nodes, and a dashed arc from $D$ to $A$ indicates that the Attacker observes the Defender's choice.] Mathematically, finding a Nash equilibrium of a Stackelberg game requires solving a bilevel optimization problem [@bard1991some]. The defender's utility is called *upper level* or *outer* objective function while the attacker's one is referred to as *lower level* or *inner* objective function. Similarly, the upper and lower level optimization problems correspond to the defender's and the attacker's problem, respectively. These problems are also referred to as outer and inner problems. It is generally assumed that the attacker will act rationally in the sense that he will choose an action that maximizes his utility [@french2000statistical], given the disclosed defender's decision $\alpha$. Assuming that there is a unique global maximum of the attacker's utility for each $\alpha$, and calling it $\beta^*(\alpha)$, a Stackelberg equilibrium is identified using backward induction: the defender has to choose $\alpha^*$ that maximizes her utility subject to the attacker's response $\beta^*(\alpha)$. Mathematically, the problem to be solved by the defender is $$\label{bilevel} \begin{aligned} & \operatorname*{arg\,max}_{\alpha} & & u_D[\alpha, \beta^*(\alpha)]\\ & \text{s.t.} & & \beta^*(\alpha) = \operatorname*{arg\,max}_{\beta} u_A(\alpha,\beta). \end{aligned}$$ The pair $\left( \alpha^{*}, \beta^* (\alpha^{*}) \right)$ is a Nash equilibrium and a sub-game perfect equilibrium [@Hargreaves:2004]. When the attacker problem has more than one global maximum, several types of equilibrium have been proposed. The two most important are the optimistic and the pessimistic solutions [@sinha2018review]. 
In an optimistic position, the defender expects the attacker to choose the optimal solution which gives the higher upper level utility. On the other hand, the pessimistic approach suggests that the defender should optimize for the worst case attacker solution. In this paper, we just deal with the case in which the inner utility has a unique global maximum. Solution Method {#sec:solution_method} =============== Bilevel optimization problems can rarely be solved analytically. Indeed even extremely simple instances of bilevel problems have been shown to be NP-hard, [@jeroslow1985polynomial]. Thus, numerical techniques are required. Several classical and evolutionary approaches have been proposed to solve , as reviewed by [@sinha2018review]. When the inner problem adheres to certain regularity conditions, it is possible to reduce the bilevel optimization problem to a single level one replacing the inner problem with its Karush-Kuhn-Tucker (KKT) conditions. Then, evolutionary techniques could be used to solve this single-level problem, thus making possible to relax the upper level requirements. As, in general, this single-level reduction is not feasible, several other approaches have been proposed, such as nested evolutionary algorithms or metamodeling-based methods. However, most of these approaches lack scalability: increasing the number of upper level variables produces an exponential increase on the number of lower level tasks required to be solved being thus impossible to apply these techniques to solve large scale bilevel problems as the ones appearing in the context of AML. In [@bruckner2011stackelberg] the authors face the problem of solving Stackelberg games in the AML context. However, they focus on a very particular type of game which can be reformulated as a quadratic program. In this paper, we provide more general procedures to solve Stackelberg games that are useful in the AML paradigm in which decision spaces are continuous and high dimensional. To this end, we focus on gradient ascent techniques to solve bilevel optimization problems. Let us assume that for any $\alpha$ the solution of the inner problem is unique. This solution defines an implicit function $\beta^*(\alpha)$. Thus, problem may be viewed solely in terms of the defender decisions $\alpha$. The underlying idea behind gradient ascent techniques is the following: given a defender decision $\alpha \in \mathbb{R}^n$ a direction along which the defender’s utility increases while maintaining feasibility must be found, and then, we move $\alpha$ in that direction. Thus, the
--- abstract: 'Do experiments based on superconducting loops segmented with Josephson junctions (e.g., flux qubits) show macroscopic quantum behavior in the sense of Schrödinger’s cat example? Various arguments based on microscopic and phenomenological models were recently adduced in this debate. We approach this problem by adapting –to flux qubits– the framework of large-scale quantum coherence, which was already successfully applied to spin ensembles and photonic systems. We show that contemporary experiments might show quantum coherence more than one hundred times larger than experiments in the classical regime. However, we argue that the often used demonstration of an avoided crossing in the energy spectrum is not sufficient to conclude about the presence of large-scale quantum coherence. Alternative, rigorous witnesses are proposed.' author: - 'Florian Fröwis$^1$, Benjamin Yadin$^{2,3}$, Nicolas Gisin$^1$' bibliography: - 'SQUID.bib' title: 'Insufficiency of avoided crossings for witnessing large-scale quantum coherence in flux qubits' --- Introduction {#sec:introduction} ============ The experimental demonstration of a massive object in a superposition of two well separated positions is generally considered as a positive test of quantum mechanics on large scales. Typical examples are interference of large molecules or proposed experiments with levitating nanospheres [@Arndt_Testing_2014]. These situations are often compared to Schrödinger’s thought experiment of a cat in a superposition of dead and alive [@Schrodinger_gegenwartige_1935]. It was argued that superconducting quantum interference devices (SQUIDs), that is, superconducting loops segmented with Josephson junctions, can exhibit a similar characteristic [@Leggett_Macroscopic_1980; @Leggett_Testing_2002]. In certain parameter regimes, the magnetic flux through the loop can be seen as an appropriate analog to the position of a massive object, where the capacitance of the circuit plays the role of the mass (see Fig. \[fig:schematics\]). The nonlinearity of the Josephson junction can lead to an effective double-well potential, in which the minima of the wells correspond to well separated (i.e., “macroscopically distinct”) flux states. There has been a debate about the precise implications of successfully demonstrating a coherent superposition between the two wells [@Leggett_Testing_2002; @Korsbakken_Electronic_2009; @Korsbakken_size_2010; @Leggett_Note_2016; @Bjork_size_2004; @Marquardt_Measuring_2008; @Nimmrichter_Macroscopicity_2013]. On the one hand, it was argued that recent experiments [@Friedman_Quantum_2000; @Wal_Quantum_2000; @Hime_Solid-State_2006] operate in the 100 nA or $\mu$A regime implying the presence of up to $10^{9}$ electrons. This, together with the experimental evidence of “macroscopic coherence”, should be seen as a genuine “macroscopic quantum effect” [@Leggett_Testing_2002]. On the other hand, arguments based on microscopic modeling suggest that effectively at most a few thousand electrons make the difference between the two states localized in each well [@Korsbakken_Electronic_2009; @Korsbakken_size_2010] (see [@Leggett_Note_2016] for a critique of this argument). Further contributions also assign an “effective size”, that is, a number that should, for example, reflect the number of electrons that effectively participate in the observed quantum effect [@Bjork_size_2004; @Marquardt_Measuring_2008; @Nimmrichter_Macroscopicity_2013] (see also table \[tab:results\]). 
While the differences in the precise frameworks of [@Leggett_Testing_2002; @Korsbakken_Electronic_2009; @Korsbakken_size_2010; @Leggett_Note_2016; @Bjork_size_2004; @Marquardt_Measuring_2008; @Nimmrichter_Macroscopicity_2013] are expected to lead to some deviation in the obtained results, it is astonishing by how much they vary. We find effective sizes ranging from two [@Marquardt_Measuring_2008] to 10$^{10}$ [@Leggett_Testing_2002]. In addition, none of the theory papers present a conclusive, testable witness for their claim. ![\[fig:schematics\] (a) Schematics of a superposition in the flux coordinate $\phi$ in both wells (thick: probability distribution; thin: potential). Experiments witnessing coherence between the two wells are sometimes considered to resemble a Schrödinger-cat situation if the inter-well distance and the system size (e.g., number of participating electrons) are large. (b) Most basic schematics of a superconducting ring with a single Josephson junction (cross; see also Sec. \[sec:squid-experiment-\]). In a certain parameter regime the magnetic flux $\phi$ through the ring is an appropriate choice for the relevant degree of freedom [@Friedman_Quantum_2000; @Wal_Quantum_2000; @Leggett_Testing_2002; @Caldeira_Introduction_2014]. The SQUID is then called flux qubit. The external flux $\phi_x$ controls the effective potential for $\phi$. In practice, the single Josephson junction is replaced by several junctions for an *in situ* control of some experimental parameters [@Friedman_Quantum_2000; @Wal_Quantum_2000; @Hime_Solid-State_2006].](DoubleWell2.pdf){width="\columnwidth"} In this paper, we argue that one important aspect of the large-scale quantum nature of these experiments is the amount and the spread of quantum coherence in the flux coordinate since it is a direct test of the superposition principle. As already successfully done for spins and photons [@Frowis_Measures_2012; @Frowis_Linking_2015; @Oudot_Two-mode_2015], we rescale the coherence of the target state by the coherence of a classical reference state (i.e., the state confined in a single well). In this way, we introduce an “effective size” that quantifies the scale of the quantum effect. The advantage of this approach is the applicability to experimental data. After a short review of the motivation and the theory of the framework in Sec. \[sec:conc-large-quant\], the phenomenological model of the flux qubit experiments is presented in Sec. \[sec:squid-experiment-\]. Identifying the flux as a relevant observable, we show that indeed the ideally generated target states show large quantum coherence. With the parameters from experiment [@Friedman_Quantum_2000], we find that the quantum coherence of the target states in flux basis is almost two hundred times larger than of a classical reference state (i.e., the ground state of a single potential well). Ideal cat states could even reach numbers 1000 times larger than that of the reference state. The experimental proof for the generation of the eigenstates was argued to be the avoided crossing in the energy spectrum when tuning the imbalance of the two well minima. In Sec. \[sec:cert-large-quant\], we show that this evidence is not sufficient to conclude the presence of large-scale quantum coherence. We do so by presenting a simple dephasing model in the flux basis that leads to a drastic reduction of the quantum coherence (to the level of the classical reference state) while keeping the feature of the avoided crossing. 
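A back-of-the-envelope sketch shows how an effective size of order $10^2$ can arise from the spread of the flux coordinate (toy Gaussian wells with illustrative parameters, not the device potential of the experiments discussed here): for two wells separated by $\Delta\phi$ with single-well width $\sigma$, the balanced superposition has variance $\approx \sigma^2 + \Delta\phi^2/4$, so the ratio to the reference-state variance is $\approx 1+\Delta\phi^2/(4\sigma^2)$.

```python
import numpy as np

sigma, delta = 1.0, 30.0                 # arbitrary flux units (toy values)
phi = np.linspace(-40, 70, 200001)
dphi = phi[1] - phi[0]

g0 = np.exp(-phi**2 / (4 * sigma**2))             # |well 0> amplitude
g1 = np.exp(-(phi - delta)**2 / (4 * sigma**2))   # |well 1> amplitude
cat = g0 + g1
cat /= np.sqrt(np.sum(cat**2) * dphi)             # normalized cat state
ref = g0 / np.sqrt(np.sum(g0**2) * dphi)          # classical reference state

def var(psi):
    p = psi**2 * dphi                             # probability weights
    mean = np.sum(phi * p)
    return np.sum((phi - mean)**2 * p)

print(var(ref), var(cat), var(cat) / var(ref))    # ratio ~ 226 here
```

The ratio of about 226 for these toy numbers merely illustrates how a factor of order $10^2$ emerges; the values quoted in this paper follow from the actual double-well potential and measured device parameters.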
The framework of large-scale quantum coherence provides testable witnesses, which often turn out to be feasible in practice [@Frowis_Lower_2017]. In Sec. \[sec:suff-exper-test\], we discuss the possibility of lower-bounding large quantum coherence by witnessing a strong response of the system to flux dephasing. The paper is summarized and discussed in Sec. \[sec:discussion\]. Framework of large quantum coherence {#sec:conc-large-quant} ==================================== One of the simplest and most fundamental questions in quantum mechanics is the validity of the superposition principle on all scales. There exist several alternative models that, for example, prevent a persistent superposition of two significantly distinct positions for massive particles [@Bassi_Models_2013]. Given an observable $X$ with a natural macroscopic limit (such as position or magnetization), quantum coherence between two far-distant parts of the spectrum arguably challenges our classical intuition more than quantum coherence constraint to a small (microscopic) regime [@Schrodinger_gegenwartige_1935]. For pure states, a superposition between far-distant spectral parts of $X$ implies that the wave function has a large spread. The simplest way to measure the spread of a state $| \psi \rangle $ is the variance $\mathrm{var}_{\psi}(X)$. For mixed states, the variance is no longer a faithful measure of coherence since it does not distinguish between coherent superposition and incoherent mixture. The convex roof construction is a well-known technique to overcome the shortcomings of the variance. Given a mixed state $\rho$, one considers the infinite set of all pure state decompositions (PSD) $\rho = \sum_i p_i \left| \psi_i \right\rangle\!\left\langle \psi_i\right| $ and minimizes the average variance $$\label{eq:1} \mathcal{I}(\rho,X) = \min_{\mathrm{PSD}}\sum_i p_i \mathrm{var}_{\psi_i}(X).$$ Since the incoherent part is eliminated with the convex roof construction, we call $\mathcal{I}(\rho,X)$ the quantum coherence of $\rho
--- abstract: 'Modern face recognition systems leverage datasets containing images of hundreds of thousands of *specific* individuals’ faces to train deep convolutional neural networks to learn an embedding space that maps an *arbitrary* individual’s face to a vector representation of their identity. The performance of a face recognition system in face verification (1:1) and face identification (1:N) tasks is directly related to the ability of an embedding space to discriminate between identities. Recently, there has been significant public scrutiny into the source and privacy implications of large-scale face recognition training datasets such as MS-Celeb-1M and MegaFace, as many people are uncomfortable with their face being used to train dual-use technologies that can enable mass surveillance. However, the impact of an individual’s inclusion in training data on a derived system’s ability to recognize them has not previously been studied. In this work, we audit ArcFace, a state-of-the-art, open source face recognition system, in a large-scale face identification experiment with more than one million distractor images. We find a Rank-1 face identification accuracy of 79.71% for individuals present in the model’s training data and an accuracy of 75.73% for those not present. This modest difference in accuracy demonstrates that face recognition systems using deep learning work better for individuals they are trained on, which has serious privacy implications when one considers all major open source face recognition training datasets do not obtain informed consent from individuals during their collection.' author: - Chris Dulhanty - Alexander Wong bibliography: - 'AIES-bib.bib' title: Investigating the Impact of Inclusion in Face Recognition Training Data on Individual Face Identification --- &lt;ccs2012&gt; &lt;concept&gt; &lt;concept\_id&gt;10002978.10003029.10003032&lt;/concept\_id&gt; &lt;concept\_desc&gt;Security and privacy Social aspects of security and privacy&lt;/concept\_desc&gt; &lt;concept\_significance&gt;500&lt;/concept\_significance&gt; &lt;/concept&gt; &lt;concept&gt; &lt;concept\_id&gt;10010147.10010178.10010224.10010225.10010231&lt;/concept\_id&gt; &lt;concept\_desc&gt;Computing methodologies Visual content-based indexing and retrieval&lt;/concept\_desc&gt; &lt;concept\_significance&gt;500&lt;/concept\_significance&gt; &lt;/concept&gt; &lt;concept&gt; &lt;concept\_id&gt;10010520.10010521.10010542.10010294&lt;/concept\_id&gt; &lt;concept\_desc&gt;Computer systems organization Neural networks&lt;/concept\_desc&gt; &lt;concept\_significance&gt;300&lt;/concept\_significance&gt; &lt;/concept&gt; &lt;concept&gt; &lt;concept\_id&gt;10003456.10003462.10003487&lt;/concept\_id&gt; &lt;concept\_desc&gt;Social and professional topics Surveillance&lt;/concept\_desc&gt; &lt;concept\_significance&gt;100&lt;/concept\_significance&gt; &lt;/concept&gt; &lt;/ccs2012&gt; Introduction ============ Face recognition systems using Deep Convolutional Neural Networks (DCNNs) depend on the collection of large image datasets containing thousands of sets of *specific* individuals’ faces for training. Using this data, DCNNs learn a set of parameters that can map an *arbitrary* individual’s face to a feature representation, or *faceprint*, that has small intra-class and large inter-class variability. 
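To make the identification protocol concrete before continuing, the sketch below mimics Rank-1 face identification in such an embedding space using synthetic vectors (ArcFace-style systems typically use 512-dimensional embeddings compared by cosine similarity; everything here is a random stand-in, not the audit performed in this paper): a probe is correctly identified when its nearest neighbour among the gallery plus distractors is its own enrolled image.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_ids, n_distractors = 512, 100, 10000

def unit(v):                      # embeddings live on the unit sphere
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

ids = unit(rng.normal(size=(n_ids, d)))                    # one "identity" each
gallery = unit(ids + 0.07 * rng.normal(size=(n_ids, d)))   # enrolled images
probes  = unit(ids + 0.07 * rng.normal(size=(n_ids, d)))   # query images
distractors = unit(rng.normal(size=(n_distractors, d)))

search_set = np.vstack([gallery, distractors])
sims = probes @ search_set.T                # cosine similarity matrix
rank1 = np.argmax(sims, axis=1) == np.arange(n_ids)
print("Rank-1 accuracy:", rank1.mean())     # high on this easy synthetic data
```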
The ability of a face recognition system to distinguish between identities within this embedding space depends on the size and diversity of its training data, along with its model capacity and underlying algorithms. Face recognition systems have benefited from the enabling power of Internet in the collection of large-scale image datasets and from hardware improvements in enabling efficient training of large models. Recently, increased attention to face recognition by academia, industry and government has brought new researchers, ideas and funding to the field, leading to performance improvements on benchmark tasks Labelled Faces in the Wild (LFW) [@LFWTech] and MegaFace [@nech2017level]. Consequently, face recognition systems are now being integrated into consumer and industrial electronic devices and offered as application programming interfaces (APIs) by providers such as Amazon, Microsoft, IBM, Megvii and Kairos. However, along with improved performance has come increased public discourse on the ethics of face recognition systems and their development. Algorithmic auditing of commercial face analysis applications has uncovered disparate performance for intersectional groups across several tasks. Poor performance for darker skinned females by commercial face analysis APIs has been reported by Buolamwini, Gebru and Raji [@buolamwini2018gender; @Raji2019], as has lower accuracy in face identification by commercial systems with respect to lower (darker) skin reflectance by researchers at the US Department of Homeland Security [@cook2019demographic]. As bias in training data begets bias in model performance, efforts to create more diverse datasets for these tasks have resulted. IBM’s Diversity in Faces dataset [@merler2019diversity], released in January 2019, is a direct response to this body of research. Using ten established coding schemes from scientific literature, researchers annotated one million face images in an effort to advance the study of fairness and accuracy in face recognition. However, this dataset has seen public scrutiny from a different, but equally notable perspective. A March 2019 investigation by NBC News into the origins of the dataset brought to the public conversation the issue of informed consent in large-scale academic image datasets, as IBM leveraged images from Flickr with a Creative Commons Licence without notifying content owners of their use [@solon_2019]. To rationalize the collection of large-scale image datasets without explicit consent of individuals, some computer vision researchers appeal to the non-commercial nature of their work. However, work by Harvey *et al.* at MegaPixels have found that authors’ stated limitations on dataset use do not translate to real-world restrictions [@megapixels]. In the case of Microsoft’s MS-Celeb-1M dataset, authors included an explicit “non-commercial research purpose only" clause with the dataset, which was the largest publicly-available face recognition dataset at the time. However, as the dataset has been cited in published works by the research arms of many commercial entities, findings cannot easily be isolated from improvements in product offerings. As a direct result of MegaPixel’s work on the ethics, origins, and privacy implications of face recognition datasets, MS-Celeb-1M [@guo2016ms], Stanford’s Brainwash dataset [@stewart2016end] and Duke’s Multi-Target, Multi-Camera dataset [@ristani2016MTMC] were removed from their authors’ websites in June 2019. 
However, in the case of MS-Celeb-1M, the data remains accessible via torrents, derived datasets and other hosts [@megapixels]. In addition to issues of bias and informed consent in data collection, the general use of face recognition systems by commercial and government agencies has been raised by civil rights groups and research centers, as there is no oversight for its deployment in civil society [@aclu; @whittaker2018ai]. For these and other reasons, multiple cities in the United States have banned the use of face recognition systems for law enforcement purposes [@conger_2019; @wu_2019; @ravani_2019]. Many people are concerned with their identify being used to train the dual-use technology that is face recognition. With reports of face recognition being used by law enforcement entities to identify protesters in London [@bowcott_2018] and Hong Kong [@mozur_2019], and measures enacted to ban face masks in the latter location [@yu_2019], there is merit in understanding the impact of one’s inclusion in the training data that fuels the development of these systems. In an effort to inform the conversation about informed consent and privacy in the domain of face recognition, we conduct experiments on a state-of-the-art system. The goal of this work is to determine the impact of an individual’s inclusion in face recognition training data on a derived system’s ability to recognize them. To the best of the authors’ knowledge, this is the first paper to investigate this relationship. The remainder of this paper is organized in the following manner; section two outlines ethical considerations for some decisions in the design and implementation of this work, section three provides background for the taxonomy, algorithms and data used in face recognition research, section four outlines the design of experiments used to address the research question, section five presents our results and adds discussion and the paper concludes in section six. Ethical Considerations ====================== Intent ------ The intent of this work is to investigate the performance of face recognition systems with respect to inclusion in training datasets. While one interpretation of this work may be to motivate efforts to mitigate demographic bias in the development of face recognition systems, it should be noted that increasing the performance of face recognition systems in any context can increase their ability to be used for oppressive purposes. In addition, due to historical societal injustices against marginalized populations and racially-biased police practices in the United States, a disproportionate number of African Americans and Hispanics are present in mugshot databases,
--- abstract: 'Correlation effects are important for making predictions in the $\delta $ phase of Pu. Using a realistic treatment of the intra–atomic Coulomb correlations we address the long-standing problem of computing ground state properties. The equilibrium volume is obtained in good agreement with experiment when taking into account Hubbard $U$ of the order 4 eV. For this $U,$ the calculation predicts a 5f$^{5}$ atomic–like configuration with L=5, S=5/2, and J=5/2 and shows a nearly complete compensation between spin and orbital magnetic moments.' author: - 'S. Y. Savrasov$^{\ast }$ and G. Kotliar$^{+}$' - '$^{\ast }$Max-Planck-Institut für Festkörperforschung, Heisenbergstr. 1,  70569 Stuttgart, Germany.' - | $^{+}$Department of Physics and Astronomy and Center for Condensed Matter Theory\ Rutgers University, Piscataway, NJ 08854–8019 date: July 1999 title: 'Ground State Theory of $\delta $–Pu' --- Metallic plutonium is a key material in the energy industry and understanding its physical properties is of fundamental and technological interest [@Pu]. Despite intensive investigations [@PuBook], its extremely rich phase diagram with six crystal structures as well as its unique magnetic properties are not well understood. It is therefore of great interest to study the ground state of Pu by modern theoretical methods using first principles electronic structure calculations, which take into account the possible strong correlation among the f electrons. Density functional theory [@DFT] in its local density or generalized gradient approximations (LDA or GGA) is a well-established tool for dealing with such problems. This theory does an excellent job of predicting ground-state properties of an enormous class of materials. However, when applied to Pu [@Pucalc1; @Pucalc2], it runs into serious problems. Calculations of the high-temperature fcc $\delta$ phase have given an equilibrium atomic volume up to 35% lower than experiment [@Pucalc1]. This is the largest discrepancy ever known in density functional based calculations and points to a fundamental failure of existing approximations to the exchange-correlation energy functional. Many physical properties of this phase are puzzling: large values of the linear term in the specific heat coefficient and of the electrical resistivity are reminiscent of the physical properties of strongly-correlated heavy-fermion systems. On the other hand, the magnetic susceptibility is small and weakly temperature dependent [@fournier]. Moreover, early LDA calculations [@Pucalc2] predicted $\delta $–Pu to be magnetic with a total moment of 2.1 Bohr magnetons in disagreement with experiments. The reason for these difficulties has been understood for a long time: Pu is located on the border between light actinides with itinerant 5f–electrons and the heavy actinides with localized 5f electrons[@Johannson] . Near this localization-delocalization boundary, the large intra-atomic Coulomb interaction as well as the itineracy of the f electrons have to be considered on the same footing, and it is expected that correlations must be responsible for the anomalous properties. The parameter governing the importance of correlations in electronic structure calculations is the ratio between effective Hubbard interaction $U$ and the bandwidth $W.$ When the distance between atoms is small, the correlation effects may be not important, since the hybridization, and consequently the bandwidth become large. 
The low-temperature $\alpha $ phase of Pu has an atomic volume which is 25% smaller than the volume of $\delta $ phase. To the extent that the complicated monoclinic structure of the $\alpha $ phase can be modelled by the simplified fcc lattice, it becomes clear that the LDA or GGA calculations which ignore the large effective $U$ converge to the low volume $\alpha $ phase (for which $U/W < 1$). When volume is increased, this ratio is turned around, and LDA loses its predictive power. This results in the long-standing problem of accurate prediction of the volume of $\delta $–Pu. In the present work it will be shown that a proper treatment of Coulomb correlations allows us to compute the equilibrium atomic volume of $\delta $–Pu in good agreement with experiment. Moreover, our calculations suggest that there is a nearly complete compensation between the spin and the orbital contributions to the total magnetic moment, which is consistent with experiment. Thus the strong correlation effects in $\delta $–Pu are not manifest in the static magnetic properties. To incorporate the effects of correlations we use the LDA + U approach of Anisimov and coworkers [@LDA+U]. This approach recognizes that the failure of LDA is related to the fact that it omits the Hubbard-like interaction among electrons in the same shell, irrespectively of their spin orientation. A new orbital–dependent correction to the LDA functional was introduced to describe this effect. In its most recent, rotationally invariant representation, the correction to the LDA functional has the following form [@LDA+Urecent]: $$\Delta E[n]=\frac{1}{2}\sum_{\{\gamma \}}(U_{\gamma _{1}\gamma _{2}\gamma _{3}\gamma _{4}}-U_{\gamma _{1}\gamma _{2}\gamma _{4}\gamma _{3}})n_{\gamma _{1}\gamma _{2}}^{c}n_{\gamma _{3}\gamma _{4}}^{c}-E_{dc} \label{e2}$$ where $n_{\gamma _{1}\gamma _{2}}^{c}$ is the occupancy matrix for the correlated orbital (d or f), and $\gamma $ stands for the combined spin ($s$) and azimuthal quantum number ($m$) indexes. The electron–electron correlation matrix $U_{\gamma _{1}\gamma _{2}\gamma _{3}\gamma _{4}}=\left\langle m_{1}m_{3}\left| v_{C}\right| m_{2}m_{4}\right\rangle \delta _{s_{1}s_{2}}\delta _{s_{3}s_{4}}$ can be expressed via Slater integrals $F^{(i)} $, $ i=0,2,4,6$ in the standard manner [@LDA+Urecent]. The term $E_{dc}$ accounts for the double counting effects. This scheme, known as the ”LDA+U method”, gives substantial improvements over the LDA in many cases[@LDA+Ureview]. The value of the $U$ matrix is an input which can be obtained from constrained LDA calculations [@CLDA], or just taken from experiment. The philosophy of this approach is that the delocalized s p d electrons are well described by the LDA while the energetics of the more localized f electrons require the explicit introduction of the Hubbard U. In the spirit of this method, in this work we will treat the s p d electrons by the generalized gradient approximation (GGA) [@GGA] which is believed to be more accurate than the LDA. Our implementation of the GGA+U functional is based on the localized–orbital representation provided by the linear–muffin–tin–orbital (LMTO) method for electronic structure calculations [@OKA]. It is important to include spin–orbit coupling effects which are not negligible for 5f electrons of Pu. Our calculations include non-spherical terms of the charge density and potential both within the atomic spheres and in the interstitial region [@Sav]. 
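For orientation, the bookkeeping in Eq. (\[e2\]) can be illustrated in the density-density limit: for a diagonal occupancy matrix the rotationally invariant correction reduces to $\frac12\sum_{\gamma\neq\gamma'}\big(U - J\,\delta_{s_{\gamma}s_{\gamma'}}\big)\,n_{\gamma}n_{\gamma'} - E_{dc}$. The sketch below is a schematic with single effective $U$, $J$ parameters in place of the full Slater-integral tensor and with the fully-localized-limit double counting as an assumed choice, not the implementation of this paper; it checks that the net correction vanishes for integer occupations, here a caricature of the 5f$^{5}$ configuration.

```python
import numpy as np

U, J = 4.0, 0.5                      # eV, illustrative values
norb = 7                             # f shell: 7 orbitals per spin

# Caricature of an f^5 ion: five spin-up electrons, none spin-down.
n_up = np.array([1, 1, 1, 1, 1, 0, 0], dtype=float)
n_dn = np.zeros(norb)

def e_int(n_up, n_dn, U, J):
    """0.5 * sum_{gamma != gamma'} (U - J*delta_{ss'}) n_gamma n_gamma'."""
    N = n_up.sum() + n_dn.sum()
    sum_sq = (n_up**2 + n_dn**2).sum()
    same_spin = (n_up.sum()**2 - (n_up**2).sum()) \
              + (n_dn.sum()**2 - (n_dn**2).sum())
    return 0.5 * U * (N**2 - sum_sq) - 0.5 * J * same_spin

# Assumed fully-localized-limit (FLL) double counting.
N, N_up, N_dn = n_up.sum() + n_dn.sum(), n_up.sum(), n_dn.sum()
E_dc = 0.5 * U * N * (N - 1) \
     - 0.5 * J * (N_up * (N_up - 1) + N_dn * (N_dn - 1))

print(e_int(n_up, n_dn, U, J) - E_dc)   # -> 0.0 for integer occupations
```

That the correction vanishes for integer occupations is the intended behaviour of the FLL double counting: the orbital-dependent term only acts when occupancies deviate from the atomic limit.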
All low-lying semi-core states are treated together with the valence states in a common Hamiltonian matrix in order to avoid unnecessary uncertainties. These calculations are spin polarized and assume the existence of long–range magnetic order. For simplicity, the magnetic order is taken to be ferromagnetic [@Ferro]. We now report our results on the calculated equilibrium volume. To analyze the importance of the correlation effects, our calculations have been performed for several different values of $U$ varying from 0 to 4 eV. For $U$=4 eV we use the standard choice of Slater integrals: $F^{(2)}$=10 eV, $F^{(4)}$=7 eV, and $F^{(6)}$=5 eV [@Pu]. For other values of $U$ we have scaled these integrals proportionally. For each set of $F$’s a full self–consistent cycle minimizing the LDA/GGA+U functionals has been performed for a number of atomic volumes. We calculated the total energy $E$ as a function of both $V$ and $U$. For fixed $U,$ the theoretical equilibrium volume, $V_{calc},$ is given by the minimum of $E(V)$. Fig. 1 shows the dependence of the calculated–to–experimental equilibrium volume ratio $V_{calc}/V_{exp}$ as a function of the input $U.$ It is clearly seen that the $U$=0 result (LDA) predicts an equilibrium volume which is 38% off the experimental result, and the use of the GGA gives only a slightly improved result ($V_{calc}/V_{exp}$=0.66). On the other hand, switching on a very large repulsion between the 5f electrons obviously leads to an overestimate of the inter-atomic distances. An optimal $U$ deduced from this analysis is found to be close to 4 eV when using the GGA expressions for the exchange and correlation. This estimate of the intra-atomic correlation energy is in excellent agreement with the published conventional data [@PuU]: the value of $U$ deduced from total energy differences was found to be 4.5 eV, and atomic spectral data give a similar value, close to 4 eV. Thus, it is demonstrated how important it is to properly treat Coulomb correlations in predicting the equilibrium properties of this actinide. We now discuss the calculated GGA+
{ "pile_set_name": "ArXiv" }
null
--- abstract: 'Two new approaches to solving first-order quasilinear elliptic systems of PDEs in many dimensions are proposed. The first method rests on an analysis of multimode solutions expressible in terms of Riemann invariants, exploiting links between two techniques, the symmetry reduction method and the generalized method of characteristics. A variant of the conditional symmetry method for constructing this type of solution is proposed. A specific feature of that approach is an algebraic-geometric point of view, which allows the introduction of specific first-order side conditions consistent with the original system of PDEs, leading to a generalization of the Riemann invariant method for solving elliptic homogeneous systems of PDEs. A further generalization of the Riemann invariants method to the case of inhomogeneous systems, based on the introduction of specific rotation matrices, enabled us to weaken the integrability condition. It allows us to establish the connection between the structure of the set of integral elements and the possibility of the construction of specific classes of simple mode solutions. These theoretical considerations are illustrated by the examples of an ideal plastic flow in its elliptic region and a system describing a nonlinear interaction of waves and particles. Several new classes of solutions have been obtained in explicit form, including the general integral for the latter system of equations.' author: - | A.M. Grundland[^1],\ Centre de Recherches Mathématiques, Université de Montréal,\ C.P. 6128, Succ. Centre-ville, Montréal, (QC) H3C 3J7, Canada\ and Département de mathématiques et informatique, Université du Québec,\ Trois-Rivières, (QC) G9A 5H7, Canada\ - | V. Lamothe[^2],\ Département de Mathématiques et Statistique, Université de Montréal,\ C.P. 6128, Succ. Centre-ville, Montréal, (QC) H3C 3J7, Canada\ title: 'Multimode solutions of first-order quasilinear systems obtained from Riemann invariants. Part I. ' --- [Running Title: Multimode solutions of quasilinear systems. Part I.\ PACS numbers: 02.30.Jr; Secondary 02.70.-c\ Keywords: symmetry reduction method, generalized method of characteristics, Riemann invariants, multimode solutions]{} Introduction {#intro} ============ Riemann waves represent a very important class of solutions of nonlinear first-order systems of partial differential equations (PDEs). They are ubiquitous in the differential equations of mathematical physics, since they appear in all multidimensional hyperbolic systems and constitute their elementary solutions. Their characteristic feature is that, in most cases, they are expressible only in implicit form. For a homogeneous hyperbolic quasilinear system of first-order PDEs, $$\mathcal{A}^{i\,\mu}{}_{\alpha}(u)\,\frac{\partial u^{\alpha}}{\partial x^{i}}=0,\qquad i=1,\ldots,p,\quad \alpha=1,\ldots,q,\quad \mu=1,\ldots,m, \label{eq:1}$$ (where $\mathcal{A}^1,\ldots,\mathcal{A}^p$ are $q\times m$ matrix functions of an unknown function $u$ and we adopt the convention that repeated indices are summed unless one of them is in a bracket), a Riemann wave solution is defined by the equation $u=f(r(x,u))$, where $f:{\mathbb{R}}{\rightarrow}{\mathbb{R}}^q$, and the function $r(x,u)={\lambda}_i(u)x^i$ is called the Riemann invariant associated with the vector $\lambda$ satisfying the equation $\ker{\left( {\lambda}_i\mathcal{A}^i(u) \right)}\neq 0$. These solutions have rank at most equal to one. They are building blocks for constructing more general types of solutions describing nonlinear superpositions of many waves ($k$-waves), which are very interesting from the physical point of view.
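As a concrete illustration of the implicit form $u=f(r(x,u))$, the sketch below checks numerically that for the scalar prototype $u_t+\lambda(u)u_x=0$ with $\lambda(u)=u$ (the inviscid Burgers equation) the profile defined implicitly by $u=f(x-\lambda(u)t)$ is indeed a solution before the gradient catastrophe occurs; the profile $f$ and the sample point are arbitrary choices made for the demonstration.

```python
import numpy as np
from scipy.optimize import brentq

# Scalar prototype u_t + lambda(u) u_x = 0 with lambda(u) = u (inviscid
# Burgers); the Riemann wave is defined implicitly by u = f(x - lambda(u) t).
f = lambda r: 0.5 * np.tanh(r)            # hypothetical initial wave profile
lam = lambda u: u

def u_implicit(x, t):
    # Solve u = f(x - lam(u) t); the root is unique before wave breaking
    return brentq(lambda u: u - f(x - lam(u) * t), -1.0, 1.0)

# Finite-difference check that u_t + lam(u) u_x vanishes at a sample point
x0, t0, h = 0.3, 0.5, 1e-5
u = u_implicit(x0, t0)
u_t = (u_implicit(x0, t0 + h) - u_implicit(x0, t0 - h)) / (2 * h)
u_x = (u_implicit(x0 + h, t0) - u_implicit(x0 - h, t0)) / (2 * h)
print(u_t + lam(u) * u_x)                 # ~0 up to rounding error
```

Constructing the nonlinear superpositions of several such waves is the genuinely harder task discussed next.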
Until now, the only way to approach this task was through the generalized method of characteristics (GMC) [@Burnat:1972; @CourantHilbert:1962; @Jeffrey:1976; @JohnKlainerman:1984; @Perad:1985; @Rozdestvenski:1983] and more recently through the conditional symmetry method (CSM) [@AblowitzClarkson; @ConteGrundHuard:2009; @Fushchych:1991; @GrundHuard:2006; @GrundHuard:2007; @OlverVorobev1995]. The GMC relies on treating Riemann invariants as new dependent variables (which remain constant along the appropriate characteristic curves of the initial system (\[eq:1\]) and constitute a set of invariants of the Abelian algebra of vector fields $X_a=\xi_a^i(u){\partial}_{x^i}$ with ${\lambda}_i^a\xi_a^i=0$ for $1\leq a\leq k<p$). This leads to the reduction of the dimension of the problem. The most important theoretical results obtained with the use of the GMC or CSM [@GrundHuard:2007] include the finding of necessary and sufficient conditions for the existence of Riemann $k$-waves in multidimensional systems. It was shown [@Perad:1985] that these solutions depend on $k$ arbitrary functions of one variable. Some criteria were also found [@Perad:1985] for determining the elastic or nonelastic character of the superposition of Riemann waves described by hyperbolic systems, which is particularly useful in physical applications. In applications to fluid dynamics and nonlinear field theory, many new and interesting results were obtained [@Boillat:1965; @Burnat:1972; @DubrovinNovikov:1983; @FerapontovKhus:2004:1; @FerapontovKhus:2004:2; @JohnKlainerman:1984; @Mises:1958; @Rozdestvenski:1983; @Whitham:1974; @Zakharov:1998]. Both the GMC and CSM methods, like all other techniques for solving PDEs, have their limitations. This fact has motivated the authors to search for means of constructing larger classes of multiple wave solutions expressible in terms of Riemann invariants by allowing the introduction of complex integral elements in place of the real simple integral elements (with which the solutions of hyperbolic systems are built [@Perad:1985]). This idea originated from the work of S. Sobolev [@Sobolev:1934], in which he solved the wave equation by using the associated complex wave vectors. We are particularly interested in the construction of nonlinear superpositions of elementary simple mode solutions, and the proposed analysis indicates that the language of conditional symmetries is an effective tool for this purpose. This approach is applied to the nonstationary irrotational flow of an ideal plastic material in its elliptic region. A further extension of the proposed method to the case of inhomogeneous systems is proposed in order to be applicable in either elliptic or hyperbolic regions. This allows for a wider range of physical applications. The approach is based on the use of rotation matrices which obey certain algebraic conditions and allow us to write the reduced system in terms of Riemann invariants, in the sense that each derivative of the dependent variables is equal to an algebraic expression (see equation (\[eq:ms:12\])). We discuss in detail the sufficient conditions for the existence of multimode solutions. This approach is applied to a system describing the propagation of shock wave intensity in the nonlinear interaction of waves and particles. The general integral of this system has been constructed in explicit form, depending on two arbitrary functions of one variable. #### The organization of this paper is as follows.
Section \[sec:2\] contains a detailed account of the generalized method of characteristics for first-order quasilinear systems of PDEs in many dimensions based on complex characteristic elements. In Section \[sec:3\] we formulate the problem of multimode solutions expressible in terms of Riemann invariants by means of a group theoretical approach. This allows us to formulate the necessary and sufficient conditions for constructing these types of solutions. In Section \[sec:4\], the usefulness of the method developed in Section \[sec:3\] is illustrated by an example of ideal plasticity in $(2+1)$ dimensions, in which we find several bounded solutions. Moreover, we have drawn extrusion dies and the flow inside them (limiting ourselves to the region where the gradient catastrophe does not occur). Sections \[sec:5\] and \[sec:6\] comprise a new approach to solving inhomogeneous elliptic systems in order to obtain simple wave and simple mode solutions. In Section \[sec:7\], we show an example of a simple mode solution obtained using the method presented in Section \[sec:6\]. The technique is applied to a system describing the propagation of shock waves for the nonlinear interaction of waves and particles. We obtain its general solution. Section \[sec:8\] summarizes the results obtained and contains some suggestions regarding further developments. The method of characteristics for complex integral elements. {#sec:2} ============================================================ The methodological approach assumed in this section is based on the generalized method of characteristics, which has been extensively developed (*e.g. in* [@Burnat:1972; @DoyleGrundland:1996; @Grundland:1974; @GrundlandTafel:1996; @Perad:1985] and references therein) for multidimensional homogeneous and inhomogeneous systems of first-order PDEs. A specific feature of that approach is an algebraic and geometric point of view. An algebraization of systems of PDEs was made possible by representing the general integral elements as linear combinations of some special elements associated with those vector fields which generate characteristic curves in the spaces of independent variables $X$ and dependent variables $U$, respectively (see [@GrundlandZ
{ "pile_set_name": "ArXiv" }
null
--- abstract: 'We discuss some fundamental concerns regarding the recent proposal of Dimopoulos and Giudice for dynamically aligning the soft masses of the sfermions in the minimal supersymmetric standard model (MSSM) with the corresponding fermion masses to suppress flavor changing neutral currents. We show that the phenomenologically-favored presence of right-handed neutrinos in the theory, even if only at very high scales, generically disaligns the slepton mass matrices. Further suppression is then needed to meet the current upper bound on the rate for $\mu \to e\gamma$. Planned improvements in the search for $\mu \to e\gamma$ should easily detect this rare mode. (With improved sensitivity $\mu\to 3e$ may also be seen.) By measuring the helicity of the amplitude for $\mu \to e\gamma$ such experiments could distinguish between unified and non-unified models at very high energies; by inserting the various MSSM parameters as they become available, the mixing in the leptonic Yukawa couplings can be extracted; and by combining the results with those of various neutrino experiments some information about the right-handed neutrino Majorana matrices can also be gained.' ---

December 1995 UND-HEP-95-US01\ RU-95-72\ hep-ph/9512354

[**Disoriented Sleptons**]{}

Riccardo Rattazzi[^1]\ [*Department of Physics and Astronomy\ Rutgers University\ Piscataway, NJ 08855*]{}

Uri Sarid[^2]\ [*Department of Physics\ University of Notre Dame\ Notre Dame, IN 46556*]{}

The most promising candidates for a fundamental theory underlying the standard model have been supersymmetric (SUSY) models. There are compelling theoretical and phenomenological reasons to believe that nature is supersymmetric on microscopic scales, and that the observed asymmetry at low energies between bosons and fermions is due to spontaneous SUSY breaking. Much attention has been focused on the highly-successful minimal supersymmetric extension of the standard model, the MSSM. In this paper we address some aspects of this model and of its extension to include neutrino masses. In particular, we analyze the implications of neutrino masses for a new mechanism recently introduced by Dimopoulos, Giudice and Tetradis (DGT) [@ref:dgt] to ameliorate the flavor problem of the MSSM. In this analysis we present their mechanism somewhat differently from their original work, and then focus on its implications for rare leptonic processes such as $\mu\to e\gamma$. If the DGT mechanism is operative at a very large momentum scale $\Lambda$, then data about such rare processes can be combined with results from direct SUSY searches and from various neutrino experiments to reveal important information about the leptonic couplings at the scale $\Lambda$. The long-standing problem which DGT have sought to solve is that of flavor-changing neutral currents (FCNC) in the MSSM. If the soft mass matrices of squarks and sleptons are—as expected—not too far above the electroweak scale, and if they are neither proportional to unit matrices nor aligned with the corresponding fermion mass matrices, then they induce unacceptably large contributions to various FCNC processes, in particular neutral kaon oscillations and $\mu \to e \gamma$.
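To make the notion of (mis)alignment quantitative, the sketch below takes a toy lepton Yukawa matrix and a toy slepton-doublet mass-squared matrix, rotates the latter into the basis where the Yukawa is diagonal, and reads off the standard dimensionless mass-insertion parameter $\delta_{12}$ that bounds from $\mu\to e\gamma$ constrain. All numbers are hypothetical; this is a generic diagnostic, not the specific computation performed in this letter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy inputs (hypothetical numbers): lepton Yukawa and slepton mass^2 matrix
Y_E = np.diag([2.9e-6, 6.1e-4, 1.0e-2]) + 1e-4 * rng.normal(size=(3, 3))
m2_L = np.diag([1.00, 1.05, 1.20]) + 0.03 * rng.normal(size=(3, 3))  # TeV^2
m2_L = 0.5 * (m2_L + m2_L.T)              # hermitian (real symmetric here)

# Basis where Y_E is diagonal: Y_E = U_L diag(y) U_R^dagger; the doublet
# sleptons rotate with U_L, so the soft mass matrix becomes U_L^dag m2 U_L.
U_L, y, U_Rh = np.linalg.svd(Y_E)
m2_mass_basis = U_L.conj().T @ m2_L @ U_L

# Dimensionless mass-insertion parameter probed by mu -> e gamma
delta_12 = m2_mass_basis[0, 1] / np.mean(np.diag(m2_mass_basis))
print(f"delta_LL(12) = {delta_12:.3e}")   # exact alignment would give 0
```

An aligned (or unit-matrix) soft mass term gives $\delta_{12}=0$; the point of the discussion below is how various mechanisms try to make this small.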
Various mechanisms have been suggested to overcome this difficulty: if the gauginos are somewhat heavier than expected, they would raise the squark masses and make them roughly proportional to the unit matrix (though the difficulties in the leptonic sector would be harder to overcome); if the soft masses start out universal—proportional to the unit matrix—at a very high scale, they typically remain so in their first- and second-generation entries, and so contribute little to the two sensitive FCNC processes mentioned above; and if some flavor symmetries align the squarks with the quarks and the sleptons with the leptons, then once again the FCNC contributions can be suppressed. But the first two solutions have serious shortcomings: gaugino dominance requires unnaturally heavy gauginos and moreover is not very effective for $\mu \to e \gamma$; and universal soft masses seem an unlikely outcome of various theories at the highest scales. The third approach postulates a set of approximate symmetries to explain both the observed fermion mass matrices and their alignment with the squark and slepton masses [@ref:hor]. It is somewhat similar in spirit to the approach of DGT, in that the same suppression mechanism which works in the quark and lepton sectors is applied to the SUSY-breaking sector, and generically yields similar FCNC suppression. The actual amount of suppression, though, varies considerably depending on the horizontal symmetries used, and can be stronger or weaker than in the DGT scenario. In any case, a thoroughly novel mechanism for suppressing FCNC’s is very welcome. The recent proposal of DGT introduces just such a mechanism: a model, or more correctly a paradigm, in which the squark and slepton mass matrices are dynamically aligned with those of the corresponding fermions. In this letter we first present the idea and the assumptions of DGT somewhat differently than in the original proposal, focusing on the intrinsic link between any such dynamics and various fundamental concerns about the vacuum energy. We then show that even if such a mechanism is viable, and can greatly improve the situation in the quark sector, we do not expect it to be nearly sufficient in the lepton sector. In the paper of DGT individual lepton numbers were conserved, and therefore not surprisingly the slepton and lepton mass matrices were exactly aligned and no FCNC processes such as $\mu \to e \gamma$ could occur. However, realistically we expect the lepton number symmetries to be violated in the neutrino sector for a variety of phenomenological reasons. As we show below, such violations will induce a misalignment between leptons and sleptons. Though we cannot at present predict the degree of misalignment precisely, we expect the effect to be phenomenologically important, and to yield valuable information about the leptonic flavor violations at very high energies. In the standard model, flavor violation comes about exclusively through the Cabibbo-Kobayashi-Maskawa mixing matrix. We can choose a basis for the quark fields such that the gauge interactions are flavor-conserving, as are the Yukawa couplings of the leptons $Y_E = \widehat Y_E$ (the hat indicates that the matrix is diagonal) and of the down-type quarks $Y_D = \widehat Y_D$, but then there is no more freedom to diagonalize the Yukawa couplings of the up-type quarks $Y_U$: they are given by $Y_U = K^\dagger \widehat Y_U$, where $K$ is the CKM matrix.
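A quick numerical check of this bookkeeping: starting from an arbitrary complex up-type Yukawa matrix, a biunitary (singular-value) decomposition recovers a diagonal $\widehat Y_U$ and a left rotation playing the role of $K$; the right-handed rotation is unphysical and can be absorbed into the singlet fields. Phase and ordering conventions are ignored here, and the numbers are placeholders.

```python
import numpy as np

rng = np.random.default_rng(7)
Y_U = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))  # toy Yukawa

# Biunitary diagonalization: Y_U = V diag(y) W^dagger with y >= 0
V, y, Wh = np.linalg.svd(Y_U)
K = V.conj().T              # the CKM-like left rotation in this basis
Y_hat = np.diag(y)

# Y_U = K^dagger Y_hat, up to the unphysical right-handed rotation W^dagger
print(np.allclose(K.conj().T @ Y_hat @ Wh, Y_U))   # True
```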
Since the Yukawa couplings of the first two quark generations are small and the mixing with the third generation is small, the standard model exhibits very feeble FCNCs. In the lepton sector, flavor is exactly conserved. The minimal extension of this standard model introduces eight more potentially flavor-violating matrices: the five scalar mass matrices $\tilde m_Q^2$, $\tilde m_U^2$, $\tilde m_D^2$, $\tilde m_L^2$, and $\tilde m_E^2$, and the three trilinear scalar coupling matrices $A_E$, $A_D$, and $A_U$. We will choose once again to keep the gauge and gaugino interactions flavor-diagonal, and to do so we always rotate superpartners together. If we then stay with the above choice of basis for quark and lepton fields, we have no more freedom to diagonalize the eight new soft-SUSY-breaking matrices. If their off-diagonal terms are not suppressed relative to the diagonal ones, unacceptably large FCNCs can arise, as discussed above. We will concentrate first on the scalar masses, and then return to a discussion of the $A$ terms. DGT have proposed that these scalar mass matrices be promoted to dynamical fields rather than be treated as mere parameters. The advantage is that there may then exist a dynamical relaxation mechanism which would align these matrices with the Yukawa matrices and thereby minimize the flavor-changing interactions. Such a situation may arise in string theory, where the low-energy field theory parameters are often dynamically determined by the vacuum expectation values of certain fields. The fundamental, microscopic theory and the low-energy effective field theory are matched at a scale $\Lambda$ which we take to be of order the string or Planck scales, but could also be some lower scale. The Yukawa couplings are assumed to be fixed by the fundamental theory, perhaps by expectation values of fields with very large masses, so they are simply parameters of the effective theory. As for the scalar mass matrices, it is conceivable that their eigenvalues and orientations are determined by different mechanisms. We will only consider the “disoriented” scenario in which the eigenvalues are first fixed by some dynamics responsible for supersymmetry breaking, and then the orientations are dynamically determined by a set of light moduli fields. (We call them “moduli”, with an abuse of language, because they would correspond to flat directions when either the Yukawa couplings vanish or supersymmetry is unbroken.) If we denote the scalar masses by $3\times 3$ matrices $\tilde m_I^2$ where $I$ runs over the five fields $Q$ (squark doublets), $U$ (up-type antisquark singlets), $D$ (down-type antisquark singlets), $L$ (slepton doublets) and $E$ (charged slepton singlets), and diagonalize them by means of unitary matrices $U_I$ as in Refs. [@ref:dgt],
{ "pile_set_name": "ArXiv" }
null
--- abstract: 'This paper investigates estimating the variance of a temporal-difference learning agent’s update target. Most reinforcement learning methods use an estimate of the value function, which captures how good it is for the agent to be in a particular state and is mathematically expressed as the expected sum of discounted future rewards (called the return). These values can be straightforwardly estimated by averaging batches of returns using Monte Carlo methods. However, if we wish to update the agent’s value estimates during learning–before terminal outcomes are observed–we must use a different estimation target called the $\lambda$-return, which truncates the return with the agent’s own estimate of the value function. Temporal difference learning methods estimate the expected $\lambda$-return for each state, allowing these methods to update online and incrementally, and in most cases achieve better generalization error and faster learning than Monte Carlo methods. Naturally one could attempt to estimate higher-order moments of the $\lambda$-return. This paper is about estimating the variance of the $\lambda$-return. Prior work has shown that given estimates of the variance of the $\lambda$-return, learning systems can be constructed to (1) mitigate risk in action selection, and (2) automatically adapt the parameters of the learning process itself to improve performance. Unfortunately, existing methods for estimating the variance of the $\lambda$-return are complex and not well understood empirically. We contribute a method for estimating the variance of the $\lambda$-return directly using policy evaluation methods from reinforcement learning. Our approach is significantly simpler than prior methods that independently estimate the second moment of the $\lambda$-return. Empirically our new approach behaves at least as well as existing approaches, but is generally more robust.' bibliography: - 'main.bib' title: 'Directly Estimating the Variance of the $\lambda$-Return Using Temporal-Difference Methods' --- Introduction ============ ![Each TD node takes as input a reward ${R}$, a discounting function $\gamma$, and features $\phi$. For the direct method (**top**) the squared TD error of the first-stage value estimator is used as the [meta-reward]{} for the second-stage [$V$]{} estimator. For VTD (**bottom**), a more complex computation is used for the [meta-reward]{} and an extra stage of computation is required. []{data-label="fig:networks"}](combined_graphic.pdf){width="\linewidth"} Conventionally in reinforcement learning, the agent estimates the expected value of the return—the discounted sum of future rewards, as an intermediate step to find an optimal policy. Given a trajectory of experience, the agent can average the returns observed from each state. To estimate the value function online—while the trajectory unfolds—we update the agent’s value estimates towards the expected $\lambda$-return. The $\lambda$-return has the same expected value as the return, but can be estimated online using a memory trace. Algorithms that estimate the expected value of the $\lambda$-return are called temporal-difference learning methods. The first moment, however, is not the only statistic that can be estimated. In addition to the expected value, we could estimate the variance of the $\lambda$-return. An estimate of the variance of the $\lambda$-return can be used in several ways to improve estimation and decision-making. 
@Sato2002 [@Ghavamzadeh; @Tamar2012; @Tamar2013b] use an estimate of the variance of the $\lambda$-return to design algorithms that account for risk in decision making. Specifically, they formulate the agent’s objective as maximizing reward while minimizing the variance of the $\lambda$-return. @White2016b estimated the variance of the $\lambda$-return, [$V$]{}, to automatically adapt the trace-decay parameter, $\lambda$, used in learning updates. This resulted in faster learning for the agent, but more importantly removed the need to tune $\lambda$ by hand. The variance [$V$]{} can be estimated directly or indirectly. Indirect estimation involves estimating the first moment (the value ${J}$) and second moment ($M$) of the return and taking their difference as: ${V}(s)=M(s)-{J}(s)^2$. @Sobel1982 was the first to formulate a Bellman operator for $M$. Later, @Tamar2016 [@Tamar2013b; @Ghavamzadeh] extended @Sobel1982’s approach from estimating the variance for $\lambda = 0$ to $\lambda = 1$. Finally, @White2016b introduced an estimation method called VTD, which supports off-policy learning [@Sutton2009; @Maei2011], state-dependent discounts and state-dependent trace-decay parameters. An alternative approach is to estimate the variance of the $\lambda$-return [$V$]{} directly. This has been considered by @Tamar2012, but they were unable to derive a Bellman operator—instead giving a Bellman-like operator—and considered only cost-to-go problems. In this paper, we show that one can use temporal-difference learning, an online method for estimating value functions [@Sutton1988], to estimate [$V$]{} directly. Our new method supports off-policy learning, state-dependent discounts, and state-dependent trace-decay parameters. We introduce a new Bellman operator for the variance of the $\lambda$-return, and further prove that even for a value function that does not satisfy the Bellman operator for the expected $\lambda$-return, the error in this recursive formulation is proportional to the error in the value function approximation. Interestingly, the Bellman operator for the second moment requires an unbiased estimate of the $\lambda$-return [@White2016b]; our Bellman operator for the variance avoids this term, and so has a simpler update. Both our direct method and VTD can be seen as a network of two TD estimators running sequentially (Figure \[fig:networks\]). Our goal is to understand the empirical properties of the direct and indirect approaches for estimating variance, as neither has yet been thoroughly studied. In general, we found that direct estimation is just as good as VTD, and in many cases better. Specifically, we observe that the direct approach is better behaved in the early stages of learning, before the value function has converged. Further, we observe that the variance of the [$V$]{} estimates can be higher for VTD under several circumstances: (1) when there is a mismatch in step-size between the value estimator and the [$V$]{} estimator, (2) when traces are used with the value estimator, (3) when estimating [$V$]{} of the off-policy return, and (4) when there is error in the value estimate. Overall, we conclude that the direct approach to estimating [$V$]{} is both simpler and better behaved than VTD.
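To make the two-stage structure of Figure \[fig:networks\] concrete, here is a minimal tabular sketch of the direct method in the simplest on-policy setting with $\lambda=1$ and TD(0) updates: the first stage learns $J$, and the second stage uses the squared TD error of the first stage as its meta-reward, with squared discounting. The random-walk environment and the step sizes are arbitrary choices, and the sketch omits the off-policy corrections and state-dependent $\gamma$ and $\lambda$ that the full algorithm supports; its fixed point matches the true variance only to the extent that $J$ is accurate.

```python
import numpy as np

rng = np.random.default_rng(0)
n, gamma, alpha, beta = 5, 1.0, 0.1, 0.05   # 5-state random walk, step sizes
J = np.zeros(n + 2)                          # states 0..6; 0 and 6 terminal
V = np.zeros(n + 2)                          # direct variance estimates

for episode in range(30000):
    s = (n + 1) // 2
    while 0 < s < n + 1:
        s2 = s + rng.choice([-1, 1])
        r = 1.0 if s2 == n + 1 else 0.0
        g = 0.0 if s2 in (0, n + 1) else gamma   # discount is 0 at terminals
        delta = r + g * J[s2] - J[s]             # stage-1 TD error
        J[s] += alpha * delta
        # stage 2: meta-reward delta^2, meta-discount g^2 (lambda = 1 case)
        V[s] += beta * (delta ** 2 + g ** 2 * V[s2] - V[s])
        s = s2

# Here the return is Bernoulli, so the true variance is J(1 - J)
print(np.round(J[1:-1], 2), np.round(V[1:-1], 2))
print(np.round(J[1:-1] * (1 - J[1:-1]), 2))
```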
The MDP Setting =============== We model the agent’s interaction with the environment as a finite Markov decision process (MDP) consisting of a finite set of states ${\mathcal{S}}$, a finite set of actions, ${\mathcal{A}}$, and a transition model $p: {\mathcal{S}}\times {\mathcal{S}}\times {\mathcal{A}}\rightarrow [0,1]$ defining the probability $p(s'|s, a)$ of transition from state $s$ to $s'$ when taking action $a$. In the policy evaluation setting considered in this paper, the agent follows a fixed policy $\pi(a|s)\in [0,1]$ that provides the probability of taking action $a$ in state $s$. At each timestep the agent receives a random reward $R_{t+1}$, dependent only on $S_t, A_t, S_{t+1}$. The return is the discounted sum of future rewards $$\begin{aligned} \label{eq:MCreturn} G_{t}&={R}_{t+1} + \gamma_{t+1}{R}_{t+2} + \gamma_{t+1}\gamma_{t+2}{R}_{t+3} + \ldots \\ &={R}_{t+1} + \gamma_{t+1}G_{t+1}. \end{aligned}$$ The discount function $\gamma: {\mathcal{S}}\rightarrow [0,1]$, with $\gamma_{t}\equiv\gamma(S_t)$, provides a variable level of discounting depending on the state [@Sutton2011]. The value of a state, ${j}(s)$, is defined as the expected return from state $s$ under a particular policy $\pi$ $$\begin{aligned} {j}(s)=&{\mathbb{E}}_{\pi}[G_t|S_t=s]. \label{eq:value}\end{aligned}$$ We use ${j}$ to indicate the true value function and ${J}$ the estimate. The TD-error is the difference between the one-step approximation and the current estimate: $$\begin{aligned} \delta_t={R}_{t+1}+\gamma_{t+1}{J}_t(S_{t+1})-{J}_t(S_t). \label{eq:TDerr}\end{aligned}$$ The $\lambda$-return $$G_t^{\lambda}={R}_{t+1}+\gamma_{t+1}(1-\lambda_{t+1}){J}_t(S_{t+1}) +\gamma_{t+1}\lambda_{t+1} G^{\lambda}_{t+1} $$ provides a bias-variance trade-off by incorporating ${J}$, which is a potentially lower-variance but biased estimate of the return. This trade-off is determined by a state-dependent trace-decay parameter, $\lambda_t\equiv\lambda(S_
{ "pile_set_name": "ArXiv" }
null
[**Comment on “Phase Coexistence in Multifragmentation”**]{}\ In their letter Moretto et al. [@moretto96] propose that the fragment charge distribution in nuclear multifragmentation gives a signal for the coexistence of the nuclear liquid and vapor phases. In our opinion this signal is not useful and is misleading, as fluctuations of different origin spoil it. Phase transitions in macro-physics are usually indicated by a peak in the specific heat, e.g. $c_p(T)$, or by an anomaly of the caloric equation of state ($C\!E\!S$) $T(E)$, e.g. at constant pressure or volume. In closed finite systems, such as highly excited nuclei, phase transitions are well indicated by the shape of the $C\!E\!S$, cf. [@gross72; @gross95]. Inherent to phase transitions are large fluctuations at the transition, which do not allow a clear phase separation in space or in any other observable in small finite systems because of the nonvanishing coherence length of the phase fluctuations, cf. [@janke95; @gross150], and which differ at constant $E$ and at constant $T$. E.g. even though the backbending of the $C\!E\!S$ is clearly seen for a 10-state Potts model at a lattice size of $100\times100$, and the area under the oscillation of $T^{-1}(E)$ is close to the asymptotic value of the surface entropy, no phase separation can be seen in the configurations. Therefore, the interpretation by ref.[@moretto96] is too naive and suffers from several further difficulties: Equations like formulas (1-3) of ref.[@moretto96] take charge conservation into account only via the mean value but leave its fluctuation unrestricted. These fluctuations are usually substantial, especially near phase transitions. Moreover, in nuclear fragmentation one has to take care of the indistinguishability of identical fragments, and the partition problem is not the Euler problem as is suggested in [@phair95]. The correct formula for the quantum partition of an integer $Z_0$ is given in [@gross110]. The experimental method used in ref.[@phair95; @moretto96] is of course not ideally suited to look for a phase transition in equilibrated nuclear systems. First of all, this system is generated in a collision of two sizeable nuclei. The transverse energy $E_t$ does not give the total excitation energy $\varepsilon^*$ of the system, nor is it necessarily proportional to it. In fact the width in $\varepsilon^*$ at low $E_t$ can easily be of the order of a few MeV/nucleon [@moretto94a]. I.e., a fixed value of $E_t$ allows for considerable fluctuations of $\varepsilon^*$. It is therefore necessary to investigate the signal of ref.[@moretto96] in a situation where we definitely [*have an equilibrated*]{} nuclear system with a [*sharply defined excitation energy*]{}. Lacking experimental data of this kind, we investigated a model system which we know by experience reproduces nuclear multifragmentation and which shows a nuclear phase transition of first order towards fragmentation: the Berlin microcanonical Metropolis Monte Carlo $M\!M\!M\!C$ model [@gross95]. Here, as also in other versions of statistical multifragmentation models like the Copenhagen model [@bondorf95], the phase transition towards fragmentation is clearly seen as an anomaly in the $C\!E\!S$. In Fig. 1 we show the quantity $cZ=\ln\{P_n(Z)/P_{n+1}(Z)\}$, averaged over the IMF multiplicities $n$ to get better statistics, vs. $Z$. Here $P_n(Z)$ is the probability to find one fragment with charge $Z$ in events with $n$ IMFs.
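For readers wishing to reproduce such a plot from event lists, the following sketch extracts $cZ$ from simulated fragmentation events. The IMF charge window and the reading of $P_n(Z)$ as the probability of at least one fragment of charge $Z$ are assumptions made for the illustration, since conventions vary in the literature.

```python
import numpy as np
from collections import defaultdict

def c_of_Z(events, Z_max=30, imf_lo=3, imf_hi=20):
    """events: list of events, each a list of fragment charges Z."""
    hits = defaultdict(lambda: np.zeros(Z_max + 1))   # hits[n][Z]
    n_ev = defaultdict(int)
    for ev in events:
        n = sum(imf_lo <= z <= imf_hi for z in ev)    # IMF multiplicity
        n_ev[n] += 1
        for z in set(ev):                             # at least one fragment Z
            if z <= Z_max:
                hits[n][z] += 1
    cZ = np.full(Z_max + 1, np.nan)
    for z in range(1, Z_max + 1):
        logs = [np.log((hits[n][z] / n_ev[n]) / (hits[n + 1][z] / n_ev[n + 1]))
                for n in sorted(n_ev)
                if n + 1 in n_ev and hits[n][z] > 0 and hits[n + 1][z] > 0]
        if logs:
            cZ[z] = np.mean(logs)                     # <ln P_n(Z)/P_{n+1}(Z)>_n
    return cZ
```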
The two panels at $\varepsilon^*=5$ and $6$ MeV/nucleon resemble the findings of Moretto. At lower excitation, in contrast to ref.[@moretto96], $cZ$ is forced to rise with $Z$, as the emission of a second fragment with larger $Z$ is prohibited due to limited energy resources. We guess that at low [*transverse*]{} energy the experimental data of ref.[@moretto96] are overshadowed by deep inelastic collisions, where some of the small fragments are likely from projectile break-up and as such have small transverse energies. Consequently, low total transverse energies do not really characterize the limitation to low excitation energies, as indicated by the large width in $\varepsilon^*(E_t)$ [@moretto94a]. This is probably the reason for the vanishing quantity $c$ found in ref.[@moretto96] at low transverse energies. Conclusion: From all experience of microcanonical first order phase transitions in small systems one knows that it is normally rather difficult to see a clear phase separation even though the caloric equation of state gives an unambiguous signal; phase fluctuations are usually too large. Within the arguments of ref.[@phair95; @moretto96] there are at least [*two*]{} important conservation laws to be observed by the reaction: conservation of charge [*and energy*]{}. The latter forces the “chemical potential” $c$ to rise again at low excitation energy. The observation of an anomaly in the caloric equation of state [@pochodzalla95] is still the best signal for a phase transition, as was predicted in [@gross72; @bondorf95; @gross95]. This has long been one of the classical signals for phase transitions.\  \ D.H.E. Gross and A.S. Botvina\ Hahn-Meitner-Institut Berlin, Bereich Theoretische Physik, 14109 Berlin, Germany\ L.G. Moretto et al., 76:372–375, 1996.\ D.H.E. Gross et al., 56:1544, 1986.\ D.H.E. Gross, 53:605–658, 1990.\ D.H.E. Gross et al., in print, 1996.\ W. Janke and S. Kappler, 197:227, 1995.\ L. Phair et al., 75:213, 1995.\ D.H.E. Gross et al., 1:340–358, 1992.\ L. Moretto, private communication, 1994.\ J.P. Bondorf et al., 257:133–221, 1995.\ J. Pochodzalla et al., 75:1040, 1995.
{ "pile_set_name": "ArXiv" }
null
--- abstract: 'We review the physics potential at FAIR in the light of the existing data of the RHIC-BES program and the NA49/NA61 beam and system size scan. Special emphasis will be put on the potential of fluctuation as well as dilepton observables.' address: - '$^{1}$ Frankfurt Institute for Advanced Studies (FIAS), Ruth-Moufang-Str. 1, and Institut für Theoretische Physik, Johann Wolfgang Goethe University, 60438 Frankfurt am Main, Germany' - '$^{2}$ SUBATECH, UMR 6457, Université de Nantes, Ecole des Mines de Nantes, IN2P3/CNRS. 4 rue Alfred Kastler, 44307 Nantes cedex 3, France' - '$^3$ CFTP, Departamento de Fisica, Instituto Superior Tecnico (Universidade Tecnica de Lisboa), Av. Rovisco Pais, 1049-001 Lisboa, Portugal' author: - 'M. Bleicher$^{1}$, M. Nahrgang$^{1,2}$, J. Steinheimer$^{1}$, Pedro Bicudo$^{3}$' title: 'Physics Prospects at FAIR [^1]' --- Introduction ============ The Facility for Antiproton and Ion Research, FAIR [@bib1a; @bib1b; @bib1c], will provide an extensive range of particle beams, from protons and antiprotons to ion beams of all chemical elements up to the heaviest one, uranium, with, in many respects, world-record intensities. As a joint effort of several countries, the new facility builds on, and substantially expands, the present accelerator system at GSI, both in its research goals and its technical possibilities. Compared to the present GSI facility, an increase of a factor of 100 in primary beam intensities, and up to a factor of 10000 in secondary radioactive beam intensities, will be a technical hallmark of the new facility. The main thrust of FAIR research focuses on the structure and evolution of matter on both a microscopic and a cosmic scale. The approved FAIR research programme embraces 14 experiments, which form the four scientific pillars of FAIR and offer a large variety of unprecedented forefront research in hadron, nuclear, atomic and plasma physics as well as applied sciences. Already today, over 2500 scientists and engineers are involved in the design and preparation of the FAIR experiments. They are organized in the experimental collaborations APPA, CBM, NuSTAR, and PANDA. The CBM/HADES experiment is of particular interest for the understanding of highly compressed nuclear matter and its relevance to fundamental aspects of the strong interaction. HADES [@bib3a; @bib3b; @bib3c] and CBM [@bib3d; @bib3e] at SIS100/300 will explore the QCD phase diagram in the region of very high baryon densities and moderate temperatures. This approach includes the study of the nuclear matter equation-of-state, the search for new forms of matter, the search for the predicted first order phase transition between hadronic and partonic matter, the QCD critical endpoint, and the chiral phase transition, which is related to the origin of hadron masses. It is intended to perform comprehensive measurements of hadrons, electrons, muons and photons created in collisions of heavy nuclei, proton–nucleus, and proton–proton collisions at different beam energies. Most of the rare probes like lepton pairs, multi-strange hyperons and charm will be measured for the first time in the FAIR energy range. Dileptons ========= Dileptons represent a penetrating probe of the hot and dense nuclear matter created in heavy ion collisions at the CBM experiment at the FAIR facility. The analysis of the electromagnetic response of the dense and hot medium is tightly connected to the investigation of the in-medium modification of the vector meson properties.
Vector mesons are ideally suited for this exploration, because they can directly decay into a lepton-antilepton pair. One therefore aims to infer information on the modifications induced by the medium on specific properties of the vector meson, such as its mass and/or its width, from the invariant mass dilepton spectra. In this work, we present a consistent calculation of the dilepton production at SPS energy within a model which attempts to take into account both the complexity of the dilepton rate in a hot and dense medium as well as the complexity of the pre-equilibrium, equilibrium, and post-equilibrium heavy-ion dynamics. The latter is modelled with an integrated Boltzmann+hydrodynamics hybrid approach based on the Ultrarelativistic Quantum Molecular Dynamics (UrQMD) transport model with an intermediate (3+1) dimensional ideal hydrodynamic stage [@Petersen:2008dd]. During the locally equilibrated hydrodynamical stage, dimuon emission is calculated locally in space-time according to the expression for the thermal equilibrium rate of dilepton emission per four-volume and four-momentum from a bath at temperature $T$ and baryon chemical potential $\mu_B$. During the local equilibrium phase, the radiation rate of the strongly interacting medium is conventionally modelled using the vector meson dominance model and related to the spectral properties of the light vector mesons, with the $\rho$ meson having the dominant role [@Gale:1990pn; @Rapp:1999ej; @Ruppert:2007cr; @vanHees:2007th]. In-medium modifications of the $\rho$-meson spectral function due to scattering from hadrons in the heat bath are properly included in the model. Two additional sources of thermal radiation, namely emission from four-pion annihilation processes and from a thermalized partonic phase, are included as well. As an input for the hydrodynamical part of the evolution we employ an equation of state in line with lattice data that follows from coupling the Polyakov loop to a chiral hadronic flavor-SU(3) model [@Steinheimer:2010ib]. The results we show for the dilepton spectra in In+In collisions at $E_{lab}=160 A$ GeV will be compared to fully acceptance corrected NA60 data [@Arnaldi:2008fw]. The data correspond to nearly minimum bias collisions, selecting events with a charged particle density $dN_{ch}/d\eta$$>$30. In Fig. \[fig1\] we show results for the invariant mass spectra of the excess dimuons in two slices of the transverse momentum of the dilepton pair $p_T$ when adopting a sudden freezeout approximation. The theoretical spectra are normalized to the corresponding average number of charged particles in an interval of one unit of rapidity around mid-rapidity. Results for more bins in transverse momentum, as well as a detailed discussion of the applied model and results, can be found in [@arXiv:1102.4574]. Of particular interest is that in the intermediate mass region, 1$<$$M$$<$1.5 GeV, we find that emission from the QGP accounts for about half of the total radiation. The remaining half is filled by the considered hadronic sources. The $4\pi$ annihilation alone is comparable to the QGP emission only for $M$$>$1.4 GeV. This offers the possibility to quantitatively pin down the QGP contribution to the dilepton spectra and consequently the active degrees of freedom at SIS beam energies.
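Schematically, such spectra arise by folding a local thermal rate with the hydrodynamic history: every fluid cell contributes its four-volume times the equilibrium rate at its local $(T,\mu_B)$. The sketch below shows only this bookkeeping; the stand-in `rate` function is a hypothetical Boltzmann-like form, not the in-medium spectral functions used in the actual calculation.

```python
import numpy as np

def rate(M, T):
    # Placeholder thermal emission rate dN/(d^4x dM); NOT the in-medium
    # rho / 4pi / QGP rates of the text, which require spectral functions.
    return M ** 2 * T * np.exp(-M / T)

def dilepton_spectrum(cells, masses):
    """cells: iterable of (four_volume, T, mu_B) for each fluid cell."""
    dN_dM = np.zeros_like(masses)
    for d4x, T, mu_B in cells:
        dN_dM += d4x * rate(masses, T)   # mu_B dependence omitted in the stub
    return dN_dM

# Toy hydro history: (fm^4, GeV, GeV) per cell, hypothetical numbers
cells = [(0.5, 0.25, 0.40), (1.5, 0.18, 0.50), (3.0, 0.14, 0.55)]
M = np.linspace(0.2, 1.5, 6)
print(dilepton_spectrum(cells, M))
```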
Fluctuations induced by a phase transition ========================================== At larger baryochemical potential, as achieved at FAIR, a first order phase transition is expected from model studies [@Bleicher:1998wu; @Scavenius:2000qd; @Ratti:2005jh; @arXiv:0704.3234]. Interesting observables could here be based on the growth of fluctuations due to the nonequilibrium effect of supercooling leading to nucleation and spinodal decomposition [@Csernai:1992tj; @Mishustin:1998eq; @Randrup:2010ax; @Chomaz:2003dz]. At zero baryochemical potential the nature of the phase transition of QCD is well understood from lattice QCD calculations, which show that it is an analytic crossover [@Aoki:2006we]. As a consequence there must be a critical point, which terminates the line of first order phase transitions. In equilibrium systems, fluctuations and correlations of the order parameter diverge at the critical point. Coupling particles to the sigma field, the order parameter of chiral symmetry, leads to a nonmonotonic behaviour of fluctuations of net-charge or net-baryon number multiplicities [@Stephanov:1998dy; @Stephanov:1999zu]. The key ingredient is the correlation length, which becomes infinite in a system at a critical point. In a realistic evolution of a heavy-ion collision, however, the growth of the correlation length is limited by the size of the system and by the finite time which the dynamical system spends at the critical point. Relaxation times also become infinite at the critical point, a phenomenon called critical slowing down. Even if the system is in equilibrium above the critical point, it is necessarily driven out of equilibrium by passing through the critical point. Assuming a phenomenological time evolution of the correlation length with parameters from the $3$d Ising universality class, it was found that the correlation length does not grow beyond $2$–$3$ fm [@Berdnikov:1999ph]. The explicit propagation of fluctuations coupled to a dynamic model is a necessary step towards understanding the QCD phase diagram from heavy-ion collision experiments. Here we present results from a recently developed extension of chiral fluid dynamics [@Mishustin:1998yc; @Paech:2003fe], which self-consistently includes the nonequilibrium propagation of the fluctuation of the order parameter of chiral symmetry, the sigma field [@Nahrgang:2011mg
{ "pile_set_name": "ArXiv" }
null
--- abstract: 'Monatomic nanowires of the nonmagnetic transition metals Ru, Rh, and Pd have been studied theoretically, using first-principles computational techniques, in order to investigate the possible onset of magnetism in these nanosystems. Our fully relativistic spin-polarized all-electron density functional calculations reveal the onset of Hund’s rule magnetism in nanowires of all three metals, with mean-field moments of 1.1, 0.3, and 0.7 $\mu_B$, respectively, at the equilibrium bond length. An analysis of the band structures indicates that the nanocontact superparamagnetic state suggested by our calculations should affect the ballistic conductance between tips made of Ru, Rh or Pd, leading to possible temperature- and magnetic-field-dependent conductance.' address: - '$^1$Materialvetenskap, Brinellvägen 23, KTH, SE-10044 Stockholm, Sweden' - '$^2$Abdus Salam International Center for Theoretical Physics (ICTP), Strada Costiera 11, 34100 Trieste, Italy' - '$^3$International School for Advanced Studies (SISSA), via Beirut 2–4, 34014 Trieste, Italy' - '$^4$INFM DEMOCRITOS National Simulation Center, via Beirut 2–4, 34014 Trieste, Italy ' author: - 'A Delin$^{1,2}$ and E Tosatti$^{2,3,4}$' title: ' Electronic structure of 4d transition-metal monatomic wires ' --- Introduction ============ Reducing the dimensionality and size of a metallic object eventually leads to quantum confinement of the electrons in one or more dimensions. Examples of such systems are metallic nanowires, where the electrons are confined in two dimensions, but unconfined in the third dimension, along the wire. The smallest possible metallic wire consists of just a single chain of atoms. Experimentally, long segments of such nanowires have been realized, in particular of Au[@kondo2000_helical; @rodrigues2000]. Production of shorter segments was recently reported for several other metals, including the $4d$ transition metals Ru, Rh[@itakura2000], and Pd[@ugarte]. The quantum confinement of the electrons in the wires results in intriguing behaviour with respect to their mechanical, electrical and chemical properties and causes new physical phenomena to appear, for example quantized conductance[@wees1988] and helical geometries[@gulseren1998; @kondo2000_helical; @tosatti2001_tension]. Thus, the properties of these nanosystems may be dramatically different from the bulk properties of the same metals. In particular, it is interesting to explore whether and how nanowires of bulk-nonmagnetic metals can become magnetic, and how other properties of these nanosystems are in turn affected by the presence of magnetism, especially of a genuine Hund’s rule magnetic order parameter. We recently performed a similar study of the $5d$ metals Os, Ir, and Pt, which revealed intriguing magnetic properties of nanowire systems. In the present paper, we concentrate on the $4d$ transition metals Ru, Rh, and Pd, and contrast our results for these metals with our results for the corresponding $5d$ systems. We investigate the possibility of ferromagnetism[@note1] and its effect on other properties, notably quantized conductance, for straight monostrand nanowires of these metals, using state-of-the-art all-electron computational methods based on density functional theory. We have also performed the corresponding calculations for the noble metal Ag, where no magnetism is expected, for comparison.
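A back-of-the-envelope way to anticipate such results is the mean-field Stoner criterion $I\,N(E_F)>1$, with $I$ the exchange (Stoner) parameter and $N(E_F)$ the paramagnetic density of states per spin at the Fermi level; one-dimensional bands have van Hove singularities that strongly enhance $N(E_F)$. The one-band tight-binding sketch below only illustrates this enhancement; the value of $I$ and the sample fillings are hypothetical, and the results reported here rest on full density functional calculations rather than on this criterion.

```python
import numpy as np

t = 1.0                                    # nearest-neighbour hopping (toy, eV)
k = np.linspace(-np.pi, np.pi, 400001)[:-1]
eps = -2.0 * t * np.cos(k)                 # 1D tight-binding band

# Paramagnetic DOS per spin per atom (the band integrates to 1 state/spin);
# note the van Hove divergences at the band edges +-2t
dos, edges = np.histogram(eps, bins=800, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

I_stoner = 0.7                             # assumed exchange parameter (eV)
for E_F in (-1.995, -1.0, 0.0):            # sample Fermi levels
    N_EF = dos[np.argmin(np.abs(centers - E_F))]
    print(E_F, round(float(N_EF), 3), I_stoner * N_EF > 1.0)
```

Only very close to a van Hove singularity does the criterion flip to magnetic in this toy band, which is the qualitative reason why confinement to one dimension can magnetize otherwise nonmagnetic $4d$ metals.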
We address here the physics of metallic nanowires suspended between two leads, where transmission electron microscope images of monostrand nanowires indicate straight wire geometries[@ugarte]. Nanowires can be stabilized in this way only temporarily, as the flow of atoms to the leads inevitably implies stretching and thinning, which eventually breaks the nanowire[@tosatti2001_tension]. A free, unsuspended chain of atoms would be totally unstable against an even larger set of deformations, for the final stable configuration will be a cluster, approximately spherical in shape, with a surface dominated by close-packed facets. In our calculations, we address strictly the straight wire geometry, with equidistant atoms. One could imagine more complicated monostrand wire geometries, for example zigzag geometries[@sanchezportal1999] or Peierls distortions, leading to di-, tri-, or multimerization[@peierls]. Such distorted configurations of an unsuspended monostrand wire may represent interesting local minima or saddle points in the total energy. When suspended between leads, however, local minima or saddle points of the string tension are to be considered instead of those of the energy, since they alone will correspond to long-lived, or “magic”, nanowires[@tosatti2001_tension]. In Au, the zigzag deformations do not survive the string tension, and the same would happen, if they existed, in Ru, Rh, and Pd. Thus, we shall ignore zigzag distortions, since they are soft against tension in the systems we address here. Peierls di-, tri- or multimerization distortions are critically dependent on a long wire as well as on precise Fermi surface nesting, and would lead to insulating nanowires. In Ru, Rh, and Pd, the reported nanocontacts are three atoms long at most[@ugarte]. Moreover, there is no unique nesting since multiple bands cross the Fermi level, the precise Fermi crossings are tension-dependent, and the corresponding incommensurate order parameters are likely suppressed by size. The experimental evidence that nanocontacts of Ru, Rh, and Pd are consistently metallic further suggests neglecting Peierls distortions as well, until evidence to the contrary emerges. In wires, the electrons are confined in two dimensions. Before investigating in more detail what effect that has on the magnetic properties of Ru, Rh, and Pd, let us briefly summarize what is known about the magnetic properties of these metals when the electrons are confined only in one dimension, or in all three. In a monolayer grown on, or sandwiched between, magnetically “inert” substrates such as Cu, Ag, Au, or graphite, the electrons are at least approximately confined in one dimension, opening up the possibility for two-dimensional magnetism. The search for two-dimensional magnetism in Ru, Rh or Pd in such systems has been conducted extensively, both theoretically and experimentally. Starting with Ru, a monolayer of this metal has been observed to order ferromagnetically when grown on graphite[@pfandzelter1995], and when layered between graphene sheets[@suzuki2003]. No magnetism has been observed for Ru monolayers grown on Ag or Au surfaces. Theoretical calculations predict a Ru monolayer to be magnetic on graphite (under certain conditions)[@chen1997; @kruger1998], Ag[@eriksson1991; @blugel1992] and Au[@blugel1992], but nonmagnetic on Cu[@garcia1999]. As regards Rh metal, the only case in which two-dimensional magnetism has been observed is in a superlattice structure of Rh monolayers sandwiched between adjacent graphene sheets[@suzuki2003].
Monolayers of Rh grown on Ag, Au, or graphite have not shown any signs of magnetic order[@beckmann1997; @chado2001; @goldoni2001]. In great contrast to the experimental results, Rh monolayers have been predicted to order ferromagnetically on Cu[@garcia1999], Ag[@eriksson1991; @blugel1992], Au[@zhu1991; @blugel1992], and graphite[@chen1997; @kruger1998]. Finally, it has been predicted that a monolayer of Pd should be nonmagnetic on all substrates tested (Cu[@garcia1999], Ag[@eriksson1991; @redinger1995; @niklasson1997], and graphite[@chen1997; @kruger1998]). However, for Pd (and also Rh) films on Ag, calculations predict that the magnetic moment of the film is periodically suppressed and enhanced due to quantum well effects as a function of film thickness, giving rise to a finite ferromagnetic moment in certain films thicker than one monolayer[@niklasson1997]. All in all, the discrepancy between theory and experiment regarding magnetism in Ru and Rh monolayers appears to be rather large at present. One possible explanation for this discrepancy is diffusion of transition-metal atoms into the substrate, at least when the substrate is a noble metal. As regards Rh on graphite, the intricacies of this system have been discussed in detail in reference [@goldoni2001]. If we reduce all three dimensions down to nanometer size, we end up with clusters. Small Ru, Rh, and Pd clusters have been predicted to have magnetic ground states[@galicia1993; @reddy1993; @vitos2000; @moseler2001]. Experimentally, magnetism has been observed in Rh and Pd clusters. Counter-intuitively, large Pd clusters appear to be magnetic whereas small Pd clusters are not[@cox1994; @sampedro2003; @taniyama1997]. Returning to magnetism in nanowires, it has been predicted, using a semi-empirical tight-binding method, that monatomic rows of Rh on Ag(001) are ferromagnetic[@bazhanov2000]. Monatomic rows of Ru, Rh, and Pd on vicinal surfaces of Ag have also been studied theoretically using a screened Korringa-Kohn-Rostocker Green function method[@bellini2001], predicting magnetism to appear in Ru and Rh chains, but not in Pd chains. Further, Spišák and Hafner[@spisak2003] predict ferrom
{ "pile_set_name": "ArXiv" }
null
--- abstract: 'Hybrid recommendation usually combines collaborative filtering with content-based filtering to exploit the merits of both techniques. It is widely accepted that hybrid filtering outperforms either single algorithm, and thus it has become the new trend in electronic commerce in recent years. In this paper, we propose a novel hybrid recommendation system based on the weighted stochastic block model (WSBM). Our algorithm not only makes full use of content-based and collaborative filtering recommendation to solve the cold-start problem but also improves the accuracy of recommendation by selecting the nearest neighbors with the WSBM. The experimental results show that our proposed approach has better prediction and classification accuracy than traditional hybrid recommendation.' address: - 'Department of Mathematics, Jinan University, Guangzhou, China' - 'Department of Mathematics, The Hong Kong University of Science and Technology, Hong Kong, China' author: - Yuchen Xiao - Ruzhe Zhong bibliography: - 'Recommendation.bib' title: A hybrid recommendation algorithm based on weighted stochastic block model --- Weighted stochastic block model, hybrid recommendation. 90B15, 91B74, 93A30. Introduction ============ With the development of information technology and network technology, the scale of information on the Internet has increased rapidly in recent years. How to filter overloaded information effectively and recommend useful information to users has become a hot issue in recommendation systems. Current recommendation algorithms include content-based, social-based [@wang2014online], context-aware, collaborative filtering [@candillier2007comparing; @bobadilla2012collaborative], knowledge-based [@shi2004intelligent; @bobadilla2013recommender], graph-based [@wei2013distinguishing] and hybrid recommendation [@porcel2012hybrid]. Among all these algorithms, content-based, collaborative filtering and hybrid recommendation are the most widely used. Content-based recommendation systems [@van2000using; @salter2006cinemascreen] determine the preferences of users from the choices they have made before. Content-based recommendation extracts the description documents of items and determines whether the items suit users by comparing them with the user's preferences. The collaborative filtering algorithm [@candillier2007comparing; @su2009survey] is one of the most widely used and mature recommendation methods [@burke2002hybrid]. Its main idea is to use the historical rating data of a user's nearest neighbors to predict that user's ratings, and then recommend the Top-N items to him or her. Breese divided collaborative filtering algorithms into two classes: memory-based and model-based [@breese1998empirical]. Memory-based algorithms are simple, but their accuracy decreases as the number of users increases, while model-based algorithms need to create new models frequently to adapt to changes in users or items. It is evident that different recommendation methods have different advantages and disadvantages, so hybrid recommendation [@xu2005content; @borras2014intelligent] has become a commonly used approach in recommendation systems. For example, Ariyoshi and Kamahara proposed a hybrid information recommendation method that applies singular value decomposition (SVD) to both collaborative filtering and content-based filtering to reduce the cost of computation [@ariyoshi2010hybrid]. Lucas et al.
divided the users into groups using personal demographic data (demographic-based), content information of the items previously selected by the user (content-based) and the information of other users (collaborative filtering) [@lucas2013hybrid]. Rathachai et al. proposed a prediction model which is a combination of three scoring functions, taking collaborative filtering, community structure, and biological classification into account [@chawuthai2014link]. Hybrid recommendation can not only take full advantage of a variety of recommendation technologies but also avoid their disadvantages, and the accuracy of the recommendation can be increased by using it. Based on the users' basic information and historical rating data, we propose a hybrid recommendation based on the WSBM. Our hybrid recommendation contains two parts: the content-based part and the collaborative filtering part. The content-based part predicts the users' ratings through the similarities of item documentations, so as to recommend new items. The collaborative filtering part combines the similarity of users' basic information with the similarity of historical rating data and constructs an overall similarity, which enables recommendation for new users. Moreover, in this part, we generate a user-user social network based on shared purchase records and adopt the WSBM to find the nearest neighbors, which improves the accuracy of the algorithm.

Hybrid recommendation based on WSBM
===================================

Content-based rating prediction
-------------------------------

### Building the feature document of items

The vector space model (VSM) is used in this paper to build the feature document of items. We first extract the description documents of items from the web, and remove the stop words and function words. According to tf-idf [@kim2014noise], the feature weights of the description documents can be calculated, and the feature words above a given threshold $\sigma$ can be chosen to build a feature space. In the feature space, we treat the feature words as keys and the weights as the values for the keys. Thus we get the document-word frequency matrix $X_{r\times s}$ (where $r$ is the number of feature words and $s$ is the number of the items contained in the item set $N$). We use latent semantic analysis [@zhong2010novel] to map the documents into a lower-dimensional latent semantic space, obtaining an approximate matrix $X_M$ as follows $$\begin{aligned}
\label{eq1}
X_M={U_M}{S_M}{V_M}^T.\end{aligned}$$

### Predicting rating

According to the approximate document-word frequency matrix $X_M$, we calculate the similarity between item $I_j$ and item $I_k$ in (\[eq2\]) to obtain the similarity matrix for $j,k=1,2,\ldots,s$. To predict the rating of user $U_a$ for item $I_j$, we select the items which have been rated by user $U_a$ to form a reference set $I_c^*$. According to the similarity matrix and the reference set, we can predict the rating in (\[eq3\]) [@ghauth2010learning] $$\begin{aligned}
\label{eq2}
sim(I_j,I_k)=\cos(I_j,I_k)=\frac{w_k\cdot w_j}{\lVert w_k\rVert\lVert w_j\rVert},\\
\label{eq3}
p_{a,j} = \frac{\sum\limits_{k\ne j,I_k\in I_c^*}sim(I_j,I_k)p_{a,k}}{\sum\limits_{k\ne j,I_k\in I_c^*}sim(I_j,I_k)},\end{aligned}$$ where $w_j$ is the $j^{th}$ column of $X_M$ and $p_{a,j}$ is the rating of user $U_a$ for item $I_j$.
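A minimal sketch of Eqs. (\[eq1\])-(\[eq3\]) in code may help fix ideas; the toy data, the rank $M$, and the helper names below are illustrative only and are not part of the original algorithm description:

```python
import numpy as np

# Sketch of the content-based predictor. X is a toy document-word frequency
# matrix; a rank-M truncated SVD gives X_M, whose columns are the latent
# item vectors entering the cosine similarity of Eq. (2).
X = np.random.rand(200, 30)                  # 200 feature words, 30 items
U, S, Vt = np.linalg.svd(X, full_matrices=False)
M = 10
X_M = U[:, :M] @ np.diag(S[:M]) @ Vt[:M]     # Eq. (1): X_M = U_M S_M V_M^T

def predict_rating(X_M, ratings_a, j):
    """Eq. (3): predict user a's rating of item j from items a has rated."""
    w_j = X_M[:, j]
    num = den = 0.0
    for k, p_ak in ratings_a.items():        # reference set I_c*
        if k == j:
            continue
        w_k = X_M[:, k]
        sim = (w_k @ w_j) / (np.linalg.norm(w_k) * np.linalg.norm(w_j))
        num += sim * p_ak
        den += sim
    return num / den if den else 0.0

print(predict_rating(X_M, {0: 4.0, 2: 3.0, 4: 5.0}, 1))
```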
Rating prediction of collaborative filtering
--------------------------------------------

### Selecting the nearest neighbor based on WSBM

There are two commonly used methods to select the nearest neighbors: one is to select neighbors whose similarity to the target user is larger than a threshold [@kim2007effective], the other is to search for the neighbors who have the greatest $N$ similarities to the target user [@bobadilla2011framework]. Neither method is universal across datasets, because the best recommendation is achieved at a different threshold or $N$ on each dataset. As a community is considered to have a high correlation among users in social networks, we consider that users may have similar interests if they purchase the same things. So we define the users as the vertexes of the network; if two users share a purchase, an edge is established between them, and the numbers of shared purchases are defined as the weights of the edges. After constructing a weighted network, we adopt the WSBM to detect the community structure and to find a vertex's nearest neighbors in the community. Without relying on the similarities, this method is more universal across datasets. Based on the stochastic block model, the WSBM [@aicher2013adapting] defines the distribution of an edge in two parts: the existence distribution and the weight distribution. For vertexes in the same community, to avoid heavy-tailed degree distributions, the model revises the probability of an edge's connection between one vertex and another according to the vertex degrees. In the WSBM, we define $A$ as the adjacency matrix of the weighted network $N$, and $A_{ij}$ as the weight of the edge between the vertexes $i$ and $j$. The integer $K$ denotes a fixed number of latent communities, and the vector $Z_{n\times 1}$ contains the community labels of each vertex. The WSBM defines a “bundle” of edges that run between each pair of communities ($kk'$) and assigns an edge existence parameter to each edge bundle $kk'$, which we represent collectively by the matrix $\theta_{K\times K}$. The existence probability of an edge $A_{ij}$ is given by the parameter $\theta_{Z_i Z_j}$ that depends only on the community memberships of vertexes $i$ and $j$. Therefore, the model is fully given by $\theta_{K\times K}$ and $Z_{n\times 1}$ [@aicher2014learning]. The WSBM models an edge's existence as a Bernoulli or binary random variable and an edge'
{ "pile_set_name": "ArXiv" }
null
---
abstract: 'Without using conformal transformation, a simple type of five-dimensional $f(R)$-brane model is linearized directly in its higher-order frame. In this paper, the linearization is conducted in the equation of motion approach. We first derive all the linear perturbation equations without specifying a gauge condition. Then, by taking the curvature gauge, we derive the master equations of the linear perturbations. We show that these equations are equivalent to those obtained in the quadratic action approach \[Phys. Rev. D 95 (2017) 104060\], except for the vector sector, in which a constraint equation can be obtained in the equation of motion approach but is absent in the quadratic action approach. Our work sets an example of how to linearize higher-order theories without using conformal transformation, and might be useful for studying more complicated theories.'
address:
- 'School of Science, Xi’an Jiaotong University, Xi’an 710049, People’s Republic of China'
- 'School of Physical Science and Technology, Southwest University, Chongqing 400715, People’s Republic of China'
- |
    Institute of Theoretical Physics, Lanzhou University,\
    Lanzhou 730000, People’s Republic of China
author:
- Yuan Zhong
- Ke Yang
- 'Yu-Xiao Liu'
title: 'Linearization of a warped $f(R)$ theory in the higher-order frame II: the equation of motion approach'
---

$f(R)$ gravity, linear perturbations, warped extra dimensions

Introduction
============

In the last two decades, warped extra dimensions have been applied to explain the large hierarchy between the electroweak scale and the Planck scale [@RandallSundrum1999; @CabrerGersdorffQuiros2010; @CabrerGersdorffQuiros2011; @RaychaudhuriSridhar2016], the splitting of fermion masses [@GherghettaPomarol2000], the reproduction of Newtonian gravity on a lower-dimensional hypersurface [@RandallSundrum1999a; @Gremm2000; @DeWolfeFreedmanGubserKarch2000; @CsakiErlichHollowoodShirman2000], and recently the LHC diphoton excess [@MegiasPujolasQuiros2016] and LHCb anomalies [@MegiasPanicoPujolasQuiros2016] (see [@Quiros2015; @Ponton2012; @Liu2017] for recent reviews on the theory and phenomenology of warped spaces). In one type of warped extra-dimensional model, our world is described as a four-dimensional topological domain wall generated by a background scalar field in Einstein’s gravity [@RubakovShaposhnikov1983; @Gremm2000; @DeWolfeFreedmanGubserKarch2000; @CsakiErlichHollowoodShirman2000]. But it is also possible to generate pure geometric domain wall solutions in $f(R)$ theory [@ZhongLiu2016], where the gravitational Lagrangian is an arbitrary function of the scalar curvature (see [@Starobinsky1980; @BarrowOttewill1983; @NojiriOdintsov2003d; @CarrollDuvvuriTroddenTurner2004; @CapozzielloCarloniTroisi2003; @NojiriOdintsov2006] for early literature and [@SotiriouFaraoni2010; @DeTsujikawa2010] for comprehensive reviews on $f(R)$ theory and its cosmological phenomenology). In this case, the domain wall is non-topological, because it connects two equivalent anti-de Sitter vacua. More $f(R)$ domain wall solutions can be found in Refs. [@ZhongLiu2016; @ParryPichlerDeeg2005; @AfonsoBazeiaMenezesPetrov2007; @HoffdaSilvaDias2011; @LiuZhongZhaoLi2011; @BazeiaMenezesPetrovSilva2013; @BazeiaLobaoMenezesPetrovSilva2014; @XuZhongYuLiu2015; @YuZhongGuLiu2015]. It is both important and interesting to study the linearization of domain wall solutions in a warped $f(R)$ gravity.
This is because the linearization not only tells us whether a solution is stable against small metric perturbations, but also offers the spectra of the graviton and the radion, which are important for phenomenological applications. As a higher-order curvature theory, $f(R)$ gravity might have new features compared with Einstein's theory. But a direct linearization of an $f(R)$ domain wall is not easy, not only because one needs to carefully eliminate the residual gauge degrees of freedom, but also because the equation of motion in $f(R)$ gravity contains derivatives up to fourth order. In the literature, one usually rewrites the fourth-order $f(R)$ theory (the higher-order frame) as a second-order Einstein-scalar theory (the Einstein frame) by introducing a proper conformal transformation [@BarrowCotsakis1988; @Maeda1989; @Wands1994; @CapozzielloRitisMarino1997; @FaraoniGunzigNardone1999]. Therefore, to linearize an $f(R)$ domain wall, one can first do the conformal transformation and then conduct the linearization in the Einstein frame [@ZhongLiu2016]. To the authors' knowledge, there are only a few works that directly confront the linearization of $f(R)$ theory without using conformal transformation (see [@HwangNoh1996] for an example in $f(R)$ cosmology). If two frames are equivalent, the perturbation equations must be frame independent. But this conclusion is not obvious. Most importantly, when more general higher-order curvature theories are considered, the conformal transformation might not be convenient any more, and a direct analysis in the higher-order frame is inevitable.

The aim of this work is to confront the linearization of $f(R)$ gravity with a warped geometry in the higher-order frame. In a previous work [@ZhongLiu2017], the linearization of the $f(R)$ domain wall was conducted in the quadratic action approach. In this paper, we redo the task in the equation of motion approach, and compare the results with those of Ref. [@ZhongLiu2017].

This paper is organized as follows. In the next section, we briefly review the model and specify some conventions. The linearization of warped $f(R)$ domain walls is conducted in Sec. \[sec3\], where the metric perturbation is decomposed into scalar, tensor and vector parts. The gauge degrees of freedom will not be fixed until Sec. \[sec4\], where the curvature gauge will be applied to simplify the scalar perturbation equation. The result is summarized in Sec. \[secSum\].

The model
=========

In this paper, we consider a five-dimensional metric $f(R)$ gravity $$\begin{aligned}
\label{action'}
S=\frac{1}{2\kappa_5^2}\int d^5x\sqrt {-g}f(R).\end{aligned}$$ The corresponding Einstein field equations are $$\begin{aligned}
\label{eqEE}
R_{MN}f_R-\frac12g_{MN}f(R)+(g_{MN}\hat{\square}^{(5)} -\nabla_M\nabla_N)f_R=0,\end{aligned}$$ where $\hat{\square}^{(5)}=g^{MN}\nabla_{M}\nabla_{N}$ denotes the five-dimensional d’Alembertian operator defined by the metric $g_{MN}$ and the covariant derivative $\nabla_M$. The capital letters $M,N=0,1,2,3,5$ represent the bulk indices, and $f_R\equiv df(R)/dR$. A warped space is described by the following metric: $$\begin{aligned}
\label{metric}
ds^2=a^2(r)\eta_{MN}dx^M dx^N,\end{aligned}$$ where $\eta_{MN}=\textrm{diag}(-1,1,1,1,1)$ and $a(r)$ is the warp factor, which depends only on the extra dimension $r\equiv x^5$. Given the line element (\[metric\]), it is easy to write the expressions of the connection, the Ricci tensor, the Ricci scalar and the last two terms in Eq. (\[eqEE\]):
$$\begin{aligned}
\label{backquantities1}
\Gamma^P_{MN}&=& 2\frac{\delta^P_{(M}\partial_{N)}a}{a} - \eta_{MN}\frac{\partial^P a}{a},\\
R_{MN}&=&6\frac{\partial_{M}a\partial_{N}a}{a^2} -3\frac{\partial_{M}\partial_{N}a}{a} -2\eta_{MN}\left(\frac{a'}{a}\right)^2 -\eta_{MN}\frac{a''}{a},\\
R&=&-4 a^{-2}\left[\left(\frac{a'}{a}\right)^2+2\frac{a''}{a}\right],\\
\nabla_M\nabla_Nf_R &=& \partial_M\partial_N
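The excerpt breaks off above, but the background quantities already displayed can be verified symbolically. A minimal sketch using sympy, assuming the conformally flat metric of Eq. (\[metric\]) and standard curvature conventions (an overall sign flip would signal a different convention):

```python
import sympy as sp

# Symbolic check of R(r) above for ds^2 = a(r)^2 eta_MN dx^M dx^N in 5D.
r = sp.symbols('r')
x0, x1, x2, x3 = sp.symbols('x0 x1 x2 x3')
coords = [x0, x1, x2, x3, r]
a = sp.Function('a')(r)
g = a**2 * sp.diag(-1, 1, 1, 1, 1)
ginv = g.inv()
n = 5

# Christoffel symbols Gamma^P_MN
Gamma = [[[sp.simplify(sum(ginv[P, Q] * (sp.diff(g[Q, M], coords[N])
                                         + sp.diff(g[Q, N], coords[M])
                                         - sp.diff(g[M, N], coords[Q]))
                           for Q in range(n)) / 2)
           for N in range(n)] for M in range(n)] for P in range(n)]

# Ricci tensor: R_MN = d_P Gamma^P_MN - d_N Gamma^P_MP
#                      + Gamma^P_PQ Gamma^Q_MN - Gamma^P_NQ Gamma^Q_MP
def ricci(M, N):
    return (sum(sp.diff(Gamma[P][M][N], coords[P]) for P in range(n))
            - sum(sp.diff(Gamma[P][M][P], coords[N]) for P in range(n))
            + sum(Gamma[P][P][Q] * Gamma[Q][M][N]
                  for P in range(n) for Q in range(n))
            - sum(Gamma[P][N][Q] * Gamma[Q][M][P]
                  for P in range(n) for Q in range(n)))

R = sp.simplify(sum(ginv[M, N] * ricci(M, N)
                    for M in range(n) for N in range(n)))
ap, app = sp.diff(a, r), sp.diff(a, r, 2)
# Compare against R = -4 a^{-2} [ (a'/a)^2 + 2 a''/a ]; expect 0.
print(sp.simplify(R + 4 / a**2 * ((ap / a)**2 + 2 * app / a)))
```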
{ "pile_set_name": "ArXiv" }
null
---
abstract: 'Given a state-of-the-art deep neural network classifier, we show the existence of a universal (image-agnostic) and very small perturbation vector that causes natural images to be misclassified with high probability. We propose a systematic algorithm for computing universal perturbations, and show that state-of-the-art deep neural networks are highly vulnerable to such perturbations, albeit being quasi-imperceptible to the human eye. We further empirically analyze these universal perturbations and show, in particular, that they generalize very well across neural networks. The surprising existence of universal perturbations reveals important geometric *correlations* among the high-dimensional decision boundary of classifiers. It further outlines potential security breaches with the existence of single directions in the input space that adversaries can possibly exploit to break a classifier on most natural images.[^1]'
author:
- |
    Seyed-Mohsen Moosavi-Dezfooli[^2][^3]\
    [seyed.moosavi@epfl.ch]{}
- |
    Alhussein Fawzi\
    [alhussein.fawzi@epfl.ch]{}
- |
    Omar Fawzi[^4]\
    [omar.fawzi@ens-lyon.fr]{}
- |
    Pascal Frossard\
    [pascal.frossard@epfl.ch]{}
bibliography:
- 'bibliography.bib'
title: Universal adversarial perturbations
---

Acknowledgments {#acknowledgments .unnumbered}
---------------

We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Tesla K40 GPU used for this research.

[^1]: To encourage reproducible research, the code is available on [GitHub](https://github.com/LTS4/universal). Furthermore, a video demonstrating the effect of universal perturbations on a smartphone can be found [here](https://youtu.be/jhOu5yhe0rc).

[^2]: The first two authors contributed equally to this work.

[^3]: École Polytechnique Fédérale de Lausanne, Switzerland

[^4]: ENS de Lyon, LIP, UMR 5668 ENS Lyon - CNRS - UCBL - INRIA, Université de Lyon, France
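The systematic algorithm mentioned in the abstract is not reproduced in this excerpt; the following is a hedged sketch of its natural outer loop. The helpers `classify` and `minimal_perturbation` (a DeepFool-style inner attack) are assumptions supplied by the user, not part of the original text:

```python
import numpy as np

def project_l2(v, xi):
    # Project the aggregated perturbation onto an l2 ball of radius xi.
    n = np.linalg.norm(v)
    return v if n <= xi else v * (xi / n)

def universal_perturbation(images, classify, minimal_perturbation,
                           xi=10.0, target_fooling=0.8, max_epochs=10):
    """Aggregate per-image minimal perturbations into one universal vector v."""
    v = np.zeros_like(images[0])
    for _ in range(max_epochs):
        for x in images:
            if classify(x + v) == classify(x):     # v does not yet fool x
                dv = minimal_perturbation(x + v)   # smallest extra nudge
                v = project_l2(v + dv, xi)
        fooled = np.mean([classify(x + v) != classify(x) for x in images])
        if fooled >= target_fooling:
            break
    return v
```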
{ "pile_set_name": "ArXiv" }
null
---
abstract: 'We derive expressions for the leading-order far-field flows generated by mobile colloids trapped at planar fluid-fluid interfaces. We consider both externally driven colloids and active colloids (swimmers) either adjacent to or adhered to the interface. In the latter case, we assume a pinned contact line. The Reynolds and capillary numbers are assumed small, in line with typical colloidal systems involving air- or alkane-aqueous interfaces. At clean (surfactant-free) interfaces, the hydrodynamic modes are essentially a restricted set of the usual Stokes multipoles in a bulk fluid. To leading order, driven colloids simply exert Stokeslets parallel to the interface, while active colloids drive different kinds of fluid motion depending on their trapped configuration. We then consider how these modes are altered by the presence of an incompressible surfactant layer, which occurs at high Marangoni numbers. This limiting behavior is typical for colloidal-scale systems at small capillary numbers, even when scant surfactant is present. Compared to a clean interface, we find that incompressibility substantially weakens flow directed normal to the interface. Interestingly, for both driven and active colloids, we find that the leading-order flow normal to the interface is associated with colloid asymmetry with respect to the interfacial plane. Flow parallel to the interface, however, is not weakened. Moreover, surface-viscous stresses, if present, potentially generate very long-ranged flow on the interface itself and into the surrounding fluids. We examine the limiting forms of such flows. Our results have important implications for advective mass transport enhancement near fluid boundaries.'
author:
- 'Nicholas G. Chisholm'
- 'Kathleen J. Stebe'
bibliography:
- 'main.bib'
title: Driven and active colloids at fluid interfaces
---

Introduction {#sec:intro}
============

Fluid-fluid interfaces provide a rich setting for driven and active colloidal systems. Here, a “driven” colloid moves through a fluid due to external forces or torques, for example, a magnetic bead forced by a magnetic field. “Active” colloids, on the other hand, self-propel by consuming a fuel source. For example, motile bacteria are active colloids that self-propel by the rotation of one or more flagella. Autophoretic nanorods or Janus particles are other examples of commonly studied active colloids. These catalytic swimmers self-propel via the generation of chemical gradients that produce a propulsive layer of apparent fluid slip along the colloid surface.

Past work on colloids adhered to interfaces has focused on their usefulness as Brownian rheological probes when embedded in biological lipid membranes or surfactant monolayers, where colloid motion is, in this case, “driven” by thermal fluctuations. For example, colloidal probes have been used to measure the surface viscosity of a fluid interface as a function of surfactant concentration [@Sickert2007]. Such measurements require theoretical models of the mobility of the colloid. @Saffman1975 analytically computed the mobility of a flat disk embedded in a viscous, incompressible membrane separating two semi-infinite subphases in the limit of large Boussinesq number, a dimensionless number comparing the membrane viscosity to that of the surrounding fluid. This calculation was extended to moderate Boussinesq numbers by @Hughes1981 and to subphases of finite depth by @Stone1998.
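For orientation, a back-of-the-envelope sketch of the commonly quoted Saffman-Delbrück mobility result just mentioned may be useful. This is our hedged reading of the classical large-Boussinesq-number limit, with purely illustrative parameter values; the prefactors should be checked against the original papers before any quantitative use:

```python
import numpy as np

# Assumed form: D = kT / (4 pi eta_m) * [ln(L_SD / a) - gamma], where
# eta_m is the membrane surface viscosity (Pa s m), L_SD = eta_m / (eta1 + eta2)
# is the Saffman-Delbrueck length, and the disk radius a << L_SD.
def saffman_delbruck_D(a, eta_m, eta1, eta2, kT=4.11e-21):
    gamma = 0.5772156649            # Euler's constant
    L_SD = eta_m / (eta1 + eta2)
    return kT / (4.0 * np.pi * eta_m) * (np.log(L_SD / a) - gamma)

# Illustrative numbers: 1-um disk, lipid-like membrane over water and air.
print(saffman_delbruck_D(a=1e-6, eta_m=1e-8, eta1=1e-3, eta2=0.0))
```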
Later theoretical work quantified the response of a linearly viscoelastic membrane to an embedded point force [@Levine2002]. The effects of particle anisotropy have been quantified in the context of the mobility of a needle embedded in an incompressible Langmuir monolayer overlying a fluid of varying depth [@Fischer2004]. Finally, the impact of interfacial compressibility and surfactant solubility on the drag on a disk embedded in an interface above a thin film of fluid has also been quantified [@Elfring2016]. The dynamics of (three-dimensional) colloids that protrude into the surrounding fluid phases has also been characterized. Analytical and numerical analyses of the mobility of spheres [@Fischer2006; @Pozrikidis2007; @Stone2015; @Doerr2015; @Doerr2016] and thin filaments [@Fischer2006] can be found in the literature for clean and surfactant-laden interfaces in the limit of small capillary number, a dimensionless ratio of characteristic viscous stresses to interfacial tension.

Active colloids are also strongly influenced by fluid interfaces. Motile bacteria have been the focus of much research in this context due to their relevance to human health and the environment. Seminal work by @Lauga2006 showed, via a resistive-force theory model, that the circular trajectories of *E. coli* swimming near a solid boundary are caused by hydrodynamic interaction with the boundary. Similar results are found for free surfaces [@DiLeonardo2011], although the direction of circling is reversed. These theoretical models also predict that there is always an induced velocity toward the boundary, effectively trapping the bacterium at the surface. More detailed boundary element simulations have shown the existence of stable trajectories of bacteria near solid boundaries, where the distance from the boundary and the curvature of the trajectory reach a steady state [@Giacche2010]. In contrast, similar calculations show only unstable trajectories for swimmers near free surfaces; the swimmer inevitably crashes into the boundary unless it is initially angled steeply enough away to escape it altogether [@Pimponi2016]. Finally, @Shaik2017 analytically computed the motion of a spherical “squirmer,” a common model for microorganism locomotion, near a weakly deformable interface.

Others have investigated the motion of autophoretic swimmers at fluid interfaces. Gold-platinum catalytic nanorods are highly motile at aqueous-alkane interfaces, and their rate of rotational diffusion can be used to measure interfacial shear viscosity [@Dhar2006]. Further experiments have shown that partially-wetted, self-propelled Janus particles at air-water interfaces move along circular trajectories with markedly decreased rotational diffusion as compared to their motion in a bulk fluid [@Wang2017]. Theoretical analysis has yielded analytical predictions of the linear and angular velocities of an autophoretic sphere straddling a surfactant-free interface with a freely-slipping contact line [@Malgaretti2016]. This work has supplied valuable information about the influence of fluid interfaces on active colloid locomotion.

Rather than developing detailed models for specific types of swimmers, an alternative approach is to use far-field models that capture universal features of colloid locomotion. For active colloids, this approach has been used to compute swimming trajectories near solid boundaries [@Spagnolie2012] and fluid interfaces [@Lopez2014].
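Such far-field models are built from Stokes singularities. As a point of reference, a minimal sketch evaluating the free-space Stokeslet (point force), the basic multipole whose interfacial counterparts are derived later in the paper by adding image singularities:

```python
import numpy as np

# Free-space Stokeslet (Oseen tensor): velocity at x due to a point force F
# at the origin in a fluid of viscosity mu. This is only the bulk building
# block; interface-corrected versions add image systems to this kernel.
def stokeslet(x, F, mu=1.0):
    r = np.linalg.norm(x)
    return (F / r + x * (x @ F) / r**3) / (8.0 * np.pi * mu)

x = np.array([2.0, 0.0, 1.0])
F = np.array([1.0, 0.0, 0.0])      # force parallel to the interfacial plane
print(stokeslet(x, F))             # decays as 1/r, the slowest Stokes mode
```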
Such methods are accurate when the colloid is separated from the boundary by a few body lengths [@Spagnolie2012]. Recent work has employed far-field models of active colloids to study the trapping of microswimmers near surfactant-laden droplets [@Desai2018] and the density distribution of bacteria near fluid interfaces [@Ahmadzadegan2019]. While active and driven colloids near boundaries have been the subject of past theoretical analysis, the focus has largely been on computing drag (on driven colloids) or swimming trajectories (of active colloids) and how they are influenced by the boundary. The actual flows generated by such colloids at interfaces, and the implications of these flows, have received less attention.

Aside from trapping due to hydrodynamic interactions, active colloids may be trapped at fluid interfaces by contact-line pinning (a phenomenon unique to fluid interfaces) in a variety of configurations, greatly affecting their motility and their induced fluid flows. For instance, recent work suggests that contact-line pinning traps *Pseudomonas aeruginosa* in a variety of different and persistent orientations at aqueous-hexadecane interfaces, leading to distinct motility patterns [@Deng2020]. Bacteria may also become adhered to passive colloids already attached to the interface, towing them as cargo [@Vaccari2018]. The hydrodynamic implications of such trapped states have not been discussed.

In this article, we use the multipole expansion method to examine the hydrodynamic modes generated by driven and active colloids at fluid interfaces in a variety of different trapped states. We focus on the modes that dominate in the far field, which may be observable in experiment. We concentrate on the case where the colloid is physically adhered to a fluid interface with a pinned contact line that constrains its motion. We also consider the case where the colloid is adjacent to the interface but not adhered, as might occur due to hydrodynamic trapping.

This article is organized as follows. In \[sec:governing-eqs\], we develop the governing equations for the fluid motion due to colloids at two types of fluid interfaces: a clean, surfactant-free interface and an interface that is rendered incompressible by adsorbed surfactant. In \[sec:reciprocal-relations\], we develop a reciprocal relation that applies to two fluids in Stokes flow separated by either of these types of interface. In \[sec:clean-interfaces\], we develop a multipole expansion appropriate for colloids trapped at a clean interface, and we discuss the leading-order modes that are produced in the driven and active cases. We then compare these results to analogous results at an incompressible interface in \[sec:incompressible-interfaces\]. Finally, we conclude in \[sec:conclusion\] by discussing the implications of our results and opportunities for future research.

Governing equations {#sec:governing-eqs}
===================

Equations of motion
-------------------

We consider a colloid adhered to a planar interface between two immiscible Newtonian fluids of viscosities $\mu_1$ and $\mu_2$, which are quiescent in the far field and together form an unbounded domain. We assume the resulting three-phase contact line is “pinned”, that is,
{ "pile_set_name": "ArXiv" }
null
---
abstract: 'A consistent formalism and analysis of *exactly solvable radial reflectionless potentials with barriers*, which on the spatial semiaxis of the radial coordinate $r$ have one hole and one barrier, after which they fall off monotonically to zero with increasing $r$, is presented. It is shown that, in their shape, such potentials look qualitatively like the radial scattering potentials in the two-body description of a collision between a particle and a nucleus, or the radial decay potentials in the two-body description of the decay of a compound spherical nuclear system. An analysis shows that the particle propagates without the slightest reflection and without a change of its angle of motion (or tunneling) during its scattering in the spherically symmetric field of a nucleus with such a radial interaction potential, i.e., the nuclear system with such an interaction potential appears *invisible* to the incident particle at any kinetic energy. An approach for constructing a hierarchy of such reflectionless potentials is proposed; the wave functions of the first potentials of this hierarchy are found.'
author:
- |
    Sergei P. Maydanyuk [^1]\
    *Institute for Nuclear Research, National Academy of Sciences of Ukraine*\
    *prosp. Nauki, 47, Kiev-28, 03680, Ukraine*
bibliography:
- 'Ref\_IMe.bib'
title: Invisible nuclear system
---

[**PACS numbers:**]{} 11.30.Pb, 03.65.-w, 12.60.Jv, 03.65.Xp, 03.65.Fd

[**Keywords:**]{} invisible nucleus, supersymmetry, exactly solvable model, reflectionless radial potentials, inverse power potentials, potentials of Gamov's type, SUSY-hierarchy

Introduction \[sec.1\]
======================

Interest in the methods of supersymmetric quantum mechanics (SUSY QM) has been increasing every year. Initially constructed for the description of a symmetry between bosons and fermions in field theories, these methods have, in the course of their development, formed a completely independent branch of quantum mechanics [@Cooper.1995.PRPLC]. Today, the methods of SUSY QM are a powerful tool for the calculation and analysis of spectral characteristics of quantum systems; they have proved extremely effective in obtaining new types of *exactly solvable potentials* and in the analysis of their properties, and in giving a clear explanation of phenomena that are unusual from the point of view of common sense, such as *resonant tunneling*; the *reflectionless penetration* (or *absolute transparency*) of potentials, which differs from resonant tunneling in that it exists over the whole energy spectrum, where the reflection coefficient is not merely minimal but equal to zero; *enhancement of barrier permeability* and *breaking of tunneling symmetry in opposite directions during the propagation of multiple particles*; *absolute reflection at above-barrier energies*; and *bound states in the continuous energy spectra* of systems [@Zakhariev.1993.PHLTA; @Zakhariev.1994.PEPAN]. The number of papers has been increasing every year.
Here, I should like to note an excellent review [@Cooper.1995.PRPLC]; the intensively developed methods of *Nonlinear (also Polynomial, $N$-fold) supersymmetric quantum mechanics*; the methods of *shape invariant potentials* with different types of parameter transformations; the methods for describing the *self-similar potentials* studied by *Shabat* [@Shabat.1992.INPEE] and *Spiridonov* [@Spiridonov.1992.PRLTA; @Spiridonov.hep-th/0302046] and connected with $q$-supersymmetry; methods based on other types of potential deformations and symmetries (for example, see [@Gomez-Ullate.quant-ph/0308062]); and non-stationary approaches for describing the properties and behavior of quantum systems [@Samsonov.2002.Proc_IM]. One can also note papers that unified the methods of supersymmetry with the methods of the inverse problem of quantum mechanics, and here I should like to mention the nice monograph [@Chadan.1977] and the reviews [@Zakhariev.1994.PEPAN; @Zakhariev.1999.PEPAN] (with the literature lists therein). Essential progress has been achieved in the development of the methods of SUSY QM in spaces with different geometries [@Samsonov.1997.RusPhysJ] and in non-commutative spaces [@Ghosh.2005.EPJC]. Having a powerful and universal apparatus, the methods of SUSY QM now find application in a number of tasks of field theories, in QCD, in the development of different models of quantum gravity and cosmology, and elsewhere.

However, in this paper I propose to pay attention to the reflectionless phenomenon in some types of spherically symmetric quantum systems (one can also note the development of the SUSY QM formalism for various scattering problems). We find a new type of radial exactly solvable reflectionless potential, which in its shape has one hole and one barrier, after which it falls off monotonically to zero with increasing radial coordinate $r$ [@Maydanyuk.2005.APNYA]. Qualitatively, such a potential looks like the scattering potentials in the two-body description of a collision between a particle and a spherically symmetric nucleus, or the decay potentials in the two-body description of the decay of a compound spherical nuclear system. An analysis has shown that the particle propagates without the slightest reflection and without a change of its angle of motion (or tunneling) in its scattering in the spherically symmetric field of a nucleus with such a radial interaction potential, i.e., the nuclear system with such a potential appears *invisible* to the incident particle at any kinetic energy. This paper is devoted to an analysis of such radial reflectionless potentials.

SUSY-interdependence between spectral characteristics of partner potentials in the radial problem \[sec.2\]
============================================================================================================

Darboux transformations \[sec.2.1\]
-----------------------------------

Let us consider the formalism of Darboux transformations in the problem of the motion of a particle with mass $m$ in a spherically symmetric potential field (also see [@Andrianov.hep-th/9404061; @Bagrov.quant-ph/9804032]).
The spherical symmetry of the potential allows one to reduce this problem to a one-dimensional problem for the motion of this particle in the radial field $V(r)$, defined on the positive semiaxis of $r$, where the wave function of such a system has the form: $$\psi(r, \theta, \varphi) = \displaystyle\frac{\chi_{nl}(r)}{r} Y_{lm} (\theta, \varphi), \label{eq.2.1.1}$$ and the radial Schrödinger equation reads: $$H \chi_{nl}(r) = -\displaystyle\frac{\hbar^{2}}{2m} \displaystyle\frac{d^{2} \chi_{nl}(r)}{dr^{2}} + \biggl(V_{n}(r) + \displaystyle\frac{l(l+1) \hbar^{2}}{2mr^{2}} \biggr) \chi_{nl}(r) = E_{n} \chi_{nl}(r) \label{eq.2.1.2}$$ and differs from the one-dimensional Schrödinger equation by the presence of a centrifugal term. One can reduce this equation to the one-dimensional one by the replacement: $$\bar{V}_{n}(r) = V_{n}(r) + \displaystyle\frac{l(l+1) \hbar^{2}}{2mr^{2}}. \label{eq.2.1.3}$$ As in the one-dimensional case, one can introduce the operators $A_{1}$ and $A_{1}^{+}$: $$\begin{array}{ll} A_{1} = \displaystyle\frac{\hbar}{\sqrt{2m}} \displaystyle\frac{d}{dr} + W_{1}(r), & A_{1}^{+} = -\displaystyle\frac{\hbar}{\sqrt{2m}} \displaystyle\frac{d}{dr} + W_{1}(r), \end{array} \label{eq.2.1.4}$$ where $W_{1}(r)$ is a function defined on the positive semiaxis $0 \le r < +\infty$ and continuous on it, with the exception of some possible points of discontinuity. Then one can establish an interdependence between two Hamiltonians for the propagation of the particle with mass $m$ in the fields $\bar{V}_{1}(r)$ and $\bar{V}_{2}(r)$: $$\begin{array}{l} H_{1} = A_{1}^{+} A_{1} + C_{1} = -\displaystyle\frac{\hbar^{2}}{2m} \displaystyle\frac{d^{2}}{dr^{2}} + \bar{V}_{1}(r), \\ H_{2} = A_{1} A_{1}^{+} + C_{1} = -\displaystyle\frac{\hbar^{2}}{2m} \displaystyle\frac{d^{2}}{dr^{2}} + \bar{V}_{2}(r), \end{array} \label{eq.2.1.5}$$ where each potential is expressed through the one function $W_{1}(r)$: $$\begin{array}{ll} \bar{V
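Although the excerpt is cut short here, the partner construction above can be sanity-checked numerically. A toy sketch in units $\hbar = 2m = 1$; for transparency it uses the textbook one-dimensional pair generated by $W = \tanh x$ (the radial problem of this section adds a Dirichlet condition at $r=0$ but is otherwise identical):

```python
import numpy as np

# Partner potentials from W(x) = tanh(x): V1 = W^2 - W' = 1 - 2 sech^2 x,
# V2 = W^2 + W' = 1. Their spectra must coincide except for the E = 0
# ground state annihilated by A (up to small box-discretization effects).
N, L = 2000, 20.0
x = np.linspace(-L, L, N)
h = x[1] - x[0]
W = np.tanh(x)
V1 = W**2 - 1.0 / np.cosh(x)**2
V2 = W**2 + 1.0 / np.cosh(x)**2

def levels(V, k=5):
    # Dirichlet finite-difference Hamiltonian H = -d^2/dx^2 + V(x).
    H = (np.diag(2.0 / h**2 + V)
         - np.diag(np.ones(N - 1) / h**2, 1)
         - np.diag(np.ones(N - 1) / h**2, -1))
    return np.linalg.eigvalsh(H)[:k]

print(levels(V1))   # ~[0, 1+..., ...]: zero mode plus discretized continuum
print(levels(V2))   # approximately the same levels, zero mode absent
```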
{ "pile_set_name": "ArXiv" }
null
---
abstract: 'We prove in this paper that the weighted volume – or generating function – of the set of integral transportation matrices between two integral histograms $r$ and $c$ of equal sum is a positive definite kernel of $r$ and $c$ when the set of considered weights forms a positive definite matrix. The computation of this quantity, despite being the subject of a significant research effort in algebraic statistics, remains an intractable challenge for histograms of even modest dimensions. We propose an alternative kernel which, rather than considering all matrices of the transportation polytope, only focuses on a sub-sample of its vertices known as its Northwestern corner solutions. The resulting kernel is positive definite and can be computed with a number of operations $O(R^2d)$ that grows linearly in the dimension $d$, where $R^2$ – the total number of sampled vertices – is a parameter that controls the complexity of the kernel.'
address: 'Graduate School of Informatics, Kyoto University'
author:
- Marco Cuturi
title: Positivity and Transportation
---

Introduction
============

Suppose that among $30$ students in a classroom, $7$ and $23$ have light and dark colored eyes respectively. You are also told that $12$ of them have light hair while $18$ have dark hair. What are all the possible populations of the 4 subgroups of students with light/light, dark/dark, light/dark and dark/light eyes and hair color respectively? Such quantities can be arranged in a $2\times 2$ matrix whose row sum vector must be equal to $[7,23]^T$ and column sum vector must be equal to $[12,18]$, $\smallmat{3&4\\9&14}$ for instance, and more generally *any* integer values in the dots below that satisfy these constraints: $$\bordermatrix[{[]}]{& 12 & 18 \cr 7 & \bullet & \bullet\cr 23 & \bullet & \bullet \cr }$$

Alternatively, suppose that two bakeries in a small village produce daily $7$ and $23$ loaves of bread each, while two restaurants in the same area need $12$ and $18$ loaves respectively to serve their customers every day. What are all the possible morning delivery plans of bread loaves that the two bakeries and restaurants can agree upon? These seemingly trivial sets of matrices coincide, and are known in the statistics and optimization literature as the sets of *contingency tables* and *transportation plans* respectively.

In statistics, the problem of enumerating all such tables arises naturally in hypothesis testing. Suppose that by entering the aforementioned classroom you observe that the actual repartition of these groups is $\smallmat{5&2\\7&16}$. Such an observation intuitively suggests that eye and hair color are related, but how confident should you be about this statement? In the $2\times 2$ case presented above, the Fisher exact test [@yates1934contingency] answers that question by computing the probabilities of *all* possible table outcomes if one assumes that they have been generated as the product of independent Bernoulli variables with laws $p_1=7/30$ and $p_2=12/30$. By comparing all these probabilities with that of the observed table, we can conclude how reliable an independence hypothesis would be.

In optimization, given a $2\times 2$ cost matrix which describes the cost (in gas, calories or time) of bringing a loaf from each bakery to each restaurant, finding the delivery plan with minimal cost is known as a transportation problem.
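Before moving on, the classroom example above can be enumerated exhaustively; a minimal sketch, exploiting the fact that in the $2\times 2$ case the top-left entry determines the whole table:

```python
# All 2x2 integer tables with row sums (7, 23) and column sums (12, 18).
r, c = (7, 23), (12, 18)
tables = []
for x11 in range(min(r[0], c[0]) + 1):
    x12, x21 = r[0] - x11, c[0] - x11
    x22 = r[1] - x21
    if min(x12, x21, x22) >= 0:
        tables.append(((x11, x12), (x21, x22)))
print(len(tables))   # 8 admissible tables for these margins
print(tables[3])     # ((3, 4), (9, 14)), the example table shown above
```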
Transportation problems are an extremely general class of linear programs which are known to encompass all instances of network flows [@bertsimas1997introduction p.274]. Optimal transportation distances [@rachev1998mass; @villani09] are distances between probability densities which combine both perspectives outlined above, where the probabilistic view on contingency tables is matched with the goal of computing an optimal transportation plan between two marginal probabilities given a metric on the probability space of interest. Such distances have been widely used in computer vision following the impetus of @rubner1997earth, who used them to compare histograms of image features. When used in information retrieval tasks, transportation distances usually fare better in practice than other classical distances for histograms [@Pele-iccv2009].

Transportation distances have however two notable drawbacks. First, from a geometric point of view, transportation distances are deficient in the sense that they are neither negative definite nor Hilbertian. Negative definiteness carries many favorable properties, among which the possibility to create Euclidean embeddings from which the metric can be accurately recovered, as well as the possibility to turn the distance into a positive definite kernel by simple exponentiation, as a radial basis function. Because of this deficiency, there is no known positive definite counterpart to transportation distances that can leverage the complexity of the set of contingency tables. Second, from a computational point of view, the computational cost of computing transportation distances grows in most cases of interest at least quadratically in the dimension $d$ of the histograms, which can be prohibitive for many applications. We try to address both issues in this work.

The main contribution of this paper is theoretical: after providing some background material and motivation in Section \[sec:back\] we prove in Section \[sec:trans\] that the generating function of the set of all contingency tables between two integral histograms is a positive definite kernel. Our second contribution is practical: we propose in Section \[sec:nwc\] a positive definite kernel that leverages these ideas while still being computationally tractable.

Background {#sec:back}
==========

The Transportation Polytope and the Set of Contingency Tables
--------------------------------------------------------------

We review in this section a few definitions, notations and results of interest to prove our result. In the following, we write $\dotprod{\,\cdot\,}{\cdot}$ for both the Frobenius dot-product and the usual dot-product of vectors. Given a dimension $d$ fixed throughout this paper, for two vectors $r,c\in \RR^d$, let $U(r,c)$ be the transportation polytope of $r$ and $c$, namely the subset of nonnegative matrices in $\RR^{d\times d}$ defined as: $$U(r,c)\defeq \{X\in\RR_+^{d\times d}\; |\; X\ones_d=r, X^T\ones_d=c\},$$ where $\ones_d$ is the $d$ dimensional vector of ones. $U(r,c)$ contains all nonnegative $d\times d$ matrices with row and column sums $r$ and $c$ respectively. It is easy to check that $U(r,c)$ is non-empty if and only if all coordinates of $r$ and $c$ are non-negative and if the total masses of $r$ and $c$ are the same, that is $r^T\ones_d=c^T\ones_d$.
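As a concrete companion to this definition, and to the Northwestern corner solutions announced in the abstract, here is a minimal sketch of the classic northwest corner rule, which greedily fills the table and always returns a vertex of $U(r,c)$:

```python
import numpy as np

def northwest_corner(r, c):
    # Greedy NW corner rule: returns an (integral) vertex of U(r, c).
    r, c = list(r), list(c)
    d = len(r)
    X = np.zeros((d, d), dtype=int)
    i = j = 0
    while i < d and j < d:
        t = min(r[i], c[j])
        X[i, j] = t
        r[i] -= t
        c[j] -= t
        if r[i] == 0: i += 1
        if c[j] == 0: j += 1
    return X

print(northwest_corner([7, 23], [12, 18]))
# [[ 7  0]
#  [ 5 18]]
```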
We will consider in most of this work *integral* vectors $r$ and $c$ taken in the set $\Sigma_d^N$ of $d$-dimensional integral histograms with total mass $N\in\NN$, $$\Sigma_d^N \defeq \{r \in\NN^{d} \;|\; r_1+\cdots+r_d = N\}.$$ We will also focus accordingly on the subset $\UU(r,c)$ of $U(r,c)$ that contains all integral transportation matrices, alternatively known as *contingency tables* [@lauritzen1982lectures; @everitt1992analysis]: $$\UU(r,c)\defeq U(r,c) \cap \NN^{d\times d}.$$

Weighted Volumes of Contingency Tables and Particular Cases of Positivity
--------------------------------------------------------------------------

Ranging from early work by @yates1934contingency [@good1976] to @diaconisefron [@cryan2003polynomial; @chen2005sequential], the computation of elementary statistics about $\UU(r,c)$ has attracted considerable attention. Many of the ideas of this paper build upon recent work by @barvinok2008enumerating, most notably on his study of the generating function of $\UU(r,c)$, defined for $M\in \RR^{d\times d}$ as $$V(r,c\,;M)\defeq \sum_{X\in \UU(r,c)} e^{-\dotprod{X}{M}}.$$ The generating function can be related to the *weighted* volume [@barvinok2008enumerating p.2] of $\UU(r,c)$, defined for any nonnegative $d\times d$ matrix $K\in\RR_+^{d\times d}$ as: $$T(r,c\,;K) \defeq \sum_{X\in \UU(r,c)} \prod_{ij}^d k_{ij}^{x_{ij}}.$$ Both definitions are equivalent since if we agree that $k_{ij}=e^{-m_{ij}}$ then $T(r,c\,;K)=V(r,c\,;M)$. Because all of our results rely on $K$’s properties, we will mostly use the weighted volume formulation in this paper. Some sections in this paper, notably §\[subsec:rel\] below and §\[sec:nwc\], are better understood with the generating function formulation. @cuturi07permanents [Prop.2] proved that the cardinality of the set $\UU(r,c)$ is a positive definite kernel of $r$ and $c$ using the
{ "pile_set_name": "ArXiv" }
null
---
author:
- Andrew Adamatzky
title: 'Thirty eight things to do with live slime mould[^1]'
---

Introduction {#introduction .unnumbered}
============

The acellular slime mould *P. polycephalum* has a quite sophisticated life cycle [@stephenson1994myxomycetes], which includes fruit bodies, spores, single-cell amoebas, and a syncytium. At one phase of its cycle the slime mould becomes a plasmodium. The plasmodium is a coenocyte: nuclear divisions occur without cytokinesis. It is a single cell with thousands of nuclei. The plasmodium is a large cell. It grows up to tens of centimetres when conditions are good. The plasmodium consumes microscopic particles and bacteria. During its foraging behaviour the plasmodium spans scattered sources of nutrients with a network of protoplasmic tubes. The plasmodium optimises its protoplasmic network to cover all sources of nutrients, stay away from repellents and minimise transportation of metabolites inside its body. The plasmodium's ability to optimise its shape [@nakagaki2001path] attracted the attention of biologists, then computer scientists [@adamatzky2010physarum] and engineers. Thus the field of slime mould computing was born. So far, the plasmodium is the only stage of *P. polycephalum*'s life cycle useful for computation. Therefore, in what follows we will use the word 'Physarum' when referring to the plasmodium. Most computing and sensing devices made of Physarum explore one or more key features of Physarum's physiology and behaviour:

- the slime mould senses gradients of chemoattractants and repellents [@durham1976control; @ueda1976chemotaxis; @rakoczy2015application]; it responds to chemical or physical stimulation by changing patterns of electrical potential oscillations [@ridgway1976oscillations; @kishimoto1958rhythmicity] and protoplasmic tube contractions [@wohlfarth1979oscillatory; @teplov1991continuum];

- it optimises its body to maximise its protoplasm streaming [@dietrich2015explaining]; and,

- it is made of hundreds, if not thousands, of biochemical oscillators [@kauffman1975mitotic] with varied modes of coupling [@grebecki1978plasmodium].

Here we offer very short descriptions of actual working prototypes of Physarum based sensors, computers, actuators and controllers. Details can be found in the pioneering book on Physarum machines [@adamatzky2010physarum] and the 'bible' of slime mould computing [@adamatzkyAdvancesPhysarum].

Optimisation and graphs
=======================

Shortest path and maze {#path}
----------------------

Given a maze, we want to find a shortest path between the central chamber and an exit. This was the first ever problem solved by Physarum. There are two Physarum processors which solve the maze. The first prototype [@nakagaki2001path] works as follows. The slime mould is inoculated everywhere in a maze. The Physarum develops a network of protoplasmic tubes spanning all channels of the maze. This network represents all possible solutions. Then oat flakes are placed in a source and a destination site. The tube lying along the shortest (or near-shortest) path between the two sources of nutrients develops an increased flow of cytoplasm. This tube becomes thicker. Tubes branching to sites without nutrients become smaller due to lack of cytoplasm flow. They eventually collapse. The thickest tube represents the shortest path between the sources of nutrients.
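This tube selection is commonly abstracted as a current-reinforcement process (a mathematical model of this kind is cited just below). A toy sketch on a small graph, with illustrative edge lengths and update rate, shows the mechanism:

```python
import numpy as np

# Toy current-reinforcement model of tube selection: node 0 is the food
# source, node 3 the sink; conductivities of used tubes grow, others decay.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
L = np.array([1.0, 1.0, 1.0, 1.0, 2.0])   # edge lengths (0-1-3 is shortest)
D = np.ones(len(edges))                    # tube conductivities
nodes = 4

for _ in range(200):
    # Solve Kirchhoff's equations for node pressures (sink grounded).
    A = np.zeros((nodes, nodes)); b = np.zeros(nodes)
    for (i, j), l, d in zip(edges, L, D):
        g = d / l
        A[i, i] += g; A[j, j] += g; A[i, j] -= g; A[j, i] -= g
    b[0] = 1.0                             # unit inflow at the source
    A[3, :] = 0; A[3, 3] = 1; b[3] = 0     # ground the sink
    p = np.linalg.solve(A, b)
    Q = np.array([d / l * (p[i] - p[j]) for (i, j), l, d in zip(edges, L, D)])
    D += 0.1 * (np.abs(Q) - D)             # reinforce used tubes, decay unused

print(np.round(D, 3))   # conductivity survives only along the shortest path
```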
The selection of the shortest protoplasmic tube is implemented via the interaction of propagating bio-chemical, electric potential and contractile waves in the plasmodium's body; see the mathematical model in [@tero2006physarum]. The approach is not efficient because we must literally distribute the computing substrate everywhere in the physical representation of the problem. The number of computing elements is proportional to the sum of the lengths of the maze's channels. The second prototype of the Physarum maze solver is based on Physarum's chemo-attraction [@adamatzky2012slimemaze]. An oat flake is placed in the central chamber. The Physarum is inoculated somewhere in a peripheral channel. The oat flake releases chemoattractants. The chemoattractants diffuse along the maze's channels. The Physarum explores its vicinity by branching out protoplasmic tubes into the openings of nearby channels. When a wave-front of diffusing attractants reaches the Physarum, the Physarum halts lateral exploration. Instead it develops an active growing zone propagating along the gradient of the attractants' diffusion. The thickest tube represents the shortest path between the sources of nutrients. The approach is efficient because the number of computing elements is proportional to the length of the shortest path.

Towers of Hanoi
---------------

Given $n$ discs, each of unique size, and three pegs, we want to move the entire stack to another peg by moving one top disc at a time and never placing a disc on top of a smaller disc. The set of all possible configurations and moves of the puzzle forms a planar graph with $3^n$ vertices. To solve the puzzle one must find shortest paths between configurations of pegs with discs on the graph [@hinz1989tower; @hinz1992shortest; @romik2006shortest]. Physarum solves the shortest path problem (Sect. \[path\]); therefore it can solve the Towers of Hanoi puzzle. This is experimentally demonstrated in [@reid2013solving]. Sometimes Physarum does not construct an optimal path initially. However, if its protoplasmic networks are damaged and then allowed to regrow, a path closer to optimal than before develops [@reid2013solving].

Travelling salesman problem
---------------------------

Given a graph with weighted edges, find a cyclic route on the graph, with a minimum sum of edge weights, spanning all nodes, where each node is visited just once. Commonly, the weight of an edge is the Euclidean length of the edge. Physarum is used as a component of an experimental device approximating the shortest cyclic route [@zhu2013amoeba]. A map of eight cities is considered. A set of solutions is represented by channels arranged in a star graph. The channels merge in a central chamber. There are eight channels for each city. Each channel encodes a city and the step at which the city appears in the route. There are sixty-four channels. Physarum is inoculated in the central chamber. The slime mould then propagates into the channels. A node of the data graph is assumed to be visited when its corresponding channel is colonised by Physarum. Growth of the Physarum in the star shape is controlled by a recurrent neural network. The network takes the ratios of colonization of the channels as input and produces patterns of illumination, projected onto the channels, as output. The network is designed to prohibit revisiting of already visited nodes and simultaneous visits to multiple nodes. The Physarum propagates into channels and pulls back. Then it propagates to other channels and pulls back from some of them.
Eventually the system reaches a stable solution where no propagation occurs. The stable solution represents the minimal-distance cyclic route on the data graph [@zhu2013amoeba]. A rather more natural approximate algorithm for solving the travelling salesman problem is based on the construction of an $\alpha$-shape (Sect. \[concavehull\]). This is how humans solve the problem visually [@macgregor1996human]. An approximate solution of the travelling salesman problem by a shrinking blob of simulated Physarum is proposed in [@jones2014computation]. The Physarum is inoculated all over the convex hull of the data set. The Physarum's blob shrinks. It adapts morphologically to the configuration of data nodes. The shrinkage halts when the Physarum no longer covers all data nodes. The algorithm is not implemented with real Physarum.

Spanning tree
-------------

A spanning tree of a finite planar set is a connected, undirected, acyclic planar graph, whose vertices are points of the planar set. The tree is a minimal spanning tree when the sum of its edge lengths is minimal [@nevsetvril2001otakar]. An algorithm for computing a spanning tree of a finite planar set based on the morphogenesis of a neuron's axonal tree was initially proposed in [@adamatzky1991neural]: planar data points are marked by attractants (e.g. neurotrophins) and a neuroblast is placed at some site. Growth cones sprout new filopodia in the direction of maximal concentration of attractants. If two growth cones compete for the same site of attractants then the cone with the highest energy (closest to the previous site or branching point) wins. Fifteen years later we implemented the algorithm with Physarum [@adamatzky2008growing]. The degree of Physarum branching is inversely proportional to the quality of its substrate. Therefore, to reduce the number of random branches, we cultivate Physarum not on agar but on humid filter paper. The planar data set is represented by a configuration of oat flakes. Physarum is inoculated at one of the data sites. Physarum propagates to the virgin oat flake closest to the inoculation site. Physarum branches if there are several virgin flakes nearby. It colonises the next set of flakes. The propagation goes on until all data sites are spanned by a protoplasmic network. The protoplasmic network approximates the spanning tree. The resulting tree does not remain static though. Later, cycles can be formed and the tree is transformed into one of the proximity graphs, e.g. the relative neighbourhood graph or the Gabriel graph [@adamatzky2009developing].

Approximation of transport networks {#
{ "pile_set_name": "ArXiv" }
null
---
abstract: 'We propose a notion of autoreducibility for infinite time computability and explore it and its connection with a notion of randomness for infinite time machines introduced in [@CaSc] and [@Ca3].'
author:
- Merlin Carl
title: 'A note on Autoreducibility for Infinite Time Register Machines and parameter-free Ordinal Turing Machines'
---

Autoreducibility for Infinite Time Register Machines
====================================================

The classical notion of autoreducibility can, for example, be found in [@DoHi]. We consider how this concept behaves in the context of infinitary machine models of computation. For the time being, we focus on Infinite Time Register Machines ($ITRM$s) (see [@ITRM] and [@ITRM2]) and ordinal Turing machines (see [@Ko]) - but the notion of course makes sense for other types like the Infinite Time Turing Machines ($ITTM$s, see [@HaLe]) as well. For $x\in ^{\omega}2$, we define $x_{\setminus n}$ as $x$ with its $n$th bit deleted (i.e. the bits up to $n$ are the same, the further bits are shifted one place to the left). We say that $x$ is $ITRM$-autoreducible iff there is an $ITRM$-program $P$ such that $P^{x_{\setminus n}}(n)\downarrow=x(n)$ for all $n\in\omega$. $x$ is called totally incompressible iff it is not $ITRM$-autoreducible, i.e. there is no $ITRM$-program $P$ such that $P^{x_{\setminus n}}(n)\downarrow=x(n)$ for all $n\in\omega$. If there is such a program, then we say that $P$ autoreduces $x$, $P$ is an autoreduction for $x$ or that $x$ is autoreducible via $P$. $x\in ^{\omega}2$ is $ITRM$-random in the measure sense iff there is no $ITRM$-decidable set $X$ of Lebesgue measure $0$ such that $x\in X$. $x\in ^{\omega}2$ is $ITRM$-random in the meager sense iff there is no $ITRM$-decidable meager set $X$ such that $x\in X$. We refer the reader to [@CaSc] and [@Ca3] for more information on $ITRM$-randomness, including that used in the course of this note. For the notion of $ITRM$-recognizability, we refer the reader to [@ITRM2], [@Ca] or [@Ca2]. No totally incompressible $x$ is $ITRM$-computable or even recognizable. $0^{\prime}_{ITRM}$, the real coding the halting problem for $ITRM$s, is $ITRM$-autoreducible. Clearly, if $P$ computes $x$, then $P$ is also an autoreduction for $x$. If $x$ is recognizable and $P$ recognizes $x$, we can easily retrieve a deleted bit by plugging in $0$ and $1$ and letting $P$ run on both results to see for which one $P$ stops with output $1$. (The same idea works for finite subsets instead of single bits.) For $0^{\prime}_{ITRM}$, if a program index $j$ is given, it is easy to determine some index $i\neq j$ corresponding to a program that works in exactly the same way (by e.g. adding a meaningless line somewhere), so that the remaining bits allow us to reconstruct the $j$th bit. The autoreducibility of $0^{\prime}_{ITRM}$ also follows from the recognizability of $0^{\prime}_{ITRM}$ (see [@Ca2]). Let $x\in^{\omega}2$, $i\in\omega$. Then $\text{flip}(x,i)$ denotes the real obtained from $x$ by just changing the $i$th bit, i.e. $x\Delta\{i\}$. In the classical setting, no random real is autoreducible. This is still true for $ITRM$s: [\[randomnessimpliestotalincompressibility\]]{} If $x$ is $ITRM$-random, then $x$ is totally incompressible. (For the meager as well as for the measure $0$ interpretation of randomness.) Assume that $x$ is autoreducible via $P$. We show that $x$ is not $ITRM$-random. Let $X$ be the set of all $y$ which are autoreducible via $P$. Obviously, we have $x\in X$.
$X$ is certainly decidable: Given $y$, use a halting problem solver for $P$ to see whether $P^{y_{\setminus n}}(n)\downarrow$ for all $n\in\omega$. If not, then $y\notin X$. Otherwise, carry out these $\omega$ many computations and check the results one after the other.\ Since $X$ is $ITRM$-decidable, it is provably $\Delta_{2}^{1}$, which implies that $X$ has the Baire property and is measurable. We show that $X$ must be of measure $0$. To see this, assume for a contradiction that $\mu(X)>0$. Note first that, whenever $y$ is $P$-autoreducible and $z$ is a real that deviates from $y$ in exactly one digit (say, the $i$th bit), then $z$ is not $P$-autoreducible (since $P$ will compute the $i$th bit wrongly). By the Lebesgue density theorem, there is an open basic interval $I$ (i.e. consisting of all reals that start with a certain finite binary string $s$ of length $k\in\omega$) such that the relative measure of $X$ in $I$ is $>\frac{1}{2}$. Let $X^{\prime}=X\cap I$, and let $X^{\prime}_0$ and $X^{\prime}_1$ be the subsets of $X^{\prime}$ consisting of those elements that have their $(k+1)$th digit equal to $0$ or $1$, respectively. Clearly, $X^{\prime}_{0}$ and $X^{\prime}_{1}$ are measurable, $X^{\prime}_{0}\cap X^{\prime}_{1}=\emptyset$ and $X^{\prime}=X^{\prime}_{0}\cup X^{\prime}_{1}$. Now define $\bar{X}^{\prime}_{0}$ and $\bar{X}^{\prime}_{1}$ by changing the $(k+1)$th bit of all elements of $X^{\prime}_{0}$ and $X^{\prime}_{1}$, respectively. Then all elements of $\bar{X}^{\prime}_{0}$ and $\bar{X}^{\prime}_{1}$ are elements of $I$ (as we have not changed the first $k$ bits), none of them is $P$-autoreducible (since they all deviate from $P$-autoreducible elements by exactly one bit, namely the $(k+1)$th), $\bar{X}^{\prime}_{0}\cap\bar{X}^{\prime}_{1}=\emptyset$ (elements of the former set have $1$ as their $(k+1)$th digit, for elements of $\bar{X}^{\prime}_{1}$ it is $0$) and $\mu(\bar{X}^{\prime}_{0})=\mu(X^{\prime}_{0})$, $\mu(\bar{X}^{\prime}_{1})=\mu(X^{\prime}_{1})$ (as the $\bar{X}^{\prime}_{i}$ are just translations of the $X^{\prime}_{i}$). As no element of the $\bar{X}^{\prime}_{i}$ is $P$-autoreducible, we have $(\bar{X}^{\prime}_{0}\cup\bar{X}^{\prime}_{1})\cap X^{\prime}=\emptyset$. Let $\bar{X}^{\prime}:=\bar{X}^{\prime}_{0}\cup\bar{X}^{\prime}_{1}$. Then we have\ $\mu_{I}(\bar{X}^{\prime})=\mu_{I}(\bar{X}^{\prime}_{0}\cup\bar{X}^{\prime}_{1})=\mu_{I}(\bar{X}^{\prime}_{0})+\mu_{I}(\bar{X}^{\prime}_{1})=\mu_{I}(X^{\prime}_{0})+\mu_{I}(X^{\prime}_{1})=\mu_{I}(X^{\prime})>\frac{1}{2}$ (where $\mu_{I}$ denotes the relative measure for $I$). So $X^{\prime}$ and $\bar{X}^{\prime}$ are two disjoint subsets of $I$ both with relative measure $>\frac{1}{2}$, a contradiction.\ For the meager version, we proceed similarly, taking $I$ to be an interval in which $X\cap I$ is comeager instead. That such an $I$ exists can be seen as follows: Suppose that $X$ is not meager. As above, $X$ is $ITRM$-decidable, hence provably $\Delta_{2}^{1}$ and therefore has the Baire property. Then, there is an open set $U$ such that $X\setminus U\cup U\setminus X$ is me
{ "pile_set_name": "ArXiv" }
null
---
author:
- GIUSEPPE BOCCIGNONE
bibliography:
- 'levyeye.bib'
title: A probabilistic tour of visual attention and gaze shift computational models
---

This research was partially supported by the project “Interpreting emotions: a computational tool integrating facial expressions and biosignals based shape analysis and bayesian networks”, grant FIRB - *Future in Research* RBFR12VHR7\
Author’s address: G. Boccignone, Department of Computer Science, University of Milan, via Comelico 39/41, 20135 Milano, Italy; email: giuseppe.boccignone@unimi.it

Introduction
============

As the French philosopher Merleau-Ponty put it, “vision is a gaze at grips with a visible world” [@maurice1945phenomenologie]. Goals and purposes, either internal or external, press the observer to maximise his information intake over time, by sampling moment to moment the most informative parts of the world. In natural vision this endless endeavour is accomplished through a sequence of eye movements such as saccades and smooth pursuit, followed by fixations. Gaze shifts require visual attention to precede them to their goal, which has been shown to enhance the perception of the selected parts of the visual field (in turn related to the foveal structure of the human eye, see for an extensive discussion of these aspects).

The computational counterpart of using gaze shifts to enable a perceptual-motor analysis of the observed world can be traced back to pioneering work on active or animate vision [@aloimonos1988active; @Ballard; @bajcsy1992active]. The main concern at the time was to embody vision in the action-perception loop of an artificial agent that purposively acts upon the environment, an idea that has its roots in early cybernetics [@cordeschi2002discovery]. To such an aim the sensory apparatus of the organism must be active and flexible; for instance, the vision system can manipulate the viewpoint of the camera(s) in order to investigate the environment and get better information from it. Surprisingly enough, the link between attention and active vision, notably when instantiated via gaze shifts (e.g., a moving camera), was overlooked in those early approaches, as has been lucidly remarked. Indeed, active vision, as it has been proposed and used in computer vision, must include attention as a sub-problem [@rothenstein2008attention], first and foremost when it must confront the computational load of real-time processing (e.g., for autonomous robotics and videosurveillance). Nevertheless, the mainstream of computer vision has not dedicated much consideration to attentive processes and, more generally, to active perception. This is probably due to the original sin of conceiving vision as a pure information-processing task, a reconstruction process creating representations at increasing levels of abstraction, a land where action had no place: the “from pixels to predicates” paradigm [@aloimonos1988active]. To make a long story short, the research field had a sudden burst when the paper by Itti, Koch, and Niebur [@IttiKoch98] was published. Their work provided a sound and neat computational model (and the software simulation) to contend with the problem: in a nutshell, derive a saliency map and generate gaze shifts as the result of a Winner-Take-All (WTA) sequential selection of most salient locations. Since then, proposals and techniques have flourished. Under these circumstances, a deceptively simple question arises: Where are we now?
A straight answer, which is the *leitmotiv* of this paper, is that whilst early active vision approaches overlooked attention, current approaches have betrayed purposive active perception. In this perspective, here we provide a critical discussion of a number of models and techniques. It will be by no means exhaustive and, to some extent, idiosyncratic. Our purpose is not to offer a review (there are excellent ones the reader is urged to consult, e.g., [@BorItti2012; @borji2014salient; @bruce2015computational; @bylinskii2015towards]), but rather to spell out, in a probabilistic framework, the variety of approaches, so as to discuss in a principled way current limitations and to envisage intriguing directions of research, e.g., the hitherto neglected link between oculomotor behavior and emotion.

In the following Section we highlight critical points of current approaches and open issues. In Section \[sec:prob\] we frame such problems in the language of probability. Section \[sec:action\] discusses possible routes to reconsider the problem of oculomotor behaviour within the action/perception loop. In Section \[sec:emo\] we explore the *terra incognita* of gaze shifts and emotion.

A Mini review and open issues
=============================

The aim of a computational model of attentive eye guidance is to answer the question *Where to Look Next?* by providing:

1. at the *computational theory level* (following Marr), an account of the mapping from visual data of a natural scene, say $\mathbf{I}$ (raw image data representing either a static picture or a stream of images), to a sequence of time-stamped gaze locations $(\mathbf{r}_{F_1}, t_1), (\mathbf{r}_{F_2}, t_2),\cdots$, namely $$\mathbf{I} \mapsto \{\mathbf{r}_{F_1}, t_1; \mathbf{r}_{F_2}, t_2;\cdots \}, \label{eq:mapping}$$

2. at the *algorithmic level*, a procedure that simulates such mapping.

A simple example of the problem we are facing is shown in Figure \[Fig:variab\].

![Scan paths eye tracked from different human observers while viewing three pictures of different information content: outdoor (top row), indoor with meaningful objects (middle row), indoor with high semantic content (person and face, bottom row). The area of yellow disks marking fixations between saccades is proportional to fixation time (images freely available from the dataset).[]{data-label="Fig:variab"}](FigVariab.jpg)

Under this conceptualization, when the input $\mathbf{I}$ is a static scene (a picture), the fixation durations and the saccade sequence (lengths and directions) are the only observables of the underlying guidance mechanism. When $\mathbf{I}$ stands for a time-varying scene (e.g. a video), pursuit needs to be taken into account, too. We will adopt the generic term of gaze shift for pursuit, saccades and fixational movements. In the following, for notational simplicity, we will write the time series $\{\mathbf{r}_{F_1}, t_1; \mathbf{r}_{F_2}, t_2;\cdots \}$ as the sequence $\{\mathbf{r}_{F}(1), \mathbf{r}_{F}(2),\cdots \}$, unless the expanded form is needed. Also, we will generically refer to such a sequence as a scan path, though this term has a historically precise meaning in the eye movement literature [@privitera2006scanpath].

The common practice of computational approaches to derive the mapping (\[eq:mapping\]) is to conceive it as a two-step procedure:

1. obtaining a suitable perceptual representation $\mathcal{W}$, i.e., $\mathbf{I} \mapsto \mathcal{W}$;
2. using $\mathcal{W}$ to generate the scan path, $\mathcal{W} \mapsto \{\mathbf{r}_{F}(1), \mathbf{r}_{F}(2),\cdots \}$.

It is important to remark that each gaze position $\mathbf{r}_{F}(t)$ sets a new field of view for perceiving the world; thus $\mathcal{W}=\{\mathcal{W}_{\mathbf{r}_{F}(1)}, \mathcal{W}_{\mathbf{r}_{F}(2)},\cdots \}$ should be a time-varying representation, even in the case of a static image input. This feedback effect of the moving gaze is hardly considered at the modelling stage [@zelinsky2008theory; @TatlerBallard2011eye]. Overviewing the field [@TatlerBallard2011eye; @BorItti2012; @bruce2015computational; @bylinskii2015towards], computational modelling has been mainly concerned with the first step: deriving a representation $\mathcal{W}$, typically in the form of a salience map. Yet, this step has recently evolved into a parallel research program in which the focus is not gaze shift prediction and simulation but salient object detection (for an in-depth review of this “second wave” of saliency-centered methods, see ). The second step, that is $\mathcal{W} \mapsto \{\mathbf{r}_{F}(1), \mathbf{r}_{F}(2),\cdots \}$, which actually brings in the question of *how* we look rather than *where*, is seldom taken into account. Surprisingly, in spite of the fact that the most cited work in the field [@IttiKoch98] clearly addressed the *how* issue (gaze shifts as the result of a WTA sequential selection of most salient locations), most models simply overlook the problem. As a matter of fact, the representation $\mathcal{W}$, once computed, is usually validated with respect to its capacity for predicting the image regions that would be explored by the overt attentional shifts of human observers (in a task designed to minimize the role of top-down factors). Predictability is assessed according to some established evaluation measures (see , and , for a recent
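Setting evaluation aside, the second step in its simplest, WTA form can be sketched compactly. The snippet below is a minimal reading of the salience-to-scan-path mapping, argmax selection with inhibition of return, and not the implementation of any specific model discussed here; the map size, number of fixations and inhibition radius are arbitrary illustrative values.

```python
import numpy as np

def wta_scanpath(S, n_fix=5, ior_radius=3):
    """Generate fixations r_F(1..n_fix) from a saliency map S by
    winner-take-all selection with inhibition of return (a minimal
    sketch of step 2, not any particular published model)."""
    S = S.astype(float).copy()
    H, W = S.shape
    yy, xx = np.mgrid[0:H, 0:W]
    path = []
    for _ in range(n_fix):
        r = np.unravel_index(np.argmax(S), S.shape)   # WTA winner
        path.append(r)
        # inhibition of return: suppress a disk around the winner
        S[(yy - r[0]) ** 2 + (xx - r[1]) ** 2 <= ior_radius ** 2] = -np.inf
    return path

rng = np.random.default_rng(0)
saliency = rng.random((32, 32))   # stand-in for a representation W
print(wta_scanpath(saliency))
```

Note that this deterministic rule produces the same scan path for every run on the same map, which is precisely the kind of behaviour that fails to capture the inter-observer variability visible in Figure \[Fig:variab\].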
{ "pile_set_name": "ArXiv" }
null
---
abstract: 'We introduce a minimal model for a collection of self-propelled apolar active particles, also called an ‘active nematic’, on a two-dimensional substrate and study the order-disorder transition with the variation of density. The particles interact with their neighbours within the framework of the Lebwohl-Lasher model and move asymmetrically, along their orientation, to unoccupied nearest neighbour lattice sites. At a density lower than the equilibrium isotropic-nematic transition density, the active nematic shows a first order transition from the isotropic state to a banded state. The banded state extends over a range of density, and the scalar order parameter of the system shows a plateau-like behaviour, similar to that of magnetic systems. In the large density limit the active nematic shows a bistable behaviour between a homogeneous ordered state with global ordering and an inhomogeneous mixed state with local ordering. The study of the above phases under density variation has been scant, and gives significant insight into the complex behaviours of many biological systems.'
author:
- Rakesh Das
- Manoranjan Kumar
- Shradha Mishra
title: Density Induced Phases in Active Nematic
---

[*Introduction*]{} :— Active systems are composed of [*self-propelled*]{} particles, where each particle extracts energy from its surroundings and dissipates it through motion along a direction determined by its orientation. These kinds of systems are ubiquitous in nature, ranging from very small scale systems inside the cell to larger scales [@harada; @nedelec; @rauch; @benjacob; @animalgroups; @helbing; @feder; @kuusela31; @hubbard], vibrated granular media [@vnarayan; @kudrolli] etc., and have been studied extensively through experiments, theories and simulations [@sriramrmp; @tonertusr; @rev]. A collection of head-tail symmetric ‘apolar’ active particles with an average mutual parallel alignment is said to be in a ‘nematic’ state, whereas in an ‘isotropic’ state particles remain randomly oriented. An active system where the fluid medium does not play an important role in the emergence of the ordered state, so that hydrodynamic interactions can be ignored, is called a ‘dry active system’ [@kemkemer; @vnarayan; @animalgroups; @serra; @schaller; @surrey]. The active nature of the particles introduces a nonequilibrium coupling between the density and orientation fields, represented in terms of a curvature coupling current in the literature [@sradititoner; @shradhanjop; @sriramrmp]. Such coupling in the active nematic induces unusual properties like large density fluctuations [@sradititoner; @chateprl2006] and growth kinetics faster than the usual $1/3$ law of conserved models [@shradhatrans2014]. Recent studies of the active nematic found a defect-ordered nematic state [@aparnaredner; @shimanatcomm; @yeomans], as opposed to the equilibrium nematic, at high particle densities. A recent experiment on amyloid fibrils [@ncommam] also found a phase with coexisting aligned and disordered fibril domains, similar to the defect-ordered state obtained in simulations. But few investigations have been done on the behaviour of the active nematic in various density limits, especially at low densities. Here we introduce a minimal model for a two-dimensional active nematic and compare the various ordering phases of the active and equilibrium nematic in different density limits. The ordering in the system is characterised in terms of a scalar order parameter $S$, which is the positive eigenvalue of the nematic order parameter $\Q$ [@pgdgenne] in two dimensions.
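As a brief aside on why a single scalar suffices in two dimensions (a standard fact, stated here up to the usual normalization convention of $\Q$): writing the nematic tensor order parameter as $$\Q=\begin{pmatrix} \langle\cos 2\theta\rangle & \langle\sin 2\theta\rangle \\ \langle\sin 2\theta\rangle & -\langle\cos 2\theta\rangle \end{pmatrix},$$ a traceless symmetric $2\times 2$ matrix, its eigenvalues are $\pm\sqrt{\langle\cos 2\theta\rangle^{2}+\langle\sin 2\theta\rangle^{2}}$, so the positive eigenvalue coincides with the scalar order parameter $S$ computed from Eq. \[eqops\] below.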
In the low density limit both the active and equilibrium systems are in the isotropic (I) state, with particles randomly oriented throughout the whole system (see Fig. \[fig:phase\_snap\](b) - I), resulting in a small $S$. The phase diagram of the active nematic as a function of packing density $C$ (see Fig. \[fig:phase\_snap\](a)) shows a jump in $S$ close to $C=0.37$, whereas in the equilibrium case $S$ goes continuously to larger values and an isotropic to nematic (I-N) transition occurs close to $C=0.58$. In the equilibrium nematic (EN) state particles remain homogeneously oriented in the system (see Fig. \[fig:phase\_snap\](b) - EN). At $C=0.37$ the active system goes from the isotropic to a banded state (BS), where particles cluster and align perpendicularly to the long axis of the band (see Fig. \[fig:phase\_snap\](b) - BS-1). With increasing density more particles participate in band formation (see Fig. \[fig:phase\_snap\](b) - BS-2) and $S$ follows a plateau over a range of density. In the large density limit the active system shows bistability between a homogeneous ordered (HO) state (see Fig. \[fig:phase\_snap\](b) - HO) and an inhomogeneous mixed (IM), or locally ordered, state (see Fig. \[fig:phase\_snap\](b) - IM). This IM state is very similar to the defect-ordered nematic state of refs. [@aparnaredner; @shimanatcomm; @yeomans].

[*Model*]{} :— We consider a two-dimensional square lattice. At each vertex ‘$i$’ we define an occupation variable $n_i$, which can take values $1$ (occupied) or $0$ (unoccupied), and an orientation variable $\theta_i$, which lies between $0$ and $\pi$ because of the apolar nature of the particles. Each particle interacts with its nearest neighbours through the modified Lebwohl-Lasher Hamiltonian [@llasher] $$\mathcal{H} = -\epsilon \sum_{<ij>}n_i n_j \cos2(\theta_i-\theta_j) \label{eqll}$$ where $\epsilon$ is the interaction strength between two neighbouring particles. This model is analogous to the diluted XY-model with nonmagnetic impurities [@dilutedxymodel], where impurities and spins are analogous to vacancies and particles, respectively, in the present model. Orientations evolve through Monte Carlo (MC) updates [@mcbinder] following the Hamiltonian in Eq. \[eqll\]. Unlike in the diluted XY-model, particles also move on the lattice. Depending on the type of movement we define two kinds of models. (i) ‘Equilibrium model’ (EM) - a particle can diffuse to any unoccupied nearest-neighbouring site, and therefore satisfies the detailed balance condition. (ii) ‘Active model’ (AM) - a particle can move only to those unoccupied nearest-neighbouring sites which lie in the direction that makes the least inclination with the particle orientation. Details of the model and particle movement are shown in Supplemental Material [@SM] section I. The asymmetric move of the active particles does not satisfy the detailed balance condition and arises, in general, because of the self-propelled nature of the particles in many biological [@kemkemer; @paxton] and granular systems [@vnarayan]. These moves produce an active curvature coupling current in the coarse-grained hydrodynamic equations of motion [@shradhanjop; @sradititoner].

[*Numerical details*]{} :— We consider a collection of $N$ particles with random orientations $\theta_i \in [0,\pi]$, homogeneously distributed on a $L \times L$ square lattice ($L=150, 256, 512$) with periodic boundary conditions. The packing density of the system is $C=N/(L \times L)$.
We choose a particle randomly and move it to an unoccupied neighbouring site, followed by an orientation update through MC. We use $10^6$ MC steps to evolve the system to its steady state, and all the results have been obtained by averaging over the next $2 \times 10^6$ MC steps. Twenty-four realizations have been used for better statistics. We calculate the scalar order parameter $$S=\sqrt{(\frac{1}{N}\sum_i n_i \cos(2 \theta_i))^2+(\frac{1}{N}\sum_i n_i \sin(2 \theta_i))^2} \label{eqops}$$ which is small in the isotropic state and close to $1$ in the ordered state. First we calculate $S$ for the EM as a function of inverse temperature $\beta= 1 / k_BT$ for different densities. As shown in Supplemental Material [@SM] section II, the critical temperature $T_c$ is approximated as $T_c(S=0.4)$. The critical temperature $T_c(C)$ decreases with decreasing packing density $C$; a similar trend is found in the study of the diluted XY-model [@dilutedxymodel] for varying nonmagnetic site density. In the rest of our calculations the temperature is kept fixed at $\beta\epsilon = 2.0$ and the packing density $C$ is varied from small values to complete filling, $C=1.0$.

[*Phase diagram*]{} :— At low densities, $C<0.37$, the active system is in the isotropic state, where the particles with random orientations remain homogeneously distributed throughout the system, and therefore $S$ takes vanishingly small values. The jump in $S$ occurs at $C=0.37$. For $C \geq 0.37$ particles cluster, and an ordered state with high local density coexists with a disordered state with low local density (see Fig. \[fig:phase\_snap\](b) - BS-
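A minimal sketch of the simulation scheme just described is given below, under stated assumptions: $\epsilon=1$; the ‘least inclination’ rule is implemented by allowing moves along both senses of the lattice axis closest to the particle axis (the authoritative move set is specified in the Supplemental Material of the paper); ties are broken arbitrarily; and the lattice size and sweep count are far smaller than those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
L, C, beta_eps = 32, 0.5, 2.0            # lattice size, packing, beta*epsilon
N = int(C * L * L)

occ = np.zeros((L, L), dtype=bool)       # occupation variables n_i
theta = np.zeros((L, L))                 # orientations theta_i in [0, pi)
sites = rng.choice(L * L, size=N, replace=False)
occ.flat[sites] = True
theta.flat[sites] = rng.uniform(0, np.pi, N)

NBRS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def local_energy(x, y, th):
    """Lebwohl-Lasher energy of site (x, y) at orientation th (eps = 1)."""
    e = 0.0
    for dx, dy in NBRS:
        u, v = (x + dx) % L, (y + dy) % L
        if occ[u, v]:
            e -= np.cos(2.0 * (th - theta[u, v]))
    return e

def mc_sweep(active=True):
    for _ in range(N):
        x, y = rng.integers(L), rng.integers(L)
        if not occ[x, y]:
            continue
        th = theta[x, y]
        if active:
            # active move: only along the lattice direction closest to
            # the particle axis (both senses, since particles are apolar;
            # this reading of the rule is an assumption)
            d = (np.cos(th), np.sin(th))
            moves = sorted(NBRS, key=lambda m: -abs(m[0]*d[0] + m[1]*d[1]))[:2]
        else:
            moves = NBRS                  # equilibrium: any neighbour
        dx, dy = moves[rng.integers(len(moves))]
        u, v = (x + dx) % L, (y + dy) % L
        if not occ[u, v]:
            occ[x, y], occ[u, v] = False, True
            theta[u, v], theta[x, y] = th, 0.0
            x, y = u, v
        # Metropolis orientation update at fixed beta*epsilon
        new = rng.uniform(0, np.pi)
        dE = local_energy(x, y, new) - local_energy(x, y, theta[x, y])
        if dE <= 0 or rng.random() < np.exp(-beta_eps * dE):
            theta[x, y] = new

def order_parameter():
    """Scalar order parameter S of Eq. (eqops)."""
    c = np.cos(2 * theta[occ]).sum() / N
    s = np.sin(2 * theta[occ]).sum() / N
    return np.hypot(c, s)

for _ in range(200):
    mc_sweep(active=True)
print(order_parameter())
```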
{ "pile_set_name": "ArXiv" }
null
---
abstract: 'The description of the proximity effect in superconducting/ferromagnetic heterostructures requires the use of spin-dependent boundary conditions. Such boundary conditions must take into account the spin dependence of the phase shifts acquired by electrons upon scattering on the boundaries of ferromagnets. The present article shows that this property can strongly affect the critical temperature and the energy dependence of the density of states of diffusive heterostructures. These effects should allow a better characterisation of diffusive superconductor/ferromagnet interfaces.'
author:
- Audrey Cottet
title: 'Spectroscopy and critical temperature of diffusive superconducting/ferromagnetic hybrid structures with spin-active interfaces'
---

Introduction
============

When a ferromagnetic metal ($F$) with uniform magnetization is connected to a BCS superconductor ($S$), the singlet electronic correlations characteristic of the $S$ phase can propagate into $F$ because electrons and holes with opposite spins and excitation energies are coupled coherently by Andreev reflections occurring at the $S/F$ interface. Remarkably, the ferromagnetic exchange field induces an energy shift between the coupled electrons and holes, which leads to spatial oscillations of the superconducting order parameter in $F$ [@Buzdin1982; @Golubov]. This effect has been observed experimentally through oscillations of the density of states (DOS) in $F$ with the thickness of $F$ [@TakisN], or oscillations of the critical current $I_{0}$ through $S/F/S$ structures [@Ryazanov; @TakisI; @SellierPRB; @Blum] with the thickness of $F$ or the temperature. The oscillations of $I_{0}$ have made it possible to obtain $\pi$-junctions [@Guichard], i.e. Josephson junctions with $I_{0}<0$, which could be useful in the field of superconducting circuits [@Ioffe; @Taro]. A reentrant behavior of the superconducting critical temperature of $S/F$ bilayers with the thickness of $F$ has also been observed [@TcSF]. Finally, some $F/S/F$ trilayers have shown a lower critical temperature for an antiparallel alignment of the magnetizations in the two $F$ layers as compared with the parallel alignment [@SSspinswitch], which should offer the possibility of realizing a superconducting spin-switch [@DeGennes; @TcFSFth].

![a. Diffusive $F/S/F$[ ]{}trilayer consisting of a BCS superconductor $S$ with thickness $d_{S}$ placed between two ferromagnetic electrodes $F_{1}$ and $F_{2}$ with thickness $d_{F}$. In this picture, the directions of the magnetic polarizations in $F_{1}$ and $F_{2}$ are parallel \[antiparallel\], which corresponds to the configuration $\mathcal{C}=P$ $[AP]$. b. $S/F$[ ]{}bilayer consisting of a BCS superconductor $S$ with thickness $d_{S}/2$ contacted to a ferromagnetic electrode $F$ with thickness $d_{F}$.](structure.eps){width="0.7\linewidth"}

For a theoretical understanding of the behavior of $S/F$ hybrid circuits, a proper description of the interfaces between the different materials is crucial. For a long time, the only boundary conditions available in the diffusive case were spin-independent boundary conditions derived for $S/$normal metal interfaces [@Kuprianov]. Recently, spin-dependent boundary conditions have been introduced for describing hybrid diffusive circuits combining BCS superconductors, normal metals and ferromagnetic insulators [@condmatHuertas].
These boundary conditions take into account the spin polarization of the electronic transmission probabilities through the interface considered, but also the spin dependence of the phase shifts acquired by electrons upon transmission or reflection by the interface. The first property generates widely known magnetoresistance effects [@magn]. The second property is less commonly taken into account. However, the Spin-Dependence of Interfacial Phase Shifts (SDIPS) can modify the behavior of many different types of mesoscopic circuits with ferromagnetic elements, like those including a diffusive normal metal island [@FNF], a resonant system [@CottetEurophys; @SST], a Coulomb blockade system [@CB; @Cottet06; @SST], or a Luttinger liquid [@Luttinger]. It has also been shown that the SDIPS has physical consequences in $S/F$ hybrid systems [@Tokuyasu; @otherBC; @mixing; @condmatHuertas]. One can note that, in some references, the SDIPS is called the ”spin-mixing angle” or ”spin-rotation angle” (see e.g. Refs. ). In the diffusive $S/F$ case, the spin-dependent boundary conditions of Ref.  have been applied to different circuit geometries [@demoBC; @applications; @Morten; @cottet05; @Braude], but the only comparison to experimental data has been performed in Ref. . The authors of that reference have generalized the boundary conditions of Ref.  to the case of metallic $S/F$ interfaces with a superconducting proximity effect in $F$. They have shown that the SDIPS can induce a shift in the oscillations of the critical current of a $S/F/S$ Josephson junction, or of the DOS of a $S/F$ bilayer, with the thickness of $F$. Signatures of this effect have been identified in the hybrid structures of Refs. . Nevertheless, the problem of characterizing the SDIPS of diffusive $S/F$ interfaces has received little attention so far, in spite of the numerous experiments performed. A good characterization of the properties of diffusive $S/F$ interfaces would be necessary for a better control of the superconducting proximity effect in diffusive heterostructures. The present article presents consequences of the SDIPS other than the one studied in Ref. , which could be useful in this context. In particular, the SDIPS can generate an effective magnetic field in a diffusive $S$ in contact with a diffusive $F$, as found for a ballistic $S$ in contact with a ferromagnetic insulator [@Tokuyasu]. This effective field can be detected, in particular, through the DOS of the diffusive $F$ layer, with a visibility which depends on the thickness of $F$. A strong modification of the variations of the critical temperature of diffusive $S/F$ structures with the thickness of $F$ is also found. These effects should allow one to characterize the SDIPS of diffusive $S/F$ interfaces through DOS and critical temperature measurements, using the heterostructures currently fabricated. The calculations reported in this paper are also applicable to the case of a diffusive $S$ layer contacted to a ferromagnetic insulator ($FI$). This paper is organized as follows: Section II presents the initial set of equations used to describe the heterostructures considered. The case of $F/S/F$ trilayers is mainly addressed, but the case of $S/F$ (or $S/FI$) bilayers follows straightforwardly. Section III specializes to the case of a weak proximity effect in $F$ and a superconducting layer with a relatively low thickness $d_{S}\leq\xi_{S}$, with $\xi_{S}$ the superconducting coherence length in $S$.
The spatial evolution of the electronic correlations in the $S$ and $F$ layers is studied in Section III.A. The energy-dependent DOS of $S/F$ heterostructures is calculated in Section III.B. Section III.C considers briefly the limit of $S/FI$ bilayers. Section III.D discusses SDIPS-induced effective field effects in other types of systems. Section III.E compares the present work to other DOS calculations used for data interpretation in $S/F$ heterostructures. Critical temperatures of $S/F$ circuits are calculated and discussed in Section III.F. Conclusions are presented in Section IV. Throughout the paper, I consider conventional BCS superconductors with an s-wave symmetry.

Initial description of the problem
==================================

This article mainly considers a diffusive $F/S/F$[ ]{}trilayer consisting of a BCS superconductor $S$ for $-d_{S}/2<x<d_{S}/2$, and ferromagnetic electrodes $F_{1}$ for $x\in\{-d_{S}/2-d_{F},-d_{S}/2\}$ and $F_{2}$ for $x\in\{d_{S}/2,d_{S}/2+d_{F}\}$ (see Figure 1.a). The magnetic polarizations of the two ferromagnets can be parallel (configuration $\mathcal{C}=P$) or antiparallel (configuration $\mathcal{C}=AP$), but the modulus $\left| E_{ex}\right|$ of the ferromagnetic exchange field is assumed to be the same in $F_{1}$ and $F_{2}$. Throughout the structure, the normal quasiparticle excitations and the superconducting condensate of pairs can be characterized with the Usadel normal and anomalous Green’s functions $G_{n,\sigma}=\mathrm{sgn}(\omega_{n})\cos(\theta_{n,\sigma})$ and $F_{n,\sigma}=\sin(\theta_{n,\sigma})$, with $\theta_{n,\sigma}(x)$ the superconducting pairing angle, which depends on the spin direction $\sigma\in\{\uparrow,\downarrow\}$, the Matsubara frequency $\omega_{n}(T)=(2n+1)\pi k_{B}T$, and the coordinate $x$ (see e.g. Ref.
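For orientation, the parametrization just introduced can be made concrete with a small numerical sketch. It evaluates the fermionic Matsubara frequencies $\omega_{n}=(2n+1)\pi k_{B}T$ and the standard bulk-BCS solution of the Usadel equation, $\theta_{n}=\arctan(\Delta/\omega_{n})$, for which $G_{n}=\cos\theta_{n}$ and $F_{n}=\sin\theta_{n}$ (with $\omega_{n}>0$). The units ($k_{B}=1$, energies in units of $\Delta$) and parameter values are illustrative only; the spatially resolved, spin-dependent problem treated in this article is of course not solved here.

```python
import numpy as np

def matsubara(T, n_max, kB=1.0):
    """Fermionic Matsubara frequencies omega_n = (2n+1) pi kB T."""
    n = np.arange(n_max)
    return (2 * n + 1) * np.pi * kB * T

def bulk_pairing_angle(omega_n, Delta):
    """Bulk-BCS solution of the Usadel equation,
    theta_n = arctan(Delta / omega_n), so that
    G_n = cos(theta_n) = omega_n / sqrt(omega_n^2 + Delta^2) and
    F_n = sin(theta_n) = Delta   / sqrt(omega_n^2 + Delta^2)."""
    return np.arctan2(Delta, omega_n)

Delta, T = 1.0, 0.1                      # illustrative values, kB = 1
w = matsubara(T, n_max=8)
theta = bulk_pairing_angle(w, Delta)
G, F = np.cos(theta), np.sin(theta)
assert np.allclose(G**2 + F**2, 1.0)     # normalization of the Usadel GF
print(np.round(F, 3))                    # pair amplitude decays with n
```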
{ "pile_set_name": "ArXiv" }
null