--- abstract: 'We present the IR luminosity function derived from ultra-deep 70$\mu$m imaging of the GOODS-North field. The 70 $\mu$m observations are longward of the PAH and silicate features which complicate work in the MIR. We derive far-infrared luminosities for the 143 sources with $S_{70}> 2$ mJy (S/N $> 3 \sigma$). The majority (81%) of the sources have spectroscopic redshifts, and photometric redshifts are calculated for the remainder. The IR luminosity function at four redshifts ($z \sim$ 0.28, 0.48, 0.78, and 0.97) is derived and compared to the local one. There is considerable degeneracy between luminosity and density evolution. If the evolving luminosity function is described as $\rho(L, z) = (1 + z)^q \rho(L/(1 + z)^p, 0)$, we find $q = -2.19p + 6.09$. In the case of pure luminosity evolution, we find a best fit of $p = 2.78^{+0.34}_{-0.32}$. This is consistent with the results from 24$\mu$m and 1.4GHz studies. Our results confirm the emerging picture of strong evolution in LIRGs and ULIRGs at $0.4 < z < 1.1$, but we find no evidence of significant evolution in the sub-LIRG ($L < 10^{11} L_{\odot}$) population for $z < 0.4$.' author: - 'Minh T. Huynh' - 'David T. Frayer' - Bahram Mobasher - Mark Dickinson - 'Ranga-Ram Chary' - Glenn Morrison bibliography: - 'refs.bib' title: 'The Far-Infrared Luminosity Function from GOODS-N: Constraining the Evolution of Infrared Galaxies for $\lowercase{z} \leq 1$' --- Introduction ============ Deep mid-infrared surveys are revealing a population of mid and far-infrared luminous galaxies out to $z \sim 3$. These luminous (LIRGs, $10^{11} L_\odot < L_{\rm IR} \equiv L_{8-1000\mu{\rm m}} < 10^{12} L_\odot$) and ultraluminous (ULIRGs, $L_{\rm IR} > 10^{12} L_\odot$) infrared galaxies are relatively rare in the local universe, but become increasingly important at high redshift, where dust enshrouded starbursts dominate the total cosmic star formation rate (e.g. [@chary2001], [@blain2002]). 
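The parameterised evolution quoted in the abstract, $\rho(L, z) = (1 + z)^q \rho(L/(1 + z)^p, 0)$ with the degeneracy relation $q = -2.19p + 6.09$, is straightforward to evaluate numerically. A minimal sketch follows; the local luminosity-function shape and its parameter values are illustrative placeholders, not the fits of this paper:

```python
import numpy as np

def local_lf(L, L_star=1e10, phi_star=1e-3, alpha=-1.2, sigma=0.7):
    """Illustrative local IR luminosity function [Mpc^-3 dex^-1].
    The functional form and parameter values are placeholders."""
    x = L / L_star
    return phi_star * x**(1 + alpha) * np.exp(-np.log10(1 + x)**2 / (2 * sigma**2))

def evolved_lf(L, z, p, q=None):
    """rho(L, z) = (1+z)^q * rho(L / (1+z)^p, 0).
    If q is omitted, apply the degeneracy relation q = -2.19 p + 6.09."""
    if q is None:
        q = -2.19 * p + 6.09
    return (1 + z)**q * local_lf(L / (1 + z)**p)

# Pure luminosity evolution (q = 0) with the best-fit p = 2.78:
rho = evolved_lf(1e11, z=0.78, p=2.78, q=0.0)
```

Note that with the best-fit pure-luminosity-evolution value $p = 2.78$, the degeneracy relation indeed gives $q \approx 0$.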
The [*Infrared Space Observatory*]{} (ISO) showed that infrared luminous starbursts were much more numerous at $z \sim 1$ than at the present time [@franceschini2001; @elbaz2002]. The ISO results were expanded upon by deep surveys at 24 $\mu$m with the Multiband Imaging Photometer (MIPS) on the [*Spitzer Space Telescope*]{} (e.g. [@chary2004], [@papovich2004]). Using the excellent ancillary data in the Great Observatories Origins Deep Survey (GOODS) South and North fields, 15 $\mu$m and total infrared luminosity functions were derived from thousands of 24 $\mu$m sources [@lefloch2005; @perez2005]. Strong evolution of the IR population was found and the IR luminosity function evolves as $(1 + z)^4$ for $z \lesssim 1$ [@lefloch2005; @perez2005]. The 24 $\mu$m results are dependent on the set of SED templates used to extrapolate the 24 $\mu$m flux densities to 15 $\mu$m and total infrared luminosities. Furthermore, significant variations in the bolometric correction are expected as strong PAH and silicate emission and absorption features are redshifted into the 24 $\mu$m band. Observations with the 70 $\mu$m band of MIPS are closer to the peak in FIR emission and are not affected by PAH or silicate features for $z \lesssim 3$. They should therefore provide more robust estimates of the far-infrared (FIR) luminosities. Studies by ISO in the FIR regime have been limited in sensitivity ($S_{90 \mu{\rm m}} \gtrsim 100$ mJy, $S_{170\mu{\rm m}} > 200$ mJy) and redshift completeness [@serjeant2004; @takeuchi2006]. [@frayer2006] derived a FIR luminosity function (LF) for the Extragalactic First Look Survey (xFLS) from Spitzer 70 $\mu$m data, but this survey had incomplete redshift information at faint fluxes, and it was limited to $z < 0.3$ and bright ($S_{70\mu{\rm m}} \gtrsim 50$ mJy) sources. In this paper we present the infrared luminosity function up to redshift 1 from the ultra-deep 70 $\mu$m survey of GOODS-N. 
We assume a Hubble constant of $71\,{\rm km}\,{\rm s}^{-1}{\rm Mpc}^{-1}$, and a standard $\Lambda$-CDM cosmology with $\Omega_{\rm M}=0.27$ and $\Omega_{\rm \Lambda}=0.73$ throughout this paper. We define the IR flux as the integrated flux over the wavelength range 8 to 1000 $\mu$m. The Data ======== Ultra-deep 70 $\mu$m Imaging ---------------------------- The GOODS-N field is centered on the Hubble Deep Field North at 12h36m55s, +62$^\circ$14m15s. The MIPS 70$\,\mu$m observations of GOODS-N were carried out during Cycle 1 ([*Spitzer*]{} program ID 3325, [@frayer2006b]) and Cycle 3 (January 2006) for the Far Infrared Deep Extragalactic Legacy project (FIDEL, Spitzer PID:30948, PI: Dickinson). Together these data map a region of $10' \times 18'$ to a depth of $10.6\,$ksec. The raw data were processed off-line using the Germanium Reprocessing Tools (GeRT), following the techniques described in [@frayer2006b]. We have cataloged 143 sources (over $\sim$$185\,{\rm arcmin}^2$) with $S_{70}\,{\gtrsim}\,2.0\,$mJy (S/N$\,{>}\,3\sigma$) in GOODS-N. The 70$\,\mu$m images have a beam size of $18.5''$ FWHM, and in the presence of Gaussian noise the 1$\sigma$ positional error of sources is of the order $\frac{0.5\,\theta_{\rm FWHM}}{{\rm S/N}}$, i.e. $\sim 3''$ for the faintest sources. Redshifts --------- All 70 micron sources were matched to 24 micron and IRAC sources to obtain good positions. The best Spitzer position was then used to search for optical redshifts. About 7% of the 70 micron sources have more than one 24 micron source within the 70 micron beam, and these were deblended individually (e.g. [@huynh2007]). Spectroscopic redshifts are available for 116 of the 143 objects ([@cohen2000]; [@wirth2004]; Stern et al. in prep). Photometric redshifts were derived for 141 of the 143 sources with the extensive photometry available: ACS [*HST*]{} [@giavalisco2004], U- (NOAO), BVRIz- (Subaru-SupremeCam) and JK- (NOAO/KittPeak-Flamingo) imaging.
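The positional-uncertainty scaling above is simple to apply. A small sketch, taking the beam FWHM to be 18.5$''$ (consistent with the $\sim 3''$ error quoted for the faintest, 3$\sigma$, sources):

```python
def positional_error(theta_fwhm_arcsec, snr):
    """1-sigma positional error for a point source in Gaussian noise:
    sigma_pos ~ 0.5 * theta_FWHM / (S/N)."""
    return 0.5 * theta_fwhm_arcsec / snr

# 70 um beam at the 3-sigma catalog limit:
sigma_pos = positional_error(18.5, 3.0)  # ~3.1 arcsec
```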
The photometric redshifts were calculated using the $\chi^2$ minimization technique as explained in [@mobasher2006]. We have photometric redshifts for 26/27 sources that do not have a spectroscopic redshift, and we therefore have redshift information for 142/143 sources. We quantified the reliability of the photometric redshifts by examining the fractional error, $\Delta \equiv (z_{\rm phot} - z_{\rm spec}) / (1 + z_{\rm spec})$. For the 115 70 $\mu$m sources with both photometric and spectroscopic redshifts, we found that the median fractional error, $\Delta$, is $0.012 \pm 0.20$. Assuming the 6 cases where the fractional error is greater than 0.2 are outliers, the success rate of the photometric redshift method is 95%. Removing the 6 outliers gives a median fractional error of $0.0014 \pm 0.05$. We therefore conclude that the photometric redshifts are statistically reliable. The 70 micron sources have a median redshift of 0.64 (see Figure 1). The majority (79%) of sources lie at $z < 1$, as expected given the survey sensitivity and the steep k-correction at 70 micron. Infrared Luminosities ===================== Many authors argue that the MIR is a good indicator of the bolometric IR luminosity for normal and IR luminous galaxies (e.g. [@chary2001]). Based on this, several authors have developed sets of galaxy templates that can be used to estimate the total infrared luminosity ([@chary2001]; [@dh02]; [@lagache2003]). We use the luminosity dependent SED templates based on local galaxies from [@chary2001] to determine the IR luminosities of the 70 $\mu$m galaxies. However, it is not clear whether local templates can accurately reproduce the MIR SED of distant galaxies, because PAH and silicate absorption features are dependent on complex dust physics, including the intensity of the radiation field, the metallicity of the ISM, and the distribution of grain sizes.
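The first step of the luminosity estimate, converting an observed 70 $\mu$m flux density into a rest-frame monochromatic luminosity, can be sketched as follows. This is a deliberately simplified illustration: it uses a crude low-redshift luminosity-distance approximation and applies no k-correction or template fitting, which the actual analysis performs with the luminosity-dependent templates:

```python
import numpy as np

L_SUN_ERG_S = 3.83e33   # solar luminosity [erg/s]
MPC_CM = 3.086e24       # one Mpc in cm

def luminosity_distance_cm(z, h0=71.0):
    """Crude luminosity distance, D_L ~ (c z / H0) * (1 + z).
    A real analysis integrates over the Lambda-CDM expansion history."""
    c_km_s = 2.998e5
    return (c_km_s * z / h0) * (1 + z) * MPC_CM

def nu_l_nu_70(s70_mjy, z, h0=71.0):
    """nu*L_nu [L_sun] at the wavelength sampled by the observed 70 um band,
    from the observed flux density (no k-correction applied)."""
    dl = luminosity_distance_cm(z, h0)
    f_nu = s70_mjy * 1e-26          # mJy -> erg s^-1 cm^-2 Hz^-1
    nu_obs = 2.998e14 / 70.0        # observed frequency [Hz]
    return 4 * np.pi * dl**2 * nu_obs * f_nu / L_SUN_ERG_S
```

The total $L_{\rm IR}$ then follows by matching this monochromatic luminosity against the template grid; a 2 mJy source at $z \sim 0.5$ already lands in the LIRG regime.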
--- author: - 'P. Papaderos' - 'J.M. Gomes' - 'J.M. Vílchez' - 'C. Kehrig' - 'M.D. Lehnert' - 'B. Ziegler' - 'S. F. Sánchez' - 'B. Husemann' - 'A. Monreal-Ibero' - 'R. Garc[í]{}a-Benito' - 'J. Bland-Hawthorn' - 'C. Coritjo' - 'A. de Lorenzo-C[á]{}ceres' - 'A. del Olmo' - 'J. Falcón-Barroso' - 'L. Galbany' - 'J. Iglesias-Páramo' - 'Á.R. López-Sánchez' - 'I. Marquez' - 'M. Moll[á]{}' - 'D. Mast' - 'G. van de Ven' - 'L. Wisotzki' - the CALIFA collaboration date: 'Received 11 April 2013 / Accepted 2 June 2013' title: 'Nebular emission and the Lyman continuum photon escape fraction in CALIFA early-type galaxies [^1]' --- Introduction \[intro\] ====================== Even though the presence of faint nebular emission () in the nuclei of many early-type galaxies (ETGs) has long been established observationally [e.g., @Phillips1986; @sar06; @sar10; @ani10; @Kehrig2012 hereafter K12], the nature of the dominant excitation mechanism of the warm interstellar medium () in these systems remains uncertain. The [*low-ionization nuclear emission-line region*]{} (LINER) emission-line ratios, as a typical property of ETG nuclei, have prompted various interpretations [see, e.g., K12, @YanBlanton2012], including low-accretion rate active galactic nuclei [AGN; e.g., @Ho1999], fast shocks [e.g. @dop95], and hot, evolved ($\geq 10^8$ yr) post-AGB (pAGB) stars [e.g., @tri91; @bin94; @sta08]. Since each of these mechanisms is tied to distinct and testable expectations on the 2D properties of the , the limited spatial coverage of previous single-aperture and longslit spectroscopic studies has been an important obstacle to any conclusive discrimination between them. Spatially resolved integral field spectroscopy (IFS) over the entire extent of ETGs offers an essential advantage in this respect and promises key observational constraints toward the resolution of this longstanding debate.
This Letter gives a brief summary of our results from an ongoing study of 32 ETGs, which were mapped with deep IFS over their entire extent and optical spectral range with the goal of gaining deeper insight into the 2D properties of their . A detailed discussion of individual objects and our methodology will be given in Gomes et al. (2013, in prep.; hereafter G13) and subsequent publications of this series. This study is based on low-spectral-resolution ($R\sim 850$) IFS cubes for 20 E and 12 S0 nearby ($<$150 Mpc) galaxies from the [*Calar Alto Legacy Integral Field Area*]{} (CALIFA) survey [@Sanchez2012 Walcher et al. 2013, in prep.]. These data are being made accessible to the community in a fully reduced and well-documented format [@Husemann2013] through successive data releases. Methodology and results \[meth\] ================================ The CALIFA data cubes were processed with the pipeline (see K12 and G13 for details), which, among various other tasks, permits spaxel-by-spaxel spectral fitting of the stellar component with the population synthesis code [starlight]{} [@cid05] and subsequent determination of emission line fluxes and their uncertainties from the pure emission-line spectrum (i.e. the observed spectrum after subtraction of the best-fitting synthetic stellar model). For each ETG, typically $\sim$1600 to $\sim$3400 individual spectra with a S/N$\geq$30 at 5150 Å  were extracted and modeled in the spectral range 4000–6800 Å using both @bru03 [hereafter BC] and MILES [@san06; @vaz10] simple-stellar population (SSP) libraries, which comprise 34 ages between 5 Myr and 13 Gyr for three metallicities (0.008, 0.019, and 0.03), i.e., 102 elements each. After full analysis and cross-inspection of the relevant output from the BC- and MILES-based models, the emission-line maps for each ETG were error-weighted and averaged spaxel-by-spaxel to reduce uncertainties. 
An extra module in permits computation of the Lyman continuum () ionizing photon rate corresponding to the best-fitting set of BC SSPs for each spaxel. The  output is then converted into Balmer line luminosities assuming case B recombination for an electron temperature and density of $10^4$ K and 100 cm$^{-3}$, respectively. The same module computes the distance-independent $\tau$ ratio of the  luminosity predicted from pAGB photoionization to the one observed [see @bin94; @cid11 for equivalent quantities]. The latter is optionally corrected for intrinsic extinction, assuming this to be equal to the extinction A$_V$ in the stellar component (cf K12 and G13). Since spectral fits imply a low ($\leq$0.3 mag) A$_V$ in most cases, this correction typically has a weak effect on $\tau$. We preferred not to base corrections of the $\tau$ ratio on nebular extinction estimates, since these are consistent with A$_V$ within their uncertainties. We note that state-of-the-art SSP models imply that the  photon rate per unit mass from pAGB stellar populations of nearly-solar metallicity (0.008$\la Z \la$0.03) is almost independent of age, metallicity, and star formation history [e.g. @cid11 G13]. However, substantial uncertainties stem from the fact that existing models differ from one another by a factor $\sim$2 in the mean  output they predict for the pAGB stellar component [@cid11 see also, e.g., Brown et al. 2008 and Woods & Gilfanov 2013 for a discussion related to this subject]. These theoretical uncertainties presumably prevent a determination of the $\tau$ ratio to a precision better than a factor of $\sim$2 from currently available SSP models. Our analysis in Sects. \[r\_vs\_BPT\] and \[r\_vs\_i\] uses two complementary data sets: i) single-spaxel () determinations from fits with an absolute deviation $|O_{\lambda}-M_{\lambda}|/O_{\lambda} \leq 2.6$ (cf K12), where $O_{\lambda}$ is the observed spectrum and $M_{\lambda}$ the fit.
These are typically restricted to the central, brightest part ($\mu\la$23 $g$ [mag/$\sq\arcsec$]{}) of our sample ETGs. ii) The average of all single-spaxel determinations within isophotal annuli adapted to the morphology of the (line-free) continuum between 6390 Å and 6490 Å (cf K12). These data, which are to be considered in a *statistical sense*, go $\ga$2 mag fainter, allowing study of the azimuthally averaged properties of the  in the ETG periphery. [Fig. \[fig:r\_vs\_BPT\] caption: [*From top to bottom:*]{} two emission-line diagnostics, $\log$(EW()), and the extinction, vs normalized photometric radius. The gray shaded areas in panels a&b mark the mean and $\pm$1$\sigma$ of the respective quantity, and in panel c the mean EW() (0.43$\pm$0.65 Å). The light-blue area in panel c depicts the range in EW() that can be accounted for by pAGB photoionization models (0.1–2.4 Å). The color assigned to each ETG is related to its <$\tau$> (cf text and Fig. \[fig:tau2\]) in ascending order, from orange to violet, and is identical in all figures.] [Fig. \[fig:tau2\]: image Fig2.png.] [Fig. 3 caption: Normalized  intensity vs $\log(R^{\star})$ for our sample ETGs, based on  determinations. The diagonal lines correspond to a power-law intensity drop-off of the form $\log(I/I_0) \propto -\alpha\cdot \log(R^{\star})$, with $\alpha=1$. The right-hand side table lists the power-
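The $\tau$-ratio computation described above can be sketched numerically. This is a minimal illustration using standard case B recombination coefficients at $T_e = 10^4$ K (values taken from standard references, not quoted in the text; the module in the pipeline does this per spaxel with the fitted SSPs):

```python
# Case B at T_e = 1e4 K, n_e ~ 100 cm^-3 (standard tabulated values):
ALPHA_B = 2.59e-13      # total case B recombination coefficient [cm^3 s^-1]
ALPHA_HBETA = 3.03e-14  # effective Hbeta recombination coefficient [cm^3 s^-1]
E_HBETA = 4.09e-12      # energy of one Hbeta photon [erg]

def predicted_hbeta_luminosity(q_lyc):
    """Hbeta luminosity [erg/s] expected if a LyC photon rate q_lyc [s^-1]
    is fully absorbed by the warm gas (case B)."""
    return q_lyc * (ALPHA_HBETA / ALPHA_B) * E_HBETA

def tau_ratio(q_lyc, l_hbeta_observed):
    """Distance-independent ratio of predicted to observed Balmer luminosity."""
    return predicted_hbeta_luminosity(q_lyc) / l_hbeta_observed
```

A $\tau$ well above unity then signals that most of the predicted ionizing output is not reprocessed into observed nebular emission, i.e. a substantial Lyman continuum escape fraction.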
--- author: - 'A. Chanthbouala' - 'R. Matsumoto' - 'J. Grollier' - 'V. Cros' - 'A. Anane' - 'A. Fert' - 'A. V. Khvalkovskiy' - 'K.A. Zvezdin' - 'K. Nishimura' - 'Y. Nagamine' - 'H. Maehara' - 'K. Tsunekawa' - 'A. Fukushima' - 'S. Yuasa' title: 'Vertical current induced domain wall motion in MgO-based magnetic tunnel junction with low current densities' --- **Shifting electrically a magnetic domain wall (DW) by the spin transfer mechanism [@Slonczewski:JMMM:1996; @Berger:PRB:1996; @Grollier:APL:2003; @Klaui:APL:2003] is one of the foreseen routes for switching spintronic memories or registers [@Parkin:Science:2008; @NEC]. The classical geometries where the current is injected in the plane of the magnetic layers suffer from a poor efficiency of the intrinsic torques [@Hayashi:PRL:2007; @Klaui:PRL:2005] acting on the DWs. A way to circumvent this problem is to use vertical current injection [@Ravelosona:PRL:2006; @Boone:PRL:2010; @Rebei:PRB:2006]. In that case, theoretical calculations [@Khvalkovskiy:PRL:2009] attribute the microscopic origin of DW displacements to the out-of-plane (field-like) spin transfer torque [@Slonczewski:PRB:2005; @Theodonis:PRL:2006]. Here we report experiments in which we controllably displace a DW in the planar electrode of a magnetic tunnel junction by vertical current injection. Our measurements confirm the major role of the out-of-plane spin torque for DW motion, and allow us to quantify this term precisely. The involved current densities are about 100 times smaller than those commonly observed with in-plane currents [@Lou:APL:2008]. Step-by-step resistance switching of the magnetic tunnel junction opens a new way for the realization of spintronic memristive devices [@Strukov:Nature:2008; @Wang:IEEE:2009; @Grollier:patent:2010].** We devise an optimized sample geometry for efficient current-induced DW motion, using a magnetic tunnel junction with an MgO barrier sandwiched between two ferromagnetic layers, one free, the other fixed.
Such junctions are already the building block of magnetic random-access memories (M-RAMs), which makes our device suitable for memory applications. The large tunnel magnetoresistance [@Yuasa:NatMat:2004; @Parkin:NatMat:2004] allows us to detect clearly DW motions when they propagate in the free layer of the stack [@Kondou:APEX:2008]. The additional advantage of magnetic tunnel junctions is that the out-of-plane field-like torque $\mathbf{T_{OOP}}$ can reach large amplitudes, up to 30$\%$ of the classical in-plane torque $\mathbf{T_{IP}}$ [@Sankey:Nature:2007; @Kubota:Nature:2007], in contrast to metallic spin-valve structures, in which the out-of-plane torque is only a few $\%$ of the in-plane torque [@Stiles:PRB:2002; @Xia:PRB:2002]. This is of fundamental importance since theoretical calculations predict that, when the free and reference layers are based on materials with the same magnetization orientation (either in-plane or perpendicular), the driving torque for steady domain wall motion by vertical current injection is the OOP field-like torque [@Khvalkovskiy:PRL:2009]. Indeed, $\mathbf{T_{OOP}}$ is equivalent to the torque of a magnetic field in the direction of the reference layer, which has the proper symmetry to push the DW along the free layer. On the contrary, the in-plane torque $\mathbf{T_{IP}}$ can only induce a small shift of the DW of a few nm. In magnetic tunnel junctions with the same composition for the top free and bottom reference layers, the OOP field-like torque exhibits a quadratic dependence on bias [@Sankey:Nature:2007; @Kubota:Nature:2007], which would not allow us to reverse the DW motion by current inversion. Therefore we use an asymmetric layer composition to obtain an asymmetric OOP field-like torque [@Oh:Nature:2009; @Tang:PRB:2010]. ![image](fig1.pdf){width=".7\textwidth"} The magnetic stack is sketched in Fig.\[fig1\] (a). The top free layer is (CoFe 1nm/NiFe 4 nm), and the fixed layer is a CoFeB alloy. An S.E.M.
top view image of the sample geometry before adding the top contact is shown in Fig.\[fig1\] (b). The half-ring shape was designed for two reasons. First, it facilitates the DW creation [@Saitoh:Nature:2004]. As can be seen from the micromagnetic simulations presented in Fig.\[fig1\] (d), the larger width at the edges stabilizes the DW at an intermediate position in the wire. Secondly, it allows a specific distribution of the Oersted field created by the perpendicular current, as shown by the simulations of Fig.\[fig1\] (c). Thanks to the hollow center, the Oersted field is quasi-unidirectional along the wire, and can assist the DW propagation. We first focus on the results obtained with the 210 nm wide wires. A sketch of the sample geometry is given in Fig.\[fig1\] (d), including our convention for the angle of the applied magnetic field. In order to create and pin a DW, we tilt the magnetic field to 75$^{\circ}$. As can be seen in Fig.\[fig2\] (a), plateaus appear in the resistance vs. field R(H) curve, corresponding to the creation of a magnetic domain wall close to the sample edge (as in the micromagnetic simulation of Fig.\[fig1\] (d)). We chose to work with the plateau obtained at positive fields ($\approx$ + 15 Oe) close to the AP state, which is stable when the field is swept back to zero. This DW creation/pinning process is reproducible, allowing measurements with the same initial state. The strength of the pinning can be evaluated by measuring the corresponding depinning fields. After pinning the DW and coming back to zero field, the R(H) curves have been measured by increasing the field amplitude along 90$^{\circ}$, either to negative or positive values, as shown in Fig.\[fig2\] (b). The positive (resp. negative) depinning fields are $H_{dep}^+$ = +22 Oe and $H_{dep}^-$ = - 43 Oe.
This indicates an asymmetry of the potential well, which is due to the dipolar field of the synthetic antiferromagnet ($\approx$ + 40 Oe) and also to the asymmetric geometry of the sample close to the edge. ![image](fig2.pdf){width=".7\textwidth"} In order to study the current-induced domain wall depinning, once the domain wall is created, we apply a fixed magnetic field between $H_{dep}^+$ and $H_{dep}^-$, for example - 10 Oe, corresponding to zero effective field, as illustrated by a blue vertical line in Fig.\[fig2\] (b). In our convention, a positive current corresponds to electrons flowing from the synthetic antiferromagnet to the free layer. In Fig.\[fig2\] (c), we show two resistance versus current curves obtained at - 10 Oe, starting always from the same initial DW position (resistance 16.6 $\Omega$). In addition to the expected decrease of the tunnel resistance with bias, we clearly observe irreversible resistance jumps. When the current is swept first to positive values (green curve), the resistance switches at $I_{dep}^+$ = + 7 mA to a lower resistance state corresponding to another domain wall position, stable at zero current, with a low-bias resistance of 16.1 $\Omega$. After resetting the DW position and then applying negative currents (red curve), a resistance jump to a higher resistance state of 17.3 $\Omega$ occurs at $I_{dep}^-$ = -11 mA. We thus demonstrate the possibility to move a domain wall by perpendicular dc current injection in both directions, depending on the current sign. The current densities corresponding to the DW motion are lower than $4 \times 10^6$ A cm$^{-2}$ (see top x axis of Fig.\[fig2\] (c)). The use of perpendicular current injection therefore allows the current densities to be reduced by a factor of 100 compared to classical lateral current injection [@Hayashi:PRL:2007; @Klaui:PRL:2005]. ![image](fig3.pdf){width=".7\textwidth"} Similar measurements have been performed for several fields between $H_{dep}^+$ and $H_{dep}^-$.
As shown in Fig.\[fig2\] (d), the resistance associated with each pinning center changes progressively as a function of the applied magnetic field, which can be ascribed to field-induced DW displacement/deformation within the potential well. The depinning currents also depend strongly on the applied magnetic field. Negative fields favour domain wall motion in the -90$^{\circ}$ direction, thus reducing the values of $I_{dep}^-$ and increasing $I_{dep}^+$. As expected, the effect
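The current-density scale quoted above follows directly from the depinning current and the junction cross-section. A sanity-check sketch; the junction area used here is an assumed, illustrative value (not given in the text), chosen so that the +7 mA depinning current reproduces the stated order of magnitude:

```python
def current_density(current_a, area_cm2):
    """Current density J = I / A for perpendicular injection through the junction."""
    return current_a / area_cm2

# With I_dep+ = 7 mA and an ASSUMED effective junction area of 1.75e-9 cm^2
# (~0.18 um^2), J lands at the stated ~4e6 A cm^-2 scale:
j_dep = current_density(7e-3, 1.75e-9)
```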
--- author: - 'Roland Diehl[^1]' title: Astrophysics with Radioactive Isotopes --- Origin of Radioactivity \[intro\] ======================= The nineteenth century spawned various efforts to bring order into the elements encountered in nature. Among the most important was an inventory of the [*elements*]{} assembled by the Russian chemist Dimitri Mendeleyev in 1869, which grouped elements according to their chemical properties, their [*valences*]{}, as derived from the compounds they were able to form, at the same time sorting the elements by atomic weight. The genius of Mendeleyev lay in his confidence in these sorting principles, which forced him to leave gaps in his table for expected but then-unknown elements, whose physical and chemical properties he was able to predict. The tabular arrangement invented by Mendeleyev (Fig. \[fig\_1\_periodic\_table\]) is still in use today, and is being populated at the high-mass end by the great experiments in heavy-ion collider laboratories to create the short-lived elements predicted to exist. The second half of the nineteenth century thus saw scientists excited about chemistry and the fascinating discoveries one could make using Mendeleyev’s sorting principles. Note that this was some 30 years before sub-atomic particles and the atom were discovered. Today the existence of 118 elements is firmly established[^2], the latest additions, nos. 113–118, all discovered in 2016, which reflects the concerted experimental efforts. ![The periodic table of elements, grouping chemical elements according to their chemical-reaction properties and their atomic weight, after Mendeleyev (1869), in its 2016 version (IUPAC.org)[]{data-label="fig_1_periodic_table"}](Fig_1_IUPAC_PeriodicTable_Nov16.pdf "fig:"){width="\textwidth"}\ In the late nineteenth century, scientists also were excited about new types of penetrating radiation.
Wilhelm Conrad Röntgen’s discovery in 1895 of [*X-rays*]{} as a type of electromagnetic radiation is important for understanding the conditions under which Antoine Henri Becquerel discovered radioactivity in 1896. Becquerel also was engaged in chemical experiments, in his research on phosphorescence exploiting the chemistry of photographic-plate materials. At the time, Becquerel had prepared some plates treated with uranium-carrying minerals, but did not get around to making the planned experiment. When he found the plates in their dark storage some time later, he processed them nevertheless, and was surprised to find an image of a coin which happened to have been stored with the plates. Excited about X-rays, he believed he had found yet another type of radiation. Within a few years, Becquerel, together with Marie and Pierre Curie and others, recognised that the origin of the observed radiation was elemental transformations of the uranium minerals: the physical process of [*radioactivity*]{} had been found! The revolutionary aspect of elements being able to spontaneously change their nature became masked at the beginning of the twentieth century, when sub-atomic particles and the atom were discovered. But well before atomic and quantum physics began to unfold, the physics of [*weak interactions*]{} had already been discovered in the form of [*radioactivity*]{}. The different characteristics of different chemical elements and the systematics of Mendeleyev’s periodic table were soon understood from the atomic structure of a compact and positively charged nucleus and a number of electrons orbiting the nucleus and neutralising the charge of the atom. Bohr’s atomic model led to the dramatic developments of quantum mechanics and spectroscopy of atomic shell transitions. But already in 1920, Ernest Rutherford proposed that an electrically neutral particle of similar mass to the hydrogen nucleus (proton) had to be part of the compact atomic nucleus.
It took more than a decade to verify the existence of this ’neutron’ by experiment, achieved by James Chadwick in 1932. The atomic nucleus, too, was seen as a quantum mechanical system composed of a multitude of particles bound by the strong nuclear force. This latter characteristic is common to ’hadrons’, i.e. the electrically charged proton and the neutron, the latter being slightly more massive[^3]. Neutrons remained a mystery for so long because they are unstable and decay, with a mean life of 880 seconds, through the weak interaction into a proton, an electron, and an anti-neutrino. This is the origin of radioactivity. The chemical and physical characteristics of an element are dominated by its electron configuration, hence by the number of charges contained in the atomic electron cloud, which in turn is dictated by the charge of the atomic nucleus, the number of protons. The number of neutrons included in the nucleus is important as it changes the mass of the atom, while the electron configuration, and hence the element’s properties, is hardly affected. Therefore, we distinguish *isotopes* of each particular chemical element, which differ in the number of neutrons included in the nucleus, but carry the same charge of the nucleus. For example, we know of three stable isotopes of oxygen as found in nature, $^{16}$O, $^{17}$O, and $^{18}$O. There are more possible nucleus configurations of oxygen with its eight protons, ranging from $^{13}$O as the lightest to $^{24}$O as the most massive known isotope. An [*isotope*]{} is defined by the number of its two types of nucleons[^4], [*protons*]{} (the number of protons defines the charge number Z) and [*neutrons*]{} (the sum of the numbers of protons and neutrons defines the mass number A), written as $^A$X for an element ’X’.
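The neutron mean life quoted above translates directly into a half-life and a survival fraction for free neutrons; a small sketch:

```python
import math

NEUTRON_MEAN_LIFE_S = 880.0  # mean life tau quoted in the text [s]

def surviving_fraction(t_s, tau_s=NEUTRON_MEAN_LIFE_S):
    """Fraction of free neutrons surviving after t seconds: N/N0 = exp(-t/tau)."""
    return math.exp(-t_s / tau_s)

# Half-life t_1/2 = tau * ln(2):
half_life_s = NEUTRON_MEAN_LIFE_S * math.log(2)  # ~610 s
```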
Note that some isotopes may exist in different nuclear quantum states which have significant stability by themselves, so that transitions between these configurations may liberate the binding energy differences; such states of the same isotope are called [*isomers*]{}. The landscape of isotopes is illustrated in Fig. \[fig\_1\_table-of-isotopes\], with black symbols as the naturally-existing stable isotopes, and coloured symbols for unstable isotopes. Unstable isotopes, once produced, will be *radioactive*, i.e. they will transmute to other isotopes through nuclear interactions, until at the end of such a decay chain a stable isotope is produced. Weak interactions will mediate transitions between protons and neutrons and lead to neutrino emission, involvements of atomic-shell electrons will result in X-rays from atomic-shell transitions after electron capture and internal-conversion transitions, and $\gamma$-rays will be emitted in electromagnetic transitions between excitation levels of a nucleus. The production of non-natural isotopes and thus the generation of man-made radioactivity led to the Nobel Prize in Chemistry being awarded to Jean Frédéric Joliot-Curie and his wife Irène in 1935 – the second Nobel Prize awarded for the subject of radioactivity after the 1903 Nobel Prize in Physics awarded jointly to Pierre Curie, Marie Skłodowska Curie, and Henri Becquerel. At the time of writing, element 118, called oganesson (Og), is the most massive superheavy element which has been synthesised and found to exist at least for short time intervals, although more massive elements may exist in an island of stability beyond. ![The table of isotopes, showing nuclei in a chart of neutron number (abscissa) versus proton number (ordinate). The stable elements are marked in black.
All other isotopes are unstable, or radioactive, and will decay until a stable nucleus is obtained.[]{data-label="fig_1_table-of-isotopes"}](Fig_Table_of_Isotopes.pdf "fig:"){width="\textwidth"}\ Depending on the astrophysical objective, radioactive isotopes may be called *short-lived* or *long-lived*, depending on how the radioactive lifetime compares to the astrophysical time scales of interest. Examples are the utilisation of [$^{26}$Al]{} and [$^{60}$Fe]{} ($\tau\sim$My) diagnostics of the early solar system (*short-lived*, Chap. 6) or of nucleosynthesis source types (*long-lived*, Chap. 3-5). Which radioactive decays are to be expected? What are stable configurations of nucleons inside the nuclei involved in a production and decay reaction chain? The answer to this involves an understanding of the nuclear forces and reactions, and the structure of nuclei. This is an area of current research, characterised by combinations of empirical modeling, with some capability of *ab initio* physical descriptions, and far from being fully understood. Nevertheless, a few general ideas appear well established. One of these is recognising a system’s trend towards minimising its total energy, and inspecting herein the concept of *nuclear binding energy*. It can be summarised in the expression for nuclear masses [@Weizsacker:1935]: $$m(Z,A) = Z m_p + (A-Z) m_n - BE$$ with $$BE = a_{volume} A - a_{surface} A^{2/3} - a_{coulomb} {Z^2 \over {A^{1/3}}} - a_{asymmetry} {{{(A-2Z)}^2} \over {4A}} - {\delta \over A^{1/2}}$$ The total *binding energy* (BE) is used as a key parameter for a system of nucleons, and nucleons may thus adopt bound states of lower energy than the sum of the free nucleons, towards a global minimum of system energy. Thus, in a thermal mixture of nucleons, bound nuclei will be formed,
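The liquid-drop expression above is easy to evaluate. A sketch follows; the coefficient values (in MeV) are one common illustrative set, not unique fitted values, and the asymmetry coefficient is scaled for the $(A-2Z)^2/(4A)$ convention used in the formula above (it is $\sim$4$\times$ the value usually quoted for the $(A-2Z)^2/A$ form):

```python
import math

# Illustrative semi-empirical mass-formula coefficients [MeV]; fitted values
# vary between references. A_ASYM is for the (A-2Z)^2/(4A) convention.
A_VOL, A_SURF, A_COUL, A_ASYM, A_PAIR = 15.75, 17.8, 0.711, 94.8, 11.18

def binding_energy(Z, A):
    """Total nuclear binding energy BE(Z, A) in MeV (liquid-drop estimate)."""
    N = A - Z
    pairing = A_PAIR / math.sqrt(A)
    if (Z % 2) + (N % 2) == 1:   # odd A: no pairing term
        delta = 0.0
    elif Z % 2 == 0:             # even-even: more tightly bound
        delta = -pairing
    else:                        # odd-odd: less tightly bound
        delta = +pairing
    return (A_VOL * A
            - A_SURF * A**(2/3)
            - A_COUL * Z**2 / A**(1/3)
            - A_ASYM * (A - 2*Z)**2 / (4*A)
            - delta)

# Binding energy per nucleon peaks near iron (~8.8 MeV for 56Fe):
be_per_nucleon_fe56 = binding_energy(26, 56) / 56
```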
--- abstract: | Because the baryon-to-photon ratio $\eta_{10}$ is in some doubt, we drop nucleosynthetic constraints on $\eta_{10}$ and fit the three cosmological parameters $(h, \Omega_{\mathrm{M}}, \eta_{10})$ to four observational constraints: Hubble parameter $h_{\mathrm{o}} = 0.70 \pm 0.15$, age of the universe $t_{\mathrm{o}} = 14^{+7}_{-2}$ Gyr, cluster gas fraction $f_{\mathrm{o}} \equiv f_{\mathrm{G}}h^{3/2} = 0.060 \pm 0.006$, and effective shape parameter $\Gamma_{\mathrm{o}} = 0.255 \pm 0.017$. Errors quoted are $1\sigma$, and we assume Gaussian statistics. We experiment with a fifth constraint $\Omega_{\mathrm{o}} = 0.2 \pm 0.1$ from clusters. We set the tilt parameter $n = 1$ and the gas enhancement factor $\Upsilon = 0.9$. We consider CDM models (open and $\Omega_{\mathrm{M}} = 1$) and flat $\Lambda$CDM models. We omit HCDM models (to which the $\Gamma_ {\mathrm{o}}$ constraint does not apply). We test goodness of fit and draw confidence regions by the $\Delta\chi^2$ method. CDM models with $\Omega_{\mathrm{M}} =1$ (SCDM models) are accepted only because the large error on $h_{\mathrm{o}}$ allows $h < 0.5$. Baryonic matter plays a significant role in $\Gamma_{\mathrm{o}}$ when $\Omega_{\mathrm{M}} \sim 1$. Open CDM models are accepted only for $\Omega_{\mathrm{M}} \gtrsim 0.4$. The combination of the four other constraints with $\Omega_{\mathrm{o}} \approx 0.2$ is rejected in CDM models with 98% confidence, suggesting that light may not trace mass. $\Lambda$CDM models give similar results. In all of these models, $\eta_{10}$ $\gtrsim 6$ is favored strongly over $\eta_{10}$ $\lesssim 2$. This suggests that reports of low deuterium abundances on QSO lines of sight may be correct, and that observational determinations of primordial $^4$He may have systematic errors. Plausible variations on $n$ and $\Upsilon$ in our models do not change the results much. 
If we drop or change the crucial $\Gamma_{\mathrm{o}}$ constraint, lower values of $\Omega_{\rm M}$ and $\eta_{10}$ are permitted. The constraint $\Gamma_{\mathrm{o}} = 0.15 \pm 0.04$, derived recently from the IRAS redshift survey, favors $\Omega_{\rm M} \approx 0.3$ and $\eta_{10} \approx 5$ but does not exclude $\eta_{10} \approx 2$. author: - 'Gary Steigman, Naoya Hata, and James E. Felten' title: | Non-Nucleosynthetic Constraints on the Baryon Density\ and Other Cosmological Parameters --- INTRODUCTION {#Sec:Introduction} ============ In a Friedmann-Lemaître big bang cosmology, the universal baryonic mass-density parameter $\Omega_{\mathrm{B}}\; (\,\equiv 8 \pi G \rho_{\mathrm{B}}/3H_0^2\,)$ may be calculated from $$\begin{split} \Omega_{\mathrm{B}}\,h^2 & = 3.675 \times 10^{-3}(T/2.73\,\mathrm{K})^3 \; \eta_{10} \\ & = 3.667 \times 10^{-3} \; \eta_{10}, \label{Eq:Omega_B} \end{split}$$ where $h$ is defined by the present Hubble parameter $H_0 \; [\, h \equiv H_0/(100$ km s$^{-1}$ Mpc$^{-1})\,]$, $T$ is the present microwave background temperature, and $\eta_{10}$ is the baryon-to-photon number ratio in units $10^{-10}$. The last member of equation (\[Eq:Omega\_B\]) is obtained by setting $T = 2.728$ K (Fixsen et al. 1996). In principle, $\eta_{10}$ is well determined (in fact overdetermined) by the observed or inferred primordial abundances of the four light nuclides D, $^3$He, $^4$He, and $^7$Li, if the number of light-neutrino species has its standard value $N_\nu =3$. For some years it has been argued that $\eta_{10}$ is known to be $3.4 \pm 0.3$ (Walker et al. 1991; these error bars are about “1$\sigma$”; cf. Smith, Kawano, & Malaney 1993) or at worst $4.3 \pm 0.9$ (Copi, Schramm, & Turner 1995a; cf. Yang et al. 1984), and that equation (\[Eq:Omega\_B\]) is a powerful constraint on the cosmological parameters $\Omega_{\mathrm{B}}$ and $h$. 
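Equation (\[Eq:Omega\_B\]) is a one-line computation. The sketch below evaluates $\Omega_{\mathrm{B}}$ for sample values of $\eta_{10}$ and $h$; the chosen inputs are illustrative, not fits from this paper.

```python
# Omega_B from eta_10 and h, following Eq. (Omega_B):
# Omega_B h^2 = 3.675e-3 * (T / 2.73 K)^3 * eta_10.  The sample values of
# eta_10 and h below are illustrative choices, not fits from this paper.
def omega_baryon(eta10, h, T=2.728):
    """Baryonic density parameter Omega_B."""
    return 3.675e-3 * (T / 2.73) ** 3 * eta10 / h ** 2

# eta_10 = 5 (roughly the revised low-D/H value) with h = 0.70:
print(omega_baryon(5.0, 0.70))
```

Note that at T = 2.728 K the prefactor reduces to the 3.667e-3 quoted in the last member of the equation.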
In practice, it seems recently that $\eta_{10}$ may not be so well determined, and even that the standard theory of big bang nucleosynthesis (BBN) may not give a good fit. With improved abundance data, it appears that the joint fit of the theory to the four nuclide abundances is no longer good for any choice of $\eta_{10}$ (Hata et al. 1995). These authors offer several options for resolving the apparent conflict between theory and observation. Although they suggest that some change in standard physics may be required (e.g., a reduction in the effective value of $N_\nu$ during BBN below its standard value 3), they note that large systematic errors may compromise the abundance data (cf. Copi, Schramm, & Turner 1995b). The nature of such errors is unclear, and this remains controversial. Other authors have reacted to the impending crisis in self-consistency by simply omitting one or more of the four nuclides in making the fit (Dar 1995; Olive & Thomas 1997; Hata et al. 1996, 1997; Fields et al. 1996). This controversy has been sharpened by new observations giving the deuterium abundances on various lines of sight to high-redshift QSOs. These data should yield the primordial D abundance, but current results span an order of magnitude. The low values, D/H by number $\approx 2 \times 10^{-5}$ (Tytler, Fan, & Burles 1996; Burles & Tytler 1996), corresponding to $\eta_{10} \approx 7$ in the standard model, have been revised slightly upward \[D/H $\approx (3-4) \times 10^{-5}$ (Burles & Tytler 1997a,b,c); $\eta_{10} \approx 5$\], but it still seems impossible to reconcile the inferred abundance of $^4$He \[Y$_{\rm P} \approx 0.234$; Olive & Steigman 1995 (OS)\] with standard BBN for this large value of $\eta_{10}$ (which implies Y$_{\rm BBN} \approx 0.247$) unless there are large systematic errors in the $^4$He data (cf. Izotov, Thuan, & Lipovetsky 1994, 1997). 
Such low D/H values have also been challenged on observational grounds by Wampler (1996) and by Songaila, Wampler, and Cowie (1997), and deuterium abundances nearly an order of magnitude higher, D/H $\approx 2\times10^{-4}$, have been claimed by Carswell et al. (1994), Songaila et al. (1994), and Rugers and Hogan (1996) for other high-redshift systems with metal abundances equally close to primordial. Although some of these claims of high deuterium have been called into question (Tytler, Burles, & Kirkman 1997), Hogan (1997) and Songaila (1997) argue that the spectra of other absorbing systems require high D/H (e.g., Webb et al. 1997). If these higher abundances are correct, then D and $^4$He are consistent with $\eta_{10} \approx 2$, but modellers of Galactic chemical evolution have a major puzzle: How has the Galaxy reduced D from its high primordial value to its present (local) low value without producing too much $^3$He (Steigman & Tosi 1995), without using up too much interstellar gas (Edmunds 1994, Prantzos 1996), and without overproducing heavy elements (cf. Tosi 1996 and references therein)? It appears that $\eta_{10}$, though known to order of magnitude, may be among the less well-known cosmological parameters at present. Despite this, large modern simulations which explore other cosmological parameters are often limited to a single value of $\eta_{10} = 3.4$ (e.g., Borgani et al. 1997). In this situation it may be instructive, as a thought experiment, to abandon nucleosynthetic constraints on $\eta_{10}$ entirely and ask: If we put $\eta_{10}$ onto the same footing as the other cosmological free parameters, and apply joint constraints on all these parameters based on other astronomical observations and on theory and simulation, what values of
--- author: - | \ ITEP, B.Cheremushkinskaya 25, Moscow, 117259, Russia\ E-mail: - | A.I.Veselov\ ITEP, B.Cheremushkinskaya 25, Moscow, 117259, Russia\ E-mail: title: Upper bound on the cutoff in the Standard Model --- Introduction ============ According to the conventional point of view the upper bound $\Lambda$ on the cutoff in the Electroweak theory (without fermions) depends on the Higgs mass. It is decreased when the Higgs mass is increased. At a Higgs mass around $1$ TeV, $\Lambda$ becomes of the order of $M_H$. At the same time, for $M_H \sim 200$ GeV the value of $\Lambda$ can be made almost infinite[^1]. This conclusion is based on the perturbation expansion around the trivial vacuum. In our presentation we demonstrate that the vacuum of the lattice Weinberg - Salam model is rather complicated, which means that the application of the perturbation expansion around the trivial vacuum may be limited. Namely, we investigate the behavior of the topological defects composed of the lattice gauge fields that are to be identified with quantum Nambu monopoles [@Nambu; @BVZ; @Chernodub_Nambu]. We show that their lattice density increases along the lines of constant physics when the ultraviolet cutoff is increased. At sufficiently large values of the cutoff these objects begin to dominate. Moving further along the line of constant physics we reach the point on the phase diagram where the monopole worldlines begin to percolate. This point roughly coincides with the position of the transition between the physical Higgs phase and the unphysical symmetric phase of the lattice model. At infinite bare scalar self coupling $\lambda$ the transition is a crossover and the ultraviolet cutoff achieves its maximal value around $1.4$ TeV at the transition point. At smaller bare values of $\lambda$, corresponding to small Higgs masses, the phase transition becomes stronger. Still we do not know the order of the phase transition at small values of $\lambda$. 
We have estimated the maximal value of the cutoff in the vicinity of the transition point at $\lambda = 0.009$. The obtained value of the cutoff appears to be around $1.4$ TeV. The lattice model under investigation ===================================== The lattice Weinberg - Salam Model without fermions contains the gauge field ${\cal U} = (U, \theta)$ (where $ \quad U \in SU(2), \quad e^{i\theta} \in U(1)$ are realized as link variables), and the scalar doublet $ \Phi_{\alpha}, \;(\alpha = 1,2)$ defined on sites. The action is taken in the form $$\begin{aligned} S & = & \beta \!\! \sum_{\rm plaquettes}\!\! ((1-\mbox{${\small \frac{1}{2}}$} \, {\rm Tr}\, U_p ) + \frac{1}{{\rm tg}^2 \theta_W} (1-\cos \theta_p))\nonumber\\ && - \gamma \sum_{xy} Re(\Phi^+U_{xy} e^{i\theta_{xy}}\Phi) + \sum_x (|\Phi_x|^2 + \lambda(|\Phi_x|^2-1)^2), \label{S}\end{aligned}$$ where the plaquette variables are defined as $U_p = U_{xy} U_{yz} U_{wz}^* U_{xw}^*$, and $\theta_p = \theta_{xy} + \theta_{yz} - \theta_{wz} - \theta_{xw}$ for the plaquette composed of the vertices $x,y,z,w$. Here $\lambda$ is the scalar self coupling, and $\gamma = 2\kappa$, where $\kappa$ corresponds to the constant used in the investigations of the $SU(2)$ gauge Higgs model. $\theta_W$ is the Weinberg angle. The bare fine structure constant $\alpha$ is expressed through $\beta$ and $\theta_W$ as $\alpha = \frac{{\rm tg}^2 \theta_W}{\pi \beta(1+{\rm tg}^2 \theta_W)}$. In our investigation we fix the bare Weinberg angle equal to $30^\circ$. The renormalized fine structure constant can be extracted through the potential for the infinitely heavy external charged particles. Phase diagram ============= The phase diagram at infinite $\lambda$ is represented on Fig.1. The dashed vertical line represents the confinement-deconfinement phase transition corresponding to the $U(1)$ constituent of the model. The continuous horizontal line corresponds to the transition between the broken and the symmetric phases. 
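The relation between the bare fine structure constant, $\beta$ and $\theta_W$ given above can be checked numerically. A minimal sketch, using the values quoted in the text ($\theta_W = 30^\circ$ and, for the plot of Fig. 2, $\beta = 12$):

```python
import math

# Bare fine structure constant from the relation above:
# alpha = tan^2(theta_W) / (pi * beta * (1 + tan^2(theta_W))).
def bare_alpha(beta, theta_w_deg):
    """Bare alpha for lattice coupling beta and Weinberg angle in degrees."""
    t2 = math.tan(math.radians(theta_w_deg)) ** 2
    return t2 / (math.pi * beta * (1.0 + t2))

# At theta_W = 30 deg, tan^2(theta_W) = 1/3, so at beta = 12 the bare
# coupling is exactly 1/(48*pi), i.e. about 0.0066:
print(bare_alpha(12.0, 30.0))
```

The resulting bare value is of the same order as the renormalized $\alpha = 1/128$ along the line of constant physics mentioned above.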
Real physics is commonly believed to be achieved within the phase of the model situated in the right upper corner of Fig. $1$. The double-dotted-dashed vertical line on the right-hand side of the diagram represents the line, where the renormalized $\alpha$ is constant and is equal to $1/128$. Qualitatively the phase diagram at finite $\lambda$ looks similar to that of infinite $\lambda$. In the three - dimensional ($\beta, \gamma, \lambda$) phase diagram the transition surfaces are two - dimensional. The lines of constant physics on the tree level are the lines ($\frac{\lambda}{\gamma^2} = \frac{1}{8 \beta} \frac{M^2_H}{M^2_W} = {\rm const}$; $\beta = \frac{1}{4\pi \alpha}={\rm const}$). In general the cutoff is increased along the line of constant physics when $\gamma$ is decreased. The maximal value of the cutoff is achieved at the transition point. Nambu monopole density in lattice units is also increased when the ultraviolet cutoff is increased. At $\beta = 12$ the phase diagram is represented on Fig. 2. The physical Higgs phase is situated up to the transition line. The position of the transition is localized at the point where the susceptibility extracted from the Higgs field creation operator achieves its maximum. All simulations were performed on lattices of sizes $8^3\times 16$. Several points were checked using larger lattices up to $16^3\times 24$. At $\lambda = \infty$ we found no significant difference between the results obtained using the mentioned lattices. For small $\lambda$ the careful investigation of the dependence of physical observables on the lattice size has not been performed. Calculation of the cutoff ========================= The following variable is considered as creating the $Z$ boson: $ Z_{xy} = Z^{\mu}_{x} \; = {\rm sin} \,[{\rm Arg} (\Phi_x^+U_{xy} e^{i\theta_{xy}}\Phi_y) ]$. 
In order to evaluate the masses of the $Z$-boson and the Higgs boson we use the correlators: $$\frac{1}{N^6} \sum_{\bar{x},\bar{y}} \langle \sum_{\mu} Z^{\mu}_{x} Z^{\mu}_{y} \rangle \sim e^{-M_{Z}|x_0-y_0|}+ e^{-M_{Z}(L - |x_0-y_0|)} \label{corZ}$$ and $$\frac{1}{N^6}\sum_{\bar{x},\bar{y}}(\langle H_{x} H_{y}\rangle - \langle H\rangle^2) \sim e^{-M_{H}|x_0-y_0|}+ e^{-M_{H}(L - |x_0-y_0|)}. \label{cor}$$ Here the summation $\sum_{\bar{x},\bar{y}}$ is over the three “space” components of the four - vectors $x$ and $y$ while $x_0, y_0$ denote their “time” components. $N$ is the lattice length in the “space” direction. $L$ is the lattice length in the “time” direction. In lattice calculations we used two different operators that create Higgs bosons: $ H_x = |\Phi|$ and $H_x = \sum_{y} Z^2_{xy}$. In both cases $H_x$ is defined at the site $x$, and the sum $\sum_y$ is over its neighboring sites $y$. After fixing the unitary gauge, lattice Electroweak theory becomes a lattice $U(1)$ gauge theory. The $U(1)$ gauge field is $ A_{xy} = A^{\mu}_{x} \; = \,[-{\rm Arg} (\Phi_x^+U_{xy} e^{i\theta_{xy}}\Phi_y) + 2\theta_{xy}] \,{\rm mod} \,2\pi$. The usual Electromagnetic field is $ A_{\rm EM} = A + Z^{\prime} - 2 \,{\rm sin}^2\, \theta_W Z^{\prime}$, where $Z^{\prime} = [ {\rm Arg} (\Phi_x^+U_{xy} e^{i\theta_{xy}}\Phi_y) ]{\rm mod} 2\pi$. The physical scale is given in our lattice theory by the value of the $Z$-boson mass $M^{phys}_Z \sim 91$ GeV. Therefore the lattice spacing is evaluated to be $a \sim [91 {\rm GeV}]^{-1} M
--- abstract: 'We experimentally study the propagation of circularly polarized light in the sub-diffusion regime by exploiting enhanced backscattering (EBS, also known as coherent backscattering) of light under low spatial coherence illumination. We demonstrate for the first time that circular polarization memory effect exists in EBS over a large range of scatterers’ sizes in this regime. We show that EBS measurements under low spatial coherence illumination from the helicity preserving and orthogonal helicity channels cross over as the mean free pathlength of light in media varies, and that the cross point indicates the transition from multiple to double scattering in EBS of light.' author: - 'Young L. Kim' - Prabhakar Pradhan - 'Min H. Kim' - Vadim Backman bibliography: - 'Cir\_pol\_memo.bib' title: | Circular polarization memory effect in enhanced backscattering of light\ under partially coherent illumination --- The circular polarization memory effect is an unexpected preservation of the initial helicity (or handedness) of circular polarization of multiply scattered light in scattering media consisting of large particles. Mackintosh *et al*. \[1\] first observed that the randomization of the helicity required unexpectedly far more scattering events than did the randomization of its propagation in media of large scatterers. Bicout *et al*. \[2\] demonstrated that the memory effect can be shown by measuring the degree of circular polarization of transmitted light in slabs. Using numerical simulations of vector radiative transport equations, Kim and Moscoso \[3\] explained the effect as the result of successive near-forward scattering events in large scatterers. Recently, Xu and Alfano \[4\] derived a characteristic length of the helicity loss in the diffuse regime and showed that this characteristic length was greater than the transport mean free pathlength $l_s^*$ for the scatterers of large sizes. 
Indeed, the propagation of circularly polarized light in random media has been investigated mainly using either numerical simulations or experiments in the diffusion regime, in part because its experimental investigation in the sub-diffusion regime has been extremely challenging. Therefore, the experimental investigation of circularly polarized light in the low-order scattering (or short traveling photons) regime using enhanced backscattering (EBS, also known as coherent backscattering) of light under low spatial coherence illumination will provide a better understanding of its mechanisms and the polarization properties of EBS as well. EBS is a self-interference effect in elastic light scattering, which gives rise to an enhanced scattered intensity in the backward direction. In our previous publications, \[5-8\] we demonstrated that low spatial coherence illumination (the spatial coherence length of illumination $L_{sc}\!<<l_s^*$) dephases the time-reversed partial waves outside its finite coherence area, rejecting long traveling waves in weakly scattering media. EBS under low spatial coherence illumination ($L_{sc}\!<<l_s^*$) is henceforth referred to as low-coherence EBS (LEBS). The angular profile of LEBS, $I_{LEBS}(\theta)$, can be expressed as an integral transform of the radial probability distribution $P(r)$ of the conjugated time-reversed light paths:\[6-8\] $$I_{LEBS}(\theta)\propto \int^\infty_0 C(r)rP(r)\exp(i2\pi r \theta / \lambda)dr,$$ where $r$ is the radial distance from the first to the last points on a time-reversed light path and $C(r) =|2J_1(r/L_{sc})/(r/L_{sc})|$ is the degree of spatial coherence of illumination with the first order Bessel function $J_1$.\[9\] As $C(r)$ is a decay function of $r$, it acts as a spatial filter, allowing only photons emerging within its coherence areas ($\sim L_{sc}^2$ ) to contribute to $P(r)$. 
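The integral transform above is straightforward to evaluate numerically. The sketch below uses the Bessel-function coherence factor $C(r)$ from the text, but the radial distribution $P(r)\propto e^{-r/\ell}$ and the length scales $\ell$ and $\lambda$ are illustrative assumptions, not results from this paper; only the real (cosine) part of the kernel is kept.

```python
import math

def j1(x):
    """First-order Bessel function J1 via its integral representation,
    J1(x) = (1/pi) * int_0^pi cos(tau - x*sin(tau)) dtau (midpoint rule)."""
    n = 400
    h = math.pi / n
    return h / math.pi * sum(
        math.cos((k + 0.5) * h - x * math.sin((k + 0.5) * h)) for k in range(n))

def lebs_profile(theta, lam=0.52, l_sc=110.0, ell=50.0, n=1000, r_max=500.0):
    """I_LEBS(theta) ~ int_0^inf C(r) r P(r) cos(2 pi r theta / lam) dr with
    C(r) = |2 J1(r/L_sc) / (r/L_sc)|.  P(r) ~ exp(-r/ell) is an *assumed*
    toy radial distribution (not from the paper); lengths in microns,
    theta in radians."""
    h = r_max / n
    total = 0.0
    for k in range(n):
        r = (k + 0.5) * h
        x = r / l_sc
        coherence = abs(2.0 * j1(x) / x)
        total += (coherence * r * math.exp(-r / ell)
                  * math.cos(2.0 * math.pi * r * theta / lam) * h)
    return total

# The self-interference peak is maximal in the exact backward direction,
# since the theta = 0 kernel bounds every other angle pointwise:
print(lebs_profile(0.0) > lebs_profile(0.005))
```

Because $C(r)$ decays on the scale $L_{sc}$, only the short light paths contribute, which is precisely the spatial-filter role described in the text.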
Therefore, LEBS provides the information about $P(r)$ for a small $r$ ($<\sim100~\mu m$) that is on the order of $L_{sc}$ as a tool for the investigation of light propagation in the sub-diffusion regime. ![Representative $I_{LEBS}(\theta)$ with $L_{sc} = 110~\mu m$ obtained from the suspensions of microspheres ($a = 0.15~\mu m$, $ka = 2.4$, and $g = 0.73$). We obtained $I_{LEBS}(\theta)$ for various $l_s^* = 67 - 1056 ~\mu m$ ($l_s = 18 - 285 ~\mu m$) from the (h$||$h) and (h$\bot$h) channels. The insets show the enhancement factors $E$. ](Image1) To investigate the helicity preservation of circularly polarized light in the sub-diffusion regime by exploiting LEBS, we used the experimental setup described in detail elsewhere.\[5,6\] In brief, a beam of broadband cw light from a 100 W xenon lamp (Spectra-Physics Oriel) was collimated using a 4-$f$ lens system, polarized, and delivered onto a sample with the illumination diameter of $3~mm$. By changing the size of the aperture in the 4-$f$ lens system, we varied the spatial coherence length $L_{sc}$ of the incident light from $35~\mu m$ to $200~\mu m$. The temporal coherence length of illumination was $0.7~\mu m$ with the central wavelength = $520~nm$ and its FWHM = $135~nm$. The circular polarization of LEBS signals was analyzed by means of an achromatic quarter-wave plate (Karl Lambrecht) positioned between the beam splitter and the sample. The light backscattered by the sample was collected by a sequence of a lens, a linear analyzer (Lambda Research Optics), and a CCD camera (Princeton Instruments). We collected LEBS signals from two different circular polarization channels: the helicity preserving (h$||$h) channel and the orthogonal helicity (h$\bot$h) channel. In the (h$||$h) channel, the helicity of the detected circular polarization was the same as that of the incident circular polarization. 
In the (h$\bot$h) channel, the helicity of the detected circular polarization was orthogonal to that of the incident circular polarization. In our experiments, we used media consisting of aqueous suspensions of polystyrene microspheres ($n_{sphere} = 1.599$ and $n_{water} = 1.335$ at $520~nm$) (Duke Scientific) of various radii $a$ = 0.05, 0.10, 0.15, 0.25, and 0.45 $\mu m$ (the size parameter $ka = 0.8 - 7.2$ and the anisotropic factor $g = 0.11- 0.92$). The dimension of the samples was $\pi \times 252~mm^2 \times 50~mm$. Using Mie theory,\[10\] we calculated the optical properties of the samples such as the scattering mean free pathlength of light in the medium $l_{s}$ ($= 1/\mu_s$, where $\mu_s$ is the scattering coefficient), the anisotropy factor $g$ (= the average cosine of the phase function), and the transport mean free pathlength $l_{s}^*$ ($= 1/\mu_s^* = l_{s}/(1 - g)$, where $\mu_s^*$ is the reduced scattering coefficient). We also varied $L_{sc}$ from 40 to 110 $\mu m$. We used $g$ as a metric of the tendency of light to be scattered in the forward direction. ![$I_{LEBS}$ in the backward direction from Fig. 1. (a) $I_{LEBS}^{||}(\theta = 0)$ and $I_{LEBS}^{\bot}(\theta = 0)$ cross over at $l_s^* = 408 ~\mu m$ ($l_s = 110~\mu m$). The lines are third-degree polynomial fitting. (b) Inset: $I_{LEBS}^{||}(\theta)$ and $I_{LEBS}^{\bot}(\theta)$ at the cross point. $C(r)rP(r)$ obtained by calculating the inverse Fourier transform of $I_{LEBS}(\theta)$ reveals helicity preserving in the (h$||$h) channel when $r > \sim50~\mu m$. ](Image2) The total experimental backscattered intensity $I_{T}$ can be expressed as $I_T = I_{SS} + I_{MS} + I_{EBS}$, where $I_{SS}$, $I_{MS}$, and $I_{EBS}$ are the contributions from single scattering, multiple scattering, and interference from the time-reversed waves (i.e., EBS), respectively. 
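The definition $l_s^* = l_s/(1-g)$ quoted above ties together the numbers in the figure caption: a minimal check, using $l_s = 110~\mu m$ and $g = 0.73$ (the $a = 0.15~\mu m$ spheres), recovers the quoted cross point $l_s^* \approx 408~\mu m$.

```python
# Transport mean free pathlength from the definition above:
# l_s* = 1 / mu_s* = l_s / (1 - g).
def transport_mfp(l_s, g):
    """l_s* in the same units as l_s, given the anisotropy factor g."""
    return l_s / (1.0 - g)

# l_s = 110 um with g = 0.73 (a = 0.15 um spheres) gives the cross point
# l_s* ~ 408 um quoted in the caption of the figure:
print(transport_mfp(110.0, 0.73))
```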
In media of relatively small particles (radius, $a\leq\lambda$), the angular dependence of $I_T(\theta)$ around the backward direction is primarily due to the interference term, while the multiple and single scattering terms have weaker angular dependence. Thus, $I_{SS} + I_{MS}$ ($=$ the baseline intensity) can be measured at large backscattering angles ($\theta > 3^{\circ}$). Conventionally, the enhancement factor $E = 1 + I_{EBS}(\theta=0^{\circ})/(I_{SS}+I_{MS})$ is used. However, in the studies of circularly polarized light, the enhancement factor
--- abstract: 'The form of the inflationary potential is severely restricted if one requires that it be natural in the technical sense, i.e. terms of unrelated origin are not required to be correlated. We determine the constraints on observables that are implied in such natural inflationary models, in particular on $r$, the ratio of tensor to scalar perturbations. We find that the naturalness constraint does not require $r$ to be large enough to be detectable by the forthcoming searches for B-mode polarisation in CMB maps. We show also that the value of $r$ is a sensitive discriminator between inflationary models.' author: - | Shaun Hotchkiss, Gabriel Germán,[^1]  Graham G Ross[^2]  and Subir Sarkar\ *Rudolf Peierls Centre for Theoretical Physics,*\ *University of Oxford, 1 Keble Road, Oxford, OX1 3NP, UK* title: | \ Fine tuning and the ratio of tensor to scalar density fluctuations from cosmological inflation --- The nature of the density perturbations originating in the early universe has been of great interest both observationally and theoretically. The hypothesis that they were generated during an early period of inflationary expansion has been shown to be consistent with all present observations. The most discussed mechanism for inflation is the ‘slow roll’ of a weakly coupled ‘inflaton’ field down its potential — the near-constant vacuum energy of the system during the slow-roll evolution drives a period of exponentially fast expansion and the density perturbations have their origin as quantum fluctuations in the inflaton energy density. In such models the detailed structure of the density perturbations which give rise to the large scale structure of the universe observed today depends on the nature of the inflationary potential in the field region where they were generated. 
Boyle, Steinhardt and Turok [@Boyle:2005ug] have argued that “naturalness” imposes such strong restrictions on the inflationary potential that one may derive interesting constraints on observables today. They concluded that in theories which are “natural” according to their criterion, the spectral index of the scalar density perturbations is bounded as $n_\mathrm{s}<0.98$, and that the ratio of tensor-to-scalar perturbations satisfies $r>0.01$ provided $n_\mathrm{s}>0.95$, in accord with then-current measurements [@Spergel:2003cb]. Such a lower limit on the amplitude of gravitational waves is of enormous interest as there is then a realistic possibility of detecting them as ‘B-mode’ polarisation in CMB sky maps (see e.g. [@Efstathiou:2007gz]) and thus verifying a key prediction of inflation. Of course these conclusions are crucially dependent on the definition of naturalness. In this paper we re-examine this important issue and argue that the criterion proposed by Boyle [*et al*]{} does not capture the essential aspects of a [*physically*]{} natural theory. We propose an alternative criterion that correctly reflects the constraints coming from underlying symmetries of the theory and we use this to determine a new bound on $r$ that turns out to be quite the opposite of the previously inferred one. We emphasise that our result, although superficially similar to the ‘Lyth bound’ [@Lyth:1996im], follows in fact from different considerations and in particular makes no reference to how long inflation lasts. Inflation predicts a near scale-invariant spectrum for the scalar and tensor fluctuations, the former being in reasonable agreement with current observations. Here we explore the predictions for [ *natural*]{} models involving a single inflaton at the time the density perturbations are produced. 
Models with two or more scalar fields affecting the density perturbations require some measure of fine tuning to relate their contribution to the energy density, whereas the single field models avoid this unnatural aspect. In order to characterize the inflationary possibilities in a model independent way it is convenient to expand the inflationary potential about the value of the field $\phi_\mathrm{H}$ just at the start of the observable inflation era, $\sim 60$ e-folds before the end of inflation when the scalar density perturbation on the scale of our present Hubble radius [^3] was generated, and expand in the field $\phi^\ast \equiv \phi - \phi_\mathrm{H}$ [@German:2001tz]. Since the potential must be very flat to drive inflation, $\phi ^{\ast }$ will necessarily be *small* while the observable density perturbations are produced, so the Taylor expansion of the potential will be dominated by low powers of $\phi^\ast$: $$V (\phi^\ast) = V(0) + V^\prime(0) \phi^\ast + \frac{1}{2}V^{\prime\prime}(0)\phi^{\ast 2} + \ldots \label{expand}$$ The first term $V(0)$ provides the near-constant vacuum energy driving inflation while the $\phi^\ast$-dependent terms are ultimately responsible for ending inflation, driving $\phi^\ast$ large until higher-order terms violate the slow-roll conditions. These terms also determine the nature of the density perturbations produced, in particular the departure from a scale-invariant spectrum. The observable features of the primordial density fluctuations can readily be expressed in terms of the coefficients of the Taylor series [@German:2001tz]. 
It is customary to use these coefficients first to define the slow-roll parameters $\epsilon$ and $\eta$ [@Liddle:2000cg] which must be small during inflation: $$\epsilon \equiv \frac{M^2}{2}\left(\frac{V^\prime(0)}{V(0)}\right)^2 \ll 1, \qquad |\eta| \equiv M^2 \left\vert \frac{V^{\prime\prime}(0)}{V(0)}\right\vert \ll 1, \label{slowroll}$$ where $M$ is the reduced Planck scale, $M=2.44\times 10^{18}$ GeV. In terms of these the spectral index is given by $$n_\mathrm{s} = 1 + 2\eta - 6\epsilon, \label{spectral}$$ the tensor-to-scalar ratio is $$r = 16\epsilon, \label{indicetensorial}$$ and the density perturbation at wave number $k$ is $$\delta_\mathrm{H}^2 (k) = \frac{1}{150\pi^2}\frac{V(0)}{\epsilon M^4} . \label{densitypert}$$ Finally the ‘running’ of the spectral index is given by $$n_\mathrm{r} \equiv \frac{\mathrm{d}n_\mathrm{s}}{\mathrm{d}\ln k} = 16\epsilon\eta - 24\epsilon^2 - 2\xi , \label{spectraltilt}$$ where $$\xi \equiv M^4 \frac{V^\prime V^{\prime\prime\prime}}{V^2}. \label{xi}$$ At this stage we have four observables, $n_\mathrm{s},$ $n_\mathrm{r}$, $\delta_\mathrm{H}$ and $r$ and four unknown parameters $V(0)$, $V^\prime(0)$, $V^{\prime\prime}(0)$ and $V^{\prime\prime\prime}(0)$ which, for an arbitrary inflation potential, are independent. However for natural potentials these parameters are related, leading to corresponding relations between the observables. Observational confirmation of such relations would provide evidence for the underlying potential, hence crucial clues to the physics behind inflation. As discussed above we are considering the class of natural models in which a single inflaton field dominates when the density perturbations relevant to the large-scale structure of the universe today are being produced.[^4] In classifying “natural” inflation, Boyle [*et al*]{} imposed a set of five conditions [@Boyle:2005ug]: 1. 
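The map from the Taylor coefficients of the potential to the observables above is purely algebraic. A minimal sketch in reduced-Planck-mass units ($M = 1$); the input coefficient values are purely illustrative, not fits from this paper.

```python
# Slow-roll observables from the Taylor coefficients of the inflaton
# potential, in reduced-Planck-mass units (M = 1), following the
# expressions for epsilon, eta, xi, n_s, r and the running above.
def observables(V0, V1, V2, V3):
    """Return (epsilon, eta, n_s, r, running) given V(0), V'(0), V''(0), V'''(0)."""
    eps = 0.5 * (V1 / V0) ** 2
    eta = V2 / V0                     # signed here; the text constrains |eta|
    xi = V1 * V3 / V0 ** 2
    n_s = 1.0 + 2.0 * eta - 6.0 * eps
    r = 16.0 * eps
    running = 16.0 * eps * eta - 24.0 * eps ** 2 - 2.0 * xi
    return eps, eta, n_s, r, running

# Illustrative inputs: V'(0)/V(0) = 0.1 and V''(0)/V(0) = -0.01 give
# eps = 0.005 and eta = -0.01, hence n_s = 0.95 and r = 0.08:
print(observables(1.0, 0.1, -0.01, 0.0))
```

This makes explicit the counting in the text: four observables against four potential coefficients, so any relation imposed on the coefficients by naturalness translates directly into a relation among $n_\mathrm{s}$, $n_\mathrm{r}$, $\delta_\mathrm{H}$ and $r$.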
The energy density (scalar) perturbations generated by inflation must have amplitude $\sim 10^{-5}$ on the scales that left the horizon $\approx 60$ e-folds before the end of inflation; 2. The universe undergoes at least $N > 60$ e-folds of inflation; 3. After inflation, the field must evolve smoothly to an analytic minimum with $V = 0$; 4. If the minimum is metastable, then it must be long-lived and $V$ must be bounded from below; 5. Inflation must halt and the universe must reheat without spoiling its large-scale homogeneity and isotropy. They proposed that the level of fine-tuning for potentials satisfying the above conditions should be measured by the integers $Z_{\epsilon, \eta}$ that measure the number of zeros that $\epsilon$ and $\eta$ and their derivatives undergo within the last 60 e-folds of inflation [@Boyle:2005ug]. Here we argue that such a measure does [*not*]{} capture the essential character of physical naturalness. At a purely calculational level this is illustrated by the fact that it is necessary to impose an (arbitrary) cut-off on the number of derivatives included in the criterion.[^5] This is necessary because $\epsilon$ and $\eta$ are defined in terms of the ratio of first or second order derivatives of the potential to the potential itself, so [*all*]{} higher order derivatives must be considered separately when counting the total number of zeros. The difficulty follows from the observation that, as far as naturalness is concerned, it is the inflaton potential that is the primary object, being restricted by the underlying symmetries of the (effective) field theory describing the inflaton dynamics. As stressed by ’t Hooft [@'
--- abstract: 'High-resolution spectroscopic observations were taken of 29 extended main sequence turn-off (eMSTO) stars in the young ($\sim$200 Myr) LMC cluster, NGC 1866 using the Michigan/[*Magellan*]{} Fiber System and MSpec spectrograph on the [*Magellan*]{}-Clay 6.5-m telescope. These spectra reveal the first direct detection of rapidly rotating stars whose presence has only been inferred from photometric studies. The eMSTO stars exhibit H[$\alpha$]{} emission (indicative of Be-star decretion disks), others have shallow broad H[$\alpha$]{} absorption (consistent with rotation $\ga$150 [km s$^{-1}$]{}), or deep H[$\alpha$]{} core absorption signaling lower rotation velocities ($\la$150 [km s$^{-1}$]{}). The spectra appear consistent with two populations of stars - one rapidly rotating, and the other, younger and slowly rotating.' author: - 'A. K. Dupree' - 'A. Dotter' - 'C. I. Johnson' - 'A. F. Marino' - 'A. P. Milone' - 'J. I. Bailey III' - 'J. D. Crane' - 'M. Mateo' - 'E. W. Olszewski' title: | NGC 1866: First Spectroscopic Detection of Fast Rotating Stars\ in a Young LMC Cluster --- Introduction ============ Identification of multiple main sequences in old Milky Way globular clusters from HST photometry (Bedin et al. 2004; Piotto [et al.]{} 2007; Gratton [et al.]{} 2012) created a fundamental change in our concept of their stellar populations for it suggested that cluster stars are neither coeval nor chemically homogeneous. This paradigm shift results from the fact that multiple sequences are visible along the entire color-magnitude diagram (CMD) signaling two or more generations of stars. No completely successful scenario exists to explain multiple populations although many possibilities have been offered. The most popular suggests that a second generation of enriched (polluted) stars forms from gas that was processed at high temperatures in the cores and/or envelopes of intermediate to high mass first generation stars. 
Each of the many possibilities appears to have at least one fatal flaw (Bastian [et al.]{} 2015; Renzini [et al.]{} 2015; Charbonnel 2016). The situation becomes more complicated when investigating younger clusters, which could reveal the predecessors of the Milky Way clusters. Photometric studies of young and intermediate-age clusters (age $<$2 Gyr) in the Large Magellanic Cloud (LMC) support yet another scenario. They have revealed an extended (broadened) main-sequence turnoff (eMSTO) and/or a bimodal main sequence (Mackey [et al.]{} 2008; Milone et al. 2009; Goudfrooij [et al.]{} 2009, 2014). This discovery could imply that a prolonged (100-500 Myr) star-formation history occurred (Mackey [et al.]{} 2008; Conroy & Spergel 2011; Keller [et al.]{} 2011). A prolonged star-formation history would be an attractively simple explanation, but there are concerns about the lack of active star formation in clusters older than 10 Myr (Niederhofer [et al.]{} 2016) and the absence of natal cluster gas after 4 Myr (Hollyhead [et al.]{} 2015), suggesting that multiple stellar generations may not be present. Photometry of young ($\sim$300 Myr) stellar clusters also reveals the eMSTO and a bifurcated main sequence (D’Antona [et al.]{} 2015; Milone [et al.]{}  2016, 2017). A recent claim of detection of young stellar objects in some young clusters in the LMC hints at ongoing star formation (For & Bekki 2017). However, other scenarios have been introduced to explain the eMSTO and bifurcated main sequence, including a range of ages (Mackey & Broby Nielsen 2007; Milone [et al.]{} 2009; Keller [et al.]{} 2011; Correnti [et al.]{} 2014; Goudfrooij [et al.]{} 2014), different rotation rates (Bastian & de Mink 2009; Bastian [et al.]{} 2016; Niederhofer [et al.]{} 2015; D’Antona [et al.]{} 2015; Milone [et al.]{} 2016, 2017), braking of rapid rotators (D’Antona [et al.]{} 2017), or different metallicities (Milone [et al.]{} 2015). 
Our target, NGC 1866, a 200 Myr cluster in the LMC, displays the eMSTO and also a bifurcated main sequence (Milone [et al.]{}  2017). These characteristics are not due to photometric errors, field-star contamination, differential reddening, or non-interacting binaries (Milone [et al.]{} 2016, 2017). Comparison with isochrones (Milone [et al.]{} 2017) suggests that the best fit of the bifurcated main sequence comes from rotating stellar models for the red main sequence and non-rotating models for the blue main sequence. It is believed that abundances are similar among the populations of NGC 1866 (Mucciarelli [et al.]{} 2011), although the ages are not, and may range from 140 Myr to 220 Myr (Milone [et al.]{} 2017). Isochrone modeling provides good agreement with the main-sequence objects, but the fit to the eMSTO objects is not as satisfactory. Variable stars, such as $\delta$ Scuti objects, might also produce an extended turnoff (Salinas [et al.]{} 2016); however, these stars become significant only in older clusters (1$-$3 Gyr), where the turnoff from the main sequence coincides with the instability strip. Stellar rotation not only affects the colors of the stars but also their lifetimes through rotational mixing. Rotation could thus cause the observed spreads in the CMD (Bastian & de Mink 2009). In fact, narrow and broad-band photometry of bright stars in two young LMC clusters hints at the appearance of H[$\alpha$]{} emission (Bastian [et al.]{} 2017), which is interpreted as signaling the presence of rapidly rotating Be stars. No direct measure of rotation has been carried out for individual stars populating the eMSTO in LMC clusters. In this paper, we report the first high-resolution spectroscopy of the H[$\alpha$]{} line in 29 stars in the extended turnoff region of the LMC cluster NGC 1866. 
Synthesis of model spectra indicated that narrow photospheric features would be ‘washed out’ and too subtle to detect if the stars are rapidly rotating, making the H[$\alpha$]{} transition the feature of choice to characterize the rotational state of the star. Spectroscopic Material ====================== Stellar spectra were obtained with the Michigan/[*Magellan*]{} Fiber System (M2FS, Mateo [et al.]{} 2012) and the MSpec multi-object spectrograph mounted on the [*Magellan*]{}-Clay 6.5-m telescope at Las Campanas Observatory. The fibers have a diameter of 1.2” and can span a field of view nearly 30 arcminutes in diameter. A 180$\mu$m slit yielded a resolving power $\lambda /\Delta \lambda \sim 28,000$. The spectra were binned by 2 pixels in the spatial direction and left unbinned (1 pixel) along the dispersion. The selected targets, which are likely members of the cluster NGC 1866, were identified by Milone et al. (2017) from the Ultraviolet and Visual Channel of the Wide Field Camera 3 (UVIS/WFC3) of HST. Images taken with the F336W filter and the F814W filter provided the photometry and astrometry. Milone [et al.]{} (2017) noted that the apparent stellar density became constant at radial distances greater than about 3 arcminutes from the cluster center, and concluded that cluster members did not extend beyond that distance. Our targets comply with this criterion. In addition, we selected targets separated by at least 2.5 arcsec from any brighter neighboring star, and located away from any star less than 2 magnitudes fainter than the target in the F814W band. With this selection criterion, coupled with the requirements on fiber placement, very few stars remain within the half-light radius of the cluster (41 arcsec); in fact only two of our targets are located there. The vast majority lie between 41 arcsec and $\sim$180 arcsec from the center. These criteria identified $\sim$150 acceptable targets, spanning V = 16.2–20. 
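As a quick sanity check (our own arithmetic, not taken from the paper), the quoted resolving power $\lambda/\Delta\lambda \sim 28,000$ corresponds to a velocity resolution of about 11 [km s$^{-1}$]{}, comfortably finer than the $\sim$150 [km s$^{-1}$]{} rotation threshold used to classify the stars:

```python
C_KM_S = 299_792.458   # speed of light [km/s]
R = 28_000             # resolving power lambda / delta-lambda
H_ALPHA = 6562.8       # H-alpha rest wavelength [Angstrom]

dv = C_KM_S / R        # velocity width of one resolution element [km/s]
dlam = H_ALPHA / R     # corresponding wavelength width at H-alpha [Angstrom]

print(f"velocity resolution: {dv:.1f} km/s")         # ~10.7 km/s
print(f"wavelength resolution: {dlam:.2f} Angstrom") # ~0.23 Angstrom
```

Broadening at the $\ga$150 [km s$^{-1}$]{} level therefore spans more than a dozen resolution elements and is easily distinguished from narrow photospheric cores.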
Positions of the guide and acquisition stars were verified by comparison with the 2MASS catalog and WFI images. The software code for M2FS fiber positioning selected targets according to our priorities. We chose the filter “Bulge-GC1”, which spans 6120–6720 Å over 6 echelle orders and allows up to 48 fibers to be placed on our targets. In practice, several fibers are placed on the sky; thus we obtained about 43 stellar targets per configuration. Some targets were “lost” due to low fiber sensitivity, neighboring very bright stars, or possibly inaccurate coordinates. Two configurations, a bright and a faint selection, each spanning about 2 magnitudes, were implemented. Our principal configuration was observed on 8 December and 12 December 2016 with 7 exposures of 2100–2700 s each, totaling 5.5 hours. A fainter target configuration was observed on 11 December and 13 December 2016, but the spectra were severely
--- abstract: | *Octal games* are a well-defined family of two-player games played on heaps of counters, in which the players alternately remove a certain number of counters from a heap, sometimes being allowed to split a heap into two nonempty heaps, until no counter can be removed anymore. We extend the definition of octal games to play them on graphs: heaps are replaced by connected components and counters by vertices. Thus, playing an octal game on a path $P_n$ is equivalent to playing the same octal game on a heap of $n$ counters. We study one of the simplest octal games, called 0.33, in which the players can remove one vertex or two adjacent vertices without disconnecting the graph. We study this game on trees and give a complete resolution on subdivided stars and bistars. address: - 'LIMOS, 1 rue de la Chebarde, 63178 Aubière CEDEX, France.' - 'LAAS-CNRS, Université de Toulouse, CNRS, Université Toulouse 1 Capitole - IUT Rodez, Toulouse, France' - 'Fédération de Recherche Maths à Modeler, Institut Fourier, 100 rue des Maths, BP 74, 38402 Saint-Martin d’Hères Cedex, France' - 'LAAS-CNRS, Université de Toulouse, CNRS, INSA, Toulouse, France.' - 'Univ Lyon, Université Lyon 1, LIRIS UMR CNRS 5205, F-69621, Lyon, France.' - 'CNRS/Université Grenoble-Alpes, Institut Fourier/SFR Maths à Modeler, 100 rue des Maths - BP 74, 38402 Saint Martin d’Hères, France.' - 'Univ. Bordeaux, Bordeaux INP, CNRS, LaBRI, UMR5800, F-33400 Talence, France' author: - Laurent Beaudou - Pierre Coupechoux - Antoine Dailly - Sylvain Gravier - Julien Moncel - Aline Parreau - Éric Sopena title: | Octal Games on Graphs:\ The game 0.33 on subdivided stars and bistars --- Combinatorial Games; Octal Games; Subtraction Games; Graphs Introduction ============ *Combinatorial games* are finite two-player games without chance, with perfect information, and such that the last move alone determines which player wins the game. 
Since the information is perfect and the game finite, there is always a winning strategy for one of the players. A formal definition of combinatorial games and basic results will be given in Section \[sec:def\]. For more details, the interested reader can refer to [@winningways], [@lip] or [@cgt]. A well-known family of combinatorial games is the family of *subtraction games*, which are played on a heap of counters. A subtraction game is defined by a list of positive integers $L$ and is denoted by $Sub(L)$. A player is allowed to remove $k$ counters from the heap if and only if $k \in L$. The first player unable to play loses the game. For example, consider the game $Sub(\{1,2\})$. In this game, both players take turns removing one or two counters from the heap, until the heap is empty. If the initial number of counters is a multiple of 3, then the second player has a winning strategy: by playing in such a way that the first player always gets a multiple of 3, he will take the last counter and win the game. A natural generalization of subtraction games is to allow the players to split a heap into two nonempty heaps after having removed counters. This defines a much larger class of games, called *octal games* [@winningways]. An octal game is represented by an octal code which entirely defines its rules. As an example, $Sub(\{1,2\})$ is defined as [**0.33**]{}. A precise definition will be given in Section \[sec:def\]. Octal games have been extensively studied. One of the most important questions [@Guy96] is the periodicity of these games. Indeed, it seems that all finite octal games have a periodic behaviour in the following sense: the set of initial numbers of counters for which the first player has a winning strategy is ultimately periodic. This is true for all subtraction games and for all finite octal games for which the study has been completed [@althofer; @winningways]. Octal games can also be played by placing counters in a row. 
Heaps consist of consecutive counters, and only consecutive counters can be removed. According to this representation, it seems natural to play octal games on more complex structures like graphs. A position of the game is a graph, and players remove vertices that induce a connected subgraph, which corresponds to consecutive counters. The idea to extend the notion of octal games to graphs was already suggested in [@fleischer]. However, to our knowledge, this idea has not been further developed. With our definition, playing the generalization of an octal game on a path is the same as playing the original octal game. In the special case of subtraction games, players have to keep the graph connected. As an example, playing [**0.33**]{} on a graph consists of removing one vertex or two adjacent vertices from the graph without disconnecting it. This extension of octal games is in line with several take-away games on graphs such as [Arc Kayles]{} [@S78] and [Grim]{} [@adams]. However, it does not describe some other deletion games, such as the vertex and edge versions of the game <span style="font-variant:small-caps;">geography</span> [@S78; @edgegeo], vertex and edge deletion games with parity rules, considered in [@ottaway1] and [@ottaway2], or scoring deletion games such as Le Pic arête [@picarete]. We will first give in Section \[sec:def\] basic definitions from combinatorial game theory as well as a formal definition of octal games on graphs. We then focus on the game [**0.33**]{}, which is one of the simplest octal games, and on its study on trees. We first study subdivided stars in Section \[sec:star\]. We prove that paths can be reduced modulo 3, which leads to a complete resolution, in contrast with the related studies on subdivided stars of [Node Kayles]{} [@fleischer] and [Arc Kayles]{} [@H15]. In Section \[sec:bistar\], we extend our results to subdivided bistars (i.e. 
trees with at most two vertices of degree at least 3) using a game operator similar to the sum of games. Unfortunately, these results cannot be extended to all trees, nor even to caterpillars. In a forthcoming paper [@futurpapier], some of our results are generalized to other subtraction games on subdivided stars. Definitions {#sec:def} =========== Basics of Combinatorial Game Theory ----------------------------------- *Combinatorial games* [@winningways] are two-player games such that: 1. The two players play alternately. 2. There is no chance. 3. The game is finite (there are finitely many positions and no position can be encountered twice during the game). 4. The information is perfect. 5. The last move alone determines the winner. In *normal* play, the player who plays the last move wins the game. In *misère* play, the player who plays the last move loses the game. *Impartial games* are combinatorial games where at each turn the moves are the same for both players. Hence the only distinction between the players is who plays the first move. In this paper, we will only consider impartial games in normal play. Positions in impartial games have exactly two possible [*outcomes*]{}: either the first player has a winning strategy, or the second player has a winning strategy. If a game position falls into the first category, it is an *${\mathcal{N}}$-position* (for ${\mathcal{N}}$ext player wins); otherwise, it is a *${\mathcal{P}}$-position* (for ${\mathcal{P}}$revious player wins). From a given position $J$ of the game, the different positions that can be reached by playing a move from $J$ are the *options* of $J$, and the set of options of $J$ is denoted ${\mathrm{opt}}(J)$. 
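As a concrete illustration of these outcome classes (our own sketch, not code from the paper), the ${\mathcal{P}}$- and ${\mathcal{N}}$-positions of the subtraction game $Sub(\{1,2\})$ described earlier can be computed by a memoized recursion, recovering its period-3 pattern:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def is_p_position(n):
    """Outcome of a heap of n counters in Sub({1,2}) under normal play.

    Returns True for a P-position (the player to move loses with optimal
    play). A position is an N-position iff some move reaches a P-position;
    the empty heap has no options and is therefore a P-position."""
    return not any(is_p_position(n - k) for k in (1, 2) if k <= n)

# P-positions are exactly the multiples of 3, as stated in the introduction.
assert all(is_p_position(n) == (n % 3 == 0) for n in range(300))
```

The same recursion applies verbatim to any subtraction game $Sub(L)$ by replacing the tuple `(1, 2)` with the list $L$.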
If we know the outcomes of the positions in ${\mathrm{opt}}(J)$ we can deduce the outcome of $J$, using the following proposition: \[prop:outcome\] Let $J$ be a position of an impartial combinatorial game in normal play: - If ${\mathrm{opt}}(J)=\emptyset$, then $J$ is a ${\mathcal{P}}$-position. - If there exists a ${\mathcal{P}}$-position $J'$ in ${\mathrm{opt}}(J)$, then $J$ is an ${\mathcal{N}}$-position: a winning move consists in playing from $J$ to $J'$. - If all the options of $J$ are ${\mathcal{N}}$-positions, then $J$ is a ${\mathcal{P}}$-position. Every position $J$ of a combinatorial game can be viewed as a
--- abstract: 'We take a first step towards the solution of QCD in $1+1$ dimensions at nonzero density. We regularize the theory in the UV by using a lattice and in the IR by putting the theory in a box of spatial size $L$. After fixing to axial gauge we use the coherent states approach to obtain the large-$N$ classical Hamiltonian ${\cal H}$ that describes color neutral quark-antiquark pairs interacting with spatial Polyakov loops in the background of baryons. Minimizing ${\cal H}$ we get a regularized form of the ‘t Hooft equation that depends on the expectation values of the Polyakov loops. Analyzing the $L$-dependence of this equation we show how volume independence, à la Eguchi and Kawai, emerges in the large-$N$ limit, and how it depends on the expectation values of the Polyakov loops. We describe how this independence relies on the realization of translation symmetry, in particular when the ground state contains a baryon crystal. Finally, we remark on the implications of our results on studying baryon density in large-$N$ QCD within single-site lattice theories, and on some general lessons concerning the way four-dimensional large-$N$ QCD behaves in the presence of baryons.' --- [**Volume dependence of two-dimensional large-$N$ QCD\ with a nonzero density of baryons.** ]{} Introduction ============ QCD simplifies in the ’t Hooft limit of a large number of colors, and as a result it has been a long-standing goal to understand the properties of the theory in that limit [@largeN], including on the lattice [@lattice-reviews]. One alternative to conventional large-volume simulations is the use of the large-$N$ equivalence of QCD at large volume to QCD with zero volume [@EK; @BHN1; @MK; @Migdal; @TEK; @AEK; @DW; @GK; @Parisipapers] (see also the related Ref. [@KNN]). These large-$N$ volume reductions allow one, in principle, to study very large values of $N\sim O(100-400)$ with modest resources. 
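For orientation, the spatial Polyakov loop that recurs throughout the abstract and the paper is the traced holonomy around the spatial circle of circumference $L$; in continuum notation (a standard definition supplied here for the reader, not an equation quoted from this paper):

```latex
% Spatial Polyakov loop: traced holonomy around the spatial circle of size L
P(t) \;=\; \frac{1}{N}\,{\rm tr}\,{\cal P}\exp\!\left(i\int_0^L dx\, A_x(x,t)\right) .
```

Its expectation value is an order parameter for the $Z_N$ center symmetry, and the realization of that symmetry is what controls the validity of Eguchi-Kawai volume reduction discussed below.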
Volume reduction holds only if the ground states of the large and zero volume theories respect certain symmetries [@AEK]. Unfortunately, in the most interesting case of QCD in four dimensions these symmetries spontaneously break in the continuum limit when a naive reduction prescription is used [@BHN1; @MK]. An extension of that prescription is thus required and for a recent summary of the literature on this topic we refer to the reviews in the recent Refs. [@QEK; @DEK]. In the case of two space-time dimensions – the ‘t Hooft model – a naive large-$N$ volume reduction is expected to hold and so this theory is generally thought to be completely independent of its volume. In the current paper we analyze this volume dependence. Our motivation is two-fold. Firstly, the ‘t Hooft model is analytically soluble at large-$N$. Thus we can explicitly see how large-$N$ volume reduction works in this case, and what may cause it to fail. This topic was also addressed for zero baryon number by the authors of Ref. [@SchonThies_decompact], and our treatment here differs from that paper by being manifestly gauge invariant, by going beyond zero baryon number, and by using the lattice regularization. Our approach also makes a direct connection with Eguchi-Kawai reduction, and shows how the expectation values of spatial Polyakov loops play a crucial role in the validity of volume independence. Secondly, this paper is a prelude to our companion publication Ref. [@nonzeroBpaper] where we use the formalism presented here to solve the theory in the presence of nonzero baryon density. Considering the current incomplete understanding of the way four-dimensional QCD behaves at low temperatures and large (but not asymptotic) baryon densities, we believe that such a study is useful. 
Also, there exist certain confusions in the literature about the way large-$N$ gauge theories behave at nonzero baryon number [@Cohen], and seeing how these confusions go away in the soluble two-dimensional case is very helpful. Surprisingly, QCD in two dimensions at nonzero density has not yet been solved: Ref. [@Salcedo] studied only one and two baryons in an infinite volume, while Ref. [@SchonThies] attempted to extend this but was restricted to either (1) translationally invariant states, which were seen to be inconsistent, or (2) a particular translationally non-invariant ansatz for a baryon crystal in the vicinity of the chiral limit. Since $1+1$-dimensional baryons become massless for massless quarks, it is natural to expect that they behave very differently from four-dimensional massive baryons. Furthermore, most of the current literature on the ‘t Hooft model has so far focused on its infinite volume limit, where a certain set of gluonic zero modes are irrelevant. With a finite density of baryons, however, these become important and cannot be neglected (at least if the density is increased by fixing the baryon number and decreasing the volume). Thus, given the current status surveyed above, it seems wise to study the dense ‘t Hooft model for arbitrary quark mass, by making as few assumptions on the form of the ground state as possible, and by incorporating correctly the gluonic zero modes. In this paper we develop the machinery to achieve this goal. For the actual solution of the theory for arbitrary baryon numbers we refer to Ref. [@nonzeroBpaper], and for all other discussions on the way nonzero chemical potential affects the system to Ref. [@nonzeroMUpaper]. Former studies of the ‘t Hooft model used a plethora of mathematical methods; see, for example, [@plethora] for some papers relevant to this work. Common to all these is the need to control the severe IR divergences of this two-dimensional model. 
A particularly clear approach, which we will follow in our study, is the one advocated in the seminal Refs. [@LTYL; @LNT]. There, one works in the Hamiltonian formalism defined in a spatial box of side $L$, and uses the axial gauge to remove all redundant degrees of freedom. This approach is also most suitable for our purpose of investigating the $L$-dependence of this Hamiltonian’s ground state. The outline of the paper is as follows. In Section \[LQCDH\] we present the details of the Hamiltonian approach to lattice QCD. A reader who is familiar with this approach can skip to Section \[Haxial\] where, by generalizing Refs. [@LTYL; @LNT] to the lattice, we show how to fix the axial gauge in the Hamiltonian formalism. Since such gauge fixing is less familiar than the gauge fixing in the Euclidean formalism, we do so in detail. Next, in Section \[GLresolve\], we show how to resolve the Gauss law and rewrite the electric fields in terms of the fermion color charge densities. This rewriting can be done for all components of the electric field except for those conjugate to the eigenvalues of the spatial Polyakov loops. This set of eigenvalues and their conjugate electric fields is what we refer to above as zero modes, and in Section \[0mode\] we focus on them. Specifically, we show how to represent the zero modes’ electric fields in the Schrödinger picture as differential operators. The end result of Sections \[LQCDH\]-\[Hrecap\] is a Hamiltonian that depends only on the fermions and on the zero modes, with an overall color neutrality enforced on its Hilbert space. For the convenience of the reader we summarize this emerging structure in Section \[Hrecap\]. We then turn to find the ground state of this Hamiltonian. At large-$N$ this is done in two steps: (1) solution of the gluon zero-mode dynamics, discussed in Section \[SectorG\]; (2) solution of the fermion sector, in Section \[SectorF\]. In the second step we use the coherent state approach of Refs. 
[@YaffeCoherent], which seems particularly suitable for our problem. The end product is a regularized form of the ‘t Hooft classical Hamiltonian describing color neutral operators that correspond to quark-antiquark pairs and Polyakov loops (that wrap around the spatial circle), and that interact in the presence of a fixed overall baryon number.[^1] In Section \[otherworks\] we survey other relevant works in the literature that obtain a similar Hamiltonian, pointing out the way they differ from our approach. In Section \[decompact\] we analyze the resulting Hamiltonian and its $L$ dependence for arbitrary baryon number $B$. We show how large-$N$ volume independence emerges and that for it to hold we need to assume that the ground state has some degree of translation invariance. We also show how it can be violated by giving the Polyakov loops nonzero expectation values.[^2] An interesting phenomenon occurs when the ground state contains a baryon crystal, and we show how a ‘soft’ form of volume independence emerges. This leads us to remark on the way our results affect studies of nonzero chemical potential that try to rely on large-$N$ volume independence. We conclude in Section \[summary\] by noting some general lessons one can learn about the way large-$N$ QCD behaves in the presence of baryons. Hamiltonian QCD in $1+1$ dimensions: a brief reminder {#LQCDH} ======================================================= In this section we introduce the Hamiltonian formalism of lattice QCD restricted to one spatial dimension and one flavor. A reader familiar with this formalism can skip to Section \[Haxial\]. This Hamiltonian of lattice QCD was first introduced by Kogut and Susskind in 1975 [@Kogut75], shortly after Wilson’s
--- abstract: 'Data-driven methods for improving turbulence modeling in Reynolds-Averaged Navier-Stokes (RANS) simulations have gained significant interest in the computational fluid dynamics community. Modern machine learning algorithms have opened up a new area of black-box turbulence models allowing for the tuning of RANS simulations to increase their predictive accuracy. While several data-driven turbulence models have been reported, the quantification of the uncertainties introduced has mostly been neglected. Uncertainty quantification for such data-driven models is essential since their predictive capability rapidly declines as they are tested on flow physics that deviates from that of the training data. In this work, we propose a novel data-driven framework that not only improves RANS predictions but also provides probabilistic bounds for fluid quantities such as velocity and pressure. The uncertainties capture both model form uncertainty as well as epistemic uncertainty induced by the limited training data. An invariant Bayesian deep neural network is used to predict the anisotropic tensor component of the Reynolds stress. This model is trained using the Stein variational gradient descent algorithm. The computed uncertainty on the Reynolds stress is propagated to the quantities of interest by vanilla Monte Carlo simulation. Results are presented for two test cases that differ geometrically from the training flows at several different Reynolds numbers. The prediction enhancement of the data-driven model is discussed as well as the associated probabilistic bounds for flow properties of interest. Ultimately this framework allows for a quantitative measurement of model confidence and uncertainty quantification for flows in which no high-fidelity observations or prior knowledge is available.' 
address: 'Center for Informatics and Computational Science, University of Notre Dame, 311 I Cushing Hall, Notre Dame, IN 46556, USA' author: - Nicholas Geneva - Nicholas Zabaras bibliography: - 'mybibfile.bib' title: 'Quantifying model form uncertainty in Reynolds-averaged turbulence models with Bayesian deep neural networks' --- Turbulence ,Reynolds-Averaged Navier–Stokes Equations (RANS) ,Model Form Uncertainty ,Uncertainty Quantification ,Bayesian ,Deep Neural Networks Introduction ============ Over the past decade, with the exponential power increase of computer hardware, computational fluid dynamics (CFD) has become an ever more predominant tool for fluid flow analysis. The Reynolds-averaged Navier-Stokes (RANS) equations provide an efficient method to compute time-averaged turbulent flow quantities, making RANS solvers a frequently selected CFD method. However, it is common knowledge that RANS simulations can be highly inaccurate for a variety of flows due to the modeling of the Reynolds stress term [@pope2001turbulent]. Although over recent years Large Eddy Simulations (LES) or Direct Numerical Simulations (DNS) have become more accessible, these methods still remain out of the scope of practical engineering applications. For example, design and optimization tasks require repeated simulations with rapid turnaround time requirements, for which RANS simulations are the modeling tool of choice. Thus improving the accuracy of RANS simulations and providing measures of their predictive capability remains essential for the CFD community. Turbulence models seek to resolve the closure problem that is brought about by the time averaging of the Navier-Stokes equations. While CFD and computational technology have made significant strides over the past decade, turbulence models have largely become stagnant, with the majority of today’s most popular models having been developed over two decades ago. 
Many of the most widely used turbulence models employ the Boussinesq assumption as the theoretical foundation, combined with a set of parameters that are described through one or more transport equations. In general, these turbulence models can be broken down into families based on the number of additional partial differential equations they introduce into the system. For example, the Spalart-Allmaras model [@spalart1992one] belongs to the family of single-equation models. While the Spalart-Allmaras model has been proven to be useful for several aerodynamics-related flows [@godin1997high], its very general structure severely limits the range of flows to which it is applicable. In the two-equation family, models such as the k-$\epsilon$ model [@jones1972prediction; @launder1974application] and the k-$\omega$ model [@wilcox1993turbulence] provide better modeling for a much larger set of flows, even though their limitations are well known. In all the aforementioned models, a set of empirically found constants is used for model calibration, thus resulting in potentially poor performance for flows that were not considered in the calibration process. This, combined with the empirical modeling of specific transport equations, such as the $\epsilon$ equation, results in a significant source of model form uncertainty. While many have proposed more complex approaches, such as using different turbulence models for different regions of the flow [@menter1994two] or using a turbulence model with additional transport equations [@walters2008three], these methods still rely heavily on empirical tuning and calibration. Thus model form uncertainty introduced by turbulence models continues to be one of the largest sources of uncertainty in RANS simulations. This work aims to improve turbulence modeling for RANS simulations using machine learning techniques that also allow us to quantify the underlying model error. 
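For reference, the Boussinesq assumption mentioned above closes the RANS equations by taking the Reynolds-stress anisotropy to be linear in the mean strain rate (the standard textbook form, supplied here for the reader rather than quoted from this paper):

```latex
% Boussinesq eddy-viscosity hypothesis
\overline{u_i' u_j'} \;=\; \tfrac{2}{3}\,k\,\delta_{ij} \;-\; 2\,\nu_t\,S_{ij},
\qquad
S_{ij} \;=\; \tfrac{1}{2}\!\left(
  \frac{\partial \bar{u}_i}{\partial x_j} + \frac{\partial \bar{u}_j}{\partial x_i}
\right),
```

where $k$ is the turbulent kinetic energy and $\nu_t$ the eddy viscosity; the k-$\epsilon$ family, for instance, models $\nu_t = C_\mu k^2/\epsilon$ with the empirical constant $C_\mu \approx 0.09$, so model form error enters both through the linear stress-strain relation and through the calibrated transport equations.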
While the use of machine learning methods in CFD simulations can be traced back over a decade [@milano2002neural], recently there has been a new wave of integrating innovative machine learning algorithms to quantify and improve the accuracy of CFD simulations. Earlier work on quantifying the uncertainty and calibration of turbulence models focused on treating model parameters as random variables and sampling via Monte Carlo to obtain a predictive distribution of outcomes [@cheung2011bayesian; @oliver2011bayesian]. Rather than constraining oneself to a specific model, an alternative approach was to directly perturb components of the anisotropy term of the Reynolds stress [@dow2011uncertainty]. Lately, the use of machine learning models has been shown to provide an efficient alternative to direct sampling. In general, the integration of machine learning with turbulence models can be broken down into three different approaches: modeling the anisotropic term of the Reynolds stress directly, modeling the coefficients of turbulence models, and modeling new terms in the turbulence model. Tracey [*et al*.]{} [@tracey2013application] explored the use of kernel regression to model the eigenvalues of the anisotropic term of the Reynolds stress. Later, Tracey [*et al*.]{} [@tracey2015machine] used a single-layer neural network to predict a source term in the Spalart-Allmaras turbulence model. Similarly, Singh [*et al*.]{} [@singh2017machine] have used neural networks to introduce a functional corrective term to the source term of the Spalart-Allmaras turbulence model for predicting various quantities over airfoils. Zhang [*et al*.]{} [@zhang2015machine] investigated the use of neural networks and Gaussian processes to model a correction term introduced to the turbulence model. Ling [*et al*.]{} [@ling2016reynolds] considered deep neural networks to predict the anisotropic tensor using a neural network structure with embedded invariance [@ling2016machine]. 
Ling [*et al*.]{} [@ling2017uncertainty] additionally proposed using random forests to improve RANS predictions for a flow with a jet in a cross flow. While the above works have managed to improve the accuracy of RANS simulations, uncertainty quantification has largely been ignored. Arguably, the integration of black-box machine learning models increases the importance of uncertainty quantification, both for quantifying the error of the improved turbulence model and for quantifying the uncertainty of the machine learning predictions. This is largely due to the significant prediction degradation of these proposed machine learning models for flows that vary from the training data in either fluid properties or geometry [@tracey2013application; @ling2016reynolds]. Past literature has clearly shown that data-driven methods are not exempt from the conflicting objectives of predictive accuracy versus flow versatility seen in traditional turbulence modeling. Several works have taken steps towards using machine learning to provide uncertainty quantification analysis of RANS simulations. For example, Xiao [*et al*.]{} [@xiao2016quantifying] proposed a Bayesian data-driven methodology that uses a set of high-fidelity observations to iteratively tune an ensemble of Reynolds-stress fields and other quantities of interest. While proven to work well even for sparse observational data, this work is limited to a single flow on which the machine learning model was explicitly trained. Wu [*et al*.]{} [@wu2017priori] used the Mahalanobis distance and kernel density estimation to formulate a method to predict the confidence of a data-driven model for a given flow. While this allows the potential identification of regions of lower confidence after training, it is limited to the prediction of the anisotropic stress and fails to provide any true probabilistic bounds. 
For machine learning methods to be a practical tool for reliably tuning RANS turbulence models, transferability to flows with different geometries and fluid properties is important. Additionally, quantifying the model uncertainty is critical for assessing both the accuracy and confidence of the machine learning model and of the resulting predicted quantities of interest. The novelty of our work is the use of a data-driven model with a Bayesian deep learning framework to provide the means of improving the accuracy of RANS simulations and allow for the quantification of the model form uncertainty arising in the turbulence model. This uncertainty is then propagated to the quantities of interest, such as pressure and velocity. The focus of our work will not be application to flows that are the same as or similar to those in the training set, but rather to flows defined by different geometries and fluid properties. We aim to take a much more practical and expansive view of using these innovative machine learning models for improved turbulence modeling. The specific novel contributions of this work are fourfold: (a) the use of a Bayesian deep neural network as a model to predict a tuned Reynolds stress field, (b) introducing a stochastic data-
--- abstract: 'In recent years, the majority of works on deep-learning-based image colorization have focused on how to make good use of the enormous datasets currently available. What about when the available data are scarce? The main objective of this work is to prove that a network can be trained and can provide excellent colorization results even without a large quantity of data. The adopted approach is a mixed one, which uses an adversarial method for the actual colorization, and a meta-learning technique to enhance the generator model. Also, an *a-priori* clusterization of the training dataset ensures a task-oriented division useful for meta-learning, and at the same time reduces the per-step number of images. This paper describes the method and its main motivations in detail, and a discussion of results and future developments is provided.' author: - Tomaso Fontanini - Eleonora Iotti - Andrea Prati bibliography: - 'mybibliography.bib' title: 'MetalGAN: a Cluster-based Adaptive Training for Few-Shot Adversarial Colorization' --- ![Example images generated using MetalGAN for 100 epochs and 100 meta-iterations. From left to right: gray-scale image, ground truth, output of the network. The example images belong to two different clusters.[]{data-label="fig:meta_net"}](img/visual_abstract_2.png){width="\textwidth"} Introduction ============ The *automatic image colorization* task is an image processing problem that is fundamental and extensively studied in the field of computer vision. The task consists in creating an algorithm that takes as input a gray-scale image and outputs a colorized version of the same image. The challenging part is to colorize it in a plausible and well-looking way. Many systems were developed over the years, exploiting a wide variety of image processing techniques, but recently the image colorization problem, as many other problems in computer vision, has been approached with deep-learning methods. 
Colorization is a *generative* problem from a machine learning perspective. Generative techniques, such as *Generative Adversarial Networks* (*GANs*) [@goodfellow2014generative], are then suitable to approach such a task. In particular, *conditional GAN* (*cGAN*) models seem especially appropriate for this purpose, since their structure allows the network to learn a mapping from an image $x$ and (only if needed) a random noise vector $z$ to an output generated image $y$. On the contrary, standard GANs only learn the mapping from the noise $z$ to $y$. As with many deep-learning techniques, the training of a GAN or a cGAN needs a large amount of images. Large datasets usually grant a great diversity among images, allowing the network to better generalize its results. Nevertheless, having a huge number of images is often not feasible in real-world applications, or simply requires too much storage space for an average system, and high training computational times. Hence, porting the current deep-learning colorization technologies to a more accessible level and achieving a better understanding of the colorization training process are eased by using a smaller dataset. For these reasons, one of the aims of this work is to achieve good performance in the colorization task using a small number of images compared to standard datasets. In *few-shot learning*, a branch of the deep-learning field, the goal is to learn from a small number of inputs, or from one single input in the ideal case (*one-shot learning*): the network is subject to a low quantity of examples, and it has to be capable of inferring something when faced with a new example. This problem presupposes a high generalization capability of the network, which is a very difficult task and an open challenging problem in deep networks research. Recently, some novel interesting ideas have highlighted a possible path to reach a better generalization ability of the network. 
These ideas are based on the concept of learning to learn, i.e., adding a meta-layer of learning information above the usual learning process of the network. The generalization is achieved by introducing the concept of a *task distribution* instead of a single task, and the concept of *episodes* instead of instances. A task distribution is the family of different tasks to which the model has to be adapted. Each task in the distribution has its own training and test sets, and its own loss function. A meta-training set is composed of training and test image samples, called episodes, belonging to different tasks. During training, these episodes are used to update the initial parameters (weights and biases) of the network, in the direction of the sampled task. Results of meta-learning methods investigated in the literature are encouraging and obtain good performance on some few-shot datasets. For this reason, and since the goal of this work is to colorize images with a small number of examples, a meta-learning algorithm was employed to tune the network parameters on many different tasks. The chosen algorithm is Reptile [@nichol2018first], and it was combined with an adversarial colorization network composed of a Generator $G$ and a Discriminator $D$. In other words, the proposed method approaches the colorization problem as a meta-learning one. Intuitively, Reptile works by randomly selecting tasks, then training a fast network on each task, and finally updating the weights of a slow network. In this proposal, tasks are defined as clusters of the initial dataset. In fact, a typical initial dataset is an unlabeled dataset that contains a wide variety of images, usually photographs. In this setting, for example, a task could be to color all seaside landscapes, and another could be to color all cat photos. Those tasks refer to the same problem and use the same dataset, but they are very different at a practical level. 
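The fast/slow-weight interplay of Reptile can be sketched in a few lines. The one-dimensional quadratic tasks below are a toy stand-in for the colorization tasks — an assumption made only to keep the meta-update visible, not the setup of this paper:

```python
import numpy as np

def reptile_step(theta, grad_fn, inner_steps=5, inner_lr=0.01, meta_lr=0.1):
    """One Reptile meta-iteration: adapt a fast copy to the sampled task,
    then move the slow weights toward the adapted fast weights."""
    phi = theta.copy()
    for _ in range(inner_steps):
        phi -= inner_lr * grad_fn(phi)          # fast network: SGD on the task
    return theta + meta_lr * (phi - theta)      # slow network: interpolation

# Toy task family: task c has loss (theta - c)^2, hence gradient 2 (theta - c).
theta = np.array([0.0])
centers = [2.0, 3.0, 4.0]                       # three "tasks" (clusters)
for it in range(1500):
    c = centers[it % 3]
    theta = reptile_step(theta, lambda p: 2.0 * (p - c))
# theta drifts toward a point that adapts quickly to every task in the family
```

In MetalGAN the role of `grad_fn` is played by adversarial training of $G$ on the images of the sampled cluster.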
A very large amount of images could overcome the problem, showing as many seasides and cats as the network needs in order to differentiate between them. The troubles start when only a small dataset is available. As a matter of fact, such a dataset might not have a suitable number of images for the network to learn how to perform both of the two example colorizations decently. The idea is to treat different classes of images as different tasks. To divide tasks, features were extracted from the dataset using a standard approach—e.g., a Convolutional Neural Network (CNN)—and the images were clusterized through K-means. Each cluster is thus considered as a single task. During training, Reptile tunes the network $G$ on the specific task corresponding to an input query image, and therefore it adapts the network to a specific colorization class. The problems and main questions that emerge in approaching few-shot colorization are various. First of all, how should the clusterization be made in order to generate a coherent and meaningful distribution of tasks? Does task specialization really improve the colorization, or is the act of automatically coloring a photo independent of the subject of the photo itself? Second, how should the meta-learning algorithm be combined with cGAN training, also to prevent overfitting the generator on few images? And last, since the purpose of the work is not to propose a solution to the colorization problem in general, but to propose a method that substantially reduces the amount of images involved in training without—or with minor—losses with respect to state-of-the-art results, how should the actual performance of the network be evaluated compared to other approaches? In particular, what are the factors that should be taken into account to state an enhancement, not in colorization proper, but in few-shot colorization? 
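The clusterization step can be sketched with plain Lloyd's-algorithm K-means. In the pipeline described above the feature vectors would come from a CNN; here they are replaced by synthetic points, an assumption made only so the sketch is self-contained:

```python
import numpy as np

def kmeans(features, k, iters=50, seed=0):
    """Plain Lloyd's algorithm; the returned labels play the role of task ids."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), size=k, replace=False)].copy()
    for _ in range(iters):
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):             # leave empty clusters unchanged
                centers[j] = features[labels == j].mean(axis=0)
    return labels

# Two well-separated synthetic "image feature" blobs -> two tasks.
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(0.0, 0.5, (20, 2)),
                   rng.normal(10.0, 0.5, (20, 2))])
labels = kmeans(feats, k=2)
```

Each label value then indexes the subset of images Reptile samples from when adapting the generator to that task.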
In the light of these considerations, the contributions of this work are summarized as follows: - A new architecture that combines meta-learning techniques and cGANs, called *MetalGAN*, is proposed, specifying in detail how the generator and the discriminator parameters are updated; - A clusterization and a novel algorithm are described and their ability to tackle image-to-image translation problems is highlighted; - An empirical demonstration that a very good colorization can be achieved even with a small dataset at disposal during training is provided by showing visual results; - A precise comparison between two modalities (i.e. our algorithm and cGAN-only training) is performed at experimental time, using the same network model and hyper-parameters. Related Work {#sec:related} ============ ### Image retrieval: Since we need the clusterization to be as accurate as possible, we paid particular attention to recent image retrieval techniques that focus on obtaining optimal descriptors. Recently, deep learning has greatly improved the feature extraction phase of image retrieval. Some of the most interesting papers on the subject are [@razavian2016visual; @gong2014multi; @babenko2014neural; @yue2015exploiting; @reddy2015object] and, in particular, MAC descriptors [@tolias2015particular], which we ended up using. ### Conditional GANs: When a GAN generator is conditioned not only with a random noise vector, but also with more complex information like text [@reed2016generative], labels [@mirza2014conditional], and especially images, the model to use is a *conditional* GAN (*cGAN*). cGANs allow better control over the output of the network and are thus very suitable for a lot of image generation tasks. In particular, cGANs conditioned on images were used both in a paired [@isola2017image] and unpaired [@zhu2017unpaired] way, to produce complex textures [@xian2018texturegan], to colorize sketches [@sangkloy2017scribbler] or images [@cao2017unsupervised] and
--- abstract: 'A set is effectively chosen in every class of $\fd02$ sets modulo countable.' author: - 'Vladimir Kanovei[^1]' title: 'Definable selector for $\fd02$ sets modulo countable' --- Let $\cnt$ be the equivalence relation of equality modulo countable, that is, $X\cnt Y$ iff the symmetric difference $X\sd Y$ is (at most) countable. Does there exist a definable selector, that is, an effective choice of an element in each equivalence class of sets of a certain type? The answer depends on the type of sets considered. For instance, the question is answered in the positive for the class of closed sets in Polish spaces by picking the only perfect set in each equivalence class of closed sets. On the other hand, effective selectors for $\cnt$ do not exist in the domain of $\Fs$ sets, e.g., in the Solovay model (in which the axiom of choice AC holds and all ROD sets are LM and have the Baire property) by [@1 Theorem 5.5]. Our goal here is to prove that $\Fs$ is the best possible for such a negative result. There exists a definable selector for $\cnt$ in the domain of $\fd02$ sets in Polish spaces. [($\fd02$ = all sets simultaneously $\Fs$ and $\Gd$.)]{} We’ll make use of the following lemma. If $X$ is a countable $\Gd$ set in a Polish space then the [closure]{} $\clo X$ is countable. Therefore if $X\cnt Y$ are $\fd02$ sets then $\clo X\cnt\clo Y$. Indeed, otherwise $X$ is a countable dense $\Gd$ set in an uncountable Polish space $\clo X$, which is not possible. [Difference hierarchy.]{} It is known (see [@kDST 22.E]) that every $\fd02$ set $A$ in a Polish space $\dX$ admits a representation in the form $A=\bigcup_{\et<\vt}(F_\et\bez H_\et)$, where $\vt<\omi$ and $F_0\qs H_0\qs F_1\qs H_1\qs\dots F_\et\qs H_\et\qs\dots$ is a decreasing sequence of closed sets in $\dX$, defined by induction so that $F_0=\dX$, $H_\et=\clo{F_\et\bez A}$, $F_{\et+1}=H_\et\cap \clo{F_\et\cap A}$, with the intersection taken at limit steps. The induction stops as soon as $F_\vt=\pu$. 
The key idea of the proof of Theorem \[mt\] is to show that if $A\cnt B$ are $\fd02$ sets then the corresponding sequences of closed sets $$\left. \bay{l} F_0^A\qs H^A_0\qs F^A_1\qs H^A_1\qs\dots F^A_\et\qs H^A_\et\qs\dots \\[1ex] F_0^B\qs H^B_0\qs F^B_1\qs H^B_1\qs\dots F^B_\et\qs H^B_\et\qs\dots \eay \right\} \quad (\et<\vt=\vt^A=\vt^B),\snos {A shorter sequence is extended to the longer one by empty sets if necessary.}$$ satisfying $A=\bigcup_{\et<\vt}(F^A_\et\bez H^A_\et)$ and $B=\bigcup_{\et<\vt}(F^B_\et\bez H^B_\et)$ as above, also satisfy $F^A_\et\cnt F^B_\et$ and $H^A_\et\cnt H^B_\et$ for all $\et<\vt$. It follows that the perfect kernels $\pk{F^A_\et}$, $\pk{F^B_\et}$ coincide: $\pk{F^A_\et}=\pk{F^B_\et}$, and $\pk{H^A_\et}=\pk{H^B_\et}$ as well. Therefore the sets $\Phi(A)=\bigcup_{\et<\vt}(\pk{F^A_\et}\bez \pk{H^A_\et})$ and $\Phi(B)$ coincide (whenever $A\cnt B$ are $\fd02$ sets), and $A\cnt\Phi(A)$ holds for each $\fd02$ set $A$, so $\Phi$ is the required selector, ending the proof of the theorem. Thus it remains to prove \[\*\]. We argue by induction. We have $F^A_0=F^B_0=\dX$ (the underlying Polish space). Suppose that $F^A_\et\cnt F^B_\et$; prove that $H^A_\et\cnt H^B_\et$. By definition, we have $H^A_\et=\clo{F^A_\et\bez A}$ and $H^B_\et=\clo{F^B_\et\bez B}$, where $(F^A_\et\bez A)\cnt (F^B_\et\bez B)$ (recall that $A\cnt B$ is assumed), hence $H^A_\et\cnt H^B_\et$ holds by Lemma \[fdL\]. It is quite similar to show that if $F^A_\et\cnt F^B_\et$ (and then $H^A_\et\cnt H^B_\et$ by the above) then $F^A_{\et+1}\cnt F^B_{\et+1}$ holds. This accomplishes the step $\et\to\et+1$. Finally the limit step is rather obvious. Coming back to the mentioned result of [@1 Theorem 5.5], it is a challenging problem to prove that the equivalence relation $\cnt$ on $\Fs$ sets is not ROD-reducible to the equality of Borel sets in the Solovay model. 
As established in [@kl], it is true in some models (including the Cohen and random extensions of $\rL$) that every OD and Borel set is OD-Borel (i.e., has an OD Borel code). In such a model, there is an effective choice of a set and its Borel code, by an OD function, in every class of Borel sets containing an OD set. The author thanks Philipp Schlicht for useful comments. [10]{} V. Kanovei and V. Lyubetsky. 147 (2019), 1277-1282. A. Kechris. . Springer-Verlag, New York, 1995. S. M[ü]{}ller, P. Schlicht, D. Schrittesser, T. Weinert. . , 1811.06489 v4. [^1]: IITP RAS, Bolshoy Karetny, 19, b.1, Moscow 127051, Russia. Partial support of RFBR grant 17-01-00705 acknowledged. [kanovei@googlemail.com]{}.
--- abstract: 'We consider a model of *selective prediction*, where the prediction algorithm is given a data sequence in an online fashion and asked to predict a pre-specified statistic of the upcoming data points. The algorithm is allowed to choose when to make the prediction as well as the length of the prediction window, possibly depending on the observations so far. We prove that, even without *any* distributional assumption on the input data stream, a large family of statistics can be estimated to non-trivial accuracy. To give one concrete example, suppose that we are given access to an arbitrary binary sequence $x_1, \ldots, x_n$ of length $n$. Our goal is to accurately predict the average observation, and we are allowed to choose the window over which the prediction is made: for some $t < n$ and $m \le n - t$, after seeing $t$ observations we predict the average of $x_{t+1}, \ldots, x_{t+m}$. This particular problem was first studied in [@drucker2013high] and referred to as the “density prediction game”. We show that the expected squared error of our prediction can be bounded by $O(\frac{1}{\log n})$ and prove a matching lower bound, which resolves an open question raised in [@drucker2013high]. This result holds for any sequence (that is not adaptive to when the prediction is made, or the predicted value), and the expectation of the error is with respect to the randomness of the prediction algorithm. Our results apply to more general statistics of a sequence of observations, and we highlight several open directions for future work.' author: - | Mingda Qiao\ `mqiao@stanford.edu` - | Gregory Valiant\ `valiant@stanford.edu`[^1] bibliography: - 'main.bib' title: 'A Theory of Selective Prediction[^2]' --- [^1]: This work is supported by NSF awards CCF-1704417 and AF:1813049 and by ONR award N00014-18-1-2295. [^2]: This revised version replaces an older version in which we had missed the closely related work of [@drucker2013high].
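To make the density prediction game concrete, here is one natural window-doubling strategy consistent with the setup above — a sketch, not necessarily the algorithm analyzed in the paper: pick a scale $j$ uniformly at random, observe the first $2^j$ bits, and predict their average for the next $2^j$ bits.

```python
import numpy as np

def doubling_strategy_errors(x):
    """Squared error at each scale j of the predict-after-2^j strategy.
    Averaging over j (the uniform random scale choice) gives the strategy's
    expected squared error on this sequence."""
    n = len(x)
    K = int(np.log2(n))
    errs = []
    for j in range(K):
        t = 2 ** j
        pred = np.mean(x[:t])               # average of the observed prefix
        actual = np.mean(x[t:2 * t])        # average over the prediction window
        errs.append(float((pred - actual) ** 2))
    return errs

x = np.array([0, 1] * 512)                  # alternating sequence, n = 1024
errs = doubling_strategy_errors(x)
```

On this sequence only the smallest scale errs, so the expected squared error is $1/\log_2 n = 0.1$; the interesting part of the theory is that no sequence can do much worse against such strategies, matched by the $\Omega(1/\log n)$ lower bound mentioned in the abstract.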
--- abstract: 'One of the essential features of quantum mechanics is that most pairs of observables cannot be measured simultaneously. This phenomenon is most strongly manifested when observables are related to mutually unbiased bases. In this paper, we shed some light on the connection between mutually unbiased bases and another essential feature of quantum mechanics, quantum entanglement. It is shown that a complete set of mutually unbiased bases of a bipartite system contains a fixed amount of entanglement, independently of the choice of the set. This has implications for entanglement distribution among the states of a complete set. In prime-squared dimensions we present an explicit experiment-friendly construction of a complete set with a particularly simple entanglement distribution. Finally, we describe basic properties of mutually unbiased bases composed only of product states. The constructions are illustrated with explicit examples in low dimensions. We believe that properties of entanglement in mutually unbiased bases might be one of the ingredients to be taken into account to settle the question of the existence of complete sets. We also expect that they will be relevant to applications of bases in the experimental realization of quantum protocols in higher-dimensional Hilbert spaces.' address: | $^1$ Vienna Center for Quantum Science and Technology (VCQ), Faculty of Physics, University of Vienna, Boltzmanngasse 5, 1090 Vienna, Austria\ $^2$ Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, 117543 Singapore, Singapore\ $^3$ Institute for Quantum Optics and Quantum Information (IQOQI), Austrian Academy of Sciences, Boltzmanngasse 3, 1090 Vienna, Austria author: - 'M Wieśniak$^{1}$[^1], T Paterek$^{2}$[^2], and A Zeilinger$^{1,3}$' title: Entanglement in mutually unbiased bases --- Introduction ============ Quantum complementarity forbids the simultaneous knowledge of almost all pairs of observables. 
This impossibility is drawn to the extreme in the case of observables described by operators whose eigenstates form mutually unbiased bases (MUBs). Two bases are said to be unbiased if any vector from one basis has an overlap of equal modulus with all vectors from the other basis. For a larger set of MUBs, the definition requires that the unbiasedness property hold for all pairs of these bases. Accordingly, if we can perfectly predict a measurement result of one such observable corresponding to an eigenstate in one of the bases, then the results of all other observables corresponding to all other basis vectors of all other bases in the set remain completely uncertain. One typical example of a set of three MUBs is the eigenbases of spin-${\mbox{$\textstyle \frac{1}{2}$}}$ projections onto three orthogonal directions: a spin-${\mbox{$\textstyle \frac{1}{2}$}}$ state along one axis leaves us totally uncertain about the results along the orthogonal axes. A spin-${\mbox{$\textstyle \frac{1}{2}$}}$ particle is a two-level quantum system, a qubit, and clearly admits three MUBs. A $d$-level quantum system, a qudit with pure states described in a $d$-dimensional Hilbert space, can have at most $d+1$ MUBs [@WF1989], and such a set is referred to as the complete set of MUBs. The first explicit construction of the complete sets of MUBs was presented by Ivanović for $d$ being a prime number [@IVANOVIC]. Subsequently, Wootters and Fields constructed the complete sets for prime-power $d$ [@WF1989]. Since then, many explicit constructions have been derived and they are collected in a recent review [@REVIEW]. If $d$ is not a prime power, the maximal number of MUBs remains unknown, although it is considered unlikely that a complete set of MUBs exists in these cases. For example, the works [@BH2007; @BW2008; @arX; @RLE2011] describe failed numerical attempts to find a complete set of MUBs in dimension 6. 
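For the qubit example these statements are easy to check numerically; the sketch below verifies that the three Pauli eigenbases are pairwise unbiased, i.e. every cross-overlap has squared modulus $1/d = 1/2$:

```python
import numpy as np

s = 1 / np.sqrt(2)
bases = [  # eigenbases of sigma_z, sigma_x, sigma_y: the d + 1 = 3 MUBs for d = 2
    [np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)],
    [np.array([s, s], dtype=complex), np.array([s, -s], dtype=complex)],
    [np.array([s, 1j * s], dtype=complex), np.array([s, -1j * s], dtype=complex)],
]

def unbiased(basis1, basis2, d=2):
    """True iff every cross-overlap has squared modulus exactly 1/d."""
    return all(np.isclose(abs(np.vdot(u, v)) ** 2, 1 / d)
               for u in basis1 for v in basis2)

checks = [unbiased(bases[m1], bases[m2])
          for m1 in range(3) for m2 in range(m1 + 1, 3)]
```

The same `unbiased` test applies verbatim in any dimension, e.g. to Ivanović's prime-$d$ construction.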
In addition to this fundamental question, MUBs find applications in quantum tomography [@WF1989], quantum cryptography [@BRUSS; @B-PP; @MOHAMED], the Mean King problem [@VAA1987; @AE2001; @ARAVIND2003; @HHH2005], and other tasks. Here we study the properties of entanglement between subsystems of a global system with a composite (i.e. nonprime) dimension, as well as entanglement distribution among the states of MUBs. We show that the amount of entanglement, as measured by a function of the linear entropy of a subsystem, present in the states of a complete set of MUBs of composite dimension must always have a nonzero value that is independent of the chosen set. In other words, entanglement is always present in such a complete set of MUBs and it is always the same independently of the choice of the complete set, being solely a function of the dimensions of the subsystems. Moreover, for sufficiently large global dimensionality, practically all MUBs of a complete set contain entanglement. We then show an experiment-friendly procedure that creates complete sets of MUBs in all dimensions $d=p^2$ that are squares of a prime number. This procedure uses only one entangling operation, which is repeatedly applied to states of product MUBs to give the complete set. Remarkably, the generated set consists of either product states or maximally entangled states. Finally, we discuss the properties of MUBs consisting of product states only. We believe that understanding entanglement in MUBs can lead on the practical side to novel applications and on the conceptual side to an understanding of why complete sets of MUBs can (not) exist for nonprime-power $d$. Conservation of entanglement ============================ Consider a bipartite system composed of subsystems $A$ and $B$, i.e. its global dimension is $d = d_A d_B$. 
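The entanglement measure used below — the purity $\mathrm{Tr}(\rho_A^2)$ of a reduced state, equivalently a function of the linear entropy — is straightforward to compute for such a bipartite system: reshape the coefficient vector into a $d_A \times d_B$ matrix, trace out $B$, and take the trace of the square. A minimal sketch:

```python
import numpy as np

def subsystem_purity(psi, dA, dB):
    """Tr(rho_A^2) for a bipartite pure state psi in dA * dB dimensions."""
    M = psi.reshape(dA, dB)        # coefficient matrix in the product basis
    rhoA = M @ M.conj().T          # reduced density operator of subsystem A
    return float(np.real(np.trace(rhoA @ rhoA)))

product = np.kron([1, 0], [1, 0]).astype(complex)          # |0>|0>
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)
```

The purity is $1$ for the product state and $1/d_A = 1/2$ for the maximally entangled one, the two extremes quoted later in the text.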
Any (hypothetical) complete set of MUBs allows for efficient quantum tomography as it reveals complete information about an arbitrary quantum state of the system [@WF1989; @LKB2003]. Hence we intuitively expect that the average entanglement over all the states constituting the complete set of MUBs shall be fixed with respect to some measure, independent of the choice of the bases. This intuition is made rigorous in this section. The relevant measure of entanglement is a function of the linear entropy of a reduced density operator. The idea of the proof is to use the property of a complete set of MUBs called a complex projective $2$-design [@BARNUM2002; @KR2005], which here means that the entanglement averaged over a complete set of MUBs is the same as the entanglement averaged over all pure quantum states. The latter is constant due to known results in statistical mechanics [@LUBKIN1978]. The message of this section, namely that the amount of entanglement is the same independent of a choice of the complete set of MUBs, may be well-known to scientists working with designs, but our proof is elementary and has immediate consequences for the distribution of entanglement among the states of MUBs. Complete sets of mutually unbiased bases and designs ---------------------------------------------------- A complete set of MUBs is composed of $d+1$ bases, each basis of $d$ orthonormal vectors. We denote by ${\left | j_m \right\rangle}$ the $j$th state of the $m$th basis, where for convenience we enumerate the states and the bases as $j=0,\dots,d-1$ and $m=0,\dots,d$. To introduce the notion of a $2$-design, one studies polynomials $\mathcal{P}(i) \equiv \mathcal{P}(x_1,x_2,y_1^*,y_2^* | i)$, which are biquadratic in variables $x_1,x_2$ and separately in variables $y_1^*,y_2^*$, where $x_i,y_i$ are any coefficients of arbitrary state $|i\rangle$ with respect to a fixed (say, standard) basis and $^*$ denotes complex conjugation. 
Any complete set of MUBs is known to be a complex projective $2$-design [@BARNUM2002; @KR2005] because the average of any $\mathcal{P}(j_m)$ over states ${\left | j_m \right\rangle}$ is the same as the average with the Haar measure over all pure states: $$\langle \mathcal{P}(j_m) \rangle_{\mathrm{MUBs}} = \langle \mathcal{P}(i) \rangle_{\mathrm{Haar}}. \label{MUB-HAAR}$$ The conservation law -------------------- In order to utilize the design property of the complete set of MUBs in the studies of entanglement, we characterize the latter by the purity of a reduced density operator, say $\rho_{A|j_m} = \mathrm{Tr}_B({\left | j_m \right\rangle} {\left \langle j_m \right |})$: $$\mathcal{P}(j_m) \equiv \mathrm{Tr}(\rho_{A|j_m}^2).$$ This quantity acquires its minimum of $\frac{1}{d_A}$ for maximally entangled states and its maximum of unity for unentangled product states. By ‘maximally entangled states’ we mean pure states with maximal possible entropy for the smaller of the subsystems. Note that due to the properties of the Schmidt decomposition it does not matter which subsystem is taken into account. Moreover, the assumptions behind Eq. (\[MUB-HAAR\]) are fulfilled and, since per definition $\langle \mathcal{P}(j_m) \rangle_{\mathrm{MUBs}} = \frac{1}{d(d+1)} \sum_{m=0}^{d} \sum_{j=0}^{d-1} \
--- abstract: 'Suppose we have identified three clusters of galaxies as being topological copies of the same object. How does this information constrain the possible models for the shape of our Universe? It is shown here that, if our Universe has flat spatial sections, these multiple images can be accommodated within any of the six classes of compact orientable 3-dimensional flat space forms. Moreover, the discovery of two more triples of multiple images in the neighbourhood of the first one, would allow the determination of the topology of the Universe, and in most cases the determination of its size.' author: - | G.I. Gomero[^1],\ \ Instituto de Física Teórica,\ Universidade Estadual Paulista,\ Rua Pamplona, 145\ São Paulo, SP 01405–900, Brazil title: '**Determining the shape of the Universe using discrete sources**' --- Introduction ============ The last two decades have seen a continuously increasing interest in studying cosmological models with multiply connected spatial sections (see [@Review] and references therein). Since observational cosmology is becoming an increasingly high precision science, it would be of wide interest to develop methods to systematically construct specific candidates for the shape of our Universe in order to analyse whether these models are consistent with observational data. Since one of the simplest predictions of cosmological models with multiply connected spatial sections is the existence of multiple images of discrete cosmic objects, such as clusters of galaxies,[^2] the following question immediately arises: Suppose we have identified three clusters of galaxies as being different topological copies of the same object, how does this information constrain the possible models for the shape of our Universe? The initial motivation for this work was the suggestion of Roukema and Edge that the X–ray clusters RXJ 1347.5–1145 and CL 09104+4109 may be topological images of the Coma cluster [@RE97]. 
Even if these particular clusters turn out not to be topological copies of the same object, the suggestion of Roukema and Edge raises an interesting challenge. *What if* one day a clever astrophysicist discovers three topological copies of the same object? It is shown here that these (would-be) multiple images could be accommodated within any of the six classes of compact orientable 3-dimensional flat space forms. Moreover, and this is the main result of this paper, the discovery of two more triples of multiple images in the neighbourhood of the first one would be enough to determine the topology of the Universe, and in most cases even its size. Thus, two interesting problems now appear: (i) does our present knowledge of the physics of clusters of galaxies (or alternatively, of quasars) allow the identification of a triple of multiple images if they actually exist?, and (ii) given that such an identification has been achieved, how easily can other triples of topological copies near the first one be identified? The present paper does not deal with these two problems; however, it should be noticed that a recent method proposed by A. Bernui and me in [@BerGo] (see also [@Gomero]) could be used to test, in a purely geometrical way, the hypothesis that any two given clusters of galaxies are topological copies. The model building procedure is explained in the next section, while section \[Examples\] presents some numerical examples illustrating specific candidates for the shape of our Universe, under the presumed validity of the Roukema–Edge hypothesis. Section \[Decide\] discusses the main result of this paper: how the topology of space could be determined with the observation of just two more triples of images, and how, in most cases, one could even determine the size of our Universe. Finally, section \[Concl\] consists of discussions of the results presented in this letter and suggestions for further research. 
Model Building {#ModBuild} ============== Suppose that three topological copies of the same cluster of galaxies have been identified. Let $C_0$ be the nearest copy from us, $C_1$ and $C_2$ the two other copies, $d_1$ and $d_2$ the distances from $C_0$ to $C_1$ and $C_2$ respectively, and $\theta$ the angle between the geodesic segments $\overline{C_0C_1}$ and $\overline{C_0C_2}$. Roukema and Edge [@RE97] have suggested an example of this configuration, the Coma cluster being $C_0$ and the clusters RXJ 1347.5–1145 and CL 09104+4109 being $C_1$ and $C_2$ (or vice versa). The distances of these clusters to Coma are 970 and 960$h^{-1}$ $Mpc$ respectively (for $\Omega_0=1$ and $\Lambda=0$), and the angle between them, with the Coma cluster at the vertex, is $\approx \! 88^o$. Under the assumption that this multiplicity of images was due to two translations of equal length and in orthogonal directions, they constructed FL cosmological models whose compact flat spatial sections of constant time were (i) 3-tori, (ii) manifolds of class ${\mathcal{G}_2}$, or (iii) manifolds of class ${\mathcal{G}_4}$, all of them with square cross sections, and scale along the third direction larger than the depth of the catalogue of X-ray clusters used in the analysis. Let us consider the possibility that at least one of the clusters $C_i$ is an image of $C_0$ by a screw motion, and let us not assume that the distances from $C_0$ to $C_1$ and $C_2$ are equal, nor that they form a right angle (with $C_0$ at the vertex). It is shown in this section that one can accommodate this generic configuration of clusters within any of the six classes of compact orientable 3-dimensional flat space forms, thus providing a plethora of models for the shape of our Universe consistent with the (would-be) observational fact that these clusters are in fact the same cluster. 
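As a quick numerical aside — treating the quoted comoving distances as Euclidean, which is the natural approximation for flat spatial sections — the law of cosines gives the separation between the two distant images:

```python
import numpy as np

# Quoted configuration: d1 = 970, d2 = 960 (h^-1 Mpc), theta ~ 88 degrees.
d1, d2 = 970.0, 960.0
theta = np.deg2rad(88.0)
d12 = np.sqrt(d1 ** 2 + d2 ** 2 - 2.0 * d1 * d2 * np.cos(theta))
# roughly 1.34e3 h^-1 Mpc: the C1--C2 separation is comparable to d1 and d2
```

Any candidate holonomy group must map $C_0$ to both $C_1$ and $C_2$ while reproducing these three side lengths.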
Moreover, one could also consider the possibility that one of the clusters $C_i$ is an image of $C_0$ by a glide reflection, thus giving rise to non–orientable manifolds as models for the shape of space. However, these cases will not be considered here since they do not give qualitatively different results, and the corresponding calculations can be done whenever needed. The diffeomorphic and isometric classifications of 3-dimensional Euclidean space forms given by Wolf in [@Wolf] were described in detail by Gomero and Rebouças in [@GR02]. The generators of the six diffeomorphic compact orientable classes are given in Table \[Tb:OESF\], where an isometry of Euclidean 3-space is denoted by $(A,a)$, with $a$ a vector and $A$ an orthogonal transformation, and the action is given by $$\label{action} (A,a) : p \mapsto Ap + a \; ,$$ for any point $p$. The orientation preserving orthogonal transformations that appear in the classification of the Euclidean space forms take the matrix forms $$\begin{aligned} \label{Rot3} A_1 = \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{array} \right) \; , & A_2 = \left( \begin{array}{ccc} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{array} \right) \; , & A_3 = \left( \begin{array}{ccc} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{array} \right) \; , \nonumber \\ \\ B = \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & -1 \end{array} \right) \; , & C = \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{array} \right) \quad\mbox{and} & D = \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 1 \end{array} \right) \; , \nonumber\end{aligned}$$ in the basis formed by the set $\{a,b,c\}$ of linearly independent vectors that appear in Table \[Tb:OESF\]. We will fit the set of multiple images $\{C_0,C_1,C_2\}$ within manifolds of classes ${\mathcal{G}_2}-{\mathcal{G}_6}$, since the class ${\mathcal{G}_1}$ (the 3–torus) is trivial. 
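As a sanity check on Eq. (\[Rot3\]), these matrices are orientation preserving (determinant $+1$) and have the orders expected of the holonomies: the $A_i$ are rotations by $\pi$ (trace $-1$), while $B$, $C$ and $D$ are rotations by $2\pi/3$, $\pi/2$ and $\pi/3$ (traces $0$, $1$ and $2$). A short numpy sketch verifying this (illustrative, not from the paper):

```python
import numpy as np

A1 = np.diag([1, -1, -1])
A2 = np.diag([-1, 1, -1])
A3 = np.diag([-1, -1, 1])
B = np.array([[1, 0, 0], [0, 0, -1], [0, 1, -1]])
C = np.array([[1, 0, 0], [0, 0, -1], [0, 1, 0]])
D = np.array([[1, 0, 0], [0, 0, -1], [0, 1, 1]])

I = np.eye(3, dtype=int)
for M, order in [(A1, 2), (A2, 2), (A3, 2), (B, 3), (C, 4), (D, 6)]:
    # M^order must be the identity, and no smaller power may be
    assert np.array_equal(np.linalg.matrix_power(M, order), I)
    # orientation preserving: det = +1
    assert round(np.linalg.det(M)) == 1

# The action (A,a): p -> A p + a composes as (A,a)(A',a') = (AA', A a' + a).
```

The composition rule in the last comment is what makes a screw motion $(A,a)$ with $A \neq I$ qualitatively different from a pure translation when iterated.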
Table \[Tb:OESF\]. Generators of the six diffeomorphic classes ${\mathcal{G}_1}$–${\mathcal{G}_6}$ of compact orientable Euclidean space forms: ${\mathcal{G}_1}$ is generated by the translations $a$, $b$, $c$; ${\mathcal{G}_2}$ by $(A_1,a)$ together with translations; and the holonomies of the remaining classes involve the orthogonal transformations of Eq. (\[Rot3\]).
--- abstract: 'We study the interaction between the microstructures of the regular AdS Hayward black hole using Ruppeiner geometry. Our investigation shows that the dominant interaction between the black hole molecules is attractive in most of the parametric space, as in the van der Waals system. However, in contrast to the van der Waals fluid, there exists a weakly dominant repulsive interaction for the small black hole phase in some parameter domain. This result clearly distinguishes the interactions in a magnetically charged black hole from those of the van der Waals fluid. Nevertheless, the interactions are universal for charged black holes since they do not depend on the magnetic charge or temperature.' author: - 'Naveena Kumara A.' - 'Ahmed Rizwan C.L.' - Kartheek Hegde - 'Ajith K.M.' bibliography: - 'BibTex.bib' title: 'Repulsive Interactions in the Microstructure of Regular Hayward Black Hole in Anti-de Sitter Spacetime' --- Introduction ============ In recent years the subject of black hole chemistry has become an attractive window to probe the properties of AdS black holes. In black hole chemistry, the negative cosmological constant of the AdS spacetime is identified with the thermodynamic variable pressure to study the phase transitions of AdS black holes [@Kastor:2009wy; @Dolan:2011xt]. Interestingly, the phase transitions of certain AdS black holes analytically resemble that of the van der Waals system [@Kubiznak2012; @Gunasekaran2012; @Kubiznak:2016qmn]. Recently, by studying the phase transitions, attempts were made to investigate the underlying microscopic properties of AdS black holes [@Wei2015; @Wei2019a; @Wei2019b; @Guo2019; @Miao2017; @Zangeneh2017; @Wei:2019ctz; @Kumara:2019xgt; @Kumara:2020mvo; @Xu:2019nnp; @Chabab2018; @Deng2017; @Miao2019a; @Chen2019; @Du2019; @Dehyadegari2017]. In these studies, geometric methods were the key tools in probing the microscopic details of the black holes. 
In contrast to statistical investigations in ordinary thermodynamics, the approach here is inverted: the macroscopic thermodynamic details are the ingredients for the microscopic study [@Ruppeinerb2008]. The technique is inspired by the applications of thermodynamic geometry in ordinary thermodynamic systems [@Ruppeiner95; @Janyszek_1990; @Oshima_1999x; @Mirza2008; @PhysRevE.88.032123]. Recently, a general Ruppeiner geometry framework was developed from the Boltzmann entropy formula to study the black hole microstructure [@Wei2019a]. The fluctuation coordinates are taken as the temperature and volume, and a universal metric was constructed in that scheme. When this methodology is applied to the van der Waals fluid, only a dominant attractive interaction is observed, as it should be. However, when the same methodology is used for the RN AdS black hole, a different result is obtained: in a small parameter range, a repulsive interaction is found in addition to the dominant attractive interaction between the black hole molecules [@Wei2019a; @Wei2019b]. Even so, in the case of the five-dimensional neutral Gauss-Bonnet black hole, only a dominant attractive interaction was discovered, similar to the van der Waals fluid [@Wei:2019ctz]. Therefore, in general, the nature of the black hole molecular interactions is not universal. In our recent work [@Kumara:2020mvo], we observed that there exists a repulsive interaction in the regular Hayward black hole, as in the RN AdS case. In the present work, we make a detailed study of this previously observed repulsive interaction. The primary motivation for our research is the great interest in regular black holes, since they do not possess singularities. 
A wide variety of regular black holes exists, ranging from the first solution given by Bardeen [@Bardeen1973] and the later versions [@AyonBeato:1998ub; @AyonBeato:2000zs], to the one in which we are interested, the Hayward black hole [@Hayward:2005gi]. (We refer the reader to our previous article [@Kumara:2020mvo] for a chronological discussion.) The Hayward black hole is a solution of Einstein gravity non-linearly coupled to an electromagnetic field, and it carries a magnetic charge. In this article, we probe the phase structure and repulsive interactions in the microstructure of this magnetically charged AdS black hole. The article is organised as follows. After a brief introduction, we discuss the phase structure of the black hole in section \[secone\]. Then the Ruppeiner geometry for the black hole is constructed for microstructure scrutiny (section \[sectwo\]), and we present our findings in section \[secthree\]. Phase structure of the Hayward AdS Black Hole {#secone} ============================================= The Hayward black hole solution in the four-dimensional AdS background is given by [@Fan:2016hvf; @Fan:2016rih] (see [@Kumara:2020mvo] for a brief explanation), $$ds^2=-f(r)dt^2+\frac{dr^2}{f(r)}+r^2d\Omega ^2,$$ where $d\Omega ^2=d\theta ^2+\sin ^2\theta \, d\phi ^2$ and the metric function is $$f(r)=1-\frac{2 M r^2}{g^3+r^3}+\frac{8}{3} \pi P r^2.$$ We study the phase structure in the extended phase space where the cosmological constant $\Lambda$ gives the pressure term $P=-\Lambda /8\pi$. The parameter $g$ is related to the total magnetic charge of the black hole $Q_m$ as $$Q_m=\frac{g^2}{\sqrt{2\alpha}},$$ where $\alpha$ is a free integration constant. 
The thermodynamic quantities temperature, volume and entropy of the black hole are easily obtained to be $$T=\frac{f'(r_+)}{4\pi}=\frac{2 P r^4}{g^3+r^3}-\frac{g^3}{2 \pi r \left(g^3+r^3\right)}+\frac{r^2}{4 \pi \left(g^3+r^3\right)}; \label{temperature}$$ $$V=\frac{4}{3} \pi \left(g^3+r^3\right) \quad \text{and} \quad S=2 \pi \left(\frac{r^2}{2}-\frac{g^3}{r}\right).$$ These results are consistent with the first law $$dM=TdS+\Psi dQ_m+VdP+\Pi d \alpha,$$ and the Smarr relation, $$M=2(TS-VP+\Pi \alpha)+\Psi Q_m.$$ The heat capacity of the black hole system at constant volume is $$C_V=T\left( \frac{\partial S}{\partial T}\right)_V=0. \label{cv}$$ Inverting the expression for the Hawking temperature (\[temperature\]) we get the equation of state, $$P=\frac{g^3}{4 \pi r^5}+\frac{g^3 T}{2 r^4}-\frac{1}{8 \pi r^2}+\frac{T}{2 r}.$$ From the equation of state one can see that the black hole shows critical behaviour similar to that of the van der Waals system. This is often interpreted as a transition between small black hole and large black hole phases. In our earlier study [@Kumara:2020mvo], we showed that an alternative interpretation is possible, using the Landau theory of continuous phase transitions, in which the transition is between black hole phases at different potentials. In this alternative view the black hole phases, namely the high potential, intermediate potential and low potential phases, are determined by the magnetic charge. In either of these interpretations the phase transition can be studied by choosing a pair of conjugate variables such as $(P,V)$ or $(T,S)$. With the conjugate pair $(P,V)$, Maxwell’s equal area law has the form $$P_0(V_2-V_1)=\int _{V_1}^{V_2}PdV. \label{equalarea}$$ Since there exists no analytical expression for the coexistence curve of the Hayward AdS black hole, we mostly seek numerical solutions. For that, the key ingredient is obtained from Maxwell’s equal area law. 
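The quoted equation of state can be verified directly by solving the temperature expression (\[temperature\]) for $P$ (here $r$ stands for the horizon radius $r_+$); a small sympy check, illustrative only:

```python
import sympy as sp

r, g, T, P = sp.symbols('r g T P', positive=True)

# Hawking temperature T(r, g, P) as in Eq. (temperature)
T_expr = (2*P*r**4/(g**3 + r**3)
          - g**3/(2*sp.pi*r*(g**3 + r**3))
          + r**2/(4*sp.pi*(g**3 + r**3)))

# Solve T = T_expr for P (the expression is linear in P)
P_sol = sp.solve(sp.Eq(T, T_expr), P)[0]

# Equation of state claimed in the text
P_claim = g**3/(4*sp.pi*r**5) + g**3*T/(2*r**4) - 1/(8*sp.pi*r**2) + T/(2*r)

assert sp.simplify(P_sol - P_claim) == 0
```

The same pattern (symbolically inverting $T(r_+)$ for $P$) is a convenient starting point for the numerical treatment of the equal area law mentioned above.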
Using the equation (\[equalarea\]) and expressions for $P_0(V_1)$ and $P_0(V_2)$ from equation of state we get, $$r_2=g\left[ \frac{x \left(x^3+6 x^2+6 x+1\right)+\sqrt{y}}{x^4}\right]^{1/3}, \label{r2eqn}$$ $$P_0=\frac{3 \left[\frac{\sqrt{y}+ x \left(x^3+6 x^2+6 x+1\right)}{x^4}\right]^{1/3} \left[\left(-2 x^4-11 x^3-20 x^2-11 x-2\right) \sqrt{y}+ z\right]}{16
--- abstract: 'A key property of Majorana zero modes is their protection against local perturbations. In the standard picture this is the result of a high degree of spatial wavefunction non-locality. A careful quantitative definition of non-locality in relation to topological protection goes beyond purely spatial separation, and must also take into account the projection of wavefunction spin. Its form should be physically motivated by the susceptibility of the Majorana mode to different local perturbations. Non-locality can then be expressed as one of various wavefunction overlaps depending on the type of perturbation. We quantify Majorana non-locality using this approach and study its dependence on Majorana nanowire parameters in several classes of experimentally relevant configurations. These include inhomogeneous nanowires with sharp and smooth depletion and induced pairing, barriers and quantum dots. Smooth inhomogeneities have been shown to produce near-zero modes below the critical Zeeman field in the bulk. We study how accurately their non-locality can be estimated using a purely local measurement on one end of the nanowire, accessible through conventional transport experiments. In uniform nanowires the local estimator quantifies non-locality with remarkable accuracy, but less so in nanowires with inhomogeneities greater than the superconducting gap. We further analyse the Majorana wavefunction structure, spin texture and the spectral features associated with each type of inhomogeneity. Our results highlight the strong connection between internal wavefunction degrees of freedom, non-locality and protection in smoothly inhomogeneous nanowires.' 
author: - 'Fernando Peñaranda$^1$, Ramón Aguado$^2$, Pablo San-Jose$^2$, Elsa Prada$^1$' bibliography: - 'biblio.bib' title: 'Quantifying wavefunction non-locality in inhomogeneous Majorana nanowires' --- Introduction ============ A unique electronic state by the name of Majorana zero mode (MZM)[@Kitaev:PU01] associated with topological superconductivity has been the subject of intense research recently. The pace picked up after the first experimental hints of its existence were reported six years ago [@Mourik:S12] in so-called Majorana nanowires, i.e. semiconducting nanowires with induced superconductivity and spin-orbit coupling subjected to a Zeeman field above a critical value $B>B_c$. These pioneering experiments were quickly followed by others [@Deng:NL12; @Das:NP12; @Churchill:PRB13; @Lee:NN14; @Deng:S16; @Zhang:N18; @Grivnin:A18], mostly revolving around robust zero energy midgap states in tunneling spectroscopy. ![**Inhomogeneous nanowires.** Sketch of an inhomogeneous nanowire, hosting Majorana zero modes of overlap $\Omega_s$. The overlap may be estimated by a local quantity $\eta$ measured by a local probe. Five types of inhomogeneous profiles of the electrostatic potential $\phi(x)$ and pairing $\Delta(x)$ are considered: uniform, S’S, NS, Barrier-S and Dot-S. The latter two are subtypes of the general NS case. []{data-label="fig:sketch"}](sketch.pdf){width="\columnwidth"} The reason for all the excitement is many-fold. From a technological perspective, MZM are viewed by many as a possible foundation of a new form of quantum computer architecture that could achieve topological protection against some forms of logic errors [@Nayak:RMP08; @Cheng:PRB12a; @Sarma:NQI15]. From the point of view of fundamental physics, standard theory predicts, moreover, that a MZM should exhibit some truly exotic properties. It is a zero energy state inside a superconducting gap that is typically localised in space [@Kitaev:PU01; @Oreg:PRL10; @Lutchyn:PRL10]. 
The most common place to find a MZM is at boundaries between regions of distinct electronic topology [@Qi:RMP11; @Aguado:RNC17; @Lutchyn:NRM18]. A MZM behaves in many respects as half an electron, with each MZM emerging simultaneously with a second Majorana partner located at some other position in the system. Two such “electron-halves” form a rather unusual, spatially non-local fermion [@Jackiw:PS12; @Fu:PRL10; @Semenoff:C16]. The non-local nature of this fermion pins it to zero energy regardless of any local perturbations performed on either of the MZMs. This is often called topological protection [@Cheng:PRB12a], although protection through non-locality is perhaps a better description, as will be argued here. Each MZM also exhibits non-Abelian braiding statistics upon exchange [@Nayak:RMP08]. They are hence sometimes called fractionalised non-Abelian Ising anyons [@Bonderson:PRB13; @Aasen:PRX16]. All these exotic properties are expected to be remarkably robust, and to not require any fine tuning of the system’s state. The reason is that, at least within standard theory, they are a consequence of the different band topology at either side of the boundary they inhabit, which does not depend on microscopic details. However, while the MZM analysis in terms of band-topology can be made rigorous for boundaries between semi-infinite systems, it becomes problematic when applied to closed, finite-sized systems, for example. It also fails to account for the properties of so-called trivial Andreev zero modes, also known as pseudo-MZMs or quasi-MZMs, predicted to appear in smoothly inhomogeneous nanowires without any obvious form of band-topological order [@Kells:PRB12; @Prada:PRB12; @Liu:PRB17; @Moore:PRB18; @Setiawan:PRB17; @Moore:18; @Liu:18; @Vuik:18]. An example of such states relevant to the present work arises in fully trivial nanowires ($B<B_c$) hosting a sufficiently smooth normal-superconductor interface, wherein modes of arbitrarily small energy localise. 
All the experimental evidence so far of conventional topological MZMs can be mimicked by pseudo-MZMs. This realisation has given rise to an intense debate regarding possible loopholes in the interpretation of the experimental observations, and on the protection, or lack thereof, of the observed zero modes. Intriguingly, these states share most properties with MZMs at the end of a uniform $B>B_c$ topological nanowire, except in one crucial aspect: they may be highly local in space. Since spatial non-locality is conventionally associated with the resilience of MZMs against error-inducing local perturbations, it is often argued that pseudo-MZMs, unlike MZMs, would not be useful for topological quantum computation. Perhaps for this reason the idea that two distinct types of zero modes, trivial and non-trivial, can exist in real samples has taken hold in recent literature. Instead of classifying states into trivial and non-trivial, we will characterise them in terms of their resilience against perturbations [@Cheng:PRB12a; @Aseev:A18; @Knapp:PRB18]. The associated susceptibility is expressed as different spatial overlap integrals $\Omega$ of the Majorana Nambu-spinorial wavefunctions, depending on the type of perturbation [@Penaranda:18]. Although all of these integrals express non-locality, the way the internal spin degrees of freedom combine in the overlap integral is different. This leads to several measures of non-locality $0\leq 1-\Omega\leq 1$ that go beyond purely spatial separation, and are directly connected to protection against perturbations. This quantity $1-\Omega$ provides an extension of the concept of *topological* non-triviality, but in contrast to the latter it is no longer an all-or-nothing proposition, but a matter of degree. It also has the distinct advantage of being generally applicable to zero modes in arbitrary isolated systems, not only semi-infinite ones. 
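As a toy illustration of why such overlaps quantify non-locality, consider two normalized scalar envelopes pinned to opposite ends of a wire of length $L$ with decay length $\xi$: their overlap shrinks exponentially with $L/\xi$. This is a schematic model, not the Nambu-spinor overlaps computed in the paper, and all names are illustrative:

```python
import numpy as np

def majorana_overlap(L, xi, n=4000):
    """Overlap of normalized envelopes exp(-x/xi) and exp(-(L-x)/xi) on [0, L]."""
    x, dx = np.linspace(0.0, L, n, retstep=True)
    uL = np.exp(-x / xi)          # mode localized at the left end
    uR = np.exp(-(L - x) / xi)    # partner localized at the right end
    uL /= np.sqrt(np.sum(uL**2) * dx)
    uR /= np.sqrt(np.sum(uR**2) * dx)
    return np.sum(uL * uR) * dx

# Longer wires (or shorter decay lengths) give smaller overlaps,
# i.e. larger non-locality 1 - Omega.
print(majorana_overlap(L=1.0, xi=0.2), majorana_overlap(L=2.0, xi=0.2))
```

For $L \gg \xi$ the overlap is exponentially small, so $1-\Omega \to 1$ and the mode is maximally non-local in this simple spatial measure; the spin-resolved overlaps discussed in the text refine this picture.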
From this point of view, $1-\Omega$ is proposed here as the essential figure of merit of a given MZM, ultimately associated with its resilience against decoherence [@Cheng:PRB12a; @Penaranda:18]. Beyond this, the distinction between ‘proper’ MZMs and pseudo-MZMs ceases to make sense. As an aside, we note that an alternative theoretical framework has recently been proposed that allows one to recover a well-defined and unambiguous trivial/non-trivial classification within this continuum of MZMs of isolated systems. It defines the topological nature of these zero modes in more general terms by considering the exceptional-point topology of the non-hermitian Hamiltonians that describe the system when it is coupled to a reservoir [@Avila:A18; @Pikulin:JL12; @Pikulin:PRB13]. In essence, the coupling to the reservoir makes the system infinite, so that it is once more amenable to a rigorous topological classification. This approach is related to band topology, but is more general, and in it the degree of non-locality of the isolated states studied here plays a crucial role. In this work we further consider the practical problem of quantifying and detecting the degree of non-locality of a given zero mode using purely local measurements by local spectroscopic probes. These include e.g. a tunnel contact or a quantum dot coupled to a certain point in a
--- abstract: | Non-standard sandwich gravitational waves are constructed from the homogeneous $pp$ vacuum solution and the motions of free test particles in the space-times are calculated explicitly. They demonstrate the caustic property of sandwich waves. By performing limits to an impulsive gravitational wave it is demonstrated that the resulting particle motions are identical regardless of the “initial” sandwich. PACS number(s): 04.30.-w, 04.20.Jb, 98.80.Hw --- J. Podolský, K. Veselý Department of Theoretical Physics, Faculty of Mathematics and Physics, Charles University, V Holešovičkách 2, 180 00 Prague 8, Czech Republic [ Electronic address: podolsky@mbox.troja.mff.cuni.cz]{} **1 Introduction** Plane-fronted gravitational waves with parallel rays ([*pp*]{} waves) are characterized by the existence of a quadruple Debever-Penrose null vector field which is covariantly constant. In suitable coordinates (cf. [@KSMH]) the metric of vacuum [*pp*]{} waves can be written as $${{\rm d}}s^2=2\,{{\rm d}}\zeta {{\rm d}}\bar\zeta-2\,{{\rm d}}u{{\rm d}}v-(f+\bar f)\,{{\rm d}}u^2\ , \label{E1}$$ where $f(u,\zeta)$ is an arbitrary function of $u$, analytic in $\zeta$. The only non-trivial components of the curvature tensor are proportional to $f_{,\zeta\zeta}$, so that (\[E1\]) represents flat Minkowski space-time when $f$ is linear in $\zeta$. The simplest case in which the metric describes gravitational waves arises when $f$ is of the form $$f(u,\zeta)=d(u)\zeta^2\ , \label{E2}$$ where $d(u)$ is an [*arbitrary*]{} function of $u$; such solutions are called homogeneous [*pp*]{} waves (or “plane” gravitational waves). Performing the transformation (cf. 
[@Penrose]) $$\begin{aligned} \zeta&=&\frac{1}{\sqrt{2}}\,(Px+iQy) \ ,\nonumber\\ v &=&\frac{1}{2}\,(t+z+PP'x^2+QQ'y^2) \ ,\label{E3}\\ u &=&t-z \ ,\nonumber\end{aligned}$$ where the real functions $P(u)\equiv P(t-z)$, $Q(u)\equiv Q(t-z)$ are solutions of the differential equations $$P''+d(u)\,P=0\ ,\qquad Q''-d(u)\,Q=0\ , \label{E4}$$ (here a prime denotes the derivative with respect to $u$), the metric can simply be written as $${{\rm d}}s^2 = - {{\rm d}}t^2 + P^2 {{\rm d}}x^2 + Q^2 {{\rm d}}y^2 + {{\rm d}}z^2\ . \label{E5}$$ This form of the homogeneous [*pp*]{} waves is suitable for physical interpretation. Considering two free test particles standing at fixed $x$, $y$ and $z$, their relative motion in the $x$-direction is given by the function $P(u)$, while in the $y$-direction it is given by $Q(u)$. The motions are unaffected in the $z$-direction, which demonstrates the transversality of gravitational waves. The coordinate $u=t-z$ can now be understood as a “retarded time” and the function $d(u)$ as the “profile” of the wave. Note also that the functions $P, Q$ may have a higher degree of smoothness than the function $d$, so that the relative motions of particles are continuous even in the case of a shock wave (with a step-function profile, $d(u)\sim\Theta(u)$), or an impulsive wave (with a distributional profile, $d(u)\sim\delta(u)$). **2 Standard sandwich wave** A sandwich gravitational wave [@BPR; @BP] is constructed from the homogeneous [*pp*]{} solution (\[E1\]), (\[E2\]) if the function $d(u)$ is non-vanishing only on some finite interval of $u$, say $u\in[u_1, u_2]$. In such a case the space-time splits into three regions: a flat region $u<u_1$ (“Beforezone”), a curved region $u_1<u<u_2$ (“Wavezone”), and another flat region $u_2<u$ (“Afterzone”). In the region $u<u_1$, where $d(u)=0$, it is natural to choose solutions of Eqs. (\[E4\]) such that $P=1$ and $Q=1$, so that the metric (\[E5\]) is explicitly written in Minkowski form. 
The form of the metric (\[E5\]) for $u>u_1$ is then given by solutions of Eqs. (\[E4\]) where the functions $P, Q$ are chosen to be continuous up to the first derivatives at $u_1$ and $u_2$. A standard example of a sandwich wave can be found in textbooks (cf. [@Rindler]). The “square” profile function $d(u)$ is given simply by $$d(u)=\left\{ \begin{array}{l} 0, \qquad u<0 \\ a^{-2}, \quad 0\le u\le a^2 \\ 0, \qquad a^2<u \end{array}\right. \label{E6}$$ where $a$ is a constant. It is easy to show that the corresponding functions $P, Q$ are given by $$\begin{aligned} P(u)&=&\left\{ \begin{array}{l} 1, \hskip 57mm u\le0 \\ \cos(u/a), \hskip44mm 0\le u\le a^2 \\ -u \sin a/a+\cos a+a\sin a, \hskip11mm a^2\le u \end{array}\right. \label{E7}\\ Q(u)&=&\left\{ \begin{array}{l} 1, \hskip 57mm u\le 0 \\ \cosh(u/a), \hskip42mm 0\le u\le a^2 \\ u \sinh a/a+\cosh a-a\sinh a, \qquad a^2\le u \end{array}\right. \label{E8}\end{aligned}$$ Therefore, particles which were initially at rest accelerate within the wave in such a way that they approach each other in the $x$-direction and move apart in the $y$-direction. Behind the wave they move uniformly (see Fig. 1a). **3 Non-standard sandwich waves** Now we construct some other (non-trivial) sandwich waves. Our work is motivated primarily by the possibility of obtaining impulsive gravitational waves by performing appropriate limits starting from [*different*]{} sandwich waves (see next Section). This also enables us to study particle motions in such radiative space-times. Moreover, the standard sandwich wave given by (\[E6\]) is very special and “peculiar” since it represents a [*radiative*]{} space-time containing [*stationary*]{} regions. Indeed, for $d(u)$ being a positive constant, the Killing vector $\partial_u$ is timelike where $|Re\,\zeta|>|Im\,\zeta|$. This “strange” property has remained unnoticed in the literature so far. It may be useful to introduce more general sandwich waves which are [*not*]{} stationary. 
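The constants in the Afterzone branches of Eqs. (\[E7\])–(\[E8\]) are fixed by demanding $C^1$ matching at $u=a^2$; a quick numerical check of this continuity (an illustrative sketch, with an arbitrary value of $a$):

```python
import math

a = 0.7      # arbitrary positive constant
u = a**2     # matching point between Wavezone and Afterzone

# Wavezone solutions of Eq. (E4) with d = a^{-2}, and their u-derivatives
P_wave, dP_wave = math.cos(u / a), -math.sin(u / a) / a
Q_wave, dQ_wave = math.cosh(u / a), math.sinh(u / a) / a

# Afterzone (flat-region) branches from Eqs. (E7)-(E8)
P_after = -u * math.sin(a) / a + math.cos(a) + a * math.sin(a)
dP_after = -math.sin(a) / a
Q_after = u * math.sinh(a) / a + math.cosh(a) - a * math.sinh(a)
dQ_after = math.sinh(a) / a

# Value and first derivative agree at u = a^2, as required
for lhs, rhs in [(P_wave, P_after), (dP_wave, dP_after),
                 (Q_wave, Q_after), (dQ_wave, dQ_after)]:
    assert math.isclose(lhs, rhs, rel_tol=1e-12)
```

The linear-in-$u$ Afterzone branches are just the free (flat-space) solutions of Eq. (\[E4\]), which is why the particles move uniformly behind the wave.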
[**a) Sandwich wave with “$\bigwedge$” profile**]{} Let us consider a solution (\[E1\]), (\[E2\]) for which the function $d(u)$ takes the form $$d(u)=\left\{ \begin{array}{l} 0, \hskip27mm u\le -a \\ b\,(a+u)/a, \hskip10mm -a\le u\le0 \\ b\,(a-u)/a, \hskip10mm 0\le u\le a \\ 0, \hskip27mm a\le u \end{array}\right. \label{E9}$$ where $a, b$ are arbitrary real (positive) constants. The wave has a “wedge” profile illustrated in Fig. 1b. Straightforward but somewhat lengthy calculations give the following form of the functions $P(u), Q(u)$ (continuous up to the second derivatives everywhere including the points $u=-a$, $u=0$ and $u=a$): $$\begin{aligned} P(u)&=&\left\{ \begin{array}{l} 1, \hskip 70mm u\le -a \\ c\,\sqrt{u_1}\,J_{-\frac{1}{3}}(\frac{2}{3}u_1^{3/2}), \hskip40mm -a\le u\le 0 \\ \sqrt{u_2}\,\left[A\,
--- author: - 'Jun-ichi [Igarashi]{}[^1] and Tatsuya [Nagao]{}$^{1}$' title: 'Lattice Distortion and Resonant X-Ray Scattering in DyB$_{2}$C$_2$' --- Introduction ============ Resonant x-ray scattering has recently attracted much interest, since the resonant enhancement for the prohibited Bragg reflection corresponding to the orbital order has been observed in several transition-metal compounds by using synchrotron radiation with photon energy around the $K$ absorption edge. [@Murakami98a; @Murakami98b; @Murakami99c; @Murakami00b] For such $K$-edge resonances, $4p$ states of transition metals are involved in the intermediate state in the electric dipolar ($E_1$) process, and they have to be modulated in accordance with the orbital order for the signal to be observed. This modulation was first considered to come from the anisotropic term of the $4p$-$3d$ intra-atomic Coulomb interaction, [@Ishihara1] but subsequent studies based on band structure calculations [@Elfimov; @Benfatto; @Takahashi1; @Takahashi2] have revealed that the modulation comes mainly from the crystal distortion via the oxygen potential on the neighboring sites. This is because the $4p$ states are so extended in space that they are very sensitive to the electronic structure at neighboring sites. Rare-earth compounds also show orbital order (usually an ordering of quadrupole moments). In CeB$_6$, RXS experiments were carried out around the Ce $L_{\rm III}$ absorption edge, and resonant enhancements have been found on quadrupolar ordering superlattice spots. [@Nakao01] Only one peak appeared as a function of photon energy, which was assigned to the $E_1$ process. In the $E_1$ process, $5d$ states of Ce in the intermediate state are to be modulated in accordance with the superlattice spots. 
Since the lattice distortion seems extremely small and the $5d$ states are less extended than the $4p$ states in transition-metal compounds, it is highly possible that the modulation is mainly caused by the Coulomb interaction between the $5d$ states and the orbital-ordering $4f$ states. In our previous papers,[@Nagao; @Igarashi] we demonstrated this scenario by calculating the RXS spectra on the basis of the effective Hamiltonian of Shiina et al.[@Shiina; @Sakai; @Shiba] Without the help of lattice distortion, we obtained sufficient intensities of the spectra, and reproduced well the temperature and magnetic field dependences. This situation contrasts with that in transition-metal compounds. ![ (a) Sketch of the crystal structure of DyB$_2$C$_2$ ($P4/mbm$: $a=5.341$ ${\rm \AA}$, $c=3.547$ ${\rm \AA}$ at $30$ K). Gray large circles are Dy atoms. Solid and open small circles are B and C atoms, respectively. (b) Local coordinate frames attached to each sublattice. \[fig.cryst\]](fig.print.1.eps){width="8.0cm"} Another example among rare-earth compounds is provided by RXS experiments on DyB$_2$C$_2$, where the intensity is resonantly enhanced near the Dy $L_{\rm III}$ absorption edge. [@Tanaka; @Hirota; @Matsumura] This material takes a tetragonal form at high temperatures as shown in Fig. 
\[fig.cryst\](a), and undergoes two phase transitions with decreasing temperature in the absence of a magnetic field: to a quadrupole order below $T_{\rm Q}$ ($=24.7$ K) (Phase II) and to a magnetic order below $T_{\rm C}$ ($=15.3$ K) (Phase III).[@Yamauchi] Corresponding to the transition at $T_{\rm Q}$, a large non-resonant intensity is found in the $\sigma\to\sigma'$ channel on the $(h0\frac{\ell}{2})$ spot ($h$ and $\ell$ are odd integers).[@Matsumura] This suggests that some structural change takes place at $T=T_{\rm Q}$ from the tetragonal phase at high temperatures.[@Tanaka; @Hirota] A buckling of the sheets of B and C atoms was proposed,[@Tanaka] and the non-resonant intensities due to the buckling have recently been evaluated; a shift of the B and/or C atoms of about $0.01$ ${\rm \AA}$ may be sufficient to give rise to such large intensities.[@Adachi] It is not clear from experiments whether the intensity on this spot is resonantly enhanced at the $L_{\rm III}$ edge, since the non-resonant part is so large that it may mask the resonant behavior. On the other hand, the resonant enhancement of RXS intensities has clearly been observed on the superlattice spot $(00\frac{\ell}{2})$. In this paper, we study the mechanism of the RXS spectra at the $L_{\rm III}$ edge in Phase II of DyB$_2$C$_2$. The $5d$ states are so extended in space that they are sensitive to the lattice distortion caused by the buckling of the sheets of B and C atoms. The question then arises whether the direct influence of the lattice distortion on the $5d$ states is larger than the influence of the anisotropic $4f$ charge distribution associated with the quadrupole order through the $5d$-$4f$ Coulomb interaction. Lovesey and Knight[@Lovesey] have discussed the mechanism from the symmetry viewpoint, and have pointed out that the RXS intensities on the $(00\frac{\ell}{2})$ and $(h0\frac{\ell}{2})$ spots come from a lowering of the local symmetry, probably due to lattice distortion. 
The argument based on symmetry alone is powerful in some respects, but does not shed light on this issue. In the transition-metal compounds, the corresponding question has already been answered by [*ab initio*]{} calculations, as mentioned above. However, such [*ab initio*]{} calculations are difficult in rare-earth compounds. We resort to a model calculation by treating the $5d$ states as a band and the $4f$ states as localized states. The buckling of the sheets of B and C atoms causes modulations of the $5d$ bands and of the $4f$ states. We analyze such effects of lattice distortion on the basis of the point charge model,[@Hutchings] which leads to four inequivalent Dy sites with the principal axes of the crystal field shown in Fig. \[fig.cryst\](b). These principal axes seem to correspond well to the directions of the magnetic moments in the magnetic phase.[@Yamauchi] Of course, the point charge model is not good from a quantitative viewpoint. Nonetheless, we construct an effective model in which the $5d$ and $4f$ states are under a crystal field of the same form and with the same principal axes as in the above analysis. The crystal field modulates the $5d$ states. Although the actual effect may come from hybridizations with the $2p$, $3s$ states of B and C, it can be included in the form of a crystal field. The crystal field also makes the quadrupole moments of the $4f$ states align along the principal axes, establishing a quadrupole order. A molecular field caused by the Dy-Dy interaction may also act on the $4f$ states in Phase II in addition to the crystal field. This interaction may be mediated by the RKKY mechanism, but its explicit form has not been derived yet. Note that the Ce-Ce interaction in CeB$_6$ has been extensively studied, and describes well the phase diagram under a magnetic field. [@Shiina; @Sakai; @Shiba] The molecular field, however, may change little and may even stabilize the quadrupole order. 
Therefore, we need not consider the molecular field explicitly, regarding the crystal field as including its effect. The charge anisotropy associated with the quadrupole order modulates the $5d$ states through the intra-atomic $5d$-$4f$ Coulomb interaction. We calculate the RXS intensity within the $E_1$ transition, taking account of both processes of modulating the $5d$ states, the direct and the indirect one. Both processes give rise to RXS intensities on the $(00\frac{\ell}{2})$ and $(h0\frac{\ell}{2})$ spots. Both give similar photon-energy dependences and the same azimuthal-angle dependence, in agreement with experiment. However, the mechanism of direct modulation of the $5d$ band gives rise to intensities much larger than those of the indirect modulation through the $5d$-$4f$ Coulomb interaction in a wide parameter range of the crystal field. This suggests that the RXS intensities are mainly controlled by the lattice distortion. This paper is organized as follows. In § 2, we analyze the buckling of the sheets of B and C atoms. In § 3, we briefly summarize the formulae used in the calculation of the RXS spectra. In § 4, we calculate the RXS spectra for the two mechanisms. Section 5 is devoted to concluding remarks. Lattice Distortion ================== ![ Sketch of a B$_2$C$_2$ sheet ($z=c/2$). Open circles represent B and C atoms; large and small circles move in the positive and negative directions along the $z$ axis, respectively. The directions are reversed on the plane of $z=-c/2$.
--- abstract: 'Uncertainty quantification plays an important role in biomedical engineering, as measurement data is often unavailable and literature data shows a wide variability. Using state-of-the-art methods, one encounters difficulties when the number of random inputs is large. This is the case, e.g., when using composite Cole-Cole equations to model random electrical properties. It is shown how the number of parameters can be significantly reduced by the Karhunen-Loève expansion. The low-dimensional random model is used to quantify uncertainties in the axon activation during deep brain stimulation. Numerical results for a Medtronic 3387 electrode design are given.' author: - '[^1]' title: 'Low-Dimensional Stochastic Modeling of the Electrical Properties of Biological Tissues' --- Uncertainty, random processes, principal component analysis, biomedical engineering. Introduction ============ The electrical properties of biological tissue are based on experimental data and are subject to large variability in the literature [@gabriel2009; @schmidtieee2013], which arises from difficulties associated with the measuring process. These properties vary with frequency and exhibit a non-symmetrical distribution of relaxation times, which can be described by composite Cole-Cole equations. Randomness in the material can be accounted for by modeling the parameters in the Cole-Cole equations as random variables. This gives rise to random material laws which are physically motivated but contain a large number of random parameters. Hence, they are not well suited for the majority of uncertainty quantification methods, which scale unfavorably with the dimension of the parameter space. In this study we exploit correlation in the random Cole-Cole equation to substantially reduce the number of parameters. In particular, we use an eigendecomposition of the covariance matrix to derive a low-rank approximation.
The truncated Karhunen-Loève (KL) expansion [@loeve1978; @ghanem1991] of the random material is then spanned in the direction of the dominant eigenfunctions. This procedure is closely related to principal component analysis and proper orthogonal decomposition. The final computational goal is to quantify uncertainties in the axon activation during Deep Brain Stimulation (DBS) [@schmidtieee2013]. To this end, the stimulation electrode and the surrounding brain tissue are modeled as a volume conductor, see Figure \[fig:axons\] (left). A numerical approximation of the electric potential is obtained by the finite element method. The quantity of interest is the minimal electrode current to be applied in order to activate a particular axon in the electrode’s vicinity. This optimization is formulated as a root-finding problem for a function obtained from post-processing the solution of the volume conductor problem. Brent’s method is applied for its numerical solution. ![Axons aligned perpendicular to electrode (left) and computational domain with mesh using rotational symmetry (right)[]{data-label="fig:axons"}](fig01.png){width="0.13\columnwidth"} ![Axons aligned perpendicular to electrode (left) and computational domain with mesh using rotational symmetry (right)](fig02.png){width="0.54\columnwidth"} In the presence of randomness in the electric coefficients, uncertainty quantification techniques are required. We use a stochastic quadrature on sparse grids [@xiu2005; @babuska2010] to efficiently compute the mean value and standard deviation of the axon activation.
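The root-finding step can be sketched as follows. The activation-margin function below is a purely hypothetical stand-in for the real criterion, which post-processes a finite-element solution of the volume conductor problem; the threshold value of 1.37 mA and the bracket are invented for illustration:

```python
import math
from scipy.optimize import brentq

def activation_margin(current_mA, threshold_mA=1.37):
    # Hypothetical smooth, monotone surrogate: positive iff the axon fires.
    # In the actual workflow this value would come from post-processing the
    # finite-element potential along the axon of interest.
    return math.tanh(current_mA - threshold_mA)

# Minimal activation current = root of the margin, bracketed in [0, 10] mA.
i_min = brentq(activation_margin, 0.0, 10.0, xtol=1e-9)
```

Brent's method only needs a sign change on the bracket, which suits a quantity obtained from an expensive black-box simulation.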
The method is non-intrusive, as it only requires repeated runs of the volume conductor model and the activation potential post-processing routine. The paper is organized as follows: Sections \[sec:cole\] and \[sec:KL\] contain the random Cole-Cole equation together with the KL expansion. Section \[sec:problem\] briefly summarizes the main equations needed for modeling DBS. Section \[sec:uq\] introduces a stochastic setting together with the stochastic quadrature. Finally, numerical results for a Medtronic 3387 electrode design are given in Section \[sec:num\]. Random Cole-Cole Equation {#sec:cole} ========================= Electrical properties of biological tissues can be modeled by the Cole-Cole equation $$f(\omega) = \epsilon_\infty + \frac{\varkappa_i}{j \omega \epsilon_0} + \sum_{n=1}^4 \frac{\Delta \epsilon_n}{1+(j \omega \tau_n)^{1-\alpha_n}}, \label{eq:cole_cole}$$ where $\omega$ denotes the angular frequency, $j$ the imaginary unit, $\varkappa_i$ the static ionic conductivity, and $\tau_n$ the relaxation time constants. Also, $\epsilon_\infty$ and $\Delta \epsilon_n$ denote the high-frequency relative permittivity and the difference between the low- and high-frequency relative permittivity, respectively. From $f(\omega)$, the permittivity and electric conductivity are inferred as $\epsilon(\omega) = {\mathrm{Re}(f(\omega))}$ and $\varkappa(\omega) = -{\mathrm{Im}(\epsilon_0 \omega f(\omega))}$, with $\mathrm{Re}$ and $\mathrm{Im}$ referring to the real part and imaginary part, respectively. In this equation, $\epsilon_\infty,\varkappa_i,\Delta \epsilon_n,\tau_n$ and $\alpha_n$ are parameters that need to be inferred from measurements. As uncertainties are inevitably connected to this process, we consider these parameters to be random variables $Y_i: \Theta \rightarrow \mathbb{R}$, $i=1,\ldots,14$, where $\Theta$ refers to a set of random outcomes.
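Evaluating the composite Cole-Cole model and post-processing it into $\epsilon(\omega)$ and $\varkappa(\omega)$ is straightforward; a minimal sketch follows, where the parameter values are illustrative placeholders, not fitted tissue data:

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity [F/m]

def cole_cole(omega, eps_inf, kappa_i, d_eps, tau, alpha):
    """Composite 4-pole Cole-Cole model f(omega). The parameter values
    passed below are illustrative only; real values come from fits to
    measured tissue data."""
    jw = 1j * omega
    f = eps_inf + kappa_i / (jw * EPS0)
    for de, t, a in zip(d_eps, tau, alpha):
        f += de / (1.0 + (jw * t) ** (1.0 - a))
    return f

omega = 2 * np.pi * 1e3  # 1 kHz
f = cole_cole(omega, eps_inf=4.0, kappa_i=0.02,
              d_eps=[45.0, 400.0, 2e5, 4.5e7],
              tau=[7.96e-12, 15.92e-9, 106.1e-6, 5.305e-3],
              alpha=[0.1, 0.15, 0.22, 0.0])
eps = f.real                       # relative permittivity
kappa = -(EPS0 * omega * f).imag   # conductivity [S/m]
```

The relaxation terms add nonnegative contributions to both quantities, so $\epsilon > \epsilon_\infty$ and $\varkappa > \varkappa_i$ at any finite frequency.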
Then, with $\theta \in \Theta$ denoting a random event, the random Cole-Cole equation reads $$f(\theta,\omega) = Y_1(\theta) + \frac{Y_2(\theta)}{j \omega \epsilon_0} + \sum_{i=1}^4 \frac{Y_{3 i}(\theta)}{1+(j \omega Y_{3 i + 1}(\theta))^{1-Y_{3 i + 2}(\theta)}}. \label{eq:random_cole_cole}$$ In view of this equation, both the electric permittivity and the conductivity are random. Since the following derivation is identical for $\epsilon$ and $\varkappa$, we use the function $g$ to refer to either of them. Important measures of the random field $g$ are the expected value and the covariance, given as $$\begin{aligned} {\mathrm{E}}_g(\omega) &= \int_{\Theta} g(\theta,\omega) \ \mathrm{d} P(\theta), \\ {\mathrm{Cov}}_g(\omega,\omega') &= \int_{\Theta} (g(\theta,\omega)-{\mathrm{E}}_g(\omega)) \notag \\ & \hspace*{3em} \cdot (g(\theta,\omega')-{\mathrm{E}}_g(\omega')) \ \mathrm{d} P(\theta),\end{aligned}$$ where $P$ refers to a probability measure. Discrete Karhunen-Loève Expansion {#sec:KL} ================================= When the random Cole-Cole equation is used within simulations, both the large number of random variables and their possible correlation pose difficulties. The former results in a high computational complexity, whereas a possible correlation of the inputs cannot be handled by many state-of-the-art uncertainty quantification methods. In the following we apply the discrete Karhunen-Loève expansion (KLE) to reduce the number of random variables. Although the KLE is readily applicable to random fields, we consider its discrete variant, also referred to as principal component analysis. The exposition thereby follows [@elia2013coarse].
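The discrete KLE step (assemble a covariance matrix from samples over a frequency grid, diagonalize, truncate) can be sketched with synthetic data built from two global random modes, so the reduction should recover exactly two dominant eigenvalues; all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 60, 2000                      # frequency points, Monte Carlo samples
omega = np.logspace(1, 6, N)         # equidistant on a logarithmic scale

# Synthetic correlated samples g(theta, omega): a smooth mean curve plus
# two random global modes (a stand-in for sampling a random material law).
mean = 1.0 / (1.0 + (1e-4 * omega) ** 2)
G = (mean
     + 0.05 * rng.standard_normal((M, 1)) * mean
     + 0.02 * rng.standard_normal((M, 1)) * np.log10(omega))

C = np.cov(G, rowvar=False)          # sampled covariance matrix, N x N
lam, V = np.linalg.eigh(C)           # eigenpairs, ascending order
lam, V = lam[::-1], V[:, ::-1]       # reorder to decreasing eigenvalues

# Truncate where the captured variance first exceeds 99%.
r = int(np.searchsorted(np.cumsum(lam) / lam.sum(), 0.99)) + 1
```

Since the synthetic fluctuations live in a two-dimensional subspace, the covariance has rank two and the truncation yields `r == 2`, with the remaining eigenvalues at machine-precision level.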
Given a set of frequency points $\{\omega_n\}_{n=1}^N$, chosen equidistantly over a fixed interval on a logarithmic scale, we consider the covariance matrix ${\mathbf{C}}$ with entries $$C_{n_1,n_2} = {\mathrm{Cov}}_g(\omega_{n_1},\omega_{n_2}), \ n_1,n_2=1,\ldots,N \label{eq:cov}$$ and denote its eigenvectors and eigenvalues with $\mathbf{b}_n$ and $\lambda_n$, respectively. Then, ${\mathbf{C}}$ can be decomposed as $$\mathbf{C} = \mathbf{V} \mathbf{E} \mathbf{V}^\top,$$ with $\mathbf{V}$ storing the eigenvectors $\mathbf{b}_n$ column-wise and $\mathbf{E}$ containing the eigenvalues $\lambda_n$ in decreasing order on its diagonal. As ${\mathbf{C}}$ is symmetric positive definite, the $\lambda_n$ are real and positive. Moreover, given a strongly
--- abstract: 'We prove asymptotic formulas for the density of integral points taking coprime polynomial values on the affine quadrics defined by $Q(X_1,\cdots,X_n)=m$, where $Q$ is a non-degenerate quadratic form in $n\geqslant 3$ variables and $m$ a non-zero integer. This is a quantitative version of the arithmetic purity of strong approximation off infinity for affine quadrics, a property that has been established in our previous work, and may also be viewed as a refined version of the Hardy-Littlewood property in the sense of Borovoi-Rudnick’s work.' address: - 'Yang CAO Leibniz Universität Hannover Welfengarten 1, 30167 Hannover, Germany' - 'Zhizhong HUANG Leibniz Universität Hannover Welfengarten 1, 30167 Hannover, Germany' author: - Yang Cao - Zhizhong Huang title: | Arithmetic purity, geometric sieve\ and counting integral points on affine quadrics --- [UTF8]{}[gkai]{} 岂曰无衣,与子同裳。\ 同气连枝,共盼春来。 Introduction ============ #### **Background and empiricism** The behavior of integral points on affine varieties defined over a number field is sometimes more subtle than that of rational points. Studying integral points on an open part of a variety naturally involves infinitely many congruence conditions, an obstacle to overcome when trying to move integral points around to establish approximation results in the adelic topology. When the complement of the open set has codimension at least two, no cohomological or topological obstructions to the local-to-global principle for this open set can arise, which serves as positive evidence for the following question first raised by Wittenberg (c.f. [@Wittenberg §2.7 Question 2.11]). \[q:purity\] Let $X$ be a smooth variety over a number field satisfying strong approximation (off a finite set of places). Does any open subset $U\subset X$ also satisfy this property, whenever ${{\mathrm{codim}}}(X\setminus U,X)\geqslant 2$? We shall say that such $X$ verifies the *arithmetic purity (of strong approximation)* (c.f. [@Cao-Huang Definition 1.2]).
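The densities appearing in such counting results are governed by solution counts modulo $p$. For instance, for the quadric $x^2+y^2+z^2=1$ of the abstract, the affine count over $\mathbb{F}_p$ (odd $p$) deviates from the heuristic $p^{\dim X}=p^2$ by at most $p$, which a brute-force check confirms for small primes:

```python
def quadric_count(p, m=1):
    # Number of solutions of x^2 + y^2 + z^2 = m over the field F_p.
    squares = [x * x % p for x in range(p)]
    return sum(1 for x in squares for y in squares for z in squares
               if (x + y + z) % p == m % p)

counts = {p: quadric_count(p) for p in [3, 5, 7, 11, 13]}
# For every odd prime p the deviation |count - p^2| is at most p.
```

This square-root-free cancellation down to an error of size $p$ is what keeps the local factors close to $1$ and the products over all primes convergent.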
Recently the authors in [@Cao-Huang §1.3] settled this question in the affirmative for a wide class of semisimple simply connected linear algebraic groups and consequently for their homogeneous spaces (with connected stabilizers). We refer to the references therein for an account of known results towards Question \[q:purity\]. The purpose of this article is to address an effective, or statistical, aspect of Question \[q:purity\] concerning the arithmetic purity off the real place ${{\mathbb {R}}}$. Let $X$ and $U$ be as before, defined over ${{\mathbb {Q}}}$, with a fixed integral model for each (still denoted by $X,U$ by abuse of notation). Assume that $X$ is quasi-affine. Embed $X$ into an affine space ${{\mathbb {A}}}^n$ equipped with an archimedean height function $\|\cdot\|$. We ask the following: \[q:countingpurity\] Does there exist an asymptotic formula for $$\label{eq:NUT} N_U(T):=\#\{{\underline{\mathbf{X}}}\in U({{\mathbb {Z}}}):\|{\underline{\mathbf{X}}}\|\leqslant T\},\quad T\to\infty?$$ When $U=X$, this question is identical to the usual one of counting integral points on varieties. The situation for symmetric varieties is relatively well-understood and different methods can be applied. See notably the work [@Duke-Rudnick-Sarnak] and the more recent one [@Browning-Gorodnik]. When such an asymptotic formula exists (and assuming that $X$ satisfies strong approximation), one expects that the order of growth of $N_X(T)$ should be $T^{n-\deg X}$ (depending on the embedding $X\hookrightarrow {{\mathbb {A}}}^n$), and the leading constant should be the product of local densities. Varieties of this type are called *(strongly) Hardy-Littlewood* after Borovoi-Rudnick [@Borovoi-Rudnick Definition 2.2] (see also [@Duke-Rudnick-Sarnak p. 143]), and they satisfy $$\label{eq:strongHL} N_X(T)\sim\tau_{\infty}(X;T)\prod_{p<\infty}\hat{\tau}_p(X),$$ where $\hat{\tau}_p(X)$ are the $p$-adic local factors (c.f.
[@Borovoi-Rudnick (0.0.3)]) of $X$: $$\label{eq:HLpadic} \hat{\tau}_p(X):=\lim_{t\to\infty} \frac{\#X({{\mathbb {Z}}}/p^t{{\mathbb {Z}}})}{p^{t\dim X}},$$ and if $X$ is cut off in ${{\mathbb {A}}}^n$ by polynomials $f_1,\cdots,f_r\in{{\mathbb {Q}}}[X_1,\cdots,X_n]$, then $$\label{eq:realHL} \tau_{\infty}(X;T):=\lim_{\varepsilon\to 0}\frac{1}{\varepsilon^r}{{\mathrm{vol}}}_{{{\mathbb {R}}}^n}\{{\mathbf{x}}\in{{\mathbb {R}}}^n:\|{\mathbf{x}}\|\leqslant T,|f_i({\mathbf{x}})|<\frac{\varepsilon}{2},\forall 1\leqslant i\leqslant r\}$$ is the *real Hardy-Littlewood density* (for the embedding $X\hookrightarrow{{\mathbb {A}}}^n$) ([@Borovoi-Rudnick (0.0.4)]) (or *singular integral*). The infinite product $$\label{eq:singularseries} \mathfrak{G}(X)=\prod_{p<\infty}\hat{\tau}_p(X)$$ is called the *singular series* of $X$. It happens that an asymptotic formula exists for $N_X(T)$, even if $X$ fails the integral Hasse principle and strong approximation (this failure is explained by Brauer-Manin obstruction). Such $X$ is called *relatively Hardy-Littlewood* ([@Borovoi-Rudnick Definition 2.3]), for which a certain density function is included in describing $N_X(T)$. A discussion on affine quadrics in three variables, amongst varieties of this type, is given in §\[se:quadricarith\]. Handling the cases $U\subsetneq X, {{\mathrm{codim}}}_X(X\setminus U)\geqslant 2 $ requires special care regarding certain infinite congruence conditions. 
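In the simplest instance $X=\mathbb{A}^2$ with $Z$ the origin (codimension two), the infinite congruence conditions amount to $\gcd(x,y)=1$, and the sieve prediction is the Euler product $\prod_p(1-p^{-2})=6/\pi^2$; a quick numerical sanity check:

```python
from math import gcd, pi, prod

T = 500
# Count lattice points in the first quadrant with coprime coordinates.
coprime = sum(1 for x in range(1, T + 1) for y in range(1, T + 1)
              if gcd(x, y) == 1)
density = coprime / T ** 2

# Truncated Euler product over the primes up to 47.
euler = prod(1 - p ** -2 for p in
             [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47])
```

Both quantities approximate $6/\pi^2 \approx 0.6079$; the truncated product converges quickly because the $p$-th factor differs from $1$ only by $p^{-2}$, mirroring the $c_p/p^2$ bound discussed below.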
For instance, if $Z=X\setminus U$ is defined by two regular functions $f,g\in{{\mathbb {Q}}}[X]$ with integral coefficients, the estimation of $N_U(T)$ now boils down to estimating $$\label{eq:gcd1} \#\{{\underline{\mathbf{X}}}\in X({{\mathbb {Z}}}): \|{\underline{\mathbf{X}}}\|\leqslant T,\gcd(f({\underline{\mathbf{X}}}),g({\underline{\mathbf{X}}}))=1\}.$$ Here an “infinite” congruence condition comes in, since $$\gcd(f({\underline{\mathbf{X}}}),g({\underline{\mathbf{X}}}))=1\Leftrightarrow {\underline{\mathbf{X}}}{\ \mathrm{mod}\ }p\not\in Z,\forall p,$$ although it is actually a condition involving only finitely many primes once we bound the height of ${\underline{\mathbf{X}}}$. A *geometric sieve* was inaugurated by Ekedahl [@Ekedahl] when dealing with $X={{\mathbb {A}}}^n$. Further pursued by Poonen [@Poonen Theorem 3.1] and Bhargava [@Bhargava], this sieve method has demonstrated surprising applications to the density of square-free polynomial values in various circumstances. Their results provide $$N_U(T)\sim N_{{{\mathbb {A}}}^n}(T) \prod_{p<\infty}\left(1-\frac{\#Z({{\mathbb {F}}}_p)}{\#{{\mathbb {A}}}^n({{\mathbb {F}}}_p)}\right) .$$ The Lang-Weil estimate (c.f. [@Lang-Weil], [@Cao-Huang Corollary 3.5]) shows that $$\frac{\#Z({{\mathbb {F}}}_p)}{\#{{\mathbb {A}}}^n({{\mathbb {F}}}_p)}=\frac{c_p}{p^2},$$ where $c_p\geqslant 0$ is uniformly bounded in $p$. Hence the above infinite product is absolutely convergent. This motivates, at least for $X$ being strongly Hardy-Littlewood, that if Question \[q:purity\] has a positive answer for $U$ (which implies that $N_U(T)\to\infty$), and if an asymptotic formula exists, then we expect $$N_U(T)\sim N_X(T)\left(\prod_{p<\infty }\tau_p(Z;X)\right),$$ where for almost all prime $p$ (usually when $X{\ \mathrm{mod}\ }p$ is smooth
--- abstract: 'We study the location and field distribution of zero-energy corner states in a non-Hermitian quadrupole insulator (QI) and discover an unexpected splitting of the parameter space into three distinct regimes: near-Hermitian QI, intermediate phase, and trivial insulator. In the newly discovered intermediate phase, the Hamiltonian becomes defective, and our analysis using Jordan decomposition reveals the existence of a new corner state without a Hermitian counterpart. Resonant excitation of corner states in this region is found to be highly counter-intuitive owing to the disparity of field profiles between left Jordan basis states and the corresponding right states: the most efficient excitation corresponds to placing the source as far as possible from the corner state’s location.' author: - Yang Yu - Minwoo Jung - Gennady Shvets bibliography: - 'nhqi.bib' title: 'Zero-energy Corner States in a Non-Hermitian Quadrupole Insulator' --- *Introduction.*—Higher-order topological insulators (HOTIs) are characterized by exotic topological signatures with dimensionality that is lower by at least two than that of the protecting bulk. One such signature is fractionally quantized corner charges in two-dimensional (2D) crystals with $C_n$ symmetry [@benalcazar2019quantization]. In the presence of an additional chiral (sublattice) symmetry, $e/2$ corner charges become associated with mid-gap (“zero-energy") corner-localized states [@benalcazar2019quantization]. Similar fractionalized vortex states can also exist [*inside*]{} a 2D lattice with appropriate order-parameter twists [@hou2007electron].
While the fractional nature of the topological charge is of particular significance for fermionic systems, the localized nature and robust spectral pinning of such corner/vortex states is of great practical importance for bosonic (e.g., acoustic, photonic, and radio-frequency) lattices [@serra2018observation; @peterson2018quantized; @imhof2018topolectrical; @ni2019observation]. Among many types of HOTIs supporting zero-energy corner states, the quadrupole insulator (QI) is a particularly interesting one because its lowest non-vanishing bulk polarization moment is quadrupolar [@benalcazar2017quantized; @benalcazar2017electric], i.e., its dipole polarization moment strictly vanishes. QI is the first type of HOTI to be theoretically predicted [@benalcazar2017quantized] and experimentally implemented [@serra2018observation; @peterson2018quantized]. Non-Hermitian physics has also attracted considerable interest in recent years because of its relevance to non-equilibrium (e.g., undergoing photo-ionization) systems [@baker_pra84; @lopata_jctc13]. Some of its notable phenomena include “exceptional points" (EPs) [@heiss2004exceptional; @berry2004physics; @moiseyev2011non] and real-valued spectra despite non-Hermiticity. At the EP, both the complex-valued eigenvalues of two bands and their corresponding eigenvectors coalesce [@liang_feng_nphot17; @el2018non]. In other words, the matrix corresponding to the Hamiltonian at the EP becomes [*defective*]{} [@golub2013matrix; @lee2016anomalous]. The completely real spectrum of some non-Hermitian systems can be related to parity-time (PT) symmetry [@bender2007making; @PhysRevLett.104.054102; @PhysRevA.88.062111] or pseudo-Hermiticity [@mostafazadeh2002pseudo], though in general it is hard to assert a real spectrum without directly calculating the eigenvalues.
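A minimal two-level illustration (a standard PT-symmetric toy model, not the QI Hamiltonian itself) shows both phenomena: $H(k)=\begin{pmatrix} i\gamma & k\\ k & -i\gamma\end{pmatrix}$ has eigenvalues $\pm\sqrt{k^2-\gamma^2}$, real for $k>\gamma$ despite non-Hermiticity, while at $k=\gamma$ the eigenvalues and eigenvectors coalesce and the matrix becomes defective:

```python
import numpy as np

def H(k, gamma=1.0):
    # Two-level PT-symmetric Hamiltonian: gain/loss +/- i*gamma, coupling k.
    return np.array([[1j * gamma, k], [k, -1j * gamma]])

# Away from the EP (k > gamma): two distinct, real eigenvalues +/- sqrt(3).
w1, v1 = np.linalg.eig(H(2.0))

# At the EP (k = gamma): eigenvalues coalesce at 0 and the two numerical
# eigenvectors become (nearly) parallel, signaling a defective matrix.
w2, v2 = np.linalg.eig(H(1.0))
```

The near-vanishing determinant of the eigenvector matrix at the EP is the numerical fingerprint of defectiveness that the Jordan-decomposition analysis below has to handle.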
Extending the rich and rapidly growing field of topological physics to non-Hermitian systems has been of great interest [@shen2018topological; @yao2018edge; @liu2019second] because of their relevance to non-equilibrium topological systems [@sobota_prl12; @marsi_pss18; @shen_fu_prl18]. However, some of the earlier results must be reconsidered using the appropriate mathematical formalism and modern computational techniques, and considerable gaps remain in the parameter space studied so far. In this Letter, we concentrate on a non-Hermitian version of a QI model proposed in Ref. [@benalcazar2017quantized]. We pay special attention to the locations and field profiles of the zero-energy corner states, and to exotic behaviors without Hermitian counterparts in some regions of the parameter space when the Hamiltonian becomes defective. We also discuss the excitation of the corner states by external drives. *Tight-Binding Model*—The non-Hermitian QI model studied in this Letter is schematically shown in Fig. \[fig:nhqi\](a), where the intra/inter-cell hopping amplitudes $t\pm\gamma$ and $\lambda$ are all taken to be real. It is a natural non-Hermitian generalization of the QI model described in Ref. [@benalcazar2017quantized], with the intracell hopping strength becoming asymmetric, characterized by a finite $\gamma$, while maintaining the chiral symmetry $\Sigma H\Sigma^{-1}=-H$. Here the chiral operator $\Sigma=P_1-P_2-P_3+P_4$, where $P_j=\sum_{x,y}|x,y,j\rangle\langle x,y,j|$ are the sublattice projection operators, and $|x,y,j\rangle$ are the tight-binding states, where $x$ and $y$ are integer-valued coordinates of the unit cells as defined in Fig. \[fig:nhqi\](a), and $j=1,\dots,4$ denote four sub-lattice sites of each unit cell. This model can also be viewed as a two-dimensional (2D) generalization of the non-Hermitian Su-Schrieffer-Heeger (SSH) model [@lieu2018topological; @yin2018geometrical; @yao2018edge].
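As a one-dimensional warm-up (our own construction, not taken from the Letter): the open non-Hermitian SSH chain with asymmetric intracell hoppings $t\pm\gamma$ is exactly similar, via a diagonal gauge transformation with ratio $\sqrt{|(t-\gamma)/(t+\gamma)|}$, to a Hermitian SSH chain with intracell hopping $\sqrt{t^2-\gamma^2}$ (for $t^2>\gamma^2$). Its open-boundary spectrum is therefore real, with zero modes appearing once $|\lambda|>\sqrt{t^2-\gamma^2}$, which a direct diagonalization confirms:

```python
import numpy as np

def ssh_chain(n_cells, t, gamma, lam):
    """Open non-Hermitian SSH chain: intracell hoppings t+gamma (A -> B)
    and t-gamma (B -> A), symmetric intercell hopping lam."""
    n = 2 * n_cells
    H = np.zeros((n, n))
    for i in range(n_cells):
        a, b = 2 * i, 2 * i + 1
        H[a, b] = t + gamma
        H[b, a] = t - gamma
        if i < n_cells - 1:
            H[b, a + 2] = H[a + 2, b] = lam
    return H

t, gamma = 1.0, 0.5
t_eff = np.sqrt(t**2 - gamma**2)   # ~0.866: effective Hermitian hopping

E_topo = np.linalg.eigvals(ssh_chain(20, t, gamma, lam=2.0))   # zero modes
E_triv = np.linalg.eigvals(ssh_chain(20, t, gamma, lam=0.5))   # gapped
```

The same gauge-transformation logic, applied per direction, underlies the non-Bloch correction for the 2D model discussed later.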
An earlier study [@liu2019second] of this 2D non-Hermitian HOTI model did not identify an important parameter regime (the cyan region in Fig. \[fig:nhqi\](b)) and incorrectly reported the numbers and spatial locations of the corner states in other parameter regimes. Below we rigorously resolve these issues using a mathematical technique of “partial Jordan decomposition", which is critical when the Hamiltonian matrix is close to defective. The significance of the defectiveness of the Hamiltonian was raised in the study of edge states in a non-Hermitian linear chain [@lee2016anomalous]. As we demonstrate below, our deceptively simple model supports rich physics with novel non-Hermitian phenomena. ![\[fig:nhqi\] (a) Tight binding model of a non-Hermitian QI on a square lattice. Grey dashed line: boundary of unit cell with four (sublattice) sites (numbered 1 to 4). Red and blue lines with arrows: asymmetric intra-cell hopping amplitudes $\pm t \pm \gamma$, green lines: symmetric inter-cell hopping amplitudes $\pm \lambda$. Dashed lines: negative hopping terms. All four sublattices have the same on-site potentials (set to $\epsilon_j \equiv 0$). (b) The phase diagram of a large non-Hermitian QI with open boundary condition. Green region ($|\lambda| > |t| + |\gamma|$): near-Hermitian regime with $4$ zero-energy corner states, each localized at a separate corner. Cyan region ($\sqrt{|t^2-\gamma^2|} <|\lambda| < |t|+|\gamma|$): intermediate regime with $2$ zero-energy corner states at the top-left corner. White region ($|\lambda| < \sqrt{|t^2-\gamma^2|}$): no corner states. Bandgap vanishes along solid black lines.
The spectrum is complex-valued between the two dashed orange lines, real-valued elsewhere.](nH-QI.eps){width="\linewidth"} *Non-Bloch bulk continuum*—As was pointed out in the context of the non-Hermitian SSH system [@yao2018edge], the open-boundary spectrum can significantly differ from that of the periodic-boundary system described by the Bloch Hamiltonian $H(\vec{k})$. That is because the usual Bloch phase-shift factor $e^{ik}$ for bulk eigenstates (i.e., eigenstates in the continuum spectrum) of an open-boundary system needs to be modified to $\beta\equiv \beta_0e^{ik}$, where $\beta_0$ can be non-unity (i.e., the wavevector acquires an imaginary part: $k\to k-i\ln \beta_0$). This extra *bulk localization factor* $\beta_0$ must be taken into account when calculating the spectrum of the open-boundary system. The same argument applies to our 2D non-Hermitian QI system, where $\vec{k}\equiv(k_x,k_y)\to(k_x-i\ln \beta_0,k_y-i\ln \beta_0)$, and $\beta_0=\sqrt{|(t-\gamma)/(t+\gamma)|}$[@liu2019second]. With this substitution, the corrected Bloch Hamiltonian agrees with numerical simulations of an open-boundary system (see the Supplemental Material), showing that a finite bulk bandgap exists for all values of the
--- abstract: 'In this paper, we show that SVRG and SARAH can be modified to be fundamentally faster than all of the other standard algorithms that minimize the sum of $n$ smooth functions, such as SAGA, SAG, SDCA, and SDCA without duality. Most finite sum algorithms follow what we call the “span assumption”: Their updates are in the span of a sequence of component gradients chosen in a random IID fashion. In the big data regime, where the condition number $\kappa=\mathcal{O}(n)$, the span assumption prevents algorithms from converging to an approximate solution of accuracy $\epsilon$ in less than $n\ln(1/\epsilon)$ iterations. SVRG and SARAH do not follow the span assumption since they are updated with a hybrid of full-gradient and component-gradient information. We show that because of this, they can be up to $\Omega(1+(\ln(n/\kappa))_+)$ times faster. In particular, to obtain an accuracy $\epsilon = 1/n^\alpha$ for $\kappa=n^\beta$ and $\alpha,\beta\in(0,1)$, modified SVRG requires $\mathcal{O}(n)$ iterations, whereas algorithms that follow the span assumption require $\mathcal{O}(n\ln(n))$ iterations. Moreover, we present lower bound results that show this speedup is optimal, and provide analysis to help explain why this speedup exists. With the understanding that the span assumption is a point of weakness of finite sum algorithms, future work may purposefully exploit this to yield even faster algorithms in the big data regime.' author: - 'Robert Hannah[^1]' - 'Yanli Liu[^2]' - 'Daniel O’Connor[^3]' - 'Wotao Yin[^4]' bibliography: - 'Master\_Bibliography.bib' title: 'Breaking the Span Assumption Yields Fast Finite-Sum Minimization' --- Introduction ============ Finite sum minimization is an important class of optimization problem that appears in many applications in machine learning and other areas. 
We consider the problem of finding an approximation $\hat{x}$ to the minimizer $x^{*}$ of functions $F:\RR^{d}\to\RR$ of the form: $$\begin{aligned} F\p x & =f(x)+\psi(x)=\frac{1}{n}\sum_{i=1}^{n}f_{i}\p x +\psi\p{x}.\label{eq:Average-of-fi}\end{aligned}$$ We assume each function $f_{i}$ is smooth[^5], and possibly nonconvex; $\psi$ is proper, closed, and convex; and the sum $F$ is strongly convex and smooth. It has become well-known that under a variety of assumptions, functions of this form can be minimized much faster with variance reduction (VR) algorithms that specifically exploit the finite-sum structure. When each $f_{i}$ is $\mu$-strongly convex and $L$-smooth, and $\psi=0$, SAGA [@DefazioBachLacoste-Julien2014_saga], SAG [@RouxSchmidtBach2012_stochastic], Finito/Miso [@DefazioDomkeCaetano2014_finito; @Mairal2013_optimization], SVRG [@JohnsonZhang2013_accelerating], SARAH [@NguyenLiuScheinbergTakac2017_sarah], SDCA [@Shalev-ShwartzZhang2013_stochastic], and SDCA without duality [@Shalev-Shwartz2016_sdca] can find a vector $\hat{x}$ with expected suboptimality $\EE\p{f\p{\hat{x}}-f\p{x^{*}}}=\cO\p{{\epsilon}}$ with only $\cO\p{\p{n+L/\mu}\ln\p{1/{\epsilon}}}$ calculations of component gradients $\nabla f_{i}\p x$. This can be up to $n$ times faster than (full) gradient descent, which takes $\cO\p{n L/\mu \ln\p{1/{\epsilon}}}$ gradients. These algorithms exhibit sublinear convergence for non-strongly convex problems[^6]. Various results also exist for nonzero convex $\psi$. Accelerated VR algorithms have also been proposed. Katyusha [@Allen-Zhu2017_katyusha] is a primal-only Nesterov-accelerated VR algorithm that uses only component gradients. It is based on SVRG and has complexity $\cO\p{\p{n+\sqrt{n\kappa}}\ln\p{1/{\epsilon}}}$ for the condition number $\kappa$, which is defined as $L/\mu$. In [@Defazio2016_simple], the author devises an accelerated SAGA algorithm that attains the same complexity using component proximal steps.
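SVRG's defining structure, the reason it escapes the span assumption below, is a periodic full gradient at a snapshot point anchoring cheap component-gradient steps. A toy least-squares sketch, with step size and loop lengths chosen for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 50, 5
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)
x_star = np.linalg.lstsq(A, b, rcond=None)[0]   # reference minimizer

def grad_i(x, i):
    # Gradient of the component f_i(x) = 0.5 * (a_i . x - b_i)^2.
    return A[i] * (A[i] @ x - b[i])

def full_grad(x):
    # Gradient of the average F(x) = (1/n) * sum_i f_i(x).
    return A.T @ (A @ x - b) / n

eta, m = 0.01, 2 * n        # illustrative step size / inner-loop length
x = np.zeros(d)
for _ in range(300):        # outer loops: one full gradient each
    x_snap, g_snap = x.copy(), full_grad(x)
    for _ in range(m):      # inner loops: variance-reduced component steps
        i = rng.integers(n)
        x -= eta * (grad_i(x, i) - grad_i(x_snap, i) + g_snap)
```

Each inner step uses $\nabla f_i(x)-\nabla f_i(\tilde{x})+\nabla f(\tilde{x})$, an unbiased gradient estimate whose variance vanishes as both $x$ and the snapshot $\tilde{x}$ approach $x^*$, which is what permits a constant step size and linear convergence.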
In [@LanZhou2017_optimal], the author devises an accelerated primal-dual VR algorithm. There also exist “catalyst” [@LinMairalHarchaoui2015_universala] accelerated methods [@LinLuXiao2014_accelerateda; @Shalev-ShwartzZhang2016_accelerated]. However, catalyst methods appear to have a logarithmic complexity penalty over Nesterov-accelerated methods. In [@LanZhou2017_optimal], the authors show that a class of algorithms that includes SAGA, SAG, Finito (with replacement), Miso, SDCA without duality, etc. has complexity $K(\epsilon)$ lower bounded by $\Omega\p{\p{n+\sqrt{n\kappa}}\ln\p{1/{\epsilon}}}$ for problem dimension $d\geq 2K(\epsilon)$. More precisely, the lower bound applies to algorithms that satisfy what we will call the **span condition**. That is, $$\begin{aligned} x^{k+1} & \in x^{0}+\text{span}\cp{\nabla f_{i_{0}}\p{x^{0}},\nabla f_{i_{1}}\p{x^{1}},\ldots,\nabla f_{i_{k}}\p{x^{k}}}\label{eq:SpanCondition}\end{aligned}$$ for some fixed IID random variable $i_k$ over the indices $\cp{1,\ldots,n}$. Later, [@WoodworthSrebro2016_tight] and [@ArjevaniShamir2016_dimensionfreea] extend lower bound results to algorithms that do not follow the span assumption: SDCA, SVRG, SARAH, accelerated SAGA, etc.; but with a smaller lower bound of $\Omega\p{n+\sqrt{n\kappa}\ln\p{1/{\epsilon}}}$. The difference between these two expressions was thought to be a proof artifact that would later be fixed. However, we show a surprising result in Section \[sec:OptimalSVRG\]: SVRG and SARAH can be fundamentally faster than methods that satisfy the span assumption, with the full gradient steps playing a critical role in their speedup. More precisely, for $\kappa=\cO\p n$, SVRG and SARAH can be modified to reach an accuracy of ${\epsilon}$ in $\cO((\frac{n}{1+(\ln\p{n/\kappa})_+} )\ln\p{1/{\epsilon}})$ gradient calculations[^7], instead of the $\Theta(n\ln(1/\epsilon))$ iterations required for algorithms that follow the span condition.
We also improve the lower bound of [@ArjevaniShamir2016_dimensionfreea] to $\Omega(n+(\frac{n}{1+\p{\ln\p{n/\kappa}}_+}+\sqrt{n\kappa})\ln\p{1/{\epsilon}})$ in Section \[sec:Optimality\]. That is, the complexity $K(\epsilon)$ of a very general class of algorithms that includes all of the above satisfies the lower bound: $$\begin{aligned} K\p{\epsilon} & =\begin{cases} \Omega(n+\sqrt{n\kappa}\ln\p{1/{\epsilon}}), &\text{ for } n=\cO\p{\kappa},\\ \Omega(n+\frac{n}{1+\p{\ln\p{n/\kappa}}_+}\ln\p{1/{\epsilon}}), &\text{ for } \kappa=\cO\p{n}. \end{cases}\end{aligned}$$ Hence when $\kappa=\cO\p n$ our modified SVRG has optimal complexity, and when $n=\cO\p{\kappa}$, Katyusha is optimal. SDCA does not quite follow the span assumption; moreover, the dimension $n$ of the dual space on which the algorithm runs is inherently small in comparison to the number of iterations $k$. We complete the picture using different arguments, by showing that its complexity is greater than $\Omega(n\ln(1/\epsilon))$ in Section \[sec:SDCA\], and hence SDCA does not attain this logarithmic speedup. We leave the analysis of accelerated SAGA and accelerated SDCA to future work. Our results identify a significant obstacle to high performance when $n\gg\kappa$. The speedup that SVRG and SARAH can be modified to attain in this scenario is somewhat accidental, since their original purpose was to minimize memory overhead. However, with the knowledge that this assumption is a point of weakness for VR algorithms, future work may more purposefully exploit this to yield better speedups than SVRG and SARAH can currently attain. Though the complexity of SVRG and SARAH can be made
--- bibliography: - 'bibliografia.bib' --- ![image](logofac.eps) UNIVERSIDAD DE BUENOS AIRES Facultad de Ciencias Exactas y Naturales Departamento de Matemática **Metastability for a PDE with blow-up and the FFG dynamics in diluted models** Thesis submitted for the degree of Doctor of the Universidad de Buenos Aires in the area of Mathematical Sciences **Santiago Saglietti** Thesis advisor: Pablo Groisman Academic advisor: Pablo Groisman Buenos Aires, 2014 Defense date: June 27, 2014 {#section .unnumbered} [**Metastability for a PDE with blow-up and the FFG dynamics in diluted models**]{} **Summary** This thesis consists of two parts; in each we study the stability under small perturbations of certain probability models in different contexts. In the first part, we study small *random* perturbations of a deterministic dynamical system and show that they are unstable, in the sense that the perturbed systems exhibit a qualitative behavior different from that of the original system. More precisely, given $p > 1$ we study solutions of the stochastic partial differential equation $${\partial}_t U = {\partial}^2_{xx} U + U|U|^{p-1} + \varepsilon \dot{W}$$ with homogeneous Dirichlet boundary conditions and show that for small $\varepsilon > 0$ these exhibit a particular form of instability known as *metastability*. In the second part we place ourselves in the context of statistical mechanics, where we study the stability of infinite-volume equilibrium measures under certain *deterministic* perturbations of the parameters of the model. More precisely, we show that the Gibbs measures of a certain general class of systems are continuous with respect to changes in the interaction and/or in the particle density and are therefore stable under small perturbations of these. 
We also study under which conditions certain typical configurations of these systems remain stable in the zero-temperature limit $T \to 0$. The main tool we use for our study is the realization of these equilibrium measures as invariant distributions of the dynamics introduced in [@FFG1]. We refer to the beginning of each part for a deeper introduction to each of the subjects. [*Key words:*]{} stochastic partial differential equations, metastability, blow-up, Gibbs measures, stochastic processes, loss networks, Pirogov-Sinai. {#section-1 .unnumbered} [**Metastability for a PDE with blow-up and the FFG dynamics in diluted models**]{} **Abstract** This thesis consists of two separate parts: in each we study the stability under small perturbations of certain probability models in different contexts. In the first, we study small *random* perturbations of a deterministic dynamical system and show that these are unstable, in the sense that the perturbed systems have a different qualitative behavior than that of the original system. More precisely, given $p > 1$ we study solutions to the stochastic partial differential equation $${\partial}_t U = {\partial}^2_{xx} U + U|U|^{p-1} + \varepsilon \dot{W}$$ with homogeneous Dirichlet boundary conditions and show that for small $\varepsilon > 0$ these present a rather particular form of instability known as *metastability*. In the second part we situate ourselves in the context of statistical mechanics, where we study the stability of equilibrium infinite-volume measures under small *deterministic* perturbations in the parameters of the model. More precisely, we show that Gibbs measures for a general class of systems are continuous with respect to changes in the interaction and/or density of particles and, hence, stable under small perturbations of them. 
We also study under which conditions certain typical configurations of these systems remain stable in the zero-temperature limit $T \to 0$. The main tool we use for our study is the realization of these equilibrium measures as invariant distributions of the dynamics introduced in [@FFG1]. We refer to the beginning of each part for a deeper introduction to each of the subjects. [*Key words*]{}: stochastic partial differential equations, metastability, blow-up, Gibbs measures, stochastic processes, loss networks, Pirogov-Sinai. Acknowledgments {#agradecimientos .unnumbered} =============== A great number of people have contributed, in one way or another, to this work. I would like to thank: 1. My advisor, Pablo Groisman, for everything. For his constant support during the preparation of this Thesis. For his infinite patience. For always being there to address my questions, and for receiving me each and every time with a smile and the best disposition. For sharing with me his way of conceiving and doing mathematics, which has had a great impact on my formation as a mathematician and is, to me, of incalculable value. For his friendship. For everything. 2. Pablo Ferrari, for always being willing to give me a hand and to discuss mathematics with me. I consider it a true privilege to have had the opportunity to think about problems together and to come into contact with his way of seeing mathematics. I have learned a great many things from him over these last four years, and for that I will always be immensely grateful to him. 3. The members of the jury of this Thesis: Pablo De Napoli, Mariela Sued and Aernout Van Enter. For reading it and giving me their suggestions, with all the effort and time that this requires. 4. Nicolas Saintier, for his enthusiasm for my work and his collaboration on this Thesis. 5. Roberto Fernández and Siamak Taati, for the productive stay in Utrecht. 6. Inés, Matt and Leo. For always believing in me and teaching me something new every day. 7. 
Maru, for the many afternoons of classes, study, chats, gossip and chocolate. 8. My academic siblings: Anita, Nico, Nahuel, Sergio L., Sergio Y. and Julián. For all the moments we shared, both of study and of friendship. 9. Marto S., Pablo V., Caro N. and the guys from office 2105 (both present and past). 10. Adlivun, for the good times and the good music. 11. The (authentic) banda del Gol, for being the unconditional friends they are. 12. Ale, for having always been there, in the good times and (above all) in the bad. 13. My family, for being my eternal support. $$\text{Thank you!}$$ Introduction to Part I {#introducción-a-la-parte-i .unnumbered} ========================= Differential equations have proved to be of great use in modeling a wide range of physical, chemical and biological phenomena. For example, a vast class of evolution equations, known as parabolic partial differential equations, arises naturally in the study of phenomena as diverse as the diffusion of a fluid through a porous material, transport in a semiconductor, chemical reactions coupled with spatial diffusion, and population genetics. In all these cases, the equation represents an approximate model of the phenomenon and
--- abstract: 'We comment on the paper of S. Postnikov et al. in Phys. Rev. D 82, 024016 (2010) and give a modified formula that needs to be taken into account when calculating the tidal Love number of neutron stars in case a first-order phase transition occurs at non-zero pressure. We show that the error made when using the original formula tends to zero as $p \rightarrow 0$ and we estimate the maximum relative error to be $\sim 5\%$ if the density discontinuity is at larger densities.' author: - 'János Takátsy$^{1,2}$' - 'Péter Kovács$^{1,2}$' title: 'Comment on “Tidal Love numbers of neutron and self-bound quark stars”' --- In Ref. [@postnikov2010] the authors investigated the qualitative differences between the tidal Love numbers of self-bound quark stars and neutron stars. In Eq. (14) they derived an expression for the extra term that should be subtracted from the logarithmic derivative $y(r)$ of the metric perturbation $H(r)$ in case there is a first-order phase transition in the equation of state (EoS). The authors applied this formula to quark stars, where there is a core-crust phase transition at or below neutron-drip pressure. Since then, multiple papers have included or applied this formula explicitly, using EoSs with first-order phase transitions at non-negligible pressures ([*e.g.*]{} [@zhao2018; @han2019]). However, when the pressure $p_d$ corresponding to the density discontinuity is non-negligible compared to the central energy density of the neutron star, Eq. (14) of Ref. [@postnikov2010] should be modified as shown below. In this comment we derive the correct formula and estimate the error made when using the original formula instead. It needs to be added that, although Ref. [@han2019] contains the uncorrected formula, the results presented in that paper were calculated using the correct relation, as reported by its authors and also verified by the authors of Ref. [@postnikov2010]. 
This also applies to more recent publications involving the same authors [@han2019b; @chatziioannou2020]. Moreover, despite using the erroneous formula, the results of Ref. [@zhao2018] are also mainly unaffected by this error, since they only provide approximate analytic fits for the ratios of tidal deformabilities of the two components in binary neutron stars. Thus, uncertainties of a few percent are inherently contained in these fits, which encompass the errors of the individual tidal deformabilities. The corrected fits – as claimed by the authors of Ref. [@postnikov2010] – are negligibly different from the fits reported in Ref. [@zhao2018]. The $l=2$ tidal Love number can be expressed in the following way: $$\begin{aligned} k_2 &= \frac{8}{5} (1-2 \beta)^2 \beta^5 [2 \beta (y_R-1)-y_R+2]\nonumber\\ &\times \{2 \beta [4 (y_R+1) \beta^4+(6 y_R-4) \beta^3+(26-22 y_R) \beta^2\nonumber\\ &+3 (5 y_R-8) \beta-3 y_R+6]+3 (1-2 \beta)^2\nonumber\\ &\times[2 \beta (y_R-1)-y_R+2]\ln \left(1-2\beta\right)\}^{-1} , \label{eq:k2}\end{aligned}$$ where $\beta=M/R$ is the compactness parameter of the neutron star and $y_R=y(R)=[rH'(r)/H(r)]_{r=R}$ with $H(r)$ being a function related to the quadrupole metric perturbation (see [*e.g.*]{} [@damour2009]). $y_R$ is obtained by solving the following first-order differential equation: $$\begin{aligned} ry'(r)&+y(r)^2+r^2 Q(r) \nonumber\\ &+ y(r)e^{\lambda(r)}\left[1+4\pi r^2(p(r)-\varepsilon(r))\right] = 0 , \label{eq:y}\end{aligned}$$ where $\varepsilon$ and $p$ are the energy density and pressure, respectively, and $$\begin{aligned} Q(r)=4\pi e^{\lambda(r)}\left(5\varepsilon(r)+9p(r)+\frac{\varepsilon(r)+p(r)}{c_s^2(r)}\right) \nonumber\\ -6\frac{e^{\lambda(r)}}{r^2}-(\nu'(r))^2 . 
\label{eq:Q}\end{aligned}$$ Here $c_s^2=\mathrm{d}p/\mathrm{d}\varepsilon$ is the sound speed squared, while the metric functions $e^{\lambda(r)}$ and $\nu(r)$ are given by $$\begin{aligned} e^{\lambda(r)} &= \left[1-\frac{2m(r)}{r}\right]^{-1} \label{eq:tov_e} , \\ \nu'(r) &= \dfrac{2[m(r)+4\pi r^3 p(r)]}{r^2 - 2 m(r) r} \label{eq:tov_nu} ,\end{aligned}$$ with the line element for the unperturbed star defined as $$\mathrm{d}s^2 = e^{\nu(r)}\mathrm{d}t^2 - e^{\lambda(r)}\mathrm{d}r^2 - r^2(\mathrm{d}\vartheta^2 + \sin^2\vartheta \, \mathrm{d}\varphi^2),$$ and where $m(r)$ and $p(r)$ are calculated through the Tolman-Oppenheimer-Volkoff equations [@tolman1939; @oppenheimer1939]: $$\begin{aligned} m'(r) &= 4\pi r^2 \varepsilon(r) , \label{eq:tov_m} \\ p'(r) &= - [\varepsilon(r)+p(r)]\dfrac{m(r)+4\pi r^3 p(r)}{r^2 - 2 m(r) r} .\label{eq:tov_p}\end{aligned}$$ In case there is a first-order phase transition in the EoS, there is a jump of $\Delta\varepsilon$ in the energy density at constant pressure, hence $c_s^2=0$ in that region and the term in Eq. (\[eq:Q\]) containing $1/c_s^2$ diverges. Expressing $1/c_s^2$ in the vicinity of the density discontinuity: $$\frac{1}{c_s^2} = \frac{\mathrm{d}\varepsilon}{\mathrm{d}p}\bigg|_{p\neq p_d} + \delta(p-p_d) \Delta \varepsilon . \label{eq:cs2}$$ Rewriting the delta function in terms of the radial position $r$, inserting Eq. (\[eq:cs2\]) into Eq. (\[eq:y\]) and integrating over an infinitesimal distance around $r_d$ one obtains: $$y(r_d^+) - y(r_d^-) = -4\pi r_d e^{\lambda(r_d)} [\varepsilon(r_d)+p(r_d)] \frac{\Delta \varepsilon}{|p'(r_d)|} .$$ Using Eq. (\[eq:tov\_p\]) we get: $$\begin{aligned} y(r_d^+) - y(r_d^-) &= -\frac{4\pi r_d^3 \Delta \varepsilon}{m(r_d)+4\pi r_d^3 p(r_d)}\nonumber\\ &= -\frac{\Delta \varepsilon}{\tilde{\varepsilon}/3+p(r_d)} , \label{eq:ydisc}\end{aligned}$$ where $\tilde{\varepsilon}=m(r_d)/(4\pi r_d^3/3)$ is the average energy density of the inner ($r<r_d$) region. Eq. 
(\[eq:ydisc\]) shows that there is an extra $p(r_d)$ term in the denominator as compared to Eq. (14) of Ref. [@postnikov2010]. We see that if the phase transition is at very low densities compared to the central energy density then $p(r_d)/\tilde{\varepsilon}\rightarrow0$ [^1] and we get back the formula in Ref. [@postnikov2010]. ![\[fig:css\]Illustration of the EoS in the constant-sound-speed construction [@alford2013; @han2019]. At $p_\mathrm{trans}$ a quark matter part with a constant sound speed of $c_\mathrm{QM}$ is attached to the nuclear matter EoS after an energy density jump of $\Delta\varepsilon$.](CSS_EoS){width="48.00000%"} We investigated the difference caused by applying the two different formulas using a constant-sound-speed construction (see Fig. \[fig:css
--- abstract: | The experimental results relevant for the understanding of the microscopic dynamics in liquid metals are reviewed, with special regard to those achieved in the last two decades. Inelastic Neutron Scattering has played a major role since the development of neutron facilities in the sixties. The last ten years, however, saw the development of third-generation radiation sources, which opened the possibility of performing Inelastic Scattering with X rays, thus disclosing previously inaccessible energy-momentum regions. The purely coherent response of X rays, moreover, combined with the mixed coherent/incoherent response typical of neutron scattering, provides enormous potentialities to disentangle aspects related to the collectivity of motion from the single-particle dynamics. While the last twenty years saw major experimental developments, on the theoretical side fresh ideas came up beside the most traditional and established theories. Beyond the raw experimental results, therefore, we review models and theoretical approaches for the description of microscopic dynamics over different length scales, from the hydrodynamic region down to the single-particle regime, walking the perilous and sometimes uncharted path of the generalized hydrodynamics extension. Approaches peculiar to conductive systems, based on the ionic plasma theory, are also considered, as well as kinetic and mode coupling theory applied to hard sphere systems, which turn out to mimic with remarkable detail the atomic dynamics of liquid metals. Finally, cutting-edge issues and open problems, such as the ultimate origin of the anomalous acoustic dispersion or the relevance of the transport properties of a conductive system in ruling the ionic dynamic structure factor, are discussed. author: - 'Tullio Scopigno$^{1}$' - 'Giancarlo Ruocco$^{1}$' - 'Francesco Sette$^{2}$' title: ' Microscopic dynamics in liquid metals: the experimental point of view. 
' --- Introduction ============ Liquid metals are an outstanding example of systems combining great relevance in both industrial applications and basic science. On the one hand, they find broad technological application, ranging from the production of industrial coatings (walls of refinery cokers, drill pipes for oil exploration) to medical equipment (reconstructive devices, surgical blades) and high-performance sporting goods. Most metallic materials, indeed, need to be refined in the molten state before being manufactured. On the other hand, liquid metals, in particular the monoatomic ones, have long been recognized as the prototype of simple liquids, in the sense that they encompass most of the physical properties of real fluids without the complications which may be present in a particular system [@BALUCANI]. In addition, metallic fluids such as molten sodium, having density and viscosity similar to those of water, find application as coolants in nuclear reactors. The thermodynamic description of liquid metals can be simplified by assuming a few parameters. Usually, if compound formation is weak, physical theory alone can be used while, if there is strong compound formation, chemical theory alone is used. The lowest-melting liquid metals are those that contain heavier elements, and this may be due to the greater ease of creating a free-electron solution. Alkali metals are characterized by low melting points, and they tend to follow trends. Binary associating liquids show a sharp melting point, with the most noticeable example being mercury ($T_m=234$ K). Melting points can be lowered by introducing impurities into the metal. Often, to this purpose, another metal with a low melting point is used. Mixing different metals may often result in a solution that is eutectic. In other words, from Henry’s law it is understood that a melting point depression occurs, and the system becomes more disordered as a result of the perturbation to the lattice. 
This is the case, for instance, of the well-known eutectic lead-tin alloy, widely used in soldering applications ($T_m=453$ K). Until the sixties, the understanding of the physical properties of metals proceeded rather slowly. It was John Ziman, indeed, who made the theory of liquid metals respectable for the first time [@ZIMAN], and the Faber-Ziman theory, developed in 1961-63 and dealing with electronic and transport properties, is attractively introduced in Faber’s book, which is an excellent treatise on the physical properties of liquid metals [@FABER]. The other text which can be considered a classic is March’s book [@MARCHLM], along with the more recent [@MARCHLMCT], which provides a comprehensive overview of liquid metals. It is from these texts that a first clear definition of liquid metal can be outlined. At first glance, indeed, the words “liquid metal” are self-explanatory: by definition any metal heated to its melting point falls into this category. Liquid metals, however, are implicitly understood to be less general than the above definition, and no literature clearly states an exact one. Although no precise agreement has been reached, there are certain characteristics shared by liquid metals, stemming from a close interplay between ionic structure, electronic states and transport properties. The book of Shimoji [@SHIMOJI] deals with the fundamentals of liquid metals in an elementary way, covering the developments achieved after the first book by March. It does not address, however, the dynamical properties in great detail. Addison’s book [@ADDISON] is much like March’s general book, but is more focused on applications of alkali metals, especially on their use in organic chemistry. In addition, Addison discusses many methods for purifying and working with liquid alkali metals. March is more theoretical whereas Addison is practical, but both authors focus on a thermodynamic explanation of liquid metals. 
For an appealing general introduction to the physics and chemistry of the liquid-vapor phase transition (beyond the scope of this review) the reader should certainly refer to [@HENSEL], which also provides a bird’s-eye view of the practical applications of fluid metals, such as high-temperature working fluids or key ingredients for semiconductor manufacturing. There are, then, a number of books which are more general and more specific at the same time, in the sense that they deal with the wider class of simple liquids (including noble fluids, hard sphere fluids, etc.), but are mainly concerned with structural and dynamical properties only [@BALUCANI; @HANSEN; @BY; @MARCH; @EGELSTAFF]. They are practically ineludible for those aiming at a rigorous approach to the statistical mechanics description of the liquid state. It can be difficult to find an exhaustive, updated database of the physical properties of liquid metals, especially as far as dynamics is concerned, but the handbooks [@ida] and [@OSE] are remarkable exceptions, with the second one specifically addressing liquid alkali metals. Historical background --------------------- Early phenomenological approaches to the study of relaxation dynamics in fluids can be dated back to the end of the nineteenth century [@max_visco; @kel_visco]. Only in the mid twentieth century, however, was it realized that a deeper understanding of the physical properties of liquids could be reached only through a microscopic description of the atomic dynamics. This became possible through the achievements of statistical mechanics, which provided the necessary tools, such as correlation functions, integral equations, etc. 
The mathematical difficulties related to the treatment of real liquids brought to general attention the importance of simple liquids, as systems endowed with the rich basic phenomenology of liquids but without the complications arising, for instance, from orientational and vibrational degrees of freedom. As a consequence, the end of the fifties saw major experimental efforts related to the development of Inelastic Neutron Scattering (INS) facilities; INS, as we shall see, constitutes a privileged probe to access the microscopic dynamics in condensed matter and, in particular, in the liquid state [@egel_pio]. A sizable library of experimental data on liquid metals has been built up since then, establishing the prototypical structural and dynamical properties of these systems, representative of the whole class of liquids. In the sixties, the advent and broad diffusion of computational facilities opened a new era, for two main reasons: on the one side, realistic computer simulation experiments became possible [@sch_sim]; on the other side, the new computation capabilities greatly facilitated the interpretation of INS experiments. For instance, new protocols for accurate estimates of the multiple scattering contribution affecting neutron scattering were proposed [@cop_multiplo]. The theoretical framework of Inelastic Neutron Scattering, and the guidelines to interpret the results, have been reviewed in the classical textbooks [@LOVESEY; @MARSHALL]. The dynamics of liquid metals has been extensively investigated by INS and computer simulations with the main purpose of ascertaining the role of the mechanisms underlying both collective and single-particle motions at the microscopic level. 
In the special case of collective density fluctuations, after the seminal inelastic neutron scattering study by Copley and Rowe [@cop_rb] and the famous molecular dynamics simulation of Rahman [@rahman_sim] in liquid rubidium, the interest in performing more and more accurate experiments has been continuously renewed: it was soon realized, indeed, that well-defined oscillatory modes can be supported even outside the strict hydrodynamic region. In molten alkali metals, moreover, this feature is found to persist down to wavelengths of one or two interparticle distances, making these systems excellent candidates to test the various theoretical approaches developed so far for the microdynamics of the liquid state. Up to ten years ago the only experimental probe appropriate to access the atomic dynamics over the interparticle distance region was thermal neutrons, and using this probe fundamental results have been gained. There are, however, certain limitations of this technique which can restrict its applicability. First, the presence of an incoherent contribution to the total neutron scattering cross section: if, on the one hand, this allows one to gather a richer
--- abstract: 'A commercial single laser line Raman spectrometer is modified to accommodate multiline and tunable dye lasers, thus combining the high sensitivity of such single monochromator systems with broadband operation. Such instruments rely on high-throughput interference filters that perform both beam alignment and Rayleigh filtering. Our setup separates this dual task of the built-in filter into two independent elements: a beam splitter and a long pass filter. Filter rotation shifts the transmission passband, effectively expanding the range of operation. Rotation of the filters has a negligible effect on the optical path, allowing broadband operation and stray light rejection down to 70-150 cm$^{-1}$. Operation is demonstrated on single-walled carbon nanotubes, for which the setup was optimized.' author: - Gábor Fábián - Christian Kramberger - Alexander Friedrich - Ferenc Simon - Thomas Pichler title: Adaptation of a commercial Raman spectrometer for multiline and broadband laser operation --- Introduction ============ Raman spectroscopy is a widespread and important tool in various fields of science, from biology to physics. Commercial Raman spectrometers are usually equipped with a built-in laser and a setup optimized for this single laser line, resulting in stable operation but inherently narrow-band characteristics. The electronic, optical, and vibrational characterization of certain materials, such as single-wall carbon nanotubes (SWCNTs) [@DresselhausCNTRamanReview], requires measurements with a large number of laser lines [@KuzmanyEPJB] or with a tunable laser system [@FantiniPRL2004; @TelgPRL2004]. Raman spectroscopy relies on the efficient suppression of “stray light” photons with wavelengths close to that of the exciting laser (e.g. from Rayleigh scattering), which dominate over the Raman signal by several orders of magnitude. 
Operation down to Raman shifts of 100 cm$^{-1}$ is made possible in modern spectrometers with the use of interference Rayleigh filters (often referred to as notch filters). The transmission of these filters typically exceeds $80\,\%$ in the passband, which is significantly higher than for a classical subtractive double monochromator system. Although interference filters are manufactured for the most common laser lines only, rotation extends their range of operation. Thus the narrow-band constraint can be circumvented to allow broadband operation. However, in most spectrometers the interference filter has a dual role: it reflects the laser light to the sample and it functions as a Rayleigh filter. Filter rotation then changes the optical path of the excitation, which can be corrected for only with tedious and time consuming readjustment, effectively nullifying the advantage of the higher sensitivity. In particular for the radial breathing mode of SWCNTs, the presence of low energy ($\geq 100-150\,\text{cm}^{-1}$) [@RaoCNTRamanScience] Raman modes and the narrow (FWHM $\sim 30 \,\text{meV}$) optical transition energies [@FantiniPRL2004] pose several challenges to the instrumentation. A proper energy dependent Raman measurement requires a broadband spectrometer with efficient stray light rejection. Herein, we describe the modification of a commercial Raman spectrometer with interference Rayleigh filters, which enables broadband operation with relative ease. The improvement is based on replacing the built-in interference filter with a beam splitter and a separate interference filter. Thus the two functions of the filter are performed independently, with no observable influence on the direction of the transmitted light. The different behavior of the filter passband for the $S$ and $P$ [^1] polarizations under rotation is overcome by the application of polarization filters on the spectrometer input. 
The setup operates with polarizations which are optimized when the so-called antenna effect of SWCNTs is taken into account, i.e. that the Raman light is polarized predominantly along the polarization of the excitation [@SunAntennaEffect; @JorioPhysRevLett85]. Spectrometer setup ================== A high sensitivity, confocal single monochromator Raman system with an interference Rayleigh filter—such as described in the previous section—can be modified to enable broadband measurements with multiple laser lines or even with a tunable laser, which we demonstrate for a LabRAM commercial spectrometer (Horiba Jobin-Yvon Inc.) as an example. The key step in achieving broadband operation was replacing the built-in interference filter, which acts as a beam splitter and a Rayleigh filter at the same time, with a combination of a simple beam splitter and a separate interference filter. We note that this modification also enables cost-effective operation with the usual laser wavelengths (such as the lines of an Ar/Kr laser), since no complicated filter realignments are required. We have to emphasize that the use of standard optical elements, which are non-specific to the spectrometer, allows economic implementation for most spectrometer designs. ![Schematic diagram of the broadband configuration of the LabRAM spectrometer. V and H denote vertical and horizontal polarizations, respectively. The tunable source is a dye laser pumped with a 532 nm solid state laser. The laser light is aligned with the spectrometer using the periscope element, which also rotates the polarization to horizontal, if needed. The laser outputs are cleaned with a filter. The sample emits nominally horizontally polarized light and the unwanted vertical polarization is filtered with the polarizer.[]{data-label="schem"}](Fig1_setup.eps){width="0.98\columnwidth"} The setup for the modified LabRAM spectrometer is shown in Fig. \[schem\]. 
A multiline Ar/Kr laser (Coherent Inc., Innova C70C-Spectrum) and a dye laser (Coherent Inc., CR-590) pumped by a 532 nm 5 W solid state laser (Coherent Inc., Verdi G5) serve as excitation light sources. The former operates at multiple, well defined wavelengths while the latter allows fully tunable operation. In our case, the dye laser is operated in the 545-580 nm, 580-610 nm, and 610-660 nm wavelength ranges with three dyes: Rhodamin 110, Rhodamin 6G, and DCM Special, respectively. The periscope allows beam alignment and sets the polarization of the excitation light to horizontal. In the case of the dye laser, the spurious fluorescent background of the laser output is filtered with short pass (“3rd Millennium filters” for 580 and 610 nm from Omega Optical Inc.) and band pass (“RazorEdge” for 568, 633, and 647 nm from Semrock Inc.) filters. For the clean-up of the multiline laser excitation, band pass filters are used at the appropriate wavelengths (“RazorEdge” for 458, 488, 515, 532, 568, 633, and 647 nm from Semrock Inc.). The light is directed toward the sample with a broadband beam splitter plate (Edmund Optics Inc., NT47-240) with 30 % reflection and 70 % transmission. For both excitation sources, a single long pass interference edge filter (“RazorEdge” for 458, 488, 515, 532, 568, 633, and 647 nm from Semrock Inc.) performs the stray light rejection. The use of a short pass filter for laser clean-up and long pass filters for Rayleigh photon suppression limits operation to the Stokes Raman range. The long pass filter has a double function in the original spectrometer: it mirrors the laser excitation to the sample and acts as a Rayleigh filter, quenching the stray light. In our construction, these two tasks are performed independently by the beam splitter and the long pass filter, respectively. Since the beam splitter plate reflects 30 % and transmits 70 %, only a small fraction of the Raman light is lost. 
The 70 % excitation power loss on the beam splitter can be compensated by reducing the attenuation of the intense laser beam, maintaining a constant irradiation density on the sample. The application of an anti-reflective coating to the back side of the plate prohibited the emergence of higher order reflections and standing waves within the plate (whose effect is known as ghosts). The beam splitter plate is mounted on a finely adjustable 2-axis holder (Thorlabs Inc., VM1) with a home made mounting. The fine adjustment is required to align the light properly with the spectrometer. Final fine adjustment is performed with the holder to maximize the Raman signal. ![Transmittance of the 633 nm long pass filter using unpolarized white light; a.) at normal incidence and b.) rotated by $30^{\circ}$. When polarization filters are used, the two parts of the double step feature (solid black line) are separated according to the $S$ and $P$ polarizations (dashed black and solid gray lines, respectively). Note the broadening of the filter transition width upon rotation.[]{data-label="LP"}](Fig2_LongPass.eps){width="0.98\columnwidth"} Increasing the incidence angle of the light shifts the range of operation of the interference filters without misaligning the light. Thus filter rotation enables broadband operation. In Fig. \[LP\] we show the behavior of a 633 nm long pass filter at different incidence angles. The transmission edge blue shifts upon rotation with respect to normal incidence. However, the shift is smaller for the $S$ than for the $P$ polarization, i.e. the shift is larger for horizontally polarized light when the filter is rotated around a vertical axis. Vertical rotation of the long pass filter is thus more practical, meaning that the setup prefers horizontally polarized scattered (Raman) light, as it is of the $P$ polarization, for which the edge shift is larger. 
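The magnitude of the angle-tuned blue shift can be estimated with the standard single-effective-index model for interference filters, $\lambda(\theta) = \lambda_0\sqrt{1 - (\sin\theta/n_\mathrm{eff})^2}$; the effective-index values below are illustrative assumptions (the actual value depends on the coating and on the $S$/$P$ polarization), not measured properties of the filters used here:

```python
import math

def edge_wavelength(lambda0, theta_deg, n_eff):
    """Transmission-edge wavelength of an interference filter tilted by
    theta_deg away from normal incidence (single effective-index model)."""
    s = math.sin(math.radians(theta_deg)) / n_eff
    return lambda0 * math.sqrt(1.0 - s * s)

lambda0 = 633.0  # nm, long pass edge wavelength at normal incidence
for n_eff in (1.5, 2.0):
    shifted = edge_wavelength(lambda0, 30.0, n_eff)
    print(f"n_eff={n_eff}: edge at {shifted:.1f} nm, "
          f"{100.0 * (1.0 - shifted / lambda0):.1f}% blue shift")
```

A lower effective index (typical of the $P$ polarization) gives a larger shift at the same tilt, consistent with the polarization dependence described above.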
For short and long pass filters with 1 inch apertures, rotation angles up to $30^{\circ}$ were used, yielding a blue shift of about 10 %; the 0.5 inch aperture of the band pass “RazorEdge” filters limited the blue shift to
--- abstract: | Most online platforms strive to learn from interactions with consumers, and many engage in *exploration*: making potentially suboptimal choices for the sake of acquiring new information. We initiate a study of the interplay between *exploration* and *competition*: how such platforms balance the exploration for learning and the competition for consumers. Here consumers play three distinct roles: they are customers that generate revenue, they are sources of data for learning, and they are self-interested agents which choose among the competing platforms. We consider a stylized duopoly model in which two firms face the same multi-armed bandit instance. Users arrive one by one and choose between the two firms, so that each firm makes progress on its bandit instance only if it is chosen. We study whether and to what extent competition incentivizes the adoption of better bandit algorithms, and whether it leads to welfare increases for consumers. We find that stark competition induces firms to commit to a “greedy” bandit algorithm that leads to low consumer welfare. However, we find that weakening competition by providing firms with some “free” consumers incentivizes better exploration strategies and increases consumer welfare. We investigate two channels for weakening the competition: relaxing the rationality of consumers and giving one firm a first-mover advantage. We provide a mix of theoretical results and numerical simulations. Our findings are closely related to the “competition vs. innovation” relationship, a well-studied theme in economics. They also elucidate the first-mover advantage in the digital economy by exploring the role that data can play as a barrier to entry in online markets.
author: - 'Guy Aridor[^1]' - 'Yishay Mansour[^2]' - 'Aleksandrs Slivkins[^3]' - 'Zhiwei Steven Wu[^4]' bibliography: - 'bib-abbrv.bib' - 'bib-ML.bib' - 'refs.bib' - 'bib-bandits.bib' - 'bib-AGT.bib' - 'bib-slivkins.bib' - 'bib-random.bib' date: July 2020 title: | Competing Bandits:\ The Perils of Exploration under Competition[^5] --- Introduction {#sec:intro} ============ Related work {#sec:related-work} ============ Our model in detail {#sec:model} =================== Theoretical results: the [Bayesian-choice model]{} {#sec:theory} ================================================== Numerical simulations: the [reputation-choice model]{} {#sec:sim} ====================================================== Background for non-specialists: multi-armed bandits {#app:bg} =================================================== Monotone MAB algorithms {#app:examples} ======================= Non-degeneracy via a random perturbation {#app:perturb} ======================================== Full proofs for Section \[sec:theory\] {#sec:theory-proofs} ====================================== Full experimental results {#app:expts} ========================= [^1]: Columbia University, Department of Economics. Email: g.aridor@columbia.edu [^2]: Google and Tel Aviv University, Department of Computer Science. Email: mansour.yishay@gmail.com [^3]: Microsoft Research New York City. Email: slivkins@microsoft.com [^4]: University of Minnesota - Twin Cities, Department of Computer Science. Email: zsw@umn.edu. Part of the research was done when Z.S. Wu was an intern and a postdoc at Microsoft Research NYC. [^5]: This is a merged and final version of two conference papers, @CompetingBandits-itcs18 and @CompetingBandits-ec19, with a unified and streamlined presentation and expanded background materials. All theoretical results are from @CompetingBandits-itcs18, and all experiments are from @CompetingBandits-ec19.
Appendices \[app:bg\],\[app:examples\] are completely new compared to the conference versions.
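As a self-contained illustration of the “greedy” policy discussed in the abstract — a sketch of the generic algorithm, not the paper's actual implementation — consider an agent that always pulls the arm with the highest empirical mean and never explores deliberately:

```python
import random

# Greedy bandit policy: exploit the empirically best arm; unplayed arms are
# treated as having infinite mean so that each arm is tried at least once.
class GreedyBandit:
    def __init__(self, n_arms):
        self.counts = [0] * n_arms
        self.sums = [0.0] * n_arms

    def select(self):
        means = [s / c if c else float('inf')
                 for s, c in zip(self.sums, self.counts)]
        best = max(means)
        # break ties uniformly at random
        return random.choice([i for i, m in enumerate(means) if m == best])

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.sums[arm] += reward
```

Once such a policy has sampled a good arm badly a few times, it can lock onto an inferior arm forever — the failure mode behind the low consumer welfare under stark competition described in the abstract.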
--- abstract: 'The vicinity of the unidentified EGRET source 3EG J1420–6038 has undergone extensive study in the search for counterparts, revealing the energetic young pulsar PSR J1420–6048 and its surrounding wind nebula as a likely candidate for at least part of the emission from this bright and extended gamma-ray source. We report on new Suzaku observations of PSR J1420–6048, along with analysis of archival XMM-Newton data. The low background of Suzaku permits mapping of the extended X-ray nebula, indicating a tail stretching $\sim 8 \arcmin$ north of the pulsar. The X-ray data, along with archival radio and VHE data, hint at a pulsar birth site to the north, and yield insights into its evolution and the properties of the ambient medium. We further explore such properties by modeling the spectral energy distribution (SED) of the extended nebula.' author: - 'Adam Van Etten, Roger W. Romani' title: 'The Extended X-ray Nebula of PSR J1420–6048' --- Introduction ============ The campaign to identify 3EG J1420–6038 has revealed sources across the electromagnetic spectrum from radio to VHE $\gamma$-rays. The complex of compact and extended radio sources in this region is referred to as the Kookaburra [@robertsetal99], and covers nearly a square degree along the Galactic plane. Within “K3”, a northeasterly excess in this complex, @d'amicoetal01 discovered PSR J1420–6048 (hereafter J1420), a young energetic pulsar with period 68 ms, characteristic age $\rm \tau_c = 13$ kyr, and spin down energy $\rm \dot E = 1.0 \times 10^{37} \, erg \, s^{-1}$. The NE2001 dispersion measure model [@candl02] places this pulsar at a distance of 5.6 kpc. Subsequent ASCA observations by @robertsetal01b revealed extended X-ray emission around this pulsar, and @ngetal05 further examined the K3 pulsar wind nebula (PWN) with Chandra and XMM-Newton, resolving a bright inner nebula along with fainter emission extending $\sim 2\arcmin$ from the pulsar.
@aharonianetal06b report on the discovery of two bright VHE $\gamma$-ray sources coincident with the Kookaburra complex. HESS J1420-607 is centered just north of J1420, with best fit extension overlapping the pulsar position. The other H.E.S.S. source appears to correspond to the Rabbit nebula half a degree southwest, which is also observed in the radio [@robertsetal99] and X-ray [@robertsetal01a]. Most recently, PSR J1420-6048 was detected by the Fermi Large Area Telescope (LAT) [@abdoetal09]. This crowded region clearly merits further study, and we report on new X-ray results obtained with Suzaku and XMM-Newton, as well as SED modeling of the K3 nebula. Data Analysis ============= The Suzaku pointing (obsID 503110010) occurred on January 11–12, 2009 for a total of 50.3 ks. We utilize the standard pipeline screened events, and analyze the XIS front side (XIS0 and XIS3) and back side (XIS1) illuminated chips with XSelect version 2.4. We also obtained recent archival XMM data to augment the Suzaku data; observation 0505840101 occurred on February 15, 2008, for 35.0 ks, while observation 0505840201 added 5.6 ks. The second data set has a slightly different CCD placement, and suffers from high background, so we only use the 35.0 ks of data. We apply the standard data processing, utilizing SAS version 9.0. After screening the data for periods of high background, 19.9 ks of MOS data remain. The PN chip suffers greatly from flaring, and we discard its data. Spectral fits are accomplished with XSPEC version 12.5. Broadband Morphology and Point Sources -------------------------------------- The Suzaku X-ray emission peaks in the vicinity of the pulsar, with a bright halo extending $\sim3 \arcmin$ and a fainter tail extending north $\sim 8 \arcmin$. A number of other excesses of emission correspond to point sources, as discussed below. Figure 1 shows the Suzaku data in the 2–10 keV band, which highlights the extended PWN emission to the north.
Also depicted is the XMM exposure, clearly showing a number of point sources, though no obvious extended emission is apparent. To identify X-ray point sources we use the SAS source detection function edetect$\_$chain on the XMM MOS chips and search in two energy bands of 0.3–2 keV and 2–10 keV for sources with a probability $P < 10^{-13}$ of arising from random Poissonian fluctuations. The source detection algorithm also attempts to determine source extension via a Gaussian model, though all detections are consistent with a point source. Counts are therefore extracted from a 15 pixel ($16.5 \arcsec$) radius circle. A total of 8 sources pass this test, 4 of which also appear in @ngetal05: PSR J1420–6048 (source 5 in our dataset), the X-ray sources denoted star 1 (source 1) and star 3 (source 2), and another point source to the southeast (source 3) unlabeled by [@ngetal05] but visible in their XMM exposure. This source to the southeast is also a field star, as it appears quite bright in DSS2 red images. Of the four remaining sources, only one, a hard bright source $8.5 \arcmin$ north of J1420 labeled source 7, lacks an optical counterpart. Source 7 also overlaps a radio hotspot to the north. Below we list the properties of these sources, defining the hardness ratio as HR $= \rm (C_{hi}-C_{lo})/(C_{hi} + C_{lo})$, where $\rm C_{lo}$ and $\rm C_{hi}$ are MOS counts in the 0.3–2 keV and 2–10 keV bands, respectively. It is worth noting that PSR J1420–6048 is only the fifth brightest point source in the XMM field and that all 8 XMM sources appear as excesses in the soft band Suzaku data as well. All point sources are quite soft, save for J1420 and source 7.

  No. ($\tablenotemark{*}$)   R.A.            Dec.              Pos. Err. ($\arcsec$)   Counts         HR
  --------------------------- --------------- ----------------- ----------------------- -------------- -------------------
  1 (Star 1)                  $14:19:11.52$   $-60:49:34.00$    $0.26$                  $462 \pm 27$   $-0.99 \pm 0.026$
  2 (Star 3)                  $14:19:31.48$   $-60:46:20.29$    $0.39$                  $146 \pm 18$   $-0.85 \pm 0.12$
  3 (Unlabeled)               $14:20:22.72$   $-60:53:21.47$    $0.45$                  $164 \pm 19$   $-1.00 \pm 0.069$
  4                           $14:19:17.61$   $-60:45:23.45$    $0.61$                  $123 \pm 17$   $-0.81 \pm 0.12$
  5 (PSR J1420–6048)          $14:20:08.19$   $-60:48:14.85$    $0.63$                  $150 \pm 20$   $0.95 \pm 0.072$
  6                           $14:20:40.75$   $-60:41:20.22$    $0.79$                  $40 \pm 11$    $-0.79 \pm 0.30$
  7                           $14:20:09.78$   $-60:39:42.86$    $0.85$                  $62 \pm 14$    $0.79 \pm 0.18$
  8                           $14:19:35.85$   $-60:42:11.39$    $1.16$                  $34 \pm 11$    $-0.68 \pm 0.36$

  : XMM Source Properties \[srcprop\]

On a larger scale, extended emission is observed in all wavebands. Australia Telescope Compact Array (ATCA) observations within the error ellipse of 3EG J1420–6038 (which is broad enough to encompass both the K3 wing and the Rabbit nebula) by @robertsetal99 revealed the “K3” excess, a resolved knot of emission surrounding the pulsar with a flux density of 20 mJy at 20 cm and index $\alpha = -0.4 \pm 0.5$. Adjacent is the “K2 wing,” with 1 Jy at 20 cm and an index of $-0.2 \pm 0.2$. Closer inspection of both the 13 cm and 20 cm continuum maps reveals that J1420 lies on the southeastern rim of an apparent
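The hardness ratio defined above is straightforward to evaluate; a minimal sketch with first-order Gaussian error propagation (the error formula is our assumption — the paper does not state how its HR uncertainties were computed):

```python
import math

# HR = (C_hi - C_lo) / (C_hi + C_lo), with first-order error propagation:
# dHR/dC_hi = 2*C_lo/T^2 and dHR/dC_lo = -2*C_hi/T^2, where T = C_lo + C_hi.
def hardness_ratio(c_lo, c_hi, err_lo, err_hi):
    total = c_lo + c_hi
    hr = (c_hi - c_lo) / total
    hr_err = 2.0 * math.sqrt((c_lo * err_hi) ** 2 +
                             (c_hi * err_lo) ** 2) / total ** 2
    return hr, hr_err
```

For instance, a source with 25 soft and 75 hard counts has HR = 0.5; a negative HR (most sources in the table) indicates a soft spectrum.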
--- abstract: 'Let $f$ be a holomorphic cusp form of weight $k$ with respect to $SL_2(\mathbb{Z})$ which is a normalized Hecke eigenform, and let $L_f(s)$ be the $L$-function attached to the form $f$. In this paper, we shall give the relation between the number of zeros of $L_f(s)$ and that of the derivatives of $L_f(s)$ using Berndt’s method, and an estimate of the zero-density of the derivatives of $L_f(s)$ based on Littlewood’s method.' author: - | Yoshikatsu Yashiro\ Graduate School of Mathematics, Nagoya University,\ 464-8602  Chikusa-ku, Nagoya, Japan\ E-mail: m09050b@math.nagoya-u.ac.jp title: '**Distribution of zeros and zero-density estimates for the derivatives of *L*-functions attached to cusp forms**' --- [^1] [^2] Introduction ============ Let $f$ be a cusp form of weight $k$ for $SL_2(\mathbb{Z})$ which is a normalized Hecke eigenform. Let $a_f(n)$ be the $n$-th Fourier coefficient of $f$ and set $\lambda_f(n)=a_f(n)/n^{(k-1)/2}$. Rankin showed that $\sum_{n\leq x}|\lambda_f(n)|^2=C_fx+O(x^{3/5})$ for $x\in\mathbb{R}_{>0}$, where $C_f$ is a positive constant depending on $f$ (see [@RAN (4.2.3), p.364]). The $L$-function attached to $f$ is defined by $$\begin{aligned} L_f(s)=\sum_{n=1}^\infty\frac{\lambda_f(n)}{n^s}=\prod_{p\text{:prime}}\left(1-\frac{\alpha_f(p)}{p^s}\right)^{-1}\left(1-\frac{\beta_f(p)}{p^s}\right)^{-1} \quad (\text{Re }s>1), \label{4LD}\end{aligned}$$ where $\alpha_f(p)$ and $\beta_f(p)$ satisfy $\alpha_f(p)+\beta_f(p)=\lambda_f(p)$ and $\alpha_f(p)\beta_f(p)=1$.
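As a concrete illustration of the coefficients $a_f(n)$ and the Hecke multiplicativity encoded in the Euler product, one can compute the Fourier coefficients of the weight-12 cusp form $\Delta = q\prod_{n\geq1}(1-q^n)^{24}$ (the Ramanujan $\tau$ function); the choice of $\Delta$ is our example and is not singled out in the paper. A minimal sketch:

```python
# Coefficients tau(1..N) of Delta = q * prod_{n>=1} (1 - q^n)^24,
# computed by truncated power-series multiplication in q.
def delta_coeffs(N):
    P = [0] * (N + 1)
    P[0] = 1                       # series for the product, mod q^(N+1)
    for n in range(1, N + 1):
        for _ in range(24):        # multiply by (1 - q^n), 24 times
            for k in range(N, n - 1, -1):
                P[k] -= P[k - n]
    # Delta = q * P, so tau(m) is the coefficient of q^(m-1) in P
    return {m: P[m - 1] for m in range(1, N + 1)}
```

For example, $\tau(2)=-24$, $\tau(3)=252$, and $\tau(2)\tau(3)=\tau(6)=-6048$, reflecting the multiplicativity $a_f(mn)=a_f(m)a_f(n)$ for coprime $m,n$ of a normalized Hecke eigenform.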
By Hecke’s work ([@HEC]), the function $L_f(s)$ is analytically continued to the whole $s$-plane by $$\begin{aligned} (2\pi)^{-s-\frac{k-1}{2}}\Gamma(s+\tfrac{k-1}{2})L_f(s)=\int_0^\infty f(iy)y^{s+\frac{k-1}{2}-1}dy, \label{4AC}\end{aligned}$$ and has a functional equation $$\begin{aligned} L_f(s)=\chi_f(s)L_f(1-s)\end{aligned}$$ where $\chi_f(s)$ is given by $$\begin{aligned} \chi_f(s)=&(-1)^{-\frac{k}{2}}(2\pi)^{2s-1}\frac{\Gamma(1-s+\frac{k-1}{2})}{\Gamma(s+\frac{k-1}{2})} \notag\\ =&2(2\pi)^{-2(1-s)}\Gamma(s+\tfrac{k-1}{2})\Gamma(s-\tfrac{k-1}{2})\cos\pi(1-s). \label{4XFE}\end{aligned}$$ The second equality is deduced from the facts $\Gamma(s)\Gamma(1-s)=\pi/\sin(\pi s)$ and $\sin\pi(s+(k-1)/2)=(-1)^{k/2}\cos\pi(1-s)$. Similarly to the case of the Riemann zeta function $\zeta(s)$, it is conjectured that all complex zeros of $L_f(s)$ lie on the critical line $\text{Re }s=1/2$; this is the Generalized Riemann Hypothesis (GRH). In order to support the truth of the GRH, the distribution and the density of complex zeros of $L_f(s)$ are studied without assuming the GRH. Lekkerkerker [@LEK] proved the approximate formula for the number of complex zeros of $L_f(s)$: $$\begin{aligned} N_f(T)=\frac{T}{\pi}\log\frac{T}{2\pi e}+O(\log T), \label{ML0}\end{aligned}$$ where $T>0$ is sufficiently large, and $N_f(T)$ denotes the number of complex zeros of $L_f(s)$ in $0<{\rm Im\;}s\leq T$. The formula (\[ML0\]) is an analogue of the corresponding formula for $N(T)$, the number of complex zeros of $\zeta(s)$ in $0<{\rm Im\;}s\leq T$. Riemann [@RIE] showed that $$\begin{aligned} N(T)=\frac{T}{2\pi}\log\frac{T}{2\pi e}+O(\log T). \label{Z0A}\end{aligned}$$ (Later von Mangoldt [@VOM] proved (\[Z0A\]) rigorously.) For the Riemann zeta function, the zeros of the derivatives of $\zeta(s)$ are closely connected with the RH.
Speiser [@SPE] showed that the Riemann Hypothesis (RH) is equivalent to the non-existence of complex zeros of $\zeta'(s)$ in $\text{Re }s<1/2$, where $\zeta'(s)$ denotes the derivative of $\zeta(s)$. Levinson and Montgomery proved that if the RH is true, then $\zeta^{(m)}(s)$ has at most finitely many complex zeros in $0<\text{Re }s<1/2$ for any $m\in\mathbb{Z}_{\geq0}$. There are many studies of the zeros of $\zeta^{(m)}(s)$ without assuming the RH. Spira [@SP1], [@SP2] showed that there exist $\sigma_{m}\leq(7m+8)/4$ and $\alpha_{m}<0$ such that $\zeta^{(m)}(s)$ has no zero for ${\rm Re\;}s\geq\sigma_{m}$, no complex zero for ${\rm Re\;}s\leq\alpha_m$, and exactly one real zero in each open interval $(-1-2n,1-2n)$ with $1-2n\leq\alpha_m$. Later, Yıldırım [@YIL] showed that $\zeta''(s)$ and $\zeta'''(s)$ have no zeros in the strip $0\leq\text{Re\;}s<1/2$. Berndt [@BER] gave the relation between the number of complex zeros of $\zeta(s)$ and that of $\zeta^{(m)}(s)$: $$\begin{aligned} N_m(T)=N(T)-\frac{T\log 2}{2\pi}+O(\log T), \label{ZMA}\end{aligned}$$ where $m\in\mathbb{Z}_{\geq1}$ is fixed and $N_m(T)$ denotes the number of complex zeros of $\zeta^{(m)}(s)$ in $0<{\rm Im\;}s\leq T$. Recently, Aoki and Minamide studied the density of zeros of $\zeta^{(m)}(s)$ to the right of the critical line ${\rm Re\;}s=1/2$ by using Littlewood’s method. Let $N_m(\sigma,T)$ be the number of zeros of $\zeta^{(m)}(s)$ in $\text{Re\;}s\geq\sigma$ and $0<\text{Im\;}s\leq T$. They showed that $$\begin{aligned} N_m(\sigma,T)=O\left(\frac{T}{\sigma-1/2}\log\frac{1}{\sigma-1/2}\right), \label{MME}\end{aligned}$$ uniformly for $\sigma>1/2$. From (\[ZMA\]) and (\[MME\]), we see that almost all complex zeros of $\zeta^{(m)}(s)$ lie in the neighbourhood of the critical line.
The purpose of this paper is to establish, for the derivatives of $L_f(s)$, the analogues of the results of Berndt and of Aoki and Minamide, namely, the relation between the number of complex zeros of $L_f(s)$ and that of $L_f^{(m)}(s)$, and the density of zeros of $L_f^{(m)}(s)$ in the right half plane ${\rm Re\;}s>1/2$. Let $n_f$ be the smallest integer greater than 1 such that $\lambda_f(n_f)\ne0$. Here $L^{(m)}_f(s)$ denotes the $m$-th derivative of $L_f(s)$ given by $$\begin{aligned} L_f^{(m)}(s)=\sum_{n=1}^\infty\frac{\lambda_f(n)(-\log n)^m}{n^s}=\sum_{n=n_f
--- abstract: 'One finding of cognitive research is that people do not automatically acquire usable knowledge by spending lots of time on task. Because students’ knowledge hierarchy is more fragmented than that of experts, their “knowledge chunks” are smaller. The limited capacity of short term memory makes the cognitive load high during problem solving tasks, leaving few cognitive resources available for meta-cognition. The abstract nature of the laws of physics and the chain of reasoning required to draw meaningful inferences make these issues critical. In order to help students, it is crucial to consider the difficulty of a problem from the perspective of students. We are developing and evaluating interactive problem-solving tutorials to help students in the introductory physics courses learn effective problem-solving strategies while solidifying physics concepts. The self-paced tutorials can provide guidance and support for a variety of problem solving techniques, and opportunities for knowledge and skill acquisition.' author: - Chandralekha Singh title: Problem Solving and Learning --- [ address=[Department of Physics and Astronomy, University of Pittsburgh, Pittsburgh, Pennsylvania, 15260]{} ]{} Cognitive Research and Problem Solving ====================================== Cognitive research deals with how people learn and solve problems [@nrc1; @joe]. At a coarse-grained level, there are three components of cognitive research: how people acquire knowledge, how they organize and retain the knowledge in memory (brain), and how they retrieve this knowledge from memory in appropriate situations, including problem solving. These three components are strongly coupled, e.g., how knowledge is organized and retained in memory during acquisition determines how effectively it can be retrieved in different situations to solve problems.
We can define problem solving as any purposeful activity where one must devise and perform a sequence of steps to achieve a set goal when presented with a novel situation. A problem can be quantitative or conceptual in nature. Using the findings of cognitive research, human memory can be broadly divided into two components: the working memory or short term memory (STM) and the long term memory (LTM). The long term memory is where prior knowledge is stored. Appropriate connections between prior knowledge in LTM and new knowledge that is being acquired at a given time can help an individual organize his/her knowledge hierarchically. Such hierarchical organization can provide indexing of knowledge where more fundamental concepts are at the top of the hierarchy and the ancillary concepts are below them. Similar to an index in a book, such indexing of knowledge in memory can be useful for accessing relevant knowledge while solving problems in diverse situations. It can also be useful for inferential recall when specific details may not be remembered. The working memory or STM is where information presented to an individual is processed. It is the conscious system that receives input from memory buffers associated with various sensory systems and can also receive input from the LTM. Conscious human thought and problem solving involve rearranging and synthesizing ideas in STM using input from the sensory systems and LTM. One of the major initial findings of the cognitive revolution is related to Miller’s magic number 7$\pm$2 (5 to 9), i.e., how much information STM can hold at one time [@miller]. Miller’s research found that STM can only hold 5 to 9 pieces of information regardless of the IQ of an individual. Here is an easy way to illustrate this. If an individual is asked to memorize the following sequence of 25 numbers and letters in that order after staring at it for 30 seconds, it is a difficult task: 6829-1835-47DR-LPCF-OGB-TWC-PVN.
An individual typically only remembers between 5 to 9 things in this case. However, later research shows that people can extend the limits of their working memory by organizing disparate bits of information into chunks or patterns [@chunk]. Using chunks, STM can evoke highly complex information from LTM. An easy way to illustrate this is by asking an individual to memorize the following sequence of 25 numbers and letters: 1492-1776-1865-1945-AOL-IBM-USA. This task is much easier if one recognizes that each of the four-digit numbers is an important year in history and each of the three-letter groups is a familiar acronym. Thus, an individual only has to remember 7 separate chunks rather than 25 disparate bits. This chunking mechanism is supported by research in knowledge-rich fields such as chess and physics, where experts have well organized knowledge [@chess]. For example, research shows that if experts in chess are shown a very good chess board that corresponds to the game of a world-class chess player, they are able to assemble the board after it is disassembled because they are able to chunk the information on the board and remember the position of one piece with respect to another. If chess novices are shown the same board, they are only able to retrieve 5-9 pieces after it is jumbled up because they are not able to chunk large pieces of information present on the chess board. On the other hand, both chess experts and novices are poor at assembling a board on which the chess pieces were randomly placed before it was jumbled up. In this latter case, chess experts are unable to chunk the random information due to the lack of a pattern. A crucial difference between expert and novice problem solving is the manner in which knowledge is represented in memory and the way it is retrieved to solve problems. Experts in a field have well organized knowledge.
They have large chunks of “compiled” knowledge in LTM, and several pieces of knowledge can be accessed together as a chunk [@automatic]. For example, for an expert in physics, vector addition, vector subtraction, displacement, velocity, speed, acceleration, force etc. can be accessed as one chunk while solving problems, whereas they can be seven separate pieces of information for beginning students. If a problem involves all of these concepts, it may cause a cognitive overload if students’ STM can only hold 5 or 6 pieces of information. Experts are comfortable going between different knowledge representations, e.g., verbal, diagrammatic/pictorial, tabular etc., and employ representations that make problem solving easier [@rep]. Experts categorize problems based upon deep features, unlike novices who can get distracted by context dependent features. For example, when physics professors and introductory physics students are asked to group together problems based upon similarity of solution, professors group them based upon physics concepts while students can choose categories that are dependent on contexts such as ramp problems, pulley problems, spring problems etc. [@chi; @hardiman; @reif2; @larkin]. Of course, an important goal of most physics courses is to help students develop expertise in problem solving and improve their reasoning skills. In order to help students, instructors must realize that the cognitive load, which is the amount of mental resources needed to solve a problem, is subjective [@cogload]. The complexity of a problem not only depends on its inherent complexity but also on the expertise, experience and intuition of an individual [@intuition]. It has been said that problems are either “impossible” or “trivial”. A ballistic pendulum problem that may be trivial for a physics professor may be very difficult for a beginning student [@rosengrant]. Cognitive load is higher when the context is abstract as opposed to concrete.
The following Wason tasks [@wason] are examples of abstract and concrete problems which are conceptually similar, but the abstract problem turns out to be cognitively more demanding. - You will lose your job unless you enforce the following rule: “If a person is rated K, then his/her document must be marked with a 3”.\ Each card on the table for a person has a letter on one side and a number on the other side. Indicate only the card(s) shown in Figure 1 that you definitely need to turn over to see if the document of any of these people violates this rule.\ - You are serving behind the bar of a city centre pub and will lose your job unless you enforce the following rule: “If a person is drinking beer, then he/she must be over 18 years old”.\ Each person has a card on the table which has his/her age on one side and the name of his/her drink on the other side. Indicate only the card(s) shown in Figure 2 that you definitely need to turn over to see if any of these people are breaking this rule. The correct answer for the abstract case is that you must turn the cards with K and 7 (to make sure that there is no K on the other side of the 7). Note that the logic presented in the task is one-sided, in that it is acceptable for a document with a 3 to have anything on the other side. The correct answer for the concrete case is “beer” and “16 years old”, and it is much easier to identify these correct answers than the correct answers for the abstract case. A major reason why the cognitive load is high during problem solving in physics is that the laws of physics are abstract. It is important to realize that it is not easy to internalize them unless concrete contexts are provided to the students. Another difficulty is that, once the instructor has built an intuition about a problem, it may not appear difficult to him/her even if it is abstract.
In such situations the instructor may overlook the cognitive complexity of the problem for a beginning student unless the instructor puts himself/herself in the students’ shoes. An important lesson from cognitive research is that new knowledge that an individual acquires builds on prior knowledge. This idea is consistent with Piaget’s notion of “optimal mismatch” [@piaget] and Vygotsky’s idea of the “zone of proximal development” (ZPD) [@vygotsky]. ZPD is
--- abstract: 'We use freeness assumptions of random matrix theory to analyze the dynamical behavior of inference algorithms for probabilistic models with dense coupling matrices in the limit of large systems. For a toy Ising model, we are able to recover previous results such as the property of vanishing effective memories and the analytical convergence rate of the algorithm.' address: | Department of Artificial Intelligence, Technische Universität Berlin,\ Berlin 10587, Germany author: - Manfred Opper and Burak Çakmak bibliography: - 'mybib.bib' title: 'Understanding the dynamics of message passing algorithms: a free probability heuristics [^1]' --- Introduction ============ Probabilistic inference plays an important role in statistics, signal processing and machine learning. A major task is to compute statistics of unobserved random variables using distributions of these variables conditioned on observed data. An exact computation of the corresponding expectations in the multivariate case is usually not possible except for simple cases. Hence, one has to resort to methods which approximate the necessary high-dimensional sums or integrals and which are often based on ideas of statistical physics [@mezard2009information]. A class of such approximation algorithms is often termed [*message passing*]{}. Prominent examples are [*belief propagation*]{} [@pearl2014probabilistic] which was developed for inference in probabilistic Bayesian networks with sparse couplings and [*expectation propagation*]{} (EP) which is also applicable for networks with dense coupling matrices [@Minka1]. Both types of algorithms make assumptions on weak dependencies between random variables which motivate the approximation of certain expectations by Gaussian random variables invoking central limit theorem arguments [@Adatap].
Using ideas of the statistical physics of disordered systems, such arguments can be justified for the [*fixed points*]{} of such algorithms for large network models where couplings are drawn from random, rotation invariant matrix distributions. This extra assumption of randomness allows for further simplifications of message passing approaches [@ccakmak2016self; @CakmakOpper18], leading e.g. to the [*approximate message passing*]{} (AMP) or VAMP algorithms, see [@Ma; @rangan2019vector; @takeuchi]. Surprisingly, random matrix assumptions also facilitate the analysis of the [*dynamical*]{} properties of such algorithms [@rangan2019vector; @takeuchi; @CakmakOpper19], allowing e.g. for exact computations of convergence rates [@CakmakOpper19; @ccakmak2020analysis]. This result might not be expected, because mathematically the updates of message passing algorithms somewhat resemble the dynamical equations of spin-glass models or of recurrent neural networks, which often show a complex behavior in the large system limit [@Mezard]. This manifests itself e.g. in a slow relaxation towards equilibrium [@cugliandolo1993analytical] with a possible long-time memory of initial conditions [@Eisfeller]. Such properties would definitely not be ideal for the design of a numerical algorithm. So a natural question is: which properties of the dynamics enable both an analytical treatment and a guarantee of fast convergence? In this paper, we give a partial answer to this question by interpreting recent results on the dynamics of algorithms for a toy inference problem for an Ising network. We develop a heuristic based on freeness assumptions on random matrices which leads to an understanding of the simplifications in the analytical treatment and provides a simple way of predicting the convergence rate of the algorithm. The paper is organized as follows: In Section 2 we introduce the motivating Ising model and provide a brief presentation of the TAP mean-field equations.
In Sections 3 and 4 we present the message passing algorithm of [@CakmakOpper19] for solving the TAP equations and briefly discuss its dynamical properties in the thermodynamic limit. In Sections 5 and 6 we recover the property of vanishing memories and the analytical convergence rate of the message passing algorithm using a free probability heuristic. Comparisons of our results with simulations are given in Section 7. Section 8 presents a summary and outlook. Motivation: Ising models with random couplings and TAP mean field equations =========================================================================== We consider a model of a multivariate distribution of binary units. This is given by an Ising model with pairwise interactions of the spins ${\mathlette{\boldmath}{s}}=(s_1,\ldots,s_N)^\top\in\{-1,1\}^{N}$ described by the Gibbs distribution $$p({\mathlette{\boldmath}{s}}\vert {\mathlette{\boldmath}{J}},{\mathlette{\boldmath}{h}})\doteq \frac{1}{Z}\exp\left(\frac{1}{2}{\mathlette{\boldmath}{s}}^\top{\mathlette{\boldmath}{J}}{\mathlette{\boldmath}{s}}+{\mathlette{\boldmath}{s}}^\top{\mathlette{\boldmath}{h}}\right)\label{Gibbs}$$ where $Z$ stands for the normalizing partition function. While such models have been used for data modeling, where the couplings ${\mathlette{\boldmath}{J}}$ and fields ${\mathlette{\boldmath}{h}}$ are adapted to data sets [@hinton2007boltzmann], we will restrict ourselves to a toy model where all external fields are equal $$h_{i}=h\neq 0,~\forall i.$$ The coupling matrix ${\mathlette{\boldmath}{J}}={\mathlette{\boldmath}{J}}^{\top}$ is assumed to be drawn at random from a rotation invariant matrix ensemble, in order to allow for nontrivial and rich classes of models.
This means that ${\mathlette{\boldmath}{J}}$ and ${\mathlette{\boldmath}{V}}{\mathlette{\boldmath}{J}}{\mathlette{\boldmath}{V}}^\top$ have the same probability distribution for any orthogonal matrix ${\mathlette{\boldmath}{V}}$ independent of ${\mathlette{\boldmath}{J}}$. Equivalently, ${\mathlette{\boldmath}{J}}$ has the spectral decomposition [@Collins14] $${\mathlette{\boldmath}{J}}={\mathlette{\boldmath}{O}}^ \top{\mathlette{\boldmath}{D}}{\mathlette{\boldmath}{O}} \label{decom}$$ where ${\mathlette{\boldmath}{O}}$ is a random Haar (orthogonal) matrix that is independent of a diagonal matrix ${\mathlette{\boldmath}{D}}$. This class of models generalizes the well known SK (Sherrington–Kirkpatrick) model [@SK] of spin glasses, for which ${\mathlette{\boldmath}{J}}$ is a symmetric Gaussian random matrix. The simplest goal of probabilistic inference is the computation of the magnetizations $${\mathlette{\boldmath}{m}}=\mathbb E[{\mathlette{\boldmath}{s}}]$$ where the expectation is taken over the Gibbs distribution. For random matrix ensembles, the so-called TAP equations [@SK] were developed in statistical physics to provide approximate solutions for ${\mathlette{\boldmath}{m}}.$ Moreover, under certain conditions these equations can be assumed to give exact results for the magnetizations in the thermodynamic limit $N\to\infty$ for models with random couplings [@Mezard] (for a rigorous analysis in the case of the SK model, see [@chatterjee2010spin]). For general rotation invariant random coupling matrices, the TAP equations are given by \[tap\] $$\begin{aligned} {\mathlette{\boldmath}{m}}&={\rm Th}({\mathlette{\boldmath}{\gamma}})\\ {\mathlette{\boldmath}{\gamma}}&={\mathlette{\boldmath}{J}}{\mathlette{\boldmath}{m}}-{\rm R}(\chi){\mathlette{\boldmath}{m}}\\ \chi &= \mathbb E[{\rm Th}'(\sqrt{(1-\chi){\rm R}'(\chi)} u)] \label{chi}. 
\end{aligned}$$ Here $u$ denotes a standard Gaussian random variable and for convenience we define the function $${\rm Th}(x)\doteq\tanh(h+x) .$$ Equation (\[tap\]) provides corrections to the simpler naive mean-field method. The latter, ignoring statistical dependencies between spins, would retain only the term ${\mathlette{\boldmath}{J}}{\mathlette{\boldmath}{m}}$ as the “mean field” acting on spin $i$. The so-called [*Onsager reaction term*]{} $-{\rm R}(\chi){\mathlette{\boldmath}{m}}$ models the coherent small changes of the magnetizations of the other spins due to the presence of spin $i$. Furthermore, $\chi$ coincides with the static susceptibility computed within the replica-symmetric ansatz. The Onsager term for a Gaussian matrix ensemble was developed in [@TAP] and later generalized to general ensembles of rotation invariant coupling matrices in [@Parisi] using a free energy approach. For alternative derivations, see [@Adatap] and [@CakmakOpper18]. The only dependence on the random matrix ensemble in (\[tap\]) is via the R-transform ${\rm R}(\chi)$ and its derivative ${\rm R}'(\chi)$. The R-transform is defined as [@Hiai] $${\rm R}(\omega)={\rm G}^{-1}(\omega)-\frac{1}{\omega}, \label{Rtrans}$$ where ${\rm G}^{-1}$ is the functional inverse of the Green's function $${\rm G}(z)\doteq {\rm Tr}((z{\bf I}-{\mathlette{\boldmath}{J}})^{-1}). \label{Greens}$$ Here, for an $N\times N$ matrix ${\mathlette{\boldmath}{X}}$ we define its limiting (averaged) normalized trace by $${\rm Tr}({\mathlette{\boldmath}{X}})\doteq \lim_{N\to\infty}\frac{1}{N}\mathbb E_{{\mathlette{\boldmath}{X}}}{\rm tr}({\mathlette{\boldmath}{X}}).$$ From
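As a concrete illustration of how (\[tap\]) can be solved numerically (this sketch is ours, not part of the paper; all parameter values are arbitrary), take the SK-type ensemble $J=\beta W$ with $W$ a Wigner matrix whose spectrum follows the semicircle law on $[-2,2]$, for which ${\rm R}(\omega)=\beta^2\omega$ and ${\rm R}'(\omega)=\beta^2$. At sufficiently high temperature (small $\beta$), a damped fixed-point iteration of the TAP equations converges:

```python
import numpy as np

rng = np.random.default_rng(0)
N, beta, h = 400, 0.4, 0.3

# SK-type coupling J = beta * W: W is a Wigner matrix with semicircular
# spectrum on [-2, 2], so the R-transform is R(w) = beta**2 * w.
A = rng.standard_normal((N, N))
J = beta * (A + A.T) / np.sqrt(2.0 * N)

# Scalar self-consistency chi = E[Th'(sqrt((1 - chi) R'(chi)) u)],
# with Th(x) = tanh(h + x) and R'(chi) = beta**2, u ~ N(0, 1),
# evaluated by Gauss-Hermite quadrature.
x, w = np.polynomial.hermite.hermgauss(80)
u, w = np.sqrt(2.0) * x, w / np.sqrt(np.pi)
chi = 0.5
for _ in range(200):
    s = beta * np.sqrt(1.0 - chi)
    chi = float(np.sum(w * (1.0 - np.tanh(h + s * u) ** 2)))

# Damped fixed-point iteration of m = Th(J m - R(chi) m);
# the damping guards against oscillations of the plain iteration.
m = np.zeros(N)
for _ in range(3000):
    m = 0.5 * m + 0.5 * np.tanh(h + J @ m - beta**2 * chi * m)

residual = float(np.max(np.abs(m - np.tanh(h + J @ m - beta**2 * chi * m))))
```

For larger $\beta$ plain iteration need not converge; the message passing algorithm of [@CakmakOpper19] discussed below is designed precisely to address this.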
--- abstract: 'We discuss the dependence of the pure Yang-Mills equation of state on the choice of gauge algebra. In the confined phase, we generalize to an arbitrary simple gauge algebra Meyer’s proposal of modelling Yang-Mills matter by an ideal glueball gas in which the high-lying glueball spectrum is approximated by a Hagedorn spectrum of closed-bosonic-string type. Such a formalism is undefined above the Hagedorn temperature, corresponding to the phase transition toward a deconfined state of matter in which gluons are the relevant degrees of freedom. Under the assumption that the renormalization scale of the running coupling is gauge-algebra independent, we discuss how the behavior of thermodynamical quantities such as the trace anomaly should depend on the gauge algebra in both the confined and deconfined phases. The obtained results compare favourably with recent and accurate lattice data in the $\mathfrak{su}(3)$ case and support the idea that the more generators the gauge algebra has, the more strongly first order the phase transition is.' author: - Fabien - Gwendolyn title: 'Comments on Yang-Mills thermodynamics, the Hagedorn spectrum and the gluon gas' --- Introduction ============ The existence of a critical temperature, $T_c$, in QCD is of particular phenomenological interest since it signals a transition from a confined phase of hadronic matter to a deconfined one. When $T<T_c$, a successful effective description of QCD is the hadron resonance gas model, in which the hadronic matter is seen as an ideal gas of hadrons. It compares well with current lattice data when the meson and baryon resonances below 2.5 GeV are included [@borsanyi2010]. A problem is that experimental information about resonances above 3 GeV is still lacking. 
To describe the high-lying hadronic spectrum, Hagedorn [@hage65] proposed a model in which the number of hadrons with mass $m$ increases as $\rho(m)\propto m^a \, {\rm e}^{m/T_h}$ ($a$ is real): the so-called Hagedorn spectrum. Thermodynamical quantities, computed using hadronic degrees of freedom, are then undefined for $T>T_h$. Other degrees of freedom are then needed at higher temperatures, so it is tempting to guess that $T_h\approx T_c$, the new degrees of freedom being deconfined quarks and gluons. Although the current lattice studies agree on a value of $T_c$ in the range $(150-200)$ MeV when $2+1$ light quark flavours are present [@borsanyi2010; @tcd], there is currently no consensus concerning the value of $T_h$. Indeed, reaching values of $T_h$ as low as 200 MeV demands an ad hoc modification of $\rho(m)$: by introducing an extra parameter $m_0$ and setting $\rho(m)\propto (m^2+m^2_0)^{a/2} \, {\rm e}^{m/T_h}$, one can reach values of $T_h$ in the range $(160-174)$ MeV, which agree with lattice computations, see *e.g.* [@hage68; @cley]. However, by taking the original form $m_0=0$, one rather ends up with values of $T_h$ around $(300-360)$ MeV, see [@cudell0; @bronio]. Moreover, it has been observed in some pure gauge lattice simulations with the gauge algebra $\mathfrak{su}(N)$ that $T_c\lesssim T_h$ [@TcTh0; @TcTh], as intuitively expected. It has to be said that the value of $T_h$ and its relation to $T_c$ are still a matter of debate. Open strings as well as closed strings naturally lead to a Hagedorn spectrum, see *e.g.* [@zwie]. Modelling mesons as open strings is one way to obtain a Hagedorn spectrum in QCD [@cudell]. The question of showing that a Hagedorn spectrum arises from QCD itself is still open but, under reasonable technical assumptions, it has recently been found in the large-$N$ limit of QCD [@cohen] (glueballs and mesons have a zero width in this limit). 
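To make explicit why thermodynamical quantities are undefined for $T>T_h$ (a standard argument recalled here for completeness, not present in the original text), consider the partition function of an ideal gas of Hagedorn states in the non-relativistic approximation, where each species of mass $m$ contributes a factor $\propto (mT)^{3/2}\,{\rm e}^{-m/T}$:

```latex
\ln Z(T)\sim\int^{\infty} dm\,\rho(m)\,(m T)^{3/2}\,{\rm e}^{-m/T}
\propto\int^{\infty} dm\; m^{a+3/2}\,
{\rm e}^{\,m\left(\frac{1}{T_h}-\frac{1}{T}\right)},
```

which diverges for any $T>T_h$, whatever the value of $a$; at $T=T_h$ itself the integral converges only if $a<-5/2$, so the power $a$ controls the behavior of the thermodynamical quantities as the Hagedorn temperature is approached.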
In the pure gauge sector, the $\mathfrak{su}(3)$ equation of state computed on the lattice has been shown to be compatible with a glueball gas model in which the high-lying spectrum is modelled by a gas of closed bosonic strings [@meyer]. Besides QCD, pure Yang-Mills (YM) thermodynamics is challenging too, in particular because it can be formulated for any gauge algebra. A clearly relevant case is that of $\mathfrak{su}(N)$-type gauge algebras, linked to the large-$N$ limit of QCD. Moreover, a change of gauge algebra may lead to various checks of the hypotheses underlying any approach describing $\mathfrak{su}(3)$ YM theory. To illustrate this, let us recall the pioneering work [@sve], suggesting that the phase transition of YM theory with gauge algebra $\mathfrak{g}$ is driven by a spontaneous breaking of a global symmetry related to the center of $\mathfrak{g}$. Effective $Z_3$-symmetric models are indeed able to describe the first-order phase transition of $\mathfrak{su}(3)$ YM thermodynamics [@Z3]. However, a similar phase transition has also been observed in lattice simulations of G$_2$ YM theory [@G2] even though the center of G$_2$ is trivial, meaning that the breaking of center symmetry is not the only mechanism responsible for deconfinement. For example, it is argued in [@diakonov] that the YM phase transition for any gauge group is rather driven by dyon contributions. These questions are still under active investigation, and studying different gauge algebras helps to better understand the general mechanisms of (de)confinement in YM theory. For completeness, we mention that the structure of the gluon propagator at low momentum as well as the Dyson-Schwinger equations in scalar-Yang-Mills systems have recently started to be studied for generic gauge algebras [@maas; @maas2]. The main goal of the present work is to give predictions for the equation of state of YM theory with an arbitrary simple gauge algebra. 
This topic has, to our knowledge, never been investigated before and will be studied within two different well-established frameworks: a glueball gas with a high-lying Hagedorn spectrum in the confined phase (Sec. \[conf\]) and a gluon gas above the critical temperature (Sec. \[deconf\]). Some phenomenological consequences of the obtained results will then be discussed in Sec. \[conclu\]. More specifically, our results apply to the following gauge algebras: A$_{r\geq 1}$ related to $\mathfrak{su}$ algebras, B$_{r\geq 3}$ and D$_{r\geq 4}$ related to $\mathfrak{so}$ algebras, C$_{r\geq 2}$ related to $\mathfrak{sp}$ algebras, and the exceptional algebras E$_{6}$, E$_7$, F$_4$ and G$_2$. The case of E$_8$ is beyond the scope of the present paper, as will be explained below. Glueball gas and the Hagedorn spectrum {#conf} ====================================== The model --------- In the confined phase, glueballs, *i.e.* colour singlet bound states of pure YM theory, are the relevant degrees of freedom of YM matter. Hence it can be modelled in a first approximation by an ideal gas of glueballs, assuming that the residual interactions between these colour singlet states are weak enough to be neglected [@dashen]. Note that the glueball gas picture emerges from a strong coupling expansion in the case of large-$N$ $\mathfrak{su}(N)$ YM theory [@langelage10], where glueballs are exactly noninteracting since their scattering amplitude scales as $1/N^2$ [@witten]. The glueball gas picture implies that, for example, the total pressure should be given by $\sum_{J^{PC}}p_0(2J+1,T,M_{J^{PC}})$, where the sum runs over all the glueball states of the YM theory with a given gauge algebra, and where $$p_0(d,T,M)=\frac{d}{2\pi^2}M^2T^2\sum_{j=1}^\infty\frac{1}{j^2}K_2(j\, M/T)$$ is the pressure associated with a single bosonic species with mass $M$ and $d$ degrees of freedom. 
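As a quick numerical sanity check of this formula (our own sketch, not part of the paper; function names and truncation parameters are our choices), the series can be evaluated in pure Python and compared against the massless Stefan-Boltzmann limit $p_0\to d\,\pi^2T^4/90$ for $M\ll T$:

```python
import math

def bessel_k2(x, tmax=12.0, n=4000):
    # K_2(x) = int_0^inf exp(-x cosh t) cosh(2t) dt, trapezoidal rule
    h = tmax / n
    total = 0.5 * math.exp(-x)  # endpoint t = 0, where cosh(0) = 1
    for i in range(1, n + 1):
        t = i * h
        c = x * math.cosh(t)
        if c > 700.0:  # exp would underflow; remaining tail is negligible
            break
        total += math.exp(-c) * math.cosh(2.0 * t)
    return total * h

def p0(d, T, M, jmax=400):
    """Ideal-gas pressure of one bosonic species of mass M with d degrees
    of freedom, in natural units (hbar = c = k_B = 1)."""
    s = sum(bessel_k2(j * M / T) / (j * j) for j in range(1, jmax + 1))
    return d * M * M * T * T * s / (2.0 * math.pi ** 2)
```

Here $K_2$ is computed from its integral representation $K_2(x)=\int_0^\infty e^{-x\cosh t}\cosh(2t)\,dt$; in practice one would rather call a library routine such as `scipy.special.kv`.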
Performing the sum $\sum_{J^{PC}}$ demands the explicit knowledge of all the glueball states, not only the lowest-lying ones that can be known from lattice computations or from effective approaches. To face this problem, it has been proposed in [@meyer] to express the total pressure of $\mathfrak{su}(3)$ YM theory as $$\label{preh} p= \hspace{-0.3cm}\sum_{M_{J^{PC}}<2M_{0^{++}}}\hspace{-0.65cm} p_0(2J+1,T,M_{J^{PC}})+\int^\infty_{2M_{0^{++}}}\hspace{-0.5cm}dM\ p_0(\rho(M),T, M),$$ where the high-lying glueball spectrum (above the two
--- abstract: 'Let $E$ be a rank $2$, degree $d$ vector bundle over a genus $g$ curve $C$. The loci of stable pairs on $E$ in class $2[C]$ fixed by the scaling action are expressed as products of ${\operatorname{Quot}}$ schemes. Using virtual localization, the stable pairs invariants of $E$ are related to the virtual intersection theory of ${\operatorname{Quot}}E$. The latter theory is extensively discussed for an $E$ of arbitrary rank; the tautological ring of ${\operatorname{Quot}}E$ is defined and is computed on the locus parameterizing rank one subsheaves. In case $E$ has rank $2$, $d$ and $g$ have opposite parity, and $E$ is sufficiently generic, it is known that $E$ has exactly $2^g$ line subbundles of maximal degree. Doubling the zero section along such a subbundle gives a curve in the total space of $E$ in class $2[C]$. We relate this count of maximal subbundles with stable pairs/Donaldson-Thomas theory on the total space of $E$. This endows the residue invariants of $E$ with enumerative significance: they actually *count* curves in $E$.' address: 'Department of Mathematics, Brown University, Providence, RI 02912' author: - 'W. D. Gillam' title: | Maximal subbundles, Quot schemes,\ and curve counting --- Introduction {#section:introduction} ============ This note is concerned with the Gromov-Witten (GW), Donaldson-Thomas (DT), and Pandharipande-Thomas stable pairs (PT) residue invariants of the total space of a rank $2$ bundle $E$ over a smooth proper curve $C$ (see §\[section:curvecounting\] for a brief review). The latter invariants are well-understood from a computational perspective. In this paper I attempt to shed some light on the enumerative significance of such invariants and to explain the relationship between sheaf theoretic curve counting on $E$ and the virtual intersection theory of ${\operatorname{Quot}}$ schemes of symmetric products of $E$. 
In particular, in §\[section:maximalsubbundles\], I explain how to relate the “count" of maximal subbundles of $E$ (which belongs properly to the theory of stable bundles on curves) with the DT/PT theory of $E$. Recall that the ${\operatorname{Quot}}$ scheme of a trivial rank $n$ bundle on a smooth proper curve $C$ may be viewed as a compactification of the space of maps from $C$ to a Grassmannian. The relationship between the virtual intersection theory of this ${\operatorname{Quot}}$ scheme and various “Gromov invariants" has been studied by many authors [@PR], [@Ber], [@BDW], [@MO], culminating in the theory of *stable quotients* [@MOP] where the curve $C$ is also allowed to vary, as it is in GW theory. The relationship between GW invariants of Grassmannians and counts of subbundles of maximal degree has also been studied by several authors [@Hol], [@LN], [@OT]. These connections, however, are rather difficult to make, and one must take a circuitous route to link maximal subbundle counts to GW invariants. Since counting maximal subbundles is an inherently sheaf-theoretic problem, it is reasonable to suspect that the most direct connections with curve counting should be made through the sheaf theoretic curve counting theories of Donaldson-Thomas [@Tho], [@MNOP] and Pandharipande-Thomas [@PT]. In general, the relationship between the PT theory of $E$ and virtual intersection theory on the various ${\operatorname{Quot}}{\operatorname{Sym}}^n E$ is quite subtle; we will treat the general case in a separate paper [@Gil2]. In the present article, we will focus on the case of PT residue invariants in homology class $2[C]$ (twice the class of the zero section of $E$), where the most complete results can be obtained. 
In particular, we will see that PT residue invariants of $E$ in class $2[C]$ are completely determined by virtual intersection numbers on ${\operatorname{Quot}}{\mathcal O}_C$ (symmetric products of $C$) and on the ${\operatorname{Quot}}$ scheme ${\operatorname{Quot}}^1 E$ parameterizing rank one subsheaves of $E$. Our methods could even be used to compute the *full* PT theory of $E$ (including descendent invariants involving odd cohomology classes from $C$) in class $2[C]$, though we do not provide the details here. Once we have in hand the relationship between PT theory and virtual intersection theory of ${\operatorname{Quot}}$ schemes described above, we will be all the more interested in the latter. In §\[section:tautologicalclasses\], we suggest packaging this theory into a “tautological ring." In the course of proving the Vafa-Intriligator formula, Marian and Oprea [@MO] explained that this entire theory can be reduced, via virtual localization, to intersection theory on symmetric products of $C$, hence it can be treated as “known." On the other hand, it is quite painful in practice to write down manageable formulas for such invariants and it seemed to me to be overkill to appeal to the general results of [@MO] for the invariants actually needed in our study. Instead, we give (§\[section:quotschemes\]) a direct computation of the virtual intersection theory of the ${\operatorname{Quot}}$ scheme ${\operatorname{Quot}}^1 V$ parameterizing rank one subsheaves of a vector bundle $V$ on $C$ as follows. The universal such subbundle $S$ is a line bundle on ${\operatorname{Quot}}^1 V \times C$, hence its dual yields a map $S^\lor : {\operatorname{Quot}}^1 V \to {\operatorname{Pic}}C$. In case $V={\mathcal O}_C$ this is the “usual" map ${\operatorname{Sym}}^d C \to {\operatorname{Pic}}^d C$. 
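For orientation, a standard Riemann-Roch count (recalled here rather than taken from the original) explains the projective bundle structure of the “usual" map in large degree: for $L\in{\operatorname{Pic}}^d C$ with $d>2g-2$,

```latex
h^1(C,L)=h^0(C,K_C\otimes L^{-1})=0
\quad\text{since}\quad
\deg(K_C\otimes L^{-1})=2g-2-d<0,
\qquad\text{so}\qquad
h^0(C,L)=d-g+1,
```

and the fibre of ${\operatorname{Sym}}^d C\to{\operatorname{Pic}}^d C$ over $L$ is the complete linear system $|L|=\mathbb{P}({\operatorname{H}}^0(C,L))\cong\mathbb{P}^{d-g}$.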
Just as in the case of the “usual" map, the map $S^\lor$ is a projective space bundle when $S^\lor$ has sufficiently large degree $d$, and, as in the “usual" case, one can compute the desired (virtual) intersection theory by descending induction on $d$. The only new ingredient is the use of the virtual class; otherwise the computation is not significantly different from what MacDonald does in [@Mac]. We include a review of the DT/GW/PT correspondence for the residue invariants of $E$ in §\[section:correspondence\]. In principle, our computations could be used to verify this explicitly in degree $2[C]$, though we content ourselves with checking some special cases in §\[section:computations\]. Acknowledgements {#acknowledgements .unnumbered} ---------------- Much of this note is based on conversations with Matt Deland and Joe Ross which took place at Columbia in the spring of 2009. I thank Ben Weiland for helpful discussions and Davesh Maulik for spurring my interest in PT theory. This research was partially supported by an NSF Postdoctoral Fellowship. Conventions {#conventions .unnumbered} ----------- We work over the complex numbers throughout. All schemes considered are disjoint unions of schemes of finite type over ${\mathbb{C}}$. We write ${{\bf Sch}}$ for the category of such schemes. Set $T := \mathbb{G}_m = {\operatorname{Spec}}{\mathbb{C}}[t,t^{-1}]$. We will often consider an affine morphism $f : Y \to X$, in which case a $T$ action on $Y$ making $f$ equivariant for the trivial $T$ action on $X$ is a ${\mathbb{Z}}$-grading on $f_* {\mathcal O}_Y$ as an ${\mathcal O}_X$ algebra (so the corresponding direct sum decomposition is in the category of ${\mathcal O}_X$ modules). A $T$ equivariant ${\mathcal O}_Y$ module is then the same thing as a graded $f_* {\mathcal O}_Y$ module. 
Throughout, if $\pi : E \to X$ is a vector bundle on a scheme $X$, we use the same letter to denote its (locally free coherent) sheaf of sections, so $E = {\operatorname{Spec}}_X {\operatorname{Sym}}^* E^\lor$. The sheaf ${\operatorname{Sym}}^* E^\lor$ has an obvious ${\mathbb{Z}}$-grading supported in nonnegative degrees; we call the corresponding $T$ action the *scaling action*. If $Z \subseteq E$ is a subscheme, then by abuse of notation, we will also use $\pi$ to denote the restriction of $\pi$ to $Z$. When $T$ acts trivially on $X$, the $T$ equivariant cohomology ${\operatorname{H}}^*_T(X)$ is identified with ${\operatorname{H}}^*(X)[t]$; a $T$ equivariant vector bundle $V$ on $X$ decomposes into eigensubbundles $V=\oplus_n V_n$ where $T$ acts on $V_n$ through the composition of the character $\lambda \mapsto \lambda^n$ and the scaling action. For $n \neq 0$, the $T$ equivariant Euler class $e_T(V_n)$ is invertible in the localized equivariant cohomology ${\operatorname{H}}^*_T(X)_t = {\operatorname{H}}
--- abstract: | =0.6 cm [**Abstract**]{} We have studied numerically the shadows of Bonnor black dihole through the technique of backward ray-tracing. The presence of the magnetic dipole yields non-integrable photon motion, which sharply affects the shadow of the compact object. Our results show that there exists a critical value of the magnetic dipole parameter: as the parameter is less than this critical value, the shadow is a black disk, but as it is larger than the critical one, the shadow becomes a concave disk with eyebrows possessing a self-similar fractal structure. This behavior is very similar to that of the equal-mass and non-spinning Majumdar-Papapetrou binary black holes. However, we find that the two larger shadows and the smaller eyebrow-like shadows are joined together by the middle black zone for the Bonnor black dihole, which is different from the Majumdar-Papapetrou binary black hole spacetime, where they are disconnected. With the increase of the magnetic dipole parameter, the middle black zone connecting the main shadows and the eyebrow-like shadows becomes narrow. Our results show that the spacetime properties arising from the magnetic dipole yield interesting patterns for the shadow cast by Bonnor black dihole. author: - 'Mingzhi Wang$^{1}$, Songbai Chen$^{1,2,3}$[^1], Jiliang Jing$^{1,2,3}$ [^2]' title: Shadows of Bonnor black dihole by chaotic lensing --- =0.8 cm Introduction ============ A shadow is a two-dimensional dark region in the observer’s sky corresponding to light rays that fall into an event horizon when propagated backwards in time. It is shown that the shape and size of the shadow carry characteristic information about the geometry around the celestial body [@sha1; @sha2; @sha3], which means that the shadow can be regarded as a useful tool to probe the nature of the celestial body and to further test various theories of gravity. 
The investigations [@sha2; @sha3] indicate that the shadow is a perfect disk for a Schwarzschild black hole and that it changes into an elongated silhouette for a rotating black hole due to its dragging effect. The cusp silhouette of the shadow is found in the spacetime of a Kerr black hole with Proca hair [@fpos2] and of a Konoplya-Zhidenko rotating non-Kerr black hole [@sb10] when the black hole parameters lie in a certain range. Moreover, the shadows of black holes with other characterizing parameters have been studied recently [@sha4; @sha5; @sha6; @sha7; @sha9; @sha10; @sha11; @sha12; @sha13; @sha14; @sha14a; @sha15; @sha16; @sb1; @sha17; @sha19; @shan1] (for details, see also the review [@shan1add]), and these studies indicate that such parameters bring richer silhouettes to the shadows cast by black holes. However, most of the above investigations have focused only on cases where the null geodesics are variable-separable and the corresponding dynamical systems are integrable. When the dynamical systems are non-integrable, the motion of photons can be chaotic, which can lead to novel features of the black hole shadow. Recently, it has been shown that due to such chaotic lensing, multi-disconnected shadows with fractal structures emerge for a Kerr black hole with scalar hair [@sw; @swo; @astro; @chaotic] or a binary black hole system [@binary; @sha18]. Further analysis shows that these novel patterns with fractal structures in shadows are determined by the non-planar bound orbits [@fpos2] and the invariant phase space structures [@BI] of the photon motion in the black hole spacetimes. Similar analyses have also been carried out for ultra-compact objects [@bstar1; @bstar2]. It is well known that there exist enormous magnetic fields around large astrophysical black holes, especially in the nuclei of galaxies [@Bm1; @Bm2; @Bm3; @Bm4]. These strong magnetic fields could be induced by currents in accretion disks near supermassive galactic black holes. 
Building on strong magnetic fields, there are many current theoretical models accounting for black hole jets, which are among the most spectacular astronomical events in the sky [@Blandford1; @Blandford2; @Punsly]. In general relativity, one of the most important solutions with magnetic fields is the Ernst solution [@Ernst], which describes the gravity of a black hole immersed in an external magnetic field. Interestingly, for an Ernst black hole, the polar circumference of the event horizon increases with the magnetic field, while the equatorial circumference decreases. Bonnor’s metric [@mmd1] is another important solution of the Einstein-Maxwell field equations, which describes a static massive object with a dipole magnetic field in which two static extremal magnetic black holes with charges of opposite signs are situated symmetrically on the symmetry axis. For the Bonnor black dihole spacetime, the area of the horizon is finite, but the proper circumference of the horizon surface is zero. In particular, it is not a member of the Weyl electromagnetic class and it does not reduce to the Schwarzschild spacetime in the limit of vanishing magnetic dipole. The new properties of the spacetime structure originating from the magnetic dipole lead to chaos in the motion of particles [@mmd; @mmd10; @bbon1]. Since the shadow of a black hole is determined by the propagation of light rays in the spacetime, it is expected that the chaotic lensing caused by this spacetime structure will yield new effects on the black hole shadow. Therefore, in this paper, we focus on studying the shadow of Bonnor black dihole [@mmd1] and probe the effect of the magnetic dipole parameter on the black hole shadow. The paper is organized as follows. In Sec. II, we briefly review the metric of Bonnor black dihole and then analyze the propagation of light rays in this background. In Sec. III, we investigate the shadows cast by Bonnor black dihole. In Sec. 
IV, we discuss the invariant phase space structures of the photon motion and the formation of the shadow cast by Bonnor black dihole. Finally, we present a summary. Spacetime of Bonnor black dihole and null geodesics =================================================== Let us now briefly review the spacetime of Bonnor black dihole. In the 1960s, Bonnor obtained an exact solution [@mmd1] of the Einstein-Maxwell equations which describes a static massive source carrying a magnetic dipole. In the standard coordinates, the metric of this spacetime has the form [@mmd1] $$\begin{aligned} \label{xy} ds^{2}= -\bigg(\frac{P}{Y}\bigg)^{2}dt^{2}+\frac{P^{2}Y^{2}}{Q^{3}Z}(dr^{2}+Zd\theta^{2}) +\frac{Y^{2}Z\sin^{2}\theta}{P^{2}}d\phi^{2},\end{aligned}$$ where $$P=r^{2}-2mr-b^{2}\cos^{2}\theta,\;\;Q=(r-m)^{2}-(m^{2}+b^{2})\cos^{2}\theta, \;\;Y=r^{2}-b^{2}\cos^{2}\theta,\;\;Z=r^{2}-2mr-b^{2}.$$ The corresponding vector potential $A_{\mu}$ is given by $$\begin{aligned} A_{\mu}= (0,0,0,\frac{2mbr\sin^{2}\theta}{P}),\end{aligned}$$ where $\mu=0,1,2,3$ correspond to the components of $A_{\mu}$ associated with the coordinates $t, r, \theta, \phi$, respectively. It is a static axially symmetric solution characterized by two independent parameters $m$ and $b$, which are related to the total mass of Bonnor black dihole $M$ as $M=2m$ and to the magnetic dipole moment $\mu$ as $\mu=2mb$. Obviously, this spacetime is asymptotically flat since, as the polar coordinate $r$ approaches infinity, the metric tends to the Minkowski one. The event horizon of the spacetime (\[xy\]) is the null hypersurface $f$ satisfying $$\begin{aligned} g^{\mu\nu}\frac{\partial f}{\partial x^{\mu}}\frac{\partial f}{\partial x^{\nu}}=0,\end{aligned}$$ which yields $$\begin{aligned} r^{2}-2mr-b^{2}=0.\end{aligned}$$ It is obvious that there exists only one horizon and the corresponding horizon radius is $r_h=m+\sqrt{m^2+b^2}$. 
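The horizon condition is simple enough to check numerically; the following minimal sketch (ours, not part of the paper) verifies for several parameter choices that $r_h=m+\sqrt{m^2+b^2}$ solves $Z=0$ and that $g_{\phi\phi}\propto Z$ vanishes there:

```python
import math

def metric_funcs(r, theta, m, b):
    """The functions P, Q, Y, Z of the Bonnor metric (names as in the text)."""
    c2 = math.cos(theta) ** 2
    P = r * r - 2.0 * m * r - b * b * c2
    Q = (r - m) ** 2 - (m * m + b * b) * c2
    Y = r * r - b * b * c2
    Z = r * r - 2.0 * m * r - b * b
    return P, Q, Y, Z

def horizon_radius(m, b):
    # positive root of Z = r^2 - 2 m r - b^2 = 0
    return m + math.sqrt(m * m + b * b)

for m, b in [(1.0, 0.5), (1.0, 2.0), (2.0, 0.1)]:
    rh = horizon_radius(m, b)
    P, Q, Y, Z = metric_funcs(rh, 0.7, m, b)
    # Z vanishes on the horizon, hence g_phiphi = Y^2 Z sin^2(theta)/P^2
    # does too, while rh itself stays finite.
    g_phiphi = Y * Y * Z * math.sin(0.7) ** 2 / (P * P)
```

On the horizon $P=b^2\sin^2\theta>0$ away from the axis, so the expression for $g_{\phi\phi}$ is well defined there.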
The area of the horizon is $\mathcal{A}=16\pi m^2r^2_h/(m^2+b^2)$, but the proper circumference of the horizon surface is zero since $g_{\phi\phi}=0$ on the horizon. This implies that the $Z=0$ surface is not a regular horizon, since there exist conical singularities at $r=r_h$. The singularity along the segment $r=r_h$ can be eliminated by selecting a proper period $\Delta\phi=2\pi[b^2/(m^2+b^2)]^2$, but such a choice yields a conical deficit running along the axes $\theta=0, \;\pi$, from the endpoints of the dipole to infinity
--- abstract: 'We formulate the necessary and sufficient conditions for the existence of a pair of maximally incompatible two-outcome measurements in a finite dimensional General Probabilistic Theory. The conditions are on the geometry of the state space; they require the existence of two pairs of parallel exposed faces satisfying an additional condition on their intersections. We introduce the notion of a discrimination measurement and show that the conditions for a pair of two-outcome measurements to be maximally incompatible are equivalent to requiring that a (potential, yet non-existing) joint measurement of the maximally incompatible measurements would have to discriminate affinely dependent points. We present several examples to demonstrate our results.' author: - Anna Jenčová - Martin Plávala bibliography: - 'citations.bib' title: 'Conditions on the existence of maximally incompatible two-outcome measurements in General Probabilistic Theory' --- Introduction ============ General Probabilistic Theories have recently gained a lot of attention. It has been found that several non-classical effects that we know from Quantum Mechanics, such as steering and Bell nonlocality [@WisemanJonesDoherty-nonlocal], appear in most General Probabilistic Theories. Moreover, it was shown that one can even violate the bounds we know from Quantum Mechanics. In finite dimensional Quantum Mechanics the minimal degree of compatibility of measurements is bounded below by a dimension-dependent constant [@HeinosaariSchultzToigoZiman-maxInc], while a General Probabilistic Theory may admit pairs of maximally incompatible two-outcome measurements [@BuschHeinosaariSchultzStevens-compatibility], i.e. two-outcome measurements whose degree of compatibility attains the minimal value $\frac{1}{2}$. In the present article we formulate necessary and sufficient conditions for a pair of maximally incompatible measurements to exist in a given General Probabilistic Theory. 
The conditions constrain the possible geometry of the state space. We also introduce the notion of a discrimination two-outcome measurement and show how the concept of discrimination measurements is connected to maximally incompatible measurements. Our results are demonstrated on several examples. In particular, it is shown that maximally incompatible measurements exist for quantum channels. A somewhat different notion of compatibility of measurements on quantum channels and combs has recently been studied in [@SedlakReitznerChiribellaZiman-compatibility], where similar results were found. The article is organized as follows: in Sec. \[sec:preliminary\] we provide a quick review of General Probabilistic Theory and of the notation we will use, in Sec. \[sec:meas\] we introduce the two-outcome measurements, and in Sec. \[sec:degcom\] we introduce the degree of compatibility and the linear program for compatibility of two-outcome measurements. In Sec. \[sec:maxInc\] we formulate and prove the necessary and sufficient conditions for maximally incompatible two-outcome measurements to exist. In Sec. \[sec:disc\] we introduce the concept of a discrimination measurement and we show that two-outcome measurements are maximally incompatible if and only if their joint measurement would have to discriminate affinely dependent points, which is impossible. Structure of General Probabilistic Theory {#sec:preliminary} ========================================= General Probabilistic Theories form a general framework that provides a unified description of all physical systems known today. We will present the standard definition of a finite dimensional General Probabilistic Theory in a quick review, just to settle the notation. The central notion is that of a state space, that is, a compact convex subset $K\subset \mathbb{R}^n$, representing the set of states of some system. The convex combinations are interpreted operationally, see e.g. [@HeinosaariZiman-MLQT Part 2]. 
Let $A(K)$ denote the ordered linear space of affine functions $f:K \to \mathbb{R}$. The order on $A(K)$ is introduced in a natural way; let $f, g \in A(K)$ then $f \geq g$ if and only if $f(x) \geq g(x)$ for all $x \in K$. Let $A(K)^+$ be the positive cone, that is the generating, pointed and convex cone of positive affine functions on $K$. We denote the constant functions by the value they attain, i.e. $1(x)=1$ for all $x \in K$. Let $E(K) = \{ f \in A(K): 1 \geq f \geq 0 \}$ denote the set of effects on $K$. Let $A(K)^*$ be the dual to $A(K)$ and let $\< \psi, f \>$ denote the value of the functional $\psi\in A(K)^*$ on $f \in A(K)$. Using the cone $A(K)^+$ we define the dual order on $A(K)^*$ as follows: let $\psi_1, \psi_2 \in A(K)^*$, then $\psi_1 \geq \psi_2$ if and only if $\< \psi_1, f \> \geq \< \psi_2, f \>$ for every $f \in A(K)^+$. The dual positive cone is $A(K)^{*+} = \{ \psi \in A(K)^*: \psi \geq 0 \}$ where $0$ denotes the zero functional, $\<0, f \> = 0$ for all $f \in A(K)$. Let $x \in K$, then $\phi_x$ will denote the positive and normed functional such that $\< \phi_{x}, f \> = f(x)$. It can be seen that for every functional $\psi \in A(K)^{*+}$ such that $\< \psi, 1 \> = 1$ there is some $x \in K$ such that $\psi = \phi_x$, see [@AsimowEllis Theorem 4.3]. This implies that the set $\states_K = \{ \phi_x: x \in K \}$ is a base of the cone $A(K)^{*+}$, i.e. for every $\psi \in A(K)^{*+}$, $\psi \neq 0$ there is a unique $x \in K$ and unique $\alpha \in \mathbb{R}$, $\alpha > 0$ such that $\psi = \alpha \phi_x$. For any $X \subset \mathbb{R}^n$, $\conv(X)$ will denote the convex hull of $X$ and $\aff(X)$ the affine hull of $X$. Measurements in General Probabilistic Theory {#sec:meas} ============================================ Let $K\subset \mathbb{R}^n$ be a state space. 
A measurement on $K$ is an affine map $m: K \to \Pe(\Omega)$, where $\Omega$ is the sample space, that is, a measurable space representing the set of all possible measurement outcomes, and $\Pe(\Omega)$ is the set of all probability measures on $\Omega$. We will be mostly interested in two-outcome measurements, i.e. measurements with the sample space $\Omega = \{ \omega_1, \omega_2 \}$. Let $\mu \in \Pe ( \Omega )$, then $\mu = \lambda \delta_1 + (1-\lambda) \delta_2$ for some $\lambda \in [0, 1] \subset \mathbb{R}$, where $\delta_1=\delta_{\omega_1}$, $\delta_2=\delta_{\omega_2}$ are the Dirac measures. This shows that the general form of a two-outcome measurement $m_f$ is $$m_f = f \delta_1 + (1-f) \delta_2$$ for some $f \in E(K)$. Strictly speaking, this should be written as $m_f = f \otimes \delta_1 + (1-f) \otimes \delta_2$, since any map $m_f: K \to \Pe(\Omega)$ can be identified with a point of $A(K)^+ \otimes \Pe(\Omega)$, see e.g. [@Ryan-tensProd]. The interpretation is that a point $x \in K$ is mapped to the probability measure $m_f(x) = f(x) \delta_1 + (1-f(x)) \delta_2$, i.e. $f(x)$ corresponds to the probability of measuring the outcome $\omega_1$. Let $f, g \in E(K)$ and let $m_f$, $m_g$ be the corresponding two-outcome measurements. We will keep this notation throughout the paper. The two-outcome measurements $m_f$, $m_g$ are compatible if and only if there exists an effect $p \in E(K)$ such that $$\begin{aligned} f &\geq p, \label{eq:meas-cond-1} \\ g &\geq p, \label{eq:meas-cond-2} \\ 1 + p &\geq f + g, \label{eq:meas-cond-3}\end{aligned}$$ see [@Plavala-simplex] for a derivation of these conditions from the standard conditions that can be found e.g. in [@Holevo-QT Chapter 2]. \[prop:meas-postProc\] $m_f$, $m_g$ are compatible if and only if $m_{(1-f)}$, $m_g$ are compatible
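Since the compatibility conditions \[eq:meas-cond-1\]–\[eq:meas-cond-3\] are affine in $p$, compatibility over a polytopal state space reduces to a finite linear feasibility problem: an affine inequality holds on $K$ exactly when it holds at the vertices. The following sketch is our illustration, not code from the paper (the function name and the SciPy dependency are our choices); it tests the standard example of the square state space (a "gbit") with effects $f(x,y)=x$, $g(x,y)=y$, and the classical triangle, where every pair of effects is compatible.

```python
import numpy as np
from scipy.optimize import linprog

def compatible(vertices, f, g):
    """Decide whether effects f, g on K = conv(vertices) admit an affine
    p with 0 <= p, p <= f, p <= g and f + g - 1 <= p (checked at the
    vertices, which suffices for affine inequalities on a polytope)."""
    dim = 1 + len(vertices[0])          # p(x) = p0 + p1*x1 + ... (affine)
    rows, rhs = [], []
    for v in vertices:
        phi = np.array([1.0, *v])       # row evaluating p at vertex v
        fv, gv = f(*v), g(*v)
        rows += [-phi, phi, phi, -phi]  # -p<=0, p<=f, p<=g, -p<=1-f-g
        rhs += [0.0, fv, gv, 1.0 - fv - gv]
    res = linprog(np.zeros(dim), A_ub=np.array(rows), b_ub=np.array(rhs),
                  bounds=[(None, None)] * dim)   # pure feasibility test
    return res.status == 0              # 0: feasible, 2: infeasible

square = [(0, 0), (1, 0), (0, 1), (1, 1)]   # 'gbit' state space
triangle = [(0, 0), (1, 0), (0, 1)]         # classical simplex
f = lambda x, y: x
g = lambda x, y: y
print(compatible(square, f, g))    # False: the gbit pair is incompatible
print(compatible(triangle, f, g))  # True: effects on a simplex are compatible
```

On the square the vertex conditions force $p(1,1)=1$ and $p=0$ at the other three vertices, which clashes with the affinity of $p$, namely $p(1,1)=p(1,0)+p(0,1)-p(0,0)$; this is the geometric obstruction the linear program detects.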
--- abstract: 'Between the launch of the *GGS Wind* spacecraft in 1994 November and the end of 2010, the Konus-*Wind* experiment detected 296 short-duration gamma-ray bursts (including 23 bursts which can be classified as short bursts with extended emission). During this period, the IPN consisted of up to eleven spacecraft, and using triangulation, the localizations of 271 bursts were obtained. We present the most comprehensive IPN localization data on these events. The short burst detection rate, $\sim$18 per year, exceeds that of many individual experiments.' author: - 'V. D. Pal’shin, K. Hurley, D. S. Svinkin, R. L. Aptekar, S. V. Golenetskii, D. D. Frederiks, E. P. Mazets, P. P. Oleynik, M. V. Ulanov, T. Cline, I. G. Mitrofanov, D. V. Golovin, A. S. Kozyrev, M. L. Litvak, A. B. Sanin, W. Boynton, C. Fellows, K. Harshman, J. Trombka, T. McClanahan, R. Starr, J. Goldsten, R. Gold, A. Rau, A. von Kienlin, V. Savchenko, D. M. Smith, W. Hajdas, S. D. Barthelmy, J. Cummings, N. Gehrels, H. Krimm, D. Palmer, K. Yamaoka, M. Ohno, Y. Fukazawa, Y. Hanabata, T. Takahashi, M. Tashiro, Y. Terada, T. Murakami, K. Makishima, M. S. Briggs, R. M. Kippen, C. Kouveliotou, C. Meegan, G. Fishman, V. Connaughton, M. Boër, C. Guidorzi, F. Frontera, E. Montanari, F. Rossi, M. Feroci, L. Amati, L. Nicastro, M. Orlandini, E. Del Monte, E. Costa, I. Donnarumma, Y. Evangelista, I. Lapshov, F. Lazzarotto, L. Pacciani, M. Rapisarda, P. Soffitta, G. Di Cocco, F. Fuschino, M. Galli, C. Labanti, M. Marisaldi, J.-L. Atteia, R. Vanderspek, G. Ricker' title: 'IPN localizations of Konus short gamma-ray bursts' --- INTRODUCTION ============ Between 1994 November and 2010 December, the Konus gamma-ray spectrometer aboard the *Global Geospace Science Wind* spacecraft detected 1989 cosmic gamma-ray bursts (GRBs) in the triggered mode, 296 of which were classified as short-duration gamma-ray bursts or short bursts with extended emission (EE). 
The classification was made based on the duration distribution of an unbiased sample of 1168 Konus-*Wind* GRBs. The instrument trigger criteria cause undersampling of faint short bursts relative to faint long bursts, so this subsample of fairly bright (in terms of peak count rate in Konus-*Wind*’s trigger energy band) bursts has been chosen for the purpose of classification. Taking into account other characteristics of these short-duration bursts, such as hardness ratio and spectral lag, shows that about 16% of them may in fact be Type II (collapsar origin), or at least that their classification as Type I (merger origin) is questionable (see @zhang09 for more information on the Type I/II classification scheme). Nevertheless, we consider here all 296 Konus-*Wind* short-duration and possible short-duration with EE bursts (hereafter we refer to them simply as Konus short bursts). Full details of the Konus-*Wind* GRB classification are given in Svinkin et al. (2013, in preparation). Every short burst detected by Konus was searched for in the data of the spacecraft comprising the interplanetary network (IPN). We found that 271 ($\sim$92%) of the Konus-*Wind* short GRBs were observed by at least one other IPN spacecraft, enabling their localizations to be constrained by triangulation. The IPN contained between 3 and 11 spacecraft during this period. 
They were, in addition to Konus-*Wind*: *Ulysses* (the solar X-ray/cosmic gamma-ray burst instrument, GRB), in heliocentric orbit at distances between 670 and 3180 lt-s from Earth [@hurley92]; the *Near-Earth Asteroid Rendezvous* mission (NEAR) [the remote sensing X-ray/Gamma-Ray Spectrometer, XGRS; @trombka99], at distances up to 1300 lt-s from Earth; *Mars Odyssey* [the Gamma-Ray Spectrometer (GRS) that includes two detectors with GRB detection capabilities, the gamma sensor head (GSH), and the High Energy Neutron Detector (HEND); @boynton04; @hurley06], launched in 2001 April and in orbit around Mars starting in 2001 October, up to 1250 lt-s from Earth [@saunders04]; *Mercury Surface, Space Environment, Geochemistry, and Ranging* mission (*MESSENGER*) [the Gamma-Ray and Neutron Spectrometer, GRNS; @goldsten07], en route to Mercury (in Mercury orbit since March 2011), launched in 2004 August, but commencing full operation only in 2007, up to $\sim$700 lt-s from Earth [@gold01; @solomon07]; the *International Gamma-Ray Astrophysics Laboratory* (*INTEGRAL*) [the anti-coincidence shield ACS of the spectrometer SPI, SPI-ACS; @rau05], in an eccentric Earth orbit at up to 0.5 lt-s from Earth; and in low Earth orbits: the *Compton Gamma-Ray Observatory* [the Burst and Transient Source Experiment, BATSE; @fishman92]; *BeppoSAX* [the Gamma-Ray Burst Monitor, GRBM; @frontera97; @feroci97]; the *Ramaty High Energy Solar Spectroscopic Imager* [*RHESSI*; @lin02; @smith02]; the *High Energy Transient Explorer* (*HETE-2*) [the French Gamma-Ray Telescope, FREGATE; @ricker03; @atteia03]; the *Swift* mission [the Burst Alert Telescope, BAT; @barthelmy05; @gehrels04]; the *Suzaku* mission [the Wide-band All-sky Monitor, WAM; @yamaoka09; @takahashi07]; *AGILE* (the Mini-Calorimeter, MCAL, and Super-AGILE) [@tavani09]; the *Fermi* mission [the Gamma-Ray Burst Monitor, GBM; @meegan09], the *Coronas-F* solar observatory (Helicon) [@oraevskii02], the *Cosmos 2326* [Konus-A; @aptekar98], 
*Cosmos 2367* (Konus-A2), and *Cosmos 2421* (Konus-A3) spacecraft, and the *Coronas-Photon* solar observatory (Konus-RF). At least two other spacecraft detected GRBs during this period, although they were not used for triangulation and therefore were not, strictly speaking, part of the IPN. They are the *Defense Meteorological Satellite Program* (DMSP) [@terrell96; @terrell98; @terrell04] and the *Stretched Rohini Satellite Series* (SROSS) [@marar94]. Here we present the localization data obtained by the IPN for the 271 Konus-*Wind* short bursts observed by at least one other IPN spacecraft. In a companion paper, we present the durations, energy spectra, peak fluxes, and fluences of these bursts. OBSERVATIONS ============ For each Konus short gamma-ray burst, a search was initiated in the data of the IPN spacecraft. For the near-Earth spacecraft and *INTEGRAL*, the search window was centered on the Konus-*Wind* trigger time, and its duration was somewhat greater than the light-travel time over the *Wind* distance from Earth. For the spacecraft at interplanetary distances, the search window was twice the light-travel time to the spacecraft if the event arrival direction was unknown, which was the case for most events. If the arrival direction was known, even coarsely, the search window was defined by calculating the expected arrival time at the spacecraft, and searching in a window around it. The mission timelines and the number of Konus-*Wind* short GRBs observed by each mission/instrument are shown in Figure \[Fig\_TimeLines\]. In this study, the largest number of bursts detected by an IPN instrument, after Konus, was 139, detected by *INTEGRAL* (SPI-ACS). Table \[Table\_Basic\] lists the 271 Konus-*Wind* short GRBs observed by the IPN. The first column gives the burst designation, ‘`YYYYMMDD
--- abstract: 'Recently, some quantum algorithms have been implemented by quantum adiabatic evolutions. In this paper, we discuss the exact relation between the running time and the distance between the initial state and the final state for a class of quantum adiabatic evolutions. We show that this relation can be generalized to the case of mixed states.' author: - Zhaohui Wei and Mingsheng Ying title: A relation between fidelity and quantum adiabatic evolution --- Implementing quantum algorithms via quantum adiabatic evolutions is a novel paradigm for the design of quantum algorithms, which was proposed by Farhi et al. [@FGGS00]. In a quantum adiabatic algorithm, the evolution of the quantum register is governed by a Hamiltonian that varies continuously and slowly. At the beginning, the state of the system is the ground state of the initial Hamiltonian. If we encode the solution of the algorithm in the ground state of the final Hamiltonian and if the Hamiltonian of the system evolves slowly enough, the quantum adiabatic theorem guarantees that the final state of the system will differ from the ground state of the final Hamiltonian by a negligible amount. Thus, after the quantum adiabatic evolution, we can obtain the solution with high probability by measuring the final state. For example, the quantum search algorithm proposed by Grover [@GROVER97] has been implemented by quantum adiabatic evolution in [@RC02]. Recently, this new paradigm for quantum computation has been applied to some other interesting and important problems [@TH03; @TDK01; @FGG01]. For example, T. D. Kieu has proposed a quantum adiabatic algorithm for Hilbert’s tenth problem [@TDK01], even though this problem is known to be mathematically noncomputable. Usually, after the design of a quantum adiabatic evolution, estimating its running time is not easy. In [@RC02], Roland et al. 
introduced a strategy to design a class of quantum local adiabatic evolutions whose performance can be estimated accurately. Using this strategy, Roland et al. reproduced the quantum search algorithm, which is as good as Grover’s algorithm. For the convenience of the reader, we briefly recall the local adiabatic algorithm. Suppose $H_0$ and $H_T$ are the initial and the final Hamiltonians of the system; we choose them as $$H_0=I-|\alpha\rangle\langle\alpha|,$$ and $$H_T=I-|\beta\rangle\langle\beta|,$$ where $|\alpha\rangle$ is the initial state of the system and $|\beta\rangle$ is the final state that encodes the solution. Then we let the system evolve under the following time dependent Hamiltonian: $$H(t)=(1-s)H_0+sH_T,$$ where $s=s(t)$ is a monotonic function with $s(0)=0 $ and $s(T)=1$ ($T$ is the running time of the evolution). Let $|E_0,t\rangle$ and $|E_1,t\rangle$ be the ground state and the first excited state of the Hamiltonian at time $t$, and let $E_0(t)$ and $E_1(t)$ be the corresponding eigenvalues. The adiabatic theorem [@LIS55] shows that we have $$|\langle E_0,T|\psi(T)\rangle|^{2}\geq1-\varepsilon^2,$$ provided that $$\frac{D_{max}}{g_{min}^2}\leq\varepsilon,\ \ \ \ 0<\varepsilon\ll1,$$ where $g_{min}$ is the minimum gap between $E_0(t)$ and $E_1(t)$, $$g_{min}=\min_{0\leq t \leq T}[E_1(t)-E_0(t)],$$ and $D_{max}$ is a measure of the evolution rate of the Hamiltonian, $$D_{max}=\max_{0\leq t \leq T}|\langle\frac{dH}{dt}\rangle_{1,0}|=\max_{0\leq t \leq T}|\langle E_1,t|\frac{dH}{dt}|E_0,t\rangle|.$$ In the local adiabatic evolution of [@RC02], $$|\alpha\rangle=\frac{1}{\sqrt{N}}\sum\limits_{i=1}^{N}{|i\rangle}, \ |\beta\rangle=|m\rangle ,$$ where $N$ is the size of the database and $m$ is the solution of the search problem. To evaluate the running time of the adiabatic evolution, Roland and Cerf calculated accurately the gap $g_{min}$ in Eq. (2) and only estimated the quantity $D_{max}$ in Eq. 
(3) using the bound $$|\langle\frac{dH}{dt}\rangle_{1,0}|\leq |\frac{ds}{dt}|.$$ For evaluating the performance of this algorithm this is enough, because calculating the quantity $D_{max}$ in (7) exactly cannot improve the result much. However, in this paper we will take into account all the related quantities. Later we will find that this results in a simple and intrinsic relation between the running time of the adiabatic evolution and the distance between the initial and final states. In this paper, we will choose fidelity, one of the most popular distance measures in the literature, as the measure of how hard it is to evolve from one state to another using adiabatic evolutions. The fidelity of states $\rho$ and $\sigma$ is defined to be $$F(\rho,\sigma)=tr\sqrt{\rho^{1/2}\sigma\rho^{1/2}}.$$ Although fidelity is not a metric, its modified version $$A(\rho,\sigma)=\arccos{F(\rho,\sigma)}$$ is easily proved to be a metric [@Nielsen00]. Another important metric for the distance between quantum states we will use in this paper is the trace distance, defined as $$D(\rho,\sigma)=\frac{1}{2}tr|\rho-\sigma|.$$ Now we can state the main result as the following theorem. Suppose $|\alpha\rangle$ and $|\beta\rangle$ are two states of a quantum system. We can make the system evolve from the initial state $|\alpha\rangle$ to the final state $|\beta\rangle$ by a quantum adiabatic evolution, if we set the initial Hamiltonian $H_0$ and the final Hamiltonian $H_T$ of the adiabatic evolution as follows: $$H_0=I-|\alpha\rangle\langle\alpha|,$$ $$H_T=I-|\beta\rangle\langle\beta|.$$ To succeed with probability at least $1-\varepsilon^2$, the minimal running time that the adiabatic evolution requires is $$T(|\alpha\rangle,|\beta\rangle)=\frac{1}{\varepsilon}\cdot\tan{(\arccos{F(|\alpha\rangle,|\beta\rangle)})},$$ where $$F(|\alpha\rangle,|\beta\rangle)=|\langle\alpha|\beta\rangle|$$ is the fidelity between $|\alpha\rangle$ and $|\beta\rangle$. 
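The running-time formula is easy to check numerically. The sketch below is our illustration (the helper name is ours): it evaluates $T=\tan(\arccos F)/\varepsilon$ and confirms that for the Grover-type choice, where $F=1/\sqrt{N}$, it reduces to $T=\sqrt{N-1}/\varepsilon$, the quadratic speed-up of the local adiabatic search.

```python
import math

def adiabatic_time(fidelity, eps):
    """Minimal running time T = tan(arccos F) / eps for evolving between
    pure states with fidelity F = |<alpha|beta>| (success prob >= 1-eps^2)."""
    return math.tan(math.acos(fidelity)) / eps

# Grover-type search: |alpha> uniform over N items, |beta> a basis state,
# so F = 1/sqrt(N) and tan(arccos F) = sqrt(N - 1).
N, eps = 100, 0.01
print(adiabatic_time(1 / math.sqrt(N), eps))   # sqrt(99)/0.01, O(sqrt(N))
print(adiabatic_time(1.0, eps))                # identical states: T = 0
```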
[*Proof.*]{} Let $$H(s)=(1-s)(I-|\alpha\rangle\langle\alpha|)+s(I-|\beta\rangle\langle\beta|),$$ where $s=s(t)$ is a function of $t$ as described above. It is not easy to calculate the eigenvalues of $H(s)$ in the computational basis. We use the following orthonormal basis $\{|i\rangle, 1\leq i\leq N\}$ to eliminate the difficulty: $$|1\rangle=|\alpha\rangle,$$ $$|2\rangle=\frac{1}{c}(|\beta\rangle-\langle\alpha|\beta\rangle|\alpha\rangle),$$ where $c=||\beta\rangle-\langle\alpha|\beta\rangle|\alpha\rangle|=\sqrt{1-|\langle\alpha|\beta\rangle|^{2}}$. We do not need to specify $|i\rangle$ for $i=3,4,...,N$. Then we have $$|\beta\rangle=c|2\rangle+\langle\alpha|\beta\rangle|1\rangle.$$ Now it is not difficult to check that, in the new orthonormal basis, $H(s)$ has the form $$H(s)= \begin{pmatrix} -s|\langle\alpha|\beta\rangle|^2+s & -sc\langle\alpha|\beta\rangle \\ -sc\langle\alpha|\beta\rangle^{*} & -sc^2+1\\ & & I_{(N-2)\times(N-2)}\\ \end{pmatrix},$$ where the empty spaces of the matrix are all zeroes. Letting $a=|\langle \alpha|\beta\rangle|$, it is easy to obtain the two lowest eigenvalues of $H(s)$, $$E_i(t)=\frac{1}{2}(1\pm\sqrt{1-4(1-a^2)s(1-s)}), \ i=0,1,$$ and the two corresponding eigenvectors $$|E_i,t\rangle=\frac{1}{\sqrt{1+y_i^2}}(|1\rangle+y_i|2\rangle), \ i=0,1,$$ where $$y_i=\frac{\sqrt{1-a^2}}{a}-\frac{E_i(t)}{sa\sqrt{1-a^2}} \ \ (s\neq0).$$ Thus, we get $g(s)$: $$g(s)=\sqrt{1-4(1-a^2)s(1-s)}.$$
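The spectrum derived above is easy to verify directly. The following sketch (ours, for illustration) builds $H(s)$ for a uniform $|\alpha\rangle$ and a basis state $|\beta\rangle$ and compares the numerically computed gap with $g(s)=\sqrt{1-4(1-a^2)s(1-s)}$.

```python
import numpy as np

N, s = 8, 0.3
alpha = np.ones(N) / np.sqrt(N)       # uniform superposition |alpha>
beta = np.zeros(N)
beta[0] = 1.0                         # marked basis state |beta>
a = abs(alpha @ beta)                 # a = |<alpha|beta>| = 1/sqrt(N)

# H(s) = (1-s)(I - |alpha><alpha|) + s(I - |beta><beta|)
H = (1 - s) * (np.eye(N) - np.outer(alpha, alpha)) \
    + s * (np.eye(N) - np.outer(beta, beta))

E = np.linalg.eigvalsh(H)             # eigenvalues in ascending order
gap_numeric = E[1] - E[0]
gap_formula = np.sqrt(1 - 4 * (1 - a**2) * s * (1 - s))
print(gap_numeric, gap_formula)       # the two values agree
```

The two lowest eigenvalues match $E_{0,1}=\tfrac{1}{2}(1\mp g(s))$ and the remaining $N-2$ eigenvalues equal $1$, as the block form of $H(s)$ requires.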
--- abstract: | We investigate a double layer system with tight-binding hopping, intra-layer and inter-layer interactions, as well as a Josephson like coupling. We find that an antiferromagnetic spin polarization induces additional spin-triplet pairing (with $S_z =0$) to the singlet order parameter. This causes an undamped collective mode in the superconducting state below the particle-hole threshold, which is interpreted as a Goldstone excitation. author: - | Christian Helm, Franz Forsthofer, and Joachim Keller\ Institute for Theoretical Physics, University of Regensburg,\ D-93040 Regensburg, Germany date: 'submitted to J. of Low Temp. Phys.' title: Collective Spin Modes in Superconducting Double Layers --- PACS numbers: 71.45-d, 74.80.Dm, 74.50+r INTRODUCTION AND MODEL ====================== Collective density fluctuations in superconductors due to the breakdown of global gauge invariance are well known theoretically [@WuGriffin; @wir]. However, since these modes couple to charge oscillations, the long range Coulomb force usually pushes their energies up to the plasma frequency. One possibility to avoid the Coulomb interaction completely is to consider spin fluctuations instead of charge fluctuations between the layers. In the following we will show the existence of such a sharp, collective spin mode in the gap, which might have been observed [@Fong] in inelastic neutron scattering on Y-Ba-Cu-O. 
We consider an electronic double-layer system described by the Hamiltonian $H=H_0+H_S$: $$\begin{aligned} H_0 &=& \sum_{k\sigma} \epsilon_k (c^\dagger_{1k\sigma}c^{\phantom{\dagger}}_{1k\sigma} + c^\dagger_{2k\sigma}c^{\phantom{\dagger}}_{2k\sigma}) + t_k (c^\dagger_{2k\sigma} c^{\phantom{\dagger}}_{1k\sigma} + c^\dagger_{1k\sigma} c^{\phantom{\dagger}}_{2k\sigma}) \\ H_S &=& \small{\frac{1}{2}} \sum_{k k'q \sigma \sigma'} \sum_i {\phantom+} V_\parallel \, c^\dagger_{i k+q \sigma} c^\dagger_{i k'-q\sigma'} c^{\phantom\dagger}_{i k'\sigma'} c^{\phantom\dagger}_{i k \sigma} +V_\perp \, c^\dagger_{i k+q \sigma} c^\dagger_{j k'-q\sigma'} c^{\phantom\dagger}_{j k'\sigma'} c^{\phantom\dagger}_{i k \sigma} \nonumber \\ & &{\phantom{ {1\over 2} \sum_{k k'q \sigma \sigma'} } } +J\, ( c^\dagger_{i k+q \sigma} c^\dagger_{i k'-q\sigma'} c^{\phantom\dagger}_{j k'\sigma'} c^{\phantom\dagger}_{j k \sigma} +c^\dagger_{i k+q \sigma} c^\dagger_{j k'-q\sigma'} c^{\phantom\dagger}_{i k'\sigma'} c^{\phantom\dagger}_{j k \sigma} ) .\end{aligned}$$ Here $t_k$ describes a tight-binding coupling between the two layers $i=(1,2), j=3-i$, while $V_\parallel$ ($V_\perp$) are intra-(inter)-layer pairing interactions and the Josephson-like coupling $J$ describes the coherent transfer of two particles from one layer to the other. In a previous publication [@wir] we treated this model using the Nambu formalism including vertex corrections to calculate charge fluctuations between the layers. 
In this paper we are primarily interested in the calculation of correlation functions involving the operator $$S = \sum_{k} c^\dagger_{2k\uparrow} c^{\phantom\dagger}_{2k\uparrow} -c^\dagger_{2k\downarrow} c^{\phantom\dagger}_{2k\downarrow} -c^\dagger_{1k\uparrow} c^{\phantom\dagger}_{1k\uparrow} +c^\dagger_{1k\downarrow} c^{\phantom\dagger}_{1k\downarrow}$$ describing the difference of the spin polarization in the two layers and the operators coupling to it ($\Delta_{ij}^{\dagger} := c^{\dagger}_{i k \uparrow} c^{\dagger}_{j -k \downarrow}$) $$\begin{aligned} &\Phi_T = -i \sum_{k} \Delta_{21}^{\dagger} - \Delta_{12}^{\dagger} - \Delta_{21} + \Delta_{12}, \nonumber \\ &M = -i \sum_{k} c^\dagger_{2k\uparrow} c^{\phantom\dagger}_{1k\uparrow} -c^\dagger_{1k\uparrow} c^{\phantom\dagger}_{2k\uparrow} -c^\dagger_{2k\downarrow} c^{\phantom\dagger}_{1k\downarrow} +c^\dagger_{1k\downarrow} c^{\phantom\dagger}_{2k\downarrow} . %{\rm with} \Delta_{ij}^{\dagger} := c^{\dagger}_{i k \uparrow} % c^{\dagger}_{j -k \downarrow}. &\end{aligned}$$ The quantity $M$ corresponds to the spin current between the two layers. $\Phi_T$ and $A_T$ describe pairing in different layers in a spin-triplet state with total spin $S_z=0$ and are the real and imaginary part of the inter-layer triplet-pairing amplitude $\Delta_{\perp, T}:= \Delta_{12} - \Delta_{21}$ . To shorten the notation, we introduce $$P^{ij} := \sum_k \Psi_k^{\dagger} D^{ij} \Psi_k, \,\, D^{ij} := \sigma_i \otimes \tau_j, \,\,\, \Psi_k := ( c_{1 k \uparrow}, c_{1 -k \downarrow}^{\dagger}, c_{2 k \uparrow}, c_{2 -k \downarrow}^{\dagger} )^t$$ $\tau_i$ ($\sigma_i$) being the Pauli matrices in the Nambu or two-layer space, respectively (examples: $S = - P^{30}, A_T = - P^{22}, \Phi_T = -P^{21}$). ANALYTICAL RESULTS AND GOLDSTONE MODES ====================================== In general the correlation functions $\ll P^{ij},P^{lm}\gg$ have to be determined numerically. 
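The identifications quoted above, such as $S=-P^{30}$ and $\Phi_T=-P^{21}$, can be verified with an explicit matrix representation of the four fermionic modes at fixed $k$. The sketch below is our illustration, not code from the paper; it uses a Jordan-Wigner construction to build $\Psi_k$ and the matrices $D^{ij}=\sigma_i\otimes\tau_j$, then checks both identities.

```python
import numpy as np

# Jordan-Wigner matrices for the 4 fermionic modes at fixed k, ordered as
# (c_{1k up}, c_{1 -k down}, c_{2k up}, c_{2 -k down}).
I2, Z = np.eye(2), np.diag([1.0, -1.0])
lower = np.array([[0.0, 1.0], [0.0, 0.0]])      # single-mode annihilation

def c(j, n=4):
    ops = [Z] * j + [lower] + [I2] * (n - j - 1)
    out = np.array([[1.0]])
    for o in ops:
        out = np.kron(out, o)
    return out

dag = lambda A: A.conj().T
num = lambda j: dag(c(j)) @ c(j)                # number operator n_j

# Nambu-layer spinor Psi = (c_{1 up}, c^+_{1 down}, c_{2 up}, c^+_{2 down})
psi = [c(0), dag(c(1)), c(2), dag(c(3))]

pauli = [np.eye(2), np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]]), np.diag([1.0, -1.0])]

def P(i, j):
    D = np.kron(pauli[i], pauli[j])             # D^{ij} = sigma_i (x) tau_j
    return sum(D[a, b] * dag(psi[a]) @ psi[b]
               for a in range(4) for b in range(4))

# S = n_{2 up} - n_{2 down} - n_{1 up} + n_{1 down}
S = num(2) - num(3) - num(0) + num(1)
# Delta_{ij}^+ = c^+_{i up} c^+_{j down}; up modes are 0, 2; down modes 1, 3
Delta = lambda i, j: dag(c(2 * (i - 1))) @ dag(c(2 * j - 1))
Phi_T = -1j * (Delta(2, 1) - Delta(1, 2) - dag(Delta(2, 1)) + dag(Delta(1, 2)))

print(np.allclose(S, -P(3, 0)), np.allclose(Phi_T, -P(2, 1)))  # True True
```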
However, for constant hopping $t_k = t$ with $t, \omega \ll \Delta$ ($\Delta$ is the superconducting s-wave gap) and weak coupling the collective modes can be calculated analytically (for $\omega_S, \omega_0 \ll \Delta$) in the cases i (ii) of [*pure*]{} intra-(inter)-layer pairing. $$\label{correl} \renewcommand{\arraystretch}{1.7} \begin{array}{cccc} \mbox{We obtain}&\mbox{for} &\mbox{case (i)}& \mbox{case (ii)}\\ \ll S,S\gg &\approx& 4 N_0 \displaystyle\frac{(2t)^2}{\omega^2-\omega_S^2} &4 N_0 \displaystyle\frac{\omega_S^2}{\omega^2-\omega_S^2}\,, \\ \ll \Phi_T,S\gg &=& 0&4 i N_0 \displaystyle\frac{\omega_0^2}{\omega^2-\omega_S^2} \displaystyle \frac{\omega}{2\Delta} \, \displaystyle \frac{V_\perp -J}{2J} \,, \\ \ll A_T, S\gg &\approx& 4 N_0\displaystyle \frac{\omega_0^2}{\omega^2-\omega_S^2} \displaystyle\frac{t}{\Delta} \, \displaystyle \frac{V_\perp -J}{V_\parallel+V_\perp+2J} &0\,, \\ \ll M,S\gg &\approx& - 4i N_0 \displaystyle\frac{2t\omega}{\omega^2-\omega_S^2 } & - 4i N_0 \displaystyle\frac{2t\omega}{\omega^2-\omega_S^2}\,, \\ \omega_S^2 &=& (2t)^2+\omega_0^2&(2t)^2+\omega_0^2 \,, \\ \omega_0^2& = & \displaystyle\frac{-(V_\parallel-V_\perp+2J)} {(V
\ [**Theory of Gravity**]{} A. Barros\ [Departamento de Física, Universidade Federal de Roraima,\ 69310-270, Boa Vista, RR - Brazil.]{}\ and\ C. Romero[^1]\ Departamento de Física, Universidade Federal da Paraíba,\ Caixa Postal 5008, 58059-970, João Pessoa, PB - Brazil. [**Abstract**]{} [The gravitational field of a global monopole in the context of Brans-Dicke theory of gravity is investigated. The space-time and the scalar field generated by the monopole are obtained by solving the field equations in the weak field approximation. A comparison is made with the corresponding results predicted by General Relativity.]{} $ $ Monopoles resulting from the breaking of global $O(3)$ symmetry lie among those strange and exotic objects, like cosmic strings and domain walls [@1], generally referred to as topological defects of space-time, which may have existed due to phase transitions in the early universe. Like cosmic strings, the most studied of these structures, a global monopole has a gravitational field that exhibits some interesting properties, particularly those concerning the appearance of nontrivial space-time topologies. The solutions corresponding to the metrics generated by strings [@2], domain walls [@2] and global monopoles [@3] in the context of General Relativity were all first obtained using the weak field approximation. In a similar approach, the gravitational fields of cosmic strings and domain walls have been obtained in Brans-Dicke theory of gravity and in more general scalar-tensor theories of gravity [@4; @5]. In this paper we consider the global monopole and investigate its gravitational field by working out the Brans-Dicke equations, using once more the weak field approximation, essentially in the same way as in the previous works mentioned above. 
Let us consider the Brans-Dicke field equations in the form $$\begin{aligned} \label{2.1} R_{\mu \nu} = {8\pi\over \phi}\left[T_{\mu \nu} - {g_{\mu \nu}\over 2}\left({2\omega +2\over 2\omega +3}\right)T\right] + {\omega \over \phi^2}\phi_{,\mu}\phi_{,\nu} + {1\over \phi}\phi_{;\mu;\nu}\hspace{.2cm},\end{aligned}$$ $$\begin{aligned} \label{2.2} \Box \phi = {8\pi T\over 2\omega +3}\hspace{.2cm},\end{aligned}$$ where $\phi$ is the scalar field, $\omega$ is a dimensionless coupling constant and $T$ denotes the trace of $T^{\mu}_{\nu}$, the energy-momentum tensor of the matter fields. The energy-momentum tensor of a static global monopole can be approximated (outside the core) as [@3] $$\begin{aligned} \label{2.3} T^{\mu}_{\nu} = \hbox{diag}\left({\eta^2\over r^2}, {\eta^2\over r^2}, 0, 0\right),\end{aligned}$$ where $\eta$ is the energy scale of the symmetry breaking. Due to the spherical symmetry, we consider $\phi = \phi (r)$ and the line element $$\begin{aligned} \label{2.4} ds^2 = B(r)dt^2 - A(r)dr^2 - r^2(d\theta^2 + \sin^2\theta d\varphi^2).\end{aligned}$$ Substituting this into Eq. (\[2.1\]) and Eq. (\[2.2\]), and taking into account Eq. 
(\[2.3\]) we obtain the following set of equations: $$\begin{aligned} \label{2.5} {B''\over 2A} - {B'\over 4A}\left({A'\over A}+{B'\over B}\right)+{1\over r}{B'\over A} = {8\pi\over \phi}\left[{\eta^2B\over r^2(2\omega +3)}\right]-{B'\phi'\over 2A\phi}\hspace{.2cm},\end{aligned}$$ $$\begin{aligned} \label{2.6} -{B''\over 2B} + {B'\over 4B}\left({A'\over A}+{B'\over B}\right)+{1\over r}{A'\over A}=&-&{8\pi\over \phi}\left[{\eta^2A\over r^2(2\omega +3)}\right]+{\omega\phi'^2\over \phi^2}\nonumber \\ &+&{1\over \phi}\left[\phi''-{A'\over 2A}\phi'\right]\hspace{.2cm},\end{aligned}$$ $$\begin{aligned} \label{2.7} \phi'' + {1\over 2}\phi'\left[{B'\over B}-{A'\over A}+{4\over r}\right]= -{16\pi\over (2\omega +3)}\left({\eta^2\over r^2}\right)A\hspace{.2cm},\end{aligned}$$ $$\begin{aligned} \label{2.8} 1 - {r\over 2A}\left({B'\over B}-{A'\over A}\right)-{1\over A}= {8\pi\over \phi}\left[\eta^2\left({2\omega +2\over 2\omega+3}\right)\right]+{r\phi'\over A\phi}\hspace{.2cm},\end{aligned}$$ where prime denotes differentiation with respect to $r$. 
Now, dividing (\[2.5\]) and (\[2.6\]) by $B$ and $A$, respectively, and adding, we get $$\begin{aligned} \label{2.9} {\alpha\over r} = {\omega \phi'^2\over \phi^2} +{\phi''\over \phi}-{\phi'\over 2\phi}\alpha \hspace{.2cm},\end{aligned}$$ where we have put $$\begin{aligned} \label{2.10} \alpha = {A'\over A}+{B'\over B}.\end{aligned}$$ Then, equations (\[2.7\]) and (\[2.8\]) read $$\begin{aligned} \label{2.11} \phi'' + {\phi'\over 2}\left[\alpha - {2A'\over A} + {4\over r}\right] = -{16\pi\over 2\omega +3}\left({\eta^2\over r^2}\right)A\hspace{.2cm},\end{aligned}$$ $$\begin{aligned} \label{2.12} 1-{r\over 2A}\left(\alpha - {2A'\over A}\right)-{1\over A} = {8\pi\over \phi}\left[\eta^2\left({2\omega +2\over 2\omega+3}\right)\right] +{r\over A}{\phi'\over \phi}.\end{aligned}$$ At this stage, let us consider the weak field approximation and assume that\ $A(r)=1+f(r),\qquad B(r)=1+g(r)$ and $\phi(r)=\phi_o+\epsilon(r)$,\ where $\phi_o$ is a constant which may be identified with $G^{-1}$ when $\omega \rightarrow \infty$ ($G$ being the Newtonian gravitational constant), and the functions $f, g$ and ${\epsilon \over \phi_o}$ should be computed to first order in ${\eta^2\over \phi_o}$, with $|f(r)|,\hspace{.2cm}|g(r)|,\hspace{.2cm}\left|{\epsilon(r)\over \phi_o}\right| \ll 1$. In this approximation it is easy to see that $$\begin{aligned} {\phi'\over \phi}={\epsilon'\over \phi_o[1+\epsilon/\phi_o]}= {\epsilon'\over \phi_o}\hspace{.2cm}, \qquad{\phi''\over \phi} = {\epsilon''\over \phi_o[1+\epsilon/\phi_o]}= {\epsilon''\over \phi_o}\hspace{.2cm}, \nonumber\end{aligned}$$ $$\begin{aligned} {B'\over B} = {g'\over 1+g}= g', \qquad {A'\over A} = {f'\over 1+f} = f', \nonumber \end{aligned}$$ and so on. 
From equation (\[2.9\]) it follows that $$\begin{aligned} \label{2.13} {\alpha \over r} = {\epsilon''\over \phi_o}.\end{aligned}$$ And from (\[2.11\]) we have $$\begin{aligned} \label{2.14} \epsilon'' + {2\epsilon' \over r} = -{16\pi\over (2\omega +3)}{\eta^2\over r^2} \hspace{.2cm},\end{aligned}$$ the solution of which is given by $$\begin
--- abstract: 'The dynamics of multiple scalar fields on a flat FLRW spacetime can be described entirely as a relational system in terms of the matter alone. The matter dynamics is an autonomous system from which the geometrical dynamics can be inferred, and this autonomous system remains deterministic at the point corresponding to the singularity of the cosmology. We show that the continuation of this system corresponds to a parity inversion at the singularity, and that the singularity itself is a surface on which the space-time manifold becomes non-orientable.' author: - David Sloan bibliography: - 'FLRW+SF.bib' title: Scalar Fields and the FLRW Singularity --- Introduction ============ Gravitational fields are not measured directly, but rather inferred from observations of matter that evolves under their effects. This is illustrated clearly by a gedankenexperiment in which two test particles are allowed to fall freely: the presence of a gravitational field is felt through the reduction of their relative separation. In cosmology we find ourselves in a similar situation; the expansion of the universe is not directly observed. It is found through the interaction between gravity and matter, which causes the redshift of photons. The recent successes of the LIGO mission [@LIGO] in observing gravitational waves rest on interferometry: the photons experience a changing geometry and, when brought together, interfere constructively or destructively according to the differences in the spacetime they traversed. What is key here is the relational measurement of the photons: are they in or out of phase? This relational behaviour informs our work. Here we will show how, given simple matter fields in a cosmological setup, the dynamics of the system can be described entirely in relational terms. 
We see that this relational behaviour, being more directly related to physical observations, can be described without some of the structure of space-time. As such we treat general relativity as an operational theory: a means to the end of describing the relational dynamics of matter. This changes the ontological status of space-time. We do not treat the idea of scale in a four-dimensional pseudo-Riemannian geometry as absolutely fundamental to the description of physics, but rather as a tool through which dynamics can be calculated. This change of status is common to many approaches to fundamental physics; string theory [@Strings1; @Strings2; @Strings3; @Strings4] often posits the existence of very small compact extra dimensions. In Loop Quantum Gravity [@LQG1; @LQG2] the fundamental object is a spin network, to which regular geometry is an approximation at low curvatures, and in the cosmological sector this is responsible for removing the initial singularity [@LQC1; @LQC2]. A minimalist approach is taken in Causal Set theory [@CST1; @CST2], where the idea of geometry is rebuilt from causal relations between points, with a geometry overlaid on top of these relations; in Group Field Theory [@GFT] condensate states are interpreted as macroscopic geometries. We will, however, differ in one key way from the aforementioned approaches to fundamental theories of gravity. Rather than positing the existence of a more fundamental object on which our theory is based, we simply note that we do not have empirical access to the volume of the universe, and instead consider the relational evolution of observables. Aspects of this are captured in the Shape Dynamics [@Shapes1; @Shapes2; @Shapes3; @Shapes4] program. At its heart is the idea that certain factors necessary in forming a space-time, such as an overall notion of scale, are not empirically measurable.
As such, any choice of how dynamics is described in terms of these non-measurable quantities should not affect the evolution of measurable quantities. In previous work [@Through] we have shown that this leads to a unique continuation of Bianchi cosmologies (homogeneous, anisotropic solutions to Einstein’s equations) through the initial singularity, and this has recently been extended by Mercati to include inflationary potentials for the matter [@FlavNew]. There are two principal reasons why this is possible; the first is that the system exhibits “Dynamical Similarity” [@DynSim] and thus solutions can be evolved in terms of a smaller set of variables than are required to describe the full phase-space. The second is that the equations of motion for these variables are Lipschitz continuous even at the initial singularity. By the Picard-Lindelöf theorem, they can be uniquely continued beyond this point and reveal that there is a qualitatively similar, but quantitatively distinct, solution on the other side. In this paper we will show that the same results hold when working with scalar fields in a flat Friedmann-Lemaître-Robertson-Walker (FLRW) cosmology. This paper is laid out as follows. In section \[FLRWSec\] we recap the dynamics of flat FLRW cosmologies in the presence of scalar fields, and express the dynamics as a flow on the usual phase space. Then in section \[DynSimSec\] we show the role of dynamical similarity in these spacetimes, establishing a vector field on phase-space whose integral curves take solutions to those which are indistinguishable. This allows us to formulate the more compact description of the system which is well-defined at and beyond the singularity. We then show some general features of such systems. In section \[FreeFieldsSec\] we show how massless noninteracting scalar fields provide this continuation, and in section \[ShapeSec\] we show the fully intrinsic form of the equations of motion when interactions are reintroduced.
We examine how one can reconstruct a geometrical interpretation on the other side of the singularity in section \[NonOrientableSec\] and show that this would appear to be an orientation flip when viewed in this way. Finally in section \[BeyondSec\] we show how the results we have obtained extend beyond the isotropic case and make contact with prior results, and give some concluding thoughts in section \[SecDiscussion\]. FLRW Cosmology with Scalar Fields {#FLRWSec} ================================= We will examine the dynamics of scalar fields in a flat FLRW cosmology. To retain the homogeneity and isotropy of our solutions, we will assume the same holds for our scalar fields, and thus each field has only temporal variation. The metric takes the form: $$\d s^2 = -\d t^2 + a(t)^2 \left(\d x^2+\d y^2+\d z^2\right)$$ It is important to note that there are two tetrad representations of this system which are compatible with the geometry, corresponding to left-handed and right-handed orientations $g=\eta(\mathbf{e},\mathbf{e})$, with the choices $\mathbf{e}_L = (\d t,a\d x,a\d y,a\d z)$ and $\mathbf{e}_R = (\d t,-a\d x,-a\d y,-a\d z)$. In fact, since the form $\eta$ is bilinear, we could have chosen to distribute the minus signs among any of these components; however, since we will be primarily interested in the behaviour of the spatial parts across the initial singularity, we choose to keep a time direction fixed and will only be interested in the relative signs of the one-forms across $t=0$. Dynamics are derived from the Einstein-Hilbert action for gravity minimally coupled to matter, which has the usual scalar field Lagrangian. Our spacetime is topologically $\R \times \Sigma$ where the spatial slice $\Sigma$ can be $\R^3$ or $\mathbb{T}^3$. In the case of $\R^3$ we choose a fiducial cell to capture the entire system, since homogeneity means that the dynamics of the entire space can be determined by the dynamics of any chosen subregion, and thus we avoid infinities.
$$S = \int \sqrt{-g}\left(R - \mathcal{L}_m\right) = \int \d t \int_\Sigma a^3\left(6\left(\frac{\ddot{a}}{a} + \frac{\dot{a}^2}{a^2}\right) - \frac{\dot{\phi}^2}{2} + V(\phi) \right)$$ In order to simplify the algebra, in the following we make the choice to work with the volume instead of the scale factor, $v=a^3$. The momentum conjugate to $v$ is proportional to the Hubble parameter, $h=p_v=-\frac{4\dot{v}}{v}$, and that to $\phi_i$ is $p_i=v\dot{\phi}_i$, where we choose to denote the conjugate momentum of $v$ by $h$ to avoid notation clashes. The Hamiltonian and symplectic structure are given by: $$\H = v\left(-\frac{h^2}{8} + \frac{p_i p_i}{2v^2} + V(\phi) \right) \qquad \theta = h\,\d v + p_i\,\d \phi_i$$ The Hamiltonian vector field, $\X_\H$, describes the evolution of a solution in phase-space. It is determined uniquely through the global invertibility of the symplectic form (summing over repeated indices of the scalar field and its momentum): $$\X_\H = -\frac{hv}{4}\frac{\partial}{\partial v} + \left(\frac{h^2}{8} + \frac{p_i p_i}{2v^2} - V(\phi)\right)\frac{\partial}{\partial h} + \frac{p_i}{v}\frac{\partial}{\partial \phi_i} - v\frac{\partial V}{\partial \phi_i}\frac{\partial}{\partial p_i} \label{HVF}$$ The dynamics of the matter present is given by the Klein-Gordon equation, which corresponds to the usual Hamiltonian dynamics of the scalar fields given the above: $$\label{KG} \ddot{\phi}_i + \frac{\dot{v}}{v}\dot{\phi}_i + \frac{\partial V}{\partial \phi_i} = 0$$ and the dynamics of the geometry is given by the Friedmann equation: $$h^2 = 8\left(\frac{\dot{\phi}^2}{2}+V(\phi)\right)$$ In the case where there is no potential for the scalar field, we can solve these analytically to see $v=v_o t$, $\phi_i = A_i \log t + B_i$. The singularity of this system corresponds to the fact that along its orbit on phase space, $\X_\H$ reaches a point at which it is no longer integrable. From the Picard-Lindelöf theorem, this arises because uniqueness of solutions to the equations of motion fails when coefficients of the basis vectors ($\frac{\partial}{\partial h}$ etc.) are not Lipschitz continuous. We see that this can occur in two ways; the first is that some of the phase space variables will tend to infinity. We will show that this can
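As a quick numerical sanity check of the free-field solution quoted above (our own sketch, not code from the paper): with $V=0$ and $v=v_o t$, the Klein-Gordon equation reduces to $\ddot\phi + \dot\phi/t = 0$, which is solved by $\phi = A\log t + B$. A short RK4 integration confirms this:

```python
import math

def rk4_step(f, t, y, h):
    """One classical Runge-Kutta step for the system y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a1 + 2 * a2 + 2 * a3 + a4)
            for yi, a1, a2, a3, a4 in zip(y, k1, k2, k3, k4)]

# Free-field Klein-Gordon equation with v = v_o * t:  phi'' + (1/t) phi' = 0.
def kg(t, y):
    phi, dphi = y
    return [dphi, -dphi / t]

A, B = 1.3, 0.7          # constants of the claimed solution phi = A log t + B
n, h = 10_000, 1e-4      # integrate from t = 1 to t = 2
y = [B, A]               # phi(1) = B, phi'(1) = A
for i in range(n):
    y = rk4_step(kg, 1.0 + i * h, y, h)

exact = A * math.log(2.0) + B
print(abs(y[0] - exact))  # tiny: the numerics reproduce A log t + B
```

The same check works for several fields at once, since the free fields decouple.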
--- abstract: 'In this work, we evaluate the energy spectra of baryons which consist of two heavy and one light quarks in the MIT bag model. The two heavy quarks constitute a heavy scalar or axial vector diquark. Concretely, we calculate the spectra of $|q(QQ')>_{1/2}$ and $|q(QQ')>_{3/2}$ where $Q$ and $Q'$ stand for $b$ and/or $c$ quarks. Especially, for $|q(bc)>_{1/2}$ there can be a mixing between $|q(bc)_0>_{1/2}$ and $|q(bc)_1>_{1/2}$, where the subscripts 0 and 1 refer to the spin state of the diquark (bc); the mixing is not calculable in the framework of quantum mechanics (QM) when the potential model is employed, but can be evaluated in quantum field theory (QFT). Our numerical results indicate that the mixing is sizeable.' --- Evaluation of Spectra of Baryons Containing Two Heavy Quarks in Bag Model Da-Heng He$^1$, Ke Qian$^1$, Yi-Bing Ding$^{2,6}$, Xue-Qian Li$^{1,5,6}$ and Peng-Nian Shen$^{4,3,5,6}$ 1\. Department of Physics, Nankai University, Tianjin 300071, China;\ 2. Graduate School of The Chinese Academy of Sciences, Beijing, 100039, China,\ 3. Institute of High Energy Physics, CAS, P.O. Box 918(4), Beijing 100039, China\ 4. Center of Theoretical Nuclear Physics, National Laboratory of Heavy Ion Accelerator, Lanzhou 730000, China\ 5. Institute of Theoretical Physics, CAS, P.O. Box 2735, Beijing, 100080, China.\ 6. China Center of Advanced Science and Technology (World Laboratory), P.O.Box 8730, Beijing 100080, China Introduction ============= At present, the non-perturbative QCD which dominates low energy physics phenomena is not yet fully understood, and a systematic and reliable way of evaluating non-perturbative QCD effects on quantities such as hadron spectra and hadronic matrix elements is lacking. Fortunately, however, for heavy flavor mesons or baryons which contain at least one b or c quark (antiquark), the situation becomes simpler due to an extra $SU_f(2)\otimes SU_s(2)$ symmetry[@Isgur].
The studies in this field provide us with valuable information about the QCD interaction and its low energy behavior. Among all the interesting subjects, the hadron spectra would be the first focus of attention. The spectra of the $J/\psi$ and $\Upsilon$ families have been thoroughly investigated in different theoretical approaches. Commonly, the spectra are evaluated in the potential model inspired by QCD, where the QCD Coulomb-type potential is directly derived from the one-gluon-exchange mechanism, while the confinement term, originating from the non-perturbative QCD, must be introduced by hand[@Rosner]. For the heavy quarkonia, where only heavy quark flavors are involved, the potential model definitely sets a good theoretical framework for describing such systems, since relativistic effects are small compared to the mass scale. An alternative model, the bag model, can also provide a reasonable confinement for quarks. In fact, for light hadrons, especially light baryons, the bag model may be a better framework for describing their static behaviors. The MIT bag model has some advantages [@Jaffe]. First, even though it does not hold a translational invariance, the quarks inside the hadron bag obey the relativistic Dirac equation and, moreover, they can be described in the Quantum Field Theory (QFT) framework, namely there exist creation and annihilation operators for the constituents of the hadron. The latter property is very important for this work, as it allows us to calculate a mixing between the $|q(bc)_0>_{1/2}$ and $|q(bc)_1>_{1/2}$ (the notations will be explained below) states, which is impossible in the Quantum Mechanics (QM) framework. Another interesting subject is whether the diquark structure, which consists of two quarks and resides in a color-anti-triplet $\bar 3$, exists in baryons. Its existence, in fact, is still in dispute. For a light diquark, which is composed of two light quarks, the relativistic effects are serious and the bound state should be loose.
By contrast, two heavy quarks (b and c) can constitute a stable bound state of $\bar 3$, namely a diquark which serves as a source of static color field[@Falk]. As a matter of fact, the impenetrable bag boundary which provides the confinement conditions for the constituents of the hadron is due to long-distance non-perturbative QCD effects; to evaluate the spectra, one also needs to include the short-distance interaction between the constituents, which can be calculated in the framework of perturbative QCD. In this work, we are going to evaluate the spectra of baryons which contain two heavy quarks (b and/or c) and a light quark, and we take the light-quark-heavy-diquark picture, which is obviously reasonable for the case of concern. For evaluating the hadron spectra, the traditional method is the potential model. For baryons, the quark-diquark picture reduces the three-body problem to a two-body problem and leads to a normal Schrödinger equation. Solving the Schrödinger equation, one can get the binding energy of quark and diquark[@Ebert; @Tong]. In recent years, remarkable progress has been made along this direction. The authors of refs. [@Kiselev] have carefully studied the short-distance and long-distance effects, then derived a modified potential and obtained the spectroscopy of baryons which contain two heavy quarks by using the non-relativistic Schrödinger equation. Meanwhile, in Ebert et al.’s new work, the light quark is treated as a fully relativistic object and the potential is somewhat different from that in their earlier work[@Ebert2]. In their works, not only the ground states of such baryons are obtained, but also the excited states are evaluated. However, the potential model has two obvious drawbacks. First, even though the diquark is heavy, the constituent quark mass of the light quark is still comparable to the linear momentum, which is of order of $\Lambda_{QCD}$.
Thus the reduced mass is not large and the relativistic effects are still significant. Secondly, working in the framework of QM, it is impossible to estimate the mixing of $|q(bc)_0>_{1/2}$ and $|q(bc)_1>_{1/2}$, where the subscripts 0 and 1 of the (bc)-diquark denote the total spin of the subsystem, i.e. the (bc)-diquark (we only consider the ground state of $l=0$). The reason is that there are no creation and annihilation operators in the traditional QM framework, so the transition $(bc)_1+q\rightarrow (bc)_0+q$, i.e. $A+q\rightarrow S+q$ where the notations A and S refer to the axial-vector and scalar diquarks respectively, is forbidden, even though the transition is calculable in QFT. On the other hand, the MIT bag model does not suffer from these two drawbacks. In this picture, since the diquark is heavy, it hardly moves and can be assumed to sit at the center of the bag, whereas the light quark moves freely in the bag and its equation of motion is the relativistic Dirac equation with a certain boundary condition[@Jaffe]; both the quark and diquark are quantized in QFT. Thus the relativistic effects are automatically included. Secondly, one can deal with a possible conversion of the constituents in the bag in terms of QFT, namely one can let a constituent be created or annihilated; thus the transition $A+q\rightarrow S+q$ is allowed and the corresponding mixing of $|q(bc)_0>_{1/2}$ and $|q(bc)_1>_{1/2}$ is calculable. Usually the bag model is not very applicable to light mesons because the spherical boundary is not a good approximation for a two-body system. Even though the quark-diquark structure is a two-body system, the aforementioned problem does not exist here because the diquark is much heavier than the light quark. The picture is analogous to the solar system, or to an atom with only one valence electron around a heavy nucleus, so a spherical boundary is a reasonable choice.
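A basic ingredient of such bag-model calculations is the lowest Dirac mode of a massless quark in a static spherical cavity of radius $R$. The standard MIT linear boundary condition reduces in this case to $j_0(x)=j_1(x)$ at $x=\omega R$, with the well-known lowest root near $x \approx 2.04$. A minimal bisection sketch (our illustration, not code from the paper):

```python
import math

def j0(x):
    """Spherical Bessel function j0."""
    return math.sin(x) / x

def j1(x):
    """Spherical Bessel function j1."""
    return math.sin(x) / x**2 - math.cos(x) / x

# Lowest-mode boundary condition for a massless quark: j0(x) = j1(x),
# equivalently tan(x) = x / (1 - x). Bracket and bisect the first root.
def f(x):
    return j0(x) - j1(x)

lo, hi = 1.5, 2.5
for _ in range(60):                  # bisection: interval shrinks to ~1e-18
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid

x0 = 0.5 * (lo + hi)
print(x0)  # lowest-mode frequency is then omega = x0 / R
```

The root $x_0 \approx 2.0428$ sets the leading kinetic contribution of the light quark to the bag energy.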
In this work, following the literature [@Jaffe], we treat the short-distance QCD interaction between the light quark and heavy diquark perturbatively. Since the interaction energy $E_{int}(R)$ is not diagonal for $|q(bc)_1>_{1/2}$ and $|q(bc)_0>_{1/2}$, we may diagonalize the matrix to obtain the eigenvalues and eigenfunctions, which would be the masses of the baryons with flavor $q(bc)$ and spin 1/2. Moreover, for the other baryons $|q(bb)_1>_{1/2(3/2)}$, $|q(cc)_1>_{1/2(3/2)}$ and $|q(bc)_1>_{3/2}$, the diquark must be an axial vector due to the Pauli principle[@Close]. The paper is organized as follows. After the introduction, we derive all the formulation of $E_{int}(R)$ and $M_B$ in Sec.II, then in Sec
--- abstract: 'This is the first in a series of papers in which we study an efficient approximation scheme for solving the Hamilton-Jacobi-Bellman equation for multi-dimensional problems in stochastic control theory. The method is a combination of a WKB style asymptotic expansion of the value function, which reduces the second order HJB partial differential equation to a hierarchy of first order PDEs, followed by a numerical algorithm to solve the first few of the resulting first order PDEs. This method is applicable to stochastic systems with a relatively large number of degrees of freedom, and does not seem to suffer from the curse of dimensionality. A computer-code implementation of the method runs essentially in real time using modest computational resources. We apply the method to solve a general portfolio construction problem.' author: - | **Sakda Chaiworawitkul**\ JPMorgan Chase\ New York, NY 10179\ USA - | **Patrick S. Hagan**\ Mathematics Institute\ 24-29 St Giles\ Oxford University\ Oxford, OX1 3LB\ UK - | **Andrew Lesniewski**\ Department of Mathematics\ Baruch College\ One Bernard Baruch Way\ New York, NY 10010\ USA date: | First draft: December 3, 2013\ This draft: title: | **Semiclassical approximation in stochastic optimal control\ I. Portfolio construction problem** --- \[sec:Introduction\]Introduction ================================ The stochastic Hamilton-Jacobi-Bellman (HJB) partial differential equation is the cornerstone of stochastic optimal control theory ([@FS92], [@YZ99], [@P09]). Its solution, the value function, contains the information needed to determine the optimal policy governing the underlying dynamic optimization problem. Analytic closed form solutions to the HJB equation are notoriously difficult to obtain, and they are limited to problems where the underlying state dynamics has a simple form. Typically, these solutions are only available for systems with one degree of freedom.
A variety of numerical approaches to stochastic optimal control have been studied. An approach based on the Markov chain approximation is developed in [@KD01]. This approach avoids referring to the HJB equation altogether, and is, instead, based on a suitable discretization of the underlying stochastic process. Other recent approaches, such as [@FLO7], [@KLP13], and [@AK14], rely on ingenious discretization schemes of the HJB equation. These numerical methods are generally limited to systems with low numbers of degrees of freedom, as they are susceptible to the “curse of dimensionality”. In this paper, we present a methodology for effectively solving a class of stochastic HJB equations for systems with $n$ degrees of freedom, where $n$ is a moderately large number ($\lessapprox 200$). The solution methodology is based on an analytic approximation to the full HJB equation which reduces it to an infinite hierarchy of first order partial differential equations. This is accomplished by means of an asymptotic expansion analogous to the Wentzel-Kramers-Brillouin (WKB) method used in quantum mechanics, optics, quantitative finance, and other fields of applied science, see e.g. [@BO99], [@KC85]. The first in the hierarchy of equations is the classical Hamilton-Jacobi (HJ) equation which is analogous to the equation describing the motion of a particle on a Riemannian manifold[^1] subject to external forces. Its structure is somewhat less complicated than that of the full HJB equation, and its properties have been well understood. The solution to this equation is in essence the most likely trajectory for the optimal control of the stochastic system. Similar ideas, within a completely different setup have been pursued in [@T11] and [@HDM14]. The remaining equations are linear first order PDEs, with progressively more complex structure of coefficient functions. The approximate character of the solution of the HJB equation that we discuss is twofold. 
Firstly, we solve the Hamilton-Jacobi equation and the first of the linear PDEs in the hierarchy only. The WKB expansion is asymptotic, and the expectation is that these two equations capture the nature of the actual solution closely enough. The remaining members of the hierarchy are neglected, as they are believed to contain information which does not significantly affect the shape of the solution. We refer to this approximation as the semiclassical (or eikonal) approximation, in analogy with a similar approximation in physics. Interestingly, there is a class of non-trivial stochastic optimal control problems for which the semiclassical approximation produces the actual exact solutions. Two examples of such problems are discussed in the paper. Secondly, the solutions to the two leading order PDEs are constructed through numerical approximations. The key element of the numerical algorithm is a suitable symplectic method of numerical integration of Hamilton’s canonical equations, which are the characteristic equations of the HJ equation. Here, we use the powerful Störmer-Verlet (or leapfrog) method [@HLW03], [@LR04] to construct the characteristics numerically. Furthermore, we use a Newton-type search method in order to construct the numerical solution to the HJ equation out of the characteristics. This method uses a system of variational equations associated with Hamilton’s equations. This work has been motivated by our study of a stochastic extension of the continuous time version of the Markowitz mean variance portfolio optimization. The methodology developed here should provide a practical method for implementing the resulting portfolio construction. We believe, however, that the method is of broader interest and can be applied to a class of stochastic optimization problems outside of portfolio construction theory.
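For a separable Hamiltonian $H(q,p)=\tfrac12 p^\top p + V(q)$, the Störmer-Verlet step is explicit. The sketch below (our own illustration, not the authors' code) integrates a harmonic oscillator and exhibits the bounded long-time energy error that motivates choosing a symplectic integrator for the characteristics:

```python
def stormer_verlet(grad_V, q, p, h, n_steps):
    """Leapfrog/Stormer-Verlet for H = p^2/2 + V(q): kick-drift-kick."""
    for _ in range(n_steps):
        p -= 0.5 * h * grad_V(q)   # half kick
        q += h * p                 # full drift
        p -= 0.5 * h * grad_V(q)   # half kick
    return q, p

grad_V = lambda q: q               # harmonic oscillator, V(q) = q^2 / 2
q0, p0 = 1.0, 0.0
E0 = 0.5 * p0**2 + 0.5 * q0**2

q, p = stormer_verlet(grad_V, q0, p0, h=0.01, n_steps=100_000)
E = 0.5 * p**2 + 0.5 * q**2
print(abs(E - E0))  # stays O(h^2) even after 10^5 steps (no secular drift)
```

A non-symplectic scheme such as explicit Euler would show an energy error growing steadily with the number of steps on the same problem.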
\[sec:hjbEq\]Portfolio construction problem and the HJB equation ================================================================ We assume that the underlying source of stochasticity is a standard $p$-dimensional Wiener process $Z\oft\in\bR^p$ with independent components, $$\eE[dZ\oft dZ\oft^\tT]=\id dt.$$ Here, $\id$ denotes the $p\times p$ identity matrix. We let $(\Omega,(\sF)_{t\geq 0},\eP)$ denote the filtered probability space, which is associated with the Wiener process $Z$. We formulate the portfolio construction problem as the following stochastic control problem. We consider a controlled stochastic dynamical system whose states are described by a multi-dimensional diffusion process $(X\oft,W\oft)$, which takes values in $\cU\times\bR$, where $\cU\subset\bR^n$ is an open set. The components $X^i$, $i=1,\ldots,n$, of $X$ represent the prices of the individual assets in the portfolio, and $W$ is the total value of the portfolio. We assume that $n\leq p$. The allocations of each of the assets in the portfolio are given by an $(\sF)_{t\geq 0}$-adapted process $\varphi\oft\in\bR^n$. The dynamics of $(X,W)$ is given by the system of stochastic differential equations: $$\label{eq:xDyn} \begin{split} dX\oft&=a(X\oft)dt+b(X\oft)dZ\oft,\\ X\of0&=X_0. \end{split}$$ The drift and diffusion coefficients $\cU\ni x\to a(x)\in\bR^n$ and $\cU\ni x\to b(x)\in\mathrm{Mat}_{n,p}(\bR)$, respectively, satisfy the usual Hölder and quadratic growth conditions, which guarantee the existence and uniqueness of a strong solution to this system. Note that we are not requiring the presence of a riskless asset in the portfolio: such an assumption is unrealistic and unnecessary. If one wishes to consider a riskless asset, it is sufficient to take a suitable limit of the relevant components of $a$ and $b$. The process $W$ is given by $$\label{eq:yDyn1} \begin{split} dW\oft&=\varphi\oft^\tT dX\oft,\\ W\of0&=W_0.
\end{split}$$ Explicitly, equation reads: $$\label{eq:yDyn} dW\oft=\varphi\oft^\tT a(X\oft)dt+\varphi\oft^\tT b(X\oft)dZ\oft.$$ We refer to the process $W$ as the investor’s wealth process. We assume that the investor has a finite time horizon $T$ and the utility function $U$. We shall assume that $U$ is a member of the HARA family of utility functions; see Appendix \[sec:UtilityFunctions\] for their definition and a summary of their properties. The investor’s objective is to maximize the expected utility of his wealth at time $T$. We are thus led to the following cost functional: $$J[\varphi]=\eE\big[U(W(T))\big],$$ which represents the investor’s objective function. Let $$\label{eq:covDef} \cC\ofx=b\ofx^\tT b\ofx$$ denote the instantaneous covariance matrix of the price processes. For technical reasons, we shall make the following additional assumptions on the functions $a:\cU\to\bR^n$ and $b:\cU\to\mathrm{Mat}_{n,p}(\bR)$: - [The functions $a\ofx$ and $b\ofx$ are three times continuously differentiable for
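To make the state dynamics concrete, here is a minimal Euler-Maruyama discretization of $dX=a(X)dt+b(X)dZ$ and $dW=\varphi^\top dX$. All coefficient choices and allocations below are our own illustrative assumptions, not the paper's:

```python
import math
import random

def simulate(a, b, phi, x0, w0, T, n, rng):
    """Euler-Maruyama for dX = a(X)dt + b(X)dZ and dW = phi^T dX."""
    dt = T / n
    x = list(x0)
    w = w0
    for _ in range(n):
        dz = [rng.gauss(0.0, math.sqrt(dt)) for _ in range(len(x))]
        ax, bx = a(x), b(x)
        dx = [ax[i] * dt + sum(bx[i][j] * dz[j] for j in range(len(dz)))
              for i in range(len(x))]
        w += sum(phi[i] * dx[i] for i in range(len(x)))  # dW = phi^T dX
        x = [xi + dxi for xi, dxi in zip(x, dx)]
    return x, w

# Illustrative coefficients: GBM-like drift with diagonal diffusion, two assets.
a = lambda x: [0.05 * x[0], 0.03 * x[1]]
b = lambda x: [[0.2 * x[0], 0.0], [0.0, 0.1 * x[1]]]
b_zero = lambda x: [[0.0, 0.0], [0.0, 0.0]]   # riskless limit b -> 0
phi = [1.0, 2.0]                              # constant allocations

# One noisy path (seeded for reproducibility).
x_T, w_T = simulate(a, b, phi, [1.0, 1.0], 3.0, T=1.0, n=1000,
                    rng=random.Random(0))

# Sanity check in the deterministic limit: X_i(T) = exp(a_i T), so
# W(T) = W_0 + sum_i phi_i (X_i(T) - X_i(0)).
x_d, w_d = simulate(a, b_zero, phi, [1.0, 1.0], 3.0, T=1.0, n=20000,
                    rng=random.Random(0))
print(w_d)  # close to 3 + (e^0.05 - 1) + 2 (e^0.03 - 1)
```

In the full control problem $\varphi$ would of course be a feedback policy rather than a constant vector; the sketch only illustrates the forward dynamics.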
--- abstract: 'Beyond traditional security methods, unmanned aerial vehicles (UAVs) have become an important surveillance tool used in security domains to collect the required annotated data. However, collecting annotated data from videos taken by UAVs efficiently, and using these data to build datasets that can be used for learning payoffs or adversary behaviors in game-theoretic approaches and security applications, is an under-explored research question. This paper presents [VIOLA]{}, a novel labeling application that includes (i) a workload distribution framework to efficiently gather human labels from videos in a secured manner; (ii) a software interface with features designed for labeling videos taken by UAVs in the domain of wildlife security. We also present the evolution of [VIOLA]{} and analyze how the changes made in the development process relate to the efficiency of labeling, including when seemingly obvious improvements did not lead to increased efficiency. [VIOLA]{} enables collecting massive amounts of data with detailed information from challenging security videos such as those collected aboard UAVs for wildlife security. [VIOLA]{} will lead to the development of new approaches that integrate deep learning for real-time detection and response.' author: - | Elizabeth Bondi, Debarun Kar, Venil Noronha, Donnabell Dmello, Milind Tambe\ \ \ Fei Fang\ \ \ - | Arvind Iyer, Robert Hannaford\ \ \ bibliography: - 'Gamesec2017.bib' title: Video Labeling for Automatic Video Surveillance in Security Domains ---
--- abstract: 'Motivated by our observation of fast echo decay and a surprising coherence freeze, we have developed a pump-probe spectroscopy technique for vibrational states of ultracold $^{85}$Rb atoms in an optical lattice to gain information about the memory dynamics of the system. We use pump-probe spectroscopy to monitor the time-dependent changes of frequencies experienced by atoms and to characterize the probability distribution of these frequency trajectories. We show that the inferred distribution, unlike a naive microscopic model of the lattice, correctly predicts the main features of the observed echo decay.' author: - Samansa Maneshi - Chao Zhuang - 'Christopher R. Paul' - 'Luciano S. Cruz' - 'Aephraim M. Steinberg' title: 'Coherence freeze in an optical lattice investigated via pump-probe spectroscopy' --- Characterizing decoherence mechanisms is a crucial task for experiments aiming to control quantum systems, e.g., for quantum information processing (QIP). In this work, we demonstrate how two-dimensional (2D) pump-probe spectroscopy may be extended to provide important information on these mechanisms. As a model system, we study quantum vibrational states of ultracold atoms in an optical lattice. In addition to being a leading candidate system for QIP [@BrennenJaksch], optical lattices are proving a versatile testing ground for the development of quantum measurement and control techniques [@OMandel; @Anderlini] and a powerful tool for quantum simulations, e.g. the study of Anderson localization and the Hubbard model [@MottAnderson]. In our experiment, we study the vibrational coherence of $^{85}$Rb atoms trapped in a shallow one-dimensional standing wave. Through our 2D pump-probe technique, we obtain detailed microscopic information on the frequency drift experienced by atoms in the lattice, enabling us to predict the evolution of coherence. 
Since the pioneering development of the technique in NMR[@Jeener-Ernst], 2D spectroscopy has been widely used to obtain high-resolution spectra and gain information about relaxations, couplings, and many-body interactions, in realms ranging from NMR [@Ernst] to molecular spectroscopy [@Mukamel-Jonas; @Hybl; @Brixner; @MillerNature] to semiconductor quantum wells [@Cundiff; @KWStone]. Here, we show that similar powerful techniques can be applied to the quantized center-of-mass motion of trapped atoms, and more generally, offer a new tool for the characterization of systems in QIP and quantum control. ![(Color online) Two typical measurements of echo amplitude vs. time. The echo pulse and the observed echo envelope are centered at times $t_p$ and $2t_p$, respectively. After an initial decay, echo amplitude stays constant for about $1ms$ forming a plateau, before decaying to zero. The average lattice depths are $20E_R$ (circles) and $18E_R$ (squares).[]{data-label="fig1"}](Fig1.eps) We have previously measured the evolution of coherence between the lowest two vibrational states of potential wells [@Ours]. The dephasing time is about $0.3ms$ ($T^{\star}_2$). This dephasing is partly due to an inhomogeneous distribution of lattice depths as a result of the transverse Gaussian profile of the laser beams. To measure the homogeneous decoherence time ($T_2$), we perform pulse echoes, measuring the echo amplitude as a function of time [@Ours]. Figure \[fig1\] shows two typical measurements of echo amplitude carried out on different dates under slightly different conditions such as different average lattice depths and different dephasing times. The echo amplitude initially decays with a time constant of about $0.7ms$, which is much faster than the photon scattering time ($\sim 60ms$) in the lattice. It then exhibits a $1ms$-long coherence freeze followed by a final decay. 
Absent real decoherence on the short time scale of $1ms$, only loss of frequency memory would inhibit the appearance of echoes. This loss comes about when atoms experience time-varying frequencies. We use 2D pump-probe spectroscopy to monitor this frequency drift. Our 2D pump-probe spectroscopy is essentially a version of spectral hole-burning for vibrational states. By monitoring the changes in the hole spectrum as a function of time we gain information on the atoms’ frequency drift. Information obtained from our 2D spectra enables us to characterize the temporal decay of frequency memory and through our simulations we find that “coherence freeze" is related to the shape of this memory loss function. Similar plateaus in echo decay and a two-stage decay of echo amplitude have been observed in a Cooper-pair box [@Nakamura], for a single electron spin in a quantum dot [@Vandersypen] and for electron spins in a semiconductor [@SClark]. Those plateaus or two-stage decays have been either explained through [*[a priori]{}*]{} models or simply described phenomenologically. Here, we are introducing an experimental technique to directly probe the origin of plateaus. The periodic potential in our experiment is formed by interfering two laser beams blue-detuned by $ 25GHz$ from the D2 transition line, $F=3 \shortrightarrow F^{\prime}=4$ ($\lambda=780nm$), thus trapping atoms in the regions of low intensity, which minimizes the photon scattering rate and the transverse forces. The two laser beams intersect with parallel linear polarizations at an angle of $\theta = (49.0 \pm 0.2)^{\circ}$, resulting in a spacing of $L=(0.930 \pm 0.004) \mu m$ between the wells. Due to gravity, the full effective potential also possesses a “tilt” of $2.86 E_R$ per lattice site, where $E_R=\frac{h^2}{8mL^2}$ is the effective lattice recoil energy. 
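The role of frequency memory in echo formation can be illustrated with a toy two-pulse simulation (a sketch of ours, not the experiment's analysis code): each atom accumulates phase at its local frequency, the echo pulse at $t_p$ negates the accumulated phase, and the ensemble-averaged coherence $|\langle e^{i\phi}\rangle|$ at $2t_p$ is the echo amplitude. Static inhomogeneous broadening refocuses perfectly; a random frequency drift between the two halves does not:

```python
import cmath
import random

def echo_amplitude(freqs_first, freqs_second, t_p):
    """Ensemble coherence |<exp(i*phi)>| at 2*t_p for a two-pulse echo.

    Each atom accumulates phase -f1*t_p before the echo (pi) pulse and
    +f2*t_p after it, so the residual phase is (f2 - f1)*t_p.
    """
    total = sum(cmath.exp(1j * (f2 - f1) * t_p)
                for f1, f2 in zip(freqs_first, freqs_second))
    return abs(total) / len(freqs_first)

rng = random.Random(42)
n, t_p = 5000, 1.0

# Static inhomogeneous distribution of vibrational frequencies (toy units).
f_static = [rng.gauss(6.5, 0.5) for _ in range(n)]
# The same atoms after a random frequency drift between the two halves.
f_drifted = [f + rng.gauss(0.0, 0.3) for f in f_static]

echo_static = echo_amplitude(f_static, f_static, t_p)
echo_drift = echo_amplitude(f_static, f_drifted, t_p)
print(echo_static)  # exactly 1.0: static detunings refocus perfectly
print(echo_drift)   # below 1: loss of frequency memory suppresses the echo
```

Replacing the single random drift step with a correlated frequency trajectory is what the pump-probe measurement characterizes in practice.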
The photon scattering time in our experiment is $\approx 60ms$ and the Landau–Zener tunneling times for transitions from the lowest two levels are greater than $160ms$. Atoms are loaded into the lattice during a molasses cooling stage and prepared in the ground vibrational state by adiabatic filtering [@StefanQPT]. Due to the short coherence length of atoms in optical molasses ($60 nm$ at $10 \mu K$), there is no coherence between the wells. We measure the populations of atoms in the ground vibrational state, the first excited state, and the (lossy) higher excited states, $P_1$, $P_2$, and $P_{L}$, respectively, by fluorescence imaging of the atomic cloud after adiabatic filtering [@StefanQPT]. The pump and probe pulses are sinusoidal phase modulations of one of the laser beams forming the lattice. The modulation is of the form $\phi(t)=A(t)[1-\cos(\omega_m t)]$, where $A(t)$ is a square envelope function with amplitude $2\pi/72$ and $\omega_m$ is a variable frequency. The duration of each pulse is $8$ cycles, i.e., $T=8 (2\pi/\omega_m)$. This phase modulation shakes the lattice back and forth periodically, coupling vibrational states of opposite parity. To first order in modulation amplitude, the phase-modulating part of the Hamiltonian has the same form as the electric dipole Hamiltonian. The inhomogeneous spectrum of vibrational excitations is measured at an average lattice depth of $24E_R$ by applying probe pulses at different frequencies and measuring state populations. Figure \[fig2\](a) shows state populations $P_1$, $P_2$, and $P_{L}$ (black circles) as a function of probe frequency. We then measure the pump-probe spectrum for a fixed delay. A pump pulse with a specific frequency is applied, exciting atoms in wells whose vibrational transition frequency matches the frequency of the pump pulse, thereby burning a hole in the spectrum of the ground state population.
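The hole-burning scheme can be illustrated with a small numerical sketch. The inhomogeneous width, pump bandwidth, and frequency-memory time below are invented for illustration and are not the measured values; frequency drift is modeled as a simple Ornstein-Uhlenbeck process.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
F0, SIG = 6.5, 0.6      # center and r.m.s. width of the inhomogeneous
                        # frequency distribution, kHz (assumed values)
f0 = F0 + SIG * rng.standard_normal(N)       # per-atom transition frequencies

F_PUMP, SIG_PUMP = 6.45, 0.15                # pump frequency and bandwidth (assumed)
hole = np.exp(-0.5 * ((f0 - F_PUMP) / SIG_PUMP)**2)   # pump excitation probability

def hole_rms_width(delay, tau=1.0):
    """r.m.s. width (kHz) of the burnt hole after the frequencies have
    drifted for `delay` ms; Ornstein-Uhlenbeck drift with memory time tau."""
    c = np.exp(-delay / tau)
    f = F0 + c * (f0 - F0) + SIG * np.sqrt(1 - c**2) * rng.standard_normal(N)
    mean = np.average(f, weights=hole)
    return np.sqrt(np.average((f - mean)**2, weights=hole))
```

The returned width grows from the pump-limited value toward the full inhomogeneous width as the delay increases, which is the qualitative behavior described below for Fig. \[fig2\](c).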
After a delay, probe pulses at different frequencies are applied, coupling the ground and excited states of atoms whose frequencies match those of the probe pulses. The red squares in Fig. \[fig2\](a) show populations $\Pi_1$, $\Pi_2$, and $\Pi_{L}$ as a function of probe frequency for a delay of $2ms$ and at a pump frequency of $6.45kHz$. The central part of $\Pi_1$ shows the hole burnt into the spectrum of the ground state population. To characterize the hole, we plot the difference between the pump-probe and the probe-alone spectra, $\Delta P_1= \Pi_1-P_1$, in Fig. \[fig2\](b). We monitor the frequency drift of atoms by changing the delay between the pump and probe pulses. Figure \[fig2\](c) shows the r.m.s. width of $\Delta P_1$ as a function of delay. The r.m.s. width increases with increasing delay until it approaches the inhomogeneous width of the lattice. For pump-probe delays shorter than $2ms$, the coherence present between the lowest two states results in Ramsey fringes making it impractical to extract useful spectra from the measurements. ![(Color online) Experiment:(a) black circles: probe-alone populations, $P_1, P_2,$ and $P_{L}$. Red squares: pump-probe populations, $\Pi_1, \Pi_2, \Pi_{L}$ for a pump at $6.45kHz$ and a delay of $2ms$. The circled region shows the hole-burning signal in $P_1$ and $\Pi_1$. (b) The difference spectrum $\Delta P_1 = \Pi_1 -P_1$. (c) Growth
--- author: - 'P. E. Shtykovskiy$^{1,2 \,*}$, M. R. Gilfanov$^{2,1}$' title: '**High Mass X-ray Binaries and Recent Star Formation History of the Small Magellanic Cloud**' --- [Astronomy Letters, Vol. 33, No. 7, 2007, pp. 437-454. Translated from Pis’ma v Astronomicheskii Zhurnal, Vol. 33, No. 7, 2007, pp. 492-512.]{} We study the relation between the high-mass X-ray binary (HMXB) population and recent star formation history (SFH) for the Small Magellanic Cloud (SMC). Using archival optical SMC observations, we have approximated the color-magnitude diagrams of the stellar population by model stellar populations and, in this way, reconstructed the spatially resolved SFH of the galaxy over the past 100 Myr. We analyze the errors and stability of this method for determining the recent SFH and show that uncertainties in the models of massive stars at late evolutionary stages are the main factor that limits its accuracy. By combining the SFH with the spatial distribution of HMXBs obtained from XMM-Newton observations, we have derived the dependence of the HMXB number on the time elapsed since the star formation event. The number of young systems with ages $\lesssim 10$ Myr is shown to be smaller than the prediction based on the type-II supernova rate. The HMXB number reaches its maximum $\sim$20–50 Myr after the star formation event. This may be attributable, at least partly, to a low luminosity threshold in the population of X-ray sources studied, $L_{\rm min}\sim10^{34}$ erg/s. Be/X-ray systems make a dominant contribution to this population, while the contribution from HMXBs with black holes is relatively small. [**Key words:**]{} high mass X-ray binaries, Small Magellanic Cloud, star formation. [$^{*}$ E-mail: pavel@hea.iki.rssi.ru]{} INTRODUCTION {#introduction .unnumbered} ============ High-mass X-ray binaries (HMXBs) are close binary systems in which the compact object (a black hole or a neutron star) accretes matter from an early-type massive star.
Because of the short lifetime of the donor star, they are closely related to recent star formation and, in the simplest picture, their number should be roughly proportional to the star formation rate of the host galaxy. Indeed, Chandra observations of nearby galaxies suggest that, to the first approximation, the HMXB luminosity function follows a universal power law whose normalization is proportional to the star formation rate (SFR) of the host galaxy (Grimm et al. 2003). On the other hand, obvious considerations based on the present view of the evolution of binary systems suggest that the relation between HMXB population and star formation should be more complex than a linear one. There is also experimental evidence for this. For example, previously (Shtykovskiy and Gilfanov 2005a), we showed that the linear relation between the number of HMXBs and the SFR cannot explain their spatial distribution over the Large Magellanic Cloud (LMC), because their number does not correlate with the H$_{\alpha}$ line intensity, a well-known SFR indicator. The largest number of HMXBs is observed in the region of moderate star formation LMC 4, while they are virtually absent in the most active star-forming region in the LMC, 30 Dor. Previously (Shtykovskiy and Gilfanov 2005a), we suggested that this discrepancy could arise from the dependence of the HMXB number on the time elapsed since the star formation event. Indeed, the age of the stellar population in 30 Dor is $\approx1-2$ Myr, which is not enough for the formation of compact objects even from the most massive stars and, accordingly, for the appearance of accreting X-ray sources. At the same time, the characteristic age of the stellar population in LMC 4, $\approx10-30$ Myr, is favorable for the formation of an abundant HMXB population. 
Thus, on the spatial scales corresponding to individual star clusters, the linear relation between the HMXB number and the instantaneous SFR does not hold, and the recent star formation history (SFH) on time scales of the order of the lifetime of the HMXB population, i.e., $\sim2-100$ Myr, should be taken into account. Obviously, the number of active HMXBs at a certain time is determined by the total contribution from systems of different ages, as set by the star formation history SFR(t) and a function $\eta_{HMXB}(t)$ describing the dependence of the HMXB number on the time elapsed since the star formation event. The universal relation N$_{HMXB}=A\times$SFR on the scales of galaxies results from the spatial averaging of $\eta_{HMXB}(t)$ over star-forming regions of different ages. The Small Magellanic Cloud (SMC) is an ideal laboratory that allows these and other aspects of HMXB formation and evolution to be studied. Indeed, owing to its appreciable SFR and small distance (60 kpc), there are dozens of known HMXBs in it. On the other hand, the SMC proximity makes it possible to study in detail its stellar population and, in particular, to reconstruct its SFH. Another peculiarity of the SMC, namely, its low metallicity, makes it potentially possible to study the effect of the heavy-element abundance on the properties of the HMXB population. In this paper, we use XMM-Newton observations of the SMC (Shtykovskiy and Gilfanov 2005b) and archival optical observations (Zaritsky et al. 2002) to analyze the relation between the number of HMXBs and the recent SFH of the galaxy. Our goal is to derive the dependence of the HMXB number on the time elapsed since the star formation event.
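This superposition can be written as a discrete convolution of SFR(t) with $\eta_{HMXB}(t)$. In the sketch below the response curve is a hypothetical stand-in, chosen only so that it vanishes before compact objects can form and peaks a few tens of Myr after the burst.

```python
import numpy as np

dt = 1.0                          # Myr per time bin
t = np.arange(0.0, 101.0, dt)
# hypothetical eta_HMXB(t): zero before the first compact objects form
# (~5 Myr), peaking at ~35 Myr, gone once the donors die off (~70 Myr)
eta = np.where((t >= 5) & (t <= 70), np.exp(-0.5 * ((t - 35) / 15)**2), 0.0)

def n_hmxb(sfr):
    """Active HMXBs vs. time for a star-formation history sampled on t."""
    return np.convolve(sfr, eta)[:t.size] * dt

burst = np.zeros_like(t)
burst[0] = 1.0 / dt               # instantaneous burst at t = 0
n = n_hmxb(burst)                 # for a burst this simply traces eta itself
```

For an instantaneous burst the HMXB count traces $\eta_{HMXB}$ itself, while for an extended SFH it mixes contributions from all ages, which is exactly why the linear N-SFR relation fails on cluster scales.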
EVOLUTION OF THE HMXB POPULATION AFTER THE STAR FORMATION EVENT {#sec:hmxbevol} =============================================================== To describe the evolution of the HMXB population, let us introduce a function $\eta_{HMXB}(t)$ that describes the dependence of the number of observed HMXBs with luminosities above a given value on the time $t$ elapsed since the star formation event, normalized to the mass of the formed massive stars: $$\begin{aligned} \eta_{HMXB}(t)=\frac{N_{HMXB}(t)}{M(>8M_{\odot})} \label{eq:etahmxbteor1}\end{aligned}$$ where M($>$8 M$_{\odot}$) is the mass of the stars more massive than 8 M$_{\odot}$ formed in the star formation event and N$_{HMXB}(t)$ is the number of HMXBs with luminosities exceeding a certain threshold. For the latter we take $10^{34}$ erg/s, which corresponds to the sensitivity achieved by XMM-Newton in the SMC observations. Obviously, the function $\eta_{HMXB}(t)$ is non-zero only in a limited time interval. Indeed, the first X-ray binaries appear only after the formation of the first black holes and/or neutron stars. The lifetimes of the stars that explode as type II supernovae (SNe II) to produce a compact object lie in the interval from $\approx2-3$ Myr for the most massive stars, $\approx100$ M$_{\odot}$, to $\approx40$ Myr for stars with a mass of $\approx$8 M$_{\odot}$, the least massive stars capable of producing a compact object. In this picture, it would be natural to expect the X-ray binaries in which the compact object is a black hole to appear first and the (probably more abundant) population of accreting neutron stars to appear next. On the other hand, the HMXB lifetime is limited by the lifetime of the companion star. Since the least massive companion stars observed during an active X-ray phase have a mass of $\approx6 M_{\odot}$, this lifetime is $\sim60$ Myr for a single star when the peculiarities of the stellar evolution in binary systems are disregarded.
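The lifetimes quoted above, combined with an assumed initial mass function, already fix the rate at which compact objects appear after an instantaneous burst. The sketch below assumes a Salpeter IMF on $0.1$-$100\,M_{\odot}$ and a single power-law mass-lifetime relation fitted through the two quoted endpoints ($\tau(100\,M_{\odot})=3$ Myr, $\tau(8\,M_{\odot})=40$ Myr); both are simplifications.

```python
import numpy as np

GAMMA = 2.35                              # Salpeter IMF slope: xi(m) ~ m**-GAMMA
BETA = np.log(40 / 3) / np.log(100 / 8)   # power-law lifetime fit (an assumption)

def lifetime(m):                          # Myr; tau(100)=3, tau(8)=40 as quoted
    return 3.0 * (m / 100.0)**(-BETA)

def mass_dying_at(tau):                   # inverse of lifetime()
    return 100.0 * (tau / 3.0)**(-1.0 / BETA)

def _trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

m = np.linspace(0.1, 100.0, 400_000)
M_TOT = _trapz(m * m**(-GAMMA), m)        # mass formed per unit IMF amplitude

def sn_rate(tau):
    """SNe II per Myr per solar mass of stars formed, tau Myr after a burst."""
    if not lifetime(100.0) <= tau <= lifetime(8.0):
        return 0.0
    mt = mass_dying_at(tau)
    return mt**(-GAMMA) * (mt / (BETA * tau)) / M_TOT   # xi(m) * |dm/dtau| / M_tot
```

Integrating `sn_rate` over 3-40 Myr gives roughly 0.007 SNe II per solar mass formed, the familiar order of magnitude for a Salpeter IMF.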
Given the mass transfer from the more massive star to the future donor star, this lifetime can be slightly modified. This also includes the X-ray source stage proper with characteristic time scales much shorter than those considered above, $\sim10^3-10^6$ yr, depending on the type of the companion star and the binary parameters. Obviously, the function $\eta_{HMXB}(t)$ must be closely related to the rate of SNe II $\eta_{SNII}(t)$ producing a compact object. To the first approximation, the relation may be assumed to be linear: $$\begin{aligned} \eta_{HMXB}(t)= A\cdot\eta_{SNII}(t) \label{eq:etahmxbteor0}\end{aligned}$$ The supernova rate can be easily determined from the stellar mass–lifetime relation (Schaller et al. 1992) and the initial mass function (IMF), which below is assumed to be a Salpeter one in the range 0.1–100M$_{\odot}$. Note that the IMF shape in the range of low masses is unimportant for us, since all of the relations are eventually normalized to the mass of massive stars with M$>$8 M$_{\odot
--- abstract: 'We present measurements of the large-scale cosmic-ray anisotropies in right ascension, using data collected by the surface detector array of the Pierre Auger Observatory over more than 14 years. We determine the equatorial dipole component, $\vec{d}_\perp$, through a Fourier analysis in right ascension that includes weights for each event so as to account for the main detector-induced systematic effects. For the energies at which the trigger efficiency of the array is small, the “East-West” method is employed. Besides using the data from the array with detectors separated by 1500 m, we also include data from the smaller but denser sub-array of detectors with 750 m separation, which allows us to extend the analysis down to $\sim 0.03$ EeV. The most significant equatorial dipole amplitude obtained is that in the cumulative bin above 8 EeV, $d_\perp=6.0^{+1.0}_{-0.9}$%, which is inconsistent with isotropy at the 6$\sigma$ level. In the bins below 8 EeV, we obtain 99% CL upper-bounds on $d_\perp$ at the level of 1 to 3 percent. At energies below 1 EeV, even though the amplitudes are not significant, the phases determined in most of the bins are not far from the right ascension of the Galactic center, at $\alpha_{\rm GC}=-94^\circ$, suggesting a predominantly Galactic origin for anisotropies at these energies. The reconstructed dipole phases in the energy bins above 4 EeV point instead to right ascensions that are almost opposite to the Galactic center one, indicative of an extragalactic cosmic ray origin.' author: - 'A. Aab, P. Abreu, M. Aglietta, I.F.M. Albuquerque, J.M. Albury, I. Allekotte, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, P.R. Araújo Ferreira, H. Asorey, P. Assis, G. Avila, A.M. Badescu, A. Bakalova, A. Balaceanu, F. Barbato, R.J. Barreira Luz, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, T. Bister, J. Biteau, A. Blanco, J. Blazek, C. 
Bleve, M. Boháčová, D. Boncioli, C. Bonifazi, L. Bonneau Arbeletche, N. Borodai, A.M. Botti, J. Brack, T. Bretz, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, L. Caccianiga, L. Calcagni, A. Cancio, F. Canfora, I. Caracas, J.M. Carceller, R. Caruso, A. Castellina, F. Catalani, G. Cataldi, L. Cazon, M. Cerda, J.A. Chinellato, K. Choi, J. Chudoba, L. Chytka, R.W. Clay, A.C. Cobos Cerutti, R. Colalillo, A. Coleman, M.R. Coluccia, R. Conceição, A. Condorelli, G. Consolati, F. Contreras, F. Convenga, C.E. Covault, S. Dasso, K. Daumiller, B.R. Dawson, J.A. Day, R.M. de Almeida, J. de Jesús, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, D. de Oliveira Franco, V. de Souza, J. Debatin, M. del Río, O. Deligny, N. Dhital, A. Di Matteo, M.L. Díaz Castro, C. Dobrigkeit, J.C. D’Olivo, Q. Dorosti, R.C. dos Anjos, M.T. Dova, J. Ebr, R. Engel, I. Epicoco, M. Erdmann, C.O. Escobar, A. Etchegoyen, H. Falcke, J. Farmer, G. Farrar, A.C. Fauth, N. Fazzini, F. Feldbusch, F. Fenu, B. Fick, J.M. Figueira, A. Filipčič, M.M. Freire, T. Fujii, A. Fuster, C. Galea, C. Galelli, B. García, A.L. Garcia Vegas, H. Gemmeke, F. Gesualdi, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, J. Glombitza, F. Gobbi, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, J.P. Gongora, N. González, I. Goos, D. Góra, A. Gorgi, M. Gottowik, T.D. Grubb, F. Guarino, G.P. Guedes, E. Guido, S. Hahn, R. Halliday, M.R. Hampel, P. Hansen, D. Harari, V.M. Harvey, A. Haungs, T. Hebbeker, D. Heck, G.C. Hill, C. Hojvat, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, J.A. Johnsen, J. Jurysek, A. Kääpä, K.H. Kampert, B. Keilhauer, J. Kemp, H.O. Klages, M. Kleifges, J. Kleinfeller, M. Köpke, G. Kukec Mezek, A. Kuotb Awad, B.L. Lago, D. LaHurd, R.G. Lang, M.A. Leigui de Oliveira, V. Lenok, A. Letessier-Selvon, I. Lhenry-Yvon, D. Lo Presti, L. Lopes, R. López, A. López Casado, R. Lorek, Q. Luce, A. Lucero, A. 
Machado Payeras, M. Malacari, G. Mancarella, D. Mandat, B.C. Manning, J. Manshanden, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, M. Mastrodicasa, H.J. Mathes, J. Matthews, G. Matthiae, E. Mayotte, P.O. Mazur, G. Medina-Tanco, D. Melo, A. Menshikov, K.-D. Merenda, S. Michal, M.I. Micheletti, L. Miramonti, D. Mockler, S. Mollerach, F. Montanet, C. Morello, G. Morlino, M. Mostafá, A.L. Müller, M.
--- abstract: 'We consider normal-form games with $n$ players and two strategies for each player where the payoffs are Bernoulli random variables. We define the social utility associated to a strategy profile as the sum of the payoffs of all players divided by $n$. We assume that payoff vectors corresponding to different profiles are i.i.d., and the payoffs within the same profile are conditionally independent given some underlying random parameter. Under these conditions we examine the asymptotic behavior of the social utilities that correspond to the optimum, to the best and to the worst pure Nash equilibrium. We perform a detailed analysis of some particular cases showing that these random quantities converge, as $n\to\infty$, to some function of the models’ parameters. Moreover, we show that these functions exhibit some interesting phase-transition phenomena.' address: - '$^{\dagger}$ Dipartimento di Economia e Finanza, LUISS, Viale Romania 32, 00197 Roma, Italy.' - '$^{\#}$ Dipartimento di Economia e Finanza, LUISS, Viale Romania 32, 00197 Roma, Italy.' author: - 'Matteo Quattropani$^{\dagger}$' - 'Marco Scarsini$^{\#}$' bibliography: - 'bibrandomPoA.bib' title: Efficiency of equilibria in random binary games --- Introduction {#se:intro} ============ The concept of pure Nash equilibrium is central in game theory. [@Nash:PNAS1950; @Nash:AM1951] proved that every finite game admits a Nash equilibrium in mixed strategies. In general, pure Nash equilibria may fail to exist. Given that the concept of pure equilibrium is epistemically more clearly understood than the one of mixed equilibrium, it is important to understand how rare it is to have games without pure Nash equilibria. One way to address the problem is to consider games in normal form whose payoffs are random. In a random game the number of pure Nash equilibria is also a random variable, whose distribution is interesting to study. It is known that this distribution depends on the assumptions made on the distribution of the random payoffs. The simplest case that has been considered in the literature deals with i.i.d. payoffs having a continuous distribution function.
This implies that ties happen with probability zero. Even in this simple case, although it is easy to compute the expected number of pure Nash equilibria, the characterization of their exact distribution is non-trivial. Asymptotic results exist as either the number of players or the number of strategies for each player diverges. In both cases the number of pure Nash equilibria converges to a Poisson distribution with parameter $1$. Generalizations of the simple case can be achieved either by removing the assumptions that all payoffs are independent or by allowing for discontinuities in their distribution functions, or both. In both cases the number of pure Nash equilibria diverges and some central limit theorem holds. To the best of our knowledge, the literature on this topic has focused on the distribution of the number of pure Nash equilibria but not on their social utility, i.e., the sum of the payoffs of all the players. The issue of efficiency of equilibria and its measure has received attention for more than a century and, at the end of the last millennium, has led to the definition of the price of anarchy as a pessimistic measure of inefficiency [@KouPap:STACS1999; @Pap:ACMSTC2001], followed by the price of stability as its optimistic counterpart [@SchSti:P14SIAM2003; @AnsDasKleTarWexRou:SIAMJC2008]. The price of anarchy is the ratio of the optimum social utility over the social utility of the worst equilibrium. The price of stability is the ratio of the optimum social utility over the social utility of the best equilibrium. It is interesting to study how these three quantities behave in a random game. Our contribution ---------------- We consider a model with $n$ players and two strategies for each player. Payoffs are assumed to be random. To be more precise, the payoff vectors corresponding to each strategy profile are assumed to be i.i.d. and payoffs within the same strategy profile $\boldsymbol{s}$ to be conditionally i.i.d. Bernoulli random variables, given a parameter $\Phi(\boldsymbol{s})$ distributed according to the probability law $\pi$ on $[0,1]$. A model with a similar dependence structure was considered in @RinSca:GEB2000, but there the payoffs have a Gaussian distribution.
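For small $n$ all three quantities can be computed by brute-force enumeration. The sketch below is illustrative only: `poa_pos` is a hypothetical helper returning the price of anarchy and price of stability of a game given as a dict from strategy profiles to payoff tuples, and the random game at the end is the i.i.d. Bernoulli case ($\pi$ a Dirac mass at $p$).

```python
import itertools, random

def social_utility(payoffs, s):
    """Sum of the players' payoffs at profile s, divided by n."""
    return sum(payoffs[s]) / len(payoffs[s])

def pure_nash_equilibria(payoffs, n):
    """Profiles at which no player gains by flipping her binary strategy."""
    eqs = []
    for s in itertools.product((0, 1), repeat=n):
        if all(payoffs[s][i] >= payoffs[s[:i] + (1 - s[i],) + s[i+1:]][i]
               for i in range(n)):
            eqs.append(s)
    return eqs

def poa_pos(payoffs, n):
    """(price of anarchy, price of stability), or None if no pure equilibrium."""
    profiles = list(itertools.product((0, 1), repeat=n))
    opt = max(social_utility(payoffs, s) for s in profiles)
    eqs = pure_nash_equilibria(payoffs, n)
    if not eqs:
        return None
    su = [social_utility(payoffs, s) for s in eqs]
    worst, best = min(su), max(su)
    poa = float("inf") if worst == 0 else opt / worst
    pos = float("inf") if best == 0 else opt / best
    return poa, pos

# i.i.d. Bernoulli(p) payoffs for n players with two strategies each
random.seed(3)
n, p = 4, 0.5
payoffs = {s: tuple(int(random.random() < p) for _ in range(n))
           for s in itertools.product((0, 1), repeat=n)}
result = poa_pos(payoffs, n)
```

With Bernoulli payoffs, ties occur with positive probability, so the weak-inequality equilibrium condition typically yields many equilibria, in line with the divergence discussed above.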
We will study the asymptotic behavior of the social utility in this game as $n\to\infty$. In particular, we focus our analysis on the optimal social utility, and on the social utility of the best, the worst, and the *typical* pure Nash equilibrium. As a preliminary step, we will consider the asymptotic behavior of the random number of pure Nash equilibria. We consider three relevant cases for the measure $\pi$. First we look at the case where the support of $\pi$ is the whole interval $[0,1]$ and we show that the asymptotic behavior of the number of pure Nash equilibria does not depend on $\pi$. Moreover, we show that in this case the asymptotic behaviors of the social utility of the optimum and of the best equilibrium coincide and have maximal efficiency, i.e., equal to 1. On the other hand, we show that the efficiency of the worst equilibrium depends on $\pi$ only through its mean. The same analysis is performed for the case in which $\pi$ is the Dirac mass at $p\in(0,1)$, which corresponds to i.i.d. payoffs. Finally, we deal with a model where the dependence within the profile is governed by a single parameter $q$ and perform the same asymptotic analysis as a function of $p$ and $q$. For each of these models we analyze the behavior of the best and worst equilibria as a function of the relevant parameters, showing some interesting irregularities. The techniques we use in this paper are standard in the probabilistic literature, and amount mostly to first and second moment analysis, large deviations and calculus. Nonetheless, a refined analysis of a perturbation of the large deviation rate of binomial random variables is required to provide precise asymptotic results on the phase transition mentioned in the abstract. Related literature ------------------ The distribution of the number of pure Nash equilibria in games with random payoffs has been studied for a number of years. Many papers assume the random payoffs to be i.i.d. from a continuous distribution. Under this hypothesis, several papers studied the asymptotic behavior of random games, as the number of strategies grows.
For instance, [@Gol:AMM1957] showed that in zero-sum two-person games the probability of having a pure Nash equilibrium goes to zero. He also briefly dealt with the case of payoffs with a Bernoulli distribution. [@GolGolNew:JRNBSB1968] studied general two-person games and showed that the probability of having at least one pure Nash equilibrium converges to $1-\operatorname{e}^{-1}$. [@Dre:JCT1970] generalized this result to the case of an arbitrary finite number of players. Other papers have looked at the asymptotic distribution of the number of pure Nash equilibria, again when the number of strategies diverges. [@Pow:IJGT1990] showed that, when the number of strategies of at least two players goes to infinity, the distribution of the number of pure Nash equilibria converges to a $\operatorname{\mathsf{{Poisson}}}(1)$. She then compared the case of continuous and discontinuous distributions. [@Sta:GEB1995] derived an exact formula for the distribution of the number of pure Nash equilibria in random games and obtained the result in [@Pow:IJGT1990] as a corollary. [@Sta:MOR1996] dealt with the case of two-person symmetric games and obtained Poisson convergence for the number of both symmetric and asymmetric pure Nash equilibria. In all the above models, the expected number of pure Nash equilibria is in fact 1. Under different hypotheses, this expected number diverges. For instance, [@Sta:MSS1997; @Sta:EL1999] showed that this is the case for games with vector payoffs and for games of common interest, respectively. [@RinSca:GEB2000] weakened the hypothesis of i.i.d. payoffs; that is, they assumed that payoff vectors corresponding to different strategy profiles are i.i.d., but they allowed some dependence within the same payoff vector. In this setting, they proved asymptotic results when either the number of players or the number of strategies diverges. More precisely, if each payoff vector has a multinormal exchangeable distribution with correlation coefficient $\rho$, then, if $\rho$ is positive, the number of pure Nash equilibria diverges and a central limit theorem holds.
[@Rai:P7YSM2003] used the Chen–Stein method to bound the distance between the distribution of the normalized number of pure Nash equilibria and a normal distribution. His result is very general, since it does not assume continuity of the payoff distributions. [@Tak:GEB2008] considered the distribution of the number of pure Nash equilibria in a random game with two players, conditionally on the game having nondecreasing best-response functions. This assumption greatly increases the expected number of equilibria. [@DasDimMos:AAP2011] extended the framework of games with random payoffs to graphical games. Players are vertices of a graph and their strategies are binary, like in our model. Moreover, their payoff depends only on their strategy and the strategies of their neighbors. The authors studied how the structure of the graph affects existence of pure Nash equilibria and they examined both deterministic and random graphs. [@AmiColSca:arXiv2019] showed that in games with $n$ players and two actions for each player, the key quantity that determines the behavior of the number of pure Nash equilibria is the probability that two different payoffs assume the same value. They then studied the behavior of best-response dynamics in random games. The issue of solution concepts in games with random payoffs has been explored by various authors in different directions. For instance, [@Coh:PNAS1998] studied the probability that Nash equilibria (both pure and
--- abstract: 'Effective electrostatic interactions between colloidal particles, coated with polyelectrolyte brushes and suspended in an electrolyte solvent, are described via linear response theory. The inner cores of the macroions are modeled as hard spheres, the outer brushes as spherical shells of continuously distributed charge, the microions (counterions and salt ions) as point charges, and the solvent as a dielectric continuum. The multi-component mixture of macroions and microions is formally mapped onto an equivalent one-component suspension by integrating out from the partition function the microion degrees of freedom. Applying second-order perturbation theory and a random phase approximation, analytical expressions are derived for the effective pair interaction and a one-body volume energy, which is a natural by-product of the one-component reduction. The combination of an inner core and an outer shell, respectively impenetrable and penetrable to microions, allows the interactions between macroions to be tuned by varying the core diameter and brush thickness. In the limiting cases of vanishing core diameter and vanishing shell thickness, the interactions reduce to those derived previously for star polyelectrolytes and charged colloids, respectively.' author: - 'H. Wang' - 'A. R. Denton' title: 'Effective Electrostatic Interactions in Suspensions of Polyelectrolyte Brush-Coated Colloids' --- Introduction ============ Polyelectrolytes [@PE1; @PE2] are ionizable polymers that dissolve in a polar solvent, such as water, through dissociation of counterions. Solutions of polyelectrolytes are complex mixtures of macroions and microions (counterions and salt ions) in which direct electrostatic interactions between macroions are screened by surrounding microions. Polyelectrolyte chains, grafted or adsorbed by one end to a surface at high concentration, form a dense brush that can significantly modify interactions between surfaces in solution. 
When attached to colloidal particles, [*e.g.*]{}, latex particles in paints or casein micelles in milk [@Tuinier02], polyelectrolyte brushes can stabilize colloidal suspensions by inhibiting flocculation [@Evans; @Hunter]. Biological polyelectrolytes (biopolymers), such as proteins in cell membranes, can modify intercellular and cell-surface interactions. Conformations and density profiles of polyelectrolyte (PE) brushes have been studied by a variety of experimental, theoretical, and simulation methods, including dynamic light scattering [@Guo-Ballauff01], small-angle neutron scattering [@Mir95; @Guenoun98; @Groenewegen00], transmission electron microscopy [@Groenewegen00], neutron reflectometry [@Tran99], surface adsorption [@Hariharan-Russel98], atomic force microscopy [@Mei-Ballauff03], self-consistent field theory [@Miklavic88; @Misra89; @Misra96; @Zhulina-Borisov97; @Gurovitch-Sens99; @Borisov01; @Klein-Wolterink-Borisov03], scaling theory [@Pincus91; @Schiessel-Pincus98; @Borisov01; @Klein-Wolterink-Borisov03], Poisson-Boltzmann theory [@Miklavic90], Monte Carlo simulation [@Miklavic90], and molecular dynamics simulation [@Seidel00; @Likos02]. Comparatively few studies have focused on electrostatic interactions between PE brush-coated surfaces. Interactions between neutral surfaces – both planar and curved (spherical) – with grafted PE brushes have been modeled using scaling theory [@Pincus91], while interactions between charged surfaces coated with oppositely-charged PEs have been investigated for planar [@Miklavic90] and spherical (colloidal) surfaces [@Podgornik95] via Monte Carlo simulation and a variety of theoretical methods. While microscopic models that include chain and microion degrees of freedom provide the most realistic description of PE brushes, simulation of such explicit models for more than one or two brushes can be computationally demanding. 
The purpose of the present paper is to develop an alternative, coarse-grained theoretical approach, based on the concept of effective interactions, which may prove useful for predicting thermodynamic and other bulk properties of suspensions of PE brush-coated colloids. Modeling each brush as a spherical shell of continuously distributed charge, we adapt linear response theory, previously developed for charged colloids [@Silbert91; @Denton99; @Denton00] and PEs [@Denton03], to derive effective electrostatic interactions. The theory is based on mapping the multi-component mixture onto an equivalent one-component system of “pseudo-macroions" by integrating out from the partition function the degrees of freedom of the microions. Within the theory, microions play three physically important roles: reducing (renormalizing) the bare charge on a macroion; screening direct Coulomb interactions between macroions; and generating a one-body volume energy. The volume energy – a natural by-product of the one-component reduction – contributes to the total free energy and can significantly influence thermodynamic behavior of deionized suspensions. Outlining the remainder of the paper, Sec. \[Model\] defines the model suspension of PE brush-coated colloids; Sec. \[Theory\] reviews the linear response theory; Secs. \[Analytical Results\] and \[Numerical Results\] present analytical and numerical results for counterion density profiles, effective pair interactions, and volume energies in bulk suspensions; and finally, Sec. \[Conclusions\] summarizes and concludes. Model {#Model} ===== The system of interest is modeled as a suspension of $N_m$ spherical, core-shell macroions of charge $-Ze$ (valence $Z$), core radius $a$, and PE brush shell thickness $l$ (outer radius $R=a+l$), and $N_c$ point counterions of charge $ze$ in an electrolyte solvent in volume $V$ at temperature $T$ (see Fig. \[PEbrush\]). 
The core is assumed to be neutral, the macroion charge coming entirely from the PE shell. Assuming a symmetric electrolyte and equal salt and counterion valences, the electrolyte contains $N_s$ point salt ions of charge $ze$ and $N_s$ of charge $-ze$. The microions thus number $N_+=N_c+N_s$ positive and $N_-=N_s$ negative, for a total of $N_{\mu}=N_c+2N_s$. Global charge neutrality in a bulk suspension constrains macroion and counterion numbers via $ZN_m=zN_c$. Number densities of macroions, counterions, and salt ions are denoted by $n_m$, $n_c$, and $n_s$, respectively. Within the primitive model of ionic liquids [@HM], the solvent is treated as a dielectric continuum of dielectric constant $\epsilon$, which acts only to reduce the strength of Coulomb interactions between ions. In PE solutions, the counterions can be classified into four regions: (1) those within narrow tubes enclosing the PE chains, of radius comparable to the Bjerrum length, $\lambda_B=e^2/(\epsilon k_{\rm B}T)$; (2) those outside of the tubes but still closely associated with the chains; (3) those not closely associated with the chains, but still inside of the PE shells; and (4) those entirely outside of the macroions. Counterions in regions (1)-(3) can be regarded as trapped by the macroions, while those in region (4) are free to move throughout the suspension. Within region (1), the counterions may be either condensed and immobilized on a chain or more loosely bound and free to move along a chain. These chain-localized (condensed or mobile) counterions tend to distribute uniformly along, and partially neutralize, the chains. In our model, counterions in regions (1) and (2) act to renormalize the bare macroion valence. The parameter $Z$ thus should be physically interpreted as an [*effective*]{} macroion valence, generally much lower than the bare valence (number of ionizable monomers). 
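Two lengths control the electrostatics of this model: the Bjerrum length defined above and the Debye screening length of the microions. A short numerical sketch (the salt concentration is an arbitrary illustrative value):

```python
import math

E = 1.602176634e-19        # elementary charge, C
EPS0 = 8.8541878128e-12    # vacuum permittivity, F/m
KB = 1.380649e-23          # Boltzmann constant, J/K
NA = 6.02214076e23         # Avogadro constant

def bjerrum_length(eps_r, T=298.0):
    """lambda_B = e^2/(eps kB T) of the text, written in SI units."""
    return E**2 / (4 * math.pi * EPS0 * eps_r * KB * T)

def debye_length(lam_b, c_salt_molar):
    """kappa^-1 = (8 pi lambda_B n_s)^(-1/2) for a symmetric 1:1 electrolyte."""
    n = c_salt_molar * 1e3 * NA        # ion pairs per m^3
    return 1.0 / math.sqrt(8 * math.pi * lam_b * n)

lam_b = bjerrum_length(78.5)           # ~0.71 nm for water at room temperature
kappa_inv = debye_length(lam_b, 1e-3)  # ~9.6 nm at 1 mM 1:1 salt
```

The Debye length sets the range over which the microions screen the macroion-macroion repulsion, which is why deionized suspensions behave so differently from salty ones.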
From the Manning counterion condensation criterion [@PE1], according to which the linear charge density of a PE chain saturates at $\sim e/\lambda_B$, we can expect the bare charge in an aqueous solution to be renormalized down by at least an order of magnitude. The local number density profiles of charged monomers in the PE brushes, $\rho_{\rm mon}(r)$, and of counterions, $\rho_c(r)$, are modeled here as continuous, spherically symmetric distributions. Charge discreteness can be reasonably neglected if we ignore structure on length scales shorter than the minimum separation between charges. Spherical symmetry of charge distributions can be assumed if intra-macroion chain-chain interactions, which favor isotropic distribution of chains, dominate over inter-macroion interactions, which favor anisotropy. The density profile of charged monomers depends on the conformations of chains in the PE shells. Electrostatic repulsion between charged monomers tends to radially stretch and stiffen PE chains. Indeed, neutron scattering experiments [@Guenoun98] on diblock (neutral-charged) copolymer micelles, as well as simulations [@Likos02], provide strong evidence that the arms of spherical PE brushes can exhibit rodlike behavior. Here we assume the ideal case of fully stretched chains of equal length – a porcupine conformation [@Pincus91] – and model the charged monomer number density profile by $$\rho_{\rm mon}(r
--- abstract: 'The paper is devoted to the relationship between the continuous Markovian description of Lévy flights developed previously (*Lubashevsky et al., Phys. Rev. E **79** (2009) 011110, **80** (2009) 031148; Eur. Phys. J. B **78** (2010) 207, **82** (2011) 189*) and their equivalent representation in terms of discrete steps of a wandering particle, a certain generalization of continuous time random walks. To simplify understanding the key points of the technique to be created, our consideration is confined to the one-dimensional model for continuous random motion of a particle with inertia. Its dynamics governed by stochastic self-acceleration is described as motion on the phase plane $\{x,v\}$ comprising the position $x$ and velocity $v=dx/dt$ of the given particle. A notion of random walks inside a certain neighborhood $\mathcal{L}$ of the line $v=0$ (the $x$-axis) and outside it is developed. It enables us to represent a continuous trajectory of particle motion on the plane $\{x,v\}$ as a collection of the corresponding discrete steps. Each of these steps matches one complete fragment of the velocity fluctuations originating and terminating at the “boundary” of $\mathcal{L}$. As demonstrated, the characteristic length of particle spatial displacement is mainly determined by velocity fluctuations with large amplitude, which endows the derived random walks along the $x$-axis with the characteristic properties of Lévy flights. Using the developed classification of random trajectories a certain parameter-free core stochastic process is constructed. Its peculiarity is that all the characteristics of Lévy flights similar to the exponent of the Lévy scaling law are no more than the parameters of the corresponding transformation from the particle velocity $v$ to the related variable of the core process. In this way the previously found validity of the continuous Markovian model for all the regimes of Lévy flights is explained. 
Based on the obtained results an efficient “single-peak” approximation is constructed. In particular, it enables us to calculate the basic characteristics of Lévy flights using the probabilistic properties of extreme velocity fluctuations and the shape of the most probable trajectory of particle motion within such extreme fluctuations.' address: 'University of Aizu, Ikki-machi, Aizu-Wakamatsu, Fukushima 965-8560, Japan' author: - Ihor Lubashevsky title: | Equivalent continuous and discrete realizations of Lévy flights:\ Model of one-dimensional motion of inertial particle --- Lévy flights ,nonlinear Markovian processes ,random motion trajectories ,extreme fluctuations ,power-law heavy tails ,time scaling law ,continuous time random walks Introduction ============ During the last two decades there has been a great deal of research into Lévy type stochastic processes in various systems (for a review see, e.g., Ref. [@CTRW]). According to the accepted classification of the Lévy type transport phenomena [@CTRW], Lévy flights are Markovian random walks characterized by the divergence of the second moment of walker displacement $x(t)$, i.e., $\left<[x(t)]^2\right> \to \infty$ for any time scale $t$. It is caused by a power-law asymptotics of the distribution function $\mathcal{P}(x,t)$. For example, in the one-dimensional case this distribution function exhibits the asymptotic behavior $\mathcal{P}(x,t)\sim [\overline{x}(t)]^\alpha/x^{1+\alpha}$ for $x\gg\overline{x}(t)$, where $\overline{x}(t)$ is the characteristic length of the walker displacements during the time interval $t$ and the exponent $\alpha$ belongs to the interval $0<\alpha<2$. The time dependence of the quantity $\overline{x}(t)$ obeys the scaling law $\overline{x}(t)\propto t^{1/\alpha}$. 
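The scaling law $\overline{x}(t)\propto t^{1/\alpha}$ can be checked numerically in the special case $\alpha=1$ (Cauchy steps), where the stability of the Cauchy distribution makes the characteristic displacement grow linearly with the number of steps even though the second moment diverges. A minimal Monte Carlo sketch (Python; the seed and walker count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def char_length(n_steps, n_walkers=50_000):
    """Median |x| after n_steps i.i.d. standard Cauchy increments."""
    x = np.zeros(n_walkers)
    for _ in range(n_steps):
        x += rng.standard_cauchy(n_walkers)
    return np.median(np.abs(x))

# For alpha = 1, xbar(t) ~ t: tenfold more steps, tenfold larger median
r = char_length(1000) / char_length(100)
print(f"scaling ratio = {r:.1f}")   # close to 10
```

The sample mean of $x^2$, by contrast, never converges as the walker count grows, which is the divergence $\left<[x(t)]^2\right>\to\infty$ quoted above.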
Lévy flights are met, for instance, in the motion of tracer particles in turbulent flows [@Swinney], the diffusion of particles in random media [@Bouchaud], human travel behavior and spreading of epidemics [@Brockmann], or economic time series in finance [@Stanley]. As far as the developed techniques of modeling such stochastic processes are concerned, worthy of mention are, in particular, the Langevin equation with Lévy noise (see, e.g., Ref. [@Weron]) and the corresponding Fokker-Planck equations [@Schertzer1; @Schertzer2; @CiteNew1; @CN100], the description of anomalous diffusion with power-law distributions of spatial and temporal steps [@Fogedby1; @Sokolov], Lévy flights in heterogeneous media [@Fogedby2; @Honkonen; @BrockmannGeisel; @citeNNN3; @citeNNN4] and in external fields [@BrockmannSokolov; @Fogedby3], constructing the Fokker-Planck equation for Lévy type processes in nonhomogeneous media [@CiteNew2; @CiteNew3; @CiteNew4], first passage time analysis and escaping problem for Lévy flights [@fptp1; @fptp1Chech; @fptp2; @fptp3; @fptp4; @fptp5; @fptp6; @CiteNew5; @CiteNew6; @citeNNN1; @citeNNN2]. One of the widely used approaches to coping with Lévy flights, especially in complex environment, is the so-called continuous time random walks (CTRW) [@CTRW1; @CTRW2]. It models, in particular, a general class of Lévy type stochastic processes described by the fractional Fokker-Planck equation [@CTRW3]. Its pivot point is the representation of a stochastic process at hand as a collection of random jumps (steps) $\{\delta\mathbf{x}, \delta t\}$ of a wandering particle in space and time as well. In the frameworks of the coupled CTRW the particle is assumed to move uniformly along a straight line connecting the initial and terminal points of one step. In this case the discrete collection of steps is converted into a continuous trajectory and the velocity $\mathbf{v}=\delta\mathbf{x}/\delta t$ of motion within one step is introduced. 
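A coupled CTRW of the kind just described can be sketched in a few lines: each step draws a heavy-tailed displacement, and the particle moves uniformly within the step, so long jumps take proportionally long times (Python; the tail exponent and speed are hypothetical illustration values):

```python
import numpy as np

rng = np.random.default_rng(1)

def coupled_ctrw(n_steps, alpha=1.5, speed=1.0):
    """Steps with Pareto tails (|dx| >= 1, density ~ |dx|^-(1+alpha));
    within each step the particle moves uniformly, so dt = |dx| / speed."""
    dx = (1.0 + rng.pareto(alpha, n_steps)) * rng.choice([-1.0, 1.0], n_steps)
    dt = np.abs(dx) / speed
    t = np.concatenate([[0.0], np.cumsum(dt)])
    x = np.concatenate([[0.0], np.cumsum(dx)])
    return t, x   # piecewise-linear trajectory through the nodes (t_i, x_i)

t, x = coupled_ctrw(1000)
assert np.allclose(np.abs(np.diff(x)) / np.diff(t), 1.0)  # uniform motion per step
```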
As a result, the given stochastic process is described by the probabilistic properties, e.g., of the collection of random variables $\{\mathbf{v},\delta t \}$. A more detailed description of particle motion lies beyond the CTRW model. Unfortunately, for Lévy flights fine details of the particle motion within one step can be important, especially in heterogeneous media or systems with boundaries, because of the divergence of the moment $\left<[\delta\mathbf{x}(\delta t)]^2\right>$. Broadly speaking, this is due to a Lévy particle being able to jump over a long distance during a short time. The fact that Lévy flights can exhibit nontrivial properties on scales of one step was demonstrated in Ref. [@CiteNew5], which studied the first passage time problem for Lévy flights based on the leapover statistics. Previously a new approach to tackling this problem was proposed [@we1; @we2; @we3; @we33]. It is based on the following nonlinear stochastic differential equation with white noise $\xi(t)$ $$\label{int:1} \tau\frac{dv}{dt} = - \lambda v + \sqrt{\tau\big(v_a^2+v^2\big)} \xi(t)$$ governing random motion of a particle wandering, e.g., in the one-dimensional space $\mathbb{R}_x$. Here $v=dx/dt$ is the particle velocity; the time scale $\tau$ characterizes the delay in variations of the particle velocity caused by the particle inertia; and $\lambda$ is a dimensionless friction coefficient. The parameter $v_a$ quantifies the relative contribution of the additive component $\xi_a(t)$ of the Langevin force with respect to the multiplicative one $v \xi_m(t)$, which are combined within one term $$\label{int:2} v_a\xi_a(t)+ v\xi_m(t)\Rightarrow\sqrt{\big(v_a^2+v^2\big)} \xi(t)\,.$$ Here we have not specified the type of the stochastic differential equation because in the given case all the types are interrelated via the renormalization of the friction coefficient $\lambda$. It should be noted that models similar to Eq. (\ref{int:1})
with the replacement (\ref{int:2}) can be classified as the generalized Cauchy stochastic process [@Konno] and have been employed to study the stochastic behavior of various nonequilibrium systems, in particular, lasers [@22], on-off intermittency [@26], economic activity [@27], a passive scalar field advected by fluid [@28], etc. Model (\ref{int:1}) generates continuous Markovian trajectories obeying the Lévy statistics on time scales $t\gg\tau$ [@we1; @we2; @we3]. Using a special singular perturbation technique [@we2], this was rigorously proved for the superdiffusive regime matching $1<\alpha<2$ [@we1; @we2] and also verified numerically for the quasiballistic ($\alpha = 1$) and superballistic ($0<\alpha<1$) regimes [@we3]. Moreover, the main expressions obtained for the distribution function $\mathcal{P}(x,t)$ and the scaling law $\overline{x}(t)$ within the interval $1< \alpha < 2$ turn out
--- abstract: 'A virtual graphical construction is made to show the difference between neutrino and anti-neutrino oscillations in the presence of CP violation with CPT conservation.' author: - | R. G. Moorhouse$^b$\ $^b$University of Glasgow, Glasgow G12 8QQ, U.K.\ title: Note on a Pattern from CP Violation in Neutrino Oscillations --- Introduction ============ There is interest in the possibility that CPT violation may occur and then show in neutrino oscillation experiments [@BL], [@CDK], [@MINOS]. However this may be, CP violation is long established and it is of importance to seek it in neutrino oscillation results. If one takes the conservative point of view that nature conserves CPT, and that there are 3 generations of neutrinos, then the consequences of CP violation in neutrino oscillation become more definite. In particular, if CP were conserved, $\nu$ transition probabilities would be the same as $\bar{\nu}$ transition probabilities, while the occurrence therein of CP violation makes these different [@Kayser]. This difference depends on the neutrino oscillation parameter $L/E$, $L$ being the distance of travel from creation to detection and $E$ the energy of the initial neutrino; it is also sensitive to the value of the small ratio of neutrino mass squared differences. There results a complicated 2-variable dependence, in addition to the linear dependence on the leptonic Jarlskog parameter $J_{lep}$ [@HSW]. Pattern for the difference between $\nu$ and $\bar\nu$ oscillations =================================================================== The input to the formula for neutrino transition probabilities is largely from the mixing matrix elements $U_{\alpha i}$, where $\alpha$ is one of the 3 flavour indices and $i$ one of the 3 mass eigenstate indices.
From these 9 elements plaquettes [@BD] can be constructed, these being phase-invariant products of 2 $U$ elements multiplied by products of 2 $U^{\star}$ elements which occur in transition probabilities $\nu_{\alpha} \to \nu_{\beta}$ of neutrino beams. Here $\alpha,\beta$ are flavour indices of beam neutrinos. The construction is as follows. With Greek letters denoting flavour indices $(e,\mu,\tau)$ and Roman letters mass eigenstate indices, there are 9 plaquettes, labelled $\Pi_{\alpha i}$: $$\Pi_{\alpha i} \equiv U_{\beta j}U^{\star}_{\beta k}U_{\gamma k}U^{\star}_{\gamma j} \label{119}$$ where $\alpha,\beta,\gamma$ are non-equal and in cyclic order and $i,j,k$ are also non-equal and in cyclic order. (The pattern discussed in this paper applies for an inverted hierarchy as well as for the normal hierarchy; that is, there is no necessary association between a particular $\alpha$ and a particular $i$.) Making use of the well-known formalism [@Kayser], the beam transition probability for $\nu_{\alpha} \to \nu_{\beta}$, $\alpha \neq \beta$, can be written as $$\begin{aligned} P(\nu_{\alpha} \to \nu_{\beta})= -4\sum_{i=1}^3 \Re(\Pi_{{\gamma}i})\sin^2((m_{{\nu}k}^2-m_{{\nu}j}^2) L/4E)\\ +2\sum_{i=1}^3 \Im(\Pi_{{\gamma}i})\sin((m_{{\nu}k}^2-m_{{\nu}j}^2) L/2E)\label{120}\end{aligned}$$ where $L$ is the length travelled by a neutrino of energy $E$ from creation to annihilation at detection. The survival probability, $P(\nu_{\alpha} \to \nu_{\alpha})$ (given in [@Kayser]), can be calculated from the transition probabilities above. So the $3 \times 3$ plaquette matrix $\Pi$ and the neutrino mass eigenstate squared differences carry all the information on transition and survival probabilities of a given beam. The last term in (\[120\]) is non-zero only when CP is not conserved. Indeed all nine $\Im \Pi_{{\alpha} i}$ are equal, and equal to the leptonic Jarlskog invariant [@HDS]: $$\Im \Pi_{{\alpha} i}=J_{lep}.$$ So $J_{lep}$, as usual, signals CP violation.
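The statement that all nine $\Im\Pi_{\alpha i}$ collapse to a single number can be verified directly by building a unitary mixing matrix and evaluating the plaquettes (Python sketch; the mixing angles and phase below are illustrative values, not a fit to data):

```python
import numpy as np

def pmns(th12, th13, th23, delta):
    """Standard-parametrization 3x3 mixing matrix."""
    s12, c12 = np.sin(th12), np.cos(th12)
    s13, c13 = np.sin(th13), np.cos(th13)
    s23, c23 = np.sin(th23), np.cos(th23)
    ep, em = np.exp(1j * delta), np.exp(-1j * delta)
    return np.array([
        [c12 * c13, s12 * c13, s13 * em],
        [-s12 * c23 - c12 * s23 * s13 * ep,
          c12 * c23 - s12 * s23 * s13 * ep, s23 * c13],
        [ s12 * s23 - c12 * c23 * s13 * ep,
         -c12 * s23 - s12 * c23 * s13 * ep, c23 * c13],
    ])

def plaquette(U, a, i):
    """Pi_{alpha i} = U_{beta j} U*_{beta k} U_{gamma k} U*_{gamma j},
    with (alpha, beta, gamma) and (i, j, k) in cyclic order."""
    b, g = (a + 1) % 3, (a + 2) % 3
    j, k = (i + 1) % 3, (i + 2) % 3
    return U[b, j] * np.conj(U[b, k]) * U[g, k] * np.conj(U[g, j])

U = pmns(0.59, 0.15, 0.84, 1.2)          # illustrative angles [rad]
ims = [plaquette(U, a, i).imag for a in range(3) for i in range(3)]
J = (np.sin(0.59) * np.cos(0.59) * np.sin(0.15) * np.cos(0.15)**2
     * np.sin(0.84) * np.cos(0.84) * np.sin(1.2))
assert np.allclose(ims, ims[0])          # all nine imaginary parts are equal
assert np.isclose(abs(ims[0]), abs(J))   # and their magnitude is |J_lep|
```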
With CPT invariance the transition probability for anti-neutrinos is $P(\bar{\nu}_{\alpha} \to \bar{\nu}_{\beta};\Pi )= P(\nu_{\alpha} \to \nu_{\beta};\Pi^\star)$ [@Kayser]. Thus the contribution of CP violation to anti-neutrino transitions is of the same magnitude but opposite sign to that in neutrino transitions, giving rise to an, in principle, measurable difference in the overall probability, since the CP-conserving contributions are the same. The part of the probability (\[120\]) arising from CP violation is $2J\xi$, where $$\xi= \sum_{i=1}^3 \sin((m_{{\nu}k}^2-m_{{\nu}j}^2)L/2E) \label{121}$$ This sum of sine functions (the sum of whose arguments is zero) may readily be transformed to $$\xi= 4\sin(x_d)\sin(y_d)\sin(x_d+y_d) \label{122}$$ $$\begin{aligned} (x_d,y_d)=(d_1L/4E,d_2L/4E) \\ d_1=(m_{{\nu}2}^2-m_{{\nu}1}^2), d_2=(m_{{\nu}3}^2-m_{{\nu}2}^2)\label{123}\end{aligned}$$ Now consider the function $$\Xi(x,y) \equiv 4\sin(x)\sin(y)\sin(x+y) \label{126}$$ where in $\Xi (x,y)$ the arguments $x,y$ are freely varying and not restricted as in $\xi$. This function $\Xi$ has multiple maxima and minima, with values $+3\sqrt3/2$ and $-3\sqrt3/2$, at arguments, say $x_m$ and $y_m$, which are integer multiples of $\pi/3$ (but obviously not spanning all such integer multiples). Given the mass squared differences, $\xi(L/E)$ varies only with $L/E$, and the above maxima and minima cannot generally be attained. However, one can distinguish regions of $L/E$ where relatively high values of $\xi$ are attained. These are, naturally, given by values of $(x_d,y_d)$ near the $(x_m,y_m)$ points of $\Xi$. These latter points can be located in the $(x,y)$ plane through the necessary condition that the first derivatives of $\Xi$ with respect to both $x$ and $y$ vanish there. A simple geometrical picture can be given as follows.
On the positive $x$, $y$ quadrant of the plane construct a square grid with neighbouring grid lines a distance $\pi/3$ apart, resulting in a pattern of squares of side $\pi/3$. All the maximum and minimum points of $\Xi$ are at intersection points of the grid lines and are given by $$\begin{aligned} (x_m,y_m)=(1+3l,1+3k)\pi/3 \label{127}\\ (x_m,y_m)=(2+3l,2+3k)\pi/3 \label{128}\end{aligned}$$ where $l$ and $k$ are any non-negative integers. The points $(\ref{127})$ have $\Xi=3\sqrt3/2$ and the points $(\ref{128})$ have $\Xi=-3\sqrt3/2$. It is near these special points in the $(x,y)$ plane that $\xi(L/E)$ (eqn. \[121\]) has numerically large values. Note that for seeking observation of CP violation using the difference between $\nu$ and $\bar{\nu}$ transitions it does not matter whether $\xi$ is positive or negative, so both maximum and minimum points of $\Xi$ are equally potentially important. As $L/E$ varies, the points $(x_d,y_d)$ (\[123\]) trace a straight line in the $(x,y)$ plane starting at $(0,0)$ and ascending as $L/E$ increases. This line of $\xi$ makes a small angle $\arctan(d_1/d_2)$ with the $y$-axis and passes through the archipelago of special points given by (\[127\],\[128\]). Points $(x_d,y_d)$ on the line of $\xi$ which are close to the $\Xi$ special points (\[127\],\[128\]) give numerically large values of $\xi$, and the associated values of $L/E$ signify neutrinos whose transitions contain a relatively large CP-violating part. To give an idea of how much of the plane has a value of $\Xi$ near a maximum or minimum, the value of $\Xi$ near the special points should be evaluated. Let $\delta$ be the distance between a near point and the special point which it is near to. Then near a maximum or minimum, $\Xi=\pm 3\sqrt3/2\,(1- \Delta)$ respectively, where $\Delta \leq 2\delta^2$. Thus
--- abstract: 'We performed terahertz magneto-optical spectroscopy of FeSe thin film to elucidate the charge carrier dynamics. The measured diagonal (longitudinal) and off-diagonal (Hall) conductivity spectra are well reproduced by a two-carrier Drude model, from which the carrier densities, scattering times and effective masses of electron and hole carriers are determined in a wide range of temperature. The hole density decreases below the structural transition temperature while electron density increases, which is attributed to the band structure modification in the electronic nematic phase. The scattering time of the hole carrier becomes substantially longer than that of the electron at lower temperature, which accounts for the increase of the positive dc Hall coefficient at low temperature.' author: - Naotaka Yoshikawa - Masayuki Takayama - Naoki Shikama - Tomoya Ishikawa - Fuyuki Nabeshima - Atsutaka Maeda - Ryo Shimano bibliography: - 'refs\_FeSeFaraday.bib' title: | Charge carrier dynamics of FeSe thin film investigated by\ terahertz magneto-optical spectroscopy --- Since the discovery of iron-based superconductors (FeSCs), tremendous research efforts have been devoted to revealing the pairing mechanism of superconductivity. Elucidating the interplay among the nematic order, antiferromagnetic spin order, and superconductivity in FeSCs is believed to provide a clue to understanding the emergent superconductivity. Among FeSCs, FeSe provides a unique playground to study the role of nematicity, because it lacks the long-range magnetic order in the nematic phase that appears below the tetragonal-orthorhombic structural transition temperature $T_{\mathrm{s}}\simeq \SI{90}{\kelvin}$, as evidenced by a significant electronic anisotropy from transport and nuclear magnetic resonance (NMR) spectral properties[@McQueen:2009hs; @Baek:2014gs; @Bohmer:2015fk].
While the superconducting transition temperature $T_{\mathrm{c}}$ of bulk FeSe is $\sim\SI{9}{K}$ at ambient pressure[@Hsu:2008ep], it shows a remarkable tunability. $T_{\mathrm{c}}$ increases to as high as under hydrostatic pressure[@Medvedev:2009ex; @Imai:2009hw; @Mizuguchi:2008bn; @Sun:2016dh], and single-layer FeSe grown on SrTiO$_3$ shows $T_{\mathrm{c}}$ up to [@Ge:2014hc; @He:2013cn; @Tan:2013jb]. Electron doping by ionic-gating in FeSe thin flakes enhances the superconductivity toward [@Lei:2016gl; @Shiogai:2015fw; @Hanzawa:2017fa; @Kouno:2018fp]. Intercalation also enhances [$T_{\mathrm{c}}$]{} by a similar doping effect in addition to an effect of separating the layers[@BurrardLucas:2012fm]. One important key to understand the [$T_{\mathrm{c}}$]{} increase of FeSe is considered to be a change of the Fermi surface topology. The high tunability of the electronic structure of FeSe achieved by various ways is related to its extremely small effective Fermi energy, which has been demonstrated in FeSe[@Kasahara:2014gt] as well as FeSe$_{1-x}$Te$_x$[@Lubashevsky:2012br; @Okazaki:2014im]. FeSe is a semimetal with the Fermi surface consisting of hole pockets around the Brillouin zone center $\mathit{\Gamma}$ point and electron pockets around the zone corner $M$ point. The low-energy electronic structure around the Fermi level has been experimentally revealed by angle-resolved photoemission spectroscopy (ARPES)[@Shimojima:2014kc; @Zhang:2015fx; @Watson:2015kn; @Nakayama:2014eo; @Fanfarillo:2016kz]. ARPES studies have also shown a significant modification of the band structure below $T_{\mathrm{s}}$ which is attributed to the development of an electronic nematicity. For the understanding of unconventional superconductivity in FeSe, it is also indispensable to investigate the charge carrier dynamics in normal and nematic phases as well as in superconducting phase. 
The Hall resistivity measured by magneto-transport shows an unusual temperature dependence with a sign change owing to the nearly compensated electron and hole carriers[@Kasahara:2014gt; @Watson:2015hx; @Sun:2016by; @Nabeshima:2018fi]. In bulk FeSe, the presence of a small number of highly mobile electron-like carriers in the nematic phase was also identified by the Hall resistivity[@Watson:2015hx] and mobility spectrum analysis[@Huynh:2014ch], which could be attributed to the Dirac-like dispersion near the $M$ point[@Tan:2016cd]. However, the complexity of the multi-band Fermi surfaces of FeSe makes it difficult to grasp the properties of charge carriers only by dc transport measurements. This is because the characterization of the carriers by dc transport measurements needs to assume some model, such as a compensated two-band model, where the compensated electron is assumed to have the same carrier density as the hole ($n_e=n_h$)[@Watson:2015hx; @Nabeshima:2018fi; @Huynh:2014ch; @Ovchenkov:2017fp]. Although a three-band model can also be used by including the nonlinear term when dc transport is measured up to high magnetic field, the characterization is not complete because the mobility, which is determined by dc transport in addition to carrier densities, is a function of effective mass and scattering time. For more detailed characterization of charge carriers, quantum oscillations are a well-established technique[@Watson:2015kn; @Watson:2015hx; @Terashima:2014ft; @Audouard:2015hp]. However, quantum oscillations can typically be observed only in bulk crystals grown by vapor transport techniques. Thus, the observation of quantum oscillations of FeSe has been limited to bulk single crystals and very low temperatures. Therefore, the properties of charge carriers of FeSe in a wide range of temperature, in particular across the structural phase transition temperature, have remained to be clarified.
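For reference, the low-field two-band expression that underlies the compensated analysis mentioned above can be written down and sanity-checked in a few lines (Python; carrier densities and mobilities are hypothetical, SI units):

```python
E_CHARGE = 1.602e-19   # C

def hall_coefficient(n_e, mu_e, n_h, mu_h):
    """Low-field two-band Hall coefficient:
    R_H = (n_h mu_h^2 - n_e mu_e^2) / (e (n_e mu_e + n_h mu_h)^2)."""
    num = n_h * mu_h**2 - n_e * mu_e**2
    den = E_CHARGE * (n_e * mu_e + n_h * mu_h)**2
    return num / den

n = 1e27   # m^-3, hypothetical compensated density n_e = n_h
assert hall_coefficient(n, 0.1, n, 0.3) > 0   # more mobile holes: R_H > 0
assert hall_coefficient(n, 0.3, n, 0.1) < 0   # more mobile electrons: R_H < 0
```

In the compensated case the sign of $R_H$ is set entirely by the mobility contrast, which is why dc transport alone cannot pin down densities, masses and scattering times separately.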
In this study, we investigate the charge dynamics in a thin-film FeSe by terahertz (THz) magneto-optical spectroscopy. The obtained diagonal (longitudinal) and off-diagonal (Hall) conductivity spectra are well described by two-carrier Drude model, from which the carrier densities, scattering times and effective masses of electron and hole carriers were independently determined. The temperature dependence of THz magneto-optical spectra revealed the significant change of the carrier densities below $T_{\mathrm{s}}$, which is plausibly attributed to the band structure modification in the nematic phase. The scattering time of the hole carrier substantially increases at lower temperature, which explains the peculiar temperature dependence of the dc Hall coefficient in FeSe thin films. ![(a) Temperature dependence of dc resistivity $\rho$ (red line) and $d\rho/dT$ (blue line) of the FeSe thin film. A kink anomaly at $T_{\mathrm{s}}$ is indicated by black arrow. Inset shows an enlarged view of the resistivity curve around [$T_{\mathrm{c}}$]{} $\sim\SI{3}{K}$. (b) Schematic of our THz magneto-spectroscopy. (c) Faraday rotation spectrum and (d) ellipticity spectrum induced by FeSe film at with the magnetic field of .](Fig1.pdf){width="\columnwidth"} A FeSe thin film with the thickness of 46 nm was fabricated on LaAlO$_3$ (LAO) substrate by pulsed-laser deposition method[@Imai:2010ez; @Imai:2010jl]. The temperature dependence of dc resistivity shows the superconducting transition at [$T_{\mathrm{c}}$]{} $\sim\SI{3}{K}$ defined by the zero resistivity (Fig. 1(a)). A kink anomaly in $d\rho/dT$ curve indicates the structural transition at $T_{\mathrm{s}}\sim \SI{80}{K}$. Figure 1(b) shows the schematic of our THz magneto-spectroscopy based on THz time-domain spectroscopy (THz-TDS)[@Ikebe:2008cq; @Ikebe:2009it; @Shimano:2013ez]. 
The output of a mode-locked Ti:sapphire laser with the pulse duration of 110 fs, center wavelength of 800 nm, and repetition rate of 76 MHz was focused onto a p-type $(111)$ InAs surface to generate THz pulses. Linearly polarized THz incident pulses were focused on the sample placed in a split-type superconducting magnet which can produce the magnetic field up to in Faraday configuration, that is, the magnetic field is parallel to the wavevector of the THz wave. The THz-wave was detected by electro-optical sampling with a $(110)$ ZnTe crystal. By measuring the waveform of the parallel polarization component defined as $E_x(t)$ and perpendicular polarization component $E_y (t)$ of the transmitted THz pulses by using wire-grid polarizers, the Faraday rotation angle $\theta$ and ellipticity $\eta$ induced by the FeSe film in the magnetic field can be obtained. Here, the approximated expression $E_y (\omega)/E_x (\omega)\sim \theta(\omega)+i\eta(\omega)$ for small Faraday rotation angle was used
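The two-carrier Drude analysis referred to above can be sketched as follows: each band contributes magneto-optical conductivities $\sigma_{xx}(\omega)$ and $\sigma_{xy}(\omega)$, and the totals are sums over bands. This is a minimal illustration in SI units (the sign convention and all parameter values are assumptions for the sketch, not fitted FeSe parameters):

```python
import numpy as np

E_CHARGE = 1.602e-19   # C

def drude_sigma(omega, B, n, m, tau, sign):
    """One-band magneto-Drude conductivities; sign = +1 (holes), -1 (electrons).
    sigma_xx = sigma_0 (1 - i w tau) / [(1 - i w tau)^2 + (w_c tau)^2],
    sigma_xy = sign * sigma_0 w_c tau / [(1 - i w tau)^2 + (w_c tau)^2],
    with sigma_0 = n e^2 tau / m and w_c = e B / m."""
    sigma0 = n * E_CHARGE**2 * tau / m
    wc = E_CHARGE * B / m
    d = (1 - 1j * omega * tau)**2 + (wc * tau)**2
    return sigma0 * (1 - 1j * omega * tau) / d, sign * sigma0 * wc * tau / d

# Weak-field dc check: a single hole band must give R_H ~ 1/(n e)
n, m, tau, B = 1e26, 1.8e-30, 1e-13, 0.01
sxx, sxy = drude_sigma(0.0, B, n, m, tau, +1)
R_H = (sxy / (B * sxx**2)).real
assert np.isclose(R_H, 1.0 / (n * E_CHARGE), rtol=1e-3)
```

For a two-carrier fit the electron and hole terms are simply added, and the film's Faraday angle then follows from $\theta+i\eta$ via thin-film transmission formulas.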
--- abstract: 'Recent results from the NA48/2 and NA62 kaon decay-in-flight experiments at CERN are presented. A precision measurement of the helicity-suppressed ratio $R_K$ of the $K^\pm\to e^\pm\nu$ and $K^\pm\to\mu^\pm\nu$ decay rates has been performed using the full dedicated data set collected by the NA62 experiment ($R_K$ phase); the result is in agreement with the Standard Model expectation. New measurements of the $K^\pm\to\pi^\pm\gamma\gamma$ decay at the NA48/2 and NA62 experiments provide further tests of the Chiral Perturbation Theory. A planned measurement of the branching ratio of the ultra-rare $K^+\to\pi^+\nu\bar\nu$ decay at 10% precision is expected to represent a powerful test of the Standard Model.' author: - 'Evgueni Goudzovski   for the NA48/2 and NA62 collaborations' title: 'Kaon experiments at CERN: recent results and prospects' --- INTRODUCTION {#introduction .unnumbered} ============ In 2003–04, the NA48/2 experiment collected at the CERN SPS the world's largest sample of charged kaon decays, with the main goal of searching for direct CP violation in the $K^\pm\to3\pi$ decays [@ba07]. In 2007–08, the NA62 experiment ($R_K$ phase) collected a large minimum bias data sample with the same detector but modified data taking conditions, with the main goal of measuring the ratio of the rates of the $K^\pm\to\ell^\pm\nu$ decays ($\ell=e,\mu$). The large statistics accumulated by both experiments has allowed studies of a range of rare $K^\pm$ decay modes. The main stage of the NA62 experiment, expected to start physics data taking in 2014, aims at measuring the $K^+\to\pi^+\nu\bar\nu$ decay rate. The recent results and prospects of these experiments are discussed here. BEAM AND DETECTOR IN 2003–08 ============================ The beam line has been designed to deliver simultaneous narrow momentum band $K^+$ and $K^-$ beams derived from the primary 400 GeV/$c$ protons extracted from the CERN SPS.
Central beam momenta of 60 GeV/$c$ and 74 GeV/$c$ have been used. The beam kaons decayed in a fiducial decay volume contained in a 114 m long cylindrical vacuum tank. A detailed description of the detector used in 2003–08 is available in [@fa07]. The momenta of charged decay products are measured in a magnetic spectrometer, housed in a tank filled with helium placed after the decay volume. The spectrometer comprises four drift chambers (DCHs), two upstream and two downstream of a dipole magnet which gives a horizontal transverse momentum kick of $120~\mathrm{MeV}/c$ or $265~\mathrm{MeV}/c$ to singly-charged particles. Each DCH is composed of eight planes of sense wires. A plastic scintillator hodoscope (HOD) producing fast trigger signals and providing precise time measurements of charged particles is placed after the spectrometer. A 127 cm thick liquid krypton (LKr) electromagnetic calorimeter located further downstream is used for lepton identification and as a photon veto detector. Its 13248 readout cells have a transverse size of approximately 2$\times$2 cm$^2$ each, without longitudinal segmentation. LEPTON UNIVERSALITY TEST WITH 2007–08 DATA ========================================== Decays of pseudoscalar mesons to light leptons ($P^\pm\to\ell^\pm\nu$, denoted $P_{\ell 2}$ below) are suppressed in the Standard Model (SM) by helicity considerations. Ratios of leptonic decay rates of the same meson can be computed very precisely: in particular, the SM prediction for the ratio $R_K=\Gamma(K_{e2})/\Gamma(K_{\mu 2})$ is [@ci07] $$\label{Rdef} R_K^\mathrm{SM} = \left(\frac{m_e}{m_\mu}\right)^2 \left(\frac{m_K^2-m_e^2}{m_K^2-m_\mu^2}\right)^2 (1 + \delta R_{\mathrm{QED}})=(2.477 \pm 0.001)\times 10^{-5},$$ where $\delta R_{\mathrm{QED}}=(-3.79\pm0.04)\%$ is an electromagnetic correction. 
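Equation (\ref{Rdef}) is easy to cross-check numerically. In the sketch below (Python, PDG-like masses in MeV) the helicity-suppression prefactor comes out near $2.57\times10^{-5}$, and applying the quoted $\delta R_{\mathrm{QED}}$ lands close to the published $(2.477\pm0.001)\times10^{-5}$; the residual per-mille difference reflects the fuller radiative treatment of [@ci07], so the value below is indicative only.

```python
m_e, m_mu, m_K = 0.510999, 105.658, 493.677   # MeV
dR_QED = -0.0379                              # correction quoted in the text

R0 = (m_e / m_mu)**2 * ((m_K**2 - m_e**2) / (m_K**2 - m_mu**2))**2
R_K = R0 * (1 + dR_QED)
print(f"R0 = {R0:.4e}, R_K = {R_K:.4e}")   # R0 ~ 2.569e-05
```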
Within extensions of the SM involving two Higgs doublets, $R_K$ is sensitive to lepton flavour violating effects induced by loop processes with the charged Higgs boson ($H^\pm$) exchange [@ma06]. A recent study [@gi12] has concluded that $R_K$ can be enhanced by ${\cal O}(1\%)$ within the Minimal Supersymmetric Standard Model. However, the potential new physics effects are constrained by other observables such as $B_s\to\mu^+\mu^-$ and $B_u\to\tau\nu$ decay rates [@fo12]. On the other hand, $R_K$ is sensitive to the neutrino mixing parameters within SM extensions involving a fourth generation of quarks and leptons [@la10]. The analysis strategy is based on counting the numbers of reconstructed $K_{e2}$ and $K_{\mu 2}$ candidates collected concurrently. Therefore the analysis does not rely on the absolute beam flux measurement, and several systematic effects cancel at first order. The study is performed independently for 40 data samples (10 bins of reconstructed lepton momentum and 4 samples with different data taking conditions) by computing the ratio $R_K$ as $$R_K = \frac{1}{D}\cdot \frac{N(K_{e2})-N_{\rm B}(K_{e2})}{N(K_{\mu2}) - N_{\rm B}(K_{\mu2})}\cdot \frac{A(K_{\mu2})}{A(K_{e2})} \cdot \frac{f_\mu\times\epsilon(K_{\mu2})} {f_e\times\epsilon(K_{e2})}\cdot\frac{1}{f_\mathrm{LKr}}, \label{eq:rkcomp}$$ where $N(K_{\ell 2})$ are the numbers of selected $K_{\ell 2}$ candidates $(\ell=e,\mu)$, $N_{\rm B}(K_{\ell 2})$ are the numbers of background events, $A(K_{\mu 2})/A(K_{e2})$ is the geometric acceptance correction, $f_\ell$ are the efficiencies of $e$/$\mu$ identification, $\epsilon(K_{\ell 2})$ are the trigger efficiencies, $f_\mathrm{LKr}$ is the global efficiency of the LKr calorimeter readout (which provides the information used for electron identification), and $D$ is the downscaling factor of the $K_{\mu2}$ trigger. The data sample is characterized by high values of $f_\ell$ and $\epsilon(K_{\ell 2})$ well above 99%. 
A Monte Carlo (MC) simulation is used to evaluate the acceptance correction and the geometric part of the acceptances for most background processes entering the computation of $N_B(K_{\ell 2})$. Particle identification, trigger and readout efficiencies and the beam halo background are measured directly from control data samples. Two selection criteria are used to distinguish $K_{e2}$ and $K_{\mu2}$ decays. Kinematic identification is based on the reconstructed squared missing mass assuming the track to be an electron or a muon: $M_{\mathrm{miss}}^2(\ell) = (P_K - P_\ell)^2$, where $P_K$ and $P_\ell$ ($\ell = e,\mu$) are the kaon and lepton 4-momenta (Fig. \[fig:mm2\]). A selection condition $M_1^2<M_{\mathrm{miss}}^2(\ell)<M_2^2$ is applied; $M_{1,2}^2$ vary across the lepton momentum bins depending on resolution. Lepton type identification is based on the ratio $E/p$ of energy deposit in the LKr calorimeter to track momentum measured by the spectrometer. Particles with $(E/p)_{\rm min}<E/p<1.1$ ($E/p<0.85$) are identified as electrons (muons), where $(E/p)_{\rm min}$ is 0.90 or 0.95, depending on momentum. The numbers of selected $K_{e2}$ and $K_{\mu 2}$ candidates are 145,958 and $4.2817\times 10^7$ (the latter pre-scaled at trigger level). The background contamination in the $K_{e2}$ sample has been estimated by MC simulations and, where possible, direct measurements to be $(10.95\pm0.27)\%$. The largest background contribution is the $K_{\mu2}$ decay with a mis-identified muon via the ‘catastrophic’ bremsstrahlung process in the LKr. To reduce the uncertainty due to background subtraction, the muon mis-identification probability $P_{\mu e}$ has been measured as a function of momentum using dedicated data samples.
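The kinematic separation can be made concrete in the kaon rest frame: reconstructing a genuine $K_{\mu2}$ track under the electron mass hypothesis shifts $M_{\mathrm{miss}}^2$ from zero to roughly $+m_\mu^2$. A sketch (Python; lab-frame values depend on the track momentum, so this only illustrates the sign and scale of the shift):

```python
import math

m_e, m_mu, m_K = 0.000511, 0.105658, 0.493677   # GeV

def mmiss2(m_true, m_assumed):
    """M_miss^2 = (P_K - P_l)^2 for K -> l nu in the kaon rest frame,
    with the track reconstructed under an assumed lepton mass."""
    p = (m_K**2 - m_true**2) / (2.0 * m_K)     # two-body lepton momentum
    E = math.sqrt(p**2 + m_assumed**2)
    return (m_K - E)**2 - p**2

assert abs(mmiss2(m_mu, m_mu)) < 1e-12         # right hypothesis: m_nu^2 = 0
print(f"{mmiss2(m_mu, m_e):.4f} GeV^2")        # K_mu2 under e hypothesis
```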
The contributions to the systematic uncertainty of the result include the uncertainties on the backgrounds, helium purity in the spectrometer tank (which influences the detection efficiency via bremsstrahlung and scattering), beam simulation, spectrometer alignment, particle
--- abstract: 'We extend to a generalized pseudoeffect algebra (GPEA) the notion of the exocenter of a generalized effect algebra (GEA) and show that elements of the exocenter are in one-to-one correspondence with direct decompositions of the GPEA; thus the exocenter is a generalization of the center of a pseudoeffect algebra (PEA). The exocenter forms a boolean algebra and the central elements of the GPEA correspond to elements of a sublattice of the exocenter which forms a generalized boolean algebra. We extend to GPEAs the notion of central orthocompleteness, prove that the exocenter of a centrally orthocomplete GPEA (COGPEA) is a complete boolean algebra and show that the sublattice corresponding to the center is a complete boolean subalgebra. We also show that in a COGPEA, every element admits an exocentral cover and that the family of all exocentral covers, the so-called exocentral cover system, has the properties of a hull system on a generalized effect algebra. We extend the notion of type determining (TD) sets, originally introduced for effect algebras and then extended to GEAs and PEAs, to GPEAs, and prove a type-decomposition theorem, analogous to the type decomposition of von Neumann algebras.' address: 'Department of Mathematics and Statistics, Univ. of Massachusetts, Amherst, MA, USA; Štefánikova 49, 814 73 Bratislava, Slovakia' author: - 'David J. Foulis, Sylvia Pulmannová and Elena Vinceková' title: The exocenter and type decomposition of a generalized pseudoeffect algebra --- Introduction {#sc:Intro} ============ Our purpose in this article is to define and study extensions to generalized pseudoeffect algebras of the notions of the center, central orthocompleteness, central cover, type determining sets and type decompositions for an effect algebra, resp. for a pseudoeffect algebra (see [@FPType; @COEA; @ExoCen; @CenGEA; @TDPA; @GFP]). 
Effect algebras (EAs) [@FandB] were originally introduced as a basis for the representation of quantum measurements [@BLM], especially those that involve fuzziness or unsharpness. Special kinds of effect algebras include orthoalgebras, MV-algebras, Heyting MV-algebras, orthomodular posets, orthomodular lattices, and boolean algebras. An account of the axiomatic approach to quantum mechanics employing EAs can be found in [@DvPuTrends]. Several authors have studied or employed algebraic structures that, roughly speaking, are EAs “without a largest element." These studies go back to M.H. Stone’s work [@Stone] on generalized boolean algebras; later M.F. Janowitz [@Jan] extended Stone’s work to generalized orthomodular lattices. More recent developments along these lines include [@FandB; @HedPu; @KR; @KCh; @MI; @PV07; @Zdenka99; @W]. The notion of a (possibly) non-commutative effect algebra, called a pseudoeffect algebra, was introduced and studied in [@DV1; @DV2; @D]. Whereas a prototypic example of an effect algebra is the order interval from $0$ to a positive element in a partially ordered abelian group, an analogous interval in a partially ordered non-commutative group is a prototype of a pseudoeffect algebra. Pseudoeffect algebras “without a largest element", called generalized pseudoeffect algebras, also have been studied in the literature [@DvVepo; @DvVegen; @PVext; @XieLi]. The classic decomposition of a von Neumann algebra as a direct sum of subalgebras of types I, II and III [@MvN], which plays an important role in the theory of von Neumann algebras, is reflected by a direct sum decomposition of the complete orthomodular lattice (OML) of its projections. The type-decomposition for a von Neumann algebra is dependent on the von Neumann-Murray dimension theory, and likewise the early type-decomposition theorems for OMLs were based on the dimension theories of L. Loomis [@L] and of S. Maeda [@M]. 
Decompositions of complete OMLs into direct summands with various special properties were obtained in [@CChM; @K; @R] without explicitly employing lattice dimension theory. More recent and considerably more general results on type-decompositions based on dimension theory can be found in [@GW]. Dimension theory for effect algebras was developed in [@HandD]. As a continuation of the aforementioned work, the theory of so called type determining sets was introduced and applied, first to obtain direct decompositions for centrally orthocomplete effect algebras [@FPType; @COEA], and later for centrally orthocomplete pseudoeffect algebras [@TDPA]. While direct decompositions of effect algebras and pseudoeffect algebras are completely described by their central elements [@D; @GFP], for the generalized structures without a top element, we need to replace the center by the so called exocenter, which is composed of special endomorphisms, resp. ideals [@ExoCen; @Je00]. The present paper is organized as follows. In Section \[sc:GPEAs\], we introduce basic definitions and facts concerning generalized pseudoeffect algebras (GPEAs). In Section \[sc:ExoCenter\] we introduce the notion of the exocenter of a GPEA and study its properties. Section \[sc:CenterGPEA\] is devoted to central elements in a GPEA and relations between the center and the exocenter. The notion of central orthocompleteness is extended to GPEAs in Section \[sc:CO\] where it is shown that the center of a centrally orthocomplete GPEA (COGPEA) is a complete boolean algebra. In Section \[sc:ExoCenCover\] we introduce the exocentral cover, which extends the notion of a central cover for an EA. In Section \[sc:TDsets\], we develop the theory of type determining sets for GPEAs and show some examples. Finally, in Section \[sc:TypeDecomp\], we develop the theory of type decompositions of COGPEAs into direct summands of various types. 
We note that COGPEAs are, up to now, the most general algebraic structures for which the theory of type determining sets has been applied to obtain direct decompositions. Generalized pseudoeffect algebras {#sc:GPEAs} ================================= We abbreviate ‘if and only if’ as ‘iff’ and the notation $:=$ means ‘equals by definition’. \[def:gpea\] A *generalized pseudoeffect algebra* (GPEA) is a partial algebraic structure $(E,\oplus,0)$, where $\oplus$ is a partial binary operation on $E$ called the *orthosummation*, $0$ is a constant in $E$ called the *zero element*, and the following conditions hold for all $a,b,c\in E$: 1. (*associativity*) $(a\oplus b)$ and $(a\oplus b)\oplus c$ exist iff $b\oplus c$ and $a\oplus (b\oplus c)$ exist and in this case $(a\oplus b)\oplus c=a\oplus (b\oplus c)$. 2. (*conjugacy*) If $a\oplus b$ exists, then there are elements $d,e\in E$ such that $a\oplus b=d\oplus a=b\oplus e$. 3. (*cancellation*) If $a\oplus b=a\oplus c$, or $b \oplus a=c\oplus a$, then $b=c$. 4. (*positivity*) If $a\oplus b=0$, then $a=b=0$. 5. (*zero element*) $a\oplus 0$ and $0\oplus a$ always exist and are both equal to $a$. As a consequence of (GPEA3), the elements $d$ and $e$ in (GPEA2) are uniquely determined by $a$ and $b$. Following the usual convention, we often refer to a GPEA $(E,\oplus,0)$ simply as $E$. If $E$ and $F$ are GPEAs, then a mapping $\phi\colon E\to F$ is a *GPEA-morphism* iff, for all $a,b\in E$, if $a\oplus b$ exists in $E$, then $\phi(a)\oplus\phi(b)$ exists in $F$ and $\phi(a\oplus b) =\phi(a)\oplus \phi(b)$. If $\phi\colon E\to F$ is a bijective GPEA-morphism and $\phi\sp{-1}\colon F\to E$ is also a GPEA-morphism, then $\phi$ is a *GPEA-isomorphism*. In what follows, $(E,\oplus,0)$ is a generalized pseudoeffect algebra. In general, lower case Latin letters $a,b,c,...,x,y,z$, with or without subscripts, will denote elements of $E$. If we write an equation involving an orthosum, e.g. 
$x\oplus y=z$, we tacitly assume its existence. \[df:leqetc\] The relation $\leq$
--- abstract: 'A novel multiscale method for non M-matrices using Multiscale Restricted Smoothed Basis (MsRSB) functions is presented. The original MsRSB method is enhanced with a filtering strategy enforcing M-matrix properties to enable the robust application of MsRSB as a preconditioner. Through applications to porous media flow and linear elastic geomechanics, the method is proven to be effective for scalar and vector problems with multipoint finite volume (FV) and finite element (FE) discretization schemes, respectively. Realistic complex (un)structured two- and three-dimensional test cases are considered to illustrate the method’s performance.' address: - 'Department of Energy Resources Engineering, Stanford University, Stanford, CA, USA' - SINTEF Digital - 'Norwegian University of Science and Technology, Department of Mathematical Sciences' - 'Atmospheric, Earth, and Energy Division, Lawrence Livermore National Laboratory, Livermore, CA, U.S.A.' author: - Sebastian BM Bosma - Sergey Klevtsov - 'Olav M[ø]{}yner' - Nicola Castelletto bibliography: - 'main\_arXiv.bib' title: 'Enhanced multiscale restriction-smoothed basis (MsRSB) preconditioning with applications to porous media flow and geomechanics' --- Multiscale methods ,MsRSB ,Multipoint flux approximation ,Finite element method ,Preconditioning ,Geomechanics Introduction ============ Large-scale numerical simulations are often required to understand and predict real world dynamics. In many applications, the use of high-resolution grids is required to characterize the heterogeneity of the material properties and the geometric complexity of the domains. Such simulations impose severe computational challenges and motivate the need for efficient solution schemes. Attractive multilevel strategies to achieve this are multiscale methods [@EfeHou09]. 
In this paper, we propose a generalization of the multiscale restriction-smoothed basis method (MsRSB) recently put forward in [@MsRSB_Moyner2016], and investigate its use as an effective preconditioner for multipoint flux approximation finite volume (FV) and finite element (FE) discretizations of second-order elliptic problems. Specifically we focus on applications to porous media flow and linear elastic geomechanics. The original idea underlying multiscale discretization methods for heterogeneous second-order elliptic problems can be traced back four decades [@StrFed79; @BabOsb83]. In essence, these methods aim at constructing accurate coarse-scale problems that preserve information of fine scale heterogeneity and can be solved at low computational cost. This is accomplished by numerically computing multiscale basis functions, which are local solutions of the original problem, that are used to both: (i) construct the coarse-scale problem, and (ii) interpolate the coarse-scale solution back to the fine-scale. Various methods to obtain these basis functions have been developed, for example generalized finite-element (GFE) methods [@BabCalOsb94], multiscale finite-element (MsFE) methods [@HouWu97], numerical-subgrid upscaling [@Arb02], multiscale mixed finite-element (MsMFE) methods [@CheHou03], multiscale finite-volume (MsFV) methods [@MSFV_Jenny2003], multiscale mortar mixed finite-element (MsMMFE) methods [@Arb_etal07], multilevel multiscale mimetic (M^3^) methods [@LipMouSvy08], multiscale mixed/mimetic finite-element (MsMFEM) [@AarKrogLie08] and generalized multiscale finite element (GMsFE) [@EfeGalHou13] methods, to name a few. In the geoscience community, multiscale methods have been extensively applied both as single-pass [@MSFV_Jenny2003] and iterative schemes [@NorBjo08; @Haj_etal08] to resolve some of the limitations of existing upscaling methods. They have established a solid framework for simulating complex subsurface flow processes, e.g. 
[@JuaTch08; @Nor09; @Hel_etal10; @ZhoTch12; @WanHajTch14; @Koz_etal16; @Cus_etal15; @ChuEfeLee15; @ParEdw15; @TenKobHaj16; @Lie_etal16; @LieMoyNat17; @Cus_etal18]. Multiscale methods for linear elastic problems have focused primarily on the derivation of accurate coarse space basis functions which are robust with respect to material property heterogeneities and enable scalable performance [@BucIliAnd13; @BucIliAnd14; @Spi_etal14; @MultiscaleFEM_Castelleto2017; @ChuLee19]. Applications to the poroelasticity equations include [@ZhaFuWu09; @BroVas16a; @BroVas16b; @DanGanWhe18; @Akk_etal18; @SokBasHaj19; @Cas_etal19]. The MsRSB method was proposed in the context of FV simulation for fluid flow in highly heterogeneous porous media [@MsRSB_Moyner2016]. Based on a two-grid approach, the MsRSB method constructs multiscale basis functions through restricted smoothing on the fine-scale matrix. In more detail, the basis functions, which are consistent with the local differential operators, are constructed with a cheap relaxation scheme, i.e. a weighted Jacobi iteration, similar to approaches used in smoothed aggregation multigrid methods [@VanManBre96; @VanManBre01; @Bre_etal05]. An important advantage of MsRSB is that smoothing by relaxation provides a great deal of flexibility in handling unstructured grids, an essential requirement, for example, in applications involving complex geological structures. MsRSB has been widely proven and implemented in open source and commercial simulators using a linear two-point flux approximation (TPFA) [@Lie_etal16]. Because of the two-point structure, the linear TPFA scheme is monotone [@Dro14], i.e. it preserves the positivity of the differential solution [@BerPle94], and leads to an M-matrix with a small stencil. This is the reason why linear TPFA is the scheme of choice in most engineering software. 
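The restricted-smoothing construction described above can be illustrated on a small dense matrix: each column of an initial partition-of-unity prolongation is relaxed with weighted Jacobi on the fine-scale matrix, with updates masked outside that basis function's support. Re-normalizing all rows every sweep (rather than only at support boundaries, as in the full MsRSB algorithm) is a simplification for illustration:

```python
import numpy as np

def msrsb_basis(A, P0, support, omega=2.0 / 3.0, n_iter=50):
    """Smooth prolongation columns with weighted Jacobi on A,
    restricted to the prescribed support mask, keeping the basis
    functions a partition of unity (rows sum to one)."""
    d_inv = 1.0 / np.diag(A)
    P = P0.astype(float).copy()
    for _ in range(n_iter):
        P -= omega * (d_inv[:, None] * (A @ P)) * support
        P /= P.sum(axis=1, keepdims=True)  # restore partition of unity
    return P
```

On a 1D Laplacian with two overlapping supports, the smoothed functions decay smoothly across the overlap while rows still sum to one.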
Unfortunately, the consistency of TPFA is not guaranteed for arbitrary grids and anisotropic permeability distributions, potentially leading to inaccurate results [@MRST]. Therefore, other FV methods such as multipoint flux approximation (MPFA) and/or nonlinear schemes [@Dro14; @TerMalTch17] must be considered to achieve consistent fluxes. To date, MsRSB has not been combined with MPFA or other consistent discretizations. Hence, in this paper, we focus on enhancing MsRSB to enable the solution of second-order elliptic problems using discretization methods that do not result in an M-matrix. Based on the MPFA-O method [@MPFA_Aavatsmark], we show that the MsRSB basis construction as presented in [@MsRSB_Moyner2016] can fail due to divergent iterations for an anisotropic diffusion problem. We propose a variant of the original MsRSB approach that restores the desired behavior by enforcing M-matrix properties based on a filtering strategy. We develop the new method focusing on FV discretizations for porous media single-phase flow, and extend its use to vector elliptic problems by targeting FE-based simulation of linear elastic geomechanics. The paper is structured as follows. First, the original multiscale restriction-smoothed basis method is briefly reviewed in Section \[sec:MsRSB\]. Second, MsRSB for an MPFA flow discretization is analyzed and the novel approach is proposed in Section \[sec:MPFA\]. Next, the proposed method is extended to geomechanics in Section \[sec:geomechanics\]. Challenging two- and three-dimensional experiments are presented to demonstrate properties, robustness and scalability of the method throughout Sections \[sec:MPFA\] and \[sec:geomechanics\], including comparisons to existing methods and published results. Finally, the paper is concluded and future work is specified. 
The Multiscale Restriction-Smoothed Basis method (MsRSB) {#sec:MsRSB} ======================================================== We propose a two-level preconditioning framework based on MsRSB for accelerating iterative Krylov methods to solve linear systems of the form: $${A} {\mathbf{u}} = {\mathbf{f}}, \label{eq:linsys_general}$$ where the coefficient matrix ${A} \in \mathbb{R}^{n \times n}$ arises from a finite volume (FV) or finite element (FE) discretization of a scalar or vector second-order elliptic problem. Furthermore, ${\mathbf{u}} = \{ u_i \}_{i=1}^{n} \in \mathbb{R}^{n}$ is the solution vector containing the unknown degrees of freedom, and ${\mathbf{f}} = \{ f_j \}_{j=1}^{n} \in \mathbb{R}^{n}$ is the discrete forcing term. In this work we develop the method and illustrate its performance focusing on two simple but representative models routinely employed in practical simulation of subsurface processes: (i) the incompressible single-phase flow equation, and (ii) the linear elastostatic equations. For the flow problem we will concentrate on FV fine-
--- abstract: | Zero-shot video classification for fine-grained activity recognition has largely been explored using methods similar to its image-based counterpart, namely by defining image-derived attributes that serve to discriminate among classes. However, such methods do not capture the fundamental dynamics of activities and are thus limited to cases where static image content alone suffices to classify an activity. For example, reversible actions such as entering and exiting a car are often indistinguishable. In this work, we present a framework for straightforward modeling of activities as a state machine of dynamic attributes. We show that encoding the temporal structure of attributes greatly increases our modeling power, allowing us to capture action direction, for example. Further, we can extend this to activity detection using dynamic programming, providing, to our knowledge, the first example of zero-shot joint segmentation and classification of complex action sequences in a larger video. We evaluate our method on the Olympic Sports dataset where our model establishes a new state of the art for standard zero-shot-learning (ZSL) evaluation as well as outperforming all other models in the inductive category for general (GZSL) zero-shot evaluation. Additionally, we are the first to demonstrate zero-shot decoding of complex action sequences on a widely used surgical dataset. Lastly, we show that we can even eliminate the need to train attribute detectors by using off-the-shelf object detectors to recognize activities in challenging surveillance videos. author: - | Jonathan D. Jones $\textsuperscript{*}$ Tae Soo Kim$\textsuperscript{*}$ Michael Peven[^1] Jin Bai\ Zihao Xiao Yi Zhang Weichao Qiu Alan Yuille Gregory D. Hager\ Johns Hopkins University\ 3400 N. 
Charles Street, Baltimore, MD, USA\ [{jdjones,tkim60,mpeven,jbai12,zxiao10,yzhan286,wqiu7,ayuille1,hager}@jhu.edu]{} bibliography: - 'egbib.bib' title: 'Zero-shot Recognition of Complex Action Sequences' --- Introduction ============ When learning activity recognition models using deep neural networks, most approaches assume a fully supervised problem setting where 1) all categories of query actions are known *a priori*, 2) example instances from such categories are made available during training and 3) the pre-defined closed set of labels are supported by a large and relatively balanced set of examples. Taken together, this has led to an emphasis on ever more advanced regression-style approaches, whereby a neural network model is trained and scored on held-out examples from the same label set in an end-to-end bottom-up fashion. However, many real-world applications do not fit this model because they are naturally “open-set” problems where new labels may be defined at test time, and/or are fine-grained and compositional so that a combinatorial number of possible activity labels may exist, and/or may be data poor so that sufficient labeled training data may not exist for the desired use case. For example, in video surveillance, the goal is often to detect specific unusual activities in a zero-shot manner, *e.g.* “locate instances where a light brown package is being placed under a car by a man wearing a gray parka.” To successfully answer such a structured query, the ability of a zero-shot system to compose together detectable actor-object relational attributes in an on-demand fashion is highly desired. ![image](figures/money_2.png){width="1.0\linewidth"} In this paper, we present a framework for zero-shot recognition of complex action sequences that models an activity as a sequence of dynamic action signatures. 
In our framework, an action signature is a particular configuration of visually detectable entities, such as attributes, objects and relations, that describe a temporally local segment of a video. A fundamental observation in our work is that such configurations are often *dynamic*, rather than static—*i.e.* an action’s attributes change over time in a characteristic manner. For example, the act of *a person entering a vehicle* as shown in Figure \[fig:money\] can be defined as “a person near a vehicle moving into a vehicle”. This can be described as the attribute sequences `a person exists` followed by `a person does not exist` and `a vehicle exists`. In the remainder of this paper, we show that dynamic action signatures provide a powerful semantic label embedding for zero-shot activity classification and establish a new state-of-the-art zero-shot classification benchmark on a standard zero-shot-learning (ZSL) dataset, Olympic Sports [@olympic-sports-2010]. We also use our methodology to impose constraints *on the predicted action sequences themselves*, leading to the first zero-shot segmentation results on complex action sequences in a challenging surgical dataset [@jigsaws-2014], and establish, for the first time, a zero-shot baseline result that is competitive with end-to-end trained methods. Finally, in section \[sec:diva\] we eliminate any kind of supervised training on the dataset from which unseen (test) cases are drawn by using publicly available, off-the-shelf object detectors to provide action signatures for video surveillance. We combine this with our activity models to provide a true *de novo* model of an activity. We provide both quantitative and qualitative results of our zero-shot framework using these “on the fly” models on the challenging DIVA dataset[^2], which contains fine-grained human-object interactions under a real world video surveillance setting. 
In summary, the main contributions of the paper are: - A zero-shot classification of complex action sequences with dynamic action signatures which establishes a new state-of-the-art on Olympic Sports [@olympic-sports-2010] dataset. We outperform all other methods for the ZSL evaluation regardless of training assumptions (inductive/transductive). - To the best of our knowledge, we are the first to demonstrate zero-shot decoding of complex action sequences. We present our results on a surgical dataset, JIGSAWS [@jigsaws-2014], to jointly segment and classify fine-grained surgical gestures where we establish an impressive baseline. - A demonstration of zero-shot classification of fine-grained human-object interactions that requires no supervised training of attributes by leveraging off-the-shelf object detectors in video surveillance. Related Work ============ Methodology {#sec:methodology} =========== We first establish a basic hierarchy of concepts. At the highest level we have the *activity*—for example, suturing in robotic surgery. Each activity can be decomposed into a sequence of actions $(y_1, \ldots, y_N)$. Possible examples of actions are “Pushing needle through tissue" in a suturing activity or “throwing javelin" in a sporting event. Zero-shot learning approaches further decompose each action $y$ into a set of $K$ elementary attributes (usually taken to be binary-valued) $y = \{a_1, \ldots, a_K\}$. Given a video recording of an activity (represented as a sequence of frames $X = (x_1, \ldots, x_T)$), our goal is to map each frame $x_t$ to its corresponding action $y_t$ by detecting the presence or absence of each attribute $\hat{a}(x_t)$ in the video, then choosing the action whose signature $a(y_t)$ best fits those attributes. In other words, we choose the action with highest score: $$\label{eq:decode_cost} \hat{y}(x) = \argmax_{y} score(a(y), \hat{a}(x))$$ Our methods focus on defining signatures conveniently, and computing the score efficiently. 
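The decision rule in Eq. (1) only requires a score between an action's signature and the detected attribute vector. A minimal sketch using a simple attribute-agreement score (the Hamming-style score is an illustrative choice, not the paper's exact scoring function):

```python
def score(signature, detected):
    """Fraction of binary attributes on which the action signature
    a(y) agrees with the detections a_hat(x)."""
    return sum(s == d for s, d in zip(signature, detected)) / len(signature)

def classify(signatures, detected):
    """y_hat = argmax_y score(a(y), a_hat(x)), over a dict mapping
    action names to their attribute signatures."""
    return max(signatures, key=lambda y: score(signatures[y], detected))
```

With dynamic signatures, each attribute entry would itself be a short temporal sequence rather than a single bit, and the score would be computed per segment.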
Dynamic Attribute Labeling {#dynamic_attributes} -------------------------- \[sec:das\] Previous work in zero-shot action recognition defines each signature over a set of attributes that are *static*—*i.e.* each attribute is presumed to be constant through the duration of the action. However, in many scenarios the actions of interest are distinguished by their time evolution rather than the presence or absence of static attributes. Take “person entering a vehicle" and “person exiting a vehicle", for example. Both of these actions share the static attribute “vehicle present". However, they are differentiated from each other by what happens to the person over time—in an “entering" action the person disappears into the vehicle, but in an “exiting" action the person appears out of it. In this section we outline a simple and elegant method for implementing *dynamic* attribute signatures, which also generalizes previous work. Our method is flexible enough to accept a high-level ordering of events, but also permits more temporal information to be provided if it is known. For example, it can implement a signature for “person appears" like “person is absent, then person is present", or one additionally specifying that a person should be absent for the first 75% of a segment, and present for the remaining 25%. Finally, several existing zero-shot learning datasets are annotated with static attributes, but do not have temporal information. Our framework allows new dynamic signatures to be defined quickly and easily by specifying the temporal evolution of relevant attributes on a per-activity basis. Activity Signatures {#sec:activity_signatures} ------------------- Because they are well-studied, flexible, and easily-composable, we implement our methods using finite-state logic (specifically using the OpenFST toolkit [@open
--- abstract: 'The *containment rate* of query $Q1$ in query $Q2$ over database $D$ is the percentage of $Q1$’s result tuples over $D$ that are also in $Q2$’s result over $D$. We directly estimate containment rates between pairs of queries over a specific database. For this, we use a specialized deep learning scheme, CRN, which is tailored to representing pairs of SQL queries. Result-cardinality estimation is a core component of query optimization. We describe a novel approach for estimating queries’ result-cardinalities using estimated containment rates among queries. This containment rate estimation may rely on CRN or embed, unchanged, known *cardinality* estimation methods. Experimentally, our novel approach for estimating cardinalities, using containment rates between queries, on a challenging real-world database, realizes significant improvements to state of the art cardinality estimation methods.' author: - | Rojeh Hayek\ \ \ Oded Shmueli\ \ \ bibliography: - 'main.bib' nocite: '[@*]' title: Improved Cardinality Estimation by Learning Queries Containment Rates --- Introduction ============ Query $Q1$ is contained in (resp. equivalent to), query $Q2$, analytically, if for all the database states $D$, $Q1$’s result over $D$ is contained in (resp., equals) $Q2$’s result over $D$. Query containment is a well-known concept that has applications in query optimization. It has been extensively researched in database theory, and many algorithms were proposed for determining containment under different assumptions [@cnt1; @cnt2; @cnt3; @cnt4]. However, determining query containment analytically is not practically sufficient. Two queries may be analytically unrelated by containment, although, the execution result on a *specific* database of one query may actually be contained in the other. 
For example, consider the queries:\ Q1: *select \* from movies where title = ’Titanic’*\ Q2: *select \* from movies where release = 1997 and director = ’James Cameron’*\ Both queries execution results are identical since there is only one movie called Titanic that was released in 1997 and directed by James Cameron (he has not directed any other movie in 1997). Yet, using the analytic criterion, the queries are unrelated at all by containment. To our knowledge, while query containment and equivalence have been well researched in past decades, determining the containment rate between two queries on a *specific* database, has not been considered by past research. By definition, the containment rate of query $Q1$ in query $Q2$ on database $D$ is the percentage of rows in $Q1$’s execution result over $D$ that are also in $Q2$’s execution result over $D$. Determining containment rates allows us to solve other problems, such as determining equivalence between two queries, or whether one query is fully contained in another, on the same *specific* database. In addition, containment rates can be used in many practical applications, for instance, query clustering, query recommendation [@SimTuples; @SimStructure], and in cardinality estimation as will be described subsequently. Our approach for estimating containment rates is based on a specialized deep learning model, CRN, which enables us to express query features using sets and vectors. An input query is converted into three sets, $T$, $J$ and $P$ representing the query’s tables, joins and column predicates, respectively. Each element of these sets is represented by a vector. Using these vectors, CRN generates a single vector that represents the whole input query. Finally, to estimate the containment rate of two represented queries, CRN measures the distance between the representative vectors of both queries, using another specialized neural network. 
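The conversion of an input query into the three sets $T$, $J$ and $P$ can be sketched with a naive parser; the actual CRN featurization additionally encodes each set element as a vector, and the regular-expression parsing below is purely illustrative:

```python
import re

def featurize(query):
    """Split a simple SELECT query into (tables, joins, predicates);
    a join is taken to be any equality between two qualified columns."""
    body = re.split(r"\bfrom\b", query, flags=re.I)[1]
    parts = re.split(r"\bwhere\b", body, flags=re.I)
    tables = {t.strip() for t in parts[0].split(",")}
    joins, preds = set(), set()
    if len(parts) > 1:
        for cond in re.split(r"\band\b", parts[1], flags=re.I):
            cond = cond.strip()
            if re.fullmatch(r"\w+\.\w+\s*=\s*\w+\.\w+", cond):
                joins.add(cond)
            else:
                preds.add(cond)
    return tables, joins, preds
```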
Thus, the CRN model relies on the ability of the neural network to learn the vector representation of queries relative to the *specific* database. As a result, we obtain a small and accurate model for estimating containment rates. In addition to the CRN model, we introduce a novel technique for estimating queries’ cardinalities using estimated query containment rates. We show that using the proposed technique we improve current cardinality estimation techniques significantly. This is especially the case when there are multiple joins, where the known cardinality estimation techniques suffer from under-estimated results and errors that grow exponentially as the number of joins increases [@joinsHard]. Our technique estimates the cardinalities more robustly (x150/x175 with 4 joins queries, and x1650/x120 with 5 joins queries, compared with PostgreSQL and MSCN, respectively). We compare our technique with PostgreSQL [@postgreSQL], and the pioneering MSCN model [@LearnedCrd], by examining, on the real-world IMDb database [@HowGoodCar], join crossing correlations queries which are known to present a tough challenge to cardinality estimation methods [@HowGoodCar; @crdHard; @JoinCross]. We show that by employing known existing cardinality estimation methods for containment estimation, we can improve on their cardinality estimates as well, without changing the methods themselves. Thus, our novel approach is highly promising for solving the cardinality estimation problem, the “Achilles heel” of query optimization [@crdHard2], a cause of many performance issues [@HowGoodCar]. The rest of this paper is organized as follows. In Section \[Containment Rate Definition\] we define the containment rate problem and in Sections \[Learned Containment Rates\]-\[Containment Evaluation\] we describe and evaluate the CRN model for solving this problem. 
In Sections \[Cardinality Estimation Using Containment Rates\]-\[Cardinality Evaluation\] we describe and evaluate our new approach for estimating cardinalities using containment rates. In Section \[Improving Existing Cardinality Estimation Models\] we show how one can adapt the new ideas to improve existing cardinality estimation models. Sections \[Related work\]-\[Conclusion\] present related work, conclusions and future work. Containment Rate Definition {#Containment Rate Definition} =========================== We define the containment rate between two queries $Q1$, and $Q2$ on a *specific* database $D$. *Query $Q1$ is $x\%$-contained in query $Q2$ on database $D$ if precisely $x\%$ of $Q1$’s execution result rows on database $D$ are also in $Q2$’s execution result on database $D$.* The containment rate is formally a function from **QxQxD** to **R**, where **Q** is the set of all queries, **D** of all databases, and **R** the Real numbers. This function can be directly calculated using the cardinality of the results of queries $Q1$ and $Q2$ as follows: $$x\% = \frac{|Q1(D)\ {\mbox{$\cap$}}\ Q2(D)|}{|Q1(D)|} * 100$$ Where, $Q(D)$ denotes $Q$’s execution result on database $D$. (in case $Q1$’s execution result is empty, then $Q1$ is 0%-contained in $Q2$). Note that the containment rate is defined only on pairs of queries whose SELECT and FROM clauses are *identical*. Containment Rate Operator ------------------------- We denote the containment rate *operator* between queries $Q1$ and $Q2$ on database $D$ as: $$Q1 \subset_{\%}^D Q2$$ Operator $\subset_{\%}^D$ returns the containment rate between the given input queries on database $D$. That is, $Q1 \subset_{\%}^D Q2$ returns $x\%$, if $Q1$ is $x\%$-contained in query $Q2$ on database $D$. For simplicity, we do not mention the *specific* database, as it is usually clear from context. Therefore, we write the containment rate operator as $\subset_{\%}$. 
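The definition of $Q1 \subset_{\%} Q2$ translates directly into code once the two execution results are available; treating results as multisets of hashable row tuples is an assumption about how duplicate rows are handled:

```python
from collections import Counter

def containment_rate(rows1, rows2):
    """Q1 subset_% Q2: percentage of Q1's result rows that also
    appear in Q2's result; returns 0 when Q1's result is empty."""
    if not rows1:
        return 0.0
    # multiset intersection: shared rows counted with multiplicity
    shared = sum((Counter(rows1) & Counter(rows2)).values())
    return 100.0 * shared / len(rows1)
```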
Learned Containment Rates {#Learned Containment Rates} ========================= From a high-level perspective, applying machine learning to the containment rate estimation problem is straightforward. Following the training of the CRN model with pairs of queries $(Q1,Q2)$ and the actual containment rates $Q1 \subset_{\%} Q2$, the model is used as an estimator for other, unseen pairs of queries. There are, however, several questions whose answers determine whether the machine learning model (CRN) will be successful. (1) Which supervised learning algorithm/model should be used? (2) How should queries be represented as input, and containment rates as output, to the model (“featurization”)? (3) How can the initial training dataset be obtained (the “cold start problem”)? Next, we describe how we address each of these questions. Cold Start Problem {#Defining the database and the development set} ------------------ ### Defining the Database {#Defining the database} We generated a training set using the IMDb database, and later evaluated our model on it as well. IMDb contains many correlations and has been shown to be very challenging for cardinality estimators [@HowGoodCar]. This database contains a plethora of information about movies and related facts about actors, directors, and production companies, with more than 2.5M movie titles produced over 130 years (starting from 1880) by 235,000 different companies, and over 4M actors. ### Generating the Development Dataset {#queries generator} Our approach for solving the “cold start problem” is to obtain an initial training corpus using a specialized queries generator that randomly generates queries based on the IMDB
**Heaviside transform of the effective potential** **in the Gross-Neveu model** Hirofumi Yamada *Department of Mathematics, Chiba Institute of Technology* *2-1-1 Shibazono, Narashino-shi, Chiba 275* *Japan* *e-mail:yamadah@cc.it-chiba.ac.jp* **Abstract** An unconventional way of handling the perturbative series is presented, with the help of a Heaviside transformation with respect to the mass. We apply the Heaviside transform to the effective potential in the massive Gross-Neveu model and carry out a perturbative approximation of the massless potential by dealing with the resulting Heaviside function. We find that accurate values of the dynamical mass can be obtained from the Heaviside function already at finite orders, where just several diagrams are incorporated. We prove that our approximants converge to the exact massless potential in the infinite-order limit. A small-mass expansion of the effective potential can also be obtained in our approach. [**1 Introduction**]{} Even if the proof of dynamical symmetry breaking in the massless theory requires genuine non-perturbative approaches, it does not necessarily mean that the perturbative expansion is totally useless. There is the possibility that non-perturbative quantities in the massless limit may be approximately calculated via a perturbative approach. The purpose of this paper is to explore this possibility and show a concrete affirmative result by re-visiting the Gross-Neveu model${^{1}}$. Let us consider the effective potential of the Gross-Neveu model. As is well known, the ordinary massless perturbation expansion gives infra-red divergences, and to cure the problem one must sum up all the one-loop diagrams. The summed result then reveals the non-trivial vacuum configuration of $\langle\bar \psi \psi\rangle$ and the dynamical generation of the mass. The point we wish to address is whether such a non-perturbative effect really requires, in an approximate evaluation, the infinite sum of perturbative contributions. 
To resolve the issue, we deal with a truncated series $V_{pert}$, without the conventional loop summation, and study the approximate calculation of the effective potential $V$ at $m=0$. A naive approximation would go as follows: To get around the infrared singularity we turn to the massive case and probe $V_{pert}(\sigma,m)$ at small $m$. Since the limit $m\rightarrow 0$ cannot be taken in $V_{pert}(\sigma,m)$, we may choose some non-zero $m$ ($=m^{*}$) and approximate the effective potential $V(\sigma, m=0)$ by $V_{pert}(\sigma, m^{*})$. The problem, however, is that $V_{pert}(\sigma, m)$ is not valid for small enough $m$. This is where the Heaviside function comes in. Our suggestion for resolving the problem is to consider the Heaviside transformation of $V(\sigma,m)$ with respect to the mass$^{2,3}$. The Heaviside transform of the effective potential, $\hat V$, is a function of $\sigma$ and of $x$, the variable conjugate to $m$. The key relation is that $\lim_{m\rightarrow 0}V(\sigma, m)=\lim_{x\rightarrow \infty}\hat V(\sigma, x)$. Of course this is valid only when both limits exist, and it does not apply to $V_{pert}$ and its Heaviside function, $\hat V_{pert}$, because those functions diverge in the respective limits. However, there arises the possibility that $\hat V(\sigma,\infty)$, and hence $V(\sigma,0)$, may be well approximated by inserting some finite value of $x$ into $\hat V_{pert}$. This is because $\hat V_{pert}$ has a much larger convergence radius than $V_{pert}$. Although $\hat V_{pert}$ shares similar infra-red problems with $V_{pert}$, we will find that $\hat V_{pert}$ is much more convenient for this kind of massless approximation. Indeed, we will demonstrate that, at finite perturbative orders where just several Feynman diagrams are taken into account, an accurate dynamical mass is obtained via the Heaviside transform approach. Throughout this paper, we use dimensional regularization$^{4}$. 
We confine ourselves to the leading order of the large-$N$ expansion, and $N$ is omitted for the sake of simplicity. [**2. Heaviside transform with respect to the mass**]{} In this section we summarize basic features of the Heaviside transform and illustrate our strategy with a simple example. Let $\Omega(m)$ be a given function of the mass $m$. The Heaviside transform of $\Omega(m)$ is given by the Bromwich integral, $$\hat \Omega(x)=\int^{s+i\infty}_{s-i\infty}{dm \over 2\pi i}{\exp(m x) \over m}\Omega(m),$$ where the vertical straight contour should lie to the right of all possible poles and cuts of $\Omega(m)/m$ (in (1), the real parameter $s$ specifies the location of the contour). Since $\Omega(m)/m$ is analytic in the domain $Re(m)>s$, $\hat \Omega(x)$ is zero when $x<0$. It is known that the Laplace transformation (of the second kind) gives back the original function as $$\Omega(m)=m\int^{\infty}_{-\infty}dx\exp(-m x)\hat\Omega(x).$$ Since $\hat \Omega(x)=0$ for $x<0$, the region of integration effectively reduces to $[0,\infty)$. It is easy to derive the relation $$\lim_{m\rightarrow +0}\Omega(m)=\lim_{x\rightarrow +\infty} \hat\Omega(x),$$ where both limits are assumed to exist. As noted before, the point of our scheme consists in utilizing $\hat \Omega$ to approximate the massless value of $\Omega$, $\Omega(0)$, by relying upon (3). To illustrate our strategy based on (3), let us consider a simple example. Given the following truncated series in $1/m$, $$f_{L}(m)=\sum^{L}_{n=0}\frac{(-1)^{n}}{m^{n+1}},$$ we try to approximate the value of $f(m)=f_{\infty}(m)=(1+m)^{-1}$ at $m=0$, $f(0)=1$, by using only the information contained in the truncated series (4). Since the convergence radius, $\rho$, of $f_{\infty}(m)$ is unity, we cannot obtain an approximation better than $1/2$ from $f_{L}(m)$. However, the situation changes if we deal with its Heaviside function. 
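The transform pair (1)-(2) and the limit relation (3) can be checked numerically for this example: with $\Omega(m)=1/(1+m)$ one has $\hat\Omega(x)=(1-e^{-x})\theta(x)$, and the Laplace transformation of the second kind recovers $\Omega$. A small sketch of ours, using a plain trapezoidal quadrature:

```python
import math

def omega(m):
    # Omega(m) = 1 / (1 + m), the function to be recovered
    return 1.0 / (1.0 + m)

def omega_hat(x):
    # Heaviside transform of omega: (1 - e^{-x}) * theta(x)
    return 1.0 - math.exp(-x) if x > 0 else 0.0

def laplace_second_kind(m, upper=200.0, n=200000):
    """Eq. (2): Omega(m) = m * int_0^infty exp(-m x) Omega_hat(x) dx,
    approximated by the trapezoidal rule on [0, upper]."""
    h = upper / n
    total = 0.5 * (omega_hat(0.0) + math.exp(-m * upper) * omega_hat(upper))
    for i in range(1, n):
        x = i * h
        total += math.exp(-m * x) * omega_hat(x)
    return m * h * total

for m in (0.5, 1.0, 2.0):
    print(m, omega(m), laplace_second_kind(m))  # the two columns agree
# limit relation (3): omega_hat(x) -> omega(0) = 1 as x -> infinity
print(omega_hat(30.0))
```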
The Heaviside transform of $f_{L}(m)$ is given by $$\hat f_{L}(x)=\sum^{L}_{n=0}(-1)^{n}\int^{s+i\infty}_{s-i\infty} \frac{dm}{2\pi i}\,\frac{\exp(m x)}{m}\, \frac{1}{m^{n+1}}=\sum^{L}_{n=0}(-1)^{n}\, \frac{x^{n+1}}{(n+1)!}\,\theta(x),$$ where $$\theta(x)=\left\{ \begin{array}{@{\,}ll} 1 & \mbox{$(x>0)$}\\ 0 & \mbox{$(x<0)$.} \end{array} \right.$$ From (5) it is easy to find that $\hat f(x)=(1-e^{-x})\theta(x)$ and that (3) holds for $f$ and $\hat f$. For our purpose it is crucial that $\rho=\infty$ for $\hat f_{\infty}$, while $\rho=1$ for $f_{\infty}$. The infinite convergence radius ensures that we can probe the large-$x$ behavior of $\hat f$ by $\hat f_{L}$ to arbitrary precision by increasing the perturbative order. Due to the truncation, however, $\hat f_{L}$ diverges as $x\rightarrow \infty$. Then, in approximating $\hat f(\infty)$, and therefore $f(0)$, we stop short of taking the limit and insert some finite value of $x$. The input value of $x$, say $x^{*}$, should be taken as large as possible within the reliable perturbative region in $x$. At this point we see that the good convergence property of $\hat f_{L}$ is one of the advantages of the Heaviside function. Since the upper limit of the perturbative region is not a rigorously defined concept, we determine the input value $x^{*}$ in a heuristic way. Our suggestion for fixing $x^{*}$ is as follows: The series (5) is valid for small $x$ but breaks down at large $x$. The breakdown appears as the domination of the highest term in $\hat f_{L}$, which leads to unlimited growth or decrease of the function (see Fig. 1). Thus $x^{*}$ is located somewhere around the beginning of this dominating behavior. For odd $L$ and large even $L$, we find a plateau region just before the domination sets in, and this region represents the end of the perturbative regime. We therefore choose the stationary point in the plateau region as representing the typical violation of the perturbation expansion, and fix $x^{*}$ by the stationarity condition, $$\frac{d\hat f_{L}(x)}{dx}\bigg|_{x=x^{*}}=0.$$ 
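This prescription is easy to test numerically on the example at hand. The sketch below (our own illustration, not from the paper) evaluates the truncated Heaviside function $\hat f_{L}(x)=\sum_{n=0}^{L}(-1)^{n}x^{n+1}/(n+1)!$, locates the first positive stationary point $x^{*}$ of its derivative, and uses $\hat f_{L}(x^{*})$ as the approximation to $f(0)=1$; the accuracy improves with the order $L$:

```python
from math import factorial

def f_hat(L, x):
    """Truncated Heaviside function, eq. (5), for x > 0."""
    return sum((-1) ** n * x ** (n + 1) / factorial(n + 1) for n in range(L + 1))

def f_hat_prime(L, x):
    # derivative of f_hat: the truncated series of exp(-x)
    return sum((-1) ** n * x ** n / factorial(n) for n in range(L + 1))

def stationary_point(L, x_max=30.0, step=0.01):
    """First positive zero of d f_hat_L / dx, by scanning plus bisection."""
    x = step
    while x < x_max:
        if f_hat_prime(L, x) * f_hat_prime(L, x + step) <= 0:
            a, b = x, x + step
            for _ in range(60):
                m = 0.5 * (a + b)
                if f_hat_prime(L, a) * f_hat_prime(L, m) <= 0:
                    b = m
                else:
                    a = m
            return 0.5 * (a + b)
        x += step
    raise ValueError("no stationary point found")

for L in (5, 9, 13):
    xs = stationary_point(L)
    print(L, xs, f_hat(L, xs))  # approximations to f(0) = 1, improving with L
```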
The condition (7) reads as $$\theta(x^{*})\sum^{L}_{n=0}\frac{(-x^{*})^{n}}{n!}+\delta(x^{*})\sum^{L}_{n=0}(-1)^{n}\frac{(x^{*})^{n+1}}{(n+1)!}=0\,.$$ 
--- abstract: 'We investigate quantum transport through a two-terminal nanoscale device characterized by a model peaked transmission function of the energy carriers. The device is in contact with two reservoirs held at different temperatures and chemical potentials. This ideal model, introduced by Mahan and Sofo in the search for the electronic structure of a thermoelectric material that maximizes the figure of merit, is here addressed in the nonlinear regime, starting from the general expressions of the particle, electric charge, and heat currents. We identify the parameter regions where the electron system acts as an energy pump (thermal machine) or as a heat pump (refrigerator). We provide contour plots of the power and heat currents involved in the two regions of the parameter space, and evaluate the corresponding thermal efficiency and coefficient of performance. The present transmission model sheds light on the implications of quantum bounds in nanostructures and provides a wealth of valuable information on general aspects of transport. Our results can serve as a guide for the design of realistic thermoelectric devices with sharp densities of states near the chemical potentials.' author: - 'G. Bevilacqua$^{1}$, G. Grosso$^{2,3}$, G. Menichetti$^{4,2}$, G. Pastori Parravicini$^{2,5}$' title: Thermoelectric regimes of materials with peaked transmission function --- INTRODUCTION ============ The development of nanotechnology has spurred new strategies to increase the efficiency of thermoelectric (TE) processes [@WHITNEY18]. The pioneering papers by Hicks and Dresselhaus [@DRESS93a; @DRESS93b; @DRESS07] highlighted the importance of investigating nanoscale quantum transport for the enhancement of the dimensionless thermoelectric figure of merit $ZT$. 
In the linear regime, $ZT$ is defined as $ZT=\sigma S^2 T/(\kappa_{el}+\kappa_{ph})$, where $\sigma$ is the electronic conductance, $S$ the Seebeck coefficient, $T$ the absolute temperature, and $\kappa_{el}$ ($\kappa_{ph}$) the electronic (phononic) thermal conductance. Several ideas and strategies have been reported to maximise the TE figure of merit by suitable choice of device design and appropriate material (see e.g. Refs. ). Most attempts proposed increasing phonon scattering so as to decrease the lattice thermal conductivity, which can be achieved by engineering nanostructured devices; other attempts proposed increasing the power factor, $\sigma S^2$, by varying the concentration of charge carriers [@DMITRIEV10]. As an alternative approach, Mahan and Sofo [@SOFO96] addressed the problem in a formal way, looking for the material whose carrier energy-level distribution, i.e. whose transport distribution function $\mathcal T(E)$, guarantees, at given lattice thermal conductivity, the highest figure of merit. The authors demonstrated that for this goal the carriers in the material should possess an energy distribution as narrow as possible, i.e. a $\delta$-like shape. Along this line, the impact of the energy spectrum width [@LINKE05; @LUO13] and of other shapes of $\mathcal T(E)$, such as step-, box-, Lorentzian- [@BW36] and Fano-like [@MIRO10; @SSP] features, has subsequently been considered [@BEVI16], depending on specific problems or suggested by quantum broadening effects due to the contacts. In particular, sharp features in $\mathcal T(E)$ approaching a $\delta$-shape have been realised and analysed in terms of single Lorentzian peaks of vanishing width $\Gamma$ [@LIU18; @LUO16], in quantum dots weakly interacting with the contacts [@RSAN15; @TALBO17; @MENI18] and in the presence of electron-electron interaction [@KRO18], in single-molecule junctions [@TORRES15], molecular electronics [@REDDY07; @LAMBERT16; @HANGGI84], and resonant tunneling devices [@PATIL17]. 
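As a quick numerical illustration of this definition (our own sketch; the parameter values below are generic textbook-scale numbers, not taken from this work):

```python
def figure_of_merit(sigma, S, T, kappa_el, kappa_ph):
    """Linear-regime figure of merit: ZT = sigma * S^2 * T / (kappa_el + kappa_ph).
    sigma and the kappas must be in consistent units (both conductances,
    or both conductivities) for ZT to come out dimensionless."""
    return sigma * S ** 2 * T / (kappa_el + kappa_ph)

# Illustrative values: sigma = 1e5 S/m, S = 200 microvolt/K, T = 300 K,
# kappa_el = 0.5 W/(m K), kappa_ph = 1.0 W/(m K)
zt = figure_of_merit(1.0e5, 200e-6, 300.0, 0.5, 1.0)
print(f"ZT = {zt:.2f}")  # → ZT = 0.80
```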
The subject of this paper is the analysis of the effects of a peaked transmission function on the thermoelectric transport properties of a nanostructured system, in the absence of a lattice contribution to the thermal conductivity, and beyond the linear response regime. The linear-response condition is commonly exceeded in low-dimensional systems, where large temperature and electric-potential gradients may easily occur because the system size can be smaller than the electronic scattering length (see e.g. Refs. ). We consider a thermoelectric system composed of two reservoirs of particles obeying Fermi-Dirac statistics, connected to the device through left and right perfect leads. $T_L$ is the temperature of the left (hot) reservoir and $T_R$ is the temperature of the right (cold) reservoir, with $\mu_L$ and $\mu_R$ the respective chemical potentials. For a system characterised by two electron reservoirs connected through perfect leads to a conductor with a transmission function peaked at the resonance energy $E_d$, we show, for each difference of temperatures and chemical potentials between the two reservoirs, when the system behaves as a good thermal machine, as a good refrigerator, or as a useless energy dissipator, according to the position of the resonance energy on the energy axis. We provide contour plots of the power and of the heat currents which highlight the different thermoelectric behaviors of the system as functions of the thermodynamic parameters $T_L , T_R,\mu_L , \mu_R$, and of the transmission filter energy. This result allows us to identify regions of high performance, where the system works as a thermal machine or as a refrigerator. The paper is organised as follows: in Section II we provide some definitions and expressions concerning TE transport in the nonlinear response regime. 
In Section III we analyse transport through a peaked transmission function in the cases $\mu_L < \mu_R$ and $\mu_L > \mu_R$, under the condition $T_L > T_R$. Section IV contains contour plots of the exchanged power and of the heat currents which define the thermoelectric behavior of the considered device, with a discussion of the results. Section V contains concluding remarks. General expressions of thermoelectric transport equations in the non-linear response regime =========================================================================================== In this section we consider transport through a two-terminal mesoscopic electronic system characterized by the transmission function ${\mathcal T}(E)$. The most general tool to address the transmission function in nanostructures is the non-equilibrium Keldysh Green’s function approach [@KELD64; @AGG06; @WANG08; @RYNDYKD09; @DO18; @MEIR94; @DARE16; @FRED014]; this formalism is exact (i.e. without conceptual approximations: all Feynman diagrams are summed to all orders) in the particular case of non-interacting systems. In realistic cases, one needs to go through [*ab initio*]{} evaluation of the transmission function; often one can directly focus on special functional shapes of the transmission (Lorentzian resonances and antiresonances, Fano profiles) generally encountered in the actual transmission features of thermoelectric materials, due to quantum interference effects.[@GOO09; @DATTA97; @DUBI11] The purpose of this paper is the study of the thermoelectric regimes linked to the presence of a peaked transmission function. This study is of relevance in its own right and, most importantly, because it paves the way to the understanding of a variety of peaked transmission functions of wide impact in the nano-material world. 
Following a well-established convention, we assume without loss of generality that the left reservoir is hotter than the right one, namely $T_L > T_R$; no a priori assumption is made on the chemical potentials $\mu_L , \mu_R$ of the particle reservoirs. The $left$ or $right$ particle number current $I_N^{(left,right)}$, charge (electric) current $I_e^{(left,right)}$, and heat (thermal) current $I_Q^{(left,right)}$ are given respectively by the expressions: $$\begin{aligned} I_N &=& I_N^{(left)} = I_N^{(right)} = \frac{1}{h} \int_{-\infty}^{+\infty} dE \, {\mathcal T} (E) \left[ f_{L}(E) - f_{R}(E) \right] % Eq.(1a) \\ [2mm] I_e &=& I_e^{(left)} = I_e^{(right)} = -eI_N % Eq.(1b) \\ [2mm] I_{Q}^{(left)} &=& \frac{1}{h} \int_{-\infty}^{+\infty} dE (E- \mu_{L}) \, {\mathcal T}(E) \left[ f_{L}(E) - f_{R}(E) \right] % Eq.(1c) \\[2mm] I_{Q}^{(right)} &=& \frac{1}{h} \int_{-\infty}^{+\infty} dE (E- \mu_{R}) \, {\mathcal T}(E) \left[ f_{L}(E) - f_{R}(E) \right]\, ; % Eq.(1d) \end{aligned}$$ ($-e$) is the electric charge and $h$ the Planck constant. The output or input power ${\mathcal P}$, due to the transport of spinless electrons across the device in any regime (power generator regime, refrigeration regime, dissipative regime), is given by $$\begin
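The current expressions above can be evaluated numerically on an energy grid. The sketch below is ours, not the authors' code: it assumes a Lorentzian model for the peaked transmission, ${\mathcal T}(E)=(\Gamma/2)^2/[(E-E_d)^2+(\Gamma/2)^2]$, with illustrative parameter values, and it verifies the energy balance $I_Q^{(left)}-I_Q^{(right)}=(\mu_R-\mu_L)\,I_N$ implied by the integrands:

```python
import numpy as np

h = 4.135667696e-15   # Planck constant, eV*s
kB = 8.617333262e-5   # Boltzmann constant, eV/K

def fermi(E, mu, T):
    return 1.0 / (np.exp((E - mu) / (kB * T)) + 1.0)

def trap(y, x):
    # plain trapezoidal rule (avoids version-specific numpy helpers)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def currents(Ed, Gamma, muL, muR, TL, TR, half_window=1.0, n=200001):
    """Particle and heat currents for a Lorentzian transmission of width
    Gamma peaked at the resonance energy Ed (energies in eV)."""
    E = np.linspace(Ed - half_window, Ed + half_window, n)
    T_E = (Gamma / 2) ** 2 / ((E - Ed) ** 2 + (Gamma / 2) ** 2)
    dF = fermi(E, muL, TL) - fermi(E, muR, TR)
    I_N = trap(T_E * dF, E) / h                  # particles / s
    I_Q_L = trap((E - muL) * T_E * dF, E) / h    # eV / s
    I_Q_R = trap((E - muR) * T_E * dF, E) / h
    return I_N, I_Q_L, I_Q_R

# Hot left reservoir, resonance energy above both chemical potentials:
iN, qL, qR = currents(Ed=0.15, Gamma=1e-3, muL=0.0, muR=0.02, TL=600.0, TR=300.0)
print(iN, qL, qR, qL - qR)  # qL - qR equals (muR - muL) * iN
```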
--- abstract: 'In hierarchical models, where spheroidal galaxies are primarily produced via a continuous merging of disk galaxies, the number of intrinsically red systems at faint limits will be substantially lower than in “traditional” models where the bulk of star formation was completed at high redshifts. In this paper we analyse the optical–near-infrared colour distribution of a large flux-limited sample of field spheroidal galaxies identified morphologically from archival [*Hubble Space Telescope*]{} data. The $I_{814}-HK''$ colour distribution for a sample jointly limited at $I_{814}<$23 mag and $HK''<$19.5 mag is used to constrain their star formation history. We compare visual and automated methods for selecting spheroidals from our deep HST images and, in both cases, detect a significant deficit of intrinsically red spheroidals relative to the predictions of high-redshift monolithic collapse models. However the overall space density of spheroidals (irrespective of colour) is not substantially different from that seen locally. Spectral synthesis modelling of our results suggests that high redshift spheroidals are dominated by evolved stellar populations polluted by some amount of subsidiary star formation. Despite its effect on the optical-infrared colour, this star formation probably makes only a modest contribution to the overall stellar mass. We briefly discuss the implications of our results in the context of earlier predictions based on models where spheroidals assemble hierarchically.' author: - | F. Menanteau $^1$, R. S. Ellis$^1$, R. G. Abraham$^{1,2}$, A. J. Barger$^{3}$, and L. L. 
Cowie$^3$\ $^1$Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 OHA, England\ $^2$Royal Greenwich Observatory, Madingley Road, Cambridge, CB3 0EZ, England\ $^3$Institute for Astronomy, 2680 Woodlawn Drive, Honolulu, HI 96822, USA\ date: 'Received:   Accepted: ' title: 'The Optical-Infrared Colour Distribution of a Statistically-Complete Sample of Faint Field Spheroidal Galaxies' --- \[firstpage\] INTRODUCTION ============ The age distribution of elliptical galaxies is a controversial issue central to testing hierarchical models of galaxy formation. The traditional viewpoint (Baade 1957, Sandage 1986) interprets the low specific angular momentum and high central densities of elliptical galaxies as evidence for their dissipationless formation at high redshift. In support of this viewpoint, observers have cited the small scatter in the colour-magnitude relation for cluster spheroidals at low redshift (Sandage & Visvanathan 1978, Bower et al 1992) and, more recently, such studies have been extended via HST imaging to high-redshift clusters (Ellis et al 1997, Stanford et al 1997). Examples of individual massive galaxies with established stellar populations can be found at quite significant redshifts (Dunlop 1997). In contrast, hierarchical models for the evolution of galaxies (Kauffmann et al 1996, Baugh et al 1996) predict a late redshift of formation for most galactic-size objects because of the need for gas cooling after the slow merger of dark matter halos. These models propose that most spheroidal galaxies are produced by subsequent mergers of these systems, the most massive examples of which have accumulated since $z\simeq$1. Although examples of apparently old ellipticals can be found in clusters to quite high redshift, this may not be at variance with expectations for hierarchical cold dark matter (CDM) models, since clusters represent regions of high density where evolution might be accelerated (Governato et al 1998). 
By restricting evolutionary studies to high-density regions, a high mean redshift of star formation and homogeneous rest-frame UV colours would result; such characteristics would not be shared by the field population. Constraints on the evolution of field spheroidals derived from optical number counts as a function of morphology (Glazebrook et al 1995, Im et al 1996, Abraham et al 1996a) are fairly weak, because of uncertainties in the local luminosity function. Nonetheless, there is growing evidence of differential evolution when their properties are compared to those of their clustered counterparts. Using a modest field sample, Schade et al (1998) find a rest-frame scatter of $\delta(U-V)$=0.27 for distant bulge-dominated objects in the HST imaging survey of CFRS/LDSS galaxies, which is significantly larger than the value of $\simeq$0.07-0.10 found in cluster spheroidals at $z\simeq$0.55 by Ellis et al (1997). Likewise, in their study of a small sample of galaxies of known redshift in the [*Hubble Deep Field*]{} (HDF), Abraham et al (1998) found that a significant fraction ($\simeq$40%) of distant ellipticals showed a dispersion in their internal colours, indicating they had suffered recent star formation, possibly arising from dynamical perturbations. Less direct evidence for evolution in the field spheroidal population has been claimed from observations which attempt to isolate early-type systems based on predicted colours, rather than morphology. Kauffmann et al (1995) claimed evidence for a strong drop in the volume density of early-type galaxies via a $V/V_{max}$ analysis of colour-selected galaxies in the [*Canada-France Redshift Survey*]{} (CFRS) sample (Lilly et al 1995). Their claim remains controversial (Totani & Yoshii 1998, Im & Casertano 1998) because of the difficulty of isolating a robust sample of field spheroidals from $V-I$ colour alone (c.f. Schade et al 1998), and because of the discrepancies noted between their analyses and those conducted by the CFRS team. 
In addition to small sample sizes, a weakness in most studies of high-redshift spheroidals has been the paucity of infrared data. As shown by numerous authors (e.g. Charlot & Silk 1994), near-IR observations are crucial for understanding the star formation history of distant galaxies, because at high redshifts optical data can be severely affected by both dust and relatively minor episodes of star formation. Recognizing these deficiencies, Moustakas et al (1997) and Glazebrook et al (1998) have studied the optical-infrared colours of small samples of morphologically-selected galaxies. Zepf (1997) and Barger et al (1998) discussed the extent of the red tail in the optical-IR colour distribution of HDF galaxies. Defining this tail ($V_{606}$-$K>$7 and $I_{814}$-$K>$4) in the context of evolutionary tracks from Bruzual & Charlot’s (1993) evolutionary models, they found few sources in areas of multicolour space corresponding to high-redshift passively-evolving spheroidals. The ultimate verification of a continued production of field ellipticals, as required in hierarchical models, would be the observation of a decrease with redshift in their comoving space density. Such a test requires a large sample of morphologically-selected ellipticals from which the luminosity function can be constructed as a function of redshift. By probing faint limits in a few deep fields, Zepf (1997) and Barger et al (1998) were unable to take advantage of the source morphology; the constraints derived from these surveys relate to the entire population. Moreover, there is little hope in the immediate term of securing spectroscopic redshifts for such faint samples. The alternative adopted here is to combine shallower near-infrared imaging with more extensive HST archival imaging data, allowing us to isolate a larger sample of [*brighter, morphologically-selected*]{} spheroidals for which, ultimately, redshifts and spectroscopic diagnostics will become possible. 
Our interim objective here is to analyse the optical-infrared colour distribution of faint spheroidals, which, as we will demonstrate, already provides valuable constraints on a possible early epoch of star formation. A plan of the paper follows. In $\S$2.1 we discuss the available HST data and review procedures for selecting morphological spheroidals from the images. In $\S$2.2 we discuss the corresponding ground-based infrared imaging programme and the reduction of that data. The merging of these data to form the final catalogue is described in $\S$2.3. In $\S$3 we discuss the optical-infrared colour distribution for our sample in the context of predictions based on simple star formation histories, and consider the redshift distribution of our sample, for which limited data is available. We also examine constraints based on deeper data available within the Hubble Deep Field. In $\S$4 we summarise our conclusions. CONSTRUCTION OF THE CATALOGUE ============================= THE HST SAMPLE -------------- In searching the HST archive for suitable fields, we adopted a minimum $I$ F814W-band exposure time of 2500 sec and a minimum Galactic latitude of $|b|$=19$^{\circ}$, so that stellar contamination would not be a major concern. These criteria led to 48 fields accessible from the Mauna Kea Observatory, comprising a total area of 0.0625 deg$^2$ (225 arcmin$^2$). Table 1 lists the fields adopted, including several for which limited redshift data is available, e.g. the HDF and its flanking fields (Williams et al 1996), the Groth strip (Groth et al 1994) and the CFRS/LDSS survey fields (Brinchmann et al 1997). F606W imaging is available for 25 of the fields in Table 1. Object selection and photometry for each field was performed using the [SExtractor]{} package (Bertin & Arnouts 1996). Although the detection limit varies from field to field, the -band data is always complete to
--- abstract: 'We give an algorithm, based on the $\varphi$-expansion of Parry, to compute the topological entropy of a class of shift spaces. The idea is to solve an inverse problem for the dynamical systems $\beta x +\alpha\mod 1$. The first part is an exposition of the $\varphi$-expansion applied to piecewise monotone dynamical systems. We formulate necessary and sufficient conditions for the validity of the $\varphi$-expansion, which differ from those in Parry’s paper [@P2].' author: - | B. Faller[^1] and C.-E. Pfister[^2]\ EPF-L, Institut d’analyse et calcul scientifique\ CH-1015 Lausanne, Switzerland date: '16.05.2008\' title: | Computation of Topological Entropy via $\varphi$-expansion,\ an Inverse Problem for the Dynamical Systems $\beta x+\alpha\mod1$ --- Introduction {#section1} ============ In 1957 Rényi published his paper [@R] about representations of real numbers by $f$-expansions, called hereafter $\varphi$-expansions, which had a tremendous impact in Dynamical Systems Theory. The ideas of Rényi were further developed by Parry in [@P1] and [@P2]. See also the book of Schweiger [@Sch]. The first part of the paper, section \[section2\], is an exposition of the theory of $\varphi$-expansions in the setting of piecewise monotone dynamical systems. Although many of the results of section \[section2\] are known (for example, see [@Bo], chapter 9, for Theorem \[thm2.5\]), we state necessary and sufficient conditions for the validity of the $\varphi$-expansion which are different from those in Parry’s paper [@P2]: Theorem \[thm2.1bis\] and Theorem \[thm2.1ter\]. We then use $\varphi$-expansions to study two interesting and related problems in sections \[section3\] and \[section4\]. When one applies the method of section \[section2\] to the dynamical system $\beta x+\alpha\mod1$, one obtains a symbolic shift which is entirely described by two strings $\ud{u}^\ab$ and $\ud{v}^\ab$ of symbols in a finite alphabet $\tA=\{0,\ldots,k-1\}$. 
The shift space is given by $$\label{1.1} \BSigma(\ud{u}^\ab,\ud{v}^\ab)=\big\{\ud{x}\in\tA^{\Z_+}\colon \ud{u}^\ab\preceq\sigma^n\ud{x}\preceq\ud{v}^\ab\;\,\forall n\geq 0 \big\}\,,$$ where $\preceq$ is the lexicographic order and $\sigma$ the shift map. The particular case $\alpha=0$ has been much studied from many different viewpoints ($\beta$-shifts). For $\alpha\not=0$ the structure of the shift space is richer. A natural problem is to study all shift spaces $\Sigma(\ud{u},\ud{v})$ of the form above, obtained by replacing $\ud{u}^\ab$ and $\ud{v}^\ab$ with an arbitrary pair of strings $\ud{u}$ and $\ud{v}$. In section \[section3\] we give an algorithm, Theorem \[thm3.1\], based on the $\varphi$-expansion, which allows one to compute the topological entropy of the shift spaces $\Sigma(\ud{u},\ud{v})$. One of the essential tools is the follower-set graph associated to the shift space. This graph is presented in detail in subsection \[subsectionfollower\]. The algorithm is given in subsection \[subsectionalgo\] and the computations of the topological entropy in subsection \[topological\]. The basic idea of the algorithm is to compute two real numbers $\bar{\alpha}$ and $\bar{\beta}$, given the strings $\ud{u}$ and $\ud{v}$, and to show that the shift space $\BSigma(\ud{u},\ud{v})$ is a modification of the shift space $\Sigma(\ud{u}^\abb,\ud{v}^\abb)$ obtained from the dynamical system $\bar{\beta}x+\bar{\alpha}\mod1$, and that the topological entropies of the two shift spaces are the same. In the last section we consider the following inverse problem for the dynamical systems $\beta x+\alpha \mod1$: given $\ud{u}$ and $\ud{v}$, find $\alpha$ and $\beta$ so that $$\ud{u}=\ud{u}^\ab\quad\text{and}\quad\ud{v}=\ud{v}^\ab\,.$$ The solution of this problem is given in Theorems \[thm4.1\] and \[thm4.2\] for all $\beta>1$. 
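For illustration, the defining lexicographic condition of $\BSigma(\ud{u},\ud{v})$ is easy to check numerically. The sketch below uses simplified conventions of ours (not the paper's precise definitions): the digit of $x$ under $\beta x+\alpha \mod 1$ is taken as $\lfloor \beta x+\alpha\rfloor$, and the two bounding strings are generated as floating-point itineraries of the endpoints $0$ and $1^{-}$, so the one-sided limits are only approximated:

```python
from math import floor

def itinerary(x, alpha, beta, n):
    """First n digits of the orbit of x under T(x) = beta*x + alpha (mod 1),
    with digit floor(beta*x + alpha) at each step."""
    digits = []
    for _ in range(n):
        y = beta * x + alpha
        d = floor(y)
        digits.append(d)
        x = y - d
    return digits

def in_shift_space(x_digits, u, v):
    """Check u <= sigma^n(x) <= v (lexicographically) on available prefixes."""
    for n in range(len(x_digits)):
        tail = x_digits[n:]
        k = len(tail)
        if not (u[:k] <= tail <= v[:k]):
            return False
    return True

alpha, beta = 0.3, 2.5
u = itinerary(0.0, alpha, beta, 12)          # lower bounding string
v = itinerary(1.0 - 1e-12, alpha, beta, 12)  # upper bounding string (approx. 1^-)
x = itinerary(0.4142, alpha, beta, 12)       # itinerary of an interior point
print(u)
print(v)
print(in_shift_space(x, u, v))  # → True
```

Python's built-in list comparison is exactly the lexicographic order $\preceq$ used in the definition, which keeps the membership test short.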
$\varphi$-expansion for piecewise monotone dynamical\ systems {#section2} ===================================================== Piecewise monotone dynamical systems {#subsection2.1} ------------------------------------ Let $X:=[0,1]$ (with the euclidean distance). We consider piecewise monotone dynamical systems of the following type. Let $0=a_0<a_1<\cdots<a_k=1$ and $I_j:=(a_j,a_{j+1})$, $j\in\tA$. We set $\tA:=\{0,\ldots,k-1\}$, $k\geq 2$, and $$S_0:=X\backslash \bigcup_{j\in\tA}I_j\,.$$ For each $j\in\tA$ let $$f_j:I_j\mapsto J_j:=f_j(I_j)\subset [0,1]$$ be a strictly monotone continuous map. When necessary we also denote by $f_j$ the continuous extension of the map to the closure $\overline{I}_j$ of $I_j$. We define a map $T$ on $X\backslash S_0$ by setting $$T(x):=f_j(x)\quad \text{if $x\in I_j$}\,.$$ The map $T$ is left undefined on $S_0$. We also assume that $$\label{2.1} \big(\bigcup_{i\in\tA}J_i\big)\cap I_j=I_j\quad\forall j\,.$$ We introduce sets $X_j$, $S_j$, and $S$ by setting for $j\geq 1$ $$X_0:=[0,1]\,,\quad X_j:=X_{j-1}\backslash S_{j-1}\,,\quad S_j:=\{x\in X_j\colon T(x)\in S_{j-1}\}\,,\quad S:=\bigcup_{j\geq 0}S_j\,.$$ \[lem2.1\] Under condition \[2.1\], $T^n(X_{n+1})= X_1$ and $T(X\backslash S)=X\backslash S$. $X\backslash S$ is dense in $X$. Condition \[2.1\] is equivalent to $T(X_1)\supset X_1$. Since $X_2=X_1\backslash S_1$ and $S_1=\{x\in X_1\colon T(x)\not\in X_1\}$, we have $T(X_2)=X_1$. Suppose that $T^n(X_{n+1})=X_1$; we prove that $T^{n+1}(X_{n+2})=X_{1}$. One has $X_{n+1}=X_{n+2}\cup S_{n+1}$ and $$X_1=T^n(X_{n+1})=T^n(X_{n+2})\cup T^n(S_{n+1})\,.$$ Applying $T$ once more, $$X_1\subset T(X_1)=T^{n+1}(X_{n+2})\cup T^{n+1}(S_{n+1})\,.$$ $T^{n+1}$ is defined on $X_{n+1}$ and $S_{n+1}\subset X_{n+1}$. $$T^{n+1}S_{n+1}=\{x\in X_{n+1}\colon T^{n+1}(x)\in S_0\}= \{x\in X_{n+1}\colon T^{n+1}(x)\not\in X_1\}\,.$$ Hence $T^{n+1}(X_{n+2})=X_{1}$. Clearly $T(X\backslash S)\subset X\backslash S$ and $T(S\backslash S_0)\subset S$. 
Since $X_1$ is the disjoint union of $X\backslash S$ and $S\backslash S_0$, and $TX_1\supset X_1$, we have $T(X\backslash S)=X\backslash S$. The sets $X\backslash S_k$ are open and dense in $X$. By Baire’s Theorem $X\backslash S=\bigcap_{k}(X\backslash S_k)$ is dense. Let $\Z_+:=\{0,1,2,\ldots\}$ and $\tA^{\Z_+}$ be equipped with the product topology. Elements of $\tA^{\Z_+}$ are called [strings]{}
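For the special case $T(x)=\beta x+\alpha \bmod 1$, the symbolic itinerary (the digit string of the $\varphi$-expansion) is easy to sketch. The helper below is our own illustration; the function name, the branch count $k=\lceil\beta+\alpha\rceil$, and the floor (right-continuity) convention at partition points are assumptions of the sketch, since the map is formally undefined on $S_0$:

```python
import math

def itinerary(x, beta, alpha, n):
    """First n digits of the symbolic itinerary of x under
    T(x) = beta*x + alpha mod 1.  Digit j records the branch interval
    I_j containing the orbit point; partition points are resolved by
    the floor (right-continuity) convention rather than left undefined."""
    k = math.ceil(beta + alpha)  # number of monotone branches (assumed)
    digits = []
    for _ in range(n):
        y = beta * x + alpha
        d = min(int(math.floor(y)), k - 1)  # clamp the endpoint x = 1
        digits.append(d)
        x = y - math.floor(y)  # the mod-1 step
    return digits
```

For $\beta=2$, $\alpha=0$ (the doubling map) the itinerary of a dyadic point reproduces its binary expansion, e.g. $0.375=0.011_2$ gives digits $0,1,1,0,0$.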
--- abstract: 'Interfacing a ferromagnet with a polarized ferroelectric gate generates a non-uniform, interfacial spin density coupled to the ferroelectric polarization.' author: - Yaojin Li$^1$ - Min Chen$^1$ - Jamal Berakdar$^2$ - 'Chenglong Jia$^{1,2}$' title: 'Gate-controlled magnon-assisted switching of magnetization in ferroelectric/ferromagnetic junctions' --- Electrical control of magnetism has the potential to boost spintronic devices with a number of novel functionalities [@Eerenstein:2006km; @M.; @Weisheit:2007; @T.; @Maruyama:2009; @D.; @Chiba:2011; @Jia:2015iz]. For example, magnetization switching can be achieved via a spin-polarized electric current due to the spin-transfer torque, or via the spin-orbit torque in the presence of spin-orbit interaction [@J.; @C.; @Slonczewski:1996; @L.; @Berger:1996; @J.; @A.; @Katine:2000; @M.; @D.; @Stiles:2002; @Y.; @Tserkovnyak:2008; @Brataas:2012fb; @Fan:2014hb; @Brataas:2014dla; @Oh:2016ev; @Fukami:2016kq]. One may also use an electric field to manipulate the magnetization dynamics [@Vaz:2012dp; @T.; @Y.; @Liu:2011; @T.; @Nozaki:2012; @Brovko:2014gsb; @Schueler2017; @Matsukura:2015hya; @Y.; @Shiota:2016], in which case the electric field may lead to modulations in the charge carrier density or may affect the magnetic properties such as the magnetic moment, the exchange interaction and/or the magnetic anisotropy [@Vaz:2012dp; @Brovko:2014gsb; @Schueler2017; @Matsukura:2015hya]. Compared to driving magnetization via a spin-polarized current, an electric field governing the magnetization has a clear advantage as it allows for non-volatile device concepts with significantly reduced energy dissipation. On the other hand, an external electric field applied to an itinerant ferromagnet (FM) is shielded by charge accumulation or depletion caused by spin-dependent screening charge that extends on a length scale of only a few angstroms into the FM [@Zhang:1999cx].
This extreme surface confinement of the screening hinders its use for steering the magnetization dynamics of bulk or relatively thick nanometer-sized FM [@Shiota:2012dh; @Wang:2012jf]. Experimentally, ultra-thin metallic FM films were thus necessary to observe an electric field influence on the dynamics of an FM [@Vaz:2012dp; @Brovko:2014gsb; @Nan:2014ck]. In this work we show that while the spin-polarized screening charge is surface confined, in the spin channel a local non-uniform spiral spin density builds up at the interface and goes over into the initial uniform (bulk) magnetization away from the interface. Hence, this interfacial spin spiral acts as a topological defect in the initial uniform magnetization vector field. The range of the spiral defect is set by the spin diffusion length $\lambda_m$ [@J.; @Bass:2007], which is much larger than the charge screening length. This spin spiral constitutes a magnetoelectric effect that has a substantial influence on the transverse magnetization dynamics of FM layers with thickness over tens of nanometers [@footnot]. The interfacial spiral spin density can be viewed as a magnonic accumulation stabilized by the interfacial, spin-dependent charge rearrangement at the contact region between the FM and the ferroelectric (having the FE polarization ${\mbox{\boldmath$\mathrm{P}$}}$) and by the uniform (bulk) magnetization of the FM far away from the interface [@Jia; @C.; @L:2014]. ${\mbox{\boldmath$\mathrm{P}$}}$ responds to an external electric field and so does the magnetization dynamics. As shown below, this magnonic-assisted magnetoelectric coupling arising when using a dielectric FE gate allows a (ferro)electric field control of the effective driving field that governs the magnetization switching of an FM layer with a thickness on the order of the spin diffusion length $\lambda_m$, which is clearly of an advantage for designing spin-based, non-volatile nanoelectronic devices.\ In Sec.
\[sec1\] we discuss the mathematical details of the spin-spiral magnetoelectric coupling, followed by its implementation into the equations of motion for the magnetization dynamics in Sec. \[sec2\]. In Sec. \[sec3\] results of numerical simulations are presented and discussed, showing to which extent the spin-spiral magnetoelectric coupling can allow for the electric field control of the magnetization in FE/FM composites. Ways to enhance the effects are discussed and brief conclusions are made in Sec. \[sec4\]. Theoretically, the above magnon accumulation scenario may be viewed as follows. The local coupling between the itinerant spin density ${\mbox{\boldmath$\mathrm{s}$}}$ and the magnetization direction ${{\mbox{\boldmath$\mathrm{m}$}}}$ is of the $s$-$d$ exchange type, $$\mathcal{F}_{sd}=J_{sd}\frac{M}{M_{s}}{\mbox{\boldmath$\mathrm{s}$}}\cdot {{\mbox{\boldmath$\mathrm{m}$}}}, \label{eq:sd}$$ Within the Stoner mean-field theory [@Soulen; @R.J:1998] the spin polarization $\eta$ of the electron density in transition FM metals is usually less than 1, and the spin density can be decomposed as $${\mbox{\boldmath$\mathrm{s}$}}={\mbox{\boldmath$\mathrm{s}$}}_{\parallel}+{\mbox{\boldmath$\mathrm{s}$}}_{\perp}$$ where ${\mbox{\boldmath$\mathrm{s}$}}_{\parallel}$ represents the spin density whose direction follows adiabatically the intrinsic magnetization ${\mbox{\boldmath$\mathrm{M}$}}$ at an instantaneous time $t$. ${\mbox{\boldmath$\mathrm{s}$}}_{\perp}$ describes the transverse deviation from ${\mbox{\boldmath$\mathrm{M}$}}$.
Given that the steady-state charge accumulation entails much higher energy processes than spin excitations, the spin density obeys the diffusion-relaxation equation $$\begin{aligned} &&\frac{\partial{\mbox{\boldmath$\mathrm{s}$}}_{\parallel}}{\partial t}{\mbox{\boldmath$\mathrm{m}$}}+{\mbox{\boldmath$\mathrm{s}$}}_{\parallel}\frac{\partial{\mbox{\boldmath$\mathrm{m}$}}}{\partial t}+\frac{\partial{\mbox{\boldmath$\mathrm{s}$}}_{\perp}}{\partial t} -D_{0}\nabla^{2}_{z}{\mbox{\boldmath$\mathrm{s}$}}_{\parallel}-D_{0}\nabla^{2}_{z}{\mbox{\boldmath$\mathrm{s}$}}_{\perp} \nonumber \\ &&= -\frac{{\mbox{\boldmath$\mathrm{s}$}}_{\parallel}}{\tau_{sf}}-\frac{{\mbox{\boldmath$\mathrm{s}$}}_{\perp}}{\tau_{sf}}-\frac{{\mbox{\boldmath$\mathrm{s}$}}_{\perp}\times{\mbox{\boldmath$\mathrm{m}$}}}{\tau_{ex}}\end{aligned}$$ where $D_0$ is the diffusion constant and $\tau_{ex} \approx \hbar/(2J_{sd})$. $\tau_{sf}$ is the spin-flip relaxation time due to scattering with impurities, electrons, and phonons; $\tau_{sf}\sim 10^{-12}-10^{-14}$ s [@L.; @Piraux:1998] and $\tau_{ex}/\tau_{sf}\sim10^{-2}$ in typical FM metals [@J.; @Bass:2007]. The time-derivative terms $\frac{\partial{\mbox{\boldmath$\mathrm{s}$}}_{\parallel}}{\partial t}$, $\frac{\partial{\mbox{\boldmath$\mathrm{m}$}}}{\partial t}$ and $\frac{\partial{\mbox{\boldmath$\mathrm{s}$}}_{\perp}}{\partial t}$ at frequencies below the THz range are negligible compared with ${\mbox{\boldmath$\mathrm{s}$}}/\tau_{sf}$ and ${\mbox{\boldmath$\mathrm{s}$}}/\tau_{ex}$.
Thus the steady state is set by [@Jia; @C.; @L:2014] $$D_{0}\nabla^{2}_{z}{\mbox{\boldmath$\mathrm{s}$}}_{\parallel}=\frac{{\mbox{\boldmath$\mathrm{s}$}}_{\parallel}}{\tau_{sf}} ~~~\text{and}~~~ D_{0}\nabla^{2}_{z}{\mbox{\boldmath$\mathrm{s}$}}_{\perp}=\frac{{\mbox{\boldmath$\mathrm{s}$}}_{\perp}\times{\mbox{\boldmath$\mathrm{m}$}}}{\tau_{ex}},$$ implying an exponentially decaying spiral spin density, [@Jia; @C.; @L:2014] $$\begin{gathered} \label{eq:14} s_{\parallel}=\eta\frac{\sigma_{FM}}{\lambda_{m}e}e^{-z/\lambda_{m}}, \\ {\mbox{\boldmath$\mathrm{s}$}}_{\perp}=(1-\eta)Q_{m}\frac{\sigma_{FM}}{e}e^{-(1-i){\mbox{\boldmath$\mathrm{Q}$}}_{m}\cdot{\mbox{\boldmath$\mathrm{r}$}}}.\end{gathered}$$ Here $\sigma_{FM} = \sigma_{FE} \approx \epsilon_{FE} E$ is the surface charge density due to the electric neutrality constraint at the interface, $\epsilon_{FE}$ and $E$ are the dielectric permittivity of FE and an applied normal electric field, respectively. $\lambda_{m}=\sqrt{D_{0}\tau_{sf}}$ is the effective spin-diffusion length and the normal spin spiral wave vector ${\mbox{\boldmath$\mathrm{Q}$}}_{m}=\frac{1}{\sqrt{2D_{0}\tau_{ex}}}\hat{{\mbox{\boldmath$\mathrm
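As a rough numerical sketch, one can verify by finite differences that the decaying solutions above satisfy the two steady-state equations, and that $\lambda_m=\sqrt{D_0\tau_{sf}}$ comes out at the tens-of-nanometers scale invoked earlier. The parameter values below are assumed, representative numbers for a transition-metal FM, not values fitted in the text:

```python
import numpy as np

# assumed, representative transport parameters for a transition-metal FM
D0 = 5e-3       # spin diffusion constant [m^2/s]
tau_sf = 1e-13  # spin-flip relaxation time [s]
tau_ex = 1e-15  # exchange time ~ hbar/(2 J_sd) [s]; tau_ex/tau_sf ~ 1e-2

lam_m = np.sqrt(D0 * tau_sf)            # spin-diffusion length [m]
Q_m = 1.0 / np.sqrt(2.0 * D0 * tau_ex)  # spiral wave vector [1/m]

def fd_residual(f, h, rhs):
    """Residual of D0 f'' = rhs on a uniform grid (central differences)."""
    lap = (f[:-2] - 2.0 * f[1:-1] + f[2:]) / h ** 2
    return D0 * lap - rhs[1:-1]

# longitudinal channel: D0 s'' = s / tau_sf  ->  s ~ exp(-z / lam_m)
z1 = np.linspace(0.0, 5.0 * lam_m, 4001)
s_par = np.exp(-z1 / lam_m)
r1 = fd_residual(s_par, z1[1] - z1[0], s_par / tau_sf)

# transverse channel with m along z, written as the complex field s_x + i s_y:
# D0 s'' = (s x m) / tau_ex = -i s / tau_ex  ->  s ~ exp(-(1 - i) Q_m z)
z2 = np.linspace(0.0, 5.0 / Q_m, 4001)
s_perp = np.exp(-(1.0 - 1.0j) * Q_m * z2)
r2 = fd_residual(s_perp, z2[1] - z2[0], -1.0j * s_perp / tau_ex)
```

With these numbers $\lambda_m\approx 22$ nm, and both residuals vanish to discretization accuracy, confirming that the spiral decays on the much shorter scale $1/Q_m=\sqrt{2D_0\tau_{ex}}$.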
--- abstract: 'Let $M_n$ denote a random symmetric $n \times n$ matrix whose upper diagonal entries are independent and identically distributed Bernoulli random variables (which take values $1$ and $-1$ with probability $1/2$ each). It is widely conjectured that $M_n$ is singular with probability at most $(2+o(1))^{-n}$. On the other hand, the best known upper bound on the singularity probability of $M_n$, due to Vershynin (2011), is $2^{-n^c}$, for some unspecified small constant $c > 0$. This improves on a polynomial singularity bound due to Costello, Tao, and Vu (2005), and a bound of Nguyen (2011) showing that the singularity probability decays faster than any polynomial. In this paper, improving on all previous results, we show that the probability of singularity of $M_n$ is at most $2^{-n^{1/4}\sqrt{\log{n}}/1000}$ for all sufficiently large $n$. The proof utilizes and extends a novel combinatorial approach to discrete random matrix theory, which has been recently introduced by the authors together with Luh and Samotij.' author: - 'Asaf Ferber [^1]' - 'Vishesh Jain[^2]' bibliography: - 'symmetric.bib' title: 'Singularity of random symmetric matrices – a combinatorial approach to improved bounds' --- Introduction ============ The invertibility problem for Bernoulli matrices is one of the most outstanding problems in discrete random matrix theory. Letting $A_n$ denote a random $n\times n$ matrix, whose entries are independent and identically distributed (i.i.d.) Bernoulli random variables which take values $\pm 1$ with probability $1/2$ each, this problem asks for the value of $c_n$, which is the probability that $A_n$ is singular. By considering the event that two rows or two columns of $A_n$ are equal (up to a sign), it is clear that $$c_n \geq (1+o(1))n^{2}2^{1-n}.$$ It has been widely conjectured that this bound is, in fact, tight.
On the other hand, perhaps surprisingly, it is non-trivial even to show that $c_n$ tends to $0$ as $n$ goes to infinity; this was accomplished in a classical work of Komlós in 1967 [@komlos1967determinant] which showed that $$c_n = O\left(n^{-1/2}\right)$$ using the classical Erdős-Littlewood-Offord anti-concentration inequality. Subsequently, a breakthrough result due to Kahn, Komlós, and Szemerédi in 1995 [@kahn1995probability] showed that $$c_n = O(0.999^{n}).$$ Improving upon an intermediate result by Tao and Vu [@tao2007singularity], the current ‘world record’ is $$c_n \leq (2+o(1))^{-n/2},$$ due to Bourgain, Vu, and Wood [@bourgain2010singularity]. Another widely studied model of random matrices is that of random *symmetric* matrices; apart from being important for applications, it is also very interesting from a technical perspective as it is one of the simplest models with nontrivial correlations between its entries. Formally, let $M_n$ denote a random $n\times n$ symmetric matrix, whose upper-diagonal entries are i.i.d. Bernoulli random variables which take values $\pm 1$ with probability $1/2$ each, and let $q_n$ denote the probability that $M_n$ is singular. Despite its similarity to $c_n$, much less is known about $q_n$. The problem of whether $q_n$ tends to $0$ as $n$ goes to infinity was first posed by Weiss in the early 1990s and only settled in 2005 by Costello, Tao, and Vu [@costello2006random], who showed that $$q_n = O(n^{-1/8 + o(1)}).$$ In order to do this, they introduced and studied a quadratic variant of the Erdős-Littlewood-Offord inequality. Subsequently, Nguyen [@nguyen2012inverse] developed a quadratic variant of *inverse* Littlewood-Offord theory to show that $$q_n = O_{C}(n^{-C})$$ for any $C>0$, where the implicit constant in $O_{C}(\cdot)$ depends only on $C$. 
This so-called quadratic inverse Littlewood-Offord theorem in [@nguyen2012inverse] builds on previous work of Nguyen and Vu [@nguyen2011optimal], which is itself based on deep Freiman-type theorems in additive combinatorics (see [@tao2008john] and the references therein). The current best known upper bound on $q_n$ is due to Vershynin [@vershynin2014invertibility], who used a sophisticated and technical geometric framework pioneered by Rudelson and Vershynin [@rudelson2008littlewood; @rudelson2010non] to show that $$q_n = O(2^{-n^c})$$ for some unspecified small constant $c > 0$. As far as lower bounds on $q_n$ are concerned, once again, by considering the event that the first and last rows of $M_n$ are equal (up to a sign), we see that $q_n \geq (2+o(1))^{-n}$. It is commonly believed that this lower bound is tight. \[conjecture:prob-singularity\] We have $$q_n = (2+o(1))^{-n}.$$ In this paper, we obtain a much stronger upper bound on $q_n$, thereby making progress towards \[conjecture:prob-singularity\]. \[thm:main-thm\] There exists a natural number $n_0 \in \N$ such that for all $n\geq n_0$, $$q_n \leq 2^{-n^{1/4}\sqrt{\log{n}}/1000}.$$ Apart from providing a stronger conclusion, our proof of the above theorem is considerably shorter than previous works, and introduces and extends several novel combinatorial tools and ideas in discrete random matrix theory (some of which are based on joint work of the authors with Luh and Samotij [@FJLS2018]). We believe that these ideas allow for a unified approach to the singularity problem for many different discrete random matrix models, which have previously been handled in an ad-hoc manner. For completeness and for the convenience of the reader, we have included full proofs of all the simple background lemmas that we use from other papers, making this paper completely self-contained.
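Although the interesting regime is large $n$, the quantity $q_n$ can be computed exactly for tiny $n$ by brute-force enumeration of all $2^{n(n+1)/2}$ symmetric sign matrices. The sketch below is our own illustration, not part of the proof:

```python
import itertools

import numpy as np

def sym_singular_fraction(n):
    """Exact q_n for a random n x n symmetric sign matrix, obtained by
    enumerating all 2^(n(n+1)/2) choices of the upper-triangular entries."""
    idx = [(i, j) for i in range(n) for j in range(i, n)]
    singular = 0
    for signs in itertools.product((-1, 1), repeat=len(idx)):
        M = np.empty((n, n))
        for (i, j), s in zip(idx, signs):
            M[i, j] = M[j, i] = s
        # the determinant of a small sign matrix is an integer, so
        # rounding the floating-point value is exact at these sizes
        if round(np.linalg.det(M)) == 0:
            singular += 1
    return singular / 2 ** len(idx)
```

For $n=2$ and $n=3$ this enumeration gives exactly $1/2$ (for $n=3$ one can check by hand that $\det M = adf-a-d-f+2bce$ with all variables $\pm1$, which vanishes with probability $1/2$), so the decay towards the conjectured $(2+o(1))^{-n}$ only sets in at larger $n$.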
Outline of the proof and comparison with previous work ------------------------------------------------------ In this subsection, we provide a very brief, and rather imprecise, outline of our proof, and compare it to previous works of Nguyen [@nguyen2012inverse] and Vershynin [@vershynin2014invertibility]; for further comparison with the work of Costello, Tao, and Vu, see [@nguyen2012inverse]. Let $x:=(x_1,\ldots,x_n)$ be the first row of $M_n$, let $M^{1}_{n-1}$ denote the bottom-right $(n-1)\times (n-1)$ submatrix of $M_n$, and for $2\leq i,j \leq n$, let $c_{ij}$ denote the cofactor of $M^{1}_{n-1}$ obtained by removing its $(i-1)^{st}$ row and $(j-1)^{st}$ column. Then, Laplace’s formula for the determinant gives $$\det(M_n)=x_1\det(M_{n-1})-\sum_{i,j=2}^n c_{ij}x_ix_j,$$ so that our goal is to bound the probability (over the randomness of $x$ and $c_{ij}$) that this polynomial is zero. By a standard reduction due to [@costello2006random] (see \[lem:rank reduction,lem:second reduction,corollary:remove-first-row\]), we may further assume that $M^{1}_{n-1}$ has rank either $n-2$ or $n-1$. In this outline, we will only discuss the case when $M^{1}_{n-1}$ has rank $n-1$; the other case is easier, and is handled exactly as in [@nguyen2012inverse] (see \[lemma:reduction-to-linear,eqn:conclusion-degenerate-case\]). A decoupling argument due to [@costello2006random] (see \[lemma:decoupling-CTV\]) further reduces the problem (albeit in a manner incurring a loss) to bounding from above the probability that $$\sum_{i\in U_1}\sum_{j \in U_2}c_{ij}(x_i - x_i')(x_j - x_j')=0,$$ where $U_1 \sqcup U_2 $ is an arbitrary non-trivial partition of $[n-1]$, and $x_i', x_j'$ are independent copies of $x_i, x_j$ (see \[corollary:decoupling-conclusion\]). For the remainder of this discussion, the reader should think of $|U_2|$ as ‘
{ "pile_set_name": "ArXiv" }
null
null
--- abstract: 'We reply to the recent comments on our published papers, Phys. Rev. Lett. 109 (2012) 152005 and Phys. Lett. B717 (2012) 214. We point out that the criticisms about the transverse polarization parton sum rule we obtained are invalid.' author: - Xiangdong Ji - Xiaonu Xiong - Feng Yuan title: | Reply to arXiv:1211.3957 and arXiv:1211.4731 by Leader [*et al.*]{}\ and arXiv:1212.0761 by Harindranath [*et al.*]{} --- The comments by Leader [*et al.*]{} [@Leader:2012md] and by Harindranath [*et al.*]{} [@Harindranath:2012wn] on our Phys. Rev. Lett. paper [@Ji:2012sj] arise from a misunderstanding of our result. We have in fact published a longer paper [@Ji:2012vj] following the Letter, fully explaining what our partonic transverse spin sum rule means. We reiterate that our result stands following a careful consideration of all pertinent issues. First of all, we remarked on the simple fact that the ordinary transverse angular momentum (AM) does not commute with the longitudinal boost, and thus a frame-independent picture for the transverse spin is not the transverse AM alone, but the well-known Pauli-Lubanski (PL) spin $\hat W_\perp$ [@Ji:2012vj]. The PL spin is diagonalized in the transversely-polarized nucleon state with arbitrary longitudinal momentum. The PL spin is defined as $\hat W^\mu \sim \epsilon^{\mu\alpha\beta\gamma} \hat J_{\alpha\beta} \hat P_\gamma$, and we take the nucleon state with $P^\mu=(P^0,P_\perp=0,P^3)$ and replace $\hat P^\mu$ by its eigenvalue, so that $\hat W^\mu$ linearly depends on the angular momentum operator $\hat J^{ij}$, as well as the boost operator $\hat J^{0i}$. In the Letter paper, we restrict ourselves to the light-cone rest frame with residual momentum $P^3=0$, so that only $P^+$ and $P^-$ are non-vanishing. Taking $\mu=1$, $\alpha=2$, and $\beta, \gamma$ to be $+$ and $-$, we have $W^1 \sim -\hat J^{2+}P^- + \hat J^{2-}P^+$.
In light-cone quantization, $\hat J^{2-}$ is a higher-twist contribution depending on products of three or four parton operators; however, its matrix element is related to that of $\hat J^{2+}$ by simple Lorentz symmetry, and hence its contribution is considered known. Thus the leading-twist parton picture arises from $\hat J^{2+}$, which is interaction-independent. One can obtain a simple partonic interpretation for this part related to the tensor $T^{++}$, as explained in the Letter paper; the result is an integral over the intuitive parton transverse AM density $x(q(x)+E(x))/2$ and is consistent with Burkardt's wave-packet picture [@burkardt]. Thus, the key aspect of finding a partonic picture for the transverse PL spin is to focus on the leading-twist part and do away with the other parts through Lorentz symmetry, a strategy first pointed out by Burkardt. Note that the spin operators of quarks and gluons do not contribute at the leading twist, as they are now higher-twist operators in light-cone quantization. The PL vector was also the starting point of Ref. [@Harindranath:2012wn] and an earlier publication of the same authors, Ref. [@Harindranath:2001rc], in which the equations (2.6) and (2.7) reduce to $W^i$ when the external particle has no transverse momentum, $P^i=0$. One can easily find that they agree with our starting point of the discussion, contrary to the claim in their comment, Ref. [@Harindranath:2012wn]. Moreover, our conclusion does not contradict that in Ref. [@Harindranath:2001rc]: in our longer version, Ref. [@Ji:2012vj], we find that the twist-3 and twist-4 parts of $W^i$ are interaction-dependent. Our new result [@Ji:2012sj] beyond Ref. [@Harindranath:2001rc] is that there is a twist-two contribution of the transverse polarization which can be understood in a simple parton picture, related to the generalized parton distributions (GPD), whereas the interaction-dependent part is related to that of the twist-2 GPD contribution by symmetry.
Finally, Leader in a separate note [@Leader:2012ar] criticized our light-front result when generalized to an arbitrary residual momentum frame [@Ji:2012vj]. A careful reading of our paper reveals that we have already commented on the role of the higher term $\bar C$ in the paragraph following Eqs. (16) and (23). The frame-independence of our result remains true for the leading-twist part, which is a consequence of the dependence of the transverse spin $\hat W_\perp$ on the very boost operator that serves to cancel the frame dependence of the transverse AM. [99]{} E. Leader and C. Lorce, arXiv:1211.4731 \[hep-ph\]. A. Harindranath, R. Kundu, A. Mukherjee and R. Ratabole, arXiv:1212.0761 \[hep-ph\]. X. Ji, X. Xiong and F. Yuan, Phys. Rev. Lett. [**109**]{}, 152005 (2012) \[arXiv:1202.2843 \[hep-ph\]\]. X. Ji, X. Xiong and F. Yuan, Phys. Lett. B [**717**]{}, 214 (2012) \[arXiv:1209.3246 \[hep-ph\]\]. M. Burkardt, Phys. Rev. D [**72**]{}, 094020 (2005) \[hep-ph/0505189\]. A. Harindranath, A. Mukherjee and R. Ratabole, Phys. Rev. D [**63**]{}, 045006 (2001). E. Leader, arXiv:1211.3957 \[hep-ph\].
{ "pile_set_name": "ArXiv" }
null
null
--- abstract: | We present X-ray/$\gamma$-ray spectra of the binary GX 339–4 observed in the hard state simultaneously by [*Ginga*]{} and [*CGRO*]{} OSSE during an outburst in 1991 September. The X-ray spectra are well represented by a power law with a photon spectral index of $\Gamma\simeq 1.75$ and a Compton reflection component with a fluorescent Fe K$\alpha$ line corresponding to a solid angle of an optically-thick, ionized, medium of $\sim 0.4\times 2\pi$. The OSSE data ($\geq 50$ keV) require a sharp high-energy cutoff in the power-law spectrum. The broad-band spectra are very well modelled by repeated Compton scattering in a thermal plasma with an optical depth of $\tau\sim 1$ and $kT\simeq 50$ keV. We also study the distance to the system and find it to be $\ga 3$ kpc, ruling out earlier determinations of $\sim 1$ kpc. Using this limit, the observed reddening and the orbital period, we find the allowed range of the mass of the primary is consistent with it being a black hole. We find the data are inconsistent with models of either homogeneous or patchy coronae above the surface of an accretion disc. Rather, they are consistent with the presence of a hot inner disc with a viscosity parameter of $\alpha\sim 1$ accreting at a rate close to the maximum set by advection. The hot disc is surrounded by a cold outer disc, which gives rise to the reflection component and a soft X-ray excess, also present in the data. The seed photons for Comptonization are unlikely to be due to thermal synchrotron radiation. Rather, they are supplied by the outer cold disc and/or cold clouds within the hot disc. $e^\pm$ pair production is negligible if electrons are thermal. The hot disc model, whose scaled parameters are independent of the black-hole mass, is supported by the similarity of the spectrum of GX 339–4 to those of other black-hole binaries and Seyfert 1s. On the other hand, their spectra in the soft $\gamma$-ray regime are significantly harder than those of weakly-magnetized neutron stars.
Based on this difference, we propose that the presence of broad-band spectra corresponding to thermal Comptonization with $kT\ga 50$ keV represents a black-hole signature. author: - | \ $^1$N. Copernicus Astronomical Center, Bartycka 18, 00-716 Warsaw, Poland\ $^2$Stockholm Observatory, S-133 36 Saltsjöbaden, Sweden\ $^3$Astronomical Observatory, Jagiellonian University, Orla 171, 30-244 Cracow, Poland\ $^4$Laboratory for High Energy Astrophysics, NASA/Goddard Space Flight Center, Greenbelt, MD 20771, USA\ $^5$E. O. Hulburt Center for Space Research, Naval Research Laboratory, Washington, DC 20375, USA\ date: 'Accepted 1998 July 28. Received 1998 January 2' title: | Broad-band X-ray/$\bmath{\gamma}$-ray spectra and binary parameters\ of GX 339–4 and their astrophysical implications --- \[firstpage\] accretion, accretion discs – binaries: general – gamma-rays: observations – gamma-rays: theory – stars: individual (GX 339–4) – X-rays: stars. INTRODUCTION {#s:intro} ============ GX 339–4, a bright and well-studied binary X-ray source, is commonly classified as a black hole candidate based on the similarity of its X-ray spectral states and short-time variability to those of Cyg X-1 (e.g. Tanaka & Lewin 1995). However, determinations of the mass of its compact star, $M_{\rm X}$, have been inconclusive (e.g. Cowley, Crampton & Hutchings 1987, hereafter C87; Callanan et al. 1992, hereafter C92), and thus its nature has been uncertain. Therefore, further studies of the properties of GX 339–4 as well as their comparison to those of objects with more direct evidence for harbouring a black hole are of crucial importance.
In this work, we present two, very similar, broad-band X-ray/$\gamma$-ray (hereafter X$\gamma$) spectra of GX 339–4 obtained during a strong outburst of the source in September 1991 (Harmon et al. 1994) simultaneously by [*Ginga*]{} (Makino et al. 1987) and the Oriented Scintillation Spectroscopy Experiment (OSSE) detector (Johnson et al. 1993) on board the [*Compton Gamma Ray Observatory*]{} ([*CGRO*]{}). The source was in the hard (also called ‘low’) spectral state. The [*Ginga*]{} and OSSE observations were reported separately by Ueda, Ebisawa & Done (1994, hereafter U94) and Grabelsky et al. (1995, hereafter G95), respectively. However, the data from the two instruments have not been fitted together, and, e.g. G95 found models with Compton reflection of X$\gamma$ photons from an accretion disc unlikely, whereas U94 found strong evidence in the data for the presence of this process. Here, we re-analyze the simultaneous [*Ginga*]{} and OSSE data based on the present accurate calibration of those instruments. This leads to a reconciliation of the apparent discrepancies between the data sets from the two instruments, and allows us to fit the joint data with physical models. We also study the distance, reddening, Galactic column density and the masses of the binary members. Those results are then used in studying radiative processes, geometry and physical models of the source. Finally, we find the X$\gamma$ spectrum of GX 339–4 similar to those of black-hole binaries and Seyfert AGNs, and, in particular, virtually identical to that of NGC 4151, the brightest Seyfert in hard X-rays. This favours physical models with scaled parameters independent of the central mass, such as a hot accretion disc with unsaturated thermal Comptonization (Shapiro, Lightman & Eardley 1976, hereafter S76). On the other hand, the spectrum of GX 339–4 is significantly different from those observed from neutron star binaries, which supports the black-hole nature of the compact object in GX 339–4.
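As a rough consistency check (our own back-of-the-envelope sketch using the textbook unsaturated-Comptonization estimate, not the detailed spectral fits of the paper), the plasma parameters $kT\simeq 50$ keV and $\tau\sim 1$ quoted above imply a photon index in the neighbourhood of the fitted $\Gamma\simeq 1.75$:

```python
import math

# hard-state plasma parameters quoted in the text
kT_e = 50.0  # electron temperature [keV]
tau = 1.0    # Thomson optical depth

theta = kT_e / 511.0  # dimensionless temperature kT_e / (m_e c^2)

# Compton y-parameter with a first-order relativistic correction
y = 4.0 * theta * (1.0 + 4.0 * theta) * tau * (1.0 + tau)

# classic unsaturated-Comptonization estimate: alpha (alpha + 3) = 4 / y,
# hence alpha = sqrt(9/4 + 4/y) - 3/2, and the photon index is alpha + 1
alpha = math.sqrt(2.25 + 4.0 / y) - 1.5
Gamma = alpha + 1.0
```

This gives $\Gamma\approx 1.9$; the residual difference from the fitted value is unsurprising, since the slope of a real Comptonized spectrum also depends on the geometry and on the seed-photon distribution.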
THE PARAMETERS OF THE BINARY ============================ In order to analyze the X-ray data meaningfully, we need to estimate basic parameters of the binary system. Of importance here are the Galactic column density, $\nh$, the interstellar reddening, $\ebv$, the distance, $d$ (for which published estimates range from 1.3 to 4 kpc), the masses of the primary and secondary, $M_{\rm X}$ and $M_{\rm c}$, respectively, and the inclination (with respect to the normal to the orbital plane), $i$. Reddening and column density ---------------------------- Grindlay (1979) found strong interstellar Na[i]{} D absorption lines and diffuse interstellar bands at $\lambda \sim 5775$–5795, 6010, 6176, 6284, and 6376 Å, while C87 found a strong interstellar Ca[ii]{} K absorption line and diffuse $\lambda 4430$ Å absorption band. The equivalent widths of these features are consistent with $\ebv \simeq 1$–1.3. From the uncertainties of the published estimates, we derive the weighted mean of $$\ebv=1.2 \pm 0.1\,.$$ The most extended all-sky study of the distribution of neutral H based on high-resolution [*IUE*]{} observations of Ly$\alpha$ absorption towards 554 OB stars shows their $\nh$ well correlated with the column density of dust, measured by $\ebv$, with $\langle \nh/\ebv\rangle = 4.93 \times 10^{21}\, {\rm cm^{-2}\, mag^{-1}}$ (Diplas & Savage 1994). $\ebv = 1.2 \pm 0.1$ derived above thus indicates $$\nh = (6.0 \pm 0.6) \times 10^{21} \rm cm^{-2}\,.$$ This $\nh$ is in excellent agreement with that derived from X-ray data. We obtain $\nh=(6.2\pm 0.7) \times 10^{21}$ cm$^{-2}$ from the depth of the O edge of $\tau_{\rm O}=2.6\pm 0.3$ measured by Vrtilek et al. (1991), and assuming the O abundance of Anders & Ebihara (1982). On the other hand, Vrtilek et al. (1991) and Ilovaisky et al. (1986) have obtained $\nh= (6.6\pm 0.3)\times 10^{21}$ cm$^{-2}$ and $(5.0\pm
{ "pile_set_name": "ArXiv" }
null
null
--- abstract: | We outline our methods for obtaining high precision mass profiles, combining independent weak-lensing distortion, magnification, and strong-lensing measurements. For massive clusters the strong and weak lensing regimes contribute equal logarithmic coverage of the radial profile. The utility of high-quality data is limited by the cosmic noise from large scale structure along the line of sight. This noise is overcome when stacking clusters, as too are the effects of cluster asphericity and substructure, permitting a stringent test of theoretical models. We derive a mean radial mass profile of four clusters of similar mass from high-quality [*Hubble Space Telescope*]{} and Subaru images, in the range $R=40$kpc$\,h^{-1}$ to 2800kpc$h^{-1}$, where the inner radial boundary is sufficiently large to avoid smoothing from miscentering effects. The stacked mass profile is detected at $58\sigma$ significance over the entire radial range, with the contribution from the cosmic noise included. We show that the projected mass profile has a continuously steepening gradient out to beyond the virial radius, in remarkably good agreement with the standard Navarro-Frenk-White form predicted for the family of CDM-dominated halos in gravitational equilibrium. The central slope is constrained to lie in the range $-d\ln\rho/d\ln{r}=0.89^{+0.27}_{-0.39}$. The mean concentration is $c_{\rm vir}=7.68^{+0.42}_{-0.40}$ (at $M_{\rm vir}=1.54^{+0.11}_{-0.10}\times 10^{15}M_\odot\,h^{-1}$), which is high for relaxed, high-mass clusters, but consistent with $\Lambda$CDM when a sizable projection bias estimated from $N$-body simulations is considered. This possible tension will be more definitively explored with new cluster surveys, such as CLASH, LoCuSS, Subaru HSC, and XMM-XXL, to construct the $c_{\rm vir}$–$M_{\rm vir}$ relation over a wider mass range.
author: - 'Keiichi Umetsu, Tom Broadhurst, Adi Zitrin, Elinor Medezinski, Dan Coe, Marc Postman' title: 'A Precise Cluster Mass Profile Averaged from the Highest-Quality Lensing Data' --- Introduction {#sec:intro} ============ Clusters of galaxies represent the largest gravitationally-bound objects in the universe, which contain a wealth of astrophysical and cosmological information, related to the nature of dark matter, primordial density perturbations, and the emergence of structure over cosmic time. Observational constraints on the properties and evolution of clusters provide independent and fundamental tests of any viable cosmology, structure formation scenario, and possible modifications of the laws of gravity, complementing large-scale cosmic microwave background and galaxy clustering measurements [e.g., @Komatsu+2011_WMAP7; @Percival+2010_BAO]. A key ingredient of cluster-based cosmological tests is the mass and internal mass distribution of clusters. In this context, the current cosmological paradigm of structure formation, the standard $\Lambda$ cold (i.e., non-relativistic) dark matter ($\Lambda$CDM, hereafter) model, provides observationally testable predictions for CDM-dominated halos over a large dynamical range in density and radius. Unlike galaxies where substantial baryonic cooling is present, massive clusters are not expected to be significantly affected by gas cooling [e.g., @Blumenthal+1986; @Broadhurst+Barkana2008]. This is because the majority of baryons ($\sim 80\%$) in massive clusters comprise a hot, X-ray emitting diffuse intracluster medium (hereafter ICM), in which the high temperature and low density prevent efficient cooling and gas contraction, and hence the gas pressure roughly traces the gravitational potential produced by the dominant dark matter [see @Kawaharada+2010; @Molnar+2010_ApJL]. The ICM represents only a minor fraction of the total mass near the centers of clusters [@2008MNRAS.386.1092L; @2009ApJ...694.1643U].
Consequently, for clusters in a state of quasi-equilibrium, the form of their total mass profiles reflects closely the distribution of dark matter [@Mead+2010_AGN]. High-resolution $N$-body simulations of collisionless CDM exhibit an approximately “universal” form for the spherically-averaged density profile of virialized dark matter halos [@1997ApJ...490..493N NFW hereafter], with some intrinsic variance in the mass assembly histories of individual halos [@Jing+Suto2000; @Tasitsiomi+2004; @Navarro+2010]. The predicted logarithmic gradient $\gamma_{\rm 3D}(r)\equiv -d\ln{\rho}/d\ln{r}$ of the NFW form flattens progressively toward the center of mass, with a central cusp slope flatter than a purely isothermal structure ($\gamma_{\rm 3D}=2$) interior to the inner characteristic radius $r_s ({\lower.5ex\hbox{$\; \buildrel < \over \sim \;$}}300\,$kpc$h^{-1}$ for cluster-sized halos), providing a distinctive prediction for the empirical form of CDM halos in gravitational equilibrium. A useful index of the degree of concentration is $c_{\rm vir}=r_{\rm vir}/r_s$, which compares the virial radius $r_{\rm vir}$ to the characteristic radius $r_s$ of the NFW profile. This empirical NFW profile is characterized by the total mass within the virial radius, $M_{\rm vir}$, and the halo concentration $c_{\rm vir}$. Theoretical progress has been made in understanding the form of this profile in terms of the dynamical structure using both numerical and analytical approaches [@Taylor+Navarro2001; @Lapi+Cavaliere2009; @Navarro+2010], though we must currently rely on the quality of $N$-body simulations when making comparisons with CDM-based predictions for cluster mass profiles. 
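As an aside for readers who want to experiment with the profile shape, the NFW logarithmic gradient can be written in closed form: for $\rho(r)\propto (r/r_s)^{-1}(1+r/r_s)^{-2}$ one obtains $\gamma_{\rm 3D}=(1+3x)/(1+x)$ with $x=r/r_s$, running from 1 at the center through 2 at $r_s$ toward 3 at large radii. The short sketch below (function names are illustrative, not code used in the paper) checks this against a finite-difference slope:

```python
import numpy as np

def nfw_density(r, r_s=0.3, rho_s=1.0):
    """Unnormalized NFW profile: rho(r) = rho_s / [(r/r_s) (1 + r/r_s)^2]."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def log_slope(r, r_s=0.3):
    """Analytic gamma_3D(r) = -d ln(rho)/d ln(r) = (1 + 3x)/(1 + x), x = r/r_s."""
    x = r / r_s
    return (1.0 + 3.0 * x) / (1.0 + x)

# Finite-difference check of the analytic slope at r = r_s, where gamma = 2
r, h = 0.3, 1e-6
num = -(np.log(nfw_density(r * (1 + h))) - np.log(nfw_density(r * (1 - h)))) / (2 * h)
```

The slope runs from the shallow central cusp ($\gamma_{\rm 3D}=1$, flatter than isothermal) to $\gamma_{\rm 3D}\to 3$ in the outskirts, which is the continuous steepening referred to throughout the paper.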
In the context of standard hierarchical clustering models, the halo concentration should decline with increasing halo mass, since more massive dark matter halos collapse later, when the mean background density of the universe is correspondingly lower [@2001MNRAS.321..559B; @Zhao+2003; @2007MNRAS.381.1450N]. This prediction for the halo mass-concentration relation and its evolution has been established thoroughly with detailed simulations [e.g., @1997ApJ...490..493N; @2001MNRAS.321..559B; @2007MNRAS.381.1450N; @Duffy+2008; @Klypin+2010], although sizable scatter around the mean relation is present due partly to variations in the formation epochs of halos [@2002ApJ...568...52W; @2007MNRAS.381.1450N; @Zhao+2009]. Massive clusters are of particular interest in this context because they are predicted to have a relatively shallow mass profile with a pronounced radial curvature. Gravitational lensing of background galaxies offers a robust way of directly obtaining the mass distribution of galaxy clusters [see @2001PhR...340..291B; @Umetsu2010_Fermi and references therein] without requiring any assumptions on the dynamical and physical state of the system. A detailed examination of this fundamental prediction has been the focus of our preceding work [@BTU+05; @Medezinski+07; @BUM+08; @UB2008; @2009ApJ...694.1643U; @Lemze+2009; @Umetsu+2010_CL0024; @Umetsu+2011]. Systematic cluster lensing surveys are in progress to obtain mass profiles of representative clusters over a wide range of radius by combining high-quality strong and weak lensing data. Deep multicolor images of massive cluster cores from Advanced Camera for Surveys (ACS) observations with the [*Hubble Space Telescope*]{} ([*HST*]{}) allow us to identify many sets of multiple images spanning a wide range of redshifts for detailed strong-lens modeling [e.g., @2005ApJ...621...53B; @Zitrin+2009_CL0024; @Zitrin+2010_A1703; @Zitrin+2011_MACS; @Zitrin+2010_MS1358; @Zitrin+2011_A383]. 
The wide-field prime-focus cameras of Subaru and CFHT have been routinely producing data of sufficient quality to obtain accurate measurements of the weak-lensing signal, providing model-independent cluster mass profiles out to beyond the virial radius [e.g., @BTU+05; @BUM+08; @2007ApJ...668..643L; @UB2008; @2009ApJ...694.1643U; @Umetsu+2010_CL0024; @Umetsu+2011; @Coe+2010]. Our earlier work has demonstrated that without adequate color information, the weak-lensing signal can be heavily diluted, particularly toward the cluster center, by the presence of unlensed cluster members, leading to biased cluster mass profile measurements with underestimated concentrations and internal inconsistency, with the weak-lensing based profile underpredicting the observed Einstein radius [@BTU+05; @UB2008; @Medezinski+2010]. Careful lensing work on individual clusters has shown that full mass profiles constructed from combined strong and weak lensing measurements show a continuously steepening radial trend consistent with the predicted form for the family of collisionless CDM halos. Intriguingly, these initial results from combined strong and
--- abstract: 'I demonstrate that an effect similar to the Römer delay, familiar from timing radio pulsars, should be detectable in the first eclipsing double white dwarf (WD) binary, [NLTT 11748]{}. By measuring the offset of the interval between the secondary and primary eclipses from one-half the orbital period (4.6 s), one can determine the physical size of the orbit and hence constrain the masses of the individual WDs. A measurement with uncertainty $<0.1\,$s—possible with modern large telescopes—will determine the individual masses to $\pm0.02M_\odot$ when combined with good-quality ($<1\,{\ensuremath{{\rm km\,s}^{-1}}}$) radial velocity data, although the eccentricity must also be known to high accuracy ($\pm 10^{-3}$). Mass constraints improve as $P^{-1/2}$ (where $P$ is the orbital period), so this works best in wide binaries and should be detectable even for non-degenerate stars, but such constraints require the mass ratio to differ from unity and the orbits to be undistorted.' author: - 'David L. Kaplan' title: Mass Constraints from Eclipse Timing in Double White Dwarf Binaries --- Introduction ============ Since the discovery of binary pulsars [@ht75], precision timing (typical uncertainties $<1\,\mu$s) has been used to derive a variety of physical constraints [see the discussion in @lk04]. The arrival-time delay across the orbit (the Römer delay) immediately gives the projected semimajor axis of the pulsar [@bt76]. This, especially when coupled with the relatively narrow mass distribution of neutron stars [@tc99], constrains the mass of the companion. I contrast this with eclipse timing of planetary systems (typically uncertainties are $\gtrsim$seconds). Here, with a mass ratio $\approx 10^{-3}$ the radial velocity curve gives a limit on the mass of the companion planet. 
With transiting systems, $\sin i\approx 1$ and the mass of the planet is further constrained but not known uniquely [@cblm00], although with knowledge of the stellar parameters one can infer the planetary mass and radius [e.g., @bcg+01]. If individual eclipses can be timed to high precision (and here I mean both primary and secondary eclipses, i.e., transits and occultations), one can learn more about the system (e.g., @winn10). Variations in the eclipse times can unveil the presence of additional bodies in the system [e.g., @assc05; @hm05], precession [e.g., @me02], and kinematics of the system [@rafikov09]. With the recent discovery [@sks+10] of [NLTT 11748]{}, an eclipsing double white dwarf (WD) binary with a tight enough orbit that the binary will merge within a Hubble time, a whole new series of questions may be asked. The initial constraints are the radial velocity amplitude of the lighter object (owing to the inverted mass–radius relation of WDs, this object is the larger and brighter member of the system) and the widths and depths of both transit and occultation. From this, assuming a cold C/O WD for the heavier object, @sks+10 were able to limit the masses and radii of both objects, but could not determine unique constraints. Measurement of spectral lines from the fainter object would determine both masses uniquely, but this is challenging as the fainter object contributes only $\approx 3.5$% of the flux of the brighter. A number of other close WD binaries have been discovered in the last 2 years (see Table \[tab:wd\] for those with undetermined inclinations). Most of them, like [NLTT 11748]{}, appear to have a low-mass ($\lesssim 0.2\,M_{\odot}$) He WD in orbit with a more massive (0.5–1.0$M_{\odot}$) C/O WD. 
Such systems are of interest because of their eventual evolution, with mass transfer brought on by gravitational radiation [@nypzv01], and they are presumed to be the progenitors of highly variable objects: R CrB stars, AM CVn binaries, and Type Ia supernovae [@it84; @webbink84]. Many of these binaries are also of immediate interest as verification targets for the *Laser Interferometer Space Antenna* (*LISA*) mission [@nelemans09].

[l c c c c c c c l]{}
SDSS J1053+5200 & 1.02 & 265 & 0.26 & $>0.017$ & 0.20 & 0.04 & 0.2 & 1,2\
SDSS J1436+5010 & 1.10 & 347 & 0.45 & 0.014 & 0.22 & 0.04 & 0.7 & 1,2\
SDSS J0849+0445 & 1.89 & 367 & 0.65 & 0.012 & 0.17 & 0.05 & 2.0 & 1\
WD 2331+290 & 4.08 & 156 & 0.39 & 0.015 & 0.32 & 0.016 & 0.5 & 3,4,5\
SDSS J1257+5428 & 4.55 & 323 & 0.92 & 0.009 & 0.15 & 0.04 & 4.7 & 6,7,8\
[NLTT 11748]{} & 5.64 & 271 & 0.74 & 0.010 & 0.15 & 0.04 & 4.6 & 9,10\
SDSS J0822+2753 & 5.85 & 271 & 0.71 & 0.010 & 0.17 & 0.04 & 4.7 & 1\

Of the 11 compact WD binaries known, only [NLTT 11748]{} is known to be eclipsing, but searches for the other sources are not uniformly constraining and additional systems may yet be discovered. The flux ratios vary for the systems, and in some cases it may be easier to directly measure the radial velocity curves for both members of the binary. Without two radial velocity curves, mass constraints are limited. Such constraints are invaluable in understanding the detailed formation histories and expected evolution of these systems as well as in determining the mass–radius relation from eclipse measurements. Moreover, their use as *LISA* verification sources is improved by accurate knowledge of the binary parameters. In this [Letter]{}, I discuss an effect that uses precision timing of the eclipses in such double WD systems to help constrain the individual masses of the WDs. 
This technique is known in other contexts, being common in radio pulsar systems and planetary systems [@kcn+07; @hdd+10; @ack+10], although in the latter it is largely a nuisance parameter and does not constrain the systems. I discuss its applicability to eclipsing double WD systems, the required observational precision, and the resulting accuracy. Light Travel Delay and Mass Constraints ======================================= In a system with a circular orbit, one often speaks of the primary and secondary eclipses as occurring exactly $1/2$ period apart, but this is not the case. If the members of the binary are of unequal mass, the finite speed of light will cause an apparent shift in the phase of the secondary eclipse from $P/2$, where $P$ is the period of the binary [@loeb05; @fabrycky10]. This is similar to the shifts in eclipse timing caused by a perturbing third body on a binary system [@sd95; @ddj+98; @ddk+00; @sterken05; @lkk+09; @qdl+09], although here one only requires two bodies and the frequency of the shift is known. In the case of a planet with mass $m\ll M$ orbiting a star with mass $M$, one has a primary eclipse when the planet is in front of the star. The light is blocked at time $t=0$. However, that light was emitted earlier by the star, at time $t_1=0-a/c$, since it traveled a distance $a$ (the semimajor axis). For the secondary eclipse, the light is emitted by the planet at time $t=P/2$ but is blocked $a/c$ later, at $t_2=P/2+a/c$. The difference of these times exceeds $P/2$ by ${\ensuremath{\Delta t_{\rm LT}}}=t_2-t_1-P/2=2a/c$, the sought-after quantity. For two finite masses, I consider two objects orbiting their center of mass with period $P$, masses $M_1$ and $M_2$, and semimajor axis $a$. The total mass of the system is $M=M_1+M_2$, and of course $4\pi^2a^3=P^2G M$; the first object orbits at a radius $a_1=a(M_2/M)$ and the second object orbits at a radius $a_2=a(M_1/M)$. 
Near primary eclipse, the primary is at $[x,y]=[2\pi a_1 t/P, a_1]$ and the secondary is at $[-2\pi a_2 t/P,-a_2]$ at time $t$, with the observer at $[0,-\infty]$. I project the image of the two objects to the barycenter at $y=0$. This gives $x_{\rm B
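Although the derivation above is cut off, the magnitude of the effect is easy to preview numerically. As a hedged sketch (an assumption for illustration, not the paper's final expression), suppose the planet-case result generalizes to $\Delta t_{\rm LT} = 2(a_2-a_1)/c = 2a(M_1-M_2)/(Mc)$ for two finite masses; with Kepler's third law and rough NLTT 11748 parameters this reproduces the few-second offset quoted earlier:

```python
import math

G = 6.674e-11    # gravitational constant [m^3 kg^-1 s^-2]
C = 2.998e8      # speed of light [m s^-1]
MSUN = 1.989e30  # solar mass [kg]

def roemer_delay(p_sec, m1_msun, m2_msun):
    """Offset of the secondary eclipse from P/2, under the assumed
    finite-mass generalization dt = 2(a2 - a1)/c = 2a(M1 - M2)/(M c)."""
    m_tot = (m1_msun + m2_msun) * MSUN
    # Kepler's third law: 4 pi^2 a^3 = P^2 G M
    a = (G * m_tot * p_sec ** 2 / (4.0 * math.pi ** 2)) ** (1.0 / 3.0)
    return 2.0 * a * (m1_msun - m2_msun) / ((m1_msun + m2_msun) * C)

# Rough NLTT 11748 parameters: P = 5.64 hr, M1 ~ 0.74 Msun, M2 ~ 0.15 Msun
dt = roemer_delay(5.64 * 3600.0, 0.74, 0.15)  # a few seconds
```

With these illustrative masses the delay comes out near the 4.6 s listed for NLTT 11748 in Table \[tab:wd\], and it vanishes when $M_1=M_2$, which is why the method requires a mass ratio different from unity.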
--- abstract: 'We propose that the dispersion management of coherent atomic matter waves can be exploited to overcome quantum back-action in condensate-based optomechanical sensors. The effective mass of an atomic Bose-Einstein condensate modulated by an optical lattice can become negative, resulting in a negative-frequency optomechanical oscillator, negative environment temperature, and optomechanical properties opposite to those of a positive-mass system. This enables a quantum-mechanics-free subsystem insulated from quantum back-action.' author: - 'Keye Zhang$^1$, Pierre Meystre$^2$, and Weiping Zhang$^{1}$' title: 'Back-action-free quantum optomechanics with negative-mass Bose-Einstein condensates' --- Introduction ============ Atomic Bose-Einstein condensates (BECs) present a number of desirable features for precision measurements as well as for a broad spectrum of tests of fundamental physics. These include, for example, thermal-noise-free sensors for atomic clock and interferometry applications [@Dunningham2005] and high-resolution magnetometers [@Vengalattore2007], tests of the Casimir-Polder force [@Obrecht2007], the development of quantum simulators for studies of quantum phase transitions [@Greiner2002] and artificial gauge fields [@Spielman2009], cavity QED experiments [@Brennecke2007], and studies of decoherence and quantum entanglement in many-body systems [@Esteve2008; @Cramer2013]. These applications benefit significantly from the extremely low temperatures, high-order coherence, and bosonic stimulation properties of BECs. However, the quantum nature of the condensates usually results in quantum back-action that randomly disturbs the quantum state to be detected [@Murch2008; @Treutlein2007], resulting, e.g., in the standard quantum limit (SQL) of displacement measurements  [@Braginsky2]. 
Recent experiments have also demonstrated that in BECs quantum back-action can be suppressed using spin squeezing or particle entanglement caused by atom-atom interactions [@Esteve2008; @Gross2010]. This approach is inspired by ideas originally developed in the context of gravitational wave detection  [@Braginsky; @Braginsky2], where the injection of squeezed light fields in the empty input port of the gravitational wave interferometer was proposed to beat the SQL. However, strong degrees of squeezing and the entanglement of large numbers of particles remain challenging due to their increasing sensitivity to decoherence. In this paper we show that the dispersion management of the Schr[ö]{}dinger field provides a promising alternative to the elimination of quantum back-action effects in BEC-based measurement schemes. When trapped in a weak optical lattice potential, the condensate can be forced into a regime of anomalous dispersion where it acts as a macroscopic quantum object with negative effective mass [@Eiermann2003]. That negative mass can serve as a back-action canceler to a normal, positive mass partner and isolate quantum-mechanics-free subsystems (QMFSs), as discussed in a recent proposal by Tsang and Caves [@Tsang2012]. A similar noise-canceling effect is also expected to be realized by cavity photons with opposite detunings [@Tsang2010] as well as atomic ensembles with opposite spins [@Wasilewski2010]. 
Cavity optomechanical systems based on the collective motion of BECs [@Brennecke2008] and non-degenerate ultracold atomic gases [@Murch2008] have proven to be particularly well suited to demonstrate a number of quantum effects, including the observation of the quantum back-action of position measurements [@Murch2008], the asymmetry in the power spectrum of displacement noise due to the noncommuting nature of boson creation and annihilation operators [@Brahms2012], and the optomechanical cooling of a collective motional mode of an atomic ensemble down to the quantum regime [@SSmith2011]. These experiments pave the way to promising ultracold-atoms-based quantum metrology schemes, which we use to illustrate the role of the negative effective mass of the condensate in overcoming the quantum back-action. Back-Action-Free Quantum Optomechanics ====================================== Reference [@Tsang2012] showed that a simple setup to implement a QMFS comprises two harmonic oscillators, $A$ and $B$, of identical frequencies and opposite masses. In the following we assume that they are coupled optomechanically to a common optical field mode $\hat c$ as well as to time-dependent external perturbations $f_A$ and $f_B$ through the interaction Hamiltonian $$V= \hbar [\Delta_c + G(\hat{q}+\hat{q}')]\hat{c}^{\dagger}\hat{c}+f_A \hat{q} +f_B \hat{q}'.\label{H1}$$ Consider then the variables $$\begin{aligned} \hat{Q}&=&\hat{q}+\hat{q}'\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\hat{P}=\frac{1}{2}(\hat{p}+\hat{p}')\nonumber\\ \hat{\Phi}&=&\frac{1}{2}(\hat{q}-\hat{q}')\,\,\,\,\,\,\,\,\,\,\,\hat{\Pi}=\hat{p}-\hat{p}'\end{aligned}$$ It is easily verified that $[\hat{Q}, \hat{\Pi}]=0$ and $$\dot {\hat{Q}} = \frac{\hat{\Pi}}{m},\,\,\,\dot{ \hat{\Pi} }= -m\omega^2 \hat{Q}+f_B-f_A,\,\,\,\dot{\hat{c}} = i\Delta_{c}\hat{c}-iG\hat{Q}\hat{c}.\label{dQ}$$ so that the dynamical pair of observables formed by the collective position $\hat{Q}$ and relative momentum $\hat{\Pi}$ forms a QMFS. 
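The commutation claims above can be checked mechanically. As an illustrative aside (mine, not the authors'), the classical counterpart of each commutator is a Poisson bracket, and a short symbolic computation confirms that $\{Q,\Pi\}=\{\Phi,P\}=0$ while $\{Q,P\}=\{\Phi,\Pi\}=1$, i.e. the two composite pairs are mutually commuting yet each is canonically conjugate:

```python
import sympy as sp

# Two degrees of freedom: (q, p) for oscillator A and (q', p') for oscillator B
q, p, q2, p2 = sp.symbols("q p q' p'")

def poisson(f, g):
    """Poisson bracket over the two degrees of freedom (q, p) and (q', p')."""
    return (sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)
            + sp.diff(f, q2) * sp.diff(g, p2) - sp.diff(f, p2) * sp.diff(g, q2))

Q = q + q2            # collective position
P = (p + p2) / 2      # collective momentum
Phi = (q - q2) / 2    # relative position
Pi = p - p2           # relative momentum
```

Since the variables are linear in the canonical coordinates, the classical brackets map directly onto the quantum commutators ($\{Q,\Pi\}=0$ corresponds to $[\hat Q,\hat\Pi]=0$).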
Equations (\[dQ\]) describe the motion of a particle driven by the difference in the external perturbations, $f_B-f_A$, resulting in a frequency shift of the cavity field that can be detected by interferometry. However, unlike the general optomechanical case, since the radiation pressure effect is absent in the equation for $\hat{\Pi}$, this measurement does not introduce any back-action and hence is not subject to the SQL. Complementary conclusions hold for the QMFS characterized by the pair of operators $\hat{\Phi}$ and $\hat{P}$ for an optomechanical coupling of the form $G(\hat{q}-\hat{q}')$. In that case the frequency shift is proportional to $f_B+f_A$. ![(Color online) Relationship diagram for the back-action evading setup in the “bare” (left) and “composite” (right) representations. The displacement of the composite oscillator $E$ results in a change in the phase of the cavity field $C$ that could be measured by homodyne detection, but the measurement back-action only affects the composite oscillator $D$. []{data-label="loop"}](loop.eps){width="3.5in"} Further insight into the underlying physics of this back-action-free measurement scheme can be gained by considering the quantum state dynamics. We assume that the system is initially uncorrelated, with the cavity field in a coherent state and the positive-mass oscillator $A$ and negative-mass oscillator $B$ both in their ground state, $$\left| {\psi (0)} \right\rangle = {\left| \alpha \right\rangle _C} \otimes {\left| 0 \right\rangle _A} \otimes {\left| 0 \right\rangle _B}.$$ As a result of the optomechanical interaction, (\[H1\]), the oscillators $A$ and $B$ become entangled with the cavity field $C$. The correlation loop of the total scheme is shown in Fig.\[loop\]. 
However, when expressing the state of the system in terms of the composite oscillators $D$ and $E$, described by the operators $\{\hat{Q}, \hat{P}\} $ and $\{\hat{\Phi}, \hat{\Pi}\} $, respectively, we find that it exhibits no three-body entanglement among the subsystems $C$, $D$, and $E$, but only two-body entanglement. Specifically we find, except for an unimportant constant phase factor, $$\begin{aligned} | \psi (t)\rangle &=& e^{ - |\alpha |^2/2}\sum_n \frac{\alpha^n}{\sqrt {n!}} \exp \left [\frac{-i4nGQ_s}{ \omega }\left (\omega t - \sin \omega t\right )\right ]\nonumber \\ &\times&| n \rangle _C|\phi_n(t) \rangle _D \otimes |\varphi(t)\rangle _E, \label{state} \end{aligned}$$ where $$\begin{aligned} \phi_n(t)&=&\frac{-1}{\omega \sqrt{\hbar m \omega}} (f_A+f_B+2\hbar G n)\left ( 1- e^{-i\omega t} \right ),\nonumber \\ \varphi(t)&=&\sqrt{\frac{m\omega}{\hbar}}Q_s\left ( 1- e^{-i\omega t} \right ),\nonumber\end{aligned}$$ $Q_{s}=(f_B-f_A)/m\omega^2$, and $|n\rangle_C$ are photon Fock states. Equation (\[state\]) shows that in contrast to the composite oscillator $D$,
--- abstract: 'Using the Kaczmarz algorithm, we prove that for any singular Borel probability measure $\mu$ on $[0,1)$, every $f\in L^2(\mu)$ possesses a Fourier series of the form $f(x)=\sum_{n=0}^{\infty}c_ne^{2\pi inx}$. We show that the coefficients $c_{n}$ can be computed in terms of the quantities $\widehat{f}(n) = \int_{0}^{1} f(x) e^{-2\pi i n x} d \mu(x)$. We also demonstrate a Shannon-type sampling theorem for functions that are in a sense $\mu$-bandlimited.' address: 'Department of Mathematics, Iowa State University, 396 Carver Hall, Ames, IA 50011' author: - 'John E. Herr and Eric S. Weber' bibliography: - 'fssm.bib' title: Fourier Series for Singular Measures --- Introduction ============ For a Borel probability measure $\mu$, a spectrum is a sequence $\{ \lambda_{n} \}_{n\in I}$ such that the functions $\{ e^{2 \pi i \lambda_{n} x} : n \in I \} \subset L^2(\mu)$ constitute an orthonormal basis. If $\mu$ possesses a spectrum, we say $\mu$ is spectral, and then every $f \in L^2(\mu)$ possesses a (nonharmonic) Fourier series of the form $ f(x) = \sum_{n \in I} \langle f(x), e^{2 \pi i \lambda_{n} x} \rangle e^{2 \pi i \lambda_{n} x}$. In [@JP98], Jorgensen and Pedersen considered the question of whether measures induced by iterated function systems on $\mathbb{R}^d$ are spectral. Remarkably, they demonstrated that the quaternary Cantor measure $\mu_4$ is spectral. Equally remarkably, they also showed that no three exponentials are orthogonal with respect to the ternary Cantor measure $\mu_3$, and hence $\mu_3$ is not spectral. The lack of a spectrum for $\mu_3$ motivated subsequent research to relax the orthogonality condition, instead searching for an exponential frame or Riesz basis, since an exponential frame would provide a Fourier series (see [@DS52]) similar to the spectral case. Though these searches have yielded partial results, it is still an open question whether $L^2(\mu_3)$ possesses an exponential frame. 
It is known that there exist singular measures without exponential frames. In fact, Lai [@Lai12] showed that self-affine measures induced by iterated function systems with no overlap cannot possess exponential frames if the probability weights are not equal. In this paper, we demonstrate that the Kaczmarz algorithm educes another potentially fruitful substitute for exponential spectra and exponential frames: the “effective” sequences defined by Kwapień and Mycielski [@KwMy01]. We show that the nonnegative integral exponentials in $L^2(\mu)$ for any singular Borel probability measure $\mu$ are such an effective sequence and that this effectivity allows us to define a Fourier series representation of any function in $L^2(\mu)$. This recovers a result of Poltoratskiĭ [@Pol93] concerning the normalized Cauchy transform. A sequence $\{f_n\}_{n=0}^{\infty}$ in a Hilbert space $\mathbb{H}$ is said to be *Bessel* if there exists a constant $B>0$ such that for any $x \in \mathbb{H}$, $$\label{besselcond} \sum_{n=0}^{\infty}\lvert\langle x,f_n\rangle\rvert^2\leq B\lVert x\rVert^2.$$ This is equivalent to the existence of a constant $D>0$ such that $$\left\lVert\sum_{n=0}^{K}c_nf_n\right\rVert\leq D\sqrt{\sum_{n=0}^{K}\lvert c_n\rvert^2}$$ for any finite sequence $\{c_0,c_1,\ldots,c_K\}$ of complex numbers. The sequence is called a *frame* if in addition there exists a constant $A>0$ such that for any $x\in\mathbb{H}$, $$\label{framecond}A\lVert x\rVert^2\leq\sum_{n=0}^{\infty}\lvert\langle x,f_n\rangle\rvert^2\leq B\lVert x\rVert^2.$$ If $A=B$, then the frame is said to be *tight*. If $A=B=1$, then $\{f_n\}_{n=0}^{\infty}$ is a *Parseval frame*. The constant $A$ is called the *lower frame bound* and the constant $B$ is called the *upper frame bound* or *Bessel bound*. 
The *Fourier-Stieltjes transform* of a finite Borel measure $\mu$ on $[0,1)$, denoted $\widehat{\mu}$, is defined by $$\widehat{\mu}(x):=\int_{0}^{1}e^{-2\pi ixy}\,d\mu(y).$$ Effective Sequences ------------------- Let $\{\varphi_n\}_{n=0}^{\infty}$ be a linearly dense sequence of unit vectors in a Hilbert space $\mathbb{H}$. Given any element $x\in\mathbb{H}$, we may define a sequence $\{x_n\}_{n=0}^{\infty}$ in the following manner: $$\begin{aligned} x_0&=\langle x,\varphi_0\rangle \varphi_0\\ x_n&=x_{n-1}+\langle x-x_{n-1},\varphi_n\rangle \varphi_n.\end{aligned}$$ If $\lim_{n\rightarrow\infty}\lVert x-x_n\rVert=0$ regardless of the choice of $x$, then the sequence $\{\varphi_n\}_{n=0}^{\infty}$ is said to be effective. The above formula is known as the Kaczmarz algorithm. In 1937, Stefan Kaczmarz [@Kacz37] proved the effectivity of linearly dense periodic sequences in the finite-dimensional case. In 2001, these results were extended to infinite-dimensional Banach spaces under certain conditions by Kwapień and Mycielski [@KwMy01]. These two also gave the following formula for the sequence $\{x_n\}_{n=0}^{\infty}$, which we state here for the Hilbert space setting: Define $$\begin{aligned} \begin{split}\label{gs}g_0&=\varphi_0\\ g_n&=\varphi_n-\sum_{i=0}^{n-1}\langle \varphi_n,\varphi_i\rangle g_i.\end{split}\end{aligned}$$ Then $$\label{xnsum}x_n=\sum_{i=0}^{n}\langle x,g_i\rangle \varphi_i.$$ As shown by [@KwMy01], and also more clearly for the Hilbert space setting by [@HalSzw05], we have $$\lVert x\rVert^2-\lim_{n\rightarrow\infty}\lVert x-x_n\rVert^2=\sum_{n=0}^{\infty}\lvert\langle x,g_n\rangle\rvert^2,$$ from which it follows that $\{\varphi_n\}_{n=0}^{\infty}$ is effective if and only if $$\label{gnframe}\sum_{n=0}^{\infty}\lvert\langle x,g_n\rangle\rvert^2=\lVert x\rVert^2.$$ That is to say, $\{\varphi_n\}_{n=0}^{\infty}$ is effective if and only if the associated sequence $\{g_n\}_{n=0}^{\infty}$ is a Parseval frame. 
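The Kaczmarz update is straightforward to run numerically. The sketch below (an illustration with made-up vectors, not code from the paper) cycles periodically through a linearly dense pair of unit vectors in $\mathbb{R}^2$, the finite-dimensional situation covered by Kaczmarz's original 1937 result, and the iterates $x_n$ converge to the target $x$:

```python
import numpy as np

def kaczmarz(x, phis, sweeps=200):
    """Kaczmarz iteration x_n = x_{n-1} + <x - x_{n-1}, phi_n> phi_n,
    cycling periodically through the finite list of unit vectors `phis`."""
    xn = np.zeros_like(x, dtype=float)
    for _ in range(sweeps):
        for phi in phis:
            xn = xn + np.dot(x - xn, phi) * phi
    return xn

# A linearly dense (spanning) periodic sequence of unit vectors in R^2
phis = [np.array([1.0, 0.0]), np.array([1.0, 1.0]) / np.sqrt(2.0)]
x = np.array([3.0, -2.0])
approx = kaczmarz(x, phis)
```

Each step projects the error $x - x_n$ onto the orthogonal complement of the current $\varphi_n$, so for any spanning periodic sequence in finite dimensions the error shrinks geometrically, which is the "effectivity" discussed above.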
If $\{\varphi_n\}_{n=0}^{\infty}$ is effective, then $\eqref{xnsum}$ implies that for any $x\in \mathbb{H}$, $\sum_{i=0}^{\infty}\langle x,g_i\rangle \varphi_i$ converges to $x$ in norm, and as noted $\{g_n\}_{n=0}^{\infty}$ is a Parseval frame. This does not mean that $\{g_n\}_{n=0}^{\infty}$ and $\{\varphi_n\}_{n=0}^{\infty}$ are dual frames, since $\{\varphi_n\}_{n=0}^{\infty}$ need not even be a frame. However, $\{\varphi_n\}_{n=0}^{\infty}$ and $\{g_n\}_{n=0}^{\infty}$ are pseudo-dual in the following sense, first given by Li and Ogawa in [@LiOg01]: \[pseudodef\] Let $\mathcal{H}$ be a separable Hilbert space. Two sequences $\{\varphi_n\}$ and $\{\varphi_n^\star\}$ in $\mathcal{H}$ form a pair of *pseudoframes* for $\mathcal{H}$ if for all $x,y\in\mathcal{H}$, $\displaystyle\langle x,y\rangle=\sum_{n}\langle x,\varphi_n^\star\rangle\langle \varphi_n,y\rangle$. All frames are pseudoframes, but not the converse. Observe that if $x,y\in\mathbb{H}$ and $\{\varphi_n\}_{n=0}^{\infty}$ is effective, then $$\begin{aligned} \langle x,y\rangle&=\left\langle \sum_{
--- abstract: | The scientific output 1994-2014 of the University Centre in Svalbard (UNIS) was bibliometrically analysed. It was found that the majority of the papers have been published as international cooperations and rank above world average. Analysis of the papers’ content reveals that UNIS works and publishes in a wide variety of scientific topics.\ **Keywords**: Svalbard, Bibliometry, Pudovkin-Garfield Percentile Rank Index, Content Analysis. author: - 'Johannes Stegmann[^1]' title: 'Research at UNIS - The University Centre in Svalbard. A bibliometric study' --- Introduction ============ The sensitivity of the Arctic to climate changes and the heavy impact of such transformations on other world regions (Post et al., 2009), as well as its presumed richness in oil, gas and other mineral deposits, have moved the Arctic into the focus of intensive scientific, economic, political and public attention (Humrich, 2013).\ The University Centre in Svalbard (UNIS) was established in 1993 as an “Arctic extension” of Norway’s universities (UNIS, 2009 a). UNIS is to “represent and secure Norwegian polar interests” (UNIS, 2009 a). UNIS’s mission is also to offer an international research platform for all kinds of basic Arctic research (UNIS, 2009 b).\ It seems to be of interest to analyse UNIS’ scientific activities from a bibliometric point of view. This communication tries to answer the following questions: (i) What is produced by UNIS in terms of scientific papers? (ii) To what extent is UNIS’ propagated internationality realised in terms of international coauthorships? (iii) What is the standing of UNIS’ publications in terms of appropriate international standards? (iv) What is the content of UNIS-authored papers in terms of subfields and subject topics? 
Methods ======= Papers published since 1994 by UNIS were retrieved and downloaded from the Web of Science (WoS) on January 19, 2014, using an appropriate address search profile.\ For the analysis of UNIS’ paper output and its distribution to different document types, all retrieved records were used. For the analysis of UNIS’ research, papers that were not research articles (i.e. not of document type “ARTICLE”) were excluded.\ The citation performance of UNIS’ papers was measured applying the Percentile Rank Index (PRI) developed by Pudovkin and Garfield (Pudovkin and Garfield, 2009). I call this version of a percentile rank index PG-PRI but use in this paper “PG-PRI” and “PRI” synonymously because no other PRI methods are involved here.\ Prior to calculating the PG-PRI of a paper, the citation rank of this paper among its “paper peers”, i.e. all papers published in the same source journal in the same year, must be determined. Because most papers need some time to gather citations, it makes no sense to include very recent papers in a PRI analysis. In this study, only research papers (document type “ARTICLE”) of UNIS published before 2013 (i.e. published in the years 1994-2012) were included (723 papers). For each of these 723 research articles its publication year and publishing journal was determined, and all papers (document type “ARTICLE” only) of the corresponding journal-year pair were retrieved and downloaded. In total, the papers of 514 journal-year pairs were retrieved and downloaded between the 6^th^ and 10^th^ February 2014. Then, the papers of each journal-year set were ranked top-down according to citations received. In case of ties (several papers having the same citation frequency), each of the tied values was assigned the average of the ranks for the tied set (Pudovkin and Garfield, 2009; Pudovkin et al., 2012). The position of each of the UNIS papers in the corresponding paper set was determined. 
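The ranking-with-tie-averaging step just described, combined with Pudovkin and Garfield's index $PRI=(N-R+1)/N\times 100$, can be sketched in a few lines of Python (function names are mine, not from the study):

```python
def tied_ranks(citations):
    """Top-down citation ranks (1 = most cited); tied papers all receive
    the average of the 1-based ranks that the tie occupies."""
    order = sorted(range(len(citations)), key=lambda i: -citations[i])
    ranks = [0.0] * len(citations)
    pos = 0
    while pos < len(order):
        end = pos
        while end + 1 < len(order) and citations[order[end + 1]] == citations[order[pos]]:
            end += 1
        avg = (pos + 1 + end + 1) / 2.0  # average of the tied 1-based ranks
        for j in range(pos, end + 1):
            ranks[order[j]] = avg
        pos = end + 1
    return ranks

def pg_pri(citations):
    """PG-PRI = (N - R + 1)/N * 100 for every paper in a journal-year set."""
    n = len(citations)
    return [(n - r + 1) / n * 100.0 for r in tied_ranks(citations)]
```

For example, in a four-paper year set with citation counts 10, 5, 5, and 2, the two tied papers each get rank 2.5 and PRI 62.5, while the most-cited paper gets PRI 100 and the least-cited PRI 25.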
PG-PRI values were calculated according to the formula $$PRI = \frac{N-R+1}{N}*100$$ where N is the number of papers in the year set of the journal and R is the citation rank of the paper (Pudovkin and Garfield, 2009). R=1 is the top rank (most cited paper) with PRI=100 (Pudovkin and Garfield, 2009).\ For determination of the global (expected) average PRI, the Svalbard papers were ordered according to the number of papers published in the corresponding journal-year set. The average PRI was calculated according to the formula $$PRI_{globav} = 50 + \frac{50}{N}.$$ where N is the number of papers published in the journal-year pair at the median position of the ordered set (Pudovkin et al., 2012). In the present study, the median N was found to be 150; therefore, $$PRI_{globav} = 50.33$$\ For cluster analysis of keywords, the co-word analysis technique described by Callon et al. (1991) was applied. A detailed description of the algorithm can be found in Stegmann and Grohmann (2003).\ Extraction of record field contents, clustering, data analysis and visualisation were done using homemade programs and scripts for perl (version 5.14.2) and the software package R version 2.14.1 (R Core Team, 2013). All operations were performed on a commercial PC running under Ubuntu version 12.04 LTS. Results and Discussion ====================== Output (papers) --------------- Since 1994 UNIS published 875 papers, more than 85% of them being research papers (Figure 1). In UNIS’ early years only a few papers were published, but the annual publication numbers gradually increased to 94 in 2012. In 2013 only 73 UNIS papers were retrieved, but probably not all papers with 2013 as publication year had yet been recorded in the WoS database at the time of retrieval (January 2014). The number of UNIS papers retrieved from the WoS database is in good agreement with the corresponding numbers derived from UNIS’ annual reports 2009 to 2012 (UNIS 2009 b, 2010, 2011, 2012). 
For the subsequent analysis, only Svalbard’s research papers of document type “ARTICLE” were included. These amount to 748 papers for the whole time span (1994 - January 2014). These papers have 2331 distinct authors; the most prolific author is (co)author of 66 papers. The average number of authors per paper is 3.1; the most frequent class is that with 4 authors per paper. Only 4.1% (31 papers) of the research papers are single-authored (not shown). The paper with the highest number of authors (376) is the yearly published [*State of the Climate*]{} report, a special supplement to the [*Bulletin of the American Meteorological Society*]{} (Blunden and Arndt, 2012).\ 67% of UNIS’ research papers are international papers, jointly authored by at least one author from UNIS (i.e. from Norway) and at least one author from another country. In total, 56 different countries (including Norway) are involved in UNIS’ research papers. Table 1 shows the top 15 cooperating countries. Among them are the other circumpolar countries (besides Norway): Canada, Denmark (due to its autonomous region Greenland), Iceland, Russia, and the USA.\ UNIS has published its papers in more than 200 journals; the top ten are displayed in Table 2 (see also Table 3 and the next section). Benchmarking (PG-PRI) --------------------- For the analysis of the international standing of UNIS’ research, the percentile rank indexing method of Pudovkin and Garfield (2009) was applied (PG-PRI, see Methods). Figure 2 displays the PG-PRI value of each of the 723 research articles of UNIS. Table 4 lists some PRI ranges. The average PRI value of UNIS’ 723 research articles published 1994-2012 is 53.9, well above the expected (global) mean of 50.33 (see Methods). In addition, more than half (392 = 54.2%) of the UNIS papers have PG-PRI values above the global mean (Figure 2, Table 4). 
The PG-PRI has the inherent capability for international comparison of an author’s/institute’s papers because it compares the citation performance of the research papers in question with their “direct peers”, i.e. papers of the same type published in the same journals in the same time span (Pudovkin and Garfield, 2009; Pudovkin et al., 2012). From the data in Figure 2 and Table 4 it is concluded that UNIS’ research performs well above the average of comparable world research. This conclusion is supported by the high impact factor ranks of the top ten journals with UNIS papers within their JCR categories (see Table 3). Content (categories, keywords) ------------------------------ Rough indicators of the scientific (sub)fields to which papers contribute are the WoS categories to which the publishing journals are assigned. UNIS contributes to 52 WoS categories. The top 15 categories to which UNIS research papers (i.e. the publishing journals) have been assigned are shown in Table 5. Earth, marine, and environmental sciences play a major role, but space and evolutionary sciences are also important.\ Deeper insights into UNIS’ research areas may be achieved by an analysis of the keywords assigned to the articles. The keywords were extracted from the record fields DE (author keywords) and ID (keyword plus). 3999 distinct keywords were extracted and - in a first
[ **Subderivative-subdifferential duality formula**]{} -------------------------------------------------------------------- Marc Lassonde Université des Antilles, BP 150, 97159 Pointe à Pitre, France; and LIMOS, Université Blaise Pascal, 63000 Clermont-Ferrand, France E-mail: marc.lassonde@gmail.com -------------------------------------------------------------------- **Abstract.** We provide a formula linking the radial subderivative to other subderivatives and subdifferentials for arbitrary extended real-valued lower semicontinuous functions. **Keywords:** lower semicontinuity, radial subderivative, Dini subderivative, subdifferential. **2010 Mathematics Subject Classification:** 49J52, 49K27, 26D10, 26B25. Introduction {#intro} ============ Tyrrell Rockafellar and Roger Wets [@RW98 p. 298] discussing the duality between subderivatives and subdifferentials write > In the presence of regularity, the subgradients and subderivatives of a function $f$ are completely dual to each other. \[…\] For functions $f$ that aren’t subdifferentially regular, subderivatives and subgradients can have distinct and independent roles, and some of the duality must be relinquished. Jean-Paul Penot [@Pen13 p. 263], in the introduction to the chapter dealing with elementary and viscosity subdifferentials, writes > In the present framework, in contrast to the convex objects, the passages from directional derivatives (and tangent cones) to subdifferentials (and normal cones, respectively) are one-way routes, because the first notions are nonconvex, while a dual object exhibits convexity properties. In the chapter concerning Clarke subdifferentials [@Pen13 p. 357], he notes > In fact, in this theory, a complete primal-dual picture is available: besides a normal cone concept, one has a notion of tangent cone to a set, and besides a subdifferential for a function one has a notion of directional derivative. Moreover, inherent convexity properties ensure a full duality between these notions. 
\[…\]. These facts represent great theoretical and practical advantages. In this paper, we consider arbitrary extended real-valued lower semicontinuous functions and arbitrary subdifferentials. In spite of the above quotes, we show that there is always a duality formula linking the subderivatives and subdifferentials of such functions. Moreover, we show that at points where the (lower semicontinuous) function satisfies a mild regularity property (called radial accessibility), the upper radial subderivative is always a lower bound for the expressions in the duality formula. This lower bound is an equality in particular for convex functions, but also for various other classes of functions. For such functions, the radial subderivative can therefore be recovered from the subdifferential, and consequently the function itself, up to a constant, can be recovered from the subdifferential. This issue is discussed elsewhere. Subderivatives ============== In the sequel, $X$ is a real Banach space with unit ball $B_X$, $X^*$ is its topological dual, and ${\langle}.,. {\rangle}$ is the duality pairing. For $x, y \in X$, we let $[x,y]:=\{ x+t(y-x) {:}t\in[0,1]\}$; the sets $]x,y[$ and $[x,y[$ are defined accordingly. Set-valued operators $T:X\rightrightarrows X^*$ are identified with their graph $T\subset X\times X^*$. For a subset $A\subset X$, $x\in X$ and ${\lambda}>0$, we let $d_A(x):=\inf_{y\in A} \|x-y\|$ and $B_{\lambda}(A):=\{ y\in X{:}d_A(y)\le {\lambda}\}$. All extended-real-valued functions $f : X\to{{]}{-\infty},+\infty]}$ are assumed to be lower semicontinuous (lsc) and *proper*, which means that the set ${{\rm dom} \kern.15em}f:=\{x\in X{:}f(x)<\infty\}$ is non-empty. 
For a lsc function $f:X\to{{]}{-\infty},+\infty]}$, a point ${\bar{x}}\in{{\rm dom} \kern.15em}f$ and a direction $u\in X$, we consider the following basic subderivatives (we essentially follow the terminology of Penot’s textbook [@Pen13]): - the (lower right Dini) *radial subderivative*: $$\label{Dinisub} f^r({\bar{x}};u):=\liminf_{t\searrow 0}\,\frac{f({\bar{x}}+tu)-f({\bar{x}})}{t},$$ its upper version: $$\label{Dinisubplus} f^r_+({\bar{x}};u):=\limsup_{t\searrow 0}\,\frac{f({\bar{x}}+tu)-f({\bar{x}})}{t},$$ and its upper strict version (the *Clarke subderivative*): $$\label{Clarkesub} f^0({\bar{x}};u):= \limsup_{t \searrow 0 \atop{(x,f(x)) \to ({\bar{x}},f({\bar{x}}))}}\frac{f(x+tu) -f(x)}{t};$$ - the (lower right Dini-Hadamard) *directional subderivative*: $$\label{Hsubderiv} f^d({\bar{x}};u):= \liminf_{t \searrow 0 \atop{u' \to u}}\frac{f({\bar{x}}+tu')-f({\bar{x}})}{t},$$ and its upper strict version (the Clarke-Rockafellar subderivative): $$\label{Csubderiv} f^\uparrow({\bar{x}};u):= \sup_{\delta>0} \limsup_{t \searrow 0 \atop{(x,f(x)) \to ({\bar{x}},f({\bar{x}}))}} \inf_{u' \in B_{\delta}(u)}\frac{f(x+tu') -f(x)}{t}.$$ It is immediate from these definitions that the following inequalities hold ($\rightarrow$ means $\le$): $$\begin{aligned} f^r({\bar{x}};u) & \rightarrow f^r_+({\bar{x}};u)\rightarrow f^0({\bar{x}};u)\\ \uparrow \quad & \qquad\qquad\qquad\quad\uparrow\\ f^d({\bar{x}};u) & \qquad\longrightarrow \quad\quad f^\uparrow({\bar{x}};u)\end{aligned}$$ It is well known (and easily seen) that for a function $f$ locally Lipschitz at ${\bar{x}}$, we have $f^r({\bar{x}};u)=f^d({\bar{x}};u)$ and $f^0({\bar{x}};u)=f^\uparrow({\bar{x}};u)$, whereas for a lsc convex $f$, we have $f^d({\bar{x}};u)=f^\uparrow({\bar{x}};u)$. A function $f$ satisfying such an equality is called *regular*. However, in general, $f^d({\bar{x}};u)<f^\uparrow({\bar{x}};u)$, and there are many other types of subderivatives $f'$ which lie between $f^d$ and $f^\uparrow$. 
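To see that the strict upper versions can genuinely differ from the lower subderivatives, here is a standard one-dimensional example (my own illustration, not taken from the text): for $f(x)=-|x|$ on $X=\mathbb{R}$ at ${\bar{x}}=0$,

```latex
% f(x) = -|x| is Lipschitz, so f^r(0;u) = f^d(0;u) and f^0(0;u) = f^\uparrow(0;u).
f^r(0;u) = \liminf_{t\searrow 0} \frac{-|tu|}{t} = -|u|,
\qquad
f^0(0;u) = \limsup_{\substack{x\to 0\\ t\searrow 0}} \frac{|x|-|x+tu|}{t} = |u|.
% The bound |x| - |x+tu| <= |tu| (reverse triangle inequality) is attained along x = -tu.
```

Thus the horizontal inequalities in the diagram above can be strict, with a gap of $2|u|$ between $f^r$ and $f^0$ in this example.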
The inequality stated in the theorem below is (much) less elementary. It is the analytic form of Treiman’s theorem [@Tre83] on the inclusion of the lower limit of Bouligand contingent cones at neighbouring points of ${\bar{x}}$ into the Clarke tangent cone at ${\bar{x}}$ in the context of a Banach space (in finite dimensional spaces, equality holds between these objects, as was shown earlier by Cornet [@Cor81] and Penot [@Pen81]). A proof of this inequality (or equality in finite dimensional spaces) based on this geometrical approach was given by Ioffe [@Iof84] (see also Rockafellar-Wets [@RW98 Theorem 8.18]). For a proof (in the general context of Banach spaces) using a multidirectional mean value inequality rather than the above geometric approach, see Correa-Gajardo-Thibault [@CGT09]. \[Treiman\] Let $X$ be a Banach space, $f:X\to{{]}{-\infty},+\infty]}$ be lsc, ${\bar{x}}\in{{\rm dom} \kern.15em}f$ and $u\in X$. Then: [ $$f^\uparrow({\bar{x}};u)\le \sup_{{\varepsilon}>0}\limsup_{x\to{\bar{x}}} \inf_{u'\in B(u,{\varepsilon})}f^d(x;u').$$ ]{} Subdifferentials ================ Given a lsc function $f:X\to{{]}{-\infty},+\infty]}$ and a point ${\bar{x}}\in{{\rm dom} \kern.15em}f$, we consider the following two basic subsets of the dual space $X
--- abstract: | In this paper, the temporal evolution of 3-dimensional relativistic current sheets in Poynting-dominated plasma is studied for the first time. Over the past few decades, much effort has been devoted to studying the evolution of current sheets in 2-dimensional space, concluding that sufficiently long current sheets always evolve into the so-called “plasmoid-chain”, which provides a fast reconnection rate independent of the resistivity. However, it is suspected that the plasmoid-chain can exist only in the 2-dimensional approximation, and would undergo a transition to turbulence in 3-dimensional space. We performed a 3-dimensional numerical simulation of a relativistic current sheet using the resistive relativistic magnetohydrodynamic approximation. The results showed that the 3-dimensional current sheet evolves not into a plasmoid-chain but into turbulence. The resulting reconnection rate is $0.004$, which is much smaller than that of the plasmoid-chain. The energy conversion from magnetic field to kinetic energy of turbulence is just 0.01%, much smaller than in typical non-relativistic cases. Using the energy principle, we also showed that a plasmoid is always unstable to a displacement in the direction opposite to its acceleration, probably an interchange-type instability, and this always results in seeds of turbulence behind the plasmoids. Finally, the temperature distribution along the sheet is discussed, and it is found that the sheet is less active than a plasmoid-chain. Our findings can be applied to many high energy astrophysical phenomena, and provide a basic model of the general current sheet in Poynting-dominated plasma. author: - | M. Takamoto,$^{1}$[^1]\ $^{1}$Department of Earth and Planetary Science, University of Tokyo, Tokyo 113-0033, Japan\ title: 'Evolution of 3-dimensional Relativistic Current Sheets and Development of Self-Generated Turbulence' --- \[firstpage\] Turbulence — MHD — plasmas — methods:numerical. 
Introduction {#sec:sec1} ============ Recently, the development of many high energy astronomical observation devices has allowed us to detect many flare phenomena from various high energy astrophysical objects, such as the Crab pulsar wind [@2012ApJ...749...26B; @2013MNRAS.436L..20B; @2014RPPh...77f6901B; @2015PPCF...57a4034P; @2015MNRAS.454.2972T] and blazars [@2012ApJ...754..114H; @2015ApJ...808L..18A]. Relativistic magnetic reconnection is considered to be a good candidate for those phenomena. This is because magnetic reconnection efficiently converts the magnetic field energy into plasma kinetic, thermal, photon, and non-thermal particle energy. In addition, it is known that sufficiently long current sheets in 2-dimensional space always evolve into the so-called “plasmoid-chain”, in which the current sheets are filled with many plasmoids generated by the secondary tearing instability. The plasmoids experience many collisions with neighboring ones, and it is expected that the energy released by such collisions can be responsible for the flare phenomena observed in high energy astrophysics. Research on magnetic reconnection in relativistic plasma, in particular in Poynting-dominated plasma, has been conducted vigorously over the past decades. After several initial analytic works [@1994PhRvL..72..494B; @2003MNRAS.346..540L; @2003ApJ...589..893L; @2005MNRAS.358..113L], numerical simulation became the main method for studying relativistic magnetic reconnection due to its strong non-linear effects. Using the relativistic magnetohydrodynamic (RMHD) approximation, @2006ApJ...647L.123W [@2011ApJ...739L..53T] studied the initial phase of the tearing instability in current sheets with low Lundquist number, $S \equiv 4 \pi L_{\rm sheet} c_{\rm A}/\eta \lesssim 10^3$, where $L_{\rm sheet}$ is the sheet length, $c_{\rm A}$ is the Alfvén velocity, and $\eta$ is the resistivity. 
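To make the Lundquist-number definition concrete, here is a tiny Python helper (my own illustration, not from the paper) that inverts it for the resistivity, $\eta = 4\pi L_{\rm sheet} c_{\rm A} / S$:

```python
import math

def resistivity(S, L_sheet=1.0, c_A=1.0):
    """Resistivity implied by the Lundquist number S = 4*pi*L_sheet*c_A/eta.

    L_sheet and c_A default to 1 (code units), so eta comes out in the
    same dimensionless units commonly used in simulation setups.
    """
    return 4.0 * math.pi * L_sheet * c_A / S

# The S ~ 10^3 regime quoted above corresponds, in code units, to
eta_low_S = resistivity(1e3)   # roughly 1.3e-2
```

Larger Lundquist numbers thus correspond to smaller resistivities at fixed sheet length and Alfvén speed.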
They observed the strong compression in the downstream region predicted by @1994PhRvL..72..494B [@2003ApJ...589..893L; @2005MNRAS.358..113L], though the observed reconnection rate is very similar to the non-relativistic case predicted by @2003ApJ...589..893L [@2005MNRAS.358..113L]. However, it has been shown that relativistic magnetic reconnection results in a faster reconnection rate than the non-relativistic case in Poynting-energy dominated plasma if a much larger Lundquist number, $S > 10^4$, is considered and the sheet evolves into a “plasmoid-chain” [@2013ApJ...775...50T]. On the other hand, there are also several works taking plasma effects into account, such as the two-fluid approximation and fully collisionless plasma. [@2009ApJ...696.1385Z; @2009ApJ...705..907Z] performed numerical simulations of relativistic magnetic reconnection assuming the relativistic two-fluid approximation, and observed an enhancement of the reconnection rate as the electromagnetic energy increases. A similar effect was later observed in a resistive RMHD simulation of Petschek reconnection [@2010ApJ...716L.214Z]. Collisionless reconnection studies using the Particle-in-cell (PIC) method have also been performed in this decade [@2014ApJ...783L..21S; @2015PhRvL.114i5002L; @2015ApJ...806..167G]. In addition to the increase of the reconnection rate with electromagnetic field energy, they found that relativistic magnetic reconnection in Poynting-energy dominated plasma is a very efficient particle accelerator, and can be a good candidate for flare events of high energy astrophysical phenomena. In spite of the above very active research, there are still only a few works on magnetic reconnection in 3-dimensional space. It is considered that a current sheet on 3-dimensional scales will evolve not into a plasmoid-chain but into turbulence because of the many instabilities in the current sheet which can break the symmetry assumed in 2-dimensional work. In this case, it is known that turbulence enhances the magnetic reconnection rate. 
One reason for this is that the turbulent motion of the magnetic field increases the dissipation around the reconnection point [@2013PhRvL.110y5001H]; in addition, and more importantly, it was shown that turbulent eddy motion drives diffusion of the magnetic field line separation, resulting in a broader exhaust region and a faster reconnection rate [@1999ApJ...517..700L; @2009ApJ...700...63K; @2015ApJ...815...16T]. The above work assumed externally driven turbulence, and it depends on each phenomenon whether there is sufficiently strong turbulence in those environments. However, current sheets host various kinds of instability, and it is expected that such instabilities will evolve into turbulence. Hence, the recent main research interest is to find the detailed mechanisms of self-generated turbulence in the sheet, and the resulting reconnection rate. In the non-relativistic case, there are a few works on self-generated turbulence in current sheets [@2015ApJ...806L..12O; @2016ApJ...818...20H; @2017ApJ...838...91K]. They reported that 3-dimensional self-generated turbulent current sheets show a smaller reconnection rate, $\sim 0.005 - 0.01$, and a conversion of 1 to 5 % of the magnetic field energy into kinetic energy of the turbulence. In this paper, we report the first study of the temporal evolution of relativistic 3-dimensional current sheets in Poynting-dominated plasma using a new Godunov-type scheme for resistive relativistic magnetohydrodynamic simulation. Since recent studies showed that turbulence in Poynting-energy dominated plasma has very different properties from non-relativistic turbulence [@2015ApJ...815...16T; @2016ApJ...831L..11T; @2017MNRAS.472.4542T], we expect that such effects may modify the behavior of self-generated turbulence in relativistic sheets. In Section 2 we introduce the numerical setup. The numerical result is presented in Section 3, and its theoretical discussion is given in Section 4. Section 5 summarizes our conclusions. 
Numerical Setup {#sec:sec2} =============== In this paper, the evolution of a very long current sheet is modeled using the relativistic resistive magnetohydrodynamic approximation. We use a newly developed resistive relativistic magnetohydrodynamics (RRMHD) scheme explained in the Appendix, which is an extension of our previous work [@2011ApJ...735..113T], and allows us to obtain a full-Godunov solver for RRMHD for the first time. We calculate the RRMHD equations in a conservative fashion, and the mass density, momentum, and energy are conserved within machine round-off error. We use the constrained transport algorithm [@1988ApJ...332..659E] to preserve the divergence-free constraint on the magnetic field. The multi-dimensional extension is achieved using the unsplit method [@2005JCoPh.205..509G; @2008JCoPh.227.4123G]. For the equation of state, a relativistic ideal gas with $h = 1 + (\Gamma / (\Gamma - 1))(p / \rho)$, $\Gamma = 4 / 3$ is assumed, where $\rho$ is the rest mass density and $p$ is the gas pressure in the plasma rest frame. The resistivity $\eta$ is determined from the Lundquist number: $S \equiv 4 \pi L_{\rm sheet} c_{\rm A}/ \eta = 2.912 \times 10^5$. For our numerical calculations, we prepare a
--- abstract: | [We show that every $p$-fold strictly-cyclic branched covering of a $b$-bridge link in $\S^3$ admits a $p$-symmetric Heegaard splitting of genus $g=(b-1)(p-1)$. This gives a complete converse to a result of Birman and Hilden, and gives an intrinsic characterization of $p$-symmetric Heegaard splittings as $p$-fold strictly-cyclic branched coverings of links.\ \ [*Mathematics Subject Classification 2000:*]{} Primary 57M12, 57R65; Secondary 20F05, 57M05, 57M25.\ [*Keywords:*]{} 3-manifolds, Heegaard splittings, cyclic branched coverings, links, plats, bridge number, braid number.]{} author: - Michele Mulazzani title: 'An intrinsic characterization of $p$-symmetric Heegaard splittings [^1]' --- Introduction ============ The concept of $p$-symmetric Heegaard splittings was introduced by Birman and Hilden (see [@BH]) in an extrinsic way, depending on a particular embedding of the handlebodies of the splitting in the ambient space $\E^3$. The definition of such particular splittings was motivated by the aim of proving that every closed, orientable 3-manifold of Heegaard genus $g\le 2$ is a 2-fold covering of $\S^3$ branched over a link of bridge number $g+1$ and that, conversely, the 2-fold covering of $\S^3$ branched over a link of bridge number $b\le 3$ is a closed, orientable 3-manifold of Heegaard genus $b-1$ (compare also [@Vi]). 
A genus $g$ Heegaard splitting $M=Y_g\cup_{\f}Y'_g$ is called [*$p$-symmetric*]{}, with $p>1$, if there exist a disjoint embedding of $Y_g$ and $Y'_g$ into $\E^3$ such that $Y'_g=\t(Y_g)$, for a translation $\t$ of $\E^3$, and an orientation-preserving homeomorphism $\P:\E^3\to\E^3$ of period $p$, such that $\P(Y_g)=Y_g$ and, if $\GG$ denotes the cyclic group of order $p$ generated by $\P$ and $\F:\partial Y_g\to\partial Y_g$ is the orientation-preserving homeomorphism $\F=\t^{-1}_{\vert\partial Y'_g}\f$, the following conditions are fulfilled: - $Y_g/\GG$ is homeomorphic to a 3-ball; - $\mbox{Fix}(\P_{\vert Y_g}^h)=\mbox{Fix}(\P_{\vert Y_g})$, for each $1\le h\le p-1$; - $\mbox{Fix}(\P_{\vert Y_g})/\GG$ is an unknotted set of arcs[^2] in the ball $Y_g/{\cal G}$; - there exists an integer $p_0$ such that $\F\P_{\vert\partial Y_g}\F^{-1}=(\P_{\vert\partial Y_g})^{p_0}$. [**Remark 1**]{} By the positive solution of the Smith Conjecture [@MB] it is easy to see that necessarily $p_0\equiv\pm 1$ mod $p$. The map $\P'=\t\P\t^{-1}$ is obviously an orientation-preserving homeomorphism of period $p$ of $\E^3$ with the same properties as $\P$, with respect to $Y'_g$, and the relation $\f\P_{\vert\partial Y_g}\f^{-1}=(\P'_{\vert\partial Y'_g})^{p_0}$ easily holds. The [*$p$-symmetric Heegaard genus*]{} $g_p(M)$ of a 3-manifold $M$ is the smallest integer $g$ such that $M$ admits a $p$-symmetric Heegaard splitting of genus $g$. The following results have been established in [@BH]: 1. Every closed, orientable 3-manifold of $p$-symmetric Heegaard genus $g$ admits a representation as a $p$-fold cyclic covering of $\S^3$, branched over a link which admits a $b$-bridge presentation, where $g=(b-1)(p-1)$. 2. The $p$-fold cyclic covering of $\S^3$ branched over a knot of braid number $b$ is a closed, orientable 3-manifold $M$ which admits a $p$-symmetric Heegaard splitting of genus $g=(b-1)(p-1)$. 
Note that statement 2 is not a complete converse of 1, since it only concerns knots and, moreover, $b$ denotes the braid number, which is greater than or equal to (often greater than) the bridge number. In this paper we fill this gap, giving a complete converse to statement 1. Since the coverings involved in 1 are strictly-cyclic (see next section for details on strictly-cyclic branched coverings of links), our statement will concern this kind of coverings. More precisely, we shall prove in Theorem \[Theorem 3\] that a $p$-fold strictly-cyclic covering of $\S^3$, branched over a link of bridge number $b$, is a closed, orientable 3-manifold $M$ which admits a $p$-symmetric Heegaard splitting of genus $g=(b-1)(p-1)$, and therefore has $p$-symmetric Heegaard genus $g_p(M)\le (b-1)(p-1)$. This result gives an intrinsic interpretation of $p$-symmetric Heegaard splittings as $p$-fold strictly-cyclic branched coverings of links. Main results ============ Let $\b=\{(p_k(t),t)\,\vert\, 1\le k\le 2n\,,\,t\in[0,1]\}\subset\E^2\times[0,1]$ be a geometric $2n$-string braid of $\E^3$ [@Bi], where $p_1,\ldots,p_{2n}:[0,1]\to\E^2$ are continuous maps such that $p_{k}(t)\neq p_{k'}(t)$, for every $k\neq k'$ and $t\in[0,1]$, and such that $\{p_1(0),\ldots,p_{2n}(0)\}=\{p_1(1),\ldots,p_{2n}(1)\}$. We set $P_k=p_k(0)$, for each $k=1,\ldots,2n$, and $A_i=(P_{2i-1},0),B_i=(P_{2i},0),A'_i=(P_{2i-1},1),B'_i=(P_{2i},1)$, for each $i=1,\ldots,n$ (see Figure 1). Moreover, we set $\FF=\{P_1,\ldots,P_{2n}\}$, $\FF_1=\{P_1,P_3\ldots,P_{2n-1}\}$ and $\FF_2=\{P_2,P_4,\ldots,P_{2n}\}$. The braid $\b$ is realized through an ambient isotopy ${\wh\b}:\E^2\times[0,1]\to\E^2\times[0,1]$, ${\wh\b}(x,t)=(\b_t(x),t)$, where $\b_t$ is an homeomorphism of $\E^2$ such that $\b_0=\mbox{Id}_{\E^2}$ and $\b_t(P_i)=p_i(t)$, for every $t\in[0,1]$. Therefore, the braid $\b$ naturally defines an orientation-preserving homeomorphism ${\wti\b}=\b_1:\E^2\to\E^2$, which fixes the set $\FF$. 
Note that $\b$ uniquely defines ${\wti\b}$, up to isotopy of $\E^2$ mod $\FF$. Connecting the point $A_i$ with $B_i$ by a circular arc $\a_i$ (called [*top arc*]{}) and the point $A'_i$ with $B'_i$ by a circular arc $\a'_i$ (called [*bottom arc*]{}), as in Figure 1, for each $i=1,\ldots,n$, we obtain a $2n$-plat presentation of a link $L$ in $\E^3$, or equivalently in $\S^3$. As is well known, every link admits plat presentations and, moreover, a $2n$-plat presentation corresponds to an $n$-bridge presentation of the link. So, the bridge number $b(L)$ of a link $L$ is the smallest positive integer $n$ such that $L$ admits a representation by a $2n$-plat. For further details on braid, plat and bridge presentations of links we refer to [@Bi]. ![A $2n$-plat presentation of a link.[]{data-label="Fig. 1"}](Figure1.eps) [**Remark 2**]{} A $2n$-plat presentation of
--- abstract: 'Motivated by the observation of several molecule candidates in the heavy quark sector, we discuss the possibility of a state with $J^{PC}=3^{-+}$. In a one-boson-exchange model investigation for the S wave $C=+$ $D^*\bar{D}_2^*$ states, one finds that the strongest attraction is in the case $J=3$ and $I=0$ for both $\pi$ and $\sigma$ exchanges. Numerical analysis indicates that this hadronic bound state may exist. If a state around the $D^*\bar{D}_2^*$ threshold ($\approx$4472 MeV) in the channel $J/\psi\omega$ (P wave) is observed, the heavy quark spin symmetry implies that it is not a $c\bar{c}$ meson and the $J^{PC}$ are very likely to be $3^{-+}$.' author: - 'W. Zhu, T. Yao, Yan-Rui Liu' title: 'Possibility of a $J^{PC}=3^{-+}$ state' --- Introduction {#sec1} ============ Mesons with exotic properties play an important role in understanding the nature of strong interactions. The observation of the so-called XYZ states in the heavy quark sector has triggered lots of discussions on their quark structures, decays, and formation mechanisms. It also motivates people to study new states beyond the quark model assignments. The X(3872), first observed in the $J/\psi\pi^+\pi^-$ invariant mass distribution by the Belle collaboration in 2003 [@X3872-belle], is the strangest heavy quark state. Even now, its angular momentum is not determined. Given its extreme closeness to the $D^0\bar{D}^{*0}$ threshold, many discussions of its properties are based on the molecule assumption. However, it is very difficult to identify the X(3872) as a shallow bound state of $D^0\bar{D}^{*0}$ since it shows no explicitly exotic molecule properties. A charged charmonium- or bottomonium-like meson labeled as $Z$ is absolutely exotic because its number of quarks and antiquarks must be four or more. 
Such states include the $Z(4430)$ observed in the $\psi'\pi^\pm$ mass distribution [@Z4430-belle], the $Z_1(4050)$ and $Z_2(4250)$ observed in the $\chi_{c1}\pi^+$ mass distribution [@Z1Z2-belle], and the $Z_b(10610)$ and $Z_b(10650)$ in the mass spectra of the $\Upsilon(nS)\pi^\pm$ ($n$=1,2,3) and $\pi^\pm h_b(mP)$ ($m$=1,2) [@Zb-belle]. They were all observed by the Belle Collaboration. Though BABAR has not confirmed them [@Z4430-babar; @Z1Z2-babar], the signal for the existence of multiquark states is still exciting. Since the $Z(4430)$ is around the $D^*D_1$ threshold, the $Z_b(10610)$ is around the $BB^*$ threshold, and the $Z_b(10650)$ is around the $B^*B^*$ threshold, molecular models seem to be applicable to investigations of their structure [@LLDZ08-4430mole; @DingHLY09; @SunHLLZ; @ZhangZH11; @OhkodaYYSH; @YangPDZ12; @LiWDZ13]. Identifying a state as a molecule is an important issue in hadron studies. One should consider not only the bound state problem of two hadrons, but also how to observe a molecular state in possible production processes. In Refs. [@Nstar-dyn; @Nstar-chiqm; @Nstar-obe; @Nstar-cc], bound states of $\Sigma_c\bar{D}$ and $\Sigma_c\bar{D}^*$ were studied. Since their quantum numbers are the same as the nucleon's but their masses are much higher, identifying them as multiquark baryons is rather straightforward. To obtain a deeper understanding of the strong interaction, it is necessary to explore possible molecules with explicitly exotic quantum numbers. The quark model constrains the quantum numbers of a meson: a meson with $J^{PC}=0^{--}$, $0^{+-}$, $1^{-+}$, $2^{+-}$, $3^{-+}$, $\cdots$ cannot be a $q\bar{q}$ state, but it may be a multiquark state. So the study of such states may deepen our understanding of nature. If two $q\bar{q}$ mesons can form a molecule with such quantum numbers, one gets the simplest configuration. The next simplest configuration is the baryon-antibaryon case. A possible place to search for them is around hadron-hadron thresholds. 
There are some discussions on low spin heavy quark exotic states in Refs. [@ShenCLHZYL10; @HuCLHZYL11]. Here we would like to discuss the possibility of a higher spin state, $J^{PC}=3^{-+}$. One will see that identification of it from strong decay is possible. First, we check meson-antimeson systems that can form $3^{-+}$ states, where meson (antimeson) means that its quark structure is $c\bar{q}$ ($\bar{c}q$). The established mesons may be found in the Particle Data Book [@PDG]. One checks various combinations and finds that the lowest S-wave system is $D^*\bar{D}_2^*$. The next S-wave one is $D_s^*D_{s2}^*$. Between these two thresholds, one needs $D$ or $G$ wave to combine other meson-antimeson pairs (see Fig. \[th3-\]). Below the threshold of $D^*\bar{D}_2^*$, the orbital angular momentum is $D$, $F$, or $G$-wave. Above the $D_s^*D_{s2}^*$ threshold, a partial wave of $P$, $F$, or $H$ is needed. Since the difference between these two thresholds is more than 200 MeV, one may neglect the channel coupling and choose the $D^*D_2^*$ system to study. ![Thresholds of $J^{PC}=3^{-+}$ meson-antimeson systems between that of $D^*D_2^*$ and that of $D_s^*D_{s2}^*$. $S$, $D$, $G$, and $I$ are orbital angular momenta.[]{data-label="th3-"}](th3-) Secondly, we check baryon-antibaryon systems. If one combines the established $cqq$ baryons and their antibaryons, one finds that the lowest S-wave threshold is for $\Lambda_c(2880)$ and $\bar{\Lambda}_c$ ($\approx5168$ MeV). Even for the lowest threshold of $\Lambda_c(2595)$ and $\bar{\Lambda}_c$ in F-wave, the value ($\approx4879$ MeV) is still higher than that of $D_s^*D_{s2}^*$. Thus, we may safely ignore the possible baryon-antibaryon contributions in this study. In a $3^{-+}$ $D^*\bar{D}_2^*$ state, partial waves of $S$, $D$, $G$, and $I$ may all contribute. As a first step exploration, we consider only the dominant S-wave interactions. Possible coupled channel effects will be deferred to future works. 
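As a rough numerical cross-check of the quoted threshold (my own illustration; the masses below are approximate isospin-averaged PDG-style values, assumed for this sketch and not quoted from the paper), the S-wave $D^*\bar{D}_2^*$ threshold can be estimated as:

```python
# Approximate meson masses in MeV (assumed values for illustration only)
m_Dstar_neutral, m_Dstar_charged = 2006.9, 2010.3   # D*(2007)^0, D*(2010)^+
m_D2_neutral, m_D2_charged = 2460.7, 2465.4         # D2*(2460)^0, D2*(2460)^+

# Isospin-averaged masses and the resulting two-body threshold
m_Dstar = (m_Dstar_neutral + m_Dstar_charged) / 2
m_D2 = (m_D2_neutral + m_D2_charged) / 2
threshold = m_Dstar + m_D2   # close to the ~4472 MeV quoted in the abstract
```

With these assumed inputs the sum lands within about 1 MeV of the abstract's $\approx$4472 MeV figure.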
The present study is organized as follows. After the introduction in Sec. \[sec1\], we present the main ingredients for our study in Sec. \[sec2\]. Then we give the numerical results in Sec. \[sec3\]. The final part is for discussions and conclusions. Wavefunctions, amplitudes, and Lagrangian {#sec2} ========================================= We study the meson-antimeson bound state problem in a meson exchange model. The potential is derived from the scattering amplitudes [@LLDZ08-4430wave], and the flavor wave functions of the system are needed. Since the states we are discussing have a definite C-parity while the combination of $c\bar{q}$ and $\bar{c}q$ mesons does not, a relative sign problem arises between the two parts of a flavor wave function. One has to find the relation between the flavor wave function and the potential with definite C-parity. There are some discussions about this problem in the literature [@LLDZ08-4430wave; @LLDZ08-3872wave; @ThomasC08; @Stancu08-wave; @LiuZ09]. Here we revisit it by using the G-parity transformation rule which relates the amplitudes between $NN$ and $N\bar{N}$ [@KlemptBMR]. Since $D$ mesons do not have definite C-parity, we may assume arbitrary complex phases $\alpha$ and $\beta$ under the C-parity transformations $$\begin{aligned} &\bar{D}^{*0}\leftrightarrow \alpha_2 D^{*0},\qquad D^{*-}\leftrightarrow \alpha_1D^{*+},&\nonumber\\ &\bar{D}^{*0}_2\leftrightarrow\beta_2 D^{*0}_2,\qquad D^{*-}_2\leftrightarrow \beta_1D^{*+}_2.&\end{aligned}$$ According to the SU(2) transformation, one finds the
--- abstract: 'In textual information extraction and other sequence labeling tasks it is now common to use recurrent neural networks (such as LSTM) to form rich embedded representations of long-term input co-occurrence patterns. Representation of output co-occurrence patterns is typically limited to a hand-designed graphical model, such as a linear-chain CRF representing short-term Markov dependencies among successive labels. This paper presents a method that learns embedded representations of latent output structure in sequence data. Our model takes the form of a finite-state machine with a large number of latent states per label (a latent variable CRF), where the state-transition matrix is factorized—effectively forming an embedded representation of state-transitions capable of enforcing long-term label dependencies, while supporting exact Viterbi inference over output labels. We demonstrate accuracy improvements and interpretable latent structure in a synthetic but complex task based on CoNLL named entity recognition.' bibliography: - 'example\_paper.bib' --- Introduction {#sec:intro} ============ Neural networks have long been used for prediction tasks involving complex structured outputs [@lecun2006tutorial; @collobert11; @DBLP:journals/corr/LampleBSKD16]. In structured prediction, output variables obey local and global constraints that are difficult to satisfy using purely local feedforward prediction from an input representation. For example, in sequence tagging tasks such as named entity recognition, the outputs must obey several hard constraints, e.g., I-PER cannot follow B-ORG. The results of [@collobert11] show a significant improvement when such structural output constraints are enforced by incorporating a linear-chain graphical model that captures the interactions between adjacent output variables. 
The addition of a graphical model to enforce output consistency is now common practice in deep structured prediction models for tasks such as sequence tagging [@DBLP:journals/corr/LampleBSKD16] and image segmentation [@chen2014semantic]. From a probabilistic perspective, the potentials of a probabilistic graphical model over the output variables $y$ are often parameterized using a deep neural network that learns global features of the input $x$ [@lecun2006tutorial; @collobert11; @DBLP:journals/corr/LampleBSKD16]. This approach takes advantage of deep architectures to learn robust feature representations for $x$, but is limited to relatively simple pre-existing graphical model structures to model the interactions among $y$. This paper presents work in which feature learning is used not only to learn rich representations of inputs, but also to learn latent output structure. We present a model for sequence tagging that takes the form of a latent-variable conditional random field [@quattoni07; @sutton07; @morency07], where interactions in the latent state space are parametrized by low-rank embeddings. This low-rank structure allows us to use a larger number of latent states learning rich and interpretable substructures in the output space without overfitting. Additionally, unlike LSTMs, the model permits exact MAP and marginal inference via the Viterbi and forward-backward algorithms. Because the model learns large numbers of latent hidden states, interactions among $y$ are not limited to simple Markov dependencies among labels as in most deep learning approaches to sequence tagging. Previous work on representation learning for structured outputs has taken several forms. Output-embedding models such as [@srikumar2014learning] have focused on learning low-rank similarity among label vectors $y$, with no additional latent structure. 
The input-output HMM [@bengio95] incorporates learned latent variables, parameterized by a neural network, but the lack of low-rank structure limits the size of the latent space. Structured prediction energy networks [@belanger2016structured] use deep neural networks to learn global output representations, but do not allow for exact inference and are difficult to apply in cases when the number of outputs varies independently of the number of inputs, such as entity extraction systems. In this preliminary work, we demonstrate the utility of learning a large embedded latent output space on a synthetic task based on CoNLL named entity recognition (NER). We consider the task synthetic because we employ input features involving only single tokens, which allows us to better examine the effects of both learned latent output variables and low-rank embedding structure. (The use of NER data is preferable, however, to completely synthetically generated data because its real-world text naturally contains easily interpretable complex latent structure.) We demonstrate significant accuracy gains from low-rank embeddings of large numbers of latent variables in output space, and explore the interpretable latent structure learned by the model. These results show promise for future application of low-rank latent embeddings to sequence modeling tasks involving more complex long-term memory, such as citation extraction, résumés, and semantic role labeling. Related Work {#sec:related} ============ The ability of neural networks to efficiently represent local context features sometimes allows them to make surprisingly good independent decisions for each structured output variable [@collobert11]. However, these independent classifiers are often insufficient for structured prediction tasks where there are strong dependencies between the output labels [@collobert11; @DBLP:journals/corr/LampleBSKD16]. 
A natural solution is to use these neural feature representations to parameterize the factors of a conditional random field [@lafferty01] for joint inference over output variables [@collobert11; @jaderberg14; @DBLP:journals/corr/LampleBSKD16]. However, most previous work restricts the linear-chain CRF states to be the labels themselves—learning no additional output structure. The latent dynamic conditional random field (LDCRF) learns additional output structure beyond the labels by employing hidden states (latent variables) with Markov dependencies, each associated with a label; it has been applied to human gesture recognition [@morency07]. The dynamic conditional random field (DCRF) learns a factorized representation of each state [@sutton07]. The hidden-state conditional random field (HCRF) also employs a Markov sequence of latent variables, but the latent variables are used to predict a single label rather than a sequence of labels; it has been applied to phoneme recognition [@gunawardana05] and gesture recognition [@quattoni07]. All these models learn output representations while preserving the ability to perform exact joint inference by belief propagation. While the above use a log-linear parameterization of the potentials over latent variables, the input-output HMM [@bengio95] uses a separate neural network for each source state to produce transition probabilities to its destination states. Experiments in all of the above parameterizations use only a small hidden state space due to the large numbers of parameters required. In this paper we enable a large number of states by using a low-rank factorization of the transition potentials between latent states, effectively learning distributed embeddings for the states. This is superficially similar to the label embedding model of [@srikumar2014learning], but that work learns embeddings only to model similarity between observable output labels, and does not learn a latent output state structure. 
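The low-rank factorization of the transition potentials can be sketched in a few lines; the names and sizes below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

# Embed each of M latent states as a d-dimensional vector; the M x M
# transition score table is then an inner product of embeddings, so its
# rank is at most d and it costs 2*M*d parameters instead of M*M.
rng = np.random.default_rng(0)
M, d = 64, 8
E_src = rng.normal(size=(M, d))  # source-state embeddings
E_dst = rng.normal(size=(M, d))  # destination-state embeddings

psi_zz = E_src @ E_dst.T         # transition potentials psi_zz(z_t, z_{t+1})
```

With $M = 64$ and $d = 8$ this stores $1024$ numbers rather than $4096$, which is what makes a large latent state space affordable.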
Embedded Latent CRF Model {#sec:model} ========================= We consider the task of sequence labeling: given an input sequence $\textbf{x} = \{x_1, x_2, \ldots, x_T\}$, find the corresponding output labels $\textbf{y} = \{y_1, y_2, \ldots, y_T\}$ where each output $y_i$ is one of $N$ possible output labels. Each input $x_i$ is associated with a feature vector $f_i \in \mathbb{R}^n$, such as that produced by a feed-forward or recurrent neural network. The models we consider will associate each input sequence with a sequence of hidden states $\{z_1, z_2, \ldots, z_T\}$. These discrete hidden states capture rich transition dynamics of the output labels. We consider the case where the number of hidden states $M$ is much larger than the number of output labels, $M >> N$. Given the above notation, the energy for a particular configuration is: $$\begin{aligned} \mathcal{E}({\mathbf{y}}, {\mathbf{z}}| {\mathbf{x}}) = \sum_{t=1}^T ( & \psi_{zf}(f_t, z_t) + \psi_{zy}(z_t, y_t) \nonumber \\ &+ \psi_{zz}(z_t, z_{t+1})) \label{eq:energy}\end{aligned}$$ where $\psi$ are scalar scoring functions of their arguments. $\psi_{zf}(f_t, z_t)$ and $\psi_{zy}(z_t, y_t)$ are the local scores for the interaction between the input features and the hidden states, and the hidden state and the output state, respectively. $\psi_{zz}(z_t, z_{t+1})$ are the scores for transitioning from a hidden state $z_t$ to hidden state $z_{t+1}$. The distribution over output labels is given by: $$\begin{aligned} {\mathbb{P}}({\mathbf{y}}|{\mathbf{x}}) = \frac{1}{Z} \sum_{{\mathbf{z}}} \exp \left( \mathcal{E}({\mathbf{y}}, {\mathbf{z}}| {\mathbf{x}}) \right); \label{eq:py}\end{aligned}$$ $Z = \sum_{{\mathbf{y}}} \sum_{{\mathbf{z}}} \exp \left( \mathcal{E}({\mathbf{y}}, {\mathbf{z}}| {\mathbf{x}}) \right)$ is the partition function.
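Exact MAP decoding over the latent chain uses the standard Viterbi recursion. A minimal sketch with illustrative sizes, folding the $\psi_{zf}$ and $\psi_{zy}$ terms into one per-step score (this is not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(1)
T, M = 5, 6
emit = rng.normal(size=(T, M))   # psi_zf(f_t, z) and psi_zy(z, y) folded together
trans = rng.normal(size=(M, M))  # psi_zz(z_t, z_{t+1})

# Viterbi: score[j] = best score of any path ending in state j at step t.
score = emit[0].copy()
back = np.zeros((T, M), dtype=int)
for t in range(1, T):
    cand = score[:, None] + trans        # extend every path by one transition
    back[t] = cand.argmax(axis=0)
    score = cand.max(axis=0) + emit[t]

# Backtrack to recover the MAP latent-state sequence z_1..z_T.
z = [int(score.argmax())]
for t in range(T - 1, 0, -1):
    z.append(int(back[t, z[-1]]))
z = z[::-1]
```

The same dynamic program, with max replaced by log-sum-exp, gives the forward pass needed for the partition function $Z$.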
--- abstract: 'The purpose of this note is two-fold. Firstly, we prove that the variety $\mathbf{RDMSH_1}$ of regular De Morgan semi-Heyting algebras of level 1 satisfies Stone identity and present (equational) axiomatizations for several subvarieties of $\mathbf{RDMSH_1}$. Secondly, we give a concrete description of the lattice of subvarieties of the variety $\mathbf{RDQDStSH_1}$ of regular dually quasi-De Morgan Stone semi-Heyting algebras that contains $\mathbf{RDMSH_1}$. Furthermore, we prove that every subvariety of $\mathbf{RDQDStSH_1}$, and hence of $\mathbf{RDMSH_1}$, has Amalgamation Property. The note concludes with some open problems for further investigation.' author: - 'Hanamantagouda P. Sankappanavar' title: 'A Note on Regular De Morgan Semi-Heyting Algebras' --- \[section\] \[Lemma\][**THEOREM**]{} \[Lemma\][**CLAIM**]{} \[Lemma\][**COROLLARY**]{} \[Lemma\][**PROPOSITION**]{} \[Lemma\][**EXAMPLE**]{} \[Lemma\][**FACT**]{} \[Lemma\][**DEFINITION**]{} \[Lemma\][**NOTATION**]{} \[Lemma\][**REMARK**]{} [**Introduction**]{} {#SA} ==================== Semi-Heyting algebras were introduced by us in [@Sa07] as an abstraction of Heyting algebras. They share several important properties with Heyting algebras, such as distributivity, pseudocomplementedness, and so on. On the other hand, interestingly, there are also semi-Heyting algebras, which, in some sense, are “quite opposite” to Heyting algebras. For example, the identity $0 \to 1 \approx 0$, as well as the commutative law $x \to y \approx y \to x$, hold in some semi-Heyting algebras. The subvariety of commutative semi-Heyting algebras was defined in [@Sa07] and is further investigated in [@Sa10]. Quasi-De Morgan algebras were defined in [@Sa87a] as a common abstraction of De Morgan algebras and distributive $p$-algebras. 
In [@Sa12], expanding semi-Heyting algebras by adding a dual quasi-De Morgan operation, we introduced the variety $\mathbf{DQDSH}$ of dually quasi-De Morgan semi-Heyting algebras as a common generalization of De Morgan Heyting algebras (see [@Sa87] and [@Mo80]) and dually pseudocomplemented Heyting algebras (see [@Sa85]) so that we could settle an old conjecture of ours. The concept of regularity has played an important role in the theory of pseudocomplemented De Morgan algebras (see [@Sa86]). Recently, in [@Sa14] and [@Sa14a], we introduced and examined the concept of regularity in the context of $\mathbf{DQDSH}$ and gave an explicit description of (twenty-five) simple algebras in the (sub)variety $\mathbf{DQDStSH_1}$ of regular dually quasi-De Morgan Stone semi-Heyting algebras of level 1. The work in [@Sa14] and [@Sa14a] led us to conjecture that the variety $\mathbf{RDMSH_1}$ of regular De Morgan algebras satisfies Stone identity. The purpose of this note is two-fold. Firstly, we prove that the variety $\mathbf{RDMSH_1}$ of regular De Morgan semi-Heyting algebras of level 1 satisfies Stone identity, thus settling the above-mentioned conjecture affirmatively. As applications of this result and the main theorem of [@Sa14], we present (equational) axiomatizations for several subvarieties of $\mathbf{RDMSH_1}$. Secondly, we give a concrete description of the lattice of subvarieties of the variety $\mathbf{RDQDStSH_1}$ of regular dually quasi-De Morgan Stone semi-Heyting algebras, of which $\mathbf{RDMSH_1}$ is a subvariety. Furthermore, we prove that every subvariety of $\mathbf{RDQDStSH_1}$, and hence of $\mathbf{RDMSH_1}$, has Amalgamation Property. The note concludes with some open problems for further investigation. **[Dually Quasi-De Morgan Semi-Heyting Algebras]{}** {#SB} ==================================================== The following definition is taken from [@Sa07]. 
An algebra ${\mathbf L}= \langle L, \vee ,\wedge ,\to,0,1 \rangle$ is a [*semi-Heyting algebra*]{} if\ $\langle L,\vee ,\wedge ,0,1 \rangle$ is a bounded lattice and ${\mathbf L}$ satisfies: 1. $x \wedge (x \to y) \approx x \wedge y$ 2. $x \wedge(y \to z) \approx x \wedge ((x \wedge y) \to (x \wedge z))$ 3. $x \to x \approx 1$. Let ${\mathbf L}$ be a semi-Heyting algebra and, for $x \in {\mathbf L}$, let $x^*:=x \to 0$. ${\mathbf L}$ is a [*Heyting algebra*]{} if ${\mathbf L}$ satisfies: 1. $(x \wedge y) \to y \approx 1$. ${\mathbf L}$ is a [*commutative semi-Heyting algebra*]{} if ${\mathbf L}$ satisfies: 1. $x \to y \approx y \to x$. ${\mathbf L}$ is a [*Boolean semi-Heyting algebra*]{} if ${\mathbf L}$ satisfies: 1. $x \lor x^{*} \approx 1$. ${\mathbf L}$ is a [*Stone semi-Heyting algebra*]{} if ${\mathbf L}$ satisfies: 1. $x^* \lor x^{**} \approx 1$. Semi-Heyting algebras are distributive and pseudocomplemented, with $a^*$ as the pseudocomplement of an element $a$. We will use these and other properties (see [@Sa07]) of semi-Heyting algebras, frequently without explicit mention, throughout this paper. The following definition is taken from [@Sa12]. An algebra ${\mathbf L}= \langle L, \vee ,\wedge ,\to, ', 0,1 \rangle $ is a [*semi-Heyting algebra with a dual quasi-De Morgan operation*]{} or [*dually quasi-De Morgan semi-Heyting algebra*]{} [(]{}$\mathbf {DQDSH}$-algebra, for short[)]{} if\ $\langle L, \vee ,\wedge ,\to, 0,1 \rangle $ is a semi-Heyting algebra, and ${\mathbf L}$ satisfies: - $0' \approx 1$ and $1' \approx 0$ - $(x \land y)' \approx x' \lor y'$ - $(x \lor y)'' \approx x'' \lor y''$ - $x'' \leq x$. Let $\mathbf{L} \in \mathbf {DQDSH}$. Then ${\bf L}$ is a [*dually Quasi-De Morgan Stone semi-Heyting algebra*]{} [(]{}$\mathbf{DQDStSH}$-algebra[)]{} if ${\bf L}$ satisfies (St). $\mathbf {L}$ is a [*De Morgan semi-Heyting algebra*]{} or [*symmetric semi-Heyting algebra*]{} [(]{}$\mathbf{DMSH}$-algebra[)]{} if ${\bf L}$ satisfies: - $x'' \approx x$. 
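As a toy sanity check (mine, not the note's), one can verify by brute force that the three-element chain $0 < a < 1$ with the usual Heyting implication satisfies the three semi-Heyting identities in the definition above:

```python
from itertools import product

# Encode the chain 0 < a < 1 as 0, 1, 2; meet and join are min and max.
# On a chain the Heyting implication is: x -> y = top if x <= y, else y.
# This is an illustrative check, not code from the note.
TOP = 2
chain = (0, 1, TOP)
imp = lambda x, y: TOP if x <= y else y

for x, y, z in product(chain, repeat=3):
    assert min(x, imp(x, y)) == min(x, y)                          # identity 1
    assert min(x, imp(y, z)) == min(x, imp(min(x, y), min(x, z)))  # identity 2
    assert imp(x, x) == TOP                                        # identity 3
```

The same brute-force loop extends to checking (H), (Co), (Bo), or (St) on other small candidate algebras.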
$\mathbf{L}$ is a [*dually pseudocomplemented semi-Heyting algebra*]{} [(]{}$\mathbf {DPCSH}$-algebra if $\mathbf{L}$ satisfies: - $x \lor x' \approx 1$. The varieties of $\mathbf {DQDSH}$-algebras, $\mathbf{DQDStSH}$-algebras, $\mathbf{DMSH}$-algebras and $\mathbf {DPCSH}$-algebras are denoted, respectively, by $\mathbf {DQDSH}$, $\mathbf{DQDStSH}$, $\mathbf{DMSH}$ and $\mathbf {DPCSH}$. Furthermore, $\mathbf {DMcmSH}$ denotes the subvariety of $\mathbf {DMSH}$ defined by the commutative identity (Co), and $\mathbf {DQDBSH}$ denotes the one defined by (Bo). If the underlying semi-Heyting algebra of a $\mathbf{DQDSH}$-algebra is a Heyting algebra we denote the algebra by $\mathbf{DQDH}$-algebra, and the corresponding variety is denoted by $\mathbf{DQDH}$. In the sequel, $a'{^*}'$ will be denoted by $a^+$, for $a \in \mathbf{L} \in \mathbf{DQDSH}$. The following lemma will often be used without explicit reference to it. Most of the items in this lemma were proved in [@Sa12], and the others are left to the reader. \[2.2\] Let ${\mathbf L} \in \mathbf{DQDSH}$ and let $x,y, z \in L$. Then 1. $1'^{*}=1$ 2. $x \leq y$ implies $x' \geq y'$ 3. $(x \land y)'^{*}=x'^{*} \land y'^{*}$ 4. $ x'''
--- abstract: | For any $m \geq 1$, let $H_m$ denote the quantity $\liminf_{n \to \infty} (p_{n+m}-p_n)$, where $p_n$ is the $n^{\operatorname{th}}$ prime. A celebrated recent result of Zhang showed the finiteness of $H_1$, with the explicit bound $H_1 \leq 70000000$. This was then improved by us (the Polymath8 project) to $H_1 \leq 4680$, and then by Maynard to $H_1 \leq 600$, who also established for the first time a finiteness result for $H_m$ for $m \geq 2$, and specifically that $H_m \ll m^3 e^{4m}$. If one also assumes the Elliott-Halberstam conjecture, Maynard obtained the bound $H_1 \leq 12$, improving upon the previous bound $H_1 \leq 16$ of Goldston, Pintz, and Yıldırım, as well as the bound $H_m \ll m^3 e^{2m}$. In this paper, we extend the methods of Maynard by generalizing the Selberg sieve further, and by performing more extensive numerical calculations. As a consequence, we can obtain the bound $H_1 \leq 246$ unconditionally, and $H_1 \leq 6$ under the assumption of the generalized Elliott-Halberstam conjecture. Indeed, under the latter conjecture we show the stronger statement that for any admissible triple $(h_1,h_2,h_3)$, there are infinitely many $n$ for which at least two of $n+h_1,n+h_2,n+h_3$ are prime, and also obtain a related disjunction asserting that either the twin prime conjecture holds, or the even Goldbach conjecture is asymptotically true if one allows an additive error of at most $2$, or both. We also modify the “parity problem” argument of Selberg to show that the $H_1 \leq 6$ bound is the best possible that one can obtain from purely sieve-theoretic considerations. For larger $m$, we use the distributional results obtained previously by our project to obtain the unconditional asymptotic bound $H_m \ll m e^{(4-\frac{28}{157})m}$, or $H_m \ll m e^{2m}$ under the assumption of the Elliott-Halberstam conjecture. We also obtain explicit upper bounds for $H_m$ when $m=2,3,4,5$. 
address: ' , ' author: - title: 'Variants of the Selberg sieve, and bounded intervals containing many primes' --- Introduction ============ For any natural number $m$, let $H_m$ denote the quantity $$H_m \coloneqq \liminf_{n \to \infty} (p_{n+m} - p_n),$$ where $p_n$ denotes the $n^{\operatorname{th}}$ prime. The twin prime conjecture asserts that $H_1=2$; more generally, the Hardy-Littlewood prime tuples conjecture [@hardy] implies that $H_m = H(m+1)$ for all $m \geq 1$, where $H(k)$ is the diameter of the narrowest admissible $k$-tuple (see Section \[subclaim-sec\] for a definition of this term). Asymptotically, one has the bounds $$(\frac{1}{2}+o(1)) k \log k \leq H(k) \leq (1+o(1)) k \log k$$ as $k \to \infty$ (see Theorem \[hk-bound\] below); thus the prime tuples conjecture implies that $H_m$ is comparable to $m \log m$ as $m \to \infty$. Until very recently, it was not known if any of the $H_m$ were finite, even in the easiest case $m=1$. In the breakthrough work of Goldston, Pintz, and Yıldırım [@gpy], several results in this direction were established, including the following conditional result assuming the Elliott-Halberstam conjecture $\operatorname*{EH}[\vartheta]$ (see Claim \[eh-def\] below) concerning the distribution of the prime numbers in arithmetic progressions: \[gpy-thm\] Assume the Elliott-Halberstam conjecture $\operatorname*{EH}[\vartheta]$ for all $0 < \vartheta < 1$. Then $H_1 \leq 16$. Furthermore, it was shown in [@gpy] that any result of the form $\operatorname*{EH}[\frac{1}{2} + 2\varpi]$ for some fixed $0 < \varpi < 1/4$ would imply an explicit finite upper bound on $H_1$ (with this bound equal to $16$ for $\varpi > 0.229855$). Unfortunately, the only results of the type $\operatorname*{EH}[\vartheta]$ that are known come from the Bombieri-Vinogradov theorem (Theorem \[bv-thm\]), which only establishes $\operatorname*{EH}[\vartheta]$ for $0 < \vartheta < 1/2$. 
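Admissibility of a tuple is easy to test by machine: a $k$-tuple $(h_1,\ldots,h_k)$ is admissible if, for every prime $p \leq k$, the $h_i$ miss at least one residue class mod $p$ (a prime $p > k$ can never have all its residue classes covered by $k$ values). A small illustrative sketch of that definition, not code from the paper:

```python
def is_admissible(h):
    """Check whether the integer tuple h is admissible."""
    k = len(h)
    for p in range(2, k + 1):
        if all(p % d for d in range(2, p)):      # p is prime
            if len({x % p for x in h}) == p:     # every residue class mod p hit
                return False
    return True

assert is_admissible((0, 2))         # the twin prime tuple
assert is_admissible((0, 2, 6))
assert not is_admissible((0, 2, 4))  # covers all residue classes mod 3
```

Searching over such tuples is how explicit values of $H(k)$, the narrowest admissible diameter, are bounded in practice.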
The first unconditional bound on $H_1$ was established in a breakthrough work of Zhang [@zhang]: \[zhang-thm\] $H_1 \leq \num{70000000}$. Zhang’s argument followed the general strategy from [@gpy] on finding small gaps between primes, with the major new ingredient being a proof of a weaker version of $\operatorname*{EH}[\frac{1}{2}+2\varpi]$, which we call $\operatorname*{MPZ}[\varpi,\delta]$; see Claim \[mpz-claim\] below. It was quickly realized that Zhang’s numerical bound on $H_1$ could be improved. By optimizing many of the components in Zhang’s argument, we were able [@polymath8a; @polymath8a-unabridged] to improve Zhang’s bound to $$H_1 \leq \num{4680}.$$ Very shortly afterwards, a further breakthrough was obtained by Maynard [@maynard-new] (with related work obtained independently in unpublished work of Tao), who developed a more flexible “multidimensional” version of the Selberg sieve to obtain stronger bounds on $H_m$. This argument worked without using any equidistribution results on primes beyond the Bombieri-Vinogradov theorem, and amongst other things was able to establish finiteness of $H_m$ for all $m$, not just for $m=1$. More precisely, Maynard established the following results. Unconditionally, we have the following bounds: - $H_1 \leq 600$. - $H_m \leq C m^3 e^{4m}$ for all $m \geq 1$ and an absolute (and effective) constant $C$. Assuming the Elliott-Halberstam conjecture $\operatorname*{EH}[\vartheta]$ for all $0 < \vartheta < 1$, we have the following improvements: - $H_1 \leq 12$. - $H_2 \leq 600$. - $H_m \leq C m^3 e^{2m}$ for all $m \geq 1$ and an absolute (and effective) constant $C$. For a survey of these recent developments, see [@granville]. In this paper, we refine Maynard’s methods to obtain the following further improvements. \[main\] Unconditionally, we have the following bounds: - $H_1 \leq 246$. - $H_2 \leq \num{398130}$. - $H_3 \leq \num{24797814}$. - $H_4 \leq \num{1431556072}$. - $H_5 \leq \num{80550202480}$. 
- $H_m \leq C m \exp( (4 - \frac{28}{157}) m )$ for all $m \geq 1$ and an absolute (and effective) constant $C$. Assume the Elliott-Halberstam conjecture $\operatorname*{EH}[\vartheta]$ for all $0 < \vartheta < 1$. Then we have the following improvements: - $H_2 \leq 270$. - $H_3 \leq \num{52116}$. - $H_4 \leq \num{474266}$. - $H_5 \leq \num{4137854}$. - $H_m \leq Cme^{2m}$ for all $m \geq 1$ and an absolute (and effective) constant $C$. Finally, assume the generalized Elliott-Halberstam conjecture $\operatorname*{GEH}[\vartheta]$ (see Claim \[geh-def\] below) for all $0 < \vartheta < 1$. Then - $H_1 \leq 6$. - $H_2 \leq 252$. In Section \[subclaim-sec\] we will describe the key propositions that will be combined together to prove the various components of Theorem \[main\]. As with Theorem \[gpy-thm\], the results in (vii)-(xiii) do not require $\operatorname*{EH}[\
--- abstract: 'We use first-principles density functional theory total energy and linear response phonon calculations to compute the Helmholtz and Gibbs free energy as a function of temperature, pressure, and cell volume in the flexible metal-organic framework material MIL-53(Cr) within the quasiharmonic approximation. GGA and metaGGA calculations were performed, each including empirical van der Waals (vdW) forces under the D2, D3, or D3(BJ) parameterizations. At all temperatures up to 500 K and pressures from -30 MPa to 30 MPa, two minima in the free energy versus volume are found, corresponding to the narrow pore ($np$) and large pore ($lp$) structures. Critical positive and negative pressures are identified, beyond which there is only one free energy minimum. While all results overestimated the stability of the $np$ phase relative to the $lp$ phase, the best overall agreement with experiment is found for the metaGGA PBEsol+RTPSS+U+J approach with D3 or D3(BJ) vdW forces. For these parameterizations, the calculated free energy barrier for the $np$-$lp$ transition is only 3 to 6 kJ per mole of Cr$_4$(OH)$_4$(C$_8$H$_4$O$_4$)$_4$.' author: - Eric Cockayne title: 'Thermodynamics of the Flexible Metal-Organic Framework Material MIL-53(Cr) From First Principles' --- \#1[[*\#1*]{}]{} \#1[[Eq. (\[eq:\#1\])]{}]{} \#1[[Fig. \[fig:\#1\]]{}]{} \#1[[Sec. \[sec:\#1\]]{}]{} \#1[[Ref. ]{}]{} \#1[[Table \[tab:\#1\]]{}]{} Microporous flexible metal-organic framework materials are fascinating both from a fundamental point of view and for their numerous potential applications such as gas storage, gas separation, sensors, drug delivery, etc.[@Ferey09; @Alhamami14; @Schneemann14; @Coudert15; @Ferey16] A well-studied example is the MIL-53 family,[@Serre02] with formula M(OH)(C$_8$H$_4$O$_4)$, where is M is a trivalent species such as Cr, Sc, Al, Ga or Fe. 
These structures consist of zigzag M-OH-M-OH$\dots$ chains, crosslinked by 1,4-benzodicarboxylate O$_2$C-C$_6$H$_4$-CO$_2$ (bdc) units (). Each M is coordinated by two oxygens of OH units and four carboxylate oxygens yielding octahedral oxygen coordination. ![Structure of MIL-53(Cr). Cr atoms green, O red, C gray, and H yellow. (a) bdc linkers joining zigzag Cr-OH-Cr-$\dots$ chains. (b) Each zigzag chain is coordinated with four neighboring chains; each Cr is octahedrally coordinated with six O. (c) Narrow pore ($np$) phase showing bdc rotations. (d) Large pore ($lp$) phase. In (c) and (d), the H are not shown.[]{data-label="fig:mil53x"}](mil53x.pdf){width="85mm"} These MIL-53 compounds exhibit a variety of topologically equivalent structures with different volumes, but generally include a narrow pore ($np$) structure and a large pore ($lp$) structure, both with formula M$_4$(OH)$_4$(bdc)$_4$ per conventional unit cell, but with significantly different volumes. In MIL-53(Al), the phase transition between $np$ and $lp$ forms can be reversibly achieved by cycling the temperature;[@Liu08] the cell parameter corresponding to the direction of the short axis of the lozenge pores was found to increase by 87 % in the $np$-$lp$ transformation. By way of comparison, the strain variations achieved or predicted in functional “hard" materials such as (PbMg$_{1/3}$Nb$_{2/3}$O$_3$)$_{(1-x)}$-(PbTiO$_3$)$_{x}$[@Park97] or BiFeO$_3$[@Dieguez11] are much smaller. The large hysteresis[@Liu08] in the $np$-$lp$ phase transition of MIL-53(Al) indicates that the transition is first-order. Taking the transition temperature as the midrange of the hysteresis loop, the transition temperature $T_c$ is approximately 260 K; an estimate based on experimental sorption measurements places the transition at a somewhat lower temperature of 203 K.[@Boutin10] For empty MIL-53(Cr), the $lp$ structure is thermodynamically preferred at all temperatures. 
In this system, a phase transition to a $np$ structure has instead been observed in the case of (1) sorption of a variety of sorbates; (2) pressure. The hysteresis of the process in each case[@Serre07] indicates again that there is a transition barrier. By fitting sorption isotherms, it was determined that the free energy difference between the $lp$ and $np$ forms of MIL-53(Cr) was only about 12 kJ mol$^{-1}$ of Cr$_4$(OH)$_4$(bdc)$_4$.[@Coudert08; @DevatourVinot09; @Coombes09] An experiment that put the system under hydrostatic pressure[@Beurroies10] came up with a similar free energy difference. The phase transition of MIL-53(Al) was explained by Walker et al.[@Walker10] in 2010. Van der Waals interactions stabilize the $np$ structure at low temperature, and vibrational entropy drives the structural transition to the $lp$ phase above $T_c$. Density functional theory (DFT) phonon calculations were used to quantify the vibrational entropy. In that work, however, the DFT energy and vibrational entropy were determined for only the $np$ and $lp$ structures. However, to build an accurate picture of the $np$-$lp$ phase transition, including the hysteresis and possible coexistence of $np$ and $lp$ phases,[@Triguero12] it is necessary to know the quantitative free energy landscape over the [*full*]{} volume range spanning the $np$ and $lp$ structures. This free-energy landscape of MIL-53 systems has previously been modeled in an [*ad hoc*]{} manner.[@Triguero11; @Ghysels13] This paper uses density functional total energy and phonon linear response calculations to compute the Helmholtz and Gibbs free energy in MIL-53(Cr) as a function of temperature, pressure, and cell volume, under the quasiharmonic approximation. MIL-53(Cr) was chosen because of its relatively simple phase transformation behavior and because it is well-characterized experimentally. The thermodynamic calculations are performed within the quasiharmonic approximation. 
In the quasiharmonic approximation, the anharmonic lattice dynamics that leads to thermal expansion, etc., is approximated by harmonic lattice dynamics where the phonon frequencies are volume-dependent. Suppose that one has a crystal where the rank-ordered frequencies ${\nu_{\mu} (V)}$ can be determined for an arbitrarily large supercell (equivalently at arbitrary points in the Brillouin zone of the primitive cell). The contribution of phonons to the thermodynamics is then given by well-known expressions.[@Maradudin71; @vandeWalle02; @Fultz10; @Huang16] Defining a dimensionless parameter $x_{\mu}(V,T) = \frac{h \nu_{\mu}(V)}{k_B T}$, the molar internal energy as a function of volume and temperature is given by $$\begin{aligned} \frac{U}{N}(V,T) = {\rm Lim}_{|a_{\rm min}|\rightarrow \infty} \frac{1}{N} \bigl(U_0(V) + \nonumber \\ k_B T \sum_{\mu = 4}^{3 N_A} [\frac{x_{\mu}(V,T)}{2} {\rm coth}(\frac {x_{\mu}(V,T)}{2})]\bigr), \label{eq:inten}\end{aligned}$$ the Helmholtz free energy by $$\begin{aligned} \frac{F}{N}(V,T) = {\rm Lim}_{|a_{\rm min}|\rightarrow \infty} \frac{1}{N} \bigl(U_0(V) + \nonumber \\ k_B T \sum_{\mu = 4}^{3 N_A} [\frac{x_{\mu}(V,T)}{2} + {\rm ln} (1 - e^{-x_{\mu}(V,T)})]\bigr), \label{eq:helm}\end{aligned}$$ and the Gibbs free energy is given by $\frac{G}{N}(V,T) = \frac{F}{N}(V,T) + P V$. $U_0(V)$ is the ground state energy neglecting zero-point vibrations, $N$ the number of moles and $N_A$ the number of atoms in the supercell, and the summation begins at $\mu = 4$ to avoid the weak singularity due to the zero-frequency translational modes. First-principles density functional theory calculations, as encoded in the [VASP]{} software (), were used to compute $U_0(V
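The vibrational sum in the Helmholtz expression above is simple to evaluate once the frequencies are known. A minimal sketch for a fixed volume, in SI units; the frequency and temperature values are illustrative assumptions, not data from the paper:

```python
import math

H_PLANCK = 6.62607015e-34  # J*s
K_B = 1.380649e-23         # J/K

def f_vib(freqs_hz, temp_k):
    """Vibrational Helmholtz free energy (J) of a set of phonon modes:
    sum over modes of k_B*T*(x/2 + ln(1 - exp(-x))), x = h*nu/(k_B*T)."""
    total = 0.0
    for nu in freqs_hz:
        x = H_PLANCK * nu / (K_B * temp_k)
        total += K_B * temp_k * (0.5 * x + math.log(1.0 - math.exp(-x)))
    return total

# A single mode collapses to the closed form k_B*T*ln(2*sinh(x/2)).
nu, T = 1.0e13, 300.0
x = H_PLANCK * nu / (K_B * T)
assert abs(f_vib([nu], T) - K_B * T * math.log(2.0 * math.sinh(0.5 * x))) < 1e-27
```

Summing this over the modes at each cell volume, adding $U_0(V)$ and $PV$, and scanning $V$ gives the free energy landscape studied in the paper.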
--- abstract: 'We present high performance implementations of the QR and the singular value decomposition of a batch of small matrices hosted on the GPU with applications in the compression of hierarchical matrices. The one-sided Jacobi algorithm is used for its simplicity and inherent parallelism as a building block for the SVD of low rank blocks using randomized methods. We implement multiple kernels based on the level of the GPU memory hierarchy in which the matrices can reside and show substantial speedups against streamed cuSOLVER SVDs. The resulting batched routine is a key component of hierarchical matrix compression, opening up opportunities to perform H-matrix arithmetic efficiently on GPUs.' address: - '$^1$Extreme Computing Research Center (ECRC), King Abdullah University of Science and Technology (KAUST), Thuwal 23955, Saudi Arabia.' - '$^2$Department of Computer Science, American University of Beirut (AUB), Beirut, Lebanon.' author: - Wajih Halim Boukaram$^1$ - George Turkiyyah$^2$ - Hatem Ltaief$^1$ - 'David E. Keyes$^1$' bibliography: - 'arxiv\_batch\_svd.bib' title: Batched QR and SVD Algorithms on GPUs with Applications in Hierarchical Matrix Compression --- Introduction {#sec:intro} ============ The singular value decomposition (SVD) is a factorization of a general $m \times n$ matrix $A$ of the form $$A = U \Sigma V^*.$$ $U$ is an $m \times m$ orthonormal matrix whose columns $U_i$ are called the left singular vectors. $\Sigma$ is an $m \times n$ diagonal matrix whose diagonal entries $\sigma_i$ are called the singular values and are sorted in decreasing order. $V$ is an $n \times n$ orthonormal matrix whose columns $V_i$ are called the right singular vectors. When $m > n$, we can compute a reduced form $A = \hat{U} \hat{\Sigma} V^*$ where $\hat{U}$ is an $m \times n$ matrix and $\hat{\Sigma}$ is an $n \times n$ diagonal matrix. 
One can easily obtain the full form from the reduced one by extending $\hat{U}$ with $(m - n)$ orthogonal vectors and $\hat{\Sigma}$ with an $(m - n)$ zero block row. Without any loss of generality, we will focus on the reduced SVD of real matrices in our discussions. The SVD of a matrix is a crucial component in many applications in signal processing and statistics as well as matrix compression, where truncating the $(n - k)$ singular values that are smaller than some threshold gives us a rank-$k$ approximation $\tilde{A}$ of the matrix $A$. This matrix is the unique minimizer of the function $f_k(B) = || A - B ||_F$. In the context of hierarchical matrix operations, effective compression relies on the ability to perform the computation of large batches of independent SVDs of small matrices of low numerical rank. Randomized methods [@halko2011finding] are well suited for computing a truncated SVD of these types of matrices and are built on three computational kernels: the QR factorization, matrix-matrix multiplications and SVDs of smaller $k \times k$ matrices. Motivated by this task, we discuss the implementation of high performance batched QR and SVD kernels on the GPU, focusing on the more challenging SVD tasks. The remainder of this paper is organized as follows. Section \[sec:background\] presents different algorithms used to compute the QR factorization and the SVD as well as some considerations when optimizing for GPUs. Section \[sec:batch\_qr\] discusses the batched QR factorization and compares its performance with existing libraries. Sections \[sec:registers\], \[sec:shared\] and \[sec:block\_global\] discuss the various implementations of the SVD based on the level of the memory hierarchy in which the matrices can reside. 
Specifically, Section \[sec:registers\] describes the implementation for very small matrix sizes that can fit in registers, Section \[sec:shared\] describes the implementation for matrices that can reside in shared memory, and Section \[sec:block\_global\] describes the block Jacobi implementation for larger matrix sizes that must reside in global memory. Section \[sec:randomized\] details the implementation of the batched randomized SVD routine. We then discuss some details of the application to hierarchical matrix compression in Section \[sec:application\]. We conclude and discuss future work in Section \[sec:conclusion\]. Background {#sec:background} ========== In this section we give a review of the most common algorithms used to compute the QR factorization and the SVD of a matrix as well as discuss some considerations when optimizing on the GPU. QR Factorization ---------------- The QR factorization decomposes an $m \times n$ matrix $A$ into the product of an orthogonal $m \times m$ matrix $Q$ and an upper triangular $m \times n$ matrix $R$ [@golub2013matrix]. We can also compute a reduced form of the decomposition where Q is an $m \times n$ matrix and R is $n \times n$ upper triangular. The most common QR algorithm is based on transforming $A$ into an upper triangular matrix using a series of orthogonal transformations generated using Householder reflectors. Other algorithms such as the Gram-Schmidt or Modified Gram-Schmidt can produce the QR factorization by orthogonalizing a column with all previous columns; however, these methods are less stable than the Householder orthogonalization and the orthogonality of the resulting $Q$ factor suffers with the condition number of the matrix. Another method is based on Givens rotations, where entries in the subdiagonal part of the matrix are zeroed out to form the triangular factor and the rotations are accumulated to form the orthogonal factor. 
This method is very stable and has more parallelism than the Householder method; however, it is more expensive, doing about 50% more work, and it is more challenging to extract the parallelism efficiently on the GPU. For our implementation, we rely on the Householder method due to its numerical stability and simplicity. The method is described in pseudo-code in Algorithm \[alg:qr\]. \[t\] Initialize $[Q, R] = [I, A]$; then, for each column $i$: compute the reflector $v = \text{house}(R(i))$, update the trailing matrix $R = (I - 2vv^T) R$ \[alg:qr:trailing\_update\], and accumulate $Q = Q (I - 2vv^T)$. SVD Algorithms -------------- Most implementations of the SVD are based on the two-phase approach popularized by Trefethen et al. [@trefethen1997numerical], where the matrix $A$ first undergoes bidiagonalization of the form $A = Q_U B Q_V^T$ where $Q_U$ and $Q_V$ are orthonormal matrices and $B$ is a bidiagonal matrix. The matrix $B$ is then diagonalized using some variant of the QR algorithm, the divide and conquer method or a combination of both to produce a decomposition $B = U_B \Sigma V_B^T$. The complete SVD is then determined as $A = (Q_U U_B) \Sigma (Q_V V_B)^T$ during the backward transformation. These methods require significant algorithmic and programming effort to become robust and efficient while still suffering from a loss of relative accuracy [@demmel1992jacobi]. An alternative is the one-sided Jacobi method where all $n(n-1)/2$ pairs of columns are repeatedly orthogonalized in sweeps using plane rotations until all columns are mutually orthogonal. When the process converges (i.e., all columns are mutually orthogonal up to machine precision), the left singular vectors are the normalized columns of the modified matrix with the singular values as the norms of those columns. The right singular vectors can be computed either by accumulating the rotations or by solving a system of equations. Our application does not need the right vectors, so we omit the details of computing them. Algorithm \[alg:jacobi\] describes the one-sided Jacobi method.
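A minimal dense implementation of Algorithm \[alg:qr\] might look as follows. This is a serial NumPy sketch of the Householder method for illustration only; the paper's batched GPU kernels are organized very differently.

```python
import numpy as np

def householder_qr(A):
    """Reduced QR of an m x n matrix A (m >= n, assumed full rank) by
    Householder reflectors, mirroring Algorithm [alg:qr]: each column is
    reflected onto a multiple of e_1 and the trailing matrix is updated."""
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for i in range(n):
        x = R[i:, i]
        v = x.copy()
        # choose the sign that avoids cancellation
        v[0] += np.copysign(np.linalg.norm(x), x[0])
        v /= np.linalg.norm(v)
        R[i:, i:] -= 2.0 * np.outer(v, v @ R[i:, i:])   # R = (I - 2 v v^T) R
        Q[:, i:] -= 2.0 * np.outer(Q[:, i:] @ v, v)     # Q = Q (I - 2 v v^T)
    return Q[:, :n], np.triu(R[:n, :])
```

The returned `Q` has orthonormal columns and `Q @ R` reproduces `A` up to round-off.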
Since each pair of columns can be orthogonalized independently, the method is also easily parallelized. The simplicity and inherent parallelism of the method make it an attractive first choice for an implementation on the GPU. \[b\] For each pair of columns $(i, j)$: form the Gram matrix $G = A_{ij}^T A_{ij}$ \[alg:jacobi:gram\], compute the rotation $R = rot(G)$ that diagonalizes it, and apply $A_{ij} = A_{ij} R$ \[alg:jacobi:rot\]; repeat in sweeps until all columns are mutually orthogonal. GPU Optimization Considerations ------------------------------- GPU kernels are launched by specifying a grid configuration which lets us organize threads into blocks and blocks into a grid. Launching a GPU kernel causes a short stall (as much as 10 microseconds) as the kernel is prepared for execution. This kernel launch overhead prevents kernels that complete their work faster than the overhead from executing in parallel, essentially serializing them. To overcome this limitation when processing small workloads, the work is batched into a single kernel call when possible [@batchqr_haidar; @batch_haidar]. All operations can then be executed in parallel without incurring the kernel launch overhead, with the grid configuration used to determine thread work assignment. A warp is a group of threads (32 threads in current generation GPUs, such as the NVIDIA K40) within a block that executes a single instruction in lockstep, without requiring any explicit synchronization. The occupancy of a kernel is the ratio of active warps to the maximum number of warps that can be resident on a multiprocessor.
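For reference, the one-sided Jacobi procedure of Algorithm \[alg:jacobi\] can be sketched in a few lines of NumPy. This serial, unblocked version is purely illustrative; the rotation angle follows the standard Jacobi derivation for the $2 \times 2$ Gram block, and the right singular vectors are not accumulated, as in the text.

```python
import numpy as np

def one_sided_jacobi_svd(A, tol=1e-12, max_sweeps=30):
    """One-sided Jacobi: sweep over all column pairs, applying plane
    rotations until every pair is orthogonal to relative tolerance tol.
    Returns (U, sigma); A is assumed to have full column rank."""
    A = A.astype(float).copy()
    n = A.shape[1]
    for _ in range(max_sweeps):
        off = 0.0
        for i in range(n - 1):
            for j in range(i + 1, n):
                # 2x2 Gram block [[a, c], [c, b]] of columns i and j
                a = A[:, i] @ A[:, i]
                b = A[:, j] @ A[:, j]
                c = A[:, i] @ A[:, j]
                off = max(off, abs(c) / np.sqrt(a * b))
                if abs(c) < tol * np.sqrt(a * b):
                    continue
                # rotation zeroing the off-diagonal Gram entry
                zeta = (b - a) / (2.0 * c)
                t = (np.sign(zeta) / (abs(zeta) + np.hypot(1.0, zeta))
                     if zeta != 0.0 else 1.0)
                cs = 1.0 / np.hypot(1.0, t)
                sn = cs * t
                A[:, [i, j]] = A[:, [i, j]] @ np.array([[cs, sn], [-sn, cs]])
        if off < tol:
            break
    sigma = np.linalg.norm(A, axis=0)      # singular values = column norms
    U = A / sigma                          # left vectors = normalized columns
    return U, sigma
```

On convergence the columns of `U` are orthonormal and the (unsorted) `sigma` match the singular values from a conventional SVD.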
--- abstract: 'In this paper, we apply the statefinder diagnostic to the cosmology with the Abnormally Weighting Energy hypothesis (AWE cosmology), in which dark energy in the observational (ordinary matter) frame results from the violation of weak equivalence principle (WEP) by pressureless matter. It is found that there exist closed loops in the statefinder plane, which is an interesting characteristic of the evolution trajectories of statefinder parameters and can be used to distinguish AWE cosmology from the other cosmological models.' author: - 'Dao-jun Liu' - 'Wei-zhong Liu' title: Statefinder diagnostic for cosmology with the abnormally weighting energy hypothesis --- Understanding the acceleration of the cosmic expansion is one of the deepest problems of modern cosmology and physics. In order to explain the acceleration, an unexpected energy component of the cosmic budget, dark energy, is introduced by many cosmologists. Perhaps the simplest proposal is Einstein’s cosmological constant $\Lambda$ (vacuum energy), whose energy density remains constant with time. However, due to some conceptual problems associated with the cosmological constant (for a review, see [@ccp]), a large variety of alternative possibilities have been explored. The most popular among them is the quintessence scenario, which uses a scalar field $\phi$ with a suitably chosen potential $V(\phi)$ so as to make the vacuum energy vary with time. Inclusion of a non-minimal coupling to gravity in quintessence models together with further generalization leads to models of dark energy in a scalar-tensor theory of gravity. Besides, some other models invoke unusual material in the universe such as Chaplygin gas, tachyon, phantom or k-essence (see, for a review, [@dde] and references therein).
The possibility that dark energy comes from the modifications of four-dimensional general theory of relativity (GR) on large scales due to the presence of extra dimensions [@DGP] or other assumptions [@fR] has also been explored. A merit of these models is the absence of matter violating the strong energy condition (SEC). Recently, Füzfa and Alimi proposed a completely new interpretation of dark energy that also does not require the violation of the strong energy condition [@AWE]. They assume that dark energy does not couple to gravitation as usual matter and weights abnormally, *i.e.*, violates the weak equivalence principle (WEP) on large scales. The abnormally weighting energy (AWE) hypothesis naturally derives from more general effective theories of gravitation motivated by string theory in which the couplings of the different matter fields to the dilaton are not universal in general (see [@AWE07] and the references therein). In Ref.[@AWE07], Füzfa and Alimi also applied the above AWE hypothesis to a pressureless fluid to explain dark energy effects and further to consider a unified approach to dark energy and dark matter. As so many dark energy models have been proposed, a discrimination between these rivals is needed. A new geometrical diagnostic, dubbed the statefinder pair $\{r, s\}$, was proposed by Sahni *et al.* [@statefinder], where $r$ is only determined by the scale factor $a$ and its derivatives with respect to the cosmic time $t$, just as the Hubble parameter $H$ and the deceleration parameter $q$, and $s$ is a simple combination of $r$ and $q$. The statefinder pair has been used to explore a series of dark energy and cosmological models, including $\Lambda$CDM, quintessence, coupled quintessence, Chaplygin gas, holographic dark energy models, braneworld models, Cardassian models and so on [@SF03; @Pavon; @TJZhang]. In this paper, we apply the statefinder diagnostic to the AWE cosmology.
We find that there is a typical characteristic of the evolution of statefinder parameters for the AWE cosmology that distinguishes it from the other cosmological models. As is presented in Ref.[@AWE07], in the AWE cosmology, the energy content of the universe is divided into three parts: a gravitational sector with metric field ($g_{\mu\nu}^{*}$ ) and scalar field ($\phi$) components, a matter sector containing the usual fluids (baryons, photons, normally weighting dark matter if any, etc) and an abnormally weighting energy (AWE) sector. The normally and abnormally weighting matter are assumed to interact only through their gravitational influence without any direct interaction. The corresponding action in the Einstein frame can be written as $$\begin{aligned} \label{action} S&=&\frac{1}{2\kappa_*}\int \sqrt{-g_*}d^4x\{R^*-2g_*^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi\}\nonumber\\ &+&S_m[\psi_m, A^2_m(\phi)g^*_{\mu\nu}]+S_{awe}[\psi_{awe}, A^2_{awe}(\phi)g^*_{\mu\nu}]\end{aligned}$$ where $S_m$ is the action for the matter sector with matter fields $\psi_m$, $S_{awe}$ is the action for AWE sector with fields $\psi_{awe}$, $R^*$ is the curvature scalar, $\kappa_{*}=8\pi G_*$ and $G_*$ is the ’bare’ gravitational coupling constant. $A_{awe}(\phi)$ and $A_{m}(\phi)$ are the constitutive coupling functions to the metric $g_{\mu\nu}^*$ for the AWE and matter sectors respectively. Consider a flat Friedmann-Lemaitre-Robertson-Walker (FLRW) universe with metric $$\label{line1} ds_*^2=-dt_*^2+a_*^2(t_*)dl_*^2,$$ where $a_*(t_*)$ and $dl_*$ are the scale factor and Euclidean line element in the Einstein frame. The Friedmann equation derived from the action (\[action\]) is $$H_*^2=\left(\frac{1}{a_*}\frac{da_*}{dt_*}\right)^2 =\frac{({d\phi}/{dt_*})^2}{3}+\frac{\kappa_*}{3}(\rho_m^*+\rho_{awe}^*)$$ where $\rho_m^*$ and $\rho_{awe}^*$ are the energy densities of normally and abnormally weighting matter respectively.
Assuming further that both the matter sector and AWE sector are constituted by a pressureless fluid, one can obtain the evolution of $\rho_m^*$ and $\rho_{awe}^*$, $$\rho_{m,awe}^*=A_{m,awe}(\phi)\frac{C_{m,awe}}{a_*^3},$$ where $C_{m,awe}$ are two constants to be specified. Introducing a new variable $\lambda=\ln (a_*/a_*^i)$ where $a_*^i$ is a constant, the Klein-Gordon equation ruling the scalar field dynamics reduces to $$\label{KG1} \frac{2\phi''}{3-\phi'^2}+\phi'+\frac{R_c\alpha_m(\phi)A_{m}(\phi)+\alpha_{awe}(\phi)A_{awe}(\phi)}{R_cA_{m}(\phi)+A_{awe}(\phi)}=0,$$ where a prime denotes a derivative with respect to $\lambda$, the parameter $R_c= C_m/C_{awe}$ and the functions $\alpha_{m,awe}=d\ln(A_{m,awe}(\phi))/d\phi$. However, the Einstein frame, in which the physical degrees of freedom are separated, does not correspond to a physically observable frame. Cosmology and more generally everyday physics are built upon observations based on “normal” matter which couples universally to a unique metric $g_{\mu\nu}$ and according to the AWE action (\[action\]), $g_{\mu\nu}$ defines the observational frame through the following conformal transformation: $$g_{\mu\nu}=A_m^2(\phi)g_{\mu\nu}^*.$$ Therefore, the line element of the FLRW metric (\[line1\]) in the observational frame can be written as $$ds^2=-dt^2+a^2(t)dl^2,$$ where the scale factor $a(t)$ and the element of cosmic time read $$a(t)=A_m(\phi)a_*(t_*)=e^{\lambda}A_m(\phi)a^i_*,\;\;dt=A_m(\phi)dt_*.$$ Therefore, the Friedmann equation in the observational frame reads $$\begin{aligned} \label{H21} H^2\equiv\left(\frac{\dot{a}}{a}\right)^2 =\frac{8\pi G_*}{3}\frac{C_m}{a^3}\frac{A_m^2(\phi)\left(1+\frac{A_{awe}(\phi)}{A_m(\phi)}R_c^{-1}\right)}{\left(1-\alpha_m(\phi)\frac{d\phi}{dN}\right)^2-\frac{1}{3}\left(\frac{d\phi}{dN}\right)^2},\end{aligned}$$ where the overdot denotes the derivative with respect to the time $t$ and $N\equiv \ln(a/a^i)$ ( $a^i$ is the value of the scale factor when $t=t_i$).
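The reduced Klein-Gordon equation (\[KG1\]) is straightforward to integrate numerically. The sketch below assumes simple exponential couplings $A_{m,awe}(\phi)=e^{k_{m,awe}\phi}$ (so that $\alpha_{m,awe}=k_{m,awe}$) with illustrative constants; these choices are ours for demonstration only, not those of Ref.[@AWE07].

```python
import numpy as np

# illustrative constants (NOT the couplings of the AWE papers)
K_M, K_AWE, R_C = 0.1, -0.5, 1.0

def rhs(phi, dphi):
    """phi'' from Eq. (KG1): 2 phi''/(3 - phi'^2) + phi' + S(phi) = 0,
    assuming A(phi) = exp(k*phi) so alpha = k."""
    A_m, A_awe = np.exp(K_M * phi), np.exp(K_AWE * phi)
    S = (R_C * K_M * A_m + K_AWE * A_awe) / (R_C * A_m + A_awe)
    return -0.5 * (3.0 - dphi**2) * (dphi + S)

# integrate in lambda = ln(a*/a*_i) with a fixed-step RK4 scheme
phi, dphi, h = 0.0, 0.0, 1e-3
for _ in range(5000):                       # lambda from 0 to 5
    k1 = (dphi, rhs(phi, dphi))
    k2 = (dphi + 0.5*h*k1[1], rhs(phi + 0.5*h*k1[0], dphi + 0.5*h*k1[1]))
    k3 = (dphi + 0.5*h*k2[1], rhs(phi + 0.5*h*k2[0], dphi + 0.5*h*k2[1]))
    k4 = (dphi + h*k3[1], rhs(phi + h*k3[0], dphi + h*k3[1]))
    phi += h/6.0 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    dphi += h/6.0 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
```

Note that the $(3 - \phi'^2)$ factor in Eq. (\[KG1\]) bounds the field velocity, $|\phi'| < \sqrt{3}$, which the integrated trajectory respects.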
Further, the Friedmann equation (\[H21\]) can be rewritten as $${H^2}=
--- abstract: 'The zero-temperature Glauber dynamics of the random-field Ising model describes various ubiquitous phenomena such as avalanches, hysteresis, and related critical phenomena. Here, for a model on a random graph with a special initial condition, we derive exactly an evolution equation for an order parameter. Through a bifurcation analysis of the obtained equation, we reveal a new class of cooperative slow dynamics with the determination of critical exponents.' author: - Hiroki Ohta - 'Shin-ichi Sasa' title: 'A universal form of slow dynamics in zero-temperature random-field Ising model' --- [Department of Pure and Applied Sciences, University of Tokyo, 3-8-1 Komaba Meguro-ku, Tokyo 153-8902, Japan]{} Slow dynamical behaviors caused by cooperative phenomena are observed in various many-body systems. In addition to well-studied examples such as critical slowing down [@H-H], phase ordering kinetics [@Bray], and slow relaxation in glassy systems [@Cavagna], seemingly different phenomena from these examples have also been discovered successively. In order to judge whether or not an observed phenomenon is qualitatively new, one needs to determine a universality class including the phenomenon. In this context, it is significant to develop a theoretical method for classifying slow dynamics. Here, let us recall a standard procedure for classifying equilibrium critical phenomena. First, for an order parameter $m$ of a mean-field model, a qualitative change in the solutions of a self-consistent equation $m={\cal F}(m)$ is investigated; then, the differences between the results of the mean-field model and finite-dimensional systems are studied by, for example, a renormalization group method. On the basis of this success, an analysis of the dynamics of a typical mean-field model is expected to be a first step toward determining a universality class of slow dynamics. 
As an example, in the fully connected Ising model with Glauber dynamics, an evolution equation for the magnetization, $\partial_t m={\cal G}(m)$, can be derived exactly. The analysis of this equation reveals that the critical behavior is described by a pitchfork bifurcation in the dynamical system theory [@Gucken]. As another example, an evolution equation for a time-correlation function and a response function was derived exactly for the fully connected spherical $p$-spin glass model [@CHZ; @Kurchan]. The obtained evolution equation represents one universality class related to dynamical glass transitions. The main purpose of this Letter is to present a non-trivial class of slow dynamics by exactly deriving an evolution equation for an order parameter. The model that we consider is the zero-temperature Glauber dynamics of a random-field Ising model, which is a simple model for describing various ubiquitous phenomena such as avalanches, hysteresis, and related critical phenomena [@Inomata; @Sethna; @Vives; @Durin; @Shin1]. As a simple, but still non-trivial case, we analyze the model on a random graph [@fn:global], which is regarded as one type of Bethe lattices [@MP]. Thus far, several interesting results on the quasi-static properties of the model on Bethe lattices have been obtained [@Duxbury1; @Illa; @Dhar1; @Colaiori1; @Alava; @Rosinberg0; @Rosinberg1]. In this Letter, by performing the bifurcation analysis of the derived equation, we determine the critical exponents characterizing singular behaviors of dynamical processes. #### Model: {#model .unnumbered} Let $G(c,N)$ be a regular random graph consisting of $N$ sites, where each site is connected to $c$ sites chosen randomly. 
For a spin variable $\sigma_i= \pm 1$ and a random field $h_i'$ on the graph $G(c,N)$, the random-field Ising model is defined by the Hamiltonian $$H=-\frac{1}{2}\sum_{i=1}^N\sum_{j\in B_i} \sigma_{i}\sigma_j-\sum_{i=1}^N (h+{h}_i')\sigma_i, \label{model}$$ where $B_i$ represents a set of sites connected to the $i$ site and $h$ is a uniform external field. The random field ${h}_i'$ obeys a Gaussian distribution $D_R({h}_i')$ with variance $R$. We collectively denote $(\sigma_i)_{i=1}^N$ and $(h_i)_{i=1}^N$ by ${{\boldsymbol \sigma}}$ and ${{\boldsymbol h}}$, respectively. Let $u_i$ be the number of upward spins in $B_i$. Then, for a given configuration, we express the energy increment for the spin flip at $i$ site as $-2\sigma_i\Delta_i$, where $$\Delta_i\equiv c-2u_i-(h+{h}_i'). \label{Delta:def}$$ The zero-temperature Glauber dynamics is defined as a stochastic process in the limit that the temperature tends to zero for a standard Glauber dynamics with a finite temperature. Specifically, we study a case in which the initial condition is given by $\sigma_i=-1$ for any $i$. In this case, once $\sigma_i$ becomes positive, it never returns. Thus, the time evolution rule is expressed by the following simple rule: if $\sigma_i=-1$ and $u_i$ satisfies $\Delta_i \le 0 $, the spin flips at the rate of $1/\tau_0$; otherwise, the transition is forbidden. Note that $\sigma_i(t)=-1$ when $\Delta_i(t) >0$, and $\Delta_i(t)$ is a non-increasing function of $t$ in each sample [@Dhar1]. In the argument below, a probability induced by the probability measure for the stochastic time evolution for a given realization ${{\boldsymbol h}}$ is denoted by $P^{{{\boldsymbol h}}}$, and the average of a quantity $X$ over ${{\boldsymbol h}}$ is represented by $\overline{X}$. #### Order parameter equation: {#order-parameter-equation .unnumbered} We first note that the local structure of a random graph is the same as a Cayley tree. 
In contrast to the case of Cayley trees, a random graph is statistically homogeneous, which simplifies the theoretical analysis. Furthermore, when analyzing the model on a random graph in the limit $N \to \infty$, we may ignore effects of loops. Even with this assumption, the theoretical analysis of dynamical behaviors is not straightforward, because $\sigma_j$ and $\sigma_k$, $j, k \in B_i$, are generally correlated. We overcome this difficulty by the following three-step approach. The first step is to consider a modified system in which $\sigma_i=-1$ is fixed irrespective of the spin configurations. We denote a probability in this modified system by $Q^{{{\boldsymbol h}}}$. We then define $q(t)\equiv\overline{Q^{{{\boldsymbol h}}}(\sigma_j(t)=1)}$ for $j \in B_i$, where $q(t)$ is independent of $i$ and $j$ owing to the statistical homogeneity of the random graph. The second step is to confirm the fact that any configurations with $\Delta_i(t)>0$ in the original system are realized at time $t$ in the modified system as well, provided that the random field and the history of a process are identical for the two systems. This fact leads to a non-trivial claim that $P^{{{\boldsymbol h}}}(\Delta_i(t) > 0)$ is equal to $Q^{{{\boldsymbol h}}}(\Delta_i(t) > 0)$. By utilizing this relation, one may express $P^{{{\boldsymbol h}}}(\Delta_i(t) > 0)$ in terms of $Q^{{{\boldsymbol h}}}(\sigma_j(t)=1)$. The average of this expression over ${{\boldsymbol h}}$, with the definition $\rho(t)\equiv\overline{P^{{{\boldsymbol h}}}(\Delta_i(t) > 0)}$, leads to $$\rho(t) = \sum_{u=0}^{c} \left( \begin{array}{c} c \\ u \end{array} \right) q(t)^{u}(1-q(t))^{c-u} \int_{-\infty}^{c-2u-h} dh' D_R(h'), \label{Gq}$$ where we have employed the statistical independence of $\sigma_j$ and $\sigma_k$ with $j,k \in B_i$ in the modified system. 
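Equation (\[Gq\]) can be evaluated directly. The following sketch (our own illustration, with arbitrary values of $c$, $h$ and $R$) implements the binomial sum together with the Gaussian cumulative distribution:

```python
import math

def rho_from_q(q, c, h, R):
    """Evaluate Eq. (Gq): the probability that Delta_i > 0, given the
    probability q that a neighbouring spin (with the centre spin held
    down) is up, for coordination number c, uniform field h and
    Gaussian random-field width R."""
    total = 0.0
    for u in range(c + 1):
        # binomial weight for u up neighbours out of c
        binom = math.comb(c, u) * q**u * (1.0 - q)**(c - u)
        # P(h' < c - 2u - h) for the Gaussian distribution D_R
        cdf = 0.5 * (1.0 + math.erf((c - 2*u - h) / (R * math.sqrt(2.0))))
        total += binom * cdf
    return total
```

As expected from Eq. (\[Delta:def\]), $\rho$ decreases monotonically in $q$: more up neighbours lower $\Delta_i$ and make the stable condition $\Delta_i > 0$ rarer.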
The expression (\[Gq\]) implies that $q(t)$ defined in the modified system has a one-to-one correspondence with the quantity $\rho(t)$ defined in the original system. The third step is to define $p(t)\equiv \overline{Q^{{{\boldsymbol h}}}(\sigma_j=-1, \Delta_j \le 0)}$ and $r(t)\equiv \overline{Q^{{{\boldsymbol h}}}(\Delta_j(t) > 0)}$ for $j \in B_i$. Then, by a procedure similar to the derivation of (\[Gq\]), we find that $dq(t)/dt$ is equal to $p(t)/\tau_0$. $r(t)$ is also expressed as a function of $q(t)$ because $r(t)$ is equal to a probability of $\Delta_j(t) > 0
--- abstract: 'Short electron pulses are demonstrated to trigger and control magnetic excitations, even at low electron current densities. We show that the tangential magnetic field surrounding a picosecond electron pulse can imprint topologically protected magnetic textures such as skyrmions in a sample with a residual Dzyaloshinskii-Moriya spin-orbit coupling. Characteristics of the created excitations such as the topological charge can be steered via the duration and the strength of the electron pulses. The study points to a possible way for a spatio-temporally controlled generation of skyrmionic excitations.' author: - 'A. F. Schäffer$^1$, H. A. Dürr$^2$, J. Berakdar$^1$' title: Ultrafast imprinting of topologically protected magnetic textures via pulsed electrons --- Tremendous progress has been made towards the realization of spatiotemporally controlled electron sources for probing a material's local structural, electronic and magnetic dynamics [@1; @2; @3; @4]. Working schemes rely on the electron emission from a laser-irradiated nanoscale apex [@5; @6; @7; @8; @9; @10; @11; @12; @13; @14; @15; @16; @17; @18; @19; @20; @21; @22] with the electron pulse duration being controllable with the laser pulse duration. The laser intensity dictates the electron number in the bunch. Electron pulse acceleration and control are achievable with intense THz fields [@23; @24; @25]. Here we explore the potential of very fast, relativistic electron bunches for a possible control of the magnetic dynamics in a thin film which is traversed by the electrons. Our focus is on the sample spin dynamics triggered by the electric and magnetic fields associated with the electron bunch [@26].
In fact, a pioneering experiment [@27] explored the ultimate speed limit for precessional magnetic dynamics driven by the magnetic field ${\mbox{\boldmath$\mathrm{B}$}}({\mbox{\boldmath$\mathrm{r}$}},t)$ of short relativistic electron pulses (with a duration of $\delta = 2.3\,$ps) passing through a 14nm thin film of granular CoCrPt ferromagnetic material with grain sizes of $20.6\pm 4$nm. The main experimental results are shown along with our simulations in Fig.\[fig\_comp\]. Prior to the electron pulse, the sample was magnetized homogeneously in the $z$ direction. The pulse-induced ring pattern of the magnetic domains pointing either up or down (with respect to the easy direction of the magnetic films) is well captured by our micromagnetic simulations and can be interpreted by the analytical model enclosed in the supplementary materials. As pointed out in [@27], the critical precessional angle $\phi\geq\pi/2$ is determined by the local strength of the magnetic field and indicates the achieved angular velocity $\omega$. The pulse duration $\delta$ plays a crucial role [@28]. As discussed in Ref.[@28], an appropriate sequence of ps pulses allows for an optimal control scheme achieving a ballistic magnetic switching, even in the presence of high thermal fluctuations. Longer pulses might drive the system back to the initial state [@28]. So, the critical precessional angle and $\delta$ are the two key parameters [@27] for the established final precessional angle $\phi=\omega\delta$. Note that the demagnetization fields are also relevant, as inferred from Fig. \[fig\_comp\], but they do not change the main picture (further details are in the supplementary materials). ![Comparison between experimental (a)[@27], and numerical results (b), (c). Both numerical simulations and the experimental data cover an area of $150\times 150\,\mu$m$^2$. In contrast to panel (b), in (c) the demagnetizing fields are included in simulations.
The grey shading signals the magnetization’s $z$-component with white color meaning $m_z=+\hat{e}_z$ and black $m_z=-\hat{e}_z$. The electrons in the beam impinging normal to the sample have an energy of 28GeV. The pulse’s time-envelope is taken as a Gaussian with a pulse duration of $\sigma_t = 2.3\,$ps, which translates to a number of $n_e\approx 10^{10}$ electrons and an equivalent time-dependence of the generated Oersted field whose radial $\rho$ dependence away from the beam axis is $B(\rho)=54.7\,$T$\mu$m$/(\rho+\epsilon)$ (at the peak electron bunch intensity). The cut-off distance $\epsilon=40\,$nm is included in order to avoid a divergent behavior at the origin and can be understood as a rough approximation of the beam width. []{data-label="fig_comp"}](fig1.eps){width=".6\linewidth"} Having substantiated our methods against experiment, we turn to the main focus of our study, namely the generation of topologically protected magnetic excitations such as skyrmions via the electron pulses. We consider samples exhibiting Dzyaloshinskii-Moriya (DM) spin-orbit coupling to be appropriate. A recent work [@29] evidences that ultrathin nano discs of materials such as Co$_{70.5}$Fe$_{4.5}$Si$_{15}$B$_{10} $[@30] sandwiched between Pt and Ru/Ta are well suited for our purpose. The magnetization’s structure may nucleate spontaneously into skyrmionic configurations. We adapted the experimentally verified parameters for this sample and present here the result for the magnetic dynamics triggered by short electron beam pulses. Taking a nano disc of a variable size, the ground state with a topological number $|N|=1$ is realized after propagating an initially homogeneous magnetization in $\pm z$ direction according to the Landau-Lifshitz-Gilbert equation (LLG) including DM interactions.
The two possible ground states, depending on the initial magnetization’s direction, are shown in \[fig\_groundstate\] along with the material’s parameters.\ Our main focus is on how to efficiently and swiftly create skyrmions, an issue of relevance when it comes to practical applications. Previous theoretical predictions (e.g. [@31]) utilize a spin-polarized current for the skyrmion generation. Large current densities and a finite spin polarization of injected currents are needed, however. Thus, it is of interest to investigate the creation and annihilation of skyrmions with current pulses similar to those discussed above, using the surrounding magnetic field. Of interest is the skyrmion generation and modification via a nano-focussed relativistic electron pulse. While currently such pulses can be generated with micron size beam dimensions [@ued_ref], future sources are expected to reach focus sizes down to the few nm range [@32]. In principle, the possibility of beam damage occurring in the beam’s focus as in the case of the experiment in ref.[@27] is present. However, ongoing experiments with relativistic electron beams [@ued_ref] indicate that the use of ultra thin freestanding films may alleviate damage concerns.\ Topologically protected magnetic configurations, like magnetic skyrmions, are well defined quasiparticles. They can be characterized mathematically by the topological or winding number $N=\frac{1}{4\pi}\int {\mbox{\boldmath$\mathrm{m}$}}\cdot\left(\frac{\partial {\mbox{\boldmath$\mathrm{m}$}}}{\partial x}\times\frac{\partial {\mbox{\boldmath$\mathrm{m}$}}}{\partial y}\right){\mathrm{d}}x{\mathrm{d}}y$[@33], which simply counts how often the unit vector of the magnetization wraps the unit sphere when integrated over the two-dimensional sample. Therefore, skyrmions are typically a quasiparticle in thin (mono)layers.
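On a discretized magnetization field, this winding number can be estimated with finite differences; the grid spacing cancels between the derivatives and the area element, so no spacing appears explicitly. The sketch below is our own illustration (the clamped linear skyrmion profile is an arbitrary test configuration, not a relaxed LLG state) and recovers $|N|\approx 1$ for a Néel-type texture.

```python
import numpy as np

def winding_number(m):
    """Discretized topological charge N = (1/4 pi) * integral of
    m . (dm/dx x dm/dy) for a unit-vector field m of shape (nx, ny, 3),
    using central differences with periodic wrap (a sketch; the texture
    is assumed uniform at the boundary)."""
    dmdx = 0.5 * (np.roll(m, -1, axis=0) - np.roll(m, 1, axis=0))
    dmdy = 0.5 * (np.roll(m, -1, axis=1) - np.roll(m, 1, axis=1))
    density = np.einsum('ijk,ijk->ij', m, np.cross(dmdx, dmdy))
    return density.sum() / (4.0 * np.pi)

# demo: a Neel skyrmion with theta rising linearly from 0 (core up)
# to pi (background down) over a core radius w -- illustrative only
L, n, w = 1.0, 128, 0.35
xs = np.linspace(-L, L, n)
X, Y = np.meshgrid(xs, xs, indexing='ij')
r, phi = np.hypot(X, Y), np.arctan2(Y, X)
theta = np.pi * np.clip(r / w, 0.0, 1.0)
m = np.stack([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)], axis=-1)
N = winding_number(m)
```

Discretization error at the profile kink keeps the computed charge slightly away from the exact integer, but it remains within a few percent on this grid.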
The topological number adopts integer values indicating the magnetic configuration to be skyrmionic ($N=\pm 1$) or skyrmion multiplexes ($|N| >1$). If the topological number is not an integer, the topological protection is lifted and the magnetic texture is unstable upon small perturbations. The topological stability of skyrmionic states stems from the necessity of flipping at least one single magnetic moment by $180^\circ$ to overcome the barrier and transfer the state into a “trivial” state, like a single domain or vortex state. In the following, we will attempt to overcome this energy barrier with the previous methods so that the magnetization will be converted into a state with a different topological invariant. The spatial structure of the magnetic field, which curls around the beam’s center, is advantageous here, as it provides a good point of action for manipulating topologically protected configurations.\ ![Magnetic ground states for a nano disc with a diameter of 300nm and a thickness of 1.5nm. The material parameters are $M\ind{sat}=450\times 10^3\,$A/m, $A\ind{ex}=10\,$pJ/m, $\alpha=0.01$, $K_u=1.2\times 10^5\,$J/m$^3$ (out-of plane anisotropy), and the interfacial DMI-constant $D\ind{ind}=0.31\times 10^{-3}\,$mJ/m$^2$. (a) corresponds to $N=1$, whereas (b) possesses $N=-1$; both skyrmions are of the Néel type. Bottom panel illustrates pictorially the influence of the magnetic field associated with the electron bunch. The cones correspond to the initial magnetic configuration as in (a) and (b), whereas the golden arrows show the induced magnetic field. The resulting torque points perpendicular to the magnetization, affecting the magnetic configuration accordingly. []{data-label
--- author: - 'A. Luminari, F. Tombesi, E. Piconcelli, F. Nicastro, K. Fukumura, D. Kazanas, F. Fiore, L. Zappacosta' date: 'Received 27/09/19; accepted 25/11/19' title: 'On the importance of special relativistic effects in modelling ultra-fast outflows' --- Introduction ============ Outflows are ubiquitously observed from a variety of astrophysical sources and their impact on the surrounding environment depends on their energetics. In particular, mildly relativistic and ionised outflows from the innermost regions of Active Galactic Nuclei (AGNs) are often seen in UV and X-ray absorption spectra (e.g., [@Chartas; @T10; @R11; @B19]) and may carry sufficient energy to regulate both the growth of the central super-massive black hole (SMBH) and the evolution of the surrounding host galaxy ([@C14; @F12; @T15; @Z12]). This critically depends on the kinetic power of these outflows, which in turn depends on both their velocity and mass flux ([@Dimatteo; @KP15]). The line-of-sight velocity is typically inferred via the blue-shift of the absorption features imprinted by the outflowing material onto the continuum emission of the central source, compared to the systemic redshift of the host galaxy. The mass outflow rate $\Dot{M}_{out}$, instead, for a given covering factor and distance of the outflow, is estimated by measuring the optical depth of the absorption features. The observed optical depth is considered a proxy of the outflow column density $N_H$ along the line of sight, independently of the outflow velocity $v_{out}$. In this work we show that this assumption no longer holds for outflows escaping the central continuum source of radiation with velocities corresponding to a fraction of the speed of light $c$ (e.g. $v_{out} \buildrel > \over \sim 0.05 c$). For such outflows, the observed (i.e.
apparent) optical depth of the spectral features produced by the absorbing material significantly underestimates the intrinsic $N_H$ and, consequently, the mass transfer rate of the outflows. Therefore, a velocity-dependent correction must be adopted to account for this effect in the estimate of $N_H$. This pure special-relativistic effect is universal (i.e. it applies to any fast-moving line-of-sight outflow), and affects not only our estimate of the kinetic power of the outflow but also the ability of the radiative source to effectively accelerate the outflow outwards. For AGN outflows, this may have deep implications for the feedback mechanism and the co-evolution with the host galaxy ([@KH13]). The paper is organised as follows. In Sect. \[physics\] we review the special-relativistic treatment of a fast-moving gas embedded in a radiation field. In Section \[prescription\] we show how to incorporate such treatment in modelling outflow spectra. In Section \[conclusions\] we discuss the results and their implications for estimating $\dot{M}_{out}, \dot{E}_{out}$, and we summarise in Sect. \[sect5\]. Special Relativistic Transformation in the Outflow Reference Frame {#physics} ================================================================== According to special relativity, the luminosity $L'$ seen by a clump of gas moving at relativistic speed is reduced by a factor $\Psi$ with respect to a static gas, as follows: $$L'=L\cdot \Psi \label{main}$$ where $L$ is the luminosity seen by an observer at rest and $\Psi$, i.e. the de-boosting factor, is defined as: $$\Psi\equiv\psi^4= \frac{1}{\gamma^4 (1-\beta \cos\theta)^4} \label{main_long}$$ where $\gamma \equiv \frac{1}{\sqrt{1-\beta^2}}$, $\beta=v_{out}/c$, $v_{out}$ is the gas velocity and $\theta$ is the angle between the incident luminosity $L$ and the direction of motion of the gas. 
Figure \[psi\] shows $\Psi$ as a function of $v_{out}$ for $\theta=180\ deg$, corresponding to a radial outward motion of the gas. The deboosting factor is due to the combination of the space-time dilatation in the gas reference frame, $K'$, and the relativistic Doppler shift of the received radiation ([@RL]). Using Eq. \[main\], the radiative intensity (i.e., the luminosity per solid angle) $\frac{dL'}{d\Omega'}$ received by the outflowing gas in $K'$ can be written as a function of the intensity in the rest frame $K$, as follows: $$\frac{dL'}{d\Omega'}= \Psi \frac{dL}{d\Omega} =\psi dE\cdot \psi^3\frac{1}{dt d\Omega} \label{expl}$$ where $dE, dt, d\Omega$ correspond to the energy, time and solid angle intervals in $K$. Specifically, in Eq. \[expl\], $\psi dE$ is the energy transformation term, which represents the Doppler shift of the wavelengths in $K'$. The second term, $\psi^3\frac{1}{dt d\Omega}$, indicates a reduction of the intensity due to the space-time dilatation in $K'$. Notably, Eqs. \[main\] and \[expl\] also describe the emission from gas moving at relativistic velocity, as usually observed in high velocity systems such as jets in Blazars and GRBs ([@Urry; @G93]). When radiation is emitted along the direction of motion, i.e. $\theta\approx0\ deg$, $\Psi$ increases with increasing $v_{out}$, while $\Psi\leq1$ when it is emitted perpendicularly or backward ($\theta=90\ deg$ and $180\ deg$, respectively). The overall result is to concentrate the emitted radiation into a narrow cone along the direction of motion, an effect known as “relativistic beaming” ([@RL; @EHT]). Another way of describing the reduction of the luminosity seen by the outflowing gas is the following. In $K'$, the luminosity source appears to be moving away with velocity $v_{out}$ and $\theta=180\ deg$ (for a pure radial motion), which results in a de-boosting of the received luminosity due to the relativistic beaming, according to Eq. \[main\]. 
![Deboosting factor $\Psi$ in the gas reference frame $K'$ as a function of $v_{out}$, assuming $\theta=180\ deg$. For speeds lower than 0.1 times the speed of light, the radiation intercepted by the outflow and by the (rest-frame) observer at infinity are virtually the same. For higher speeds, the fraction of intercepted radiation drops dramatically due to special relativistic effects. []{data-label="psi"}](psi_long.pdf){width="9.5cm"} Modelling Outflow Absorption Spectra Including Special Relativistic Effects {#prescription} =========================================================================== We propose to include these special relativistic corrections in modelling spectral absorption features from the outflowing gas, according to the following procedure (see Appendix \[appendix1\] for a detailed description). - [The first step is to transform the incident spectrum $S_I(K)$ from $K$ to $K'$, obtaining $S_I(K')$, according to Eq. \[expl\].]{} - [$S_I(K')$ is then given as input to the radiative transfer code to calculate the transmitted spectrum in the outflowing gas frame $K'$, $S_T(K')$.]{} - [Finally, the relativistic-corrected transmitted spectrum in $K$, i.e. $S_{out}(K)$, is given by: $$S_{out}(K)=S_I(K)\cdot \Delta + S_T(K')\cdot \psi^{-1} \label{sout}$$ where $\Delta\equiv 1-\psi^3$. The term $S_T(K')\cdot \psi^{-1}$ indicates the spectrum $S_T(K')$ in Doppler-shifted (from $K'$ to $K$) frequencies.]{} We note that in the low-velocity limit $v_{out}\ll c$, $\Psi \approx 1, \Delta\approx0$ and the resulting spectrum is $S_{out}(K)=S_T(K')\cdot \psi^{-1}$, as it is usually calculated. In the opposite high-velocity regime $v_{out}\rightarrow c$, $\Psi\approx 0$ and the outflowing gas does not interact with the ionising radiation. In fact, $S_I(K')$ and $S_T(K')$ have null intensity (see Eq. \[expl\]), $\Delta\approx 1$ and $S_{out}(K) \approx S_I(K)$. 
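For orientation, the de-boosting factor of Eq. \[main\_long\] and the two limits just discussed are easy to evaluate numerically. A minimal sketch (function names are ours, not part of the paper):

```python
import math

def psi(beta, theta_deg=180.0):
    """psi = 1 / (gamma * (1 - beta*cos(theta))); the de-boosting
    factor of Eq. (main_long) is then Psi = psi**4."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return 1.0 / (gamma * (1.0 - beta * math.cos(math.radians(theta_deg))))

def Psi(beta, theta_deg=180.0):
    """De-boosting factor; theta = 180 deg is a radially receding outflow."""
    return psi(beta, theta_deg) ** 4

# For a radial outflow (theta = 180 deg), Psi reduces algebraically to
# ((1 - beta) / (1 + beta))**2; in the low-velocity limit Psi -> 1 and
# Delta = 1 - psi**3 -> 0, while for theta ~ 0 deg Psi > 1 (beaming).
```

For $\beta = 0.05$ this already gives $\Psi \simeq 0.82$, illustrating why the correction matters even for moderately fast outflows.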
We use the radiative transfer code *XSTAR*, v2.5 ([@xstar]) to calculate $S_{out}(K)$, which is the spectrum as seen by a rest-frame observer in $K$. Figure \[abs\_spectra\] shows the X-ray spectrum in the range $6-16\ keV$ of a power-law continuum source with $\Gamma=2$ and an ionising luminosity $L_{ion}=5\cdot 10^{46}\ erg\ s^{-1}$ in the 1-1000 Ry (1 Ry$= 13.6\ eV$) energy interval, modified by an absorber with $v_{
--- abstract: 'We carry on our study of the connection between two shape optimization problems with spectral cost. On the one hand, we consider the optimal design problem for the survival threshold of a population living in a heterogeneous habitat $\Omega$; this problem arises when searching for the optimal shape and location of a shelter zone in order to prevent extinction of the species. On the other hand, we deal with the spectral drop problem, which consists in minimizing a mixed Dirichlet-Neumann eigenvalue in a box $\Omega$. In a previous paper [@mapeve] we proved that the latter can be obtained as a singular perturbation of the former, when the region outside the refuge is more and more hostile. In this paper we sharpen our analysis in case $\Omega$ is a planar polygon, providing quantitative estimates of the convergence of the optimal levels, as well as of the involved eigenvalues.' author: - 'Dario Mazzoleni, Benedetta Pellacci and Gianmaria Verzini' title: Quantitative analysis of a singularly perturbed shape optimization problem in a polygon --- [**AMS-Subject Classification**]{}. [49R05, 49Q10; 92D25, 35P15, 47A75.]{}\ [**Keywords**]{}. [Singular limits, survival threshold, mixed Neumann-Dirichlet boundary conditions, $\alpha$-symmetrization, isoperimetric profile.]{} Introduction {#sec:intro} ============ In this note we investigate some relations between the two following shape optimization problems, settled in a box $\Omega\subset{\mathbb{R}}^N$, that is, a bounded, Lipschitz domain (open and connected). \[def:lambda\] Let $0<\delta<|\Omega|$ and $\beta>\dfrac{\delta}{|\Omega|-\delta}$. 
For any measurable $D\subset\Omega$ such that $|D| = \delta$, we define the *weighted eigenvalue* $$\label{eq:def_lambda_beta_D} \lambda(\beta,D):=\min \left\{ \dfrac{\int_\Omega |\nabla u|^2\,dx}{\int_D u^2\,dx - \beta \int_{\Omega\setminus D} u^2\,dx} : u\in H^1(\Omega),\ \int_D u^2\,dx>\beta \int_{\Omega\setminus D} u^2\,dx\right\},$$ and the *optimal design problem for the survival threshold as* $$\label{eq:def_od} \operatorname{\Lambda}(\beta,\delta)=\min\Big\{\lambda(\beta,D):D\subset {\Omega},\ |D|=\delta\Big\}.$$ Let $0<\delta<|{\Omega}|$. Introducing the space $H^1_0(D,{\Omega}):=\left\{u\in H^1({\Omega}):u=0\text{ q.e. on }{\Omega}\setminus D\right\}$ (where q.e. stands for quasi-everywhere, i.e. up to sets of zero capacity), we can define, for any quasi-open $D\subset\Omega$ such that $|D| = \delta$, the *mixed Dirichlet-Neumann eigenvalue* as $$\label{eq:def_mu_D} \mu(D,{\Omega}):=\min{\left\{\frac{\int_{{\Omega}}|\nabla u|^2\,dx}{\int_{\Omega}u^2\,dx}:u\in H^1_0(D,{\Omega})\setminus\{0\}\right\}},$$ and the *spectral drop problem* as $$\label{eq:def_sd} \operatorname{M}(\delta)=\min{\Big\{\mu(D,\Omega):D\subset {\Omega},\;\mbox{quasi-open, }|D|=\delta\Big\}}.$$ The two problems above have been the subject of many investigations in the literature. The interest in the study of the eigenvalue $\lambda(\beta,D)$ goes back to the analysis of the optimization of the survival threshold of a species living in a heterogeneous habitat $\Omega$, with the boundary $\partial\Omega$ acting as a reflecting barrier. As explained by Cantrell and Cosner in a series of papers [@MR1014659; @MR1105497; @MR2191264] (see also [@ly; @llnp; @mapeve]), the heterogeneity of $\Omega$ causes the intrinsic growth rate of the population, represented by an $L^{\infty}(\Omega)$ function $m(x)$, to be positive in favourable sites and negative in the hostile ones. 
Then, if $m^{+}\not\equiv 0$ and $\int m<0$, it turns out that the positive principal eigenvalue $\lambda=\lambda(m)$ of the problem $$\begin{cases} -\Delta u = \lambda m u &\text{in }\Omega\\ \partial_\nu u = 0 &\text{on }\partial\Omega, \end{cases}$$ i.e. $$\lambda(m)=\min\left\{\frac{{\int_{\Omega}}|\nabla u|^{2}dx}{{\int_{\Omega}}mu^{2}dx}: u\in H^{1}(\Omega), {\int_{\Omega}}mu^{2}dx>0\right\},$$ acts as a survival threshold, namely the smaller $\lambda(m)$ is, the greater the chances of survival become. Moreover, by [@ly], the minimum of $\lambda(m)$ w.r.t. $m$ varying in a suitable class is achieved when $m$ is of bang-bang type, i.e. $m={\mathds{1}_{D}} -\beta {\mathds{1}_{\Omega\setminus D}}$, where $D\subset \Omega$ has fixed measure. As a consequence, one is naturally led to the shape optimization problem introduced in Definition \[def:lambda\]. On the other hand, the spectral drop problem has been introduced in [@buve] as a class of shape optimization problems where one minimizes the first eigenvalue $\mu=\mu(D,{\Omega})$ of the Laplace operator with homogeneous Dirichlet conditions on $\partial D\cap \Omega$ and homogeneous Neumann ones on $\partial D\cap \partial \Omega$: $$\begin{cases} -\Delta u = \mu u &\text{in }D\\ u = 0 &\text{on }\partial D\cap\Omega\\ \partial_\nu u = 0 &\text{on }\partial D\cap\partial\Omega. \end{cases}$$ In our paper [@mapeve], we analyzed the relations between the above problems, showing in particular that $\operatorname{M}(\delta)$ arises from $\operatorname{\Lambda}(\beta,\delta)$ in the singularly perturbed limit $\beta\to+\infty$, as stated in the following result. 
\[thm:convergence\] If $0<\delta<|\Omega|$, $\beta>\dfrac{\delta}{|\Omega|-\delta}$ and $\dfrac{\delta}{\beta}<{\varepsilon}< |\Omega|-\delta$ then $$\operatorname{M}(\delta+{\varepsilon})\left(1-\sqrt{\frac{\delta}{{\varepsilon}\beta}}\right)^{2}\leq \operatorname{\Lambda}(\beta,\delta)\leq \operatorname{M}(\delta).$$ As a consequence, for every $0<\delta<|\Omega|$, $$\lim_{\beta\to+\infty} \operatorname{\Lambda}(\beta,\delta) = \operatorname{M}(\delta).$$ Regarding this asymptotic result, let us also mention [@derek], where the relation between the above eigenvalue problems has recently been investigated for $D\subset\Omega$ fixed and regular. In [@mapeve], we used the theorem above to transfer information from the spectral drop problem to the optimal design one. In particular, we were able to contribute to the understanding of the shape of an optimal set $D^{*}$ for $\operatorname{\Lambda}(\beta,\delta)$. This topic includes several open questions, starting from the analysis performed in [@MR1105497] (see also [@llnp; @ly]) when $\Omega=(0,1)$: in this case it is shown that any optimal set $D^{*}$ is either $(0,\delta)$ or $(1-\delta,1)$. Analogous features in the higher-dimensional case are far from being well understood, but it has recently been proved in [@llnp] that when $\Omega$ is an $N$-dimensional rectangle, then $\partial D^{*}$ does not contain any portion of sphere, contradicting previous conjectures and numerical studies [@MR2214420; @haro; @MR2494032]. This result prevents the existence of optimal *spherical shapes*, namely optimal $D^{*}$ of the form $D^{*}=\Omega\cap B_{r(\delta)}(x_{0})$ for suitable $x_{0}$ and $r(\delta)$ such that $|D^{*}|=\delta$. On the other hand, we have shown that spherical shapes are optimal for $\operatorname{M}(\delta)$, for small $\delta$, when $\Omega$ is an $N$-dimensional polytope. This, together with Theorem \[thm:convergence\], yields the following result. 
\[thm:orthotope\] Let $\Omega \subset {\mathbb{R}}^N$ be a bounded, convex polytope. There exists $\bar\delta>0$ such that, for any $0<\delta< \bar\delta$: - $D^*$ is a minimizer of the spectral drop problem in $\Omega$, with volume constraint $\delta$, if and only if $D^*=B_{r(\delta)}(x_0)\cap\Omega$, where $x_0$ is a vertex of $\Omega$ with the smallest solid angle; - if
--- abstract: 'We analyze the electrostatic interactions between a single graphene layer and a SiO$_2$ substrate, and other materials which may exist in its environment. We find that the leading effects arise from the polar modes at the SiO$_2$ surface, and water molecules, which may form layers between the graphene sheet and the substrate. The strength of the interactions implies that graphene is pinned to the substrate at distances greater than a few lattice spacings. The implications for graphene nanoelectromechanical systems, and for the interaction between graphene and an STM tip are also considered.' author: - 'J. Sabio$^1$' - 'C. Seoánez$^1$' - 'S. Fratini$^{1,2}$' - 'F. Guinea$^1$' - 'A. H. Castro Neto$^3$' - 'F. Sols$^4$' bibliography: - 'vdW\_sub.bib' title: 'Electrostatic interactions between graphene layers and their environment.' --- Introduction. ============= Graphene is a versatile two-dimensional material whose singular electronic and mechanical properties show great potential for applications in nanoelectronics.[@Netal05b; @GN07; @NGPNG07] Since free-floating graphene is subject to crumpling,[@Nelson] the presence of a substrate, and the environment that comes with it, is fundamental for its stabilization. Hence, this environment will have a direct impact on the physical properties of graphene. Though the influence of the substrate and other elements of the surroundings has been taken into account in different ways in the literature, the exact role they play is not yet fully understood. On the one hand, the differences observed between samples grown on different substrates constitute an open issue. Most experiments have been carried out in graphene samples deposited over SiO$_2$, or grown over SiC substrates, [@Betal04] and a better understanding of how graphene properties are expected to change would be valuable. 
On the other hand, there is the question of characterizing all the effects that a particular environment has on the electronic and structural properties of graphene. Concerning electronic properties, it has been suggested that the low temperature mobility of the carriers is determined by scattering with charged impurities in the SiO$_2$ substrate,[@NM07; @AHGS07] and the effect of these charges can be significantly modified by the presence of water molecules.[@Setal07b] In fact, the polar modes of SiO$_2$ themselves give a good description of the finite-temperature corrections to the mobility.[@PR06; @FG07; @CJXIF07] Supporting this idea, recent experiments show that graphene suspended above the substrate has a higher mobility.[@Betal08; @XuDu08] Experiments also seem to reveal a very important role played by the substrate in the structural properties of graphene. STM measurements suggest that single layer graphene follows the corrugations of the SiO$_2$ substrate,[@ICCFW07; @Setal07] and experiments on graphene nanoelectromechanical systems (NEMS) indicate that the substrate induces significant stresses in few-layer graphene samples.[@Betal07] Moreover, the interaction between graphene and the substrate determines the frequency of the out-of-plane (flexural) vibrations, which can influence the transport properties at finite temperatures.[@KG07; @MNKSEJG07] ![a) Sketch of the system studied in the text. Interaction effects: b) Interaction with water molecules attached to hydroxyl radicals at the substrate. c) Interaction with polar modes at the surface of the substrate. d) van der Waals interaction between the graphene sheet and the metallic gate.[]{data-label="mechanisms"}](Fig1.eps){width="8.5cm"} In order to shed light on the influence of the environment on the graphene properties, we analyze the characteristic energies of interaction with the substrate and other materials present in the experimental setup. 
This allows us to evaluate the relative importance of the different interactions in the binding and mechanical response of the graphene layer. We also provide estimates of quantities such as equilibrium distances, typical lengthscales of corrugations, and frequencies of vibration, which can in principle be measured in current experimental setups. Throughout the paper we concentrate on SiO$_2$, though the results are easily generalized to other substrates. In particular, we consider: i) the van der Waals forces between graphene and the metallic gate below the SiO$_2$ substrate, ii) the electrostatic forces between the graphene layer and the polar modes of the substrate, iii) the electrostatic forces between graphene and charged impurities which may be present within the substrate and iv) the electrostatic forces between graphene and a water layer which may lie between graphene and the substrate.[@Setal07b] A sketch of the setup studied, and the different interaction mechanisms, is shown in Fig.\[mechanisms\]. We will also mention the possibility of weak chemical bonds between the graphene layer and molecules adjacent to it,[@LPP07; @Wetal07] although they will not be analyzed in detail. We do not consider a possible chemical modification of the graphene layer,[@Eetal07; @Wetal07b] which would change its transport properties. The general features of the electrostatic interactions to be studied are discussed in the next section. Then, we analyze, case by case, the different interactions between the graphene layer and the materials in its environment. Section IV discusses the main implications for the structure and dynamics of graphene, with applications to graphene NEMS and the interaction between graphene and an STM tip. The last section presents the main highlights of our work. Electrostatic interactions between a graphene layer and its environment. 
======================================================================== The electrons in the $\pi$ and $\pi^*$ bands of graphene are polarized by electromagnetic potentials arising from charges surrounding it. The van der Waals interactions between metallic systems, and metals and graphene, can be expressed as integrals over the dynamic polarizability of both systems. Those, in turn, can be written in terms of the zero point energy of the plasmons.[@TA83; @DWR06] The interaction between the graphene layer and a polarizable dielectric like SiO$_2$ is also given by an integral of the polarizability of the graphene layer times the polarizability of the dielectric. The latter can be approximated by the propagator of the polar modes, which play a similar role to the plasmons in a metal. The interaction between the graphene and static charges or electric dipoles depends only on the static polarizability.[@image] We will calculate these interactions using second order perturbation theory, assuming a perfect graphene sheet so that the momentum parallel to it is conserved. The corresponding diagrams are given in Fig. \[diagrams\]. All interactions depend, to this order, linearly on the polarizability of the graphene layer. In ordinary metallic systems, the Coulomb interaction is changed qualitatively when screening by the graphene electrons is taken into account through an RPA summation of diagrams. This is not the case for undoped graphene. There, the Random Phase Approximation leads to a finite correction $ \pi e^2 / 8 \hbar {v_{\rm F}}\sim 1$ to the dielectric constant, which does not change significantly the estimates obtained using second order perturbation theory. The response function of a graphene layer at half filling is:[@GGV94] $$\chi_G (\vec{q}, i\omega) = \frac{N_v N_s}{16 \hbar} \frac{q^2}{\sqrt{v_F^2 q^2 + \omega^2}}, \label{susc}$$ where $N_s = N_v = 2$ are the valley and spin degeneracies. 
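Equation \[susc\] is straightforward to evaluate numerically; the sketch below (our code, with units chosen so that $\hbar = v_F = 1$ unless passed explicitly) makes its scale-free behaviour explicit:

```python
import math

def chi_graphene(q, omega, hbar=1.0, vF=1.0, Ns=2, Nv=2):
    """Response function of undoped graphene, Eq. (susc):
    chi(q, i*omega) = (Nv*Ns / (16 hbar)) * q^2 / sqrt(vF^2 q^2 + omega^2)."""
    return (Nv * Ns / (16.0 * hbar)) * q ** 2 / math.sqrt((vF * q) ** 2 + omega ** 2)

# In the static limit (omega = 0) chi = q / 4 in these units, and
# chi(s*q, s*omega) = s * chi(q, omega): the scale invariance that,
# combined with the exp(-q z) cutoff, yields power-law energies in z.
```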
This expression is obtained assuming a linear dispersion around the $K$ and $K'$ points of the Brillouin Zone. It is valid up to a cutoff in momentum $\Lambda \sim a^{-1}$ and energy $\omega_c \sim {v_{\rm F}}\Lambda$, where $a$ is the lattice spacing. Beyond this scale, the susceptibility has a more complex form, and it is influenced by the trigonal warping of the bands. The component of the electrostatic potential induced by a system at distance $z$ from the graphene layer with momentum $\vec{q}$ is suppressed by a factor $e^{- | \vec{q} | z}$. Hence, the integrations over $\vec{q}$ can be restricted to the region $0 \le q = | \vec{q} | \lesssim q_{max} \sim z^{-1}$. The combination of a term proportional to $e^{- | \vec{q} | z}$ and scale invariant quantities such as the susceptibility in Eq. (\[susc\]) leads to interaction energies which depend as a power law on $z$. In general, we will consider only the leading term, neglecting higher order corrections.[@polar] The calculation described above, which is valid for a single graphene layer at half filling, can be extended to other fillings and to systems with more than one layer. In all cases, the calculations are formally the same, and the interaction energies can be written as integrals over energies and momenta of the susceptibility of the system being considered, which replaces the susceptibility of a single layer, Eq. (\[susc\]). The susceptibility of a doped single layer is well approximated by that of an undoped system, Eq. (\[susc\]), for momenta such that $q \gtrsim {k_{\rm F}}$.[@WSSG06] Analogously, the susceptibilities of a stack of decoupled layers of graphene and multilayered graphene become similar for $q \gtrsim t_\perp / \hbar {v_{\rm F}}$[@G07], where $t_\perp$ is the hopping in the perpendicular direction. The susceptibility of a single undoped plane of graphene, Eq.(\[susc\]) is
--- abstract: 'We use a Leibnitz rule type inequality for fractional derivatives to prove conditions under which a solution $u(x,t)$ of the k-generalized KdV equation is in the space $L^2(|x|^{2s}\,dx)$ for $s \in \mathbb R_{+}$.' address: | École Polytechnique Fédérale de Lausanne\ MA B1 487\ CH-1015 Lausanne author: - 'J. Nahas' title: 'A decay property of solutions to the k-generalized KdV equation' --- Introduction ============ The initial value problem for the modified Korteweg-de Vries equation (mKdV), $$\begin{aligned} \partial_tu + \partial_x^3u+\partial_x(u^3)=0, \label{mkdv} \\ u(x,0)=u_0(x), \notag\end{aligned}$$ has applications to fluid dynamics (see [@2009ChPhB..18.4074L], [@1994JNS.....4..355R]), and plasmas (see [@PRUD]). It is also an example of an integrable system (see [@PhysRevLett.19.1095]). Ginibre and Y. Tsutsumi in [@g] proved well-posedness in a weighted $L^2$ space. In [@KPV1], Kenig, Ponce, and Vega proved local well-posedness for $u_0$ in the Sobolev space $H^s$, when $s \ge \frac{1}{4}$, by a contraction mapping argument in mixed $L_x^p$ and $L_T^q$ spaces. Christ, Colliander, and Tao in [@MR2018661] showed that the mKdV equation is locally well-posed for $u_0 \in H^s$, when $s \ge \frac{1}{4}$, by using a contraction mapping argument in the Bourgain spaces $X_{s,b}$. Colliander, Keel, Staffilani, Takaoka, and Tao proved global well-posedness for real initial data $u_0 \in H^{s}$, $s > \frac{1}{4}$ in [@CKSTT]. Kishimoto in [@Kish] and Guo in [@MR2531556] proved global well-posedness for real data in the case $s=\frac{1}{4}$. The focus of this work will be the mKdV equation, but we will also consider the generalized Korteweg-de Vries equation, $$\left\{ \begin{array}{c l} & \partial_tu + \partial_x^3u + \partial_x (u^{k+1})=0, \label{gkdv} \\ & u(x,0)=u_0(x),\textrm{ } x \in \mathbb R. 
\end{array} \right.$$ When $k \ge 4$, local well-posedness was obtained for initial data $u_0 \in H^s$ with $s \ge \frac{k-4}{2k}$ in [@KPV1] using a contraction mapping argument in mixed $L_x^p$ and $L_T^q$ spaces. When $k=3$, the optimal local well-posedness result was proven by Tao in [@MR2286393] for $u_0 \in H^s$ with $s \ge -\frac{1}{6}$ by using Bourgain spaces $X_{s,b}$. Kato in [@Ka], using energy estimates and the fact that the operator $$\Gamma_K \equiv x-3t\partial_x^2 \notag$$ commutes with $\partial_t+\partial_x^3$, was able to prove the following: if $u_0 \in H^{2k}$ and $|x|^ku_0 \in L^2$ where $k \in \mathbb Z^{+}$, then for any other time $t$ when the solution exists, $|x|^ku(t) \in L_x^2$. Using slightly different techniques, we will prove the following theorem, which extends this result to $k \in \mathbb R_+$. \[weak-decay\] Suppose the initial data $u_0$ satisfies $|x|^su_0 \in L^2$, and $u_0 \in H^{2s+\varepsilon}$, for $\varepsilon >0$. Then for any other time $t$, the solution $u(x,t)$ satisfies $|x|^su(x,t) \in L^2$. When $s \ge \frac{1}{2}$, the result holds for $\varepsilon=0$. Namely, if $|x|^{s}u_0 \in L^2$, and $u_0 \in H^{2s}$, then for any other time $t$, the solution $u(x,t)$ satisfies $|x|^{s}u(x,t) \in L^2$. Analogous results for the NLS were first proved by Hayashi, Nakamitsu, and M. Tsutsumi in [@MR847012], [@MR880978], and [@MR987792]. They used the vector field $$\Gamma_S = x+2it\nabla, \label{nls-gamma}$$ which commutes with the operator $\partial_t -i\Delta$, and a contraction mapping argument to show that if $u_0 \in L^2(|x|^{2m}\,dx) \cap H^m$, where $m \in \mathbb N$, then the solution $u(x,t)$ at any other time is also in the space $L^2(|x|^{2m}\,dx) \cap H^m$. These results were extended to the case when $m \in \mathbb R_+$ by the author and G. Ponce in [@NP]. The corresponding results for the Benjamin-Ono equation were obtained in [@PonceFons] by G. Ponce and G. Fonseca. 
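The commutation property underlying Kato's argument can be checked symbolically. A hedged sketch with SymPy (note the sign convention: with $L=\partial_t+\partial_x^3$ it is $x-3t\partial_x^2$ that commutes, while sign conventions vary with how the equation is written):

```python
import sympy as sp

x, t = sp.symbols('x t')
u = sp.Function('u')(x, t)

def L(f):
    """Linear (Airy) part of the KdV flow: d/dt + d^3/dx^3."""
    return sp.diff(f, t) + sp.diff(f, x, 3)

def Gamma(f):
    """Kato-type weight operator x - 3*t*d^2/dx^2."""
    return x * f - 3 * t * sp.diff(f, x, 2)

# The commutator [Gamma, L] applied to an arbitrary smooth u vanishes:
commutator = sp.simplify(Gamma(L(u)) - L(Gamma(u)))
print(commutator)  # 0
```

Expanding by hand, $[x,\partial_x^3]=-3\partial_x^2$ cancels against $[-3t\partial_x^2,\partial_t]=+3\partial_x^2$, which is exactly what the symbolic check confirms.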
Inspired by these persistence results, we prove the following as our main result. \[main\] Let $u(x,t)$ be a solution of $$\notag \left\{ \begin{array}{c l} & \partial_tu + \partial_x^3u + \partial_x (u^{k+1})=0, \\ & u(x,0)=u_0(x),\textrm{ } x \in \mathbb R, \end{array} \right.$$ such that $u_0 \in H^{s'} \cap L^2(|x|^{s}\,dx)$, where $s \in (0,s']$. If $k=2$ and $s'\ge \frac{1}{4}$, then $u(\cdot,t) \in H^{s'} \cap L^2(|x|^{s}\,dx)$ for all $t$ in the lifespan of $u$. If $k \ge 4$ and $s' \ge \frac{k-4}{2k}$, then $u(\cdot,t) \in H^{s'} \cap L^2(|x|^{s}\,dx)$ for all $t$ in the lifespan of $u$. We only prove this property in the most interesting case. Note that the cases $k=1$ or $k=4$ are excluded from Theorem \[main\]. We require our technique to be adapted to Bourgain spaces for these nonlinearities, which is an interesting open question. The difficulty in the case of fractional decay lies in the lack of an operator $\Gamma$ that sufficiently describes the relation between initial decay and properties of the solution at another time. In order to solve this problem, we develop a Leibnitz rule type inequality for fractional derivatives. We need some notation to illustrate this idea. If $f$ is a complex-valued function on $\mathbb R$, we let $f^{\wedge}$ (or $\hat{f}$) denote the Fourier transform of $f$, and $f^{\vee}$ the inverse Fourier transform. For $\alpha \in \mathbb R$, the operator $D_x^{\alpha}$ is defined as $(D_x^{\alpha}f(x))^{\wedge}(\xi)\equiv |\xi|^{\alpha}f^{\wedge}(\xi)$. Let $U(t)f$ denote the solution $u(x,t)$ to the linear part of $\eqref{mkdv}$, with $u(x,0)=f(x)$. Choose $\eta \in C_0^{\infty}(\mathbb R)$ with $\textrm{supp}(\eta)\subset [\frac{1}{2},2]$ so that $$\sum_{N \in \mathbb Z} (\eta(\frac{x}{2^N})+\eta(-\frac{x}{2^N}))=1 \textrm{ for }x \ne 0. \notag$$ Define the operator $Q_N$ on a function $f$ as $$Q_N(f) \equiv ((\eta(\frac{\xi}{2^N})+\eta(-\frac{\xi}{2^N}))\hat{f}(\xi))^{\vee}. 
\notag$$ If $\|\cdot\|_Y$ is a norm on some space of functions, we recall that $$\|Q_N(f)\|_{Y l_N^p} \equiv \|(\
--- abstract: 'Let $F$ be a non-archimedean local field with residue field $\bbF_q$ and let $\mathbf{G}=GL_{2/F}$. Let $\bfq$ be an indeterminate and let $\cH^{(1)}(\bfq)$ be the generic pro-$p$ Iwahori-Hecke algebra of the $p$-adic group $\mathbf{G}(F)$. Let $V_{\mathbf{\whG}}$ be the Vinberg monoid of the dual group $\mathbf{\whG}$. We establish a generic version for $\cH^{(1)}(\bfq)$ of the Kazhdan-Lusztig-Ginzburg antispherical representation, the Bernstein map and the Satake isomorphism. We define the flag variety for the monoid $V_{\mathbf{\whG}}$ and establish the characteristic map in its equivariant $K$-theory. These generic constructions recover the classical ones after the specialization $\bfq=q\in\bbC$. At $\bfq=q=0\in \overline{\bbF}_q$, the antispherical map provides a dual parametrization of all the irreducible $\cH^{(1)}_{\overline{\bbF}_q}(0)$-modules.' author: - Cédric PEPIN and Tobias SCHMIDT title: '****' --- Introduction ============ Let $F$ be a non-archimedean local field with ring of integers $o_F$ and residue field $\bbF_q$. Let $\bfG$ be a connected split reductive group over $F$. Let $\cH_k=(k[I\setminus\bfG(F)/I],\star) $ be the Iwahori-Hecke algebra, i.e. the convolution algebra associated to an Iwahori subgroup $I\subset \bfG(F)$, with coefficients in an algebraically closed field $k$. On the other hand, let $\widehat{\bfG}$ be the Langlands dual group of $\bfG$ over $k$, with maximal torus and Borel subgroup $\widehat{\bfT}\subset \widehat{\bfB}$ respectively. Let $W_0$ be the finite Weyl group. When $k=\bbC$, the irreducible $\cH_{\bbC}$-modules appear as subquotients of the Grothendieck group $K^{\widehat{\bfG}}( \widehat{\bfG}/ \widehat{\bfB})_{\bbC}$ of $\widehat{\bfG}$-equivariant coherent sheaves on the dual flag variety $\widehat{\bfG}/ \widehat{\bfB}$. 
As such they can be parametrized by the isomorphism classes of irreducible tame $\widehat{\bfG}(\bbC)$-representations of the Weil group $\cW_F$ of $F$, thereby realizing the tame local Langlands correspondence (in this setting also called the Deligne-Lusztig conjecture for Hecke modules): Kazhdan-Lusztig [@KL87], Ginzburg [@CG97]. Their approach to the Deligne-Lusztig conjecture is based on two steps: the first step develops the theory of the so-called [*antispherical representation*]{} leading to a certain dual parametrization of Hecke modules. The second step links these dual data to representations of the group $\cW_F$. The antispherical representation is a distinguished faithful action of the Hecke algebra $\cH_{\bbC}$ on its maximal commutative subring $\cA_{\bbC}\subset\cH_{\bbC}$ via $\cA_{\bbC}^{W_0}$-linear operators: elements of the subring $\cA_{\bbC}$ act by multiplication, whereas the standard Hecke operators $T_s\in\cH_{\bbC}$, supported on double cosets indexed by simple reflections $s\in W_0$, act via the classical Demazure operators [@D73; @D74]. The link with the geometry of the dual group comes then in two steps. First, the classical Bernstein map $\tilde{\theta}$ identifies the ring of functions $\bbC[\widehat{\bfT}]$ with $\cA_{\bbC}$, such that the invariants $\bbC[\widehat{\bfT}]^{W_0}$ become the center $Z(\cH_{\bbC})=\cA_{\bbC}^{W_0}$. Second, the characteristic homomorphism $c_{\mathbf{\whG}}$ of equivariant $K$-theory identifies the rings $\bbC[\widehat{\bfT}]$ and $K^{\widehat{\bfG}}( \widehat{\bfG}/ \widehat{\bfB})_{\bbC}$ as algebras over the representation ring $\bbC[\widehat{\bfT}]^{W_0}=R(\widehat{\bfG})_{\bbC}$. 
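To make the Demazure operators in the antispherical representation concrete in the rank-one case at hand: for $\widehat{\bfG}=GL_2$ the isobaric Demazure operator acts on characters of $\widehat{\bfT}$, i.e. on Laurent monomials $x_1^a x_2^b$, by $D(x_1^a x_2^b) = (x_1^{a+1} x_2^b - x_1^b x_2^{a+1})/(x_1-x_2)$. A small sketch (our encoding of Laurent polynomials as exponent-to-coefficient dicts; normalizations of Demazure operators vary in the literature):

```python
def demazure(a, b):
    """Isobaric Demazure operator for GL2 on the monomial x1^a * x2^b:
    D(x1^a x2^b) = (x1^(a+1) x2^b - x1^b x2^(a+1)) / (x1 - x2).
    Returns the resulting Laurent polynomial as {(i, j): coefficient}."""
    if a >= b:
        # Geometric-sum expansion: sum_{k=0}^{a-b} x1^(a-k) x2^(b+k).
        return {(a - k, b + k): 1 for k in range(a - b + 1)}
    # For a < b the quotient has the opposite sign and b-a-1 terms.
    return {(a + 1 + k, b - 1 - k): -1 for k in range(b - a - 1)}

# D(x1^2) = x1^2 + x1*x2 + x2^2, the character of Sym^2 of the
# standard representation; D(1) = 1; D(x2) = 0 (empty polynomial).
```

This is the rank-one prototype of the Demazure-operator action of the $T_s$ referenced above; on dominant monomials it returns Weyl characters, as expected from the characteristic map.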
When $k=\overline{\bbF}_q$ any irreducible $\widehat{\bfG}(\overline{\bbF}_q)$-representation of $\cW_F$ is tame and the Iwahori-Hecke algebra needs to be replaced by the bigger pro-$p$-Iwahori-Hecke algebra $$\cH_{\overline{\bbF}_q}^{(1)}=(\overline{\bbF}_q[I^{(1)}\setminus \bfG(F)/I^{(1)}],\star).$$ Here, $I^{(1)}\subset I$ is the unique pro-$p$ Sylow subgroup of $I$. The algebra $\cH_{\overline{\bbF}_q}^{(1)}$ was introduced by Vignéras and its structure theory developed in a series of papers [@V04; @V05; @V06; @V14; @V15; @V16; @V17]. More generally, Vignéras introduces and studies a generic version $\cH^{(1)}(\bfq_{*})$ of this algebra which is defined over a polynomial ring $\bbZ[\bfq_{*}]$ in finitely many indeterminates $\bfq_s$. The mod $p$ ring $\cH_{\overline{\bbF}_q}^{(1)}$ is obtained by specialization $\bfq_s=q$ followed by extension of scalars from $\bbZ$ to $\overline{\bbF}_q$, in short $\bfq_s=q=0$. Let now $\bfG=\mathbf{GL_n}$ be the general linear group, so that $\bfq_s$ is independent of $s$. Our aim is to show that there is a Kazhdan-Lusztig theory for the generic pro-$p$ Iwahori-Hecke algebra $\cH^{(1)}(\bfq)$. On the one hand, it gives back (and actually improves!) the classical theory after passing to the direct summand $\cH(\bfq)\subset \cH^{(1)}(\bfq)$ and then specializing $\bfq=q\in\bbC$. On the other hand, it gives a genuine mod $p$ theory after specializing to $\bfq=q=0\in \overline{\bbF}_q$. In the generic situation, the role of the Langlands dual group is taken by its Vinberg monoid $V_{\widehat{\bfG}}$ and its flag variety. The monoid comes with a fibration $\bfq : V_{\widehat{\bfG}}\rightarrow\bbA^1$ and the dual parametrisation of $\cH_{\overline{\bbF}_q}^{(1)}$-modules is achieved by working over the $0$-fiber $V_{\widehat{\bfG},0}$. In this article, we only explain the case of the group $\bfG=\mathbf{GL_2}$, and we are currently generalizing this material to the general linear group $\mathbf{GL_n}$. 
Moreover, we will treat the link with two-dimensional mod $p$ representations of the Weil group $\cW_F$ and the mod $p$ local Langlands program for $\mathbf{GL_2}$ in a forthcoming sequel to this paper. From now on, let $k=\overline{\bbF}_q$ and $\bfG=\mathbf{GL_2}$ and let $\bfq$ be an indeterminate. We let $\bfT\subset\bfG$ be the torus of diagonal matrices. Although our primary motivation is the extreme case $\bfq=q=0$, we will prove all our results in the much stronger generic situation. Working generically also allows us to find the correct normalizations in the extreme case and to recover and improve the classical theory over $\bbC$ (typically, the formulas become cleaner, e.g. in the Bernstein and Satake isomorphisms). Let $\cA^{(1)}(\bfq) \subset \cH^{(1)}(\bfq)$ be the maximal commutative subring and $\cA^{(1)}(\bfq)^{W_0} = Z(\cH^{(1)}(\bfq))$ be its ring of invariants. We let $\tilde{\bbZ}:=\bbZ[\frac{1}{q-1},\mu_{q-1}]$ and denote by $\tilde{\bullet}$ the base change from $\bbZ$ to $\tilde{\bbZ}$. The algebra $\tilde{\cH}^{(1)}(\bfq)$ splits as a direct product of subalgebras $\tilde{\cH}^{\gamma}(\bfq)$ indexed by the orbits $\gamma$ of $W_0$ in the set of characters of the finite torus $\bbT:=\bfT(\bbF_q)$. There are regular resp. non-regular components corresponding to $|\gamma|=2$ resp. $|\gamma|=1$, and the algebra structure of $\tilde{\cH}^{\gamma}(\bfq)$ in these two cases is fundamentally different. We define an analogue of the Demazure operator for the regular components and call
--- abstract: | Recent research shows that fluctuations of the dielectric mirror coating thickness can contribute a substantial part of the total noise budget of future laser gravitational-wave antennae. These fluctuations are especially large in the high-reflectivity end mirrors of the Fabry-Perot cavities used in laser gravitational-wave antennae. We show here that the influence of these fluctuations can be substantially decreased by using additional short Fabry-Perot cavities, tuned in anti-resonance, instead of the end mirrors. author: - 'F.Ya.Khalili' title: 'Reducing the mirrors coating noise in laser gravitational-wave antennae by means of double mirrors' --- Introduction ============ Among the basic components of laser gravitational-wave antennae [@Abramovici1992; @Abramovici1996; @WhitePaper1999] are high-reflectivity mirrors with multilayer dielectric coatings. Recent studies [@Levin1998; @Crooks2002; @Harry2002; @Nakagava2002; @Penn2003; @03a1BrVy; @03a1BrSa; @Cagnoli2003; @Fejer2004; @Harry2004] have shown that fluctuations of the coating thickness, produced in particular by Brownian and thermoelastic noise in the coating, can contribute a substantial part of the total noise budget of future laser gravitational-wave antennae. For example, estimates done in [@03a1BrVy] show that the thermoelastic noise can be close to the Standard Quantum Limit (SQL) [@03a1BrGoKhMaThVy], which corresponds to the sensitivity level of the Advanced LIGO project [@WhitePaper1999], or can even exceed it in some frequency range. For this reason it was proposed in [@04a1BrVy] to replace the end mirrors by coatingless corner reflectors. It was shown in that article that by using these reflectors it is possible, in principle, to obtain sensitivity much better than the SQL. However, the corner reflectors require substantial redesign of the gravitational-wave antennae core optics and suspension system.
At the same time, the value of the mirror surface fluctuations depends on the number of dielectric layers which form the coating. This can be explained in the following way. Most of the light is reflected from the first couple of layers. At the same time, fluctuations of the mirror surface are created by the thickness fluctuations of all the underlying layers, so the larger the number of layers, the larger the surface noise. Therefore, the surface fluctuations are relatively small for the input mirrors ([ITM]{}) of the Fabry-Perot cavities of the laser gravitational-wave antennae, which have only a few coating layers and $1-{\cal R}\sim 10^{-2}$ (${\cal R}$ is the mirror power reflectivity), and are considerably larger for the end mirrors ([ETM]{}), which have $\sim 40$ coating layers and $1-{\cal R}\lesssim 10^{-5}$. \[ct\]\[lb\][$L=4\,{\rm Km}$]{} \[ct\]\[lb\][$l\lesssim 10\,{\rm m}$]{} \[cb\]\[lb\][[ITM]{}]{} \[cb\]\[lb\][[IETM]{}]{} \[cb\]\[lb\][[EETM]{}]{} ![Schematic layout of a Fabry-Perot cavity with a double mirror system instead of the end mirror: [ITM]{} and [IETM]{} are similar moderately reflective mirrors; [EETM]{} is a highly reflective one.[]{data-label="fig:fabry_dbl_mirror"}](fabry_dbl_mirror.eps){width="5in"} In this paper another, less radical way of reducing the coating noise, exploiting this feature, is proposed. It is based on the use of an additional short Fabry-Perot cavity instead of the end mirror (see Fig.\[fig:fabry\_dbl\_mirror\]). It should be tuned in anti-resonance, [*i.e.*]{} its optical length $l$ should be close to $l=(N+1/4)\lambda$, where $\lambda$ is the wavelength. The back side of the first mirror has to have a few layers of an antireflection coating.
It can be shown that in this case the reflectivity of this cavity is given by the following equation: $$\label{R_simple} 1-{\cal R} \approx \frac{(1-{\cal R}_1)(1-{\cal R}_2)}{4} \,,$$ where ${\cal R}_{1,2}$ are the reflectivities of the first ([IETM]{} on Fig.\[fig:fabry\_dbl\_mirror\]) and the second ([EETM]{}) mirrors. The phase shift in the reflected beam produced by a small displacement $y$ of the second mirror reflecting surface relative to the first one is equal to $$\label{phi_simple} \phi \approx \frac{1-{\cal R}_1}{4}\times 2ky \,,$$ where $k=2\pi/\lambda$ is the wave number. It is assumed for simplicity that there is no absorption in the first mirror material; more general formulae are presented below. It follows from these formulae that the first mirror can have a moderate value of reflectivity and, therefore, a small number of coating layers. In particular, it can be identical to the input mirror of the main Fabry-Perot cavity ([ITM]{}). At the same time, the influence of the coating noise of the second (very-high-reflectivity) mirror will be suppressed by a factor of $(1-{\cal R}_1)/4$, which can be as small as $\sim 10^{-2}\div 10^{-3}$. \[cb\]\[lb\][[ETM]{}]{} ![The double reflector based on a single mirror.[]{data-label="fig:single_mirror"}](single_mirror.eps){width="2in"} In principle, another design of the double reflector is possible, which consists of one mirror only, see Fig.\[fig:single\_mirror\]. Both surfaces of this mirror have to have reflective coatings: a thin one on the face side and a thick one on the back side. In this case the additional Fabry-Perot cavity is created [*inside*]{} this mirror. However, in this case thermoelastic fluctuations of the back surface coating will bend the mirror and thus create unacceptably large mechanical fluctuations of the face surface. Estimates show that using this design it is possible to reduce the face surface fluctuations only by a factor of $\sim 3$ [@vyat_priv].
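Plugging representative numbers into Eqs. (\[R_simple\]) and (\[phi_simple\]) gives a quick sanity check of the scheme. The mirror parameters below are illustrative, taken only from the orders of magnitude quoted in the introduction; the laser wavelength is an assumption, not a value from this paper.

```python
import math

# Illustrative parameters: 1 - R1 ~ 1e-2 for the moderate-reflectivity
# first mirror (ITM-like coating), 1 - R2 ~ 1e-5 for the very-high-
# reflectivity second mirror.
one_minus_R1 = 1.0e-2
one_minus_R2 = 1.0e-5

# Effective reflectivity of the anti-resonant compound mirror
# (first formula above): 1 - R = (1 - R1)(1 - R2)/4.
one_minus_R = one_minus_R1 * one_minus_R2 / 4.0

# Suppression of the second mirror's coating noise (second formula):
# the phase response to a displacement y is (1 - R1)/4 * 2*k*y
# instead of the bare-mirror value 2*k*y.
suppression = one_minus_R1 / 4.0

wavelength = 1.064e-6                  # m (Nd:YAG line; an assumption)
k = 2.0 * math.pi / wavelength
y = 1.0e-18                            # m, a representative displacement
phi_bare = 2.0 * k * y                 # bare end mirror
phi_compound = suppression * phi_bare  # compound anti-resonant mirror

print(one_minus_R, suppression)
```

With these numbers the compound mirror keeps $1-{\cal R}\sim 2.5\times10^{-8}$, i.e. better than the single high-reflectivity coating, while the coating noise of that coating enters the phase only through the $\sim 2.5\times10^{-3}$ suppression factor.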
Therefore, only the design with two [*mechanically isolated*]{} reflectors will be considered here. In the next section a more detailed analysis of this system is presented. Analysis of the double-mirror reflector ======================================= \[rc\]\[lb\][$a$]{} \[rc\]\[lb\][$b$]{} \[lc\]\[lb\][$a_0$]{} \[lc\]\[lb\][$b_0$]{} \[lc\]\[lb\][$a_1$]{} \[lc\]\[lb\][$b_1$]{} \[rc\]\[lb\][$a_2$]{} \[rc\]\[lb\][$b_2$]{} \[cb\]\[lb\][[IETM]{}]{} \[cb\]\[lb\][[EETM]{}]{} ![The double mirror reflector.[]{data-label="fig:dbl_mirror"}](dbl_mirror.eps){width="3.5in"} The rightmost part of Fig.\[fig:fabry\_dbl\_mirror\] is presented in Fig.\[fig:dbl\_mirror\], where the following notation is used: $a, b$ are the amplitudes of the incident and reflected waves for the first mirror, respectively; $a_0, b_0$ are the amplitudes of the waves traveling in the left and right directions, respectively, just behind the first mirror coating; $a_1, b_1$ are the same for the waves just behind the first mirror itself; $a_2, b_2$ are the amplitudes of the incident and reflected waves for the second mirror, respectively. These amplitudes satisfy the following equations: \[main\_eqs\] $$\begin{aligned} a_0 &= -R_1b_0 + iT_1a \,, \\ a_1 &= T_0a_0 + A_1n_a \,, \\ a_2 &= \theta a_1 \,, \\ b &= -R_1a + iT_1b_0 \,, \\ b_0 &= T_0b_1 + A_0n_b \,, \\ b_1 &= \theta b_2 \,, \\ b_2 &= -R_2a_2 + A_2n_2 \,, \end{aligned}$$ where: $n_a,n_b,n_2$ are independent zero-point oscillations generated in the first ($n_a,n_b$) and the second ($n_2$) mirrors; $\theta = e^{ikl_1}$, where $l_1$ is the distance between the first mirror back surface and the second mirror; $-R_1$ and $iT_1$ are the amplitude reflectivity and transmittance of the first mirror coating, respectively, $R_1^2+T_1^2=1$; $T_0$ and $A_0$
--- abstract: 'We describe the TreePM method for carrying out large N-Body simulations to study the formation and evolution of large scale structure in the Universe. This method is a combination of the Barnes and Hut tree code and the Particle-Mesh code. It combines the automatic inclusion of periodic boundary conditions of PM simulations with the high resolution of tree codes. This is done by splitting the gravitational force into a short range and a long range component. We describe the splitting of the force between these two parts. We outline the key differences between TreePM and some other N-Body methods.' author: - | J.S.Bagla\ Harish-Chandra Research Institute, Chhatnag Road, Jhunsi,\ Allahabad 211019, INDIA\ e-mail:jasjeet@mri.ernet.in date: 'Received 2002 June 13; accepted 2002 November 14' title: 'TreePM: A code for Cosmological N-Body Simulations' --- \[firstpage\] gravitation, methods: numerical, cosmology: large scale structure of the universe Introduction ============ Observations suggest that the present universe is populated by very large structures like galaxies, clusters of galaxies, etc. Current models for the formation of these structures are based on the assumption that gravitational amplification of density perturbations resulted in the formation of large scale structure. In the absence of analytical methods for computing the quantities of interest, numerical simulations are the only tool available for the study of clustering in the non-linear regime. The last two decades have seen rapid development of techniques and computing power for cosmological simulations, and the results of these simulations have provided valuable insight into the study of structure formation. The simplest N-Body method that has been used for studying the clustering of large scale structure is the Particle Mesh method (PM hereafter).
The genesis of this method lies in the realisation that the Poisson equation is an algebraic equation in Fourier space; hence, if we have a tool for switching to Fourier space and back, we can calculate the gravitational potential and the force with very little effort. It has two elegant features: it provides periodic boundary conditions by default, and the force is softened naturally so as to ensure collisionless evolution of the particle distribution. However, the softening of the force at grid scale implies that the force resolution is very poor. This limits the dynamic range over which we can trust the results of the code to between a few grid cells and about a quarter of the simulation box (Bouchet and Kandrup, 1985; Bagla and Padmanabhan, 1997). Many efforts have been made to get around this problem, mainly in the form of P$^3$M (Particle-Particle Particle Mesh) codes (Efstathiou et al, 1985; Couchman 1991). In these codes, the force computed by the particle mesh part of the code is supplemented by adding the short range contribution of nearby particles, to improve the force resolution. The main problem with this approach is that the particle-particle summation of the short range force takes a lot of time in highly clustered situations. Another, more subtle problem is that the force computed using the PM method has anisotropies and errors in the force at grid scale – these errors are still present in the force calculated by combining the PM force with short range corrections (Bouchet and Kandrup, 1985). A completely different approach to the problem of computing the force is taken by codes based on the tree method. In this approach we consider groups of particles at a large distance to be a single entity and compute the force due to the group rather than summing over individual particles. There are different ways of defining a group, but by far the most popular method is that due to Barnes and Hut (1986).
Applications of this method to Cosmological simulations require including periodic boundary conditions. This has been done using Ewald’s method (Ewald, 1921; Rybicki, 1986; Hernquist, Bouchet and Suto, 1991; Springel, Yoshida and White, 2001). Ewald’s method is used to tabulate the correction to the force due to periodic boundary conditions. This correction term is stored on a grid (in relative separation of a pair of particles) and the interpolated value is added to the pairwise force. Some attempts have been made to combine the high resolution of a tree code with the natural inclusion of periodic boundary conditions in a PM code by simply extending the P$^3$M method and replacing the particle-particle part for short range correction with a local tree (Xu, 1995). In this paper we present a hybrid N-Body method that attempts to combine the good features of the PM and the tree method, while avoiding the problems of the P$^3$M and the TPM methods. Our approach is to divide force into long and short range components using partitioning of unity, instead of taking the PM force as given. This allows us greater control over errors, as we shall see below. The plan of the paper is as follows: §[2]{} introduces the basic formalism of both the tree and PM codes. §[2.3]{} gives the mathematical model for the TreePM code. We analyse errors in force for the TreePM code in §[3]{}. Computational requirements of our implementation of the TreePM code are discussed in §[4]{}. A discussion of the relative merits of the TreePM method with respect to other N-Body methods follows in §[5]{}. The TreePM Method ================= Tree Code --------- We use the approach followed by Barnes and Hut (1986). In this, the simulation volume is taken to be a cube. The tree structure is built out of cells and particles. Cells may contain smaller cells (subcells) within them. Subcells can have even smaller cells within them, or they can contain a particle. 
We start with the simulation volume and add particles to it. If two particles end up in the same subcell, the subcell is geometrically divided into smaller subcells until each subcell contains either subcells or at most one particle. The cubic simulation volume is the root cell. In three dimensions, each cubic cell is divided into eight cubic subcells. Cells, as structures, have attributes like total mass, location of the centre of mass and pointers to subcells. Particles, on the other hand, have the traditional attributes like position, velocity and mass. More details can be found in the original paper (Barnes and Hut, 1986). The force on a particle is computed by adding the contributions of other particles or of cells. A cell that is sufficiently far away can be considered as a single entity, and we can just add the force due to the total mass contained in the cell, acting from its centre of mass. If the cell is not sufficiently far away then we must consider its constituents, subcells and particles. Whether a cell can be accepted as a single entity for force calculation is decided by the cell acceptance criterion (CAC). We compute the ratio of the size of the cell $d$ and the distance $r$ from the particle in question to its centre of mass and compare it with a threshold value $$\theta = \frac{d}{r} \leq \theta_c \label{trwalk}$$ The error in force increases with $\theta_c$. There are some potentially serious problems associated with using $\theta_c \geq 1/\sqrt{3}$; a discussion of these is given in Salmon and Warren (1994). One can also work with completely different definitions of the CAC (Salmon and Warren, 1994; Springel, Yoshida and White, 2001). Irrespective of the criterion used, the number of terms that contribute to the force on a particle is much smaller than the total number of particles, and this is where a tree code gains in terms of speed over direct summation.
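The construction and tree walk described above can be condensed into a short sketch. This is our own illustrative implementation, not the paper's code: it keeps only monopole moments (total mass and centre of mass), uses an ad hoc Plummer softening, and takes $\theta_c=0.5$ purely as an example value.

```python
import numpy as np

G = 1.0          # gravitational constant in code units
THETA_C = 0.5    # cell acceptance threshold theta_c (illustrative choice)

class Cell:
    """Cubic cell of a Barnes-Hut tree: monopole moments plus either
    at most one particle or a dict of subcells."""
    def __init__(self, center, size):
        self.center = np.asarray(center, dtype=float)
        self.size = size
        self.mass = 0.0
        self.com = np.zeros(3)
        self.children = None
        self.particle = None

    def insert(self, pos, m):
        # update total mass and centre of mass on the way down
        self.com = (self.com * self.mass + pos * m) / (self.mass + m)
        self.mass += m
        if self.children is None and self.particle is None and self.mass == m:
            self.particle = (pos, m)        # previously empty leaf
            return
        if self.children is None:           # occupied leaf: subdivide it
            self.children = {}
            old, self.particle = self.particle, None
            self._push_down(*old)
        self._push_down(pos, m)

    def _push_down(self, pos, m):
        octant = tuple(pos > self.center)   # one of the 8 subcells
        if octant not in self.children:
            shift = (np.asarray(octant, dtype=float) - 0.5) * self.size / 2
            self.children[octant] = Cell(self.center + shift, self.size / 2)
        self.children[octant].insert(pos, m)

def force(cell, pos, eps=1e-3):
    """Tree walk: open every cell that fails the CAC d/r <= theta_c."""
    if cell.mass == 0.0:
        return np.zeros(3)
    d = cell.com - pos
    r = np.sqrt(d @ d + eps**2)             # Plummer-softened distance
    if cell.children is None or cell.size / r <= THETA_C:
        return G * cell.mass * d / r**3     # accept cell as a single entity
    return sum(force(sub, pos, eps) for sub in cell.children.values())
```

For a point well outside the particle distribution the walk terminates at the root cell's monopole, which is why the cost per particle grows only logarithmically with the particle number.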
We will use the Barnes and Hut tree code, and we include periodic boundary conditions for computing the short range force of particles near the boundaries of the simulation cube. Another change to the standard tree walk is that we do not consider cells that do not have any spatial overlap with the region within which the short range force is calculated. We also use an optimisation technique to speed up the force calculation (Barnes, 1990). Particle Mesh Code ------------------ A PM code is the obvious choice for computing long range interactions. Much has been written about the use of these in cosmological simulations (e.g., see Hockney and Eastwood, 1988) so we will not go into details here. PM codes solve for the gravitational potential in Fourier space. These use Fast Fourier Transforms (FFT) to compute Fourier transforms, and as the FFT requires data to be defined on a regular grid, the concept of a mesh is introduced. The density field represented by the particles is interpolated onto the mesh. The Poisson equation is solved in Fourier space and an inverse transform gives the potential (or force) on the grid. This is then differentiated and interpolated to the position of each particle in order to calculate the displacements. The use of a grid implies that forces are not accurate at scales smaller than the grid cell. A discussion of errors in the force in a PM code can be found in Efstathiou et al (1985) and elsewhere (Bouchet and Kandrup, 1985; Bagla and Padmanabhan, 1997). The error in force can be very large at small scales but it drops to an acceptable level beyond a few grid cells, and is negligible at large scales. We use the Cloud-in-Cell weight function for interpolation. We solve the Poisson equation using the natural kernel, $-1/k^2$; this is called the poor man’s Poisson solver (Hockney and Eastwood, 1988). We compute the gradient of the potential in Fourier space. TreePM Code -----------
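Before the detailed derivation, the long/short-range partition underlying TreePM can be sketched analytically. A common choice for this kind of split — and the one we take as an assumption here, so the prefactors should be checked against the definitions given in this section — filters the long-range potential in Fourier space with a Gaussian $e^{-k^2 r_s^2}$; the complementary short-range force is then an erfc-type correction to the Newtonian force.

```python
import math

def f_newton(r, GM=1.0):
    """Full Newtonian force (magnitude) for a pair of unit masses."""
    return GM / r**2

def f_short(r, rs, GM=1.0):
    """Short-range part, to be summed directly by the tree walk.
    It dies off within a few r_s, so the walk can be truncated there."""
    x = r / (2.0 * rs)
    return (GM / r**2) * (math.erfc(x)
                          + (r / (rs * math.sqrt(math.pi))) * math.exp(-x * x))

def f_long(r, rs, GM=1.0):
    """Long-range part, computed on the mesh.  It vanishes smoothly as
    r -> 0, so the grid resolves it without the usual PM small-scale
    anisotropies."""
    x = r / (2.0 * rs)
    return (GM / r**2) * (math.erf(x)
                          - (r / (rs * math.sqrt(math.pi))) * math.exp(-x * x))
```

By construction the two parts sum exactly to the Newtonian force, which is the "partitioning of unity" property referred to in the introduction.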
--- abstract: 'In this paper we propose a novel Bayesian kernel-based solution for regression in complex fields. We develop the formulation of the Gaussian process for regression (GPR) to deal with complex-valued outputs. Previous solutions for kernel methods usually assume a *complexification* approach, where the real-valued kernel is replaced by a complex-valued one. However, based on the results in complex-valued linear theory, we prove that both a kernel and a *pseudo-kernel* are to be included in the solution. This is the starting point to develop the new formulation for the complex-valued GPR. The obtained formulation resembles that of the *widely* linear minimum mean-squared error (WLMMSE) approach. Only in the particular case where the outputs are proper does the pseudo-kernel cancel, and the solution simplifies to a real-valued GPR structure, just as the WLMMSE reduces to a *strictly* linear solution. We include some numerical experiments to show that the novel solution, denoted as widely non-linear complex GPR (WCGPR), outperforms a *strictly* complex GPR where a pseudo-kernel is not included.' bibliography: - 'CGPR.bib' - 'murilloGP.bib' - 'biblio.bib' --- Introduction ============ Complex-valued signals are present in the modeling of many systems in a wide range of fields such as optics, electromagnetics, acoustics and telecommunications, among others. Linear solutions for complex-valued signals have been studied in detail in the literature. These solutions can be roughly classified into those that assume properness and those that do not. A proper complex random signal is uncorrelated with its complex conjugate [@Neeser93]. In the proper scenario, solutions for the real-valued case can usually be rewritten for the complex-valued scenario by just replacing the transpose by the Hermitian. However, in the improper case, the solutions are more involved and the concept of *widely* linear processing is introduced.
Accordingly, the linear minimum mean-squared error (LMMSE) estimator can be simply rewritten by taking into account the covariance between two random vectors. However, if the outputs are improper, an additional term must be added to include the pseudo-covariance [@Tulay11; @Schreier06]. Hence, both the covariance and the pseudo-covariance must be taken into account. Many non-linear tools for complex fields have been developed within the artificial neural network research community [@Mandic09; @hirose13]. In kernel methods, we may find a few results for kernel principal component analysis [@Papaioannou14], classification [@Steinwart06] or regression [@OgunfunmiP11; @Bouboulis12; @Tobar12; @Boloix14]. These solutions are usually introduced as a *complexification* of the kernel [@Bouboulis12]. In the complexification approach, real-valued kernel tools are adapted to the complex-valued scenario by just rewriting the kernel to deal with complex-valued outputs and inputs. However, as discussed above for linear solutions, this may suffice for the proper case, but not for the general one. Bearing this in mind, we investigate in this paper how pseudo-covariance matrices should be included in the solutions. In particular, we focus on Gaussian processes for regression (GPR). Gaussian processes (GPs) are Bayesian kernel tools for discriminative machine learning [@OHagan78; @Rasmussen06; @PerezCruz13gp]. They have been successfully applied to regression, classification and dimensionality reduction. GPs can be interpreted as a family of kernel methods with the additional advantage of providing a full conditional statistical description for the predicted variable. Also, hyperparameters can be learned by maximizing the marginal likelihood, avoiding cross-validation.
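The role of the pseudo-kernel can be made concrete with a small numerical sketch. This is our own toy construction, not the formulation derived in the paper: a complex GP prior built from independent real GPs on the real and imaginary parts (with length-scales `ell_r`, `ell_i`) has kernel $K = K_r + K_i$ and pseudo-kernel $\tilde K = K_r - K_i$, which is non-zero — i.e. the outputs are improper — whenever the two length-scales differ. Prediction with the augmented vector $[\mathbf y;\mathbf y^*]$ then reproduces the two real GPR predictions exactly.

```python
import numpy as np

def rbf(X, Z, ell):
    """Squared-exponential kernel on 1-D inputs (an illustrative choice)."""
    return np.exp(-0.5 * (X[:, None] - Z[None, :]) ** 2 / ell**2)

def wcgpr_mean(X, y, Xs, ell_r, ell_i, sigma2):
    """Predictive mean of the toy widely complex GP: kernel K = Kr + Ki,
    pseudo-kernel Kt = Kr - Ki, circularly symmetric noise variance sigma2,
    using the augmented vector [y; conj(y)] as in the WLMMSE."""
    Kr, Ki = rbf(X, X, ell_r), rbf(X, X, ell_i)
    krs, kis = rbf(Xs, X, ell_r), rbf(Xs, X, ell_i)
    K, Kt = Kr + Ki, Kr - Ki
    ks, kts = krs + kis, krs - kis
    n = len(X)
    C = np.block([[K + sigma2 * np.eye(n), Kt],
                  [Kt, K + sigma2 * np.eye(n)]])   # kernels are real here
    alpha = np.linalg.solve(C, np.concatenate([y, y.conj()]))
    return np.hstack([ks, kts]) @ alpha
```

A kernel-only ("strictly complex") GPR would drop the $\tilde K$ blocks and thus discard the information that distinguishes the real and imaginary processes whenever `ell_r != ell_i`.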
For real fields, GPs applied to regression can be cast as a non-linear MMSE [@PerezCruz13gp]: they present a structure similar to that of the LMMSE, where we replace the linear covariances by kernels, and the regularization term also depends on the prior of the weights of the generalized regression [@Rasmussen06]. In the following, we propose to develop a new formulation of GPR for complex-valued signals. We start by analyzing the prediction for the real and imaginary parts separately. Then we merge the results into a complex-valued formulation. In the general improper case, we show that the solution depends on both a kernel and a pseudo-kernel, and we propose a *widely* complex GPR (WCGPR). Widely linear MMSE (WLMMSE) estimation ====================================== In this section we review the concept of *widely* linear processing for complex-valued signals by describing the widely linear minimum mean-squared error (WLMMSE) estimation. The WLMMSE estimation of a zero-mean signal $\fv\newd: \Omega \rightarrow \CC^\d$ from the zero-mean measurement $\yv: \Omega \rightarrow \CC^\n$ is [@Picinbono95; @Schreier06] $$\begin{aligned} {\hat{\fv}}_{\newd}&=\matr{W}_{1}\yv+\matr{W}_{2}\yv^{*},\end{aligned}$$ or, by making use of the augmented notation, where the complex signals are stacked on their conjugates: $$\begin{aligned} \aug{\hat{\fv}}_{\newd}=\left[ \begin{array}{c} {\hat{\fv}}_{\newd}\\ {\hat{\fv}}_{\newd}^{*}\\ \end{array}\right]=\aug{\matr{W}}\,\aug{\yv}=\left[ \begin{array}{c c} \matr{W}_{1} & \matr{W}_{2}\\ \matr{W}_{2}^{*} & \matr{W}_{1}^{*}\\ \end{array}\right]\left[ \begin{array}{c} \yv\\ \yv^{*}\\ \end{array}\right].\end{aligned}$$ The widely linear estimator is determined such that the mean-squared error is minimized, i.e., the error between the augmented estimator and the augmented signal, $\aug{\vect{e}}=\aug{\hat{\fv}}_{\newd}-\aug{{\fv}}_{\newd}$, must be orthogonal to the augmented measurement, $\aug{\yv}$, [@Picinbono95; @Schreier06]: $$\begin{aligned} \LABEQ{W}
\aug{\matr{W}}=\aug{\matr{R}}_{\fv\newd\yv}\aug{\matr{R}}_{\yv\yv}\inv=\left[ \begin{array}{cc} {\matr{R}}_{\fv\newd\yv} & {\matr{\tilde{R}}}_{\fv\newd\yv}\\ {\matr{\tilde{R}}}_{\fv\newd\yv}^* &{\matr{R}}_{\fv\newd\yv}^*\\ \end{array}\right]\left[ \begin{array}{cc} {\matr{R}}_{\yv\yv} & {\matr{\tilde{R}}}_{\yv\yv}\\ {\matr{\tilde{R}}}_{\yv\yv}^* &{\matr{R}}_{\yv\yv}^*\\ \end{array}\right]\inv,\end{aligned}$$ where $\aug{\matr{R}}_{\yv\yv}$ is the augmented covariance matrix of the measurements, with covariance matrix $\matr{R}_{\yv\yv}=\mathbb{E}\left[\yv\yv\her\right]$ and pseudo-covariance or complementary covariance matrix $\matr{\tilde{R}}_{\yv\yv}=\mathbb{E}\left[\yv\yv^\top\right]$. Similarly, $\aug{\matr{R}}_{\fv\newd\yv}$ is composed by $\matr{R}_{\fv\newd\yv}=\mathbb{E}\left[\fv\newd\yv\her\right]$ and $\matr{\tilde{R}}_{\fv\newd\yv}=\mathbb{E}\left[\fv\newd\yv^\top\right]$. Now, by using the matrix-inversion lemma in , the WLMMSE estimation yields $$\begin{aligned} \LABEQ{fWLMMSE} {\hat{\fv}}_{\newd}&=\left[\matr{R}_{\fv\newd\yv}-\matr{\tilde{R}}_{\fv\newd\yv}\matr{{R}}_{\yv\yv}^{-*}\matr{\tilde{R}}^*_{\yv\yv}\right]\matr{P}_{\yv\yv}\inv\yv\nonumber\\&+\left[\matr{\tilde{R}}_{\fv\newd\yv}-\matr{{R}}_{\fv\newd\yv}\matr{{R}}_{\yv\yv}\inv\matr{\tilde{R}}_{\yv\yv}\right]\matr{P}_{\yv\yv}^{-*}\yv^{*},\end{aligned}$$ where $\matr{P}_{\yv\yv}=\matr{{R}}_{\yv\yv}-\matr{\tilde{R}}_{\yv\yv}\matr{{R}}_{\yv\yv}^{-*}\matr{\tilde{R}}^*_{\yv\yv}$ is the error covariance matrix for linearly estimating $\yv$ from $\yv^*$. Finally, the error covariance
--- abstract: | We investigate the classical and quantum dynamics of an electron confined to a circular quantum dot in the presence of homogeneous $B_{dc}+B_{ac}$ magnetic fields. The classical motion shows a transition to chaotic behavior depending on the ratio $\epsilon=B_{ac}/B_{dc}$ of field magnitudes and the cyclotron frequency ${\tilde\omega_c}$ in units of the drive frequency. We determine a phase boundary between regular and chaotic classical behavior in the $\epsilon$ vs ${\tilde\omega_c}$ plane. In the quantum regime we evaluate the quasi-energy spectrum of the time-evolution operator. We show that the nearest neighbor quasi-energy eigenvalues show a transition from level clustering to level repulsion as one moves from the regular to chaotic regime in the $(\epsilon,{\tilde\omega_c})$ plane. The $\Delta_3$ statistic confirms this transition. In the chaotic regime, the eigenfunction statistics coincides with the Porter-Thomas prediction. Finally, we explicitly establish the phase space correspondence between the classical and quantum solutions via the Husimi phase space distributions of the model. Possible experimentally feasible conditions to see these effects are discussed. Pacs: 05.45.+b address: | [*Department of Physics and Center for Interdisciplinary Research on Complex Systems,\ Northeastern University, Boston Massachusetts 02115, USA*]{} author: - 'R. Badrinarayanan and Jorge V. José' title: | Classical and Quantum Chaos in a quantum dot\ in time-periodic magnetic fields --- Introduction {#sec:intro} ============ In this paper, we present results of a study of the behavior of an electron confined to a disk of finite radius, subjected to spatially uniform, constant ($B_{dc}$) plus time-varying ($B_{ac}$) perpendicular magnetic fields. This allows us to analyze an old problem which exhibits some very novel behavior because of the time-dependent field. 
Without this time varying component of the field, the electronic states form the oscillator-like Landau levels[@fock]. With the addition of confinement, this constant field problem was studied in great detail by Dingle[@dingle]. He obtained perturbative solutions, and subsequently others obtained numerical and exact[@robnik] solutions. The solutions depend on the ratio of the cyclotron radius $\rho_c$ to the confinement radius $R_0$. One of the most important consequences of confinement is the presence of ‘skipping orbits’, which play an important role, for example, in the Quantum Hall Effect[@prange]. This problem is of significant interest as a consequence of two independent developments over the past few years. One, the important advances in our knowledge of classical chaos[@ll], and to a lesser extent, its quantum and semiclassical counterparts[@casati1]; and two, the spectacular advances in the fabrication of very clean mesoscopic quantum devices[@beenakker], where a high-mobility two-dimensional electron gas is trapped within a boundary of controlled shape. We attempt to begin to bring the two fields together by asking how this model system behaves from the classical dynamical point of view and what its quantum signatures are. We predict ranges of fields and frequencies where some novel effects may be experimentally observable. In this paper, we consider the single-electron case and leave the many-electron problem for a future publication. This paper is organized as follows: In section II we define the model with its classical and quantum-mechanical properties, elucidate the important parameters in the problem and describe the general method of solution. In section III, we investigate the properties of the classical model. Based on a combination of analytic and numerical analysis, we obtain a ‘phase diagram’ in the parameter space of the system, which separates the quasi-integrable from the chaotic regions. This phase diagram is shown in Fig.1.
The vertical axis is the ratio $\epsilon=B_{ac}/B_{dc}$ of the magnitudes of the fields, and the horizontal axis is the Larmor frequency normalized to the [*a.c.*]{} drive frequency, ${\tilde\omega_c}=\omega_c/\omega_0$. This phase diagram is of paramount importance in making the connection between the classical and quantum solutions. The values of the d.c. field $B_{dc}$ and drive frequency $\omega_0$ depend on the radius of the dot $R_0$ and certain other parameters. However, to give an idea of the magnitudes of the physical parameters involved, let us pick two representative points on the diagram: $({\tilde{\omega_c}},\epsilon)$ = (0.1, 0.1) corresponds to $\omega_0$ = 20 GHz and $B_{dc}$ = 20 gauss when $R_0 = 1\mu m$, while $\omega_0$ = 800 MHz and $B_{dc}$ = 0.08 gauss for $R_0 = 5\mu m$. Similarly, $({\tilde{\omega_c}},\epsilon)$ = (2.0, 2.0) corresponds to $\omega_0$ = 20 GHz and $B_{dc}$ = 800 gauss for $R_0 = 1\mu m$, while $\omega_0$ = 20 GHz and $B_{dc}$ = 32 gauss for $R_0 = 5\mu m$. The details of these estimates are presented in Section V. We analytically obtain conditions and look at various kinds of fixed points of the classical solutions. In section IV we study the spectral statistics of the quantum evolution operator, which shows clear signatures of the classical transition from quasi-integrability to chaos. We also discuss the eigenfunction properties in different regimes using the $\chi^2$ distribution with $\nu$ degrees of freedom as a convenient parameterization of the results. Then, we turn to semiclassical correspondences, where we use a phase-space approach to the quantum eigenfunctions, and make direct connections with various types of classical phase space periodic orbits. In section V we discuss possible experimental scenarios where the predicted effects may be observable. Finally, in section VI we summarize our results and present our conclusions.
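The spectral diagnostics used in section IV can be illustrated on surrogate spectra (this sketch is not the paper's data): independent, uncorrelated quasi-energies show the Poisson spacing law $P(s)=e^{-s}$ characteristic of level clustering, while the eigenphases of a Haar-random unitary matrix — used here merely as a stand-in for a generic spectrum with level repulsion — show $P(s)\to 0$ as $s\to 0$.

```python
import numpy as np

def nn_spacings(phases):
    """Normalized nearest-neighbour spacings of eigenphases on [0, 2*pi)."""
    p = np.sort(np.mod(phases, 2.0 * np.pi))
    s = np.diff(p)
    s = np.append(s, p[0] + 2.0 * np.pi - p[-1])   # wrap-around spacing
    return s / s.mean()

rng = np.random.default_rng(0)

# Surrogate for the regular regime: independent uniform quasi-energies.
regular = nn_spacings(rng.uniform(0.0, 2.0 * np.pi, 2000))

# Surrogate for the chaotic regime: eigenphases of a Haar-random unitary.
A = rng.standard_normal((400, 400)) + 1j * rng.standard_normal((400, 400))
Q, R = np.linalg.qr(A)
Q = Q * (np.diagonal(R) / np.abs(np.diagonal(R)))  # fix phases -> Haar measure
chaotic = nn_spacings(np.angle(np.linalg.eigvals(Q)))

# Clustering vs repulsion shows up in the fraction of very small spacings:
frac_small_regular = (regular < 0.1).mean()   # Poisson: ~ 1 - exp(-0.1)
frac_small_chaotic = (chaotic < 0.1).mean()   # repulsion: nearly zero
print(frac_small_regular, frac_small_chaotic)
```

The same histogram-of-spacings analysis, applied to the quasi-energies of the model's time-evolution operator, is what distinguishes the regular and chaotic regions of the $(\epsilon,{\tilde\omega_c})$ phase diagram.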
The Model {#sec:model} ========= The model of a quantum dot we consider here is that of an electron confined to a disk of radius $R_0$ subject to steady ([*d.c.*]{}) and time-periodic ([*a.c.*]{}) magnetic fields. Choosing the cylindrical gauge, where the vector potential ${\bf A}(\vec \rho,t) = {1\over 2}B(t)\, \rho\, \hat e_\phi$, $B(t)$ being the time-dependent magnetic field, the quantum mechanical single-particle Hamiltonian in the coordinate representation is given by $$\label{eq:a} H = -\frac{{\hbar^2}}{2m^*}\left( \frac{d^2}{d\rho^2} + \frac{1}{\rho}\frac{d}{d\rho} + \frac{1}{\rho^2}\frac{d^2}{d\phi^2} \right) + \frac{1}{8} m^* \Omega^2(t) \rho^2 + \frac{1}{2} \Omega(t) L_z, \quad 0 \leq \rho \leq R_0,$$ where $m^*$ is the effective mass of the electron (roughly 0.067$m_e$ in GaAs-AlGaAs semiconductor quantum dots) [@beenakker], $L_z$ is the operator of the conserved angular momentum, and $\Omega(t) = e^{*}B(t)/m^*c$, $e^{*}$ and $c$ being the effective electronic charge ($e^{*}\sim 0.3e$) and the speed of light, respectively. Let the magnetic field be of the form $B(t) = B_{dc} + B_{ac} f(t)$, where $f(t)=f(t+T_0)$ is some periodically time-varying function. We can separate the Hamiltonian $H=H_{dc} + H_1(t)$, where $$\label{eq:b} H_{dc} = -\frac{{\hbar^2}}{2m^*}\left( \frac{d^2}{d\rho^2} + \frac{1}{\rho}\frac{d}{d\rho}\right) + \frac{{\hbar^2\ell ^2}}{2m^*}\frac{1}{\rho^2} + \frac{1}{8} m^* \omega_{c}^2 \rho^2 + \frac{1}{2} \ell \hbar \omega_c,$$ and $H_1(t)=\frac{1}{8}m^*\left(\frac{e^{*}}{m^{*}c}\right)^2\left(2B_{dc}B_{ac}f(t) + B_{ac}^2f^2(t)\right)\rho^2$. Here $H_{dc}$ is the standard static Hamiltonian for a charge in a homogeneous constant perpendicular magnetic field, which includes the para- and diamagnetic contributions, with $\omega_c = \frac{e^{*} B_{dc}}{m^* c}$. After additionally dropping a term of the form $L_z B_{ac}f(t)$, which can trivially be removed by a unitary transformation, $H_1(t)$ gives the time-dependent contribution to $H$. 
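As a quick algebraic check (ours, not from the paper), the splitting of the diamagnetic term $\frac{1}{8}m^{*}\Omega^{2}(t)\rho^{2}$ into its static part plus $H_1(t)$ can be verified numerically; note the factor $(e^{*}/m^{*}c)^{2}$ in the check below, which converts the products of field amplitudes into squared frequencies and is implicit in the definition of $\Omega(t)$.

```python
import random

# Numeric verification (ours) that (1/8) m* Omega(t)^2 rho^2 equals the static
# piece (1/8) m* omega_c^2 rho^2 plus H_1(t), with Omega(t) = (e*/m*c) B(t) and
# B(t) = B_dc + B_ac f(t).  The (e*/m*c)^2 factor multiplies the field products.
random.seed(2)
for _ in range(5):
    m, c, e, rho = (random.uniform(0.5, 2.0) for _ in range(4))
    b_dc, b_ac, f = (random.uniform(-1.0, 1.0) for _ in range(3))

    omega_t = e * (b_dc + b_ac * f) / (m * c)   # Omega(t)
    omega_c = e * b_dc / (m * c)                # static cyclotron frequency

    full = m * omega_t**2 * rho**2 / 8.0
    static = m * omega_c**2 * rho**2 / 8.0
    h1 = m * (e / (m * c))**2 * (2.0 * b_dc * b_ac * f + (b_ac * f)**2) * rho**2 / 8.0

    assert abs(full - (static + h1)) < 1e-9
```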
Note that $H_1(t)=H_1(t+T_0)$. In the limit in which $H_1(t)$ is much smaller than $H_{dc}$, one can study the modifications to the solutions associated with $H_{dc}$ by standard
--- abstract: 'We prove that for a chainable continuum $X$ and every $x\in X$ with only finitely many coordinates contained in a zigzag there exists a planar embedding $\phi:X\to \phi(X)\subset\R^2$ such that $\phi(x)$ is accessible, partially answering a question of Nadler and Quinn from 1972. Two embeddings $\phi,\psi:X \to \R^2$ are called strongly equivalent if $\phi \circ \psi^{-1}: \psi(X) \to \phi(X)$ can be extended to a homeomorphism of $\R^2$. We also prove that every nondegenerate indecomposable chainable continuum can be embedded in the plane in uncountably many ways that are not strongly equivalent.' address: - 'Departamento de Matemática Aplicada, IME-USP, Rua de Matão 1010, Cidade Universitária, 05508-090 São Paulo SP, Brazil' - 'Faculty of Mathematics, University of Vienna, Oskar-Morgenstern-Platz 1, A-1090 Vienna, Austria' - 'AGH University of Science and Technology, Faculty of Applied Mathematics, al. Mickiewicza 30, 30-059 Kraków, Poland. – and – National Supercomputing Centre IT4Innovations, Division of the University of Ostrava, Institute for Research and Applications of Fuzzy Modeling, 30. dubna 22, 70103 Ostrava, Czech Republic' author: - 'Ana Anušić, Henk Bruin, Jernej Činč' title: Planar embeddings of chainable continua --- [^1] Introduction ============ It is well-known that every chainable continuum can be embedded in the plane, see [@Bing]. In this paper we develop methods to study nonequivalent planar embeddings, similar to methods used by Lewis in [@Lew] and Smith in [@Sm] for the study of planar embeddings of the pseudo-arc. Following Bing’s approach from [@Bing] (see Lemma \[lem:patterns\]), we construct nested intersections of discs in the plane which are small tubular neighborhoods of polygonal lines obtained from the bonding maps. Later we show that this approach produces all possible planar embeddings of chainable continua which can be covered with planar chains with *connected* links, see Theorem \[thm:allemb\]. 
From this we can produce uncountably many nonequivalent planar embeddings of the same chainable continuum. \[def:equivembed\] Let $X$ be a chainable continuum. Two embeddings $\phi,\psi:X \to \R^2$ are called [*equivalent*]{} if there is a homeomorphism $h$ of $\R^2$ such that $h(\phi(X)) = \psi(X)$. They are [*strongly equivalent*]{} if $\psi \circ \phi^{-1}: \phi(X)\to \psi(X)$ can be extended to a homeomorphism of $\R^2$. That is, equivalence requires some homeomorphism between $\phi(X)$ and $\psi(X)$ to be extended to $\R^2$, whereas strong equivalence requires the homeomorphism $\psi \circ \phi^{-1}$ between $\phi(X)$ and $\psi(X)$ to be extended to $\R^2$. Clearly, strong equivalence implies equivalence, but in general not the other way around; see for instance Remark \[rem:n\_emb\]. We say a nondegenerate continuum is [*indecomposable*]{} if it is not the union of two proper subcontinua. \[q:uncountably\] Are there uncountably many nonequivalent planar embeddings of every chainable indecomposable continuum? This question is listed as Problem 141 in a collection of continuum theory problems from 1983 by Lewis [@LewP] and was also posed by Mayer in his dissertation in 1983 [@MayThesis] (see also [@May]) using the standard definition of equivalent embeddings. We give a positive answer to the adaptation of the above question using strong equivalence; see Theorem \[thm:Mayer\]. If the continuum is the inverse limit space of a unimodal map and not hereditarily decomposable, then the result holds for both definitions of equivalence; see Remark \[rem:otherdef\]. In terms of equivalence, this generalizes the result in [@embed], where we prove that every unimodal inverse limit space with bonding map of positive topological entropy can be embedded in the plane in uncountably many nonequivalent ways. The special construction in [@embed] uses symbolic techniques which enable direct computation of accessible sets and prime ends (see [@AC]). 
Here we utilize a more direct geometric approach. One of the main motivations for the study of planar embeddings of tree-like continua is the question of whether the *plane fixed point property* holds. The problem is considered to be one of the most important open problems in continuum theory. Is it true that every continuum $X \subset \R^2$ not separating the plane has the fixed point property, i.e., that every continuous $f: X\to X$ has a fixed point? There are examples of tree-like continua without the fixed point property; see Bellamy’s example in [@Bell]. It is not known whether Bellamy’s example can be embedded in the plane. Although chainable continua are known to have the fixed point property (see [@Ha]), insight into their planar embeddings may be of use to the general setting of tree-like continua. Another motivation for this study is the following long-standing open problem. For this we use the following definition. Let $X\subset\R^2$. We say that $x\in X$ is [*accessible*]{} (from the complement of $X$) if there exists an arc $A\subset\R^2$ such that $A\cap X=\{x\}$. \[Nadler and Quinn 1972, pg. 229 in [@Nadler]\] \[q:NaQu\] Let $X$ be a chainable continuum and $x\in X$. Can $X$ be embedded in the plane such that $x$ is accessible? We will introduce the notion of a *zigzag* related to the admissible permutations of graphs of bonding maps and answer Nadler and Quinn’s question in the affirmative for the class of *non-zigzag* chainable continua (see Corollary \[cor:nonzigzag\]). From the other direction, a promising possible counterexample to Question \[q:NaQu\] is the one suggested by Minc (see Figure \[fig:Minc\] and the description in [@Minc]). However, the currently available techniques are insufficient to determine whether the point $p\in X_M$ can be made accessible or not, even with the use of thin embeddings; see Definition \[def:thin\]. Section \[sec:notation\] gives basic notation, and we review the construction of natural chains in Section \[sec:chains\]. 
Section \[sec:permuting\] describes the main technique of permuting branches of graphs of linear interval maps. In Section \[sec:stretching\] we connect the techniques developed in Section \[sec:permuting\] to chains. Section \[sec:emb\] applies the techniques developed so far to accessibility of points of chainable planar continua; this is the content of Theorem \[thm:algorithm\], which is used as a technical tool afterwards. Section \[sec:zigzags\] introduces the concept of zigzags of a graph of an interval map. Moreover, it gives a partial answer to Question \[q:NaQu\] and provides some interesting examples by applying the results from this section. Section \[sec:thin\] gives a proof that the permutation technique yields all possible thin planar embeddings of chainable continua. Furthermore, we pose some related open problems at the end of this section. Finally, Section \[sec:nonequivalent\] completes the construction, for every chainable continuum containing a nondegenerate indecomposable subcontinuum, of uncountably many planar embeddings that are not equivalent in the strong sense, thus answering Question \[q:uncountably\] for strong equivalence. We conclude the paper with some remarks and open questions emerging from the study in the final section. Notation {#sec:notation} ======== Let $\N = \{ 1,2,3,\dots\}$ and $\N_0=\{0,1,2,3,\dots\}$ be the positive and nonnegative integers. Let $f_i: I=[0,1]\to I$ be continuous surjections for $i\in\N$ and let the [*inverse limit space*]{} $$X_{\infty}=\underleftarrow{\lim}\{I, f_i\}=\{(x_0, x_1, x_2, \dots): f_i(x_i)=x_{i-1},\ i\in\N\}$$ be equipped with the subspace topology inherited from the product topology of $I^{\infty}$. Let $\pi_i: X_{\infty}\to I$ be the [*coordinate projections*]{} for $i\in\N_0$. Let $X$ be a metric space. A *chain in $X$* is a set $\chain=\{\ell_1,\ldots, \ell_n\}$ of open subsets of $X$ called *links*, such that $\ell_i\cap\ell_j\neq\emptyset$ if and only if $|i-j|\leq 1$. 
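To make the inverse limit definition concrete, here is a small illustrative sketch (ours, not from the paper): taking the full tent map as every bonding map, a point of $X_{\infty}$ is approximated by a finite backward orbit, choosing one of the two preimages at each step.

```python
# Illustrative sketch (not from the paper): approximating a point of the
# inverse limit of the full tent map T(x) = 1 - |2x - 1| by a backward orbit
# (x_0, x_1, x_2, ...) with T(x_{i+1}) = x_i, picking one preimage per step.

def tent(x):
    return 1.0 - abs(2.0 * x - 1.0)

def backward_orbit(x0, branches):
    """Build (x_0, ..., x_n) with tent(x_{i+1}) = x_i.

    branches[i] = 0 picks the left preimage x/2, 1 picks the right 1 - x/2.
    """
    orbit = [x0]
    for b in branches:
        x = orbit[-1]
        orbit.append(x / 2.0 if b == 0 else 1.0 - x / 2.0)
    return orbit

orbit = backward_orbit(0.37, [0, 1, 1, 0, 1])
# Consistency check: applying the bonding map recovers the previous coordinate.
assert all(abs(tent(orbit[i + 1]) - orbit[i]) < 1e-12 for i in range(len(orbit) - 1))
```

Distinct branch sequences converge to distinct points of $X_{\infty}$, which is one way to picture how rich the space of such points is.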
If also $\cup_{i=1}^n \
--- abstract: '[We study experimentally and theoretically a cold trapped Bose gas under critical rotation, *i.e.* with a rotation frequency close to the frequency of the radial confinement. We identify two regimes: the regime of explosion where the cloud expands to infinity in one direction, and the regime where the condensate spirals out of the trap as a rigid body. The former is realized for a dilute cloud, and the latter for a Bose-Einstein condensate with the interparticle interaction exceeding a critical value. This constitutes a novel system in which repulsive interactions help in maintaining particles together.]{}' author: - 'P. Rosenbusch$^1$, D.S. Petrov$^{2,3}$, S. Sinha$^1$, F. Chevy$^1$, V. Bretin$^1$, Y. Castin$^1$, G. Shlyapnikov$^{1,2,3}$, and J. Dalibard$^1$' date: Received title: Critical rotation of a harmonically trapped Bose gas --- The rotation of a macroscopic quantum object is a source of spectacular and counter-intuitive phenomena. In superfluid liquid helium contained in a cylindrical bucket rotating around its axis $z$, one observes the nucleation of quantized vortices for a sufficiently large rotation frequency $\Omega$ [@Donnelly]. A similar phenomenon occurs in a Bose-Einstein condensate confined in a rotating harmonic trap [@ENS; @MIT; @Boulder; @Oxford]. In particular, vortices are nucleated when the rotation resonantly excites surface modes of the condensate. This occurs for particular rotation frequencies in the interval $0<\Omega \leq \omega_\bot/\sqrt{2}$, where $\omega_\bot$ is the trap frequency in the $xy$ plane perpendicular to the rotation axis $z$. Several theoretical studies have recently considered the critical rotation of the gas, *i.e.* $\Omega\sim \omega_\bot$, which presents remarkable features [@Rokhsar; @Mottelson; @Gunn; @Stringari; @Zoller; @Ho; @Fetter; @Baym; @Sinova]. From a classical point of view, for $\Omega=\omega_\bot$ the centrifugal force compensates the harmonic trapping force in the $xy$ plane. 
Hence the motion of a single particle of mass $m$ in the frame rotating at frequency $\Omega$ is simply due to the Coriolis force $2 m {\bf{\dot r}}\times {\bf \Omega} $. This force is identical to the Lorentz force acting on a particle of charge $q$ in the magnetic field ${\bf B}= 2 (m/q)\, {\bf \Omega}$. The analogy between the motion of charged particles in a magnetic field and neutral particles in a rotating frame also holds in quantum mechanics. In this respect, a quantum gas of atoms confined in a harmonic trap rotating at the critical frequency is analogous to an electron gas in a uniform magnetic field. One can then expect [@Gunn; @Zoller] to observe phenomena related to the Quantum Hall Effect. This paper presents an experimental and theoretical study of the dynamics of a magnetically trapped rubidium ($^{87}$Rb) gas stirred at a frequency close to $\omega_\bot$. We show that the single-particle motion is dynamically unstable for a window of frequencies $\Omega$ centered around $\omega_\bot$. This result entails that the center-of-mass of the atom cloud (with or without interatomic interactions) is destabilized, since its motion is decoupled from any other degree of freedom for a harmonic confinement. This also implies that a gas of non-interacting particles “explodes”, which we indeed check experimentally. When one takes into account the repulsive interactions between particles, which play an important role in a $^{87}$Rb condensate, one would naively expect that this explosion is enhanced. However, we show experimentally that this is not the case, and repulsive interactions can “maintain the atoms together”. This has been predicted for a Bose-Einstein condensate in the strongly interacting (Thomas-Fermi, TF) regime [@Stringari]. Here we derive the minimal interaction strength which is necessary to prevent the explosion. This should help studies of Quantum-Hall-related physics in the regime of critical rotation. 
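For orientation, the analogy ${\bf B}= 2 (m/q)\, {\bf \Omega}$ can be given a back-of-envelope number. The sketch below is ours; it takes a representative trap frequency $\omega_\bot = 2\pi\times 180$ Hz and, by the usual convention for this analogy, the elementary charge for $q$.

```python
import math

# Back-of-envelope sketch (ours): the effective magnetic field B = 2 (m/q) Omega
# felt by a neutral atom in a frame rotating at the critical frequency.
# Assumptions: 87Rb mass, elementary charge for q, and a representative
# trap frequency omega_perp = 2*pi*180 Hz.
M_RB87 = 86.909 * 1.6605e-27   # kg, 87Rb atomic mass
E_CHARGE = 1.602e-19           # C

omega_perp = 2.0 * math.pi * 180.0                 # rad/s
b_eff = 2.0 * (M_RB87 / E_CHARGE) * omega_perp     # tesla
print(f"effective field: {b_eff * 1e4:.1f} gauss")
```

The result is a few tens of gauss, i.e. a rather weak effective field on laboratory scales.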
Consider a gas of particles confined in an axisymmetric harmonic potential $V_0({\bf r})$, with frequency $\omega_z$ along the trap axis $z$, and $\omega_\bot$ in the $xy$ plane. To set this gas into rotation, one superimposes a rotating asymmetric potential in the $xy$ plane. In the reference frame rotating at an angular frequency $\Omega$ around the $z$ axis, this potential reads $V_1({\bf r})=\epsilon m\omega_\bot^2 (Y^2 -X^2)/2$, where $\epsilon>0$. The rotating frame coordinates $X,Y$ are deduced from the lab frame coordinates $x,y$ by a rotation at an angle $\Omega t$. For a non-interacting gas, the equation of motion for each particle reads: $$\begin{aligned} & & \ddot X -2 \Omega \dot Y+\left(\omega_\bot^2(1-\epsilon)-\Omega^2 \right)X=0 \label{eq:Xmotion}\\ & & \ddot Y +2 \Omega \dot X+\left(\omega_\bot^2(1+\epsilon)-\Omega^2 \right)Y=0, \label{eq:Ymotion}\end{aligned}$$ while the motion along $z$ is not affected by the rotation. One deduces from this set of equations that the motion in the $xy$ plane is dynamically unstable if the stirring frequency $\Omega$ is in the interval $[\omega_\bot \;\sqrt{1-\epsilon},\omega_\bot\; \sqrt{1+\epsilon}]$. In particular, for $\Omega=\omega_\bot$ and $\epsilon \ll 1$, one finds that the quantity $X+Y$ diverges as $\exp{(\epsilon \omega_\bot t/2)}$, whereas $X-Y$ remains finite. To test this prediction we use a $^{87}$Rb cold gas in a Ioffe-Pritchard magnetic trap, with frequencies $\omega_x= \omega_y=2\pi\times 180$ Hz, and $\omega_z=2\pi \times 11.7$ Hz. The initial temperature of the cloud pre-cooled using optical molasses is 100 $\mu$K. The gas is further cooled by radio-frequency evaporation. For the first set of experiments we stop the evaporation before the Bose-Einstein condensation is reached. The resulting sample contains $10^7$ atoms at a temperature $T\sim 5\,\mu$K. It is dilute, with a central density $\sim 10^{12}$ cm$^{-3}$, and atomic interactions can be neglected (mean-field energy $\ll k_B T$). 
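The dynamical instability can be illustrated numerically. The following sketch (ours, not from the paper) integrates Eqs. (\[eq:Xmotion\])-(\[eq:Ymotion\]) at $\Omega=\omega_\bot$ with a plain RK4 stepper and checks that the in-plane displacement grows at the predicted rate $\epsilon \omega_\bot/2$.

```python
import math

# Numerical sketch (ours): integrate the rotating-frame equations of motion at
# Omega = omega_perp with a plain RK4 stepper and extract the growth rate of
# the in-plane displacement, predicted to be eps * omega_perp / 2.
OMEGA = 1.0   # omega_perp sets the unit of frequency
EPS = 0.09

def deriv(state):
    x, y, vx, vy = state
    # Xdd - 2 Omega Yd + (w^2 (1 - eps) - Omega^2) X = 0, and likewise for Y
    ax = 2.0 * OMEGA * vy - (OMEGA**2 * (1.0 - EPS) - OMEGA**2) * x
    ay = -2.0 * OMEGA * vx - (OMEGA**2 * (1.0 + EPS) - OMEGA**2) * y
    return (vx, vy, ax, ay)

def rk4_step(state, dt):
    def shift(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = deriv(state)
    k2 = deriv(shift(state, k1, dt / 2.0))
    k3 = deriv(shift(state, k2, dt / 2.0))
    k4 = deriv(shift(state, k3, dt))
    return tuple(s + dt / 6.0 * (a + 2.0 * b + 2.0 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

state = (1.0, 0.0, 0.0, 0.0)   # small initial offset, at rest
dt, logs = 0.01, {}
for n in range(1, 20001):
    state = rk4_step(state, dt)
    if n in (10000, 20000):    # t = 100 and t = 200
        logs[n] = math.log(math.hypot(state[0], state[1]))

rate = (logs[20000] - logs[10000]) / (10000 * dt)
print(f"growth rate {rate:.4f} vs prediction {EPS * OMEGA / 2.0:.4f}")
```

With $\epsilon = 0.09$ the fitted rate agrees with $\epsilon\omega_\bot/2$ to well under a percent.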
The second set of experiments corresponds to a much colder sample ($T<50$ nK), *i.e.* to a quasi-pure condensate with $10^5$ atoms. After evaporative cooling, the atomic cloud is stirred during an adjustable period $t$ by a focused laser beam of wavelength $852$ nm and waist $w_0=20 \mu$m, whose position is controlled using acousto-optic modulators [@ENS]. The beam is switched on abruptly and it creates a rotating optical-dipole potential which is nearly harmonic over the extension of the cloud. We measure the transverse density profile of the condensate after a period of free expansion. In this pursuit, we suddenly switch off the magnetic field and the stirrer, allow for a 25 ms free-fall, and image the absorption of a resonant laser by the expanded cloud. The imaging beam propagates along the $z$ axis. We fit the density profile of the sample assuming a Gaussian shape for the non-condensed cloud, and a parabolic TF shape for the quasi-pure condensate. We extract from the fit the long and short diameters in the plane $z=0$, and the average position of the cloud. The latter gives access to the velocity of the center-of-mass of the atom cloud before time of flight. ![Center-of-mass displacement after free expansion (log-scale) *vs.* stirring time for $\Omega\!=\!\omega_\bot$ and $\epsilon\!=\!0.09$. (a) Non-condensed cloud with $10^7$ atoms, $T\!=\!5\;\mu$K; (b) Condensate with $10^5$ atoms. Solid line: exponential fit to the data.[]{data-label="fig:CMinstability"}](fig1.eps) The center-of-mass displacement as a function of the stirring time $t$ is shown in Fig. \[fig:CMinstability\]. We choose here $\epsilon=0.09$ and $\Omega=\omega_\bot$, so that the motion predicted by Eqs.(\[eq:Xmotion\]-\[eq:Ymotion\]) is dynamically unstable. To ensure reliable initial conditions, we deliberately offset the center of the rotating potential by a few micrometers with respect to the atom cloud. 
We find the instability for the center-of-mass motion both for the non-condensed cloud (Fig. \[fig:CMinstability\]a) and for the quasi-pure condensate (Fig. \[fig:CMinstability\]b). The center-of-mass displacement increases exponentially, with an exponent consistent with the measured $\epsilon$. We consider now the evolution of the size of the atom cloud as a function of $t$ (Fig. \[fig:sizeincrease\]).
--- author: - 'R. Liseau' - 'C. Risacher' - 'A. Brandeker' - 'C. Eiroa' - 'M. Fridlund' - 'R. Nilsson' - 'G. Olofsson' - 'G.L. Pilbratt' - | \ P. Thébault date: 'Received ; accepted ' title: 'q$^{1}$Eri: a solar-type star with a planet and a dust belt[^1]' --- [Far-infrared excess emission from main-sequence stars is due to dust produced by orbiting minor bodies. In these disks, larger bodies, such as planets, may also be present and the understanding of their incidence and influence currently presents a challenge.]{} [Only very few solar-type stars exhibiting an infrared excess and harbouring planets are known to date. Indeed, merely a single case of a star-planet-disk system has previously been detected at submillimeter (submm) wavelengths. Consequently, one of our aims is to understand the reasons for these poor statistics, i.e., whether these results reflected the composition and/or the physics of the planetary disks or were simply due to observational bias and selection effects. Finding more examples would be very significant.]{} [The selected target, [q$^{1}$Eri]{}, is a solar-type star, which was known to possess a planet, [q$^{1}$Eri]{}b, and to exhibit excess emission at IRAS wavelengths, but had remained undetected in the millimeter regime. Therefore, submm flux densities would be needed to better constrain the physical characteristics of the planetary disk. Consequently, we performed submm imaging observations of [q$^{1}$Eri]{}.]{} [The dust detected toward [q$^{1}$Eri]{} at 870[$\mu$m]{} reveals the remarkable fact that the entire SED, from the IR to mm wavelengths, is fitted by a single-temperature blackbody function (60K). This would imply that the emitting regions are confined to a narrow ring at radial distances much larger than the orbital distance of [q$^{1}$Eri]{}b, and that the emitting particles are considerably larger than a hundred microns. 
However, the 870[$\mu$m]{} source is extended, with a full-width-half-maximum of roughly 600AU. Therefore, a physically more compelling model also invokes a belt of cold dust (17K), located at 300AU from the star and about 60AU wide.]{} [The minimum mass of 0.04[$M_{\oplus}$]{} (3[$M_{\rm Moon}$]{}) of 1mm-size icy ring-particles is considerable, given the stellar age of [$\stackrel {>}{_{\sim}}$]{}1Gyr. These big grains form an inner edge at about 25AU, which may suggest the presence of an unseen outer planet ([q$^{1}$Eri]{}c). ]{} Introduction ============ During the end stages of early stellar evolution, dusty debris disks are believed to be descendents of gas-rich protoplanetary disks. These had been successful to varying degrees in building a planetary system. What exactly determines the upper cut-off mass of the bodies in individual systems, and on what time scales, is not precisely known. However, the presence of debris around matured stars is testimony to the action of orbiting bodies, where a large number of smaller ones are producing the dust through collisional processes and where a small number of bigger bodies, if any, are determining the topology (disks, rings and belts, clumps) through gravitational interaction. The time evolution of the finer debris is believed to be largely controlled by non-gravitational forces, though. By analogy, many debris disks are qualitatively not very different from the asteroid and Kuiper belts and the zodiacal dust cloud in the solar system [@mann2006]. For solar-type stars on the main-sequence, which are known to exhibit infrared excess due to dust disks, one might expect, therefore, a relatively high incidence of planetary systems around them. Surveying nearly 50 FGK stars with known planets for excess emission at 24[$\mu$m]{} and 70[$\mu$m]{}, @trilling2008 detected about 10-20% at 70[$\mu$m]{}, but essentially none at 24[$\mu$m]{}, implying that these planetary disks are cool ($<100$K) and large ($> 10$AU). 
However, in general, the conjecture that the infrared excess arises from disks lacks as yet observational confirmation due to insufficient spatial resolution. In fact, until very recently, there was only one main-sequence system known that has an extended, resolved disk/belt structure and (at least) one giant planet, viz. [$\epsilon \, {\rm Eri}$]{}, a solar-type star at the distance of only three parsec [@greaves1998; @greaves2005]. Its planetary companion, [$\epsilon \, {\rm Eri}$]{}b, has been detected indirectly by astrometric and radial velocity (RV) methods applied to the star [@hatzes2000; @benedict2006], whereas attempts to directly detect the planet have so far been unsuccessful [@itoh2006; @janson2007]. As its name indicates, the object of the present study, [q$^{1}$Eri]{}, happens to belong to the same celestial constellation of Eridanus, albeit at a larger distance ($D=17.35 \pm 0.2$pc) and is, as such, unrelated to [$\epsilon \, {\rm Eri}$]{}. The planet was discovered with the RV technique [for a recent overview, see @butler2006]. These RV data suggest that the semimajor axis of the Jupiter-mass planet [q$^{1}$Eri]{}b is about 2AU (Table\[star\]). It seems likely that regions inside this orbital distance have been largely cleared by the planet, whereas outside the planetary orbit, substantial amounts of material might still be present. In fact, IRAS and ISO data were suggestive of significant excess radiation above the photospheric emission at wavelengths longward of about 20[$\mu$m]{}. @zuckerman2004 interpreted these data in terms of dust in a disk at the orbital distance of 30AU and at a temperature of about 55K. @chen2006 fitted the far-infrared emission with the corresponding values of 20AU and 70K, respectively. @trilling2008 derived 20AU and 60K. In their entire sample of more than 200 stars, [q$^{1}$Eri]{} (=HD10647) has by far the highest 70[$\mu$m]{} excess. 
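As a minimal illustration of the single-temperature fits quoted above (our sketch, not from the paper): a 60 K blackbody peaks near 50 $\mu$m in $B_\lambda$, i.e. not far from the 70 [$\mu$m]{} band where the excess is strongest, while 870 [$\mu$m]{} lies far out on the Rayleigh-Jeans tail.

```python
import math

# Illustrative sketch (ours): locate the peak of the Planck function B_lambda
# for T = 60 K, the single blackbody temperature quoted in the fits above.
H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # SI: Planck, light speed, Boltzmann

def planck_lambda(lam_m, t_k):
    """Spectral radiance B_lambda in SI units."""
    return (2.0 * H * C**2 / lam_m**5
            / (math.exp(H * C / (lam_m * KB * t_k)) - 1.0))

t = 60.0
grid = [i * 1e-7 for i in range(50, 5000)]          # 5 to 500 microns
lam_peak = max(grid, key=lambda lam: planck_lambda(lam, t))
print(f"peak of B_lambda at T = {t} K: {lam_peak * 1e6:.1f} microns")
```

The numerical peak agrees with Wien's displacement law, $\lambda_{\rm max} T \approx 2898\,\mu$m K.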
At mm-wavelengths, @schutz2005 failed to detect the disk and assigned an upper limit to the dust mass of 6[$M_{\rm Moon}$]{}. This is unsatisfactory, as the proper characterization of the dust around [q$^{1}$Eri]{} would require valid long wavelength data. In the following, observations of [q$^{1}$Eri]{} at 870[$\mu$m]{} are described and their implications discussed. Observations and Data Reductions ================================ APEX, the Atacama Pathfinder EXperiment, is a 12m diameter submillimeter telescope situated at an altitude of 5100m on the Llano Chajnantor in northern Chile. The telescope is operated by the Onsala Space Observatory, the Max-Planck-Institut für Radioastronomie, and the European Southern Observatory. The Large Apex BOlometer CAmera [LABOCA, @siringo2007] is a multi-channel bolometer array for continuum observations with 60GHz band width and centered on the wavelength of 870[$\mu$m]{}. The array, having a total field of view of 11[$^{\prime}$]{}, is spatially undersampled and we therefore adopted spiral pattern observing as the appropriate technique [@siringo2007]. This procedure results in fully-sampled maps with a uniform noise distribution over an area of about 8[$^{\prime}$]{}. During the nights of August1-4, 2007, we obtained 32 such individual maps, for about 7.5min each with central coordinates RA= and Dec= (J2000). The LABOCA beam width at half power (HPBW) is $\pm$ . We focussed LABOCA on the planet Jupiter and the rms-pointing accuracy of the telescope was 3[$^{\prime \prime}$]{} to 4[$^{\prime \prime}$]{}. We reduced the data with the BoA software [@siringo2007], which included flat fielding, baseline removal, despiking and iteratively removing the sky noise, and filtering out the low frequencies of the $1/f$-noise, with the cut-off frequency corresponding to several arcminutes. 
The software also accounts for the map reconstruction and the absolute calibration, using the opacities determined from numerous skydips (zenith opacities were in the range 0.1 to 0.3) and observations of the planets Uranus and Mars. The final result is an rms-noise-weighted average map (Fig.\[obs\]). [ll]{} Parameter & Value\ \ [**The star [q$^{1}$Eri]{}**]{} &\ Distance, $D$ & 17.35pc\ Spectral type and luminosity class & F8-9V\ Effective temperature, $T_{\rm eff}$ & 6100K\ Luminosity, $L_{\rm star}$ & 1.2[$L_{\odot}$]{}\ Surface gravity, $\log {g}$ & 4.4 (in cm s$^{-2}$)\ Radius, $R_{\
--- abstract: 'We present a parameterization of the non-collinear (virtual) Compton scattering tensor in terms of form factors, in which the Lorentz tensor associated with each form factor possesses manifest electromagnetic gauge invariance. The main finding is that in a well-defined form factor expansion of the scattering tensor, the form factors are either symmetric or antisymmetric under the exchange of two Mandelstam variables, $s$ and $u$. Our decomposition can be used to organize complicated higher-order and higher-twist contributions in the study of the virtual Compton scattering off the proton. Such procedures are illustrated by use of the virtual Compton scattering off the lepton. In passing, we note the general symmetry constraints on Ji’s off-forward parton distributions and Radyushkin’s double distributions.' address: | Centre de Physique Théorique[^1], Ecole Polytechnique\ 91128 Palaiseau Cedex, France\ [ ]{} author: - Wei Lu date: July 1997 title: 'FORM FACTOR DESCRIPTION OF THE NON-COLLINEAR COMPTON SCATTERING TENSOR' --- introduction ============ Recently, there has been [@ji1; @ra1; @hyde; @chen; @pire] much revived interest in the virtual Compton scattering (VCS). By VCS, people usually mean the scattering of a virtual photon into a real photon off a proton target $$\gamma^\ast (q) + N(P,S) \to \gamma^\ast(q^\prime) +N(P^\prime, S^\prime) \ .$$ As usual, three Mandelstam variables are defined for this process: $s \equiv (q+P)^2$, $t \equiv (q-q^\prime)^2$, $u \equiv (P-q^\prime)^2$. Due to momentum conservation, $P + q = P^\prime + q^\prime$, there is the following constraint: $$s+t+u = q^2 + q^{\prime 2} + 2m^2,$$ where $m$ is the proton mass. 
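The constraint can be checked numerically with arbitrary momenta. The sketch below (ours) verifies the general identity $s+t+u = q^2 + q^{\prime 2} + P^2 + P^{\prime 2}$, which reduces to the form above when both nucleon legs are on shell, $P^2 = P^{\prime 2} = m^2$.

```python
import random

# Numerical sanity check (ours) of s + t + u = q^2 + q'^2 + P^2 + P'^2
# for arbitrary momenta satisfying P + q = P' + q'.  On shell
# (P^2 = P'^2 = m^2) this is the constraint quoted in the text.
def mdot(a, b):
    """Minkowski product with metric (+,-,-,-)."""
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

random.seed(1)
P  = [random.uniform(-1, 1) for _ in range(4)]
q  = [random.uniform(-1, 1) for _ in range(4)]
qp = [random.uniform(-1, 1) for _ in range(4)]
Pp = [P[i] + q[i] - qp[i] for i in range(4)]       # momentum conservation

s = mdot([P[i] + q[i] for i in range(4)], [P[i] + q[i] for i in range(4)])
t = mdot([q[i] - qp[i] for i in range(4)], [q[i] - qp[i] for i in range(4)])
u = mdot([P[i] - qp[i] for i in range(4)], [P[i] - qp[i] for i in range(4)])

lhs = s + t + u
rhs = mdot(q, q) + mdot(qp, qp) + mdot(P, P) + mdot(Pp, Pp)
assert abs(lhs - rhs) < 1e-9
```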
The object of study is the following scattering tensor $$T^{\mu\nu}(q, P,S; q^\prime, P^\prime,S^\prime) = i\int d^4 \xi e^{iq^\prime\cdot \xi} \langle P^\prime,S^\prime| {\rm T}[J^\mu (\xi) J^\nu (0)]|P,S\rangle \ ,$$ where $J$ is the quark electromagnetic current in the proton and T stands for the time-ordering of the operators. At present, much of the interest is focused on the deeply VCS (DVCS), which is a very special kinematic region of the generic VCS. It has been claimed that the dominant mechanism in the DVCS is the VCS off a massless quark [@ji1]. Correspondingly, two different approaches to the DVCS tensor have been developed: the Feynman diagram expansion [@ji1; @ra1] and operator product expansion (OPE) [@wana; @chen]. A careful reader might be aware of the following fact: in the leading-twist expansion of the DVCS tensor, both in the Feynman diagram approach and in the OPE approach, the resultant expressions do not possess manifest electromagnetic gauge invariance. The purpose of this paper is to remedy this situation by presenting a full form factor parameterization of the non-collinear Compton scattering tensor. With the help of our decomposition of the scattering tensor, one can safely ignore the higher-twist terms in a leading-twist expansion and recover electromagnetic gauge invariance by brute force. Hopefully, our form factor description can be used to organize complicated calculations as one goes beyond leading twist and/or leading order. We confess that we are not the very first to attempt to develop a form factor parameterization of the VCS tensor. As early as the 1960s, Berg and Lindner [@lindner] reported a parameterization of the VCS tensor in terms of form factors. The virtue, and also an implicit assumption, of their decomposition is that the scattering tensor can be put into the form of direct products of Lorentz tensors and Dirac bilinears, i.e., the Lorentz indices of the VCS tensor are [*not*]{} carried by the gamma matrices. 
In fact, all the leading twist expansions of the DVCS tensor so far assume such a factorized form. A drawback of the Berg-Lindner decomposition is that they employed a lot of momentum combinations which have no specific crossing and time reversal transformation properties. As a consequence, the form factors they defined possess no specific symmetry properties under crossing and time reversal transformations. Moreover, their decomposition lacks a term associated with the Lorentz structure $\epsilon^{\mu\nu\alpha\beta}q_\alpha q^\prime_\beta$, which has been shown by recent research to be a carrier of leading-twist contributions. It should be stressed that there is no unique decomposition of the Compton tensor. A few years ago, Guichon, Liu and Thomas [@pierre] worked out a general decomposition of the VCS tensor, which contains no explicit proton spinors. Their decomposition is convenient for the discussion of the generalized proton polarizabilities, as has been done in Ref. [@pierre]. Recently, the decomposition of the VCS tensor of this type has been refined by Drechsel et al. [@Drechsel] within more extensive contexts. However, a decomposition of the VCS tensor without explicit Dirac bilinear structures is of very limited use for the present Feynman diagram expansion and OPE analysis of the DVCS tensor. Hence, it is desirable to construct a parameterization of the Compton scattering tensor in terms of form factors with explicit Dirac structures, which constitutes the subject of this paper. To make our arguments more transparent, we will first consider the Lorentz decomposition for the double VCS off a lepton, and then transplant our results to the proton case. \[By double VCS we mean that both the initial- and final-state photons are virtual. 
Correspondingly, we will refer to the usual VCS as the single VCS to make the distinction.\] The reason for adopting this strategy is that in quantum electrodynamics it is more convenient to discuss the chiral properties of the Dirac bilinears. At a later stage, we will reduce our results for the double VCS to real Compton scattering (RCS) as well as to the single VCS. Such a procedure greatly facilitates the discussion of the symmetry properties of the single VCS form factors. The decomposition of the Compton tensor is essentially dictated by the symmetries that it obeys, so we begin with a brief discussion of the symmetry properties of Compton scattering. First, current conservation requires that $$q_\mu T^{\mu\nu}(q, P,S; q^\prime, P^\prime,S^\prime) = q^\prime_\nu T^{\mu\nu}(q, P,S; q^\prime, P^\prime,S^\prime) =0\ . \label{gauge}$$ Second, parity conservation tells us $$T^{\mu\nu}(q, P,S; q^\prime, P^\prime,S^\prime) = T_{\mu\nu}( \tilde q,\tilde P, - \tilde S; \tilde q^\prime, \tilde P^\prime, - \tilde S^\prime) \ , \label{p}$$ where $\tilde q^\mu \equiv q_\mu$, and so on. Third, time reversal invariance demands $$T^{\mu\nu}(q, P,S; q^\prime, P^\prime, S^\prime) = T_{\nu\mu}(\tilde q^\prime, \tilde P^\prime,\tilde S^\prime; \tilde q, \tilde P,\tilde S )\ . \label{t}$$ Fourth, there is a crossing symmetry for Compton scattering, namely, $$T^{\mu\nu}(q, P,S; q^\prime, P^\prime, S^\prime) = T^{\nu\mu}(-q^\prime, P,S; -q, P^\prime, S^\prime)\ . \label{cr}$$ By combining (\[p\]) with (\[t\]), we have $$T^{\mu\nu}(q, P,S; q^\prime, P^\prime, S^\prime) = T^{\nu\mu}(q^\prime, P^\prime, -S^\prime ; q, P,-S)\ .\label{pt}$$ That is to say, the combined parity-time-reversal transformation amounts to $\mu \leftrightarrow \nu$, $q \leftrightarrow q^\prime $, $P \leftrightarrow P^\prime $, $S\to -S^\prime$ and $S^\prime \to -S$. 
Furthermore, combining (\[cr\]) with (\[pt\]) yields $$T^{\mu\nu}(q, P,S; q^\prime, P^\prime, S^\prime) = T^{\mu\nu}(-q,P^\prime,- S^\prime ; -q^\prime, P,-S )\ . \label{crpt}$$ In fact, Compton scattering respects more symmetries than those summarized above. For example, it is subject to the conservation of momentum and angular momentum. In the case of collinear scattering, angular momentum conservation places further constraints on the Compton scattering. To show this, we digress to the helicity amplitude description of Compton scattering. In the
--- abstract: 'Primordial black holes can represent all or most of the dark matter in the window $10^{17}-10^{22}\,$g. Here we present an extension of the constraints on PBHs of masses $10^{13}-10^{18}\,$g arising from the isotropic diffuse gamma ray background. Primordial black holes evaporate by emitting Hawking radiation, which should not exceed the observed background. Generalizing from monochromatic distributions of Schwarzschild black holes to extended mass functions of rotating Kerr black holes, we show that the lower part of this mass window can be closed for near-extremal black holes.' author: - Alexandre Arbey - Jérémy Auffinger - Joseph Silk bibliography: - 'biblio.bib' title: Constraining primordial black hole masses with the isotropic gamma ray background --- CERN-TH-2019-084 \[sec:intro\]Introduction ========================= Primordial Black Holes (PBHs) are the only candidate able to solve the Dark Matter (DM) issue without invoking new physics. Two mass windows are still open for PBHs to contribute all or most of the DM: the $10^{17} - 10^{19}\,$g range, recently re-opened by [@Katz2018] after revisiting the $\gamma$-ray femtolensing constraint, and the $10^{20}-10^{22}\,$g range [@Niikura2017], from HST microlensing probes of M31. PBHs are believed to have formed during the post-inflationary era, and to have subsequently evolved through accretion, mergers and Hawking Radiation (HR). If PBHs are sufficiently numerous, that is to say if they contribute a large fraction of the DM, HR from PBHs may be the source of an observable background radiation. In this Letter, we update the constraints on the number density of PBHs from observations of the diffuse Isotropic Gamma Ray Background (IGRB) [@Carr2010], taking into account the latest FERMI-LAT data and, as new ingredients, the spin of PBHs and the extension of the PBH mass function. 
Our assumption is that part of the IGRB comes from the time-stacked, redshifted HR produced by evaporating PBHs distributed isotropically in the extragalactic Universe. Those PBHs must have survived at least until the epoch of CMB transparency for the HR to be able to propagate in the intergalactic medium. This sets the lower boundary on the PBH mass $M{_{\rm min}} \approx 5\times10^{13}\,$g. Furthermore, the HR peaks at an energy which decreases when the PBH mass increases. This sets the upper boundary for the PBH mass $M{_{\rm max}} \approx 10^{18}\,$g as the IGRB emission does not constrain the photon flux below $100\,$keV. This Letter is organized as follows: Section \[sec:Hawking\] gives a brief reminder of HR physics, Section \[sec:IGRB\] describes the IGRB flux computation and Section \[sec:results\] presents the new constraints obtained with Kerr and extended mass function PBHs. \[sec:Hawking\]Kerr PBH Hawking radiation ========================================= Black Holes (BHs) emit radiation and particles similar to blackbody radiation [@Hawking1975] with a temperature linked to their mass $M$ and spin parameter $a \equiv J/M \in [0,M]$ ($J$ is the BH angular momentum) through $$T \equiv \dfrac{1}{2\pi}\left( \dfrac{r_+ - M}{r_+^2 + a^2} \right)\,, \label{eq:temperature}$$ where $r_+ \equiv M + \sqrt{M^2-a^2}$ and we have chosen a natural system of units with $G = \hbar = k{_{\rm B}} = c = 1$. 
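As a quick numerical check of Eq. (\[eq:temperature\]) (an illustration of ours, not part of the analysis code), the following Python sketch evaluates $T$ in natural units; at $a^* = 0$ it recovers the Schwarzschild temperature $T = 1/(8\pi M)$, and $T \to 0$ in the extremal limit $a^* \to 1$:

```python
import math

def hawking_temperature(M, a_star):
    """Kerr BH Hawking temperature in natural units (G = hbar = k_B = c = 1).

    M      : black hole mass
    a_star : reduced spin parameter a/M in [0, 1]
    """
    a = a_star * M                       # spin parameter a = J/M
    r_plus = M + math.sqrt(M**2 - a**2)  # outer horizon radius
    return (r_plus - M) / (2 * math.pi * (r_plus**2 + a**2))

M = 1.0
# Schwarzschild limit (a* = 0) recovers T = 1/(8 pi M)
print(hawking_temperature(M, 0.0), 1 / (8 * math.pi * M))
# near-extremal spin: the temperature drops toward zero
print(hawking_temperature(M, 0.999))
```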
The number of particles $N_i$ emitted per unit energy and time is given by $$\dfrac{{{\rm d}}^2N_i}{{{\rm d}}t{{\rm d}}E} = \dfrac{1}{2\pi}\sum_{\rm dof} \dfrac{\Gamma_i(E,M,a^*)}{e^{E^\prime/T}\pm 1}\,, \label{eq:hawking}$$ where $E^\prime \equiv E - m\Omega$ is the total energy of the particle taking into account the BH horizon rotation velocity $\Omega \equiv a^*/(2r_+)$, $a^* \equiv a/M \in [0,1]$ is the reduced spin parameter, $m$ is the projection of the particle angular momentum $l$, and the sum runs over the degrees of freedom (dof) of the particle (color and helicity multiplicities). The $\pm$ signs are for fermions and bosons, respectively. The greybody factor $\Gamma_i(E,M,a^*)$ encodes the probability that a Hawking particle evades the gravitational well of the BH. This emission can be integrated over all energies to obtain equations for the evolution of both the PBH mass and spin [@PageII1976] $$\dfrac{{{\rm d}}M}{{{\rm d}}t} = -\dfrac{f(M,a^*)}{M^2}\,, \label{eq:diffM}$$ and $$\dfrac{{{\rm d}}a^*}{{{\rm d}}t} = \dfrac{a^*(2f(M,a^*) - g(M,a^*))}{M^3}\,, \label{eq:diffa}$$ where $$\begin{aligned} f(M,a^*) &\equiv -M^2 \dfrac{{{\rm d}}M}{{{\rm d}}t}\label{eq:fM} \\ &= M^2\int_{0}^{+\infty} \sum_{\rm dof} \dfrac{E}{2\pi}\dfrac{\Gamma(E,M,a^*)}{e^{E^\prime/T}\pm 1} {{\rm d}}E \,, \nonumber \end{aligned}$$ $$\begin{aligned} g(M,a^*) &\equiv -\dfrac{M}{a^*} \dfrac{{{\rm d}}J}{{{\rm d}}t}\label{eq:gM} \\ &= \dfrac{M}{a^*}\int_{0}^{+\infty} \sum_{\rm dof}\dfrac{m}{2\pi} \dfrac{\Gamma(E,M,a^*)}{e^{E^\prime/T}\pm 1}{{\rm d}}E \,. \nonumber \end{aligned}$$ There are two main effects of the PBH spin that play a role in the IGRB. Firstly, a Kerr PBH with a near-extremal spin $a^* \lesssim 1$ radiates more photons than a Schwarzschild one ($a^* = 0$). This is due to the coupling between the PBH rotation and the particle angular momentum for high-spin particles [@Chandra4]. We thus expect the constraints to be more stringent. 
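To see how Eq. (\[eq:diffM\]) drives the evaporation, consider the simplest case of a Schwarzschild PBH with the Page factor $f$ held constant (a simplifying assumption for illustration only; in reality $f$ depends on $M$ and $a^*$ through the greybody factors). Then ${\rm d}M/{\rm d}t = -f/M^2$ integrates to $M(t) = (M_0^3 - 3ft)^{1/3}$, i.e. a lifetime $\tau = M_0^3/(3f)$. A minimal Python sketch comparing a forward-Euler integration with this closed form:

```python
# Toy integration of the mass-loss equation dM/dt = -f/M^2, taking the
# Page factor f constant (an illustrative assumption, see text).
def evolve_mass(M0, f, t_end, steps=100_000):
    M, dt = M0, t_end / steps
    for _ in range(steps):
        M = max(M - f / M**2 * dt, 0.0)  # forward-Euler step, floored at 0
    return M

M0, f = 1.0, 1e-3
t = 0.5 * M0**3 / (3 * f)                 # integrate to half the lifetime tau
numeric = evolve_mass(M0, f, t)
analytic = (M0**3 - 3 * f * t) ** (1 / 3)  # closed-form solution
print(numeric, analytic)
```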
Secondly, a near-extremal Kerr PBH will evaporate faster than a Schwarzschild PBH with the same initial mass due to this enhanced HR [@Taylor1998]. Hence, we expect the constraints to be shifted toward higher PBH masses when the reduced spin parameter $a^*$ increases. \[sec:IGRB\]Isotropic Gamma Ray Background ========================================== Many objects in the Universe produce gamma rays, such as Active Galactic Nuclei (AGN) and gamma-ray bursts [@Ackermann2015]. The IGRB is the diffuse radiation that fills the intergalactic medium once all point sources have been identified and removed from the measured photon flux. This background might come from unresolved sources, or more speculatively from DM decays or annihilations. [Fig. \[fig:data\_IGRB\]]{} shows the IGRB measured by four experiments (HEAO1+balloon, COMPTEL, EGRET and FERMI-LAT) over a wide range of energies between 100 keV and 820 GeV. If we adopt the simplifying hypothesis that DM is distributed isotropically on sufficiently large scales, then its annihilations/decays should produce, at each epoch of the Universe since transparency, an isotropic flux of photons. Thus, the flux measured along a line of sight should be the redshifted sum of the emission at all epochs. Following Carr [*et al.*]{} [@Carr2010], we estimate the flux at energy $E$ to be $$\begin{aligned} I &\equiv E \dfrac{{{\rm d}}F}{{{\rm d}}E} \label{eq:flux_IGRB} \\ &\approx \dfrac{1}{4\pi} n{_{\rm PBH}}(t_0) E \int_{t{_{\rm min}}}^{t{_{\rm max}}} (1+z(t)) \dfrac{{{\rm d}}^2 N}{{{\rm d}}t{{\rm d}}E}((1+z(t))E){{\rm d}}t\,, \nonumber\end{aligned}$$ where $n{_{\rm PBH}}(t_0)$ is the number density of PBHs of a given mass $M$ today, $z(t)$ is the redshift and the time integral runs from $t{_{\rm min}} = 380\,000\,$years at last scattering of the CMB to $t{_{\rm max}} = {\rm Min}(\tau(M),t_0)$ where $\tau(M)\sim M^3$ is the PBH lifetime and $t_0$ is the age
--- abstract: 'The first generation of quantum computers will be implemented in the cloud style, since only a few groups will be able to access such an expensive and high-maintenance machine. How can the privacy of the client be protected in such cloud quantum computing? It was theoretically shown \[A. Broadbent, J. F. Fitzsimons, and E. Kashefi, Proceedings of the 50th Annual IEEE Symposium on Foundations of Computer Science, 517 (2009)\], and experimentally demonstrated \[S. Barz, E. Kashefi, A. Broadbent, J. F. Fitzsimons, A. Zeilinger, and P. Walther, Science [**335**]{}, 303 (2012)\] that a client who can generate randomly-rotated single-qubit states can delegate her quantum computing to a remote quantum server without leaking any privacy. The generation of a single-qubit state is not too much of a burden for the client, and therefore we can say that an “almost classical client" can enjoy secure cloud quantum computing. However, isn't it possible to realize secure cloud quantum computing for a client who is completely free from any quantum technology? Here we show that perfectly-secure cloud quantum computing is impossible for a completely classical client unless classical computing can simulate quantum computing, or a breakthrough is brought about in classical cryptography.' author: - Tomoyuki Morimae - Takeshi Koshiba title: Impossibility of secure cloud quantum computing for classical client --- Introduction ============ Imagine that Alice, who does not have any sophisticated quantum technology, wants to factor a large integer. She has a rich friend, Bob, who has a full-fledged quantum computer. Alice asks Bob to perform her quantum computing on his quantum computer. However, the problem is that Bob is not a reliable person, and therefore she does not want to reveal her input (the large integer), output (a prime factor), or program (Shor's algorithm) to Bob. Can she delegate her quantum computing to Bob while keeping her privacy? 
Recently, it was theoretically shown that such secure cloud quantum computing is indeed possible [@BFK]. (A proof-of-principle experiment was also demonstrated with photonic qubits [@Barz].) In the protocol of Ref. [@BFK] (Fig. \[fig1\]), Alice, the client, has a device that emits randomly rotated single-qubit states. She sends these states to Bob, the server, who has the full quantum technology. Alice and Bob are also connected by a classical channel. Bob performs quantum computing by using the qubits sent from Alice and classical messages exchanged with Alice via the classical channel. After finishing his quantum computation, Bob sends the output of his computation, which is a classical message, to Alice. This message encrypts the result of Alice's quantum computing, which is not accessible to Bob. Alice decrypts the message, and obtains the desired result of her quantum computing. It was shown that whatever Bob does, he cannot learn anything about the input, the program, or the output of Alice's computation [@BFK; @Vedrancomposability] (except for some unavoidable leakage, such as an upper bound on the input size, etc.). ![ The secure cloud quantum computing protocol proposed in Ref. [@BFK]. Alice possesses a device that emits randomly-rotated single-qubit states. Bob has a universal quantum computer. Alice and Bob share a two-way classical channel. []{data-label="fig1"}](BFK.eps){width="40.00000%"} In this protocol, the client has to possess a device that generates single-qubit states. Generation of single-qubit states is ubiquitous in today's laboratories, and therefore not too much of a burden for the client. In other words, an “almost classical" client can enjoy secure cloud quantum computing. However, isn't it possible to realize secure cloud quantum computing for a completely classical client (Fig. \[classical\])? 
Many variant protocols of secure cloud quantum computing have been proposed recently [@MABQC; @BarzNP; @FK; @Vedran; @AKLTblind; @topoblind; @CVblind; @Lorenzo; @Joe_intern; @Sueki; @tri]. For example, it was shown that, instead of single-qubit states, the client only has to generate weak coherent pulse states if we add more burden to the server [@Vedran]. Coherent states are considered “more classical" than single-photon states, and therefore this enables secure cloud quantum computing for a “more classical" client. It was also shown that secure cloud quantum computing is possible for a client who can only measure states [@MABQC] (Fig. \[measuringAlice\]). A measurement of a bulk state with a threshold detector is sometimes much easier than single-photon generation, and therefore this protocol also enables a “more classical" client. However, these two protocols still require the client to have some minimum quantum technologies, namely the generation of weak coherent pulses or measurements of quantum states. In fact, all protocols proposed so far require the client to have some quantum ability, such as the generation, measurement, or routing of quantum states [@MABQC; @BarzNP; @FK; @Vedran; @AKLTblind; @topoblind; @CVblind; @Lorenzo; @Joe_intern; @Sueki; @tri]. (It is known [@BFK] that if we have two quantum servers, a completely classical client can delegate her quantum computing. However, in this case, we have to assume that the two servers cannot communicate with each other.) In other words, the possibility of perfectly secure cloud quantum computing for a completely classical client has been an open problem. ![ The secure cloud quantum computing for a classical client. Alice has only a classical computer, whereas Bob has a universal quantum computer. Alice and Bob share a two-way classical channel. []{data-label="classical"}](classical.eps){width="40.00000%"} ![ The secure cloud quantum computing protocol proposed in Ref. [@MABQC]. 
Alice possesses a device that measures qubits. Bob has the ability to perform entangling operations and has a quantum memory. []{data-label="measuringAlice"}](measuringAlice.eps){width="40.00000%"} In this paper, we show that perfectly-secure cloud quantum computing for a completely classical client is unlikely to be possible. Here, perfect security means that an encrypted text gives no information about the plain text [@nonlinearcrypto]. It is a typical security notion in information-theoretic security. The idea of the proof is as follows. Since no non-affine cryptosystem is known to be perfectly secure [@nonlinearcrypto], we assume that the client uses an affine cryptosystem. We then show that if the cloud quantum computing can be done in a perfectly secure way for a completely classical client, then classical computing can efficiently simulate quantum computing. Although the conjecture $\mbox{BPP}\subsetneq\mbox{BQP}$ is not as solid as $\mbox{P}\ne\mbox{NP}$ or the non-collapse of the polynomial hierarchy, researchers in quantum computing believe $\mbox{BPP}\subsetneq\mbox{BQP}$. Therefore, we conclude that perfectly-secure cloud quantum computing is impossible for a completely classical client (unless a non-affine cryptosystem is shown to be perfectly secure or classical computing can efficiently simulate quantum computing). Result ====== Our setup is given in Fig. \[classical\]. Alice has only a classical computer (more precisely, a probabilistic polynomial-time Turing machine), whereas Bob has a universal quantum computer. Furthermore, Alice and Bob share a two-way classical channel. Let $U$ be the $n$-qubit unitary operator that Alice wants to implement in her quantum computing, where $n$ is a polynomial in the size of the input of her problem. 
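The canonical example of a perfectly secure affine cryptosystem is the one-time pad, which is linear (hence affine) over GF(2): with a uniformly random single-use key, the ciphertext is uniformly distributed whatever the plaintext. A minimal Python illustration of ours (the message content is a hypothetical stand-in):

```python
import secrets

# One-time pad: E(x, k) = x XOR k, an affine (in fact linear) map over
# GF(2). With a uniformly random key used only once, the ciphertext
# carries no information about the plaintext -- perfect security.
def otp(x: bytes, k: bytes) -> bytes:
    assert len(x) == len(k)
    return bytes(a ^ b for a, b in zip(x, k))

msg = b"[U]"                       # stand-in for an encoded circuit description
key = secrets.token_bytes(len(msg))
ct = otp(msg, key)
assert otp(ct, key) == msg         # decryption applies the same affine map
```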
(More precisely, she chooses a unitary from a finite set $\{U_j\}_{j=1}^r$ of unitaries, since the capacity of the classical channel between Alice and Bob is finite, and a finite set of unitaries is sufficient for universal quantum computing.) Without loss of generality, we can assume that the initial state of her quantum computing is the standard state $|0\rangle^{\otimes n}$. In other words, if she wants to start with a certain input state $|\psi\rangle$, its preparation is included in $U$. (In the secure cloud quantum computing protocol of Ref. [@BFK], Alice can use an unknown quantum state as the input, such as a state given to her by another person. However, in the present setup, by definition, Alice's input is restricted to classical information. Therefore we assume that the input state is a standard one or that she knows the classical description of the input quantum state.) Therefore, what Alice wants to hide from Bob are the classical description $[U]$ of the unitary $U$, and the output of the computation, which is the result of the computational-basis measurement on $U|0\rangle^{\otimes n}$. (The protocol of Ref. [@BFK] allows Alice to finally obtain an output quantum state. However, again, we assume that Alice's output is classical information, since she is completely classical.) In the setup of Fig. \[classical\], what Alice and Bob can do is the following protocol. - Alice sends Bob the classical message $$\begin{aligned} a=E([U],k)\end{aligned}$$ that encrypts the classical description $[U]$ of the unitary $U$ with a private key $k\in K$, where $K$ is the set of keys and $E$ is an encrypting operation. Since the encryption is done by
--- author: - | Marwa Hadj Salah, Didier Schwab, Hervé Blanchon, Mounir Zrigui\ [ (1) LIG-GETALP, Univ. Grenoble Alpes, France\ `Prénom.Nom@univ-grenoble-alpes.fr ` (2) LaTICE, Tunis, 1008, Tunisie\ `Prénom.Nom@fsm.rnu.tn` ]{} bibliography: - 'biblio.bib' title: 'An English-Arabic statistical machine translation system' --- Introduction ============ Machine translation (MT) is the process of translating a text written in a source language into a text in a target language. In this article, we present our English-Arabic statistical machine translation system. We first present the general process for building a statistical machine translation system, and then describe the tools as well as the different corpora that we used to build our MT system. Machine translation =================== Statistical machine translation ------------------------------- Statistical machine translation (SMT) is a widely used approach to MT that is based on learning statistical models from parallel corpora. As shown in Figure \[FigureTA\], statistical machine translation relies essentially on three components: a language model (LM), a translation model (TM) and a decoder. ![The statistical machine translation process[]{data-label="FigureTA"}](smt.png) ### Language model Among the language models used in SMT systems, the main ones are the n-gram model, the Cache model [@kuhn1990cache] and the Trigger model [@lau1993trigger]. The Cache model relies on dependencies between non-contiguous words. The Trigger model, for its part, identifies word pairs (X, Y) such that the presence of X in the history triggers the appearance of Y. 
However, the n-gram model (1$\leq$n$\leq$5) remains the most widely used in current translation systems, in particular the trigram (3-gram) model for the processing of European languages. Indeed, the n-gram model estimates the likelihood of a sequence of words by assigning it a probability. Let $\textit{t} = w_{1}w_{2} . . . w_{k}$ be a sequence of k words in a given language and n the maximal n-gram order (1$\leq$n$\leq$5); the formula for $P(t)$ is: $$P(t)=\prod_{i=1}^{k} P(w_{i}|w_{i-1}w_{i-2} ... w_{i-n+1})$$ ### Phrase-based translation model Building a phrase-based translation model [@och2003systematic] requires three essential steps: - segmentation of the sentence into phrases (word sequences) - translation of the phrases using the translation table - reordering of the phrases with the help of a distortion model ### Decoder The decoder searches for the most probable translation of a source sentence by consulting the trained models; we use the Moses decoder, described in the Tools section. 
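The n-gram estimate above can be made concrete with a toy maximum-likelihood bigram model (n = 2) on a hypothetical mini-corpus; real systems use smoothed models of order up to 5:

```python
from collections import Counter

# Toy maximum-likelihood bigram LM (n = 2); real SMT systems use
# smoothed models up to n = 5, built e.g. with IRSTLM.
corpus = "the cat sat on the mat the cat ate".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def p_bigram(w, prev):
    # P(w | prev) = count(prev, w) / count(prev)
    return bigrams[(prev, w)] / unigrams[prev]

def p_sentence(words):
    p = 1.0
    for prev, w in zip(words, words[1:]):
        p *= p_bigram(w, prev)  # product over the sequence, as in P(t)
    return p

print(p_sentence("the cat sat".split()))  # P(cat|the) * P(sat|cat) = 2/3 * 1/2
```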
Tools ----- ### The Moses decoder Moses [@koehn2007moses] is a toolkit available under the free GPL license, based on statistical approaches to machine translation. Moses allows us to develop and manipulate a translation system according to our needs thanks to its many features, such as the production of the translation model and the reordering model from large corpora.\ The main modules of Moses include: - **Train**: builds the translation models as well as the reordering models. - **Mert**: tunes the weights of the different models in order to optimize and maximize translation quality, using the development data (DEV). - **Decoding**: this module contains scripts and executables for finding the most probable translation of a source sentence by consulting the models produced by the Train module. ### IRSTLM IRSTLM [@federico2007efficient] is a toolkit for building statistical language models. The advantage of this toolkit is that it reduces storage and memory requirements at decoding time. Consequently, it saves time when loading the language model. ### BLEU: an automatic evaluation metric The BLEU score (Bilingual Evaluation Understudy) was originally proposed by [@papineni2002bleu]. It is an algorithm used to evaluate the quality of the output hypotheses produced by a machine translation system. The idea is to compare the translation hypothesis with one or more references at the level of words, bigrams, trigrams, etc. The BLEU score is normalized between 0 and 1, and is generally expressed as a percentage. Note that a human translation can sometimes obtain a poor BLEU score if it diverges from the reference. 
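The BLEU computation just described can be sketched as follows; this is a simplified single-reference, sentence-level version of ours (toolkit implementations add smoothing and operate at the corpus level):

```python
import math
from collections import Counter

def simple_bleu(hyp, ref, max_n=4):
    """Simplified sentence-level BLEU with one reference (illustration
    only; without smoothing, a single missing n-gram order yields 0)."""
    hyp, ref = hyp.split(), ref.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        h = Counter(zip(*[hyp[i:] for i in range(n)]))
        r = Counter(zip(*[ref[i:] for i in range(n)]))
        clipped = sum(min(c, r[g]) for g, c in h.items())  # clipped matches
        total = max(sum(h.values()), 1)
        if clipped == 0:
            return 0.0
        log_prec += math.log(clipped / total) / max_n  # uniform n-gram weights
    bp = min(1.0, math.exp(1 - len(ref) / len(hyp)))   # brevity penalty
    return bp * math.exp(log_prec)

print(simple_bleu("the cat is on the mat", "the cat is on the mat"))  # identical -> 1.0
print(simple_bleu("the cat sat on a mat", "the cat is on the mat"))
```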
### MADAMIRA The morphological analyzer MADAMIRA [@pasha2014madamira] is a system for the morphological analysis and disambiguation of Arabic that exploits some of the best aspects of the two most widely used existing systems for Arabic language processing: MADA ([@habash2005arabic]; [@habash2009mada+]; [@habash2013morphological]) and AMIRA [@diab2009second]. MADAMIRA performs tokenization, lemmatization, stemming, part-of-speech tagging, morphological disambiguation, diacritization, named-entity recognition, etc. MADAMIRA offers the following two tokenization schemes: - **ATB:** segments all clitics except the definite article