--- abstract: 'Separation kernels are fundamental software of safety and security-critical systems, which provide their hosted applications with spatial and temporal separation as well as controlled information flows among partitions. The application of separation kernels in critical domains demands that the correctness of the kernel be established by formal verification. To the best of our knowledge, there is no survey paper on this topic. This paper presents an overview of formal specification and verification of separation kernels. We first present the background, including the concept of the separation kernel and comparisons among different kernels. Then, we survey the state of the art on this topic since 2000. Finally, we summarize the research work through detailed comparison and discussion.' address: | National Key Laboratory of Software Development Environment (NLSDE)\ School of Computer Science and Engineering, Beihang University, Beijing, China author: - Yongwang Zhao title: A survey on formal specification and verification of separation kernels --- real-time operating systems, separation kernel, survey, formal specification, formal verification Introduction {#sec:intro} ============ The concept of the “Separation Kernel” was introduced by John Rushby in 1981 [@Rushby81] to create a secure environment by providing temporal and spatial separation of applications, and to ensure that there are no unintended channels for information flow between partitions other than those explicitly provided. Separation kernels decouple the verification of the trusted functions in the separated components from the verification of the kernels themselves. They are often sufficiently small and straightforward to allow formal verification of their correctness. The concept of the separation kernel gave rise to the concept of Multiple Independent Levels of Security/Safety (MILS) [@Alves06]. MILS is a high-assurance security architecture based on the concepts of separation [@Rushby81] and controlled information flow [@Denning76]. MILS provides the means to host several strongly separated partitions on the same physical computer/device and enables components of different security/safety levels to coexist in the same system. The MILS architecture is particularly well suited to embedded systems which must provide guaranteed safety or security properties. An MILS system employs the separation mechanism to maintain assured data and process separation, and enforces security/safety policies by authorizing information flows between system components. The MILS architecture is layered and consists of separation kernels, middleware, and applications. The MILS separation kernels are small pieces of software that divide the system into separate partitions where the middleware and applications are located, as shown in Fig. \[fig:mils\_arch\]. The middleware provides an interface to applications, or a virtual machine enabling operating systems to be executed within partitions. The strong separation between partitions both prevents information leakage from one partition to another and provides fault containment by preventing a fault in one partition from affecting another. MILS also enables communication channels (unidirectional or bidirectional) to be selectively configured between partitions. ![The MILS architecture.
Notation: Unclassified (U), confidential (C), secret (S), top secret (TS), single level (SL), multi level security (MLS) [@gjertsen08][]{data-label="fig:mils_arch"}](FCS-14226-fig1.pdf){width="3.4in"} Separation kernels were first applied in embedded systems. For instance, they have been accepted in the avionics community and are required by ARINC 653 [@ARINC653] compliant systems. Many implementations of separation kernels for safety and security-critical systems have been developed, such as VxWorks MILS [@vxworksmils13], INTEGRITY-178B [@integrity], LynxSecure [@LynxSecure], LynxOS-178 [@lynxos178], and PikeOS [@pikeos], as well as open-source implementations such as POK [@Delange11] and Xtratum [@Masmano09]. In safety and security-critical domains, the correctness of separation kernels is crucial for the whole system. Formal verification is a rigorous approach to proving or disproving the correctness of a system w.r.t. a certain formal specification or property. The work in [@Woodcock09] presents 62 industrial projects using formal methods over 20 years and the effects of formal methods on the time, cost, and quality of systems. Successful applications of formal methods in software development are increasing in both academia and industry. Security and safety are traditionally governed by well-established standards. (1) In the security domain, verified security is achieved by Common Criteria (CC) [@CC] evaluation, where EAL 7 is the highest assurance level. EAL 7 certification demands that formal methods be applied to the requirements, the functional specification, and the high-level design. The low-level design may be treated semi-formally, and the correspondence between the low-level design and the implementation is usually confirmed in an informal way. For fully formal verification, however, the verification chain should reach the implementation level. In 2007, the Information Assurance Directorate of the U.S. National Security Agency (NSA) published the Separation Kernel Protection Profile (SKPP) [@SKPP07] within the framework established by the CC [@CC]. SKPP is a security requirements specification for separation kernels; it mandates the application of formal methods to demonstrate the correspondence between the security policies and the functional specification of separation kernels. (2) In the safety domain, the safety of software deployed in airborne systems is governed by RTCA DO-178B [@DO178B], where Level A is the highest level. The new version, DO-178C [@DO178C], was published in 2011 to replace DO-178B. The technology supplements of DO-178C recommend the application of formal methods to complement testing. Although most commercial separation kernel products have been certified to DO-178B Level A and under the CC, we find only two CC EAL 7 certified separation kernels, namely LynxSecure and the AAMP7G microprocessor [@Wilding10] (a separation kernel implemented in hardware). Without full verification, the correctness of separation kernels cannot be fully assured. Much effort has been devoted to achieving verified separation kernels in the past decade, such as the formal verification of SYSGO PikeOS [@Baumann09; @Baumann09b; @Baumann10; @Baumann11], the INTEGRITY-178B kernel [@Richards10], the ED (Embedded Devices) separation kernel of the Naval Research Laboratory [@Heitmeyer06; @Heitmeyer08], and Honeywell DEOS [@Penix00; @Penix05; @Ha04]. Using logic reduction to create highly dependable and safety-critical software was one of the 10 breakthrough technologies selected by MIT Technology Review in 2011 [@Bulk11].
The report covered the L4.verified project at NICTA (National ICT Australia). The seL4 (secure embedded L4) micro-kernel, which comprises 8,700 lines of C code and 600 lines of assembler code, was fully formally verified with the Isabelle/HOL theorem prover [@Klein09; @Klein10]. The project found 160 bugs in the C code in total, 16 of which were found during testing and 144 during the C verification phase. This work provides successful experience for formal verification of separation kernels and demonstrates the feasibility of fully formal verification of small kernels. A survey exists on formal verification of micro-kernels of general-purpose operating systems [@Klein09b], but a survey of separation kernel verification for safety and security-critical systems does not exist in the literature to date. Considering that the correctness of separation kernels is crucial for safety and security-critical systems, this survey covers the research work on formal specification and verification of separation kernels since 2000. We outline this work at a high level, covering formal specifications, models, and verification approaches. By comparing and discussing the research work in detail, this survey aims at providing a useful reference for separation kernel verification projects. In the next section, we first introduce the concept of separation kernels and compare them with other types of kernels to clarify the relationships. In Section \[sec:verify\], the literature on formal specification and verification of separation kernels is surveyed in three categories: formalization of security policies and properties, formal specification and models of separation kernels, and formal verification of separation kernels. In Section \[sec:summary\], we summarize the research work through detailed comparison and discussion. Finally, we conclude this paper in Section \[sec:conclude\]. Background {#sec:bg} ========== This section first introduces the concept of the separation kernel, and then compares it with other kernels such as security kernels, partitioning kernels, and hypervisors. What is a Separation Kernel? ---------------------------- The separation kernel is a type of security kernel [@Ames83] that simulates a distributed environment. Separation kernels were proposed as a solution for developing and verifying large and complex security kernels that are intended to “provide multilevel secure operation on general-purpose multi-user systems.” “The task of a separation kernel is to create an environment which is indistinguishable from that provided by a physically distributed system: it must appear as if each regime is a separate, isolated machine and that information can only flow from one machine to another along known external communication lines. One of the properties we must prove of a separation kernel, therefore, is that there are no channels for information flow between regimes other than those explicitly provided. [@Rushby81]” Based on separation kernels, system security is achieved partially through the physical separation of individual components and the mediation of trusted functions performed within some components. Separation kernels decouple the verification of components from the kernels themselves. Separation kernels provide their hosted software applications with high-assurance partitioning and controlled information flow that are both tamperproof and non-bypassable.
{ "pile_set_name": "ArXiv" }
--- author: - 'C. Argiroffi' - 'A. Maggio' - 'G. Peres' - 'J. J. Drake' - 'J. López-Santiago' - 'S. Sciortino' - 'B. Stelzer' bibliography: - 'mpmus.bib' date: 'Received 1 July 2009 / Accepted 25 August 2009' title: 'X-ray optical depth diagnostics of T Tauri accretion shocks' --- Introduction ============ Classical T Tauri stars (CTTS) are young low-mass stars, still surrounded by a circumstellar disk from which they accrete material. According to a widely accepted model, they have a strong magnetic field that regulates the accretion process by disrupting the circumstellar disk, loading material from the inner part of the disk, and guiding it in free fall along the magnetic flux tubes toward the central star [@UchidaShibata1984; @BertoutBasri1988; @Koenigl1991]. A characteristic feature of young stars is strong X-ray emission, traditionally ascribed to magnetic activity in their coronae. Like their more evolved siblings, the diskless weak-line T Tauri stars (WTTS), CTTS display high X-ray luminosities and frequent flaring activity. The typical temperature of their coronal plasma is $\sim10-20$MK, or even higher during strong flares [e.g. $50-100$MK, @GetmanFeigelson2008]. From a theoretical point of view, the accretion process can also produce significant X-ray emission on CTTS. Material accreting from the circumstellar disk reaches velocities of $\sim300-500\,{\rm km\,s^{-1}}$. A shock forms at the base of the accretion column because of the impact with the stellar atmosphere. This shock heats the accreting material to a maximum temperature $T_{\rm max}=3 \mu m_{\rm H} v_{0}^2 / ( 16 k )$, where $v_{0}$ is the infall velocity, $\mu$ the mean molecular weight, $m_{\rm H}$ the hydrogen mass, and $k$ the Boltzmann constant. Because of the high pre-shock velocity, the infalling material reaches temperatures of a few MK, and hence it emits X-rays. Typical values of the mass accretion rate for CTTS indicate that the accretion-driven X-ray luminosity should be comparable to the coronal one [@Gullbring1994]. Considering the typical inferred stream cross-sectional area [$\la5\%$ of the stellar surface, e.g. @CalvetGullbring1998], velocity ($\sim300-500\,{\rm km\,s^{-1}}$), and mass accretion rate [$\sim10^{-9}-10^{-7}\,{\rm M_{\sun}\,yr^{-1}}$, e.g. @GullbringHartmann1998], the plasma heated in the accretion shock should have densities $n_{\rm e}\ga10^{11}\,{\rm cm^{-3}}$, i.e. at least one order of magnitude higher than coronal plasma densities. Hence, in principle, the accretion process can produce plasma with high $L_{\rm X}$, high density, and temperatures of a few MK. To summarize, X-ray emission from CTTS can originate from two different plasma components: plasma heated in the accretion shock and coronal plasma. The former, because of its lower temperatures, should dominate the softer X-ray band [e.g. $E\le1$keV in the case of the CTTS TW Hya, @GuntherSchmitt2007], while the harder X-ray emission, $E\ge1$keV, should be produced almost entirely by coronal plasma. Recently, high-resolution X-ray spectra of a few CTTS enabled measurement of individual emission lines sensitive to plasma density (i.e. He-like triplets), and hence searches for evidence of accretion-driven X-rays. The density of the plasma at $T\sim2-4$MK can be inferred from the O VII and Ne IX triplet lines (at $E\approx0.6$ and 0.9keV, respectively).
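As a quick sanity check on these numbers, the short script below (our sketch; the mean molecular weight $\mu\approx0.6$ for a fully ionized solar-composition plasma is an assumption, not a value quoted from this paper) evaluates $T_{\rm max}$ for typical CTTS infall velocities and lands in the $2-4$ MK range probed by these triplets:

```python
# Post-shock temperature T_max = 3 mu m_H v0^2 / (16 k) for a strong shock.
k_B = 1.380649e-23   # Boltzmann constant [J/K]
m_H = 1.6726e-27     # hydrogen mass [kg]
mu = 0.6             # mean molecular weight (assumed, fully ionized plasma)

def t_max(v0_km_s):
    v0 = v0_km_s * 1e3                       # km/s -> m/s
    return 3 * mu * m_H * v0**2 / (16 * k_B)

for v in (300, 400, 500):
    print(f"v0 = {v} km/s  ->  T_max = {t_max(v) / 1e6:.1f} MK")
# prints roughly 1.2, 2.2, and 3.4 MK
```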
All but one of the CTTS for which the triplet lines were detected showed cool plasma with high density, $n_{\rm e}>10^{11}\,{\rm cm^{-3}}$ [@KastnerHuenemoerder2002; @StelzerSchmitt2004; @SchmittRobrade2005; @GuntherLiefke2006; @ArgiroffiMaggio2007; @GuedelSkinner2007; @RobradeSchmit2007]. In contrast, the cool quiescent plasma of active stellar coronae is always dominated by low densities [$n_{\rm e}\la10^{10}\,{\rm cm^{-3}}$, @NessSchmitt2002; @TestaDrake2004a]. This basic difference suggests that the high-density cool plasma in CTTS is not coronal plasma but plasma heated in accretion shocks. One complication to this argument is that mass accretion rates derived by assuming a very high efficiency of conversion of accretion energy into X-rays tend to be an order of magnitude or so lower than rates derived using other methods [e.g. @Drake2005; @SchmittRobrade2005; @GuntherSchmitt2007]. The idea of accretion-driven X-rays from CTTS is superficially supported by a soft X-ray excess found in high-resolution X-ray spectra of CTTS with respect to similar spectra of WTTS by @TelleschiGuedel2007 and @GuedelTelleschi2007. However, @GuedelSkinner2007 and @GuedelTelleschi2007 noted that this soft X-ray excess is significantly lower than that predicted by simple models of X-ray emission from accretion shocks. Moreover, the soft excess scales with the total stellar X-ray luminosity, and hence is at least partially related to the stellar magnetic activity. @GuedelSkinner2007 and @GuedelTelleschi2007 suggested that the CTTS soft X-rays could be produced by infalling material loaded into coronal structures. The properties of the X-ray emitting plasma in CTTS and in WTTS have also been investigated using CCD X-ray spectra of large stellar samples. These studies, however, commonly covered the $0.5-8.0$keV energy band, in which the coronal component dominates. The main results are that CTTS are on average less luminous in the X-ray band than WTTS [e.g. @FlaccomioMicela2003], and that the X-ray emitting plasma of CTTS is on average hotter than that of WTTS [@NeuhaeuserSterzik1995; @PreibischKim2005]. CTTS and WTTS therefore do have different coronal characteristics, suggesting that the accretion process can affect coronal properties to some extent. Numerical simulations have confirmed that the accretion process can produce significant X-rays: @GuntherSchmitt2007 derived stationary 1-D models of the shock in an accretion column; @SaccoArgiroffi2008 improved those results by performing 1-D hydrodynamical (HD) simulations of the accretion shock, including the stellar atmosphere and taking into account time variability. Assuming optically thin emission, @SaccoArgiroffi2008 showed that, even for low accretion rates, the amount of X-rays produced in the accretion shock is comparable to the typical X-ray luminosity of CTTS ($L_{\rm X}\sim10^{30}{\rm erg\,s^{-1}}$ for $\dot{M}\sim10^{-10}\,{\rm M_{\odot}\,yr^{-1}}$). Several aspects of the nature of the high-density cool plasma component observed in CTTS are still debated. In particular, definitive evidence that it is material heated in the accretion shock is still lacking.
Moreover, while the simple “photospheric burial” model of @Drake2005 suggests that under some circumstances a large fraction of the shock X-rays can be absorbed and reprocessed by the photosphere, there are currently no detailed quantitative models explaining why the X-ray luminosities, predicted on the basis of 1-D HD simulation results and mass accretion rates inferred from observations at other wavelengths, are universally much higher than observed. Understanding the link between accretion and X-rays would also allow a more accurate characterization of the coronal component of the X-ray emission from CTTS. This could help in understanding how accretion changes coronal activity, and which other parameters determine the coronal activity level in pre-main sequence (PMS) stars, whose X-ray luminosity cannot simply be explained in terms of the Rossby number [@PreibischKim2005], as is largely the case for active main-sequence stars [e.g. @PizzolatoMaggio2003]. To address the above issues, we performed a detailed study of the high-resolution X-ray spectra of two nearby CTTS: TW Hya and MP Mus. In particular we investigated: - optical depth effects in their soft X-ray emission; - the emission measure distribution [*(EMD)*]{} of the X-ray emitting plasma. Optical depth effects probe the nature of the high-density cool plasma component: we show that, if the emitting plasma is located in the accretion shock, some emission lines should have non-negligible optical depth; in contrast, these lines should be optically thin if the plasma is located in coronal structures. We also investigate how the [*EMD*]{} can help in distinguishing the coronal and accretion plasma components, which should have different average temperatures. We compare the [*
{ "pile_set_name": "ArXiv" }
--- abstract: 'We investigate the characteristics of the thick disk in the Canada – France – Hawaii – Telescope Legacy Survey (CFHTLS) fields, complemented at bright magnitudes with Sloan Digital Sky Survey (SDSS) data. The (\[Fe/H\], Z) distributions are derived in the W1 and W3 fields, and compared with simulated maps produced using the Besançon model. It is shown that the thick disk, represented in star-count models by a distinct component, is not an adequate description of the observed (\[Fe/H\], Z) distributions in these fields.' address: - 'GEPI, Observatoire de Paris, CNRS, Université Paris Diderot; 5 Place Jules Janssen, 92190 Meudon, France' - 'Observatoire de Besançon; 41 bis, avenue de l’Observatoire, 25000 Besançon, France' author: - 'M. Guittet' - 'M. Haywood$^1$' - 'M. Schultheis' bibliography: - 'guittet.bib' title: The Milky Way stellar populations in CFHTLS fields --- the Galaxy, the thick disk, \[Fe/H\] abundance. Introduction {#intro} ============ Our knowledge of the characteristics of the thick disk remains limited in practically every aspect. Its structure on large scales ($>$kpc), whether clumpy or smooth, is not well defined, and its connections with the collapsed part of the halo or the old thin disk are essentially not understood. The spectrum of possible scenarios proposed to explain its formation is still very broad, and truly discriminating constraints are rare. The SDSS photometric survey has provided a wealth of new information on the thick disk, see in particular Ivezić et al. (2008), Bond et al. (2010) and Lee et al. (2011). However, the data have barely been directly confronted with star-count models, and little insight has been gained into how well the thick disk in these models represents the survey data. In the present work, we initiate such comparisons by comparing the Besançon model with metallicity and distance information in the W1 and W3 CFHTLS fields, and provide a brief discussion of our results. Data description ================ Among the four fields that make up the Wide Survey, W1 and W3 cover larger angular surfaces (72 and 49 square degrees) than W2 and W4 (both having 25 square degrees). They point towards higher latitudes (–61.24${{\mathrm{^\circ}}}$ and 58.39${{\mathrm{^\circ}}}$, respectively), are consequently less affected by dust extinction, and contain a larger relative proportion of thick disk stars. We will therefore focus on W1 and W3. CFHTLS photometry starts at a substantially fainter magnitude than the SDSS, missing a large part of the thick disk. We therefore complemented the CFHTLS catalogue at the bright end with stars from the SDSS that are not present in the CFHTLS catalogue. In the final catalogues, W1 contains $\sim$ 139 000 stars, with 16$\%$ from the SDSS, while $\sim$ 132 000 stars are found in the W3 field, with 31$\%$ coming from the Sloan.\ W1 and W3 are at large distances above the galactic plane, and the dust extinction is very small at these latitudes. For example, the Schlegel map [@schlegel98] estimates for W1 an absorption coefficient Av of 0.087, while @Jones11 give Av=0.113. The extinction models of @Arenou92 and @Hakkila97 estimate Av values of 0.1 and 0.054, respectively. We briefly discuss the effect of extinction on distance determination and metallicities in Section 4.1.
Comparisons between the Besançon model and CFHTLS/SDSS data: Hess diagrams ========================================================================== The Besançon model ------------------ Simulations were made using the online version of the Besançon model (@Robin03, @Haywood97, @Bienayme87). The model includes four populations: the bulge, the thin disk, the thick disk and the halo. The metallicities of the thick disk and the halo in the online version of the model (–0.78 and –1.78 dex respectively) were shifted (to –0.6 dex and –1.5 dex) to comply with more generally accepted values, and in particular with values derived from the Sloan data (see @Lee11, who shows that the thick disk has a metallicity \[Fe/H\] = –0.6 dex roughly independent of vertical distance, and @ivezic08, @Bond10, @Sesar11, @Carollo10 or @Dejong10 for the inner halo metallicity, estimated to be about –1.5 dex). The thick disk has a scale height of 800 pc and a local stellar density $\rho_0$ of 6.8 $\%$ of the local thin disk, while the stellar halo is described by a flattened power law with a local density of 0.6%. Simulations were made assuming photometric errors as described in the SDSS. Hess diagrams ------------- The distributions of CFHTLS/SDSS and model stars in the g versus u–g color magnitude diagram (CMD) are shown in Fig. \[fig1\]. In both diagrams, faint blue stars (u–g $\sim$ 0.9, g$>$18) are clearly discernible and correspond to the galactic halo. The concentration of stars at g$<$18, u–g $\sim$1.1, corresponds to disk stars, in particular thick disk stars. Because of the SDSS saturation at g=14, which does not allow a representative sample of thin disk stars, our data sample is mainly composed of thick disk and halo stars. The Besançon model shows a distinct separation between thin disk stars (u–g$\sim$1.3, g$<$14-15) and thick disk stars (u–g$\sim$1.1, 15$<$g$<$18) which cannot be checked with the present data.\ Comparisons between the Besançon model and CFHTLS/SDSS data: (\[Fe/H\], Z) distributions ======================================================================================== Metallicity and photometric distance determinations --------------------------------------------------- @Juric08 and @ivezic08 have published calibrations of the metallicity and photometric parallax as a function of ugri magnitudes. The metallicity calibration has been revised in @Bond10: $$\begin{aligned} \label{feh} \mathrm{[Fe/H]} & = & \mathrm{A + Bx + Cy + Dxy + Ex^2 + Fy^2 + Gx^2y + Hxy^2 + Ix^3 + Jy^3 }\end{aligned}$$ where x = $u-g$, y = $g-r$ and (A–J) = (–13.13, 14.09, 28.04, –5.51, –5.90, –58.58, 9.14, –20.61, 0.0, 58.20).\ This relation has been determined for F and G stars and is consequently applicable in the range 0.2 $< g-r <$ 0.6 and –0.25 + 0.5($u-g$) $< g-r <$ 0.05 + 0.5($u-g$). This calibration only extends to –0.2 dex. Observed vertical distances $Z$ have been calculated using $Z = D\,\sin(b)$, b being the latitude of the star. Photometric distances $D$, defined by $m_{r} - M_{r} = 5\log(D) - 5$, were determined using the absolute magnitude calibration of @ivezic08, which depends on the metallicity and on $g-i$ colours.\ For the highest extinction values given by @Jones11, the impact on metallicities, as estimated using Eq. \[feh\] and the absolute magnitude relation of @ivezic08, is at most 0.15 dex near g–r=0.5 at solar metallicities and 0.1 dex at \[Fe/H\]= –1 dex.
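As an illustration, the calibration of Eq. \[feh\] and the distance modulus above can be chained as follows (a sketch; the example colours and magnitudes are hypothetical, and the absolute magnitude $M_r$ is taken as an input rather than computed from the @ivezic08 relation):

```python
# Bond et al. (2010) coefficients (A-J) of Eq. [feh]
A, B, C, D, E, F, G, H, I, J = (-13.13, 14.09, 28.04, -5.51, -5.90,
                                -58.58, 9.14, -20.61, 0.0, 58.20)

def feh(u_g, g_r):
    """Photometric [Fe/H] for F/G stars; x = u-g, y = g-r within the validity range."""
    x, y = u_g, g_r
    return (A + B*x + C*y + D*x*y + E*x**2 + F*y**2
            + G*x**2*y + H*x*y**2 + I*x**3 + J*y**3)

def distance_pc(m_r, M_r):
    """Photometric distance from the distance modulus m_r - M_r = 5 log10(D) - 5."""
    return 10 ** ((m_r - M_r + 5) / 5)

# hypothetical F/G star with u-g = 1.1, g-r = 0.4, m_r = 19, M_r = 5
print(f"[Fe/H] = {feh(1.1, 0.4):.2f} dex")     # about -0.8 dex
print(f"D = {distance_pc(19.0, 5.0):.0f} pc")  # 6310 pc
```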
Distances will be affected at most by about 20% at solar metallicities and 15% at \[Fe/H\]= –1 dex at g–r near 0.40-0.45.\ (\[Fe/H\], Z) distributions ---------------------------- We generated catalogues with the model in the direction of W1 and W3, deriving the Z height above the plane from the simulated distances, and metallicities from the assumed metallicity distributions of each population. In Fig. \[fig2\] we present the (\[Fe/H\], Z) distributions for both the data and the model. The dotted line is the median metallicity per bin of 0.5 kpc. The continuous line is the median metallicity for disk stars as given by @Bond10 and follows the disk distribution in our data rather well. We find similar results to @Bond10: the halo dominates the star counts above 3 kpc, with a mean metallicity of about –1.5 dex. @Sesar11 studied the four CFHTLS Wide fields, but with magnitudes corrected for ISM extinction. They found the mean halo metallicity to lie between –1.4 and –1.6 dex. Our estimate of the extinction effect would shift metallicities to about 0.15
{ "pile_set_name": "ArXiv" }
--- abstract: 'The principal portfolios of the standard Capital Asset Pricing Model (CAPM) are analyzed and found to have remarkable hedging and leveraging properties. Principal portfolios implement a recasting of any *correlated* asset set of $N$ risky securities into an equivalent but *uncorrelated* set when short sales are allowed. While a determination of principal portfolios in general requires a detailed knowledge of the covariance matrix for the asset set, the rather simple structure of CAPM permits an accurate solution for any reasonably large asset set that reveals interesting universal properties. Thus for an asset set of size $N$, we find a *market-aligned* portfolio, corresponding to the *market* portfolio of CAPM, as well as $N-1$ *market-orthogonal* portfolios which are market neutral and strongly leveraged. These results provide new insight into the return-volatility structure of CAPM, and demonstrate the effect of unbridled leveraging on volatility.' author: - 'M. Hossein Partovi' title: 'Hedging and Leveraging: Principal Portfolios of the Capital Asset Pricing Model' --- 1. Introduction {#introduction .unnumbered} =============== Modern investment theory dates back to the mean-variance analysis of Markowitz (1952, 1959), which is expected to hold if asset returns are normally distributed or the investor preferences are quadratic. Undoubtedly, the most consequential fruit of Markowitz’ seminal work was the introduction of the capital asset pricing model (CAPM) by Sharpe (1964), Lintner (1965), and Mossin (1966). The key ideas of this model are that investors are mean-variance optimizers facing a frictionless market with full agreement on the distribution of security returns and unrestricted access to borrowing and lending at the riskless rate. As an asset pricing model, CAPM is an equilibrium model valid for a given investment horizon, which is taken to be the same for all investors. Indeed, investors are solely distinguished by their level of risk aversion. Principal portfolio analysis, on the other hand, simplifies asset allocation by recasting the asset set into uncorrelated portfolios when short sales are allowed (Partovi and Caputo 2004). Stated otherwise, the original problem of stock selection from a set of *correlated assets* is transformed into the much simpler problem of choosing from a set of *uncorrelated portfolios*. The details of this transformation are given in Partovi and Caputo (2004), where the results are summarized as follows: *Every investment environment ${ \{ {s}_{i}, {r}_{i}, {\sf \sigma}_{ij} \} }_{i,j=1}^{N}$ which allows short sales can be recast as a principal portfolio environment ${ \{ {S}_{\mu}, {R}_{\mu}, {\sf V}_{\mu \nu} \} }_{\mu, \nu =1}^{N}$ where the principal covariance matrix ${\sf V}$ is diagonal. The weighted mean of the principal variances equals the mean variance of the original environment. In general, a typical principal portfolio is hedged and leveraged.* Here $s_i$ ($ {S}_{\mu}$), $r_i$ (${R}_{\mu}$), and ${\sigma}_{ij}$ (${\sf V}_{\mu \nu} $) represent the assets, the expected returns, and the covariance matrix of the original (recast) set, while $N$ is the size of the asset set. It was further shown in Partovi and Caputo (2004) that the efficient frontier in the presence of a riskless asset obeys a simple allocation rule, which requires that each principal portfolio be included in inverse proportion to its variance.
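A minimal numerical sketch of this recasting (synthetic inputs, anticipating the single-index covariance structure analyzed in §2): diagonalizing the covariance matrix yields $N$ mutually uncorrelated portfolios, one of which is closely aligned with the market direction while the rest are nearly market-orthogonal.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50
beta = rng.normal(1.0, 0.3, size=N)        # market couplings (synthetic)
res_var = rng.uniform(0.01, 0.05, size=N)  # residual (specific) variances
mkt_var = 0.03                             # market variance

# covariance of the single-index form: sigma_ij = res_var_i delta_ij + beta_i beta_j mkt_var
Sigma = np.diag(res_var) + mkt_var * np.outer(beta, beta)

# principal portfolios are the eigenvectors of Sigma; eigenvalues are their variances
variances, weights = np.linalg.eigh(Sigma)

beta_hat = beta / np.linalg.norm(beta)
print("top-variance portfolio vs market direction:", abs(weights[:, -1] @ beta_hat))
print("next portfolio vs market direction:       ", abs(weights[:, -2] @ beta_hat))
# the first overlap is ~1 (market-aligned); the second is ~0 (market-orthogonal)
```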
Practical applications of principal portfolios have already been considered by several authors, for example, Poddig and Unger (2012) and Kind (2013). In this paper we present a perturbative calculation of the principal portfolios of the single-index CAPM in the large $N$ limit. The results of this calculation are in general expected to entail a relative error of the order of $1/{N}^2$. However, since any application of the single-index CAPM is most likely to involve a large asset set, the stated error is normally quite small and in any case majorized by modelling errors. Thus the results to be reported here are accurate implications of the underlying model. The principal portfolio analysis of the single-index model and an exactly solvable version of it presented in §3 highlight the volatility structure of principal portfolios in a practical and familiar context. A remarkable result of the analysis is the bifurcation of the set of principal portfolios into a [*market-aligned*]{} portfolio, which is unleveraged and behaves rather like a total-market index fund, and $N-1$ *market-orthogonal* portfolios, which are hedged and leveraged,[^1] and nearly free of market driven fluctuations. This equivalency between the original asset set and two classes of principal portfolios is reminiscent of, but fundamentally different from, Merton’s (1972) two mutual fund theorems. The market-orthogonal portfolios, on the other hand, provide a vivid demonstration of the effect of leveraging on the volatility level of a portfolio. 2. Principal Portfolios of the Single-Index Model {#principal-portfolios-of-the-single-index-model .unnumbered} ================================================= Here we shall analyze the standard single-index model as well as an exactly solvable special case of it with respect to their principal portfolio structure. Remarkably, our analysis will uncover interesting and hitherto unnoticed properties of well-diversified and arbitrarily leveraged portfolios within the single-index model. Consider a set of $N$ assets $ \{ {s}_{i} \}$, $1 \le i \le N$, whose rates of return are normally distributed random variables given by $${\rho}_{i}\stackrel{\rm def}{=}{\alpha}_{i}+{\beta}_{i}{\rho}_{mkt}, \label{431}$$ where ${\alpha}_{i}$ and ${\rho}_{mkt}$ are uncorrelated, normally distributed random variables with expected values and variances equal to ${\bar{\alpha}}_{i}$, ${\bar{\rho}}_{mkt}$ and ${\bar{{\alpha}_{i}^{2}}}$, ${\bar{{\rho}^{2}}}_{mkt}$, respectively. The quantity ${\beta}_{i}$ associated with asset ${s}_{i}$ is a constant which measures the degree to which ${s}_{i}$ is coupled to the overall market variations. Thus the attributes of a given asset are assumed to consist of a [*market-driven*]{} (or [*systematic*]{}) part described by $({\beta}_{i}{\rho}_{mkt},{\beta}_{i}^{2}{\bar{{\rho}^{2}}}_{mkt})$ and a [*residual*]{} (or [*specific*]{}) part described by $({\alpha}_{i},{\bar{{\alpha}_{i}^{2}}})$, with the two parts being uncorrelated. The expected value of Eq. (\[431\]) is given by $${\bar{{\rho}_{i}}}\stackrel{\rm def}{=}{r}_{i}={\bar{{\alpha}_{i}}}+{\beta}_{i}{\bar{\rho}}_{mkt}. \label{4311}$$ The covariance matrix which results from Eq. (\[431\]) is similarly a superposition of the specific and market-driven contributions, as would be expected of the sum of two uncorrelated variables. It can be written as $${\sf \sigma}_{ij}={\bar{{\alpha}_{i}^{2}}} {\delta}_{ij}+ {\beta}_{i}{\beta}_{j} {\bar{{\rho}^{2}}}_{mkt}. 
\label{432}$$ Note that ${\sf \sigma}$ is a [*positive definite*]{} matrix, since we have excluded riskless assets from the asset set for the time being. We shall assume here that the number of assets $N$ is appropriately large, as is in fact implicit in the formulation of all index models, so that the condition ${\bar{{\alpha}_{i}^{2}}}/ N {b}^{2} {\bar{{\rho}^{2}}}_{mkt} \ll 1$ is satisfied; here $b\stackrel{\rm def}{=} {({\sum}_{i=1}^{N}{\beta}_{i}^{2}/N)}^{1 \over 2}$ is the square root of the average value of ${\beta}_{i}^{2}$, typically of the order of unity. These assumptions are not essential to our discussion, but they do simplify the presentation and, more importantly, they are usually well satisfied for appropriately large values of $N$ and guarantee that our perturbative results below are accurate for practical applications. Under the above assumptions it is appropriate to rescale the covariance matrix as in ${\sf \sigma}_{ij}= N {b}^{2} {\bar{{\rho}^{2}}}_{mkt} {\tilde{\sf \sigma}}_{ij}$, where $${\tilde{\sf \sigma}}_{ij}\stackrel{\rm def}{=}{\gamma}_{i}^{2} {\delta}_{ij}+ {\hat{\beta}}_{i}{\hat{\beta}}_{j} \label{433}$$ is a dimensionless matrix. Here ${\hat{\beta}}_{i} \stackrel{\rm def}{=}{{\beta}}_{i}/ {({\sum}_{i=1}^{N}{\beta}_{i}^{2})}^{1 \over 2}$, so that $\hat{{\bm { \beta}}}=({\hat{\beta}}_{1},{\hat{\beta}}_{2}, \ldots, {\hat{\beta}}_{N})$ is a unit vector, and ${\gamma}_{i}^{2}\stackrel{\rm def}{=}{\bar{{\alpha}_{i}^{2}}}/ N {b}^{2} {\bar{{\rho}^{2}}}_{mkt} \ll 1
{ "pile_set_name": "ArXiv" }
--- abstract: 'We unify two recent results concerning equilibration in quantum theory. We first generalise a proof of Reimann \[PRL 101, 190403 (2008)\] that the expectation value of ‘realistic’ quantum observables will equilibrate under very general conditions, and discuss its implications for the equilibration of quantum systems. We then use this to re-derive an independent result of Linden et al. \[PRE 79, 061103 (2009)\], showing that small subsystems generically evolve to an approximately static equilibrium state. Finally, we consider subspaces in which all initial states effectively equilibrate to the same state.' author: - 'Anthony J. Short' title: 'Equilibration of quantum systems and subsystems' --- Introduction ============ Recently there has been significant progress in understanding the foundations of statistical mechanics, based on fundamentally quantum arguments [@Mahler; @Goldstein1; @Goldstein2; @PopescuShortWinter; @Tasaki; @reimann1; @reimann2; @us1; @us2; @gogolin1; @gogolin2]. In particular, Reimann [@reimann1; @reimann2] has shown that the expectation value of any ‘realistic’ quantum observable will equilibrate to an approximately static value, given very weak assumptions about the Hamiltonian and initial state. Interestingly, the same assumptions were used independently by Linden *et al.* [@us1; @us2] to prove that any small quantum subsystem will evolve to an approximately static equilibrium state (such that even ‘unrealistic’ observables on the subsystem equilibrate). In this paper we unify these two results by deriving the central result of Linden *et al.* [@us1] from a generalisation of Reimann’s result. We also offer a further discussion and extension of Reimann’s results, showing that systems will appear to equilibrate with respect to all reasonable experimental capabilities. Finally, we identify subspaces of initial states which equilibrate to the same state. Equilibration of expectation values. ==================================== We prove below a generalisation of Reimann’s result that the expectation value of any operator will almost always be close to that of the equilibrium state [@reimann1]. We extend his results to include non-Hermitian operators (which we will use later to prove equilibration of subsystems), correct a subtle mistake made in [@reimann2] when considering degenerate Hamiltonians, and improve the bound obtained by a factor of 4. As in [@reimann2; @us2], we make one assumption in the proof, which is that the Hamiltonian has *non-degenerate energy gaps*. This means that given any four energy eigenvalues $E_k, E_l, E_m$ and $E_n$, $$\label{eq:non-degen} E_k - E_l = E_m - E_n \Rightarrow \begin{array}{c} (E_k = E_l \; \textrm{and}\; E_m = E_n) \\ \textrm{or} \\ (E_k = E_m \; \textrm{and}\; E_l = E_n). \end{array}$$ Note that this definition allows degenerate energy levels, which may arise due to symmetries. However, it ensures that all subsystems physically interact with each other. In particular, given any decomposition of the system into two subsystems ${\mathcal{H}}= {\mathcal{H}}_A \otimes{\mathcal{H}}_B$, equation (\[eq:non-degen\]) will not be satisfied by any Hamiltonian of the form $H=H_A \otimes I_B + I_A \otimes H_B$ (unless either $H_A$ or $H_B$ is proportional to the identity) [^1]. Consider a $d$-dimensional quantum system evolving under a Hamiltonian $H=\sum_n E_n P_n$, where $P_n$ is the projector onto the eigenspace with energy $E_n$.
Denote the system’s density operator by $\rho(t)$, and its time-averaged state by $\omega \equiv {\left\langle \rho(t) \right\rangle_t}$. If $H$ has non-degenerate energy gaps, then for any operator $A$, $$\label{eq:theorem} \sigma_A^2 \equiv {\left\langle \left| {\operatorname{tr}}\left(A \rho\left(t\right) \right) - {\operatorname{tr}}\left( A \omega\right) \right|^2 \right\rangle_t} \leq \frac{\Delta(A)^2 }{4 d_{{\rm eff}}} \leq \frac{\|A\|^2}{d_{{\rm eff}}}$$ where $\|A\|$ is the standard operator norm [^2], $$\Delta(A) \equiv 2 \min_{c \in \mathbb{C}} \| A- c I \|,$$ and $$d_{{\rm eff}} \equiv \frac{1}{\sum_n \big( {\operatorname{tr}}(P_n \rho(0)) \big)^2}.$$ This bound will be most significant when the number of different energies incorporated in the state, characterised by the effective dimension $ d_{{\rm eff}}$, is very large. Note that $1 \leq d_{{\rm eff}} \leq d$, and that $d_{{\rm eff}}=N$ when a measurement of $H$ would yield $N$ different energies with equal probability. For pure states $d_{{\rm eff}} = {\operatorname{tr}}(\omega^2)^{-1}$ as in [@us1; @us2], but it may be smaller for mixed states when the Hamiltonian is degenerate. The quantity $\Delta(A) $ gives the range of eigenvalues when $A$ is Hermitian, and gives a slightly tighter bound than the operator norm. Following [@reimann2], we could improve the bound further by replacing $\Delta(A) $ with a state- and Hamiltonian-dependent term [^3], however we omit this step here for simplicity. **Proof:** To avoid some difficulties which arise when considering degenerate Hamiltonians, we initially consider a pure state $\rho(t) = {{| \psi(t) \rangle}\!{\langle \psi(t) |}}$, then extend the results to mixed states via purification. We can always choose an energy eigenbasis such that ${| \psi(t) \rangle}$ has non-zero overlap with only a single energy eigenstate ${| n \rangle}$ of each distinct energy, by including states ${| n \rangle} = P_n {| \psi(0) \rangle}/\sqrt{{\langle \psi(0) |} P_n {| \psi(0) \rangle}}$ whenever ${\langle \psi(0) |} P_n {| \psi(0) \rangle}>0$. The state at time $t$ is then given by $${| \psi(t) \rangle} = \sum_{n} c_n e^{-i E_n t/\hbar} {| n \rangle},$$ where $c_n = {\left\langle n| \psi(0) \right\rangle}$. This state will evolve in the subspace spanned by $\{{| n \rangle}\}$ as if it were acted on by the non-degenerate Hamiltonian $H'=\sum_n E_n {{| n \rangle}\!{\langle n |}}$. For any operator $A$, it follows that $$\begin{aligned} \sigma_A^2\!\!\! &=& {\left\langle |{\operatorname{tr}}(A [\rho(t) - \omega] )|^2 \right\rangle_t} \nonumber \\ &=& {\left\langle \left|\sum_{n \neq m} c_n c_m^* e^{i(E_m-E_n)t/\hbar} {\langle m |} A {| n \rangle} \right|^2 \right\rangle_t} \nonumber \\ &=& \!\!\!\! \sum_{\scriptsize \begin{array}{c} n \neq m \\ k\neq l \end{array}} \!\!\! \! c_n c_m^* c_k c_l^*{\left\langle e^{i(E_m-E_n + E_l - E_k)t/\hbar} \right\rangle_t}{\langle m |} A { | n \rangle \! \langle l |} A^{\dag} {| k \rangle} \nonumber \\ &=& \sum_{n,m} |c_n|^2 |c_m|^2 {\langle m |} A { | n \rangle \! \langle n |} A^{\dag} {| m \rangle} - \sum_{n} |c_n|^4 |{\langle n |}A {| n \rangle}|^2 \nonumber \\ & \leq &{\operatorname{tr}}( A \omega A^{\dag} \omega ) \nonumber \\ & \leq & \sqrt{{\operatorname{tr}}(A^{\dag}\!A\, \omega^2) {\operatorname{tr}}(A A^{\dag} \omega^2)} \nonumber \\ &\leq& \| A \|^2 {\operatorname{tr}}(\omega^2) \label{eq:pure_theorem} \nonumber \\ &=& \| A \|^2{\operatorname{tr}}\left[ \left(\sum_n |c_n|^2 {{|
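As an illustrative numerical check of the bound in Eq. (\[eq:theorem\]) (our sketch, with $\hbar=1$; a randomly drawn Hamiltonian generically has non-degenerate energy gaps, and the infinite-time average is approximated by sampling many times):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 200

def random_hermitian(d):
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (G + G.conj().T) / 2

E, V = np.linalg.eigh(random_hermitian(d))       # Hamiltonian spectrum (hbar = 1)

psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
c = V.conj().T @ psi                             # energy-basis amplitudes
d_eff = 1.0 / np.sum(np.abs(c) ** 4)             # effective dimension (pure state)

A = random_hermitian(d)
A_E = V.conj().T @ A @ V                         # A in the energy eigenbasis
equil = np.real(np.abs(c) ** 2 @ np.diag(A_E))   # tr(A omega), non-degenerate case

ts = rng.uniform(0, 1e4, size=2000)              # sample of evolution times
vals = np.array([np.real((c * np.exp(-1j * E * t)).conj() @ A_E
                         @ (c * np.exp(-1j * E * t))) for t in ts])
sigma2 = np.mean((vals - equil) ** 2)            # time variance of <A>

bound = np.max(np.abs(np.linalg.eigvalsh(A))) ** 2 / d_eff   # ||A||^2 / d_eff
print(f"time variance = {sigma2:.3e}  <=  bound = {bound:.3e}")
```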
{ "pile_set_name": "ArXiv" }
--- abstract: 'Three types of explicit estimators are proposed here to estimate the loss rates of the links in a network of the tree topology. All of them are derived by the maximum likelihood principle and proved to be either asymptotically unbiased or unbiased. In addition, a set of formulae are derived to compute the efficiencies and variances of the estimators, which also cover some of the estimators proposed previously. The formulae unveil that the variance of the estimates obtained by a maximum likelihood estimator for the pass rate of the root link of a multicast tree is equal to the variance of the pass rate of the multicast tree divided by the pass rate of the tree connected to the root link. Using the formulae, we are able to evaluate the estimators proposed so far and select an estimator for a data set.' author: - 'Weiping Zhu [^1]' bibliography: - '../globcom06/congestion.bib' title: Statistical Properties of Loss Rate Estimators in Tree Topology --- Correlation, Efficiency, Explicit Estimator, Loss Tomography, Maximum Likelihood, Variance. Introduction {#section1} ============ Network characteristics, such as link-level loss rates, delay distributions, and available bandwidth, are valuable information for network operation, development, and research. Therefore, considerable attention has been given to network measurement, in particular for large networks that cross a number of autonomous systems, where security concerns, commercial interests, and administrative boundaries make direct measurement impossible. To overcome the security and administrative obstacles, network tomography was proposed in [@YV96], where the author suggests the use of end-to-end measurement and statistical inference to estimate the characteristics of interest. Since then, many works have been carried out to estimate various characteristics, covering loss tomography [@CDHT99; @CDMT99; @CDMT99a; @CN00; @XGN06; @BDPT02; @ADV07; @DHPT06; @ZG05; @GW03], delay tomography [@LY03; @TCN03; @PDHT02; @SH03; @LGN06], loss pattern tomography [@ADV07], and so on. Despite the enthusiasm for loss tomography, there has been little work studying the statistical properties of an estimator with a finite sample size, although some asymptotic properties are presented in the literature [@CDHT99; @DHPT06]. The finite sample properties, such as efficiency and variance, differ from the asymptotic ones and are critical to the performance evaluation of an estimator, since each of them unveils the quality and effectiveness of an estimator in a specific aspect. Apart from that, the finite sample properties can be used to select a better estimator, if not the best, from a group of candidates for a data set obtained in a specific circumstance. To fill the gap, we in this paper propose a number of maximum likelihood estimators (MLEs) that can be solved explicitly for a network of the tree topology, and provide their statistical properties. The statistical properties are further extended to cover the MLEs proposed previously. One of the most important discoveries is a set of formulae to compute the efficiency and variance of the estimates obtained by an estimator. The approach proposed in [@YV96] requires us to send probing packets, called probes, from some end-nodes called sources to the receivers located on the other side of the network, where the paths connecting the sources to the receivers cover the links of interest.
To make the received probes informative for statistical inference, multicast or the unicast-based multicast proposed in [@HBB00; @CN00] is used to send probes from a source to a number of receivers, via a number of intermediate nodes that replicate arriving probes and forward them to their descendants. This process continues until the probes either reach their destinations or are lost, which makes the observations of any two receivers correlated to some degree, with the degree depending on the interconnection between the receivers. Given the network topology used for sending probes and the observations obtained at the receivers, we are able to create a likelihood function connecting the observations to the process described above. Since the number of correlations created by multicasting is proportional to the number of descendants attached to a node, the likelihood equation obtained for a node having many descendants is a high-degree polynomial that requires an iterative procedure, such as the expectation-maximization (EM) or the Newton-Raphson algorithm, to approximate the solution. Using an iterative procedure to solve a polynomial has been widely criticised for its computational complexity, which increases with the number of descendants attached to the link or path to be estimated [@CN00]. There has been a persistent effort in the research community to search for explicit estimators that are comparable in terms of accuracy to the estimators using an iterative approach. To achieve this, we must have the statistical properties of the estimates obtained by an estimator, such as unbiasedness, efficiency, and variance. Unfortunately, there has been little work in a general form on these properties, and the asymptotic properties obtained in [@CDHT99; @DHPT06] have little use in this circumstance. To overcome the problems stated above, we have undertaken a thorough and systematic investigation of the estimators proposed for loss tomography, aimed at identifying the statistical principles and strategies that have been used or can be used in the tree topology. A number of findings are obtained in the investigation, which show that all of the estimators proposed previously rely on observed correlations to infer the loss/pass rates, and most of them use all of the correlations available in estimation, such as the MLE proposed in [@CDHT99]. However, the qualities of the correlations, measured by the fitness between a correlation and the corresponding observation, are very much ignored. Rather than using all of the correlations available in estimation, we propose here to use a small portion of high-quality ones, and expect the estimates obtained by such an estimator to be comparable to those considering all of the correlations. The investigation further leads to a number of findings that contribute to loss tomography in four respects. - A large number of explicit estimators are proposed on the basis of composite likelihood [@Lindsay88]; they are divided into three groups: the block-wise estimators (BWE), the reduced-scale estimators (RSE), and the individual-based estimators (IBE). - The estimators in BWE and IBE are proved to be unbiased, and those in RSE are proved to be asymptotically unbiased, like the one proved in [@DHPT06]. A set of formulae are derived for the efficiencies and variances of the estimators in RSE and IBE, plus the MLE proposed in [@CDHT99].
The formulae show that the variance of the estimates obtained by an MLE can be exactly expressed in terms of the pass rate of the path of interest and the pass rates of the subtrees connected to the path. The formulae also show the weakness of the result obtained in [@DHPT06]. - The efficiencies of the estimators in IBE are compared with each other on the basis of Fisher information, which shows that an estimator considering a correlation involving a few observers can be more efficient than one considering more, and that the estimator proposed in [@DHPT06] is the least efficient. A similar conclusion is obtained for the estimators in BWE. - Using the formulae, we are able to identify an efficient estimator by examining the end-to-end observations, which makes model selection not only possible but also feasible. A number of simulations are conducted to verify this feature; they also show the connection between the efficiency and robustness of an estimator. The rest of the paper is organised as follows. In Section \[related work\], we briefly introduce the previous works related to explicit loss rate estimators and point out their weaknesses. In Section \[section2\], we introduce the loss model, the notations, and the statistics used in this paper. Using the model and statistics, we derive an MLE that considers all available correlations for a network of the tree topology in Section \[section3\]. We then decompose the MLE into a number of components according to correlations and derive a number of likelihood equations for the components in Section \[section 4\]. A statistical analysis of the proposed estimators is presented in Section \[section5\], which details their statistical properties; among them is a set of formulae to calculate the variances of various estimators. A simulation study is presented in Section \[section 6\], which compares the performance of five estimators and shows the feasibility of selecting an estimator for a data set. Section \[section7\] is devoted to concluding remarks. Related Works {#related work} ============= Multicast Inference of Network Characteristics (MINC) was the pioneer in putting the ideas proposed in [@YV96] into practice, where a Bernoulli model is used to model the loss behavior of a path. Using this model, the authors of [@CDHT99] derive an estimator in the form of a polynomial whose degree is one less than the number of descendants connected to the end node of the path of interest [@CDHT99; @CDMT99; @CDMT99a]. Apart from that, the authors obtain a number of results from asymptotic theory, such as the large-sample behaviour of the estimator and the dependency of the estimator variance on topology. Unfortunately, the results only hold if the sample size $n$ grows indefinitely. In addition, if $n\rightarrow \infty$, almost all of the estimators proposed previously produce the same results, and no one can tell the difference between them. In order to evaluate the performance of an estimator, experiments and simulations have been widely used, but they lead to few conclusive results, since there are too many random factors affecting the results obtained from experiments and simulations. To overcome the stated problem, simple and explicit estimators, such as that proposed in [@DHPT06], are investigated, with the aim of reducing the complexity of an estimator and, hopefully, finding theoretical support for further development, since a simple estimator may be easy to analyse. Using this strategy, the authors of [@DHPT06] propose an explicit estimator that only considers a correlation
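To make the flavor of explicit estimators concrete, consider the textbook two-receiver multicast tree: if $\gamma_1$, $\gamma_2$ and $\gamma_{12}$ denote the probabilities that receiver 1, receiver 2, or both observe a probe, then the pass rate $A$ of the shared link satisfies $\gamma_1\gamma_2/\gamma_{12}=A$, giving the explicit estimator $\hat{A}=\hat{\gamma}_1\hat{\gamma}_2/\hat{\gamma}_{12}$. A toy simulation (our sketch, with made-up pass rates; not one of the specific BWE/RSE/IBE estimators of this paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000                         # number of multicast probes
A, b1, b2 = 0.95, 0.90, 0.85        # true pass rates: shared link, two leaf links

shared = rng.random(n) < A          # probe survives the shared link (Bernoulli model)
y1 = shared & (rng.random(n) < b1)  # probe observed at receiver 1
y2 = shared & (rng.random(n) < b2)  # probe observed at receiver 2

g1, g2, g12 = y1.mean(), y2.mean(), (y1 & y2).mean()
A_hat = g1 * g2 / g12               # explicit estimator of the shared-link pass rate
print(f"true A = {A}, estimated A = {A_hat:.4f}")
```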
{ "pile_set_name": "ArXiv" }
--- abstract: 'We implement the contractor-renormalization method to study the checkerboard Hubbard model on various finite-size clusters as a function of the inter-plaquette hopping $t'$ and the on-site repulsion $U$ at low hole doping. We find that the pair-binding energy and the spin gap exhibit a pronounced maximum at intermediate values of $t'$ and $U$, thus indicating that moderate inhomogeneity of the type considered here substantially enhances the formation of hole pairs. The rise of the pair-binding energy for $t'<t'_{\rm max}$ is kinetic-energy driven and reflects the strong resonating valence bond correlations in the ground state that facilitate the motion of bound pairs as compared to single holes. Conversely, as $t'$ is increased beyond $t'_{\rm max}$, antiferromagnetic magnons proliferate and reduce the potential energy of unpaired holes, and with it the pairing strength. For the periodic clusters that we study, the estimated phase ordering temperature at $t'=t'_{\rm max}$ is a factor of 2–6 smaller than the pairing temperature.' author: - Shirit Baruch and Dror Orgad title: 'A contractor-renormalization study of Hubbard plaquette clusters' --- Introduction {#intro} ============ It is by now generally accepted that spatial inhomogeneity may emerge either as a static or as a fluctuating effect in strongly-coupled models of the high-temperature superconductors, and indeed in many of the real materials.[@ourreview] What is far from being settled is the issue of whether such inhomogeneity is [*essential*]{} to the mechanism of high-temperature superconductivity from repulsive interactions. While most researchers would probably answer this question in the negative, one should bear in mind the absence of conclusive evidence that the single-band two-dimensional Hubbard model, widely believed to be the “standard model” of high-temperature superconductivity, actually supports superconductivity with a high transition temperature.[@aimi] On the other hand, when examined on small clusters, the same model and its strong-coupling descendant, the $t-J$ model, exhibit robust signs of incipient superconductivity in the form of a spin gap and pair binding.[@ourreview] This fact points to the possibility that the strong susceptibility towards pairing is a consequence of the confining geometry itself. This line of thought has been pursued in the past by considering the extreme limit where the electronic density modulation is so strong that the system consists of weakly coupled Hubbard ladders[@optimal-ladder; @optimal-AFK] or plaquettes[@wf-steve]. Beyond the questionable applicability of such models to the physical systems, which are at most only moderately modulated, it is clear that strong inhomogeneity, even if beneficial to pairing, is detrimental to the establishment of phase coherence and consequently to superconductivity. On both counts it is, therefore, desirable to extend the analysis to the regime of intermediate inhomogeneity. Recently, the checkerboard Hubbard model, constructed from 4-site plaquettes with nearest-neighbor hopping $t$ and on-site repulsion $U$, was studied as a function of the inter-plaquette hopping $t'$ (see Fig. \[model-fig\]). Tsai [@steve-exact] diagonalized exactly the $4\times 4$ site cluster ($2\times 2$ plaquettes) and found that the pair-binding energy, as defined by Eq. (\[pb-def\]) below, exhibits a substantial maximum at $t'\approx t/2$ for $U\approx 8t$ and low hole concentration.
Doluweera [@DMFT-cluster], on the other hand, used the dynamical cluster approximation in the range $0.8\le t'/t\le 1$ and obtained a monotonic increase in both the strength of the $d$-wave pairing interaction and the superconducting transition temperature, $T_c$, towards a maximum that occurs in the homogeneous model. In this paper, we use the contractor-renormalization (CORE) method[@CORE] to derive an effective low-energy Hamiltonian for the checkerboard Hubbard model, which we then diagonalize numerically on various finite-size clusters. We begin by establishing the region of applicability of the CORE approximation by contrasting its predictions with the exact results of Ref. [@steve-exact] for $2\times 2$ plaquettes. Our findings indicate that at low concentrations of doped holes the two approaches agree reasonably well unless $t'$ is larger than a value that increases with $U$. Deviations also appear for small $t'$ when $U$ is large. We identify probable sources of these discrepancies. Based on the lessons gained from the small system, we go on to study larger clusters of up to 10 plaquettes. These include the periodic $6\times 6$-site cluster and 2-leg and 4-leg ladders with periodic boundary conditions along their length. Within the region where CORE is expected to provide reliable results, the pair-binding energy continues to exhibit a non-monotonic behavior with a pronounced maximum at intermediate values of $t'$ and $U$. The precise location of the maximum depends on the cluster geometry, but it typically occurs in the range $t'_{\rm max}\approx 0.5-0.7t$ and $U_{\rm max}\approx 5-8t$. The spin gap of the doped system follows a similar trend, often reaching its maximum slightly before the pair-binding energy. These findings demonstrate that moderate inhomogeneity, of the type considered here, can substantially enhance the binding of holes into pairs. In an effort to elucidate the source of the maximum, we have looked into the content of the ground state and calculated the contributions of various couplings in the effective Hamiltonian to its energy. Our results indicate that for $t'<t'_{\rm max}$ the doped holes move in a background which is composed predominantly of plaquettes that are in their half-filled ground state. This background possesses strong intra-plaquette singlet resonating valence bond (RVB) correlations, which facilitate the propagation of pairs relative to independent holes. The rise in the pair-binding energy while $t'$ grows towards $t'_{\rm max}$ is a result of a faster decrease of the pair kinetic energy in comparison to that of unpaired fermions. As $t'$ crosses $t'_{\rm max}$ and approaches the uniform limit, the ground state contains a growing number of plaquettes that support antiferromagnetic (AFM) magnons. In this regime of increasing AFM correlations the kinetic energy changes relatively little with $t'$, and the decrease of the pair-binding energy for $t'>t'_{\rm max}$ is caused by the lowering of the energy of single holes due to their interactions with the magnons. Interestingly, we find that the maximum in the pair-binding energy of the periodic clusters is accompanied by a change in the crystal momentum of the single-hole ground state from the $\Gamma-{\rm M}$ and symmetry-related directions at $t'<t'_{\rm max}$ to the Brillouin-zone diagonals at $t'>t'_{\rm max}$. A similar correlation was also found for the 3-hole ground state of the $6\times 6$-site cluster.
While the pair-binding energy sets a pairing scale, $T_p$, a phase-ordering scale, $T_\theta$, is provided by the phase stiffness. The latter was evaluated from the second derivative of the ground state energy with respect to a phase twist introduced by threading the system with an Aharonov-Bohm flux. We have found that as the twist is taken to zero, the CORE energy curvature typically converges towards a limiting value only when $t'<t'_{\rm max}$. Within this region the phase stiffness increases monotonically with $t'$. Our results indicate that for the lightly doped periodic clusters that we have considered phase fluctuations dominate over pairing, specifically, $T_p\approx 2-6 T_\theta$ at $t'=t'_{\rm max}$. The limitations of the present study make it difficult to draw conclusions regarding the behavior of $T_c$ in the two-dimensional thermodynamic limit. We have also calculated the pair-field correlations between Cooper pairs that reside on the most distant bonds allowed by our finite clusters. As expected, these correlations are consistent with $d$-wave pairing. However, in contrast to the pair-binding energy and the phase stiffness the correlations change little with $t'$ and are small in magnitude. This discrepancy might be resolved in light of our finding that only a few holes are tightly bound into pairs that reside within a single plaquette. Moreover, we find that the number of such pairs changes relatively little with $t'$, with no apparent correlation to the substantial maximum in the pair-binding energy. Taken together, these findings suggest that the correlation function which we and others often use to identify and quantify pairing in the Hubbard model may be ill-constructed to take account of the more extended and structured nature of pairing in this model. ![The checkerboard Hubbard model. Shown here are two of the clusters that we studied. The bonds labeled $ab$, $cd$, and $ef$ specify locations used in calculating the pairing correlations.[]{data-label="model-fig"}](plaquette.eps){width="\linewidth"} Model and Method {#models} ================ The Hamiltonian of the checkerboard Hubbard model, which we have studied, is given by $$H=-\sum_{\langle i,j \rangle, \sigma}\left( t_{ij}\, c_{i,\sigma}^\dagger c_{j,\sigma} +{\rm H.c.}\right)+U\sum_i n_{i,\uparrow} n_{i,\downarrow}, \label{H}$$ where $c_{i,\sigma}^\dagger$ creates an electron with spin $\sigma=\uparrow,\downarrow$ at site $i$ of a two-dimensional square lattice
{ "pile_set_name": "ArXiv" }
--- bibliography: - 'References.bib' title: '**[On the Perturbative Expansion around a Lifshitz Point]{}**' --- IPMU-09-0105 <span style="font-variant:small-caps;">Abstract</span>\ The quantum Lifshitz model provides an effective description of a quantum critical point. It has been shown that even though non–Lorentz invariant, the action admits a natural supersymmetrization. In this note we introduce a perturbative framework and show how the supersymmetric structure can be used to greatly simplify the Feynman rules and thus the study of the model. Introduction {#sec:intro} ============ In 1941, Lifshitz [@Lifshitz] introduced models with anisotropic scaling between space and time in the context of tri–critical models. Since then, such models have been studied in the context of solid state physics. Materials with strongly correlated electrons, such as copper oxides, show this type of critical behaviour, and the smectic phase of liquid crystals, for example, can also be described this way. Our treatment is based on quantum Lifshitz models as studied in [@Ardonne:2003p1613]. Quantum Lifshitz points are especially interesting, since they are *quantum critical points* [@Sachdev], *i.e.* points at which a continuous phase transition happens at $T=0$, driven by zero-point quantum fluctuations. A quantum Lifshitz point is characterized by the vanishing of the term $(\nabla \phi)^2$ in the effective Hamiltonian. While scale invariance is preserved, this gives rise to an anisotropy between space and time. This anisotropy is quantified by the *dynamical critical exponent* $z$, $$t\to \lambda^zt,\ \ x\to \lambda x.$$ For models in $2+1$ dimensions at a Lifshitz point, $z=2$, as opposed to the Lorentz invariant $z=1$. Models at a Lifshitz point have recently met with a large amount of interest beyond their original field of application[^1]. A $3+1$ dimensional theory of gravity with $z=3$ put forward by Hořava [@Horava:2009uw] has attracted considerable attention. But also in the context of the AdS/CFT correspondence, interest in gravity duals of non–Lorentz invariant models has arisen, see *e.g.* [@Balasubramanian:2008dm; @Son:2008ye; @Kachru:2008yh; @Volovich:2009yh]. In [@Kachru:2008yh] in particular, a gravity dual for a Lifshitz type model with $z=2$ was proposed. As discussed recently in [@Li:2009pf], it seems difficult to find string theory embeddings for gravity duals of Lifshitz–type points. While calculations are often easier to perform on the gravity side of the correspondence, we are able to carry out a number of calculations directly on the field theory side, which are presented in this article. Apart from being of direct interest for statistical physics, our results can serve as a point of reference for comparison to results derived on the gravity side. In [@Dijkgraaf:2009gr] it was shown that systems of Lifshitz type in $\left( d + 1 \right)$ dimensions admit a natural supersymmetrization, a property which results from their relation to $d$–dimensional models via a Langevin equation. The quantum Lifshitz model in [@Ardonne:2003p1613], described by the action $$\begin{aligned} S [ \phi] = \int \diff \left[ \dot \phi^2 + (\partial_i \partial^i \phi )^2 \right], && \mathbf{x} = x_i, \, i = 1,2 \, ,\end{aligned}$$ can be thought of as descending from a free boson in two dimensions with action $$W[\phi] = \int \di \mathbf{x} \, \left[ \partial_i \phi \partial^i \phi \right] \, .$$ This formulation allows the generalization of the quantum Lifshitz model to massive and interacting cases.
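Since the Langevin relation just mentioned is central to what follows, a minimal numerical sketch may help fix ideas. The Python script below is entirely our own illustration (not code from the paper): it integrates the Langevin equation $\dot\phi = -\delta W/\delta\phi + \eta$ for the free-boson functional $W[\phi]=\int \di \mathbf{x}\,\partial_i \phi\, \partial^i \phi$ on a periodic 2D lattice with Euler–Maruyama steps. Its stationary equal-time statistics sample $\exp(-W[\phi])$, which is the sense in which the $(2+1)$-dimensional Lifshitz theory descends from the 2D free boson.

```python
import numpy as np

rng = np.random.default_rng(0)
L, dt, n_steps = 32, 0.02, 20000

def laplacian(phi):
    """Nearest-neighbor lattice Laplacian with periodic boundaries."""
    return (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
            + np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4 * phi)

phi = np.zeros((L, L))
for _ in range(n_steps):
    # Euler-Maruyama step of dphi/dt = -dW/dphi + eta:
    # for W = sum over bonds of (grad phi)^2 one has -dW/dphi = 2*laplacian(phi),
    # and <eta eta'> = 2 delta delta gives per-site noise of std sqrt(2*dt).
    phi += 2 * laplacian(phi) * dt + rng.normal(0.0, np.sqrt(2 * dt), (L, L))

# Late-time configurations sample exp(-W[phi]).  Note the k = 0 mode is not
# constrained by W and performs a random walk, so subtract phi.mean() before
# comparing mode variances with the Gaussian free-boson prediction.
```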
It becomes possible to consider the class of models satisfying the detailed balance condition whose (bosonic part of the) action takes the form $$S[\phi] = \int \diff \left[ \dot \phi^2 + \left( \frac{\delta W [\phi]}{\delta \phi }\right)^2 \right] \, ,$$ where $W[\phi]$ is a local functional of the field $\phi (t,\mathbf{x})$. The structure due to the Langevin equation implies supersymmetry in the time direction, so that the complete action includes also a fermionic field. It is given by $$\label{eq:SuperStochasticAction} S[\phi, \psi, \bar \psi] = \int \diff \left[ \dot \phi^2 + \left( \frac{\delta W [\phi]}{\delta \phi }\right)^2 - \bar \psi \left( \frac{\di}{\di t} + \frac{\delta^2 W[\phi]}{\delta \phi^2} \right) \psi \right] \, .$$ This is the supersymmetric theory we focus on in this work. A major advantage of models with this structure is that they can be studied very efficiently by using a perturbative expansion of the underlying Langevin equation, as proposed in [@Cecotti:1983up]. In this way, the cancellation of bosonic and fermionic terms in the perturbative expansion becomes automatic. In consequence, there is a *great simplification of the Feynman diagrams* of the theory in $\left( d+ 1 \right)$ dimensions which are reformulated in terms of those of the $d$–dimensional system described by $W[\phi]$, plus a set of additional rules. If we consider only $n$–point functions for the bosonic field $\phi$, all the fermionic contributions are automatically accounted for, so that it is not even necessary to introduce a fermionic propagator.\ For relativistic theories, this construction is possible only for $d=0$ and $d=1$. Giving up Lorentz invariance, we concentrate on $d=2$, which – as we show in the following – is the critical case. The generalization to any $d$ is however clear. In the following we derive - the expression for the propagator of the free Lifshitz scalar (Sec. \[sec:propagator\]); - the Feynman rules for the simplest generalization to the interacting case (Sec. \[sec:interaction\]); - a scheme for UV regularization (Sec. \[sec:regularization\]). As examples, the three–point function (Sec. \[sec:three-point-function\]) and the one–loop propagator (Sec. \[sec:one-loop-propagator\]) are discussed. The Langevin equation and the Nicolai map {#sec:stoch-quant} ========================================= Having chosen to study the supersymmetric extension of the quantum Lifshitz model, we can make use of the Nicolai map [@Nicolai:1980jc]. In a supersymmetric field theory, a Nicolai map is a transformation of the bosonic fields $$\label{eq:Nicolai-map} \phi (t, \mathbf{x} ) \mapsto \eta(t, \mathbf{x} ) \, ,$$ such that the bosonic part of the Lagrangian is quadratic in $\eta$ and the Jacobian for the transformation is given by the determinant of the fermionic part: $$\begin{gathered} \label{eq:Nicolai-boson} S_B = \int \diff \left[ \frac{1}{2} \eta(t, \mathbf{x} )^2 \right] \, ,\\ \label{eq:Nicolai-fermion} \det \left[ \frac{\delta \eta}{\delta \phi} \right] = \int \mathcal{D}\psi \mathcal{D} \bar \psi \, \exp[-S_F] \, .\end{gathered}$$ Following [@Cecotti:1983up], we would like to interpret the mapping in Eq. (\[eq:Nicolai-map\]) as a Langevin equation for the field $\phi(t, \mathbf{x})$ with noise $\eta(t,\mathbf{x})$. More precisely, we want to show the equivalence of the action in Eq. 
(\[eq:SuperStochasticAction\]) to the Langevin equation $$\label{eq:langevin_st} \frac{\partial\,\phi(t,\mathbf{x})}{\partial t} = -\frac{\delta W}{\delta \phi}+\eta(t, \mathbf{x}).$$ The correlations of $\eta$, which is a white Gaussian noise (as in Eq. (\[eq:Nicolai-boson\])), are given by $$\begin{aligned} \label{eq:corr_eta} \braket{\eta (t,\mathbf{x})} = 0 \, , && \braket{ \eta (t, \mathbf{x}) \eta(t^\prime, \mathbf{x}^\prime)} = 2\, \delta ( t - t^\prime) \delta ( \mathbf{x} - \mathbf{x}^\prime ) \, .\end{aligned}$$ A stochastic equation of this type, where the dissipation term
{ "pile_set_name": "ArXiv" }
--- abstract: 'We present [*macroscopic*]{} experimental evidence for field-induced [*microscopic*]{} quantum fluctuations in different hole- and electron-type cuprate superconductors with varying doping levels and numbers of CuO$_2$ layers per unit cell. The significant suppression of the zero-temperature in-plane magnetic irreversibility field relative to the paramagnetic field in all cuprate superconductors suggests strong quantum fluctuations due to the proximity of the cuprates to quantum criticality.' author: - 'A. D. Beyer,$^1$ V. S. Zapf,$^2$ H. Yang,$^1$, F. Fabris,$^2$ M. S. Park,$^3$ K. H. Kim,$^3$ S.-I. Lee,$^3$ and N.-C. Yeh' title: | Macroscopic evidence for quantum criticality and field-induced\ quantum fluctuations in cuprate superconductors --- High-temperature superconducting cuprates are extreme type-II superconductors that exhibit strong thermal, disorder and quantum fluctuations in their vortex states. [@FisherDS91; @Blatter94; @Yeh93; @Balents94; @Giamarchi95; @Kotliar96; @Kierfeld04; @Zapf05; @Yeh05a] While much research has focused on the [*macroscopic*]{} vortex dynamics of cuprate superconductors with phenomenological descriptions, [@FisherDS91; @Blatter94; @Yeh93; @Balents94; @Giamarchi95; @Kierfeld04] little effort has been made to address the [*microscopic*]{} physical origin of their extreme type-II nature. [@Yeh05a] Given that competing orders (CO) can exist in the ground state of these doped Mott insulators besides superconductivity (SC), [@Yeh05a; @Zhang97; @Demler01; @Chakravarty01; @Sachdev03; @Kivelson03; @LeePA06] the occurrence of quantum criticality may be expected. [@Demler01; @Sachdev03; @Onufrieva04] The proximity to quantum criticality and the existence of CO can significantly affect the low-energy excitations of the cuprates due to strong quantum fluctuations [@Zapf05; @Yeh05a] and the redistribution of quasiparticle spectral weight among SC and CO. [@Yeh05a; @ChenCT07; @Beyer06] Indeed, empirically the low-energy excitations of cuprate superconductors appear to be unconventional, exhibiting intriguing properties unaccounted for by conventional Bogoliubov quasiparticles. [@Yeh05a; @ChenCT07; @Beyer06; @ChenCT03] Moreover, external variables such as temperature ($T$) and applied magnetic field ($H$) can vary the interplay of SC and CO, such as inducing or enhancing the CO [@ChenHY05a; @Lake01] at the price of more rapid suppression of SC, thereby leading to weakened superconducting stiffness and strong thermal and field-induced fluctuations. [@FisherDS91; @Blatter94; @Yeh93] On the other hand, the quasi two-dimensional nature of the cuprates can also lead to quantum criticality in the limit of decoupling of CuO$_2$ planes. In this work we demonstrate experimental evidence from [*macroscopic*]{} magnetization measurements for field-induced quantum fluctuations among a wide variety of cuprate superconductors with different [*microscopic*]{} variables such as the doping level ($\delta$) of holes or electrons, and the number of CuO$_2$ layers per unit cell ($n$). [@Chakravarty04] We suggest that the manifestation of strong field-induced quantum fluctuations is consistent with a scenario that all cuprates are in close proximity to a quantum critical point (QCP). 
[@Kotliar96] To investigate the effect of quantum fluctuations on the vortex dynamics of cuprate superconductors, our strategy involves studying the vortex phase diagram at $T \to 0$ to minimize the effect of thermal fluctuations, and applying magnetic field [*parallel*]{} to the CuO$_2$ planes ($H \parallel ab$) to minimize the effect of random point disorder. The rationale for having $H \parallel ab$ is that the intrinsic pinning effect of layered CuO$_2$ planes generally dominates over the pinning effects of random point disorder, so that the commonly observed glassy vortex phases associated with point disorder for $H \parallel c$ ([*e.g.*]{} vortex glass and Bragg glass) [@FisherDS91; @Giamarchi95; @Kierfeld04] can be prevented. In the [*absence*]{} of quantum fluctuations, random point disorder can cooperate with the intrinsic pinning effect to stabilize the low-temperature vortex smectic and vortex solid phases, [@Balents94] so that the vortex phase diagram for $H \parallel ab$ would resemble that of the vortex-glass and vortex-liquid phases observed for $H \parallel c$ with a glass transition $H_G (T = 0)$ approaching $H_{c2} (T = 0)$. On the other hand, when [*field-induced quantum fluctuations*]{} are dominant, the vortex phase diagram for $H \parallel ab$ will deviate substantially from predictions solely based on thermal fluctuations and intrinsic pinning, and we expect strong suppression of the magnetic irreversibility field $H_{irr} ^{ab}$ relative to the upper critical field $H_{c2} ^{ab}$ at $T \to 0$, because the induced persistent current circulating along both the c-axis and the ab-plane can no longer be sustained if field-induced quantum fluctuations become too strong to maintain the c-axis superconducting phase coherence. In this communication we present experimental results that are consistent with the notion that all cuprate superconductors exhibit significant field-induced quantum fluctuations as manifested by a characteristic field $H_{irr} ^{ab} (T \to 0) \equiv H^{\ast} \ll H_{c2} ^{ab} (T \to 0)$. Moreover, we find that we can express the degree of quantum fluctuations for each cuprate in terms of a reduced field $h^{\ast} \equiv H^{\ast}/H_{c2}^{ab}(0)$, with $h^{\ast} \to 0$ indicating strong quantum fluctuations and $h^{\ast} \to 1$ referring to the mean-field limit. Most important, the $h^{\ast}$ values of all cuprates appear to follow a trend on a $h^{\ast} (\alpha)$-vs.-$\alpha$ plot, where $\alpha$ is a material parameter for a given cuprate that reflects its doping level, electronic anisotropy, and charge imbalance if the number of CuO$_2$ layers per unit cell $n$ satisfies $n \ge 3$. [@Kotegawa01a; @Kotegawa01b] In the event that $H_{c2} ^{ab} (0)$ exceeds the paramagnetic field $H_p \equiv \Delta _{\rm SC} (0)/(\sqrt{2} \mu _B)$ for highly anisotropic cuprates, where $\Delta _{\rm SC} (0)$ denotes the superconducting gap at $T = 0$, $h^{\ast}$ is defined by $(H^{\ast}/H_p)$ because $H_p$ becomes the maximum critical field for superconductivity. To find $h^{\ast}$, we need to determine both the upper critical field $H_{c2} ^{ab} (T)$ and the irreversibility field $H_{irr} ^{ab} (T)$ to as low temperature as possible. Empirically, $H_{c2} ^{ab} (T)$ can be derived from measuring the magnetic penetration depth in pulsed fields, with $H_{c2}^{ab}(0)$ extrapolated from $H_{c2} ^{ab} (T)$ values obtained at finite temperatures. 
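As a simple numerical illustration of these definitions (our own sketch; the input gap value is hypothetical, not a measurement from this work), $H_p$ and $h^{\ast}$ follow directly from $H_p \equiv \Delta_{\rm SC}(0)/(\sqrt{2}\,\mu_B)$:

```python
import numpy as np

MU_B = 5.7884e-2  # Bohr magneton in meV/T

def paramagnetic_field(gap_meV):
    """Paramagnetic (Pauli-limiting) field H_p = Delta_SC(0)/(sqrt(2) mu_B), in tesla."""
    return gap_meV / (np.sqrt(2) * MU_B)

# Hypothetical example: Delta_SC(0) = 20 meV and a measured H* = 40 T.
Hp = paramagnetic_field(20.0)   # ~244 T
h_star = 40.0 / Hp              # h* << 1 signals strong field-induced quantum fluctuations
```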
The experiments involve measuring the frequency shift $\Delta f$ of a tunnel diode oscillator (TDO) resonant tank circuit with the sample contained in one of the component inductors. Details of the measurement techniques have been given in Ref. . In general we find that the condition $H_{c2} ^{ab} (0) > H_p$ is satisfied among all samples investigated so that we define $h^{\ast} \equiv (H^{\ast}/H_p)$ hereafter. On the other hand, determination of $H_{c2}^{ab} (0)$ and $H_{c2}^c (0)$ is still useful because it provides the electronic anisotropy $\gamma \equiv (\xi _{ab}/\xi _c) = \lbrack H_{c2}^{ab}(0)/H_{c2}^c(0) \rbrack$, where $\xi _{ab} (\xi _c)$ refers to the in-plane (c-axis) superconducting coherence length. ![Representative measurements of the in-plane irreversibility fields $H_{irr}^{ab} (T)$ in cuprate superconductors: (a) Hg-1223 (polycrystalline and grain-aligned), (b) Hg-1234 (polycrystalline), (c) Hg-1245 (grain-aligned), and (d) La-112 (polycrystalline and grain-aligned). Insets: (a) Consistent $T_{irr}^{ab}(H)$ obtained from maximum irreversibility of a polycrystalline sample and from irreversibility of a grain-aligned sample with $H \parallel ab$; (b) representative $\chi_3$ data taken using AC Hall probe techniques; (c) details of the
{ "pile_set_name": "ArXiv" }
--- abstract: 'The paper deals with the program of determining the complexity of various homeomorphism relations. The homeomorphism relation on compact Polish spaces is known to be reducible to an orbit equivalence relation of a continuous Polish group action (Kechris-Solecki). It is shown that this result extends to locally compact Polish spaces, but does not hold for spaces in which local compactness fails at only one point. In fact the result fails for those subsets of ${\mathbb{R}}^3$ which are unions of an open set and a point. In the end a list of open problems is given in this area of research.' author: - 'Vadim Kulikov[^1]' bibliography: - 'ref1.bib' title: 'Classification and Non-classification of Homeomorphism Relations' --- **MSC2010:** 03E15, 57N10, 57M25, 57N65. #### Acknowledgments. {#acknowledgments. .unnumbered} I am grateful to Professors Alexander Kechris and Su Gao for providing useful answers to my e-mails which helped and encouraged me to proceed with this work. During the time of the preparation I also had several valuable discussions on this topic with my Ph.D. supervisor Tapani Hyttinen and my friends and colleagues Rami Luisto, Pekka Pankka and Marcin Sabok. Last but not least I would like to thank a woman dear to me (whose identity is concealed) for the infinite inspiration that she has given me during the time of this work. This research was supported by the Austrian Science Fund (FWF) under project number P24654. Introduction {#sec:Intro} ============ It is known that the homeomorphism relation on Polish spaces is $\Sigma^1_2$ [@Gao] and ${{\Sigma_1^1}}$-hard [@FerLouRos Thm 22]. On the other hand, it is known that restricted to compact spaces this homeomorphism relation is reducible to an orbit equivalence relation induced by a continuous action of a Polish group, which is known to be strictly below ${{\Sigma_1^1}}$-complete. This result is extended to locally compact spaces in Theorem \[thm:locCom\]. The main result of this paper is that this “nice” property of locally compact spaces breaks when just one point is added to them: The homeomorphism relation on the ${\sigma}$-compact spaces of the form $V\cup \{x\}$ where $x\in {\mathbb{R}}^3$ is fixed and $V\subset {\mathbb{R}}^3$ is open falls somewhere in between: it is ${{\Sigma_1^1}}$ and the equivalence relation known as $E_1$ is continuously reducible to it (Theorem \[thm:NonClass\]). This implies that this homeomorphism relation is not classifiable in a Borel way by any orbit equivalence relation arising from a Borel action of a Polish group. The proof relies on known results in knot theory and low dimensional topology. We hope that these methods can be helpful in approaching Question \[open:Main\] and other questions listed in Section \[sec:Further\]. Sections \[sec:KnotTheory\] and \[sec:BkgrndDST\] are devoted to the required preliminaries. In Section \[sec:NonClass\] we prove the main non-classification result. In the final sections the research topic of classifying homeomorphism relations is looked at in more detail: In Section \[sec:Other\] we review what positive results exist in the classification of homeomorphism relations, and in Section \[sec:Further\] a list of open questions in the area is given. Preliminaries in Topology and Knot Theory {#sec:KnotTheory} ========================================= In this section we go through those definitions and lemmas in knot theory and topology that we need in the proofs later.
We assume that the reader is familiar with the notion of the first homology group $H_1(X)$ of a topological space $X$. The standard definitions can be found for example in [@Hatcher]. We denote by ${\mathbb{R}}^n$ the $n$-dimensional Euclidean space and by ${\mathbb{S}}^n$ the one-point compactification of it, i.e. ${\mathbb{S}}^n={\mathbb{R}}^n\cup\{\infty\}$ and the neighborhoods of $\infty$ are the sets of the form $\{\infty\}\cup({\mathbb{R}}^n\setminus C)$ where $C$ is compact. By ${{\operatorname{int}}}A$ we denote the topological interior of $A$ and by $\bar A$ the closure. Hausdorff Metric and Path Connected Subspaces --------------------------------------------- \[def:HausdorffMetric\] Let $X$ be a compact metric space. The space of all non-empty compact subsets of $X$ is denoted by $K(X)$. We equip $K(X)$ with the Hausdorff-metric: An *${\varepsilon}$-collar* of a set $C\subset X$ is the set $$C_{\varepsilon}=\{x\mid d(x,C)< {\varepsilon}\}$$ and the Hausdorff-distance between two sets in $K(X)$ is determined by $$d_{K(X)}(C,C')=\max\{\inf\{{\varepsilon}\mid C\subset C'_{\varepsilon}\},\inf\{{\varepsilon}\mid C'\subset C_{\varepsilon}\}\}.$$ The following facts are standard to verify. \[fact:Hausdorffmetric\] Let $X$ be a compact metric space. Then $K(X)$ is compact and if $(C_i)_{i\in{\mathbb{N}}}$ is a converging sequence in $K(X)$ and $C_*$ is its limit, then 1. for every $x_*$ we have $x_*\in C_*$ if and only if there is a sequence $x_i$ converging to $x_*$ with $x_i\in C_i$ for all $i\in{\mathbb{N}}$. \[fact:Haus1\] 2. if every $C_i$ is connected, then $C_*$ is connected. \[fact:Haus2\] \[fact:Haus3\] \[def:DenselyPathConnected\] A subset $A\subset {\mathbb{R}}^n$ is *path metric* if the distance between two points is given by $$d_E(x,y)=\inf\{L({\gamma})\mid {\gamma}\subset A\text{ is a path joining }x\text{ and }y\}$$ where $d_E$ is the Euclidean distance and $L({\gamma})$ is the length of the path. Equivalently $A$ is path metric if and only if for every two points $x,y\in A$ and ${\varepsilon}>0$ there is a path ${\gamma}\subset A$ connecting $x$ to $y$ and $L({\gamma})<(1+{\varepsilon})d_E(x,y)$. \[lem:HDPathMetric\] If the Hausdorff dimension of a closed $A\subset{\mathbb{R}}^n$ is less than $n-1$, then ${\mathbb{R}}^n\setminus A$ is path metric. Let $D_0$ be the $(n-1)$-dimensional unit disc $$D_0=\{(x_1,\dots,x_{n-1})\mid x_1^2+\dots+x_{n-1}^2<1\}.$$ and let $C_0$ be the cylinder $$D_0\times [0,1]\subset {\mathbb{R}}^n.$$ For $x,y\in{\mathbb{R}}^n$ denote by $[x,y]$ the straight line segment connecting $x$ and $y$. Suppose $A_0\subset C_0$ and assume that for every $(x_1,\dots,x_{n-1})\in D_0$ the set $$A_0\cap [(x_1,\dots,x_{n-1},0),(x_1,\dots,x_{n-1},1)]$$ is non-empty. Then $A_0$ must have Hausdorff dimension at least $n-1$: $A_0$ can be projected onto $D_0$ with the Lipschitz map $$(x_1,\dots,x_{n-1},x_n)\mapsto (x_1,\dots,x_{n-1},0),$$ the latter has Hausdorff dimension $n-1$ and the Hausdorff dimension cannot increase in a Lipschitz map. Therefore we have the following claim: If $A_0\subset C_0$ has Hausdorff dimension less than $n-1$, then there is $(x_1,\dots,x_{n-1})\in D_0$ such that $[(x_1,\dots,x_{n-1},0),(x_1,\dots,x_{n-1},1)]\cap A_0={\varnothing}.$ Let $x,y\in {\mathbb{R}}^n\setminus A$ and let ${\varepsilon}>0$. Since $A$ is closed there is ${\delta}<{\varepsilon}/2$ such that $\bar B(x,{\delta})\cap A=\bar B(y,{\delta})\cap A={\varnothing}$. 
Let $P_x$ and $P_y$ be $(n-1)$-dimensional affine hyperplanes passing through $x$ and $y$ respectively and which are orthogonal to $x-
{ "pile_set_name": "ArXiv" }
--- abstract: 'This is a note on a recent paper of De Simoi - Kaloshin - Wei [@DKW]. We show that using their results combined with wave trace invariants of Guillemin-Melrose [@GM2] and the heat trace invariants of Zayed [@Za] for the Laplacian with Robin boundary conditions, one can extend the Dirichlet/Neumann spectral rigidity results of [@DKW] to the case of Robin boundary conditions. We will consider the same generic subset as in [@DKW] of smooth strictly convex ${{\mathbb Z}}_2$-symmetric planar domains sufficiently close to a circle; however, we pair them with arbitrary ${{\mathbb Z}}_2$-symmetric smooth Robin functions on the boundary and, of course, allow deformations of the Robin functions as well.' address: 'Department of Mathematics, UC Irvine, Irvine, CA 92617, USA' author: - Hamid Hezari title: Robin spectral Rigidity of strictly convex domains with a reflectional symmetry --- Introduction ============ In [@DKW], it is shown that for a generic class $\mathcal C$ of smooth strictly convex ${{\mathbb Z}}_2$-symmetric planar domains sufficiently close to a circle, endowed with Dirichlet or Neumann boundary conditions, one has Laplace spectral rigidity within $\mathcal C$. This means that given any $\Omega_0 \in \mathcal C$ and any $C^1$-deformation $\{\Omega_s\}_{s \in [0, 1]} $ of $\Omega_0$ in $\mathcal C$ with $\text{Spec}(\Delta_{\Omega_s}) = \text{Spec}(\Delta_{\Omega_0})$ for all $s \in [0, 1]$, one can find isometries $\{ \mathcal I_s\}_{s \in [0, 1]}$ of ${{\mathbb R}}^2$ such that $ \mathcal I_s( \Omega_0) = \Omega_s$. Here $\text{Spec}(\Delta_{\Omega})$ is the spectrum of the euclidean Laplacian $\Delta= \frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}$ with Dirichlet (or Neumann) boundary condition on $\Omega$. In this paper we are concerned with the generalization of this problem for $\text{Spec}(\Delta_{\Omega, K})$, i.e., the spectrum of the euclidean Laplacian with Robin boundary condition $\partial_n u = Ku$ on $\partial \Omega$, for a given function $K \in C^\infty (\partial \Omega)$, where $\partial_n$ is the inward normal differentiation. In particular, by this notation $\Delta_{\Omega, 0}$ is the Laplacian on $\Omega$ with Neumann boundary condition on $\partial \Omega$. We show that: \[main\] Let $\delta >0$ and $\mathcal S_\delta$ be the class of smooth strictly convex ${{\mathbb Z}}_2$-symmetric[^1] planar domains that are $\delta$-close [^2] to a circle. Then there exists $\delta>0$ and a generic subset $\mathcal C$ of $\mathcal S_\delta$ such that given any $\Omega_0 \in \mathcal C$ and $K_0 \in C^\infty_{{{\mathbb Z}}_2} (\partial \Omega)$, and any $C^1$-deformation $\{\Omega_s\}_{s \in [0, 1]}$ of $\Omega_0$ in $\mathcal C$ and $C^0$-deformation $\{K_s\}_{s \in [0, 1]}$ of $K_0$ in $C_{{{\mathbb Z}}_2}^\infty (\partial \Omega)$ satisfying $\text{Spec}(\Delta_{\Omega_s, K_s}) = \text{Spec}(\Delta_{\Omega_0, K_0})$ for all $s \in [0, 1]$, one can find isometries $\{ \mathcal I_s\}_{s \in [0, 1]}$ of ${{\mathbb R}}^2$ such that $ \mathcal I_s( \Omega_0) = \Omega_s$ and $K_s ( \mathcal I_s (b))= K_0(b)$ for all $b \in \partial \Omega_0$. Here, $C^\infty_{{{\mathbb Z}}_2} (\partial \Omega)$ is the space of smooth functions on $\partial \Omega$ that are invariant under the imposed ${{\mathbb Z}}_2$-symmetry on $\Omega$. Also, in fact, the generic class $\mathcal C$ consists of $\Omega \in \mathcal S_\delta$ that satisfy: - Up to the reflection symmetry, all distinct periodic billiard orbits in $\Omega$ have distinct lengths.
- All (transversal) periodic billiard orbits in $\Omega$ are non-degenerate, i.e., the linearized Poincaré map associated to each orbit does not have $1$ as an eigenvalue. Using the results of [@PSgeneric] one sees that $\mathcal C$ is generic[^3] in $\mathcal S_\delta$. Moreover, for every $\Omega \in \mathcal C$, the spectrum of $\Delta$ with Dirichlet, Neumann, or Robin boundary conditions determines the length spectrum $\text{LS}(\Omega)$, which is the set of lengths of periodic billiard trajectories and their iterations, also including the length of the boundary and its multiples with positive integers. Such a determination is obtained through the so-called *Poisson relation* proved by [@AnMe; @PS], which asserts that if the boundary of $\Omega$ is smooth then $$\text{SingSupp} \left ( \text{Tr} \; \cos{ t \sqrt{-\Delta^B_\Omega} } \right ) \subset \{0 \} \cup \pm \text{LS}(\Omega),$$ where $\Delta^B_\Omega$ is the Euclidean Laplacian with Dirichlet, Neumann, or Robin boundary conditions. One can see ([@PS; @PSgeneric]) that under the generic conditions (1) and (2) above, the containment in the Poisson relation is an equality, hence $\text{LS}(\Omega)$ is a spectral invariant. On the other hand, the length spectral rigidity result of [@DKW] shows that if $\Omega_s \in \mathcal S_\delta$, and if $\text{LS}(\Omega_s) =\text{LS}(\Omega_0)$, then there exist isometries $\{ \mathcal I_s\}_{s \in [0, 1]}$ of ${{\mathbb R}}^2$ such that $ \mathcal I_s( \Omega_0) = \Omega_s$. Hence Theorem \[main\] follows from the second part of the following theorem, which concerns a fixed domain. To present the statement it is convenient to fix the axis of symmetry and also a marked point as in [@DKW]; we assume that each $\Omega \in \mathcal S_\delta$ is invariant under the reflection about the $x$-axis, that $\Omega \subset \{ (x, y); x \geq 0 \}$, and that $0=(0, 0) \in \partial \Omega$, which will be called the marked point. \[Robin\] Let $\mathcal C \subset \mathcal S_\delta$ be defined as above. There exists $\delta>0$ such that - If $\Omega \in \mathcal C$, $K_1, K_2 \in C^\infty_{{{\mathbb Z}}_2}(\partial \Omega)$, $K_1(0)=K_2(0)$, and $\text{Spec}(\Delta_{\Omega, K_1}) = \text{Spec}(\Delta_{\Omega, K_2})$, then $K_1=K_2$. - If $\Omega \in \mathcal C$ and if there are three functions $K_1, K_2, K_3$ in $C^\infty_{{{\mathbb Z}}_2}(\partial \Omega)$ such that $$\text{Spec}(\Delta_{\Omega, K_1})= \text{Spec}(\Delta_{\Omega, K_2})= \text{Spec}(\Delta_{\Omega, K_3}),$$ then at least two of them are identical. One can see that if we add the assumption that $\Omega$ has two perpendicular reflectional symmetries, and $K_1$ and $K_2$ are preserved under both symmetries, then $K_1(0)=K_2(0)$, hence $K_1=K_2$ by part (a). As a result one gets the following extension of the inverse spectral result of Guillemin-Melrose [@GM1] obtained on ellipses. \[2symmetries\] Let $\mathcal S_{2, \delta}$ be the subclass of $\mathcal S_{\delta}$ consisting of domains with two reflectional symmetries whose axes are perpendicular to each other. Let $\mathcal C_2 \subset \mathcal S_{2, \delta}$ be the class of domains satisfying the generic properties (1) and (2) above. If $\Omega \in \mathcal C_2$, $K_1, K_2 \in C^\infty_{{{\mathbb Z}}_2 \times {{\mathbb Z}}_2}(\partial \Omega)$, and $\text{Spec}(\Delta_{\Omega, K_1}) = \text{Spec}(\Delta_{\Omega, K_2})$, then $K_1=K_2$. To prove Theorem \[Robin\] we will use some technical results from [@DKW].
To be able to do so we will need a sufficient number of spectral invariants which we will obtain from a Poisson summation formula of Guillemin-Melrose [@GM2], and also heat trace formulas of Zayed [@Za] for the Robin Laplacian. In fact to our knowledge these are the only Robin spectral invariants that are
{ "pile_set_name": "ArXiv" }
--- abstract: | The flavour degree of freedom in non-charged $q\bar q$ mesons is discussed in a generalisation of quantum electrodynamics including scalar coupling of gauge bosons, which leads to an understanding of the confinement potential in mesons. The known “flavour states” $\sigma$, $\omega$, $\Phi$, $J/\Psi$ and $\Upsilon$ can be described as fundamental states of the $q\bar q$ meson system, if a potential sum rule is applied, which is related to the structure of the vacuum.\ This indicates a quantisation in fundamental two-boson fields, connected directly to the flavour degree of freedom.\ In comparison with potential models additional states are predicted, which explain the large continuum of scalar mesons in the low mass spectrum and new states recently detected in the charm region. PACS/ keywords: 11.15.-q, 12.40.-y, 14.40.Cs, 14.40.Gx/ Generalisation of quantum electrodynamics with massless elementary fermions (quantons, $q$) and scalar two-boson coupling. Confinement potential. Flavour degree of freedom of mesons described by fundamental $q^+q^-$ states. Masses of $\sigma$, $\omega$, $\Phi$, $J/\Psi$ and $\Upsilon$. --- version 30.3.2011 [Two-boson field quantisation and flavour in $q^+q^-$ mesons]{} H.P. Morsch[^1]\ Institute for Nuclear Studies, Pl-00681 Warsaw, Poland The flavour degree of freedom has been observed in hadrons, but also in charged and neutral leptons, see e.g. ref. [@PDG]. It is described in the Standard Model of particle physics by elementary fermions of different flavour quantum number. The fact that flavour is found in both strong and electroweak interactions could point to a supersymmetry between these fundamental forces, which should give rise to a variety of supersymmetric particles, which in spite of extensive searches have not been observed. A very different interpretation of the flavour degree of freedom is obtained in an extension of quantum electrodynamics, in which the property of confinement of mesons as well as their masses are well described. This is based on a Lagrangian [@Moinc], which includes a scalar coupling of two vector bosons $$\label{eq:Lagra} {\cal L}=\frac{1}{\tilde m^{2}} \bar \Psi\ i\gamma_{\mu}D^{\mu}( D_{\nu}D^{\nu})\Psi\ -\ \frac{1}{4} F_{\mu\nu}F^{\mu\nu}~,$$ where $\Psi$ is a massless elementary fermion (quanton, q) field, $D_{\mu}=\partial_{\mu}-i{g_e} A_{\mu}$ the covariant derivative with vector boson field $A_{\mu}$ and coupling $g_e$, and $F^{\mu\nu}=\partial^{\mu}A^{\nu}-\partial^{\nu}A^{\mu}$ the field strength tensor. Since our Lagrangian is an extension of quantum electrodynamics, the coupling $g_e$ corresponds to a generalized charge coupling $g_e\geq e$ between charged quantons $q^+$ and $q^-$. By inserting the explicit form of $D^{\mu}$ and $D_{\nu}D^{\nu}$ in eq.
(\[eq:Lagra\]), this leads to the following two contributions with 2- and 3-boson ($2g$ and $3g$) coupling, if Lorentz gauge $\partial_\mu A^\mu =0$ and current conservation is applied $$\label{eq:L2} {\cal L}_{2g} =\frac{-ig_e^{2}}{\tilde m^{2}} \ \bar \Psi \gamma_{\mu} \partial^{\mu} (A_\nu A^\nu) \Psi $$ and $$\label{eq:L3} {\cal L}_{3g} =\frac{ -g_e^{3}\ }{\tilde m^{2}} \ \bar \Psi \gamma_{\mu} A^\mu (A_\nu A^\nu)\Psi \ .$$ Requiring that $A_\nu A^\nu$ corresponds to a background field, ${\cal L}_{2g}$ and ${\cal L}_{3g}$ give rise to two first-order $q^+q^-$ matrix elements $$\label{eq:P2} {\cal M}_{2g} =\frac{-\alpha_e^{2}}{\tilde m^{3}} \bar \psi(\tilde p') \gamma_\mu~\partial^\mu \partial^\rho w(q)g_{\mu\rho}~\gamma_\rho \psi(\tilde p)$$ and $$\label{eq:P3} {\cal M}_{3g} = \frac{-\alpha_e^{3}}{\tilde m} \bar \psi(\tilde p')\gamma_{\mu} ~w(q)\frac{g_{\mu\rho} f(p_i)}{p_i^{~2}} w(q)~ \gamma_{\rho} \psi(\tilde p)~,$$ in which $\alpha_e={g_e^2}/{4\pi}$ and $\psi(\tilde p)$ is a two-fermion wave function $\psi(\tilde p)=\frac{1}{\tilde m^3} \Psi(p)\Psi(k)$. The momenta have to respect the condition $\tilde p'-\tilde p=q+p_i=P$. Further, $w(q)$ is the two-boson momentum distribution and $f(p_i)$ the probability to combine $q$ and $P$ to $-p_i$. Since $f(p_i)\to 0$ for $\Delta p\to 0$ and $\infty$, there are no divergencies in ${\cal M}_3$. By contracting the $\gamma$ matrices by $\gamma_\mu\gamma_\rho+ \gamma_\rho\gamma_\mu=2g_{\mu\rho}$, reducing eqs. (\[eq:P2\]) and (\[eq:P3\]) to three dimensions, and making a transformation to r-space (details are given in ref. [@Moinc]), the following two potentials are obtained, which are given in spherical coordinates by $$V_{2g}(r)= \frac{\alpha_e^2\hbar^2 \tilde E^2}{\tilde m^3}\ \Big (\frac{d^2 w(r)}{dr^2} + \frac{2}{r}\frac{d w(r)}{dr}\Big )\frac{1}{\ w(r)}\ , \label{eq:vb}$$ where $\tilde E=<E^2>^{1/2}$ is the mean energy of scalar states of the system, and $$\label{eq:vqq} V^{(1^-)}_{3g}(r)= \frac{\hbar}{\tilde m} \int dr'\rho(r')\ V_{g}(r-r')~,$$ in which $w(r)$ and $\rho(r)$ are two-boson wave function and density (with dimension $fm^{-2}$), respectively, related by $\rho(r)=w^2(r)$. Further, $V_{g}(r-r')$ is an effective boson-exchange interaction $V_{g}(r)=-\alpha_e^3\hbar \frac{f(r)}{r}$. Since the quanton-antiquanton parity is negative, the potential (\[eq:vqq\]) corresponds to a binding potential for vector states (with $J^\pi =1^-$). For scalar states angular momentum L=1 is needed, requiring a p-wave density, which is related to $\rho(r)$ by $$\label{eq:spur} \rho^{ p}(\vec r)=\rho^{ p}(r)\ Y_{1,m}(\theta,\Phi) = (1+\beta R\ d/dr) \rho(r)\ Y_{1,m}(\theta,\Phi)\ .$$ $\beta R$ is determined from the condition $<r_{\rho^p}>\ =\int d\tau\ r \rho^p(r)=0$ (elimination of spurious motion). This yields a boson-exchange potential given by $$\label{eq:vqq0} V^{(0^+)}_{3g}(r)= \frac{\hbar}{\tilde m} \int d\vec r\ '\rho^{ p}(\vec r\ ')\ Y_{1,m}(\theta',\Phi')\ V_{g}(\vec r-\vec r') = 4\pi \frac{\hbar}{\tilde m} \int dr'\rho^{ p}(r')\ V_{g}(r-r')~.$$ We require a matching of $V^{(0^+)}_{3g}(r)$ and $\rho(r)$ $$\label{eq:con1} V^{(0^+)}_{3g}(r)=c_{pot} \ \rho(r)\ ,$$ where $c_{pot}$ is an arbitrary proportionality factor. Eq. (\[eq:con1\]) is a consequence of the fact that $V_{g}(r)$ should be finite for all values of r. This can be achieved by using a form $$\label{eq:veff} V_{g}(r)=f_{as}(r) (-\alpha_e^3 \hbar /r)\ e^{-cr}$$ with $f_{as}(r)=(e^{(ar)^{\sigma}}-1)/(e^{(ar)^{\sigma}}+1)$, where the parameters $c$, $a$ and $\sigma$ are determined
{ "pile_set_name": "ArXiv" }
--- abstract: 'The segmentation of large-scale power grids into zones is crucial for control room operators when managing the grid complexity near real time. In this paper we propose a new two-step method which is able to perform this segmentation automatically, while taking into account the real-time context, in order to help them handle shifting dynamics. Our method relies on a “guided” machine learning approach. As a first step, we define and compute a task-specific “Influence Graph” in a guided manner. We indeed simulate, on a grid state, chosen interventions representative of our task of interest (managing active power flows in our case). For visualization and interpretation, we then build a higher-level representation of the grid relevant to this task by applying the graph community detection algorithm *Infomap* on this Influence Graph. To illustrate our method and demonstrate its practical interest, we apply it to commonly used systems, the IEEE-14 and IEEE-118. We show promising and original interpretable results, especially on the RTS-96 system, which has previously been well studied for grid segmentation. We eventually share an initial investigation and results on a large-scale system, the French power grid, whose segmentation bears a surprising resemblance to RTE’s historical partitioning.' author: - 'A. Marot, S. Tazi, B. Donnot, P. Panciatici (RTE R&D)$^{1}$ [^1]' bibliography: - 'bibliography.bib' title: '**Guided Machine Learning for power grid segmentation** ' --- INTRODUCTION ============ Well-established power systems such as the French power grid are starting to experience a transition with a steep rise in complexity. This is due in part to the changing nature of the grid, with an end to the ever-increasing total consumption. This shifts the way we traditionally develop the grid. While we used to expand it by building new power lines, with heavy investments that rely on growth in revenues, we now need to optimize the existing grid with all the flexibilities at our disposal. We also notice a revival of DC current technology, hybridizing the current AC grid with new dynamics. In addition, this new complexity also comes from other external factors such as the changing energy mix with a massive integration of renewables, as well as an ever more fragmented set of actors at a more granular level, like prosumers, or at the supranational level, with an interconnected European grid for instance. This new complexity will bring new dynamics such as dynamically varying flow amplitudes and directions. This is in contrast to what was the case in the past, with centralized production from large power plants “pushing” the flows to the loads in a very hierarchical, top-down way. New distributed controls are getting implemented, taking advantage of new communication and software technologies. This pushes us towards an ever more entangled cyber-physical system whose topology is no longer the actual physical grid topology, which was convenient for studying the grid. Its topology will also be one induced by long-distance communications and controls. Therefore, rethinking the way we operate the grid has become a necessity. To handle the current complexity, our control room operators have built over time, and over many studies with the simulators at their disposal, their own mental representations of the grid. They actually segment the grid into static zones that are redefined every year to study the grid efficiently near real time. They are indeed able to quickly identify remedial actions given the security risks at hand.
It helps them make the best trade-off between exploration and exploitation. However, we anticipate that these yearly static views will become less and less relevant for operating the grid in the future, with fuzzier electrical “frontiers”. This can even occur over the course of a day within this dynamic context. However, a zonal segmentation should still be relevant for operating the grid by efficiently representing this complexity to act on it. Offering such context awareness will help our dispatchers in their decision-making process. That is why an assisted segmentation, built in a dynamic fashion to fit the specific context of a situation, is needed. Hence, how can we build such a contextual segmentation for a given task? Previous works on segmentation have relied, on the one hand, on gathering proper dynamical phasor measurements on the grid to compute disturbance-based coherency in the time domain and find similarities between electrical nodes [@Kamwa2007Automatic; @Wang1994Novel; @Juarez2011Characterization]. This implies a massive deployment of PMUs or very accurate large-scale dynamic simulations. On the other hand, other analytical approaches have investigated simplified modeling of the grid, relying on the linearized DC approximation, to partition it along buses for the purpose of studying cascading failures [@Blumsack2009Defining] (hierarchical clustering), [@Sanchez2014Hierarchical] (spectral clustering) [@Cotilla2013Multi] (hybrid K-means/evolutionary algorithm). This gave interesting results at a much lower cost. However, as our system becomes more cyber-physical, with distributed regulations relying on advanced embedded software and fast communications, this approach has some limitations in the system complexity it can handle, such as clusters that are not connected in the actual grid topology. In addition, given their objective of identifying weak components overall for cascading failures, those methods were not particularly grid-state specific. We would like to address those two points for our near-real-time applications. In contrast to analytical methods, which have been more extensively explored in the field of power systems, our approach relies on machine learning, following our previous work [@IntroducingML] and responding to the call for new grid proxies in reliability management [@Proxies]. We propose in this paper a new method that relies on a guided use of existing power grid simulators to teach the machine an expected system response in the context of our task. We will talk of “guided machine learning”, a form of unsupervised learning on carefully generated inputs, guided by human expertise. For a more extended form of it, you can refer to [@GuidedML]. Simulating systematically chosen interventions on a grid state, we build an Influence Graph (IG) to define a similarity between our components given our operational task. Interestingly, the IG connectivity goes beyond the actual grid topology, which can further lead to non-topologically-connected components within a cluster, an idea expressed by [@hines2015InfluenceG] and reminiscent of [@roy2001InfluenceModel] when studying cascading failures. This kind of phenomenon will certainly become more prominent in a cyber-physical system and should be captured. Our machine can then learn a useful interpretable representation, a proxy, from that complex IG representation by running a suitable clustering algorithm.
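To make the pipeline concrete, here is a minimal self-contained sketch (our own illustration, not the paper's code). On a toy meshed grid, the “interventions” are single-line outages; the influence of line $k$ on line $\ell$ is measured by the line outage distribution factor (LODF) computed from a DC PTDF matrix, and the resulting weighted directed graph is clustered. We assume the `infomap` Python bindings from mapequation are installed; any community detector could stand in.

```python
import numpy as np
from infomap import Infomap  # mapequation's Python bindings (assumed installed)

# Toy meshed grid: 6 buses, 8 lines (from_bus, to_bus, reactance); hypothetical data.
edges = [(0, 1, .1), (1, 2, .1), (2, 3, .1), (3, 4, .1),
         (4, 5, .1), (5, 0, .1), (0, 3, .2), (1, 4, .2)]
n, m = 6, len(edges)

# DC power flow machinery: branch-bus incidence A and branch susceptances b.
A, b = np.zeros((m, n)), np.zeros(m)
for k, (i, j, x) in enumerate(edges):
    A[k, i], A[k, j], b[k] = 1.0, -1.0, 1.0 / x
Bbus = A.T @ np.diag(b) @ A

# PTDF with bus 0 as slack: sensitivity of each line flow to bus injections.
ptdf = np.zeros((m, n))
ptdf[:, 1:] = np.diag(b) @ A[:, 1:] @ np.linalg.inv(Bbus[1:, 1:])

# Influence of the outage of line k on line l: |LODF_{l,k}|, where h[:, k] is
# the flow response to a unit transfer across line k's endpoints and
# LODF_{l,k} = h[l, k] / (1 - h[k, k])  (valid since no line is a bridge here).
h = np.stack([ptdf[:, i] - ptdf[:, j] for (i, j, _) in edges], axis=1)
lodf = h / (1.0 - np.diag(h))
np.fill_diagonal(lodf, 0.0)

# Build the directed weighted Influence Graph over lines and cluster it.
im = Infomap("--two-level --directed --silent")
for k in range(m):
    for l in range(m):
        if l != k and abs(lodf[l, k]) > 1e-3:
            im.add_link(k, l, abs(lodf[l, k]))
im.run()
zones = im.get_modules()  # {line index: zone id}
```

On a real grid the intervention responses would come from a full simulator rather than a PTDF shortcut, and buses or substations rather than lines may be the clustered objects; the mechanics stay the same.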
The *Infomap* algorithm from the field of community detection was our top candidate given some intrinsic properties, and has been shown to work well on our IG. The paper is organized as follows. Section \[sec:method\] is dedicated to the method, where we describe the IG, justify its relevance compared to more classical distance matrices and discuss the suitability of the “*Infomap*” clustering algorithm. In section \[sec:results\] we present the results on a commonly used system, namely the IEEE 14, for illustration and interpretation of our method. To compare our method to others, we use the well-studied RTS-96 system for grid segmentation, which can serve as a benchmark. We eventually give some insights on the usefulness of our method on large-scale power grids such as the French power grid. Finally, section \[sec:conclusions\] provides conclusions and future directions for this work. METHOD {#sec:method} ====== A proxy, a simplified model of a complex system, can only be relevant for a certain range of tasks, as we are “neglecting” details that matter for other phenomena. It is useful as it reduces the dimension and exploration of a problem related to our task while preserving the relevant information. Such a representation can be judged along 3 axes: interpretability (helping someone apprehend a situation), being synthetic (limiting someone’s exploration of the problem) and efficiency (containing solutions to the actual problem). Clustering methods already apply to a wide diversity of problems. Our main issue here is to provide data representative of our task for a given grid state to a clustering algorithm. Measurements are not enough as our grid state is evolving. The dynamics around this state are the results of multiple entangled phenomena whose contributions are hard to assess. We cannot invasively influence those dynamics on the real system for the sake of our method. Rather, we need to rely on the proper use of simulators. Building on top of existing simulators has the advantage of relying on the complexity of their system modeling. This avoids the need to redefine a specific analytical model that captures the grid complexity for our task. Furthermore, rather than analytically and explicitly modeling our task, we make use of the simulator as an oracle to show a system response under some representative experiments that we call interventions. We then let the machine learn from it a proper synthetic representation for this implicit task. The combination of the simulator (Sim) and our set of interventions peculiar to our task can be seen as a teacher for our machine: we will call it a guided simulator (GSim). An Influence Matrix: a grid state under simulated small perturbations --------------------------------------------------------------------- In this section, we suppose that we have at our disposal a simulator $Sim$ that takes a grid state $(Inj,Topo)$, representing respectively the injections vector and the topology representation. The resulting call of this simulator is denoted by $x$: $$x=Sim(Inj,Topo)$$ Given our state $x = (\bm{z}, \omega)$, with $\bm{z} = (z_1, \dots, z_j, \dots, z_{n_z})$ our variables of interest relevant to our task, $\omega$ the other variables we discard, and $(Inj,Topo)$ $\subset{x}$. In our case we have $\bm{z} = $ “all the
{ "pile_set_name": "ArXiv" }
--- abstract: 'In this paper we present the design, manufacturing, characterization and analysis of the coupling ratio spectral response for Multimode Interference (MMI) couplers in Silicon-on-Insulator (SOI) technology. The couplers were designed using a Si rib waveguide with SiO$_2$ cladding, on a regular 220 nm film and 2 $\mu$m buried oxide SOI wafer. A set of eight different designs, three canonical and five using a widened/narrowed coupler body, have been the subject of study, with coupling ratios 50:50, 85:15 and 72:28 for the former, and 95:05, 85:15, 75:25, 65:35 and 55:45 for the latter. Two wafers of devices were fabricated, using two different etch depths for the rib waveguides. A set of six dies, three per wafer, whose line metrology matched the design, were retained for characterization. The coupling ratios obtained in the experimental results match, with small deviations, the design targets for a wavelength range between 1525 and 1575 nm, as inferred from spectral measurements and statistical analyses. Excess loss for all the devices is conservatively estimated to be less than approximately 2 dB. All the design parameters, body width and length, input/output positions and widths, and taper dimensions are disclosed for reference.' author: | José David Doménech$^1$, Javier S. Fandiño$^2$, Bernardo Gargallo$^2$ and Pascual Muñoz$^{1,2}$\ \ $^1$VLC Photonics S.L., C/ Camino de Vera s/n,\ Valencia 46022, Spain e-mail: david.domenech@vlcphotonics.com\ $^2$Optical and Quantum Communications Group, iTEAM Research Institute,\ Universitat Politècnica de València, C/ Camino de Vera s/n,\ Valencia 46022, Spain e-mail: pmunoz@iteam.upv.es. bibliography: - 'mmi.bib' title: 'Arbitrary coupling ratio multimode interference couplers in Silicon-on-Insulator' --- Introduction ============ Optical couplers are perhaps one of the most basic and most used among the building blocks for photonic integrated circuits (PICs) in all currently available technology platforms [@munoz_icton2013]. Different integrated implementations exist (see [@hunsperger]), and they are usually compared according to their coupling constant and operational wavelength range. Among all of them, the Multimode Interference (MMI) coupler is mostly used in high index contrast PIC technologies, such as III-V and group IV materials, since it is in general more compact and preserves the coupling constant over a wide wavelength range. Since its inception by Ulrich in 1975 [@ulrich1975], and the demonstrations of MMIs as we know them today carried out by Pennings and co-workers [@pennings1991], a multitude of papers have studied the different aspects of these very versatile devices: fundamental theory and design rules for the so-called canonical MMIs, by Soldano [@soldano] and Bachmann [@bachmann94; @bachmann95]; design rules and experimental demonstrations of widened/narrowed body MMIs for arbitrary coupling ratios at a single wavelength by Besse [@besse96], with reconfiguration using thermal tuning by Leuthold [@leuthold01]; tolerance analysis by Besse [@besse94]; design optimizations for different technologies by Halir [@halir]; a library of experimentally demonstrated 50:50 Silicon-on-Insulator couplers [@zhou], to name a few. [There are other means of implementing couplers with arbitrary ratio that make use of additional structures, as for instance the combination of two MMIs in a Mach-Zehnder Interferometer (MZI) like structure recently proposed by Cherchi]{} [@art:cherchi14].
In this paper we report on the design and experimental demonstration of arbitrary coupling ratio MMIs following the design rules by Besse and co-workers [@besse96], supported by Beam Propagation Method (BPM) commercial software optimizations [@phoenix], on a Silicon-on-Insulator (SOI) platform. [Complementary to previous works available in the literature, this paper presents: a) all]{} the design parameters required to obtain a broadband (1525-1575 nm) coupling ratio, with modest excess loss, for canonical 50:50, 85:15 and 72:28 MMIs, as well as for widened/narrowed 95:05, 85:15, 75:25, 65:35 and 55:45 MMIs[; b) spectral traces demonstrating the theoretically well-known broadband operation of these devices; c) statistics for the coupling ratio variations in the operational wavelength range, which may be of use to perform variational analysis of more complex on-chip devices, circuits and networks based on these MMIs; d) an explanation of how measurement deviations, due to variations in the in/out coupling to/from the chip, can bias the coupling ratio results, and e) measurements to infer the reproducibility, die to die and wafer to wafer, of the responses.]{} These reference designs, experimentally validated, [together with the statistical variations and reproducibility information,]{} can be used as a starting point for other designers [and researchers of these devices, and of more complex chip networks employing them,]{} on SOI platforms. Design ====== The design of all the MMIs was carried out in three steps: i) cross-section analysis and 2D reduction, ii) analytic approach and iii) numerical BPM optimization. The cross section consists of a buried oxide layer of 2 microns height, capped with a 220 nm Si layer and a SiO$_2$ over-cladding. Rib waveguides, with 130 nm etch depth from top of the Si layer, were used in the design stage. For the same lithographic resolution, rib waveguides provide more robust MMIs than strip waveguides, owing to the fact that wider waveguides are required to support the same number of modes [@soldano]. This comes at the cost of increased footprint, and some additional design refinements are required to minimize the MMI imbalance and excess loss[@halir][@hill], besides the complexity of two mask level fabrication described in [@thomson2010low]. The latter trade-off is common in applications where the coupling constant needs to be set very precisely, for instance in very small free spectral range Mach-Zehnder interferometers (MZI), to compensate for the significantly larger loss difference between the long and short interferometer arms [@bogaerts2010silicon]. Moreover, it is critical for on-chip reflectors based on Sagnac interferometers, where the reflectivity is solely determined by the coupling ratio of the coupler in the interferometer [@munoz2011sagnac]. Firstly, for the cross-section analysis a film-mode matching mode solver was used [@phoenix]. The wavelength dependence of the refractive indices was included in the solver (see the Appendix). For a given MMI width, the first and second mode propagation constants, $\beta_0$ and $\beta_1$ respectively, were found for a wavelength of 1.55 $\mu$m for TE polarization, and the beat length $L_{\pi} = \pi / (\beta_0-\beta_1)$ was computed from these. For the case of all the MMIs subject to design, the body width was set to 10 $\mu$m. The effective indices for the first and second mode given by the solver are n$_{eff,0}$=2.84849 and n$_{eff,1}$=2.84548.
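As a quick cross-check of these numbers (a sketch of ours, not the authors' code), the beat length follows directly from the quoted effective indices via $\beta = 2\pi n_{eff}/\lambda$:

```python
import numpy as np

lam = 1.55                           # wavelength in micrometres
n_eff0, n_eff1 = 2.84849, 2.84548    # solver results quoted in the text

beta0, beta1 = (2 * np.pi * n / lam for n in (n_eff0, n_eff1))
L_pi = np.pi / (beta0 - beta1)       # equivalently lam / (2 * delta_n)
print(f"L_pi = {L_pi:.2f} um")       # ~257.5 um; the text quotes 257.61 um,
                                     # the small difference coming from the
                                     # rounding of the printed indices
```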
Therefore the beat length results in L$_{\pi}$=257.61 $\mu$m. In order to later use a 2D BPM method, the cross-section was reduced vertically to a 1D waveguide using the effective index method (EIM) [@buus]. EIM was first used to derive the 1D effective index for the core region, and then the effective index to the left/right of the core was calculated by numerically solving (with a bisection method) for the 1D modes of the reduced structure to match the previously calculated $L_{\pi}$ of the 2D cross-section. Secondly, analytic design rules for canonical [@soldano] and arbitrary coupling ratio [@besse96] MMIs were used. These rules provide, for a given MMI width, an analytic approximation for the MMI body length, [named L$^0$]{}, from the previously calculated $L_{\pi}$, and for the case of arbitrary ratio, the width variation and body geometry (named type A, B, C and D in [@besse96]). [For completeness, the analytic approximations for the MMI lengths are reproduced here:]{} $$\begin{aligned} L^0_A &= \delta_W^A \frac{1}{2} \left(3 L_\pi \right) \label{eq:La}\\ L^0_{B,Bsym} &= \delta_W \frac{1}{3} \left(3 L_\pi \right) \label{eq:Lb}\\ L^0_C &= \delta_W \frac{1}{4} \left(3 L_\pi \right) \label{eq:Lc}\\ L^0_D &= \delta_W \frac{1}{5} \left(3 L_\pi \right) \label{eq:Ld}\end{aligned}$$
{ "pile_set_name": "ArXiv" }
--- abstract: 'In order to maximize the sensitivity of pulsar timing arrays to a stochastic gravitational wave background, we present computational techniques to optimize observing schedules. The techniques are applicable to both single and multi-telescope experiments. The observing schedule is optimized for each telescope by adjusting the observing time allocated to each pulsar while keeping the total amount of observing time constant. The optimized schedule depends on the timing noise characteristics of each individual pulsar as well as the performance of instrumentation. Several examples are given to illustrate the effects of different types of noise. A method to select the most suitable pulsars to be included in a pulsar timing array project is also presented.'
author:
- |
    K. J. Lee $^{1,2}$[^1], C. G. Bassa$^{2}$, G. H. Janssen$^{2}$, R. Karuppusamy$^{1,2}$, M. Kramer$^{1,2}$, R. Smits$^{2,3}$ and B. W. Stappers$^{2}$\
    $^1$Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany\
    $^2$Jodrell Bank Centre for Astrophysics, University of Manchester, Manchester M13 9PL, UK\
    $^3$Stichting ASTRON, Postbus 2, 7990 AA Dwingeloo, The Netherlands\
title: The optimal schedule for pulsar timing array observations
---

pulsar: general — gravitational wave

Introduction
============

Millisecond pulsars (MSPs) are stable celestial clocks, so the timing residuals, the differences between the observed and the predicted time of arrival (TOA) of their pulses, are usually minute compared to the total length of the data span. A stochastic gravitational wave (GW) background leaves angular-dependent correlations in the timing residuals of widely separated pulsars (for general relativity see @HD83, for alternative gravity theories see @LJR08 [@LJR10]), i.e. the correlation coefficient between timing residuals of a pulsar pair is a function of the angular distance between the two pulsars. Such a spatial correlation in pulsar timing signals makes it possible to directly detect GWs using pulsar timing arrays (PTAs; @HD83 [@FB90]). Previous analyses [@JHLM05] have calculated PTA sensitivity to a stochastic GW background generated by supermassive black hole (SMBH) binaries at cosmological distances [@JB03; @SHMV04]. They have shown that a positive detection of the GW background is feasible if one uses state-of-the-art pulsar timing technologies. Such encouraging results triggered subsequent observational efforts. At present, several groups are trying to detect GWs using PTAs: i) the European Pulsar Timing Array (EPTA; @EPTA06 [@EPTA10; @FHB10; @VLJ11]) with a sub-project, the Large European Array for Pulsars (LEAP, @KS10 [@FHB10]), combining data from the Lovell telescope, the Westerbork Synthesis Radio Telescope, the Effelsberg 100-m Radio Telescope, the Nançay Decimetric Radio Telescope, and the Sardinia Radio Telescope[^2], ii) the Parkes Pulsar Timing Array (PPTA; @Man08 [@Hob09; @PPTA10]) using observations with the Parkes radio telescope augmented by public domain observations from the Arecibo Observatory, iii) the North-American Nanohertz Observatory for Gravitational waves (NANOGrav, @Jenet09) using data from the Green Bank Telescope and the Arecibo Observatory, iv) Kalyazin Radio Astronomical Observatory timing [@Rodin01]. Besides these on-going projects, international cooperative efforts, e.g. the International Pulsar Timing Array (IPTA, @Ipta10) or future telescopes with better sensitivity, e.g.
the Five-hundred-meter Aperture Spherical Radio Telescope (FAST, @NWZZJG04 [@SLKMSJN09]) and the Square Kilometre Array (SKA, @KS10 [@SKSL09]), are planned to join the PTA projects to increase the chances of detecting GWs. Operational questions arise naturally from such PTA campaigns, e.g. how should the observing schedule be arranged to maximize our opportunity to detect the GW signal? How much will we benefit from such optimization? In this paper, we try to answer these questions. The paper is organized as follows: In Section \[sec:decs\], we extend the formalism of [@JHLM05] to calculate the GW detection significance as a function of observing schedules, i.e. the telescope time allocation to each pulsar. Then we describe the technique to maximize the GW detection significance in Section \[sec:optintro\]. Frameworks of the optimization problem are described in Section \[sec:optbak\], and the algorithms to optimize a single telescope and a multiple-telescope array are given in Section \[sec:optsig\] and Section \[sec:optmul\], respectively. The results are presented in Section \[sec:res\] and we discuss related issues in Section \[sec:con\].

Analytical Calculation For GW Detection Significance {#sec:decs}
====================================================

In this section, we calculate the statistical significance $S$ for detecting the stochastic GW background using PTAs. We consider TOAs from multiple pulsars, where each set may be collected from different telescopes or data acquisition systems. To detect the GW background, one correlates the TOAs between pulsar pairs and checks if the GW-induced correlation is significant. [@JHLM05] have calculated the GW detection significance for the case where the noise in TOAs is of a white spectrum with equal root-mean-square (RMS) level for all pulsars. To investigate the optimal observing schedule, we have to generalize the calculation, such that we can explicitly check the dependence of the GW detection significance on the noise properties of each individual pulsar. Under the influence of a stochastic gravitational wave background, the pulsar timing residual $R$ from a standard pulsar timing pipeline contains two components, the GW-induced signal $s$ and noise from other contributions $n$. In this section, we determine the statistical properties of $s$ and $n$ first, and then calculate the GW detection significance.

Statistics for GW-induced pulsar timing signal {#sec:psr}
----------------------------------------------

The spectrum of the stochastic GW background is usually assumed to be a power-law, in which the characteristic strain ($h_{\rm c}$) of the GW background is $h_{\rm c}=A_{0} (f/f_{0})^{\alpha}$. Here, $A_{0}$ is the dimensionless amplitude of the background at $f_{0}=1\, \textrm{yr}^{-1}$, and $\alpha$ is the spectral index. Under the influence of such a GW background, the power spectrum $S_{\rm s}(f)$ of the GW-induced pulsar timing residual $s$ is [@JHLM05] $$S_{\rm s}(f)= \frac{A_{0}^2 f^{2\alpha-3}}{12 \pi^2 f_{0}^{2\alpha}}\,. \label{eq:powsgw}$$ GWs perturb the space-time metric at the Earth. This introduces a correlation in the timing signal of two pulsars.
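Before turning to the spatial correlation, it is worth putting numbers into equation (\[eq:powsgw\]). The sketch below uses $A_0=10^{-15}$ and $\alpha=-2/3$, the index commonly adopted for an SMBH binary background; both numbers are illustrative assumptions, not values fitted in this paper:

```python
import numpy as np

def S_s(f, A0=1e-15, alpha=-2.0 / 3.0, f0=1.0):
    """GW-induced timing-residual power spectrum of equation (powsgw).
    Frequencies in 1/yr; A0 and alpha are illustrative values only."""
    return A0**2 * f**(2 * alpha - 3) / (12 * np.pi**2 * f0**(2 * alpha))

freqs = np.array([0.1, 1.0, 10.0])  # 1/yr
print(S_s(freqs))  # steeply red spectrum, ~f**(-13/3) for alpha = -2/3
```

For $\alpha=-2/3$ the residual spectrum falls as $f^{-13/3}$, which is why the lowest frequencies, i.e. the longest data spans, dominate the sensitivity.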
The correlation coefficient $H(\theta)$ between the GW-induced signals of two pulsars with an angular separation of $\theta$ is called the Hellings and Downs function [@HD83], given as $$H(\theta)=\left\{\begin{array}{l} \frac{3+\cos\theta}{8}-\frac{3(\cos\theta-1)}{2} \ln \left[\sin\left(\frac{\theta}{2}\right)\right] \textrm{, if }\theta\neq 0\,, \\ 1 \textrm{, if }\theta=0\,. \end{array}\right. \label{eq:hdfun}$$ The spectral properties, equation (\[eq:powsgw\]), together with the spatial correlation, equation (\[eq:hdfun\]), fully characterize the statistical properties of the GW-induced signals. For an isotropic GW background, the correlations between the GW-induced signals are $$\langle {}^{i}s_{k}\,{}^{j}s_{k'}\rangle=\sigma_{\rm g}^2 H({}^{ij}\theta) \gamma_{kk'}\,. \label{eq:corgws}$$ Here, we follow the notation that the subscript on the right is the index of sampling and the superscript on the left is the index for the pulsar. For example, we denote the $k$-th measurement of a timing residual of the $i$-th pulsar as ${}^{i}R_{k}$, the GW-induced signal as ${}^{i}s_{k}$ and other noise contributions as ${}^{i}n_{k}$. $\sigma_{\rm g}$ is the RMS level of the GW-induced signal.
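Equation (\[eq:hdfun\]) is easy to verify numerically; a minimal implementation (illustrative only):

```python
import numpy as np

def hellings_downs(theta):
    """H(theta) of equation (hdfun), for theta != 0 (radians)."""
    c = np.cos(theta)
    return (3 + c) / 8 - 1.5 * (c - 1) * np.log(np.sin(theta / 2))

# H -> 1/2 as theta -> 0+, whereas H(0) = 1 by definition; the extra 1/2 in
# the autocorrelation comes from the pulsar-term contribution.
print(hellings_downs(np.array([1e-4, np.pi / 3, np.pi / 2, np.pi])))
```

The curve starts at $1/2$ for small separations, dips negative near $90^{\circ}$ and recovers to $1/4$ at $180^{\circ}$, which is the angular signature a PTA searches for.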
{ "pile_set_name": "ArXiv" }
--- abstract: | We show that, with indivisible goods, the existence of competitive equilibrium fundamentally depends on agents’ substitution effects, not their income effects. Our Equilibrium Existence Duality allows us to transport results on the existence of competitive equilibrium from settings with transferable utility to settings with income effects. One consequence is that net substitutability—which is a strictly weaker condition than gross substitutability—is sufficient for the existence of competitive equilibrium. We also extend the “demand types” classification of valuations to settings with income effects and give necessary and sufficient conditions for a pattern of substitution effects to guarantee the existence of competitive equilibrium. JEL Codes: C62, D11, D44 author: - Elizabeth Baldwin - Omer Edhan - Ravi Jagadeesan - Paul Klemperer - Alexander Teytelboym bibliography: - 'bib.bib' date: 17th June 2020 title: | The Equilibrium Existence Duality:\ Equilibrium with Indivisibilities & Income Effects --- Introduction ============ This paper shows that, when goods are indivisible and there are income effects, the existence of competitive equilibrium fundamentally depends on agents’ substitution effects—i.e., the effects of compensated price changes on agents’ demands. We provide general existence results that do not depend on income effects. In contrast to the case of divisible goods, competitive equilibrium does not generally exist in settings with indivisible goods [@henry1970indivisibilites]. Moreover, most previous results about when equilibrium does exist with indivisible goods assume that utility is transferable—ruling out income effects but allowing tractable characterizations of (Pareto-)efficient allocations and aggregate demand that can be exploited to analyze competitive equilibrium.[^1] But understanding the role of income effects is important for economies with indivisible goods, as these goods may comprise large fractions of agents’ budgets. Furthermore, in the presence of income effects, the distribution of wealth among agents affects both Pareto efficiency and aggregate demand, making it necessary to develop new methods to analyze competitive equilibrium with indivisible goods. The cornerstone of our analysis is an application of the relationship between Marshallian and Hicksian demand. As in classical demand theory, Hicksian demand is defined by fixing a utility level and minimizing the expenditure of obtaining it. We combine Hicksian demands to construct a family of “Hicksian economies” in which prices vary, but agents’ utilities—rather than their endowments—are held constant. Our key result, which we call the Equilibrium Existence Duality, states that competitive equilibria exist for all endowment allocations if and only if competitive equilibria exist in the Hicksian economies for all utility levels. Preferences in each Hicksian economy reflect agents’ substitution effects. Therefore, by the Equilibrium Existence Duality, the existence of competitive equilibrium fundamentally depends on substitution effects. Moreover, as fixing a utility level precludes income effects, agents’ preferences are quasilinear in each Hicksian economy. 
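To make the expenditure-minimization construction concrete before we use it, consider a toy computation of Hicksian demand with indivisible goods and divisible money. Everything here (the utility function, prices, and the brute-force search) is a hypothetical illustration of the definition, not the paper's machinery:

```python
from itertools import product

def hicksian_demand(utility, prices, u_target, money_grid):
    """Minimize expenditure p.x + m subject to utility(x, m) >= u_target,
    over 0/1 bundles x of indivisible goods and a coarse grid for money m."""
    best, best_cost = None, float("inf")
    for bundle in product((0, 1), repeat=len(prices)):
        for m in money_grid:
            if utility(bundle, m) >= u_target:
                cost = sum(p * x for p, x in zip(prices, bundle)) + m
                if cost < best_cost:
                    best, best_cost = (bundle, m), cost
    return best, best_cost

# A hypothetical utility over two goods plus money:
u = lambda x, m: 3 * x[0] + 2 * x[1] - x[0] * x[1] + m
print(hicksian_demand(u, prices=(1.0, 1.5), u_target=3.0,
                      money_grid=[0.1 * k for k in range(41)]))
```

Varying prices while holding the utility target fixed traces out compensated (Hicksian) responses, and with the utility level pinned down, money enters the objective linearly, which is the quasilinearity just described.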
Hence, the Equilibrium Existence Duality allows us to transport (and so generalize) *any* necessary or sufficient condition for equilibrium existence from settings with transferable utility to settings with income effects.[^2] In particular, our most general existence result gives a necessary and sufficient condition for a pattern of agents’ substitution effects to guarantee the existence of competitive equilibrium in the presence of income effects. Consider, for example, the case of substitutable goods in which each agent demands at most one unit of each good. With transferable utility, substitutability is sufficient for the existence of competitive equilibrium [@KeCr:82] and defines a maximal domain for existence [@GuSt:99]. With income effects, @FlJaJaTe:19 showed that competitive equilibrium exists under gross substitutability. The Equilibrium Existence Duality tells us that, with income effects, competitive equilibrium in fact exists under *net* substitutability and that net substitutability defines a maximal domain for existence. Moreover, we show that gross substitutability implies net substitutability; the reverse direction is not true in the presence of income effects. An implication of our results is that it is unfortunate that [@KeCr:82], and much of the subsequent literature, used the term “gross substitutes” to refer to a condition on quasilinear preferences. Indeed, gross and net substitutability are equivalent without income effects, and our work shows that it is net substitutability, not gross substitutability, that is critical to the existence of competitive equilibrium with substitutes.[^3] To appreciate the distinction between gross and net substitutability, suppose that Martine owns a house and is thinking about selling her house and buying one of two other houses: a spartan one and a luxurious one [@quinzii1984core]. If the price of her own house increases, she may wish to buy the luxurious house instead of the spartan one—exposing a gross complementarity between her existing house and the spartan one. However, Martine regards the houses as net substitutes: the complementarity emerges entirely due to an income effect. Competitive equilibrium is therefore guaranteed to exist in economies with Martine if all other agents see the goods as net substitutes, despite the presence of gross complementarities. Our most general equilibrium existence theorem characterizes the combinations of substitution effects that guarantee the existence of competitive equilibrium. It is based on a classification of valuations into “demand types.” A demand type is defined by the set of vectors that summarize the possible ways in which demand can change in response to a small generic price change. For example, the set of all substitutes valuations forms a demand type, as does the set of all complements valuations, etc. Applying @BaKl:19’s taxonomy to changes in Hicksian demands, we see that their definition easily extends to general utility functions, capturing agents’ substitution effects. Examples of demand types in our setting with income effects, therefore, include the set of all net substitutes preferences, the set of all net complements preferences, etc. The Equilibrium Existence Duality then makes it straightforward that the Unimodularity Theorem[^4]—which encompasses many standard results on the existence of competitive equilibrium as special cases[^5]—is unaffected by income effects.
Therefore, as with the case of substitutes, conditions on complementarities and substitutabilities that guarantee the existence of competitive equilibrium in settings with transferable utility translate to conditions on net complementarities and substitutabilities that guarantee the existence of competitive equilibrium in settings with income effects. In particular, there are patterns of net complementarities that are compatible with the existence of competitive equilibrium. Our results may have significant implications for the design of auctions that seek competitive equilibrium outcomes, and in which bidders face financing constraints. For example, they suggest that versions of the Product-Mix Auction [@klemperer2008new], used by the Bank of England since the Global Financial Crisis, may work well in this context. Several other papers have considered the existence of competitive equilibrium in the presence of indivisibilities and income effects. [@quinzii1984core], [@gale1984equilibrium], and [@svensson1984competitive] showed the existence of competitive equilibrium in a housing market economy in which agents have unit demand and endowments. Building on those results, [@kaneko1986existence], [@van1997existence; @van2002existence], and [@yang2000equilibrium] analyzed settings with multiple goods, but restricted attention to separable preferences. By contrast, our results—even for the case of substitutes—allow for interactions between the demand for different goods. We also clarify the role of net substitutability for the existence of competitive equilibrium. In a different direction, [@DaKoMu:01] proved a version of the sufficiency direction of the Unimodularity Theorem for settings with income effects. [@DaKoMu:01] also defined domains of preferences using an optimization problem that turns out to be equivalent to the expenditure minimization problem. However, they did not note the connection to the expenditure minimization problem or Hicksian demand, and, as a result, did not interpret their sufficient conditions in terms of substitution effects or establish the role of substitution effects in determining the existence of equilibrium. We proceed as follows. Section \[sec:setting\] describes our setting—an exchange economy with indivisible goods and money. Section \[sec:EED\] develops the Equilibrium Existence Duality. Since the existing literature has focused mostly on the case in which indivisible goods are substitutes, we consider that case in Section \[sec:subst\]. Section \[sec:demTypes\] develops demand types for settings with income effects and states our Unimodularity Theorem with Income Effects. Section \[sec:auctions\] remarks on implications for auction design, and Section \[sec:conclusion\] concludes. Appendix \[app:EEDproof\] proves the Equilibrium Existence Duality. Appendix \[app:grossToNet\] proves the connection between gross and net substitutability. Appendices \[app:dualDemPrefs\] and \[app:maxDomain\] adapt the proofs of results from the literature to our setting.

The Setting {#sec:setting}
===========

We work with a model of exchange economies with indivisibilities—adapted to allow for income effects. There is a finite set $J$ of agents, a finite set $I$ of indivisible goods, and a divisible numéraire that we call “money.” We allow goods to be undesirable, i.e., to be “bads”. We fix a *total endowment* $\tot \in \mathbb{Z}^I$ of goods in the
{ "pile_set_name": "ArXiv" }
---
author:
- Aaron Clauset
- Kristian Skrede Gleditsch
title: The developmental dynamics of terrorist organizations
---

**Traditional studies of terrorist group behavior [@terrorism:definition] focus on questions of political motivation, strategic choices, organizational structure, and material support [@cordes:etal:1985; @hoffman:1998; @enders:sandler:2002; @pape:2003; @sageman:2004; @li:2005], but say little about the basic laws that govern how the frequency and severity (number of deaths) [@clauset:young:gleditsch:2007] of their attacks change over time. Here we study 3,143 fatal attacks carried out worldwide from 1968–2008 by 381 terrorist groups [@mipt:2008], and show that the frequency of a group’s attacks accelerates along a universal trajectory, in which the time between attacks decreases according to a power law in the group’s total experience; in contrast, attack severity is independent of organizational experience and organizational size. We show that the acceleration can be explained by organizational growth, and suggest that terrorist organizations may be best understood as firms whose primary product is political violence. These results are independent of many commonly studied social and political factors, suggesting a fundamental law for the dynamics of terrorism and a new approach to understanding political conflicts.**

High-quality empirical data on terrorist groups, such as their recruitment, fundraising, decision making, and organizational structure, are scarce, and the available sources are not typically amenable to scientific analysis [@jackson:etal:2005]. However, good-quality data on the frequency and severity of their attacks do exist, and their systematic analysis can shed new light on which facets of terrorist group behavior are predictable and which are inherently contingent. Each record in our worldwide database of 3,143 fatal attacks [@mipt:2008] includes its calendar date $t$, its severity $x$, and the name of the associated organization, if known (see Supplementary Information). For each group, we quantify the changes in the frequency and severity of a group’s attacks over its lifetime using a *development curve*. This curve maps a group’s behavior onto a common quantitative scale and facilitates the direct comparison of different groups at similar points in their life histories. To construct this, we plot the behavioral variable, such as the time (days) between consecutive attacks $\Delta t$ or the severity of an attack $x$, as a function of the group’s maturity or experience $k$, indexed here by the cumulative number of fatal attacks (Fig. \[fig:individual:devcurves\]). Combining the developmental curves of many groups produces an aggregate picture of their behavioral dynamics, and allows us to extract the typical developmental trajectory of a terrorist group. Constructing a combined development curve for the 381 organizations in our database, we find that the time between consecutive attacks $\Delta t$ changes in a highly regular way (Fig. \[fig:development\]a), while the severity of these attacks $x$ is independent of organizational experience (Fig. \[fig:development\]c).

*(Fig. \[fig:individual:devcurves\]: development curves, attack frequency and severity versus experience, for four example groups.)*

Empirically, the time between attacks decreases quickly as a group gains experience. For example, the mean delay between the first and second fatal attacks is almost six months, $\langle\Delta t\rangle=168.6\pm0.6$ days, while after 13 attacks, the mean delay is only $\langle\Delta t\rangle=27\pm1$ days.
More generally, the envelope or distribution of delays $p(\Delta t,k)$ can be characterized as a truncated log-normal distribution with constant variance $\sigma^{2}$ and a characteristic delay between attacks $\mu$ that decreases systematically with experience $k$. Mathematically: $$\begin{aligned} p(\log \Delta t,\log k) & \propto \exp\!\left[\frac{-(\log \Delta t+\beta \log k-\mu)^{2}}{2\sigma^{2}}\right] \enspace , \label{eq:model}\end{aligned}$$ where $\beta$ controls the trajectory of the distribution toward the natural cutoff at $\Delta t=1$ day. For small $k$, i.e., during a group’s early development, this model predicts a mean delay between attacks that decays like a power law $\Delta t\approx\mu\,k^{-\beta}$; however, as $k$ increases, this trend is attenuated as the mean delay asymptotes to $\Delta t=1$ (see Supplementary Information). Under this model, $\beta=1$ would indicate a simple linear feedback between a group’s attack rate and its experience. However, we find $\hat{\beta}=1.10\pm0.02$, indicating a faster-than-linear feedback between the accumulation of experience and the rate of future attacks. This model successfully predicts that the distributions of normalized delays $\Delta t\,k^{\hat{\beta}}$ will collapse onto a single log-normal distribution with parameters $\hat{\mu}$ and $\hat{\sigma}$ (Fig. \[fig:development\]b). However, individual attacks cannot be considered fully independent ($p=0.00\pm0.03$; see Supplementary Information), indicating that significant temporal or inter-group correlations may exist in the timing of a group’s future attacks [@clauset:etal:2009:b] (Fig. \[fig:individual:devcurves\]). In contrast, the severity of an attack $x$ is independent of group experience $k$ (Fig. \[fig:development\]c; $r=-0.024$, t-test, $p=0.17$), as illustrated by the collapse of the severity distributions $p(x\,|\,k)$ onto a single invariant heavy-tailed distribution [@clauset:young:gleditsch:2007; @clauset:etal:2009] (Fig. \[fig:development\]d; see Supplementary Information). For example, the mean severity of a group’s first attack $\langle x\rangle=6.7\pm0.9$ is only slightly larger than the mean severity of all attacks by very experienced groups ($k>100$) $\langle x\rangle=5.1\pm0.6$. Thus, contrary to common assumptions, young and old groups are equally likely to produce extremely severe events. Older groups, however, remain significantly more lethal overall [@asal:rethemeyer:2008] because they attack much more frequently than small groups, not because their individual attacks are more deadly. Here, we consider four explanations for the acceleration in the frequency of attacks: (i) organizational learning [@jackson:etal:2005], (ii) organizational growth, (iii) sampling artifacts, and (iv) averaging artifacts [@gallistel:etal:2004]. Organizational learning—commonly studied in manufacturing [@dutton:thomas:1984; @argote:1993] and called “learning by doing”—implies that terrorist groups are born clumsy and increase their attack rate primarily because their existing members learn to be more efficient, e.g., better planning, coördination, and execution. In contrast, organizational growth implies that groups are born small and increase their attack rate primarily by recruiting new, replaceable, relatively independent members, e.g., adding new terrorist cells.
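Equation (\[eq:model\]) is straightforward to simulate. The sketch below draws delays from the log-normal with a hard floor at one day as a crude stand-in for the truncation; $\mu$ and $\sigma$ are illustrative choices, not the paper’s fitted values:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_delays(k, mu=5.1, sigma=1.0, beta=1.10, n=100_000):
    """Inter-attack delays (days) at experience level k, drawn from the
    log-normal development-curve model; max() crudely mimics truncation."""
    log_dt = rng.normal(mu - beta * np.log(k), sigma, size=n)
    return np.maximum(np.exp(log_dt), 1.0)

for k in (1, 13, 100):
    print(k, sample_delays(k).mean())  # mean delay shrinks roughly as k**(-beta)
```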
Straightforward tests of the data can eliminate both artifactual explanations (see Supplementary Information), indicating that the acceleration is real, even at the level of an individual group (Fig. \[fig:individual:devcurves\]). The relative importance of organizational learning and organizational growth cannot be estimated using frequency and severity data alone. Untangling their effects requires data both on event planning and execution, and on a group’s size and recruitment at various points in its life history. To our knowledge, systematic data on event planning and execution do not exist. The best available data on group sizes, taken from an expert survey [@asal:rethemeyer:2008], are coarse (roughly order-of-magnitude) estimates of the maximum size achieved by each of the 381 groups over the 1998–2005 period; of these, 161 conducted at least one fatal attack, and 80 conducted at least two.

*(Fig. \[fig:development\]: a, delay trend; b, delay collapse; c, severity trend; d, severity collapse.)*

The growth hypothesis predicts that a group’s maximum size will be inversely related to the minimum delay between its attacks over the 1998–2005 period. An analysis of variance indicates that the average minimum delays in the four size categories are significantly different (Fig. \[fig:group:sizes\]a; $n$-way ANOVA, $F=9.98$, $p<0.000013$), and further that larger organizational size is a highly significant predictor of increased attack frequency ($r=-0.49$, t-test, $p<10^{-5}$). In contrast, size, like experience (Fig. \[fig:development\]c), is not a significant predictor of attack severity (Fig. \[fig:group:sizes\]b; see Supplementary Information). Although operational, organizational, and political circumstances vary widely across terrorist groups, the systematic nature of our results suggests several general conclusions. The strong dependence of attack frequency on experience (Fig. \[fig:development\]a
{ "pile_set_name": "ArXiv" }
--- abstract: | This paper deals with the problem of estimating predictive densities of a matrix-variate normal distribution with known covariance matrix. Our main aim is to establish some Bayesian predictive densities related to matricial shrinkage estimators of the normal mean matrix. The Kullback-Leibler loss is used for evaluating decision-theoretical optimality of predictive densities. It is shown that a proper hierarchical prior yields an admissible and minimax predictive density. Attention is also paid to superharmonicity of prior densities for finding a minimax predictive density with good numerical performance.

*AMS 2010 subject classifications:* Primary 62C15, 62C20; secondary 62C10.

*Key words and phrases:* Admissibility, Gauss’ divergence theorem, generalized Bayes estimator, inadmissibility, Kullback-Leibler loss, minimaxity, shrinkage estimator, statistical decision theory.

author:
- 'Hisayuki Tsukuma[^1] and Tatsuya Kubokawa[^2]'
title: 'Proper Bayes and Minimax Predictive Densities for a Matrix-variate Normal Distribution'
---

Introduction {#sec:intro}
============

The problem of predicting a density function for a future observation is an important field in practical applications of statistical methodology. Since predictive density estimation has been revealed to be parallel to shrinkage estimation for a location parameter, it has extensively been studied in the literature. Particularly, the Bayesian prediction for a multivariate (vector-valued) normal distribution has been developed by Komaki (2001), George et al. (2006) and Brown et al. (2008). See George et al. (2012) for a broad survey including a clear explanation of the parallelism between density prediction and shrinkage estimation. This paper addresses Bayesian predictive density estimation for a matrix-variate normal distribution. Denote by $\Nc_{a\times b}(M,\Psi\otimes\Si)$ the $a\times b$ matrix-variate normal distribution with mean matrix $M$ and positive definite covariance matrix $\Psi\otimes\Si$, where $M$, $\Psi$ and $\Si$ are, respectively, $a\times b$, $a\times a$ and $b\times b$ matrices of parameters and $\Psi\otimes\Si$ represents the Kronecker product of the positive definite matrices $\Psi$ and $\Si$. Let $A^\top$ be the transpose of a matrix $A$ and let $\tr A$ and $|A|$ be, respectively, the trace and the determinant of a square matrix $A$. Also, let $A^{-1}$ be the inverse of a nonsingular matrix $A$. If an $a\times b$ random matrix $Z$ is distributed as $\Nc_{a\times b}(M,\Psi\otimes\Si)$, then $Z$ has a density of the form $$(2\pi)^{-ab/2}|\Psi|^{-b/2}|\Si|^{-a/2}\exp[-2^{-1}\tr\{\Psi^{-1}(Z-M)\Si^{-1}(Z-M)^\top\}].$$ For more details on the matrix-variate normal distribution, see Muirhead (1982) and Gupta and Nagar (1999). It is assumed in this paper that the covariance matrix of the matrix-variate normal distribution is known. Then the prediction problem is more precisely formulated as follows: Let $X|\Th\sim\Nc_{r\times q}(\Th, v_xI_r\otimes I_q)$ and $Y|\Th\sim\Nc_{r\times q}(\Th, v_yI_r\otimes I_q)$, where $\Th$ is a common $r\times q$ matrix of unknown parameters, $v_x$ and $v_y$ are known positive values and $I_r$ stands for the identity matrix of order $r$. Assume that $q\geq r$ and that $X$ and $Y$ are independent. Let $p(X\mid \Th)$ and $p(Y\mid \Th)$ be the densities of $X$ and $Y$, respectively. Consider here the problem of estimating $p(Y\mid \Th)$ based only on the observed $X$.
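A quick numerical aside that will be useful below: integrating the product of the two sampling densities over a flat prior on $\Th$ produces a Gaussian in $Y$ centred at $X$ with variance $v_x+v_y$. A scalar ($r=q=1$) check of this classical identity (illustrative only):

```python
import numpy as np

def npdf(z, mean, var):
    """Univariate normal density N(z; mean, var)."""
    return np.exp(-(z - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

vx, vy, x, y = 1.0, 2.0, 0.3, -0.7
th = np.linspace(-30.0, 30.0, 200_001)

# integral over theta of N(x; theta, vx) * N(y; theta, vy), flat prior on theta
integrand = npdf(x, th, vx) * npdf(y, th, vy)
print(np.sum(integrand) * (th[1] - th[0]))  # Riemann sum of the integral
print(npdf(y, x, vx + vy))                  # closed form: N(y; x, vx + vy)
```

Both lines print the same number, which is the scalar version of the uniform-prior predictive density introduced next.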
Denote by $\ph=\ph(Y\mid X)$ an estimated density for $p(Y\mid \Th)$ and hereinafter $\ph$ is referred to as a predictive density of $Y$. Define the Kullback-Leibler (KL) loss as $$\begin{aligned} \label{eqn:loss} L_{KL}(\Th,\ph) &=\Er^{Y|\Th}\bigg[\log {p(Y\mid\Th)\over\ph(Y\mid X)}\bigg] \non\\ &=\int_{\Re^{r\times q}} p(Y\mid \Th)\log {p(Y\mid\Th)\over\ph(Y\mid X)}\dd Y.\end{aligned}$$ The performance of a predictive density $\ph$ is evaluated by the risk function with respect to the KL loss (\[eqn:loss\]), $$\begin{aligned} R_{KL}(\Th,\ph)&=\Er^{X|\Th}[L_{KL}(\Th,\ph)]\\ &=\int_{\Re^{r\times q}}\int_{\Re^{r\times q}}p(X\mid\Th)p(Y\mid\Th)\log {p(Y\mid\Th)\over\ph(Y\mid X)}\dd Y\dd X.\end{aligned}$$ Let $\pi(\Th)$ be a proper/improper density of prior distribution for $\Th$, where we assume that the marginal density of $X$, $$m_\pi(X;v_x)=\int_{\Re^{r\times q}} p(X\mid \Th)\pi(\Th)\dd\Th,$$ is finite for all $X\in\Re^{r\times q}$. Denote the Frobenius norm of a matrix $A$ by $\Vert A\Vert=\sqrt{\tr AA^\top}$. Let $$p_\pi(X,Y)=\int_{\Re^{r\times q}} p(X\mid \Th) p(Y\mid \Th)\pi(\Th)\dd\Th.$$ Note that $p_\pi(X,Y)$ is finite if $m_\pi(X;v_x)$ is finite. Here $p_\pi(X,Y)$ can be rewritten as $$\begin{aligned} p_\pi(X,Y)&={1\over (2\pi v_s)^{qr/2}}e^{-\Vert Y-X\Vert^2/2v_s} \times \int_{\Re^{r\times q}} {1\over (2\pi v_w)^{qr/2}}e^{-\Vert W-\Th\Vert^2/2v_w}\pi(\Th)\dd\Th \\ &\equiv \ph_U(Y\mid X) \times m_\pi(W;v_w),\end{aligned}$$ where $v_s=v_x+v_y$ and $$W=v_w(X/v_x+Y/v_y)\mid \Th \sim\Nc_{r\times q}(\Th, v_wI_r\otimes I_q)$$ with $v_w=(1/v_x+1/v_y)^{-1}$. From Aitchison (1975), a Bayesian predictive density relative to the KL loss (\[eqn:loss\]) is given by $$\label{eqn:BPD} \ph_\pi(Y\mid X)={p_\pi(X,Y) \over m_\pi(X; v_x)} ={m_\pi(W;v_w)\over m_\pi(X;v_x)}\,\ph_U(Y\mid X).$$ See George et al. (2006, Lemma 2) for the multivariate (vector-valued) normal case. It is noted that $\ph_U(Y\mid X)$ is the Bayesian predictive density with respect to the uniform prior $\pi_U(\Th)=1$. Under the predictive density estimation problem relative to the KL loss (\[eqn:loss\]), $\ph_U(Y\mid X)$ is the best invariant predictive density with respect to a location group. Using the same arguments as in George et al. (2006, Corollary 1) gives that, for any $r$ and $q$, $\ph_U(Y\mid X)$ is minimax relative to the KL loss (\[eqn:loss\]) and has a constant risk. Recently, Matsuda and Komaki (2015) constructed an improved Bayesian predictive density on $\ph_U(Y\mid X)$ by using a prior density of the form $$\label{eqn:pr_em} \pi_{EM}(\Th)=|\Th\Th^\top|^{-\al^{EM}/2},\quad \al^{EM}=q-r-1.$$ The prior (\[eqn:pr\_em\]) is interpreted as an extension of Stein’s (1973, 1981) harmonic prior $$\label{eqn:pr_js} \pi_{JS}(\Th)=\Vert\Th\Vert^{-\be^{JS}}=\{\tr(\Th\Th^\top)\}^{-\be^{JS}/2},\quad \be^{JS}=qr-2.$$ In the context of Bayesian estimation for mean matrix, (\[eqn:pr\_em\]) yields a matricial shrinkage estimator, while (\[eqn:pr\_js\]) does a scalar shrinkage one. Note that, when $X\sim\Nc_{r\times q}(\Th, v
{ "pile_set_name": "ArXiv" }
--- abstract: | We present sensitive (T$_R^*\ \approx\ $0.1K), large-scale (47$^{\prime}$ $\times$ 7$^{\prime}$–corresponding to 4 pc $\times$ 0.6 pc at the source) maps of the CO J=1$\rightarrow$0 emission of the L1448 dark cloud at 55$^{\prime\prime}$ resolution. These maps were acquired using the On-The-Fly (OTF) capability of the NRAO 12-meter telescope atop Kitt Peak in Arizona. CO outflow activity is seen in L1448 on parsec-scales for the first time. Careful comparison of the spatial and velocity distribution of our high-velocity CO maps with previously published optical and near-infrared images and spectra has led to the identification of six distinct CO outflows. Three of these are powered by the Class 0 protostars, L1448C, L1448N(A), and L1448N(B). L1448 IRS 2 is the source of two more outflows, one of which is newly identified from our maps. The sixth newly discovered outflow is powered by an as yet unidentified source outside of our map boundaries. We show the direct link between the heretofore unknown, giant, highly-collimated, protostellar molecular outflows and their previously discovered, distant optical manifestations. The outflows traced by our CO mapping generally reach the projected cloud boundaries. Integrated intensity maps over narrow velocity intervals indicate there is significant overlap of blue- and redshifted gas, suggesting the outflows are highly inclined with respect to the line-of-sight, although the individual outflow position angles are significantly different. The velocity channel maps also show that the outflows dominate the CO line cores as well as the high-velocity wings. The magnitude of the combined flow momenta, as well as the combined kinetic energy of the flows, are sufficient to disperse the 50 M$_{\odot}$ NH$_3$ cores in which the protostars are currently forming, although some question remains as to the exact processes involved in redirecting the directionality of the outflow momenta to effect the complete dispersal of the parent cloud. author: - 'Grace A. Wolf-Chase, Mary Barsony, and JoAnn O’Linger' title: Giant Molecular Outflows Powered by Protostars in L1448 --- Introduction ============ It has long been an open question whether young stars could be the agents of dispersal of their parent molecular clouds through the combined effects of their outflows [@nor79; @ber96]. The answer to this question depends on whether the outflows have the requisite kinetic energy to overcome the gravitational binding energy of the cloud, as well as the efficiency with which outflows can transfer momentum, in both magnitude and direction, to the surrounding cloud. For considerations of molecular cloud dispersal, addressing the question of the adequacy of outflow momenta has historically lagged behind determinations of outflow energetics. This is because evaluation of the available energy sources needed to account for the observed spectral linewidths in a cloud is adequate for quantitative estimates of outflow energies. However, in order to address whether the requisite momentum for cloud dispersal exists in a given case requires well-sampled, sensitive, large-scale mapping of sufficiently large areas to encompass entire molecular clouds. Such observing capability has been beyond reach until the last few years, with the implementation of “rapid” or “On-The-Fly” mapping capabilities at large-aperture millimeter telescopes. 
The fact that many outflows powered by young stellar objects actually extend well beyond their parent molecular cloud boundaries has been recognized only recently, with the advent of large-scale, narrowband optical imaging surveys that have revealed shock-excited Herbig-Haro objects at parsec-scale separations from their exciting sources [@ba96a; @ba96b; @bal97; @dev97; @eis97; @wil97; @gom97; @gom98; @rei98] and from equally large-area, sensitive, millimeter line maps that show parsec-scale molecular outflows [@den95; @lad96; @ben96; @ben98; @oli99]. The millimeter line maps of parsec-scale flows have been almost exclusively confined to instances of single, well-isolated cases, due to the tremendous confusion of multiple outflows in regions of clustered star formation, such as are found in NGC 1333 (Sandell & Knee 1998; Knee & Sandell 2000), $\rho$ Oph, Serpens [@whi95], or Circinus [@bal99]. The L1448 dark cloud, with a mass of 100 M$_{\odot}$ over its $\sim$ 1.3 pc $\times$ 0.7 pc extent as traced by C$^{18}$O emission [@ba86a], is part of the much more extensive (10 pc $\times$ 33 pc) Perseus molecular cloud complex, which contains $\approx$ 1.7 $\times$ 10$^4$ M$_{\odot}$, at a distance of 300 pc [@ba86b]. The two dense ammonia cores within L1448 contain 50 M$_{\odot}$ distributed over a 1 pc $\times$ 0.5 pc area [@ba86a; @ang89]. The core at V$_{LSR}$ = 4.2 km s$^{-1}$ contains the Class 0 protostar L1448 IRS 2, while the other core, at V$_{LSR}$ = 4.7 km s$^{-1}$, harbors four Class 0 protostars: L1448C, L1448N(A), L1448N(B), and L1448NW [@bar98; @oli99; @eis00]. The Class I source, L1448 IRS 1, lies close to the western boundary of the cloud, just outside the lowest NH$_3$ contours in the maps of [@ba86b]. High-velocity molecular gas in L1448 was discovered a decade ago via CO J$=$2$\rightarrow$1 and CO J$=$1$\rightarrow$0 mapping of a $\sim$ 2$^{\prime}$ $\times$ 6$^{\prime}$ area centered on L1448C, acquired with 12$^{\prime\prime}$ and 20$^{\prime\prime}$ angular resolutions, respectively [@bac90]. Due to its brightness, high-velocity extent ($\pm$ 70 km s$^{-1}$), and symmetrically spaced CO bullets, the L1448C molecular outflow has been the object of much study, unlike the flows from its neighbors, the 7$^{\prime\prime}$ (in projected separation) protobinary, L1448N(A) & (B), just 1.2$^{\prime}$ to the north, or L1448 IRS 2, 3.7$^{\prime}$ to the northwest (e.g., [@cur90; @gui92; @bal93; @bac94; @dav94; @bac95; @dut97]). Although outflow activity in the vicinity of the protobinary had been reported previously, the H$_2$ and CO flows, driven by L1448N(A) and L1448N(B), respectively [@bac90; @dav95], were not recognized as distinct until recently [@bar98]. Identification of these flows was aided by noting the position angle of the low-excitation H$_2$ flow, centered on L1448N(A), to be distinct from the position angle of the CO flow from L1448N(B), defined by the direction of the line joining L1448N(B) with the newly discovered Herbig-Haro object, HH 196 [@bal97]. Recent, wide-angle ($\sim$ 70$^{\prime}$ field-of-view), narrowband optical imaging of the entire extent of the L1448 cloud has resulted in the discovery of several systems of Herbig-Haro objects, some displaced several parsecs from any exciting source [@bal97].
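As a quick consistency check on the scales quoted here, angular sizes convert to physical sizes at the 300 pc distance of Perseus through the small-angle approximation (a back-of-the-envelope sketch):

```python
import math

ARCSEC_PER_RAD = 180.0 / math.pi * 3600.0  # ~206265
d_pc = 300.0  # adopted distance to the Perseus complex

def arcmin_to_pc(theta_arcmin, d=d_pc):
    """Projected size [pc] subtended by an angle [arcmin] at distance d [pc]."""
    return theta_arcmin * 60.0 / ARCSEC_PER_RAD * d

print(arcmin_to_pc(47.0), arcmin_to_pc(7.0))  # ~4.1 pc x ~0.61 pc map
print(arcmin_to_pc(55.0 / 60.0))              # 55'' beam ~ 0.08 pc at the source
```

This reproduces the 4 pc $\times$ 0.6 pc map footprint quoted in the abstract.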
In order to investigate the link between high-velocity molecular gas and the newly discovered Herbig-Haro objects, as well as to study the possibility of cloud dispersal via outflows, we acquired new, sensitive, large-scale CO J$=$1$\rightarrow$0 maps of a substantial portion of the L1448 cloud. These new molecular line maps were acquired with the On-The-Fly (OTF) mapping technique as implemented at NRAO’s 12-meter millimeter telescope atop Kitt Peak, Arizona.

Observations and Data Reduction
===============================

The CO J$=$1$\rightarrow$0 maps of L1448 presented in this paper were acquired using the spectral-line On-The-Fly (OTF) mapping mode of the NRAO’s[^1] 12-meter telescope on 23 June 1997, UT 13$^h$53$^m$ $-$ UT 19$^h$25$^m$. We stress that the OTF technique allows the acquisition of large-area, high-sensitivity, spectral line maps with unprecedented speed and pointing accuracy. For comparison, it would have taken eight times the amount of telescope time, or nearly a week in practice, to acquire this same map using conventional, point-by-point mapping. Although OTF mapping is not a new concept, given the rigor of the position encoding that allows precise and accurate gridding of the data, the fast data recording rates that allow rapid scanning without beam smearing, and the analysis tools that are available, the 12-meter implementation is the most ambitious effort at OTF imaging yet. To produce our CO maps of L1448, we observed a 47$^{\prime}$ $\times$ 7$^{\prime}$ field along a position angle P.A. $=$ 135$^{\circ}$ (measured East from North), centered on the coordinates of L1448
{ "pile_set_name": "ArXiv" }
--- abstract: 'This paper leverages heterogeneous auxiliary information to address the data sparsity problem of recommender systems. We propose a model that learns a shared feature space from heterogeneous data, such as item descriptions, product tags and online purchase history, to obtain better predictions. Our model consists of autoencoders, not only for numerical and categorical data, but also for sequential data, which enables capturing user tastes, item characteristics and the recent dynamics of user preference. We learn the autoencoder architecture for each data source independently in order to better model their statistical properties. Our evaluation on two *MovieLens* datasets and an e-commerce dataset shows that mean average precision and recall improve over state-of-the-art methods.'
author:
-
title: |
    Deep Heterogeneous Autoencoders\
    for Collaborative Filtering\
---

Deep Autoencoder, Heterogeneous Data, Shared Representation, Sequential Data Modeling, Collaborative Filtering

Introduction
============

Although Collaborative Filtering (CF) techniques achieve good performance in many recommender systems [@Hu2008], their performance degrades significantly when historical data is sparse. In order to alleviate this problem, features that reflect user preference have been extracted from auxiliary data sources [@Oord2013; @Porteous2010], as shown in Fig. \[fig:auxiliary\_usage\]. How to represent data from different sources is still a research problem, and it has been shown that the representation itself substantially impacts performance [@Loyola2017; @Goodfellow2016]. Recently, representation learning that automatically discovers hidden factors from raw data has become a popular approach to remedy the data sparsity issue of recommender systems [@wangweiran2015; @Zheng2017]. Many online shopping platforms gather not only user profiles and item descriptions, but various other types of data, such as product reviews, tags and images. Recent research has added textual and visual information to recommender systems [@Fuzheng2016; @Oramas2017]. However, in many cases sequential data, such as user purchase and browsing histories, which carry information about trends in user tastes, have largely been neglected in CF-based recommender systems.

![**Auxiliary information usage in recommender systems.** *Item descriptions and user profiles are typically used for feature extraction to alleviate the data sparsity problem. Our proposal also leverages sequential data, such as purchase histories, to reflect user preferences.*[]{data-label="fig:auxiliary_usage"}](heterogeneous2recommender.png){width="\columnwidth"}

In this paper we propose Deep Heterogeneous Autoencoders (DHA) for Collaborative Filtering to combine information from multiple domains. We use Stacked Denoising Autoencoders (SDAE) to extract latent features from non-sequential data, and Recurrent Neural Network Encoder-Decoders (RNNED) to extract features from sequential data. The model is able to capture both user preferences and potential shifts of interest over time. Each data source is modeled using an independent encoder-decoder mechanism. Different encoders can have different numbers of hidden layers and an arbitrary number of hidden units in order to deal with the intrinsic differences of data sources. For instance, user demographic data and item content are typically categorical, while user comments or item tags are textual.
After pre-processing, such as one-hot encoding, bag-of-words and word2vec computation, representation vectors are on different levels of abstraction. Owing to its flexible structure, our model is able to learn suitable latent feature vectors for each component. These local representations from each data source are joined to form a shared feature space, which couples the joint learning of the representation from heterogeneous data and the collaborative filtering of user-item relationships. The contributions of this paper are summarized as follows:

1. A method for modeling both static and sequential data in a consistent way for recommender systems in order to capture the trend in user tastes, and

2. Adaptation of the autoencoder architecture to accurately model each data source by considering their distinct abstraction levels.

We show improvements in terms of mean average precision and recall on three different datasets.

Related work
============

Incorporating side information into recommender systems
--------------------------------------------------------

In order to improve recommendation performance, research has been focusing on using side information, such as user profiles and reviews [@Fuzheng2016; @Porteous2010]. In particular, deep learning models have been widely studied [@He2017; @Wu2017]. AutoRec first proposed the use of autoencoders for recommender systems [@Sedhain2015]. In more recent work, representations are learned via stacked autoencoders (SAE), and fed into conventional CF models, either loosely or tightly coupled [@szhang2017; @hwang2015]. Deep models that integrate autoencoders into collaborative filtering have shown state-of-the-art performance.

Recurrent Neural Network Encoder-Decoder
----------------------------------------

Recurrent neural networks (RNNs) process sequential data one element at each step to capture temporal dynamics. The encoder-decoder mechanism was initially applied to RNNs for machine translation [@Cho2014]. Recently, RNN encoder-decoders (RNNED) have been used to learn features from a series of actions and have successfully been applied in other areas. It was shown that Long Short-Term Memory (LSTM) networks have the ability to learn on data with long-range temporal dependencies, and we adopt LSTMs for modeling sequential data.

![**Deep Heterogeneous Autoencoders and the integration with collaborative filtering.** *The proposed model extracts a shared feature space from multiple sources of auxiliary information. It models non-sequential and sequential data to capture user preferences, item properties as well as temporal dynamics. It adopts independent encoder-decoder architectures for different data sources in order to better model their statistical properties. The product of $U \in \mathbb{R}^{m \times d}$ and $V \in \mathbb{R}^{n \times d}$ approximates the user-item interaction matrix.*[]{data-label="fig:DHA"}](DHA_0606.png){width="45.00000%"}

Deep Heterogeneous Autoencoders for Collaborative Filtering
===========================================================

Overview
--------

We propose a model that learns a joint representation from heterogeneous auxiliary information to mitigate the data sparsity problem of recommender systems. SDAEs are applied to numerical and categorical data for modeling the static tastes of users for items. We use RNNEDs to extract features from sequential data to reveal interest shifts over time.
The model adopts an independent autoencoder architecture for each data source, since the inputs are generally on different levels of abstraction; see Fig. \[fig:DHA\] for an overview. In order to discover the distinct statistical properties of every data source, our model takes the existing disparity of input abstraction levels into consideration, and applies autoencoders to each source independently, allowing distinct numbers of hidden layers and arbitrary numbers of hidden units at every layer.

Deep Heterogeneous Autoencoders
-------------------------------

We define each source of auxiliary data as a component indexed by $c \in \{1,...,C\}$. $S_c$ denotes the input of component $c$. We pre-process non-sequential data, like textual item descriptions, by generating fixed-length embedding vectors. For sequential data, an embedding vector is learned for every time step after tokenization. We separately describe the encoding-decoding outputs of the above two types of embedding vectors. As shown in Fig. \[fig:DHA\], SDAE is applied to fixed-length embedding vectors. Each component encoder takes the input $S_c$, generates a corrupted version of it, $\hat{S_c}$, and the first layer maps it to a hidden representation $h_c$, which captures the main factors of variation in the input data distribution [@vincent2008; @vincent2010]. More importantly, the numbers of hidden layers can differ across components. The architecture is unique for each data source, where the number of layers of component $c$ is denoted as $L_c$. The representation at every layer is $S_{c,l}$. For the encoder of each component, given $l_c \in \{1, ..., L_c/2 \}$ and $c \in \{1,...,C\}$, the hidden representation $h_{c,l}$ is derived as: $$h_{c, l} = f \left(W_{c,l}h_{c,l-1} + b_{c,l} \right).$$ The decoder reconstructs the data at layer $L$ as follows: $$\bar{S}_c = g\left( W'_c h_{c,L} + b'_c \right).$$ The proposed model leverages sequential data by using two LSTMs for encoding and decoding each sequential data source. Specifically, the encoder reads a sequence with $T$ time steps. At the last time step, the hidden state $h_T$ is mapped to a context vector $c$, as a summary of the whole input sequence [@Cho2014]. The decoder generates the output sequence by predicting the next action $y_t$ given $h_t$. Both $y_t$ and $h_t$ are also conditioned on $y_{t-1}$ and the context vector $c$. To combine them, as shown in Fig. \[fig:DHA\], the first part of our model encodes all components to generate the hidden representations $S_{c, L_c/2}$ of non-sequential data and $h_T$ of sequential data across all sources. These are merged to generate a joint latent representation.
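A minimal sketch of this combination step, written in plain NumPy with toy dimensions (all sizes, and the random stand-in for the LSTM context vector $h_T$, are assumptions for illustration; this is not the authors’ implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

def make_encoder(sizes):
    """One component encoder; each data source may have its own depth/widths."""
    return [(rng.normal(0, 0.1, (m, n)), np.zeros(m)) for n, m in zip(sizes, sizes[1:])]

def encode(layers, x):
    for W, b in layers:
        x = relu(W @ x + b)  # h_{c,l} = f(W_{c,l} h_{c,l-1} + b_{c,l})
    return x

enc_text = make_encoder([300, 128, 32])  # e.g. item-description embeddings
enc_user = make_encoder([40, 16])        # e.g. categorical user profile
h_T = rng.normal(size=24)                # stand-in for the LSTM context vector

parts = [encode(enc_text, rng.normal(size=300)),
         encode(enc_user, rng.normal(size=40)),
         h_T]
shared = np.concatenate(parts)           # the joint latent representation
print(shared.shape)                      # (32 + 16 + 24,) = (72,)
```

The point of the sketch is only the shape bookkeeping: components with different depths produce local codes of different widths, and the shared space is their concatenation.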
{ "pile_set_name": "ArXiv" }
--- abstract: | Monolayers and multilayers of semiconducting transition metal dichalcogenides (TMDCs) offer an ideal platform to explore valley-selective physics with promising applications in valleytronics and information processing. Here we manipulate the energetic degeneracy of the $\mathrm{K}^+$ and $\mathrm{K}^-$ valleys in few-layer TMDCs. We perform high-field magneto-reflectance spectroscopy on WSe$_2$, MoSe$_2$, and MoTe$_2$ crystals of thickness from monolayer to the bulk limit under magnetic fields up to 30 T applied perpendicular to the sample plane. Because of strong spin-layer locking, the ground state A excitons exhibit a monolayer-like valley Zeeman splitting with a negative $g$-factor, whose magnitude increases monotonically when thinning the crystal down from bulk to a monolayer. Using $\mathbf{k\cdot p}$ calculations, we demonstrate that the observed evolution of $g$-factors for the different materials is well accounted for by hybridization of electronic states in the $\mathrm{K}^+$ and $\mathrm{K}^-$ valleys. The mixing of the valence and conduction band states induced by the interlayer interaction decreases the $g$-factor magnitude with increasing layer number. The effect is largest for MoTe$_2$, followed by MoSe$_2$, and smallest for WSe$_2$.

Keywords: MoSe$_2$, WSe$_2$, MoTe$_2$, valley Zeeman splitting, transition metal dichalcogenides, excitons, magneto optics.

author:
- Ashish Arora
- Maciej Koperski
- Artur Slobodeniuk
- Karol Nogajewski
- Robert Schmidt
- Robert Schneider
- 'Maciej R. Molas'
- Steffen Michaelis de Vasconcellos
- Rudolf Bratschitsch
- Marek Potemski
title: |
    Zeeman spectroscopy of excitons and hybridization of electronic states\
    in few-layer WSe$_2$, MoSe$_2$ and MoTe$_2$
---

Hybridization of electronic states in van der Waals-coupled layers of semiconducting transition metal dichalcogenides (TMDCs) significantly affects their energy bands and optical properties. Most striking is a dramatic change in the quasiparticle band gap character, from a direct band gap at the $\mathrm{K}$-point of the Brillouin zone in monolayers to an indirect $\Gamma-\Lambda$ band gap in multilayers and bulk crystals [@1; @2]. In contrast, the energy of the optical band gap, which is due to $\mathrm{K}$-point excitons in any mono-, multi- and bulk crystals, depends rather weakly on the number of layers in TMDC stacks [@1]. This effect is due to both the hybridization of electronic states at the $\mathrm{K}$-points [@3] and the change in the dielectric environment with the number of layers [@4]. While the hybridization of electronic states leads to (often unresolved) multiplets of intralayer (electron and hole within the same layer) and spatially separated interlayer excitons (electron and hole confined to different layers), the dielectric environment largely determines the excitonic binding energy and the optical band gap. The hybridization of electronic states in TMDC multilayers is also encoded in the magnitudes of the effective Landé $g$-factors of the coupled states. However, in contrast to the energetic positions of electronic resonances, $g$-factors are less sensitive to the effects of Coulomb interaction (dielectric environment) [@5]. In TMDC monolayers, the band structure at the $\mathrm{K}$-point consists of energetically degenerate states at the $\mathrm{K}^+$ and $\mathrm{K}^-$ valleys. However, the two valleys possess opposite magnetic moments, and can be individually addressed using $\sigma^+$ and $\sigma^-$-polarized light [@1].
An externally applied magnetic field in the Faraday geometry lifts the valley degeneracy, resulting in a so-called valley Zeeman splitting [@1]. Therefore, the $g$-factors of the excitons can be measured using helicity-resolved spectroscopy under magnetic fields [@6; @7; @8; @9; @10; @11; @12; @13; @14; @15; @16; @17; @18; @19]. In multilayer and bulk TMDCs, it has been found that the spin orientation of the carriers is strongly coupled to the valleys within the individual layers (“spin-layer locking”) [@17; @19; @20; @21]. Therefore, many salient features of monolayer physics are preserved in multilayers. As a consequence, intralayer excitons form with their characteristic negative $g$-factors [@17; @21]. Moreover, spin-layer locking effects have recently enabled the unambiguous identification of interlayer excitons in bulk TMDCs with positive $g$-factors [@17; @19]. However, a systematic investigation of the effect of layer number and the hybridization of electronic states on the valley Zeeman effect has not been reported so far. Here, we perform circular polarization-resolved micro-reflectance contrast ($\mu$RC) spectroscopy on 2H-WSe$_2$, 2H-MoSe$_2$ and 2H-MoTe$_2$ crystals of variable thickness (from monolayer to bulk) under high magnetic fields of up to $B=30$ T and at a temperature of $T=4$ K. We measure the layer thickness-dependent valley Zeeman splittings of the ground state A excitons ($X_A^{1s}$) and compare the observed trends with the $\mathbf{k\cdot p}$ theory. The model takes into account the interlayer admixture of valence and conduction bands and corrections from the higher and lower bands from adjacent layers at the $\mathrm{K}$-point of the Brillouin zone. We find that the hybridization of the electronic states at the band extrema has profound effects on the $g$-factors of the excitons. Overall, the exciton $g$-factor decreases with an increasing layer thickness where the extent of this reduction depends upon the magnitude of interlayer interaction in the TMDCs. Experiment ========== Monolayer and few-layer flakes of TMDCs are mechanically exfoliated [@22] onto SiO$_2$(80nm)/Si substrates. The layer number in the MoSe$_2$ and WSe$_2$ crystals is determined by the optical contrast, Raman spectroscopy and the low-temperature (liquid helium) micro-photoluminescence [@23; @24; @25]. For MoTe$_2$, the thickness characterization was performed using ultra-low frequency Raman spectroscopy [@26; @27; @28], in addition to the reflectance contrast and atomic force microscopy (AFM) measurements (see Fig. \[MOKEvsB\] in Appendix \[Sec:experiment\]). Magneto-reflectance measurements are performed using a fiber-based low-temperature probe inserted inside a resistive magnet with 50 mm bore diameter, where magnetic fields up to 30 T are generated in the center of the magnet. Light from a tungsten halogen lamp is routed inside the cryostat using an optical fiber of 50 $\mu$m diameter and focused on the sample to a spot of about 10 $\mu$m diameter with an aspheric lens of focal length 3.1 mm (numerical aperture NA=0.68). The sample is displaced by $x-y-z$ nano-positioners. The reflected light from the sample is circularly polarized using the combination of a quarter wave plate (QWP) and a polarizer. The emitted polarized light is collected using an optical fiber of 200 $\mu $m diameter, dispersed with a monochromator and detected using a liquid nitrogen cooled Si CCD (WSe$_2$ and MoSe$_2$) or InGaAs array (MoTe$_2$). 
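For a sense of the energy scales probed at these fields: the valley splitting implied by a given exciton $g$-factor is $\Delta E = g\mu_{\rm B}B$. Taking $g\approx-4$, the textbook monolayer value, purely as an illustrative input (the measured layer-dependent values are the subject of this paper):

```python
MU_B = 5.788e-5  # Bohr magneton [eV/T]

def valley_zeeman_meV(g, B):
    """Valley Zeeman splitting E(K+) - E(K-) = g * mu_B * B, in meV."""
    return g * MU_B * B * 1e3

for B in (10.0, 20.0, 30.0):
    print(B, valley_zeeman_meV(-4.0, B))  # about -2.3, -4.6, -6.9 meV
```

Splittings of only a few meV even at 30 T explain why high fields and narrow exciton lines are needed to extract reliable $g$-factors.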
During the measurements, the configuration of the QWP-polarizer assembly is kept fixed, producing one state of circular polarization, whereas the effect corresponding to the other polarization state can be measured by reversing the direction of the magnetic field, as a result of time-reversal symmetry [@11; @29]. We define the reflectance contrast $C(\lambda)$ at a given wavelength $\lambda$ as $C(\lambda)=[R(\lambda)-R_0 (\lambda)]/[R(\lambda)+R_0(\lambda)]$, where $R_0(\lambda)$ is the reflectance spectrum of the SiO$_2$/Si substrate and $R(\lambda)$ is that of the TMDC flake on the substrate. $C(\lambda)$ spectral line shapes are modeled using a transfer matrix method-based approach to obtain the transition energies [@30]. The excitonic contribution to the dielectric response function is assumed to follow a Lorentz oscillator-like model [@5; @31] $$\epsilon(E)=(n_b+ik_b)^2+\sum_{j}\frac{A_j}{E_{0j}^2-E^2-i\gamma_jE},$$ where $n_b+ik_b$ is the background complex refractive index of the TMDC being investigated, which excludes excitonic effects, and is kept equal to that of the bulk material (WSe$_2$ [@32], MoSe$_2$ [@33], or MoTe$_2$ [@33] in the respective cases). $E_{0j}$, $A_j$ and $\gamma_j$ are the transition energy, the oscillator strength parameter, and the full width at half maximum (FWHM) linewidth parameter of the $j$-th resonance, and the index $j$ runs over the excitons. ![(a)-(d) Helicity-resolved microreflectance
{ "pile_set_name": "ArXiv" }
--- abstract: 'In order to study the spin density wave transition temperature ($T_{\rm SDW}$) in $\mathrm{(TMTSF)_2PF_6}$ as a function of magnetic field, we measured the magnetoresistance $R_{zz}$ in fields up to 19 T. Measurements were performed for three field orientations $\mathbf{B}\|\mathbf{a}, \mathbf{b''}$ and $\mathbf{c^*}$ at ambient pressure and at $P= 5$ kbar, which is nearly the critical pressure. For the $\mathbf{B\|c^*}$ orientation we observed a quadratic field dependence of $T_{\rm SDW}$ in agreement with theory and with previous experiments. For the $\mathbf{B\|b''}$ and $\mathbf{B\|a}$ orientations we have found no shift in $T_{\rm SDW}$ within 0.05 K, both at $P=0$ and $P=5$ kbar. This result is also consistent with theoretical predictions.' author: - 'Ya.A. Gerasimenko' - 'V.A. Prudkoglyad' - 'A.V. Kornilov' - 'V.M. Pudalov' - 'V.N. Zverev' - 'A.-K. Klehe' - 'J.S. Qualls' title: 'Anisotropy of the Spin Density Wave Onset for (TMTSF)$_2$PF$_6$ in Magnetic Field' --- Introduction ============ $\mathrm{(TMTSF)_2PF_6}$ is a layered organic compound that demonstrates a complex phase diagram, containing phases characteristic of one-, two- and three-dimensional systems. Transport properties of this material are highly anisotropic (typical ratio of the conductivity tensor components is $\sigma_{xx}:\sigma_{yy}:\sigma_{zz}\sim 10^5:10^3:1$ at $T=100$K [@Review:Lebed_Yamaji; @notations]). At ambient pressure and zero magnetic field the carrier system undergoes a transition to the antiferromagnetically ordered spin density wave (SDW) state [@Review:Lebed_Yamaji] with a transition temperature $T_{\rm SDW}\approx12$K. When an external hydrostatic pressure is applied, $T_{\rm SDW}$ gradually decreases and vanishes at the critical pressure of $\sim6\,$kbar[@critical-pressure]. For higher pressures, $P>6$kbar, the SDW state is completely suppressed. Application of a sufficiently high magnetic field along the least conducting direction $\mathbf{c^*}$ restores the spin ordering. This occurs via a cascade of the field-induced SDW states (FISDW) [@FISDW]. The conventional model for the electronic spectrum is [@Gorkov_Lebed; @Review:Lebed_Yamaji]: $$\mathcal{E}_0(\mathbf{k})=\pm \hbar v_F(k_x \mp k_F)-2t_b \cos(k_y b')-2t_b' \cos(2k_y b')- 2t_c \cos (k_z c^*), \label{eqn:dlaw}$$ where $t_b,\ t_c$ are the nearest neighbor transfer integrals along the $\mathbf{b^\prime}$ and $\mathbf{c^*}$ directions respectively, and $t_b'$ is the transfer integral involving next-to-nearest (second order) neighbors. In the ideal one-dimensional case, $t_b=t_c=t_b'=0$, and the Fermi surface consists of two parallel flat sheets. This surface satisfies the so-called ideal nesting condition: there exists a vector $\mathbf{Q}_0$ which couples all states across the Fermi surface. In the quasi-one-dimensional case, when $t_b$ and $t_c$ are non-zero, the Fermi sheets become slightly corrugated. Nevertheless, one can still find a vector that couples all states across the Fermi surface, therefore the ideal nesting property also holds in this case. It means that the magnetic susceptibility $\chi(\mathbf{q})$ of the system diverges at $\mathbf{q}=\mathbf{Q}_0$ and the system is unstable against the formation of an SDW [@Review:Lebed_Yamaji]. When $t_b'$ is non-zero, the situation changes drastically: no vector can couple all states on both sides of the Fermi surface, though $\mathbf{Q}_0$ still couples a large number of states. The situation, called “imperfect nesting”, is sketched in Fig. \[surface\]a. 
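To make the imperfect-nesting statement concrete: for the dispersion of Eq. (\[eqn:dlaw\]), translating one Fermi sheet by $\mathbf{Q}_0=(2k_F,\pi/b',\pi/c^*)$ maps it onto the other up to an energy mismatch of exactly $-4t_b'\cos(2k_yb')$, so the deviation from ideal nesting is controlled by $t_b'$ alone. A minimal numerical check of this identity (parameter values are illustrative, not fitted to $\mathrm{(TMTSF)_2PF_6}$):

```python
# Verify that the nesting mismatch of the dispersion in Eq. (eqn:dlaw)
# is bounded by 4*t_b': eps_+(k + Q0) + eps_-(k) = -4*t_b'*cos(2*k_y*b').
import numpy as np

hbar_vF, kF, b, c = 1.0, 1.0, 1.0, 1.0   # dimensionless toy units
tb, tc, tbp = 0.1, 0.005, 0.01           # t_b, t_c, t_b' (t_b' magnified)

def eps(kx, ky, kz, s):
    """s = +1 / -1 selects the right / left Fermi sheet."""
    return (s * hbar_vF * (kx - s * kF) - 2 * tb * np.cos(ky * b)
            - 2 * tbp * np.cos(2 * ky * b) - 2 * tc * np.cos(kz * c))

ky = np.linspace(-np.pi / b, np.pi / b, 201)
kz = np.linspace(-np.pi / c, np.pi / c, 201)
KY, KZ = np.meshgrid(ky, kz)

Q = (2 * kF, np.pi / b, np.pi / c)       # nesting vector Q0
mismatch = eps(-kF + Q[0], KY + Q[1], KZ + Q[2], +1) + eps(-kF, KY, KZ, -1)
print(np.abs(mismatch).max(), "== 4*t_b' =", 4 * tbp)  # vanishes when t_b' = 0
```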
![(a) Schematic view of the Fermi surface in the imperfect nesting model. Dashed and solid lines show the FS with and without the $t_b^\prime$ term in Eq. (\[eqn:dlaw\]), respectively (the $t_b^\prime$ value is magnified for clarity). $\mathbf{Q}$ denotes the nesting vector. (b) Schematic 3D view of the Fermi surface. Dashed and solid lines are the orbits of an electron when the magnetic field is $\mathbf{B\|c^*}$ and $\mathbf{B\|b^\prime}$, respectively.[]{data-label="surface"}](fig1n.eps "fig:"){width="37.00000%"} ![(a) Schematic view of the Fermi surface in the imperfect nesting model. Dashed and solid lines show the FS with and without the $t_b^\prime$ term in Eq. (\[eqn:dlaw\]), respectively (the $t_b^\prime$ value is magnified for clarity). $\mathbf{Q}$ denotes the nesting vector. (b) Schematic 3D view of the Fermi surface. Dashed and solid lines are the orbits of an electron when the magnetic field is $\mathbf{B\|c^*}$ and $\mathbf{B\|b^\prime}$, respectively.[]{data-label="surface"}](3dfs2.eps "fig:"){width="37.00000%"} Despite the complex behavior of the system, theory[@Montambaux; @Maki] successfully describes the effects of pressure and magnetic field on the SDW transition in terms of the single parameter $t_b'$. According to the theory, $t_b'$ increases with external pressure, and the conditions for nesting deteriorate. Therefore, under pressure deviations of the system from the ideal 1D-model become more prominent and, as a consequence, $T_{\rm SDW}$ decreases. When $t_b'$ reaches a critical value $t_b^*$, the SDW transition vanishes. The application of a magnetic field normal to the $\mathbf{a}$ direction restricts electron motion in the $\mathbf{b\mathrm{-}c}$ plane, making the system effectively more one-dimensional. The theory of Refs. [@Montambaux; @Maki] predicts that the transition temperature increases in weak fields $\mathbf{B\|c^*}$ as $$\nonumber \Delta T_{\rm SDW}(B)=T_{\mathrm{SDW}}(B)-T_{\mathrm{SDW}}(0)=\alpha B^2,$$ and further saturates in high fields; here $\alpha=\alpha(P)$ is a function of pressure. A number of experiments[@critical-pressure; @Chaikin_abc; @Tsdw-Biskup; @Tsdw-highfields] were made to examine the predictions of the theory for the $\mathbf{B\|c^*}$ case. All these studies confirmed the quadratic field dependence of the transition temperature. Nevertheless, the predicted saturation has not been seen until now. Furthermore, Murata et al. [@uniaxial-1; @uniaxial-2; @uniaxial-3; @uniaxial-4] reported an unexpected anisotropy of $T_{\rm SDW}$ in $\mathrm{(TMTSF)_2PF_6}$ under uniaxial stress, a result that seems to disagree with the theory. According to theory [@Montambaux; @Maki], the only relevant parameter is $t^\prime_b$; therefore, one might expect the uniaxial stress along $\mathbf{b}^\prime$ to affect $T_{\rm SDW}$ more strongly than the stress in other directions. Murata et al.[@uniaxial-1; @uniaxial-2; @uniaxial-3; @uniaxial-4], however, showed experimentally that the uniaxial stress applied along the $\mathbf{a}$ direction changed $T_{\rm SDW}$ more strongly than the stress in the $\mathbf{b}^\prime$ direction. The results mentioned above demonstrate that the consistency between the theoretical description and experiment is incomplete. Whereas there is a substantial amount of experimental data for the magnetic field $\mathbf{B\|c^*}$, for $\mathbf{B\|a}$ and $\mathbf{B\|b'}$ only one experiment[@Chaikin_abc] has been done so far at ambient pressure, and none at elevated pressure. Danner et al.[@Chaikin_abc] observed no field dependence for $\mathbf{B\|a}$ and $\mathbf{B\|b'}$ at ambient pressure. 
The absence of a field dependence, however, cannot be considered a crucial test of the theory, because the effect of the magnetic field might be small at ambient pressure. Indeed, according to the theory, elevated pressure enhances any imperfections of nesting, and the effect of the magnetic field is expected to become stronger. As a result, the strongest effect should take place at pressures close to the critical value. The aim of the present work, therefore, is to determine experimentally the $T_{\rm SDW}(B)$ dependence for $\mathbf{B\|a}$ and
{ "pile_set_name": "ArXiv" }
--- abstract: 'We investigate the behavior of a mixture of asymmetric colloidal dumbbells and emulsion droplets by means of kinetic Monte Carlo simulations. The evaporation of the droplets and the competition between droplet-colloid attraction and colloid-colloid interactions lead to the formation of clusters built up of colloid aggregates with both closed and open structures. We find that stable packings and hence complex colloidal structures can be obtained by changing the relative size of the colloidal spheres and/or their interfacial tension with the droplets.' author: - Hai Pham Van - Andrea Fortini - Matthias Schmidt bibliography: - 'refs.bib' title: Assembly of open clusters of colloidal dumbbells via droplet evaporation --- Introduction {#intr} ============ Complex colloids characterized by heterogeneous surface properties are an active field of research due to their diverse potential applications as interface stabilizers, catalysts, and building blocks for nanostructured materials. Janus particles are colloidal spheres with different properties on the two hemispheres. They have recently attracted significant attention due to their novel morphologies [@Pawar2010]. Corresponding dumbbells consist of two colloidal spheres with different sizes or dissimilar materials [@Claudia2013]. Many studies have investigated the self-assembly of colloidal dumbbells into more complex structures, including micelles, vesicles [@Sciortino2009; @Liang2008], bilayers [@Munao2013; @Whitelam2010; @Avvisati2015] and dumbbell crystals  [@Mock2007; @Marechal2008; @Ahmet2010]. Particularly, open clusters of colloidal dumbbells with syndiotactic, chiral [@Zerrouki2008; @Bo2013] and stringlike structures [@Smallenburg2012] are significant because they can be regarded as colloidal molecules [@Blaaderen2003; @Duguet2011] that exhibit unique magnetic, optical and rheological properties [@Edwards2007]. However, the control of the cluster stability and the particular geometric structure are two major challenges, which have yet to be solved. Several self-assembly techniques have been used to control the aggregation of colloidal particles. Velev [*et al*.]{} developed a method to obtain so-called colloidosomes from colloidal particles by evaporating droplets [@Velev1996a; @Velev1996b; @Velev1997]. Based on this technique, Manoharan [*et al*.]{} [@Manoharan2003] successfully prepared micrometer-sized clusters and found that the structures of particle packings seem to minimize the second moment of the mass distribution. Wittemann [*et al*.]{} also produced clusters, but with a considerably smaller size of about 200nm [@Wittemann2008; @Wittemann2009; @Wittemann2010]. Cho [*et al*.]{} prepared binary clusters with different sizes or species from phase-inverted water-in-oil [@Cho2005] and oil-in-water emulsion droplets [@Cho2008]. These authors found that the interparticle interaction and the wettability of the constituent spheres play an important role in the surface coverage of the smaller particles. In addition, for oil-in-water emulsions the minimization of the second moment of the mass distribution ($M2$) only applies if the size ratio is less than 3. More recently, Peng [*et al*.]{} [@Bo2013] reported both experimental and simulation work on the cluster formation of dumbbell-shaped colloids. These authors proved that the minimization of the second moment of the mass distribution is not generally true for anisotropic colloidal dumbbell self-assembly. 
However, they predicted cluster structures without considering the different wettabilities for constituent colloidal spheres. In previous work, Schwarz [*et al*.]{} [@Ingmar2011] studied cluster formation via droplet evaporation using Monte Carlo (MC) simulation with shrinking droplets. It was shown that a short-ranged attraction between colloidal particles can produce $M2$ nonminimal isomers and the fraction of isomers varied for each number of constituent particles. In addition, supercluster structures were found with complex morphologies starting from a mixture of tetrahedral clusters and droplets. Fortini [@Fortini2012] modeled cluster formation in hard sphere-droplet mixtures, without shrinking droplets, and observed a transition from clusters to a percolated network that is in good agreement with experimental results. In the current paper, we extend the model of Ref. [@Ingmar2011] in order to investigate the dynamic pathways of cluster formation in a mixture of colloidal dumbbells and emulsion droplets. By varying the size or hydrophilic property of colloidal dumbbells, we find a variety of complex cluster structures that have not been observed in clusters of monodispersed colloidal spheres. In particular, we find open clusters with a compact core, which determines the overall symmetry, and protruding arms. These structures could lead to novel self-assembled structures. This paper is organized as follows. We introduce the model and simulation method in Sec. \[s:model-method\]. We analyze the cluster formation, structures and size distributions for dumbbells with asymmetric wetting properties in Sec. \[s:fluid1\]. In Sec. \[s:fluid2\] we present the results for dumbbells with asymmetric sizes. Conclusions are given in Sec. \[s:conc\]. Model and Methods {#s:model-method} ================= We simulate a ternary mixture of $N_\textrm{d}$ droplets of diameter $\sigma_{\textrm{d}}$ and $N_\textrm{c}$ colloidal dumbbells formed by two spherical colloids, labeled colloidal species 1 and colloidal species 2, of diameter $\sigma_{1}$ and $\sigma_{2}$ ($\sigma_{1}\geq \sigma_{2} $), respectively. A sketch of the model is shown in Fig. \[fig:skt\]. ![Sketch of the model of colloidal dumbbells (bright yellow and dark red spheres) and droplets (white spheres). Shown are the diameters of colloidal species 1, $\sigma_{1}$, colloidal species 2, $\sigma_{2}$, and droplet $\sigma_{d}$. (a) In the initial stages the droplet captures the colloidal dumbbells. (b) The droplet has shrunk and has pulled the dumbbells into a cluster. The competition between Yukawa repulsion and surface adsorption energies can lead to open cluster structures. []{data-label="fig:skt"}](fig1){width="9cm"} The colloids in each dumbbell are separated from each other by a distance $l$ that fluctuates in the range of $\lambda\leq l\leq \lambda+\Delta $, where $\lambda=(\sigma_{1}+\sigma_{2})/2$. 
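A minimal sketch of how this fluctuating-bond constraint can be enforced in a Monte Carlo displacement move; the step size and parameter values are illustrative, and the droplet and Yukawa interactions of the full model are omitted here:

```python
# Reject any trial displacement of one sphere of a dumbbell that would take
# the bond length l outside the allowed window [lambda, lambda + Delta].
import numpy as np

rng = np.random.default_rng(0)
sigma1, sigma2, Delta = 1.0, 0.8, 0.05
lam = 0.5 * (sigma1 + sigma2)                 # lambda = (sigma_1 + sigma_2)/2

r1 = np.zeros(3)                              # colloid 1 position
r2 = np.array([lam + 0.5 * Delta, 0.0, 0.0])  # colloid 2 position

def try_move(r_move, r_other, step=0.02):
    trial = r_move + rng.uniform(-step, step, 3)
    l = np.linalg.norm(trial - r_other)
    return trial if lam <= l <= lam + Delta else r_move   # reject otherwise

for _ in range(100):
    r2 = try_move(r2, r1)
```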
The total interaction energy is given by $$\begin{aligned} \dfrac{U}{k_\textrm{B}T} &=&\sum_{i<j}^{N_{c}} \phi_{11}\left ( \left | \mathbf{r}_{1i}-\mathbf{r}_{1j} \right | \right )+\sum_{i<j}^{N_{c}} \phi_{22}\left ( \left | \mathbf{r}_{2i}-\mathbf{r}_{2j} \right | \right )\nonumber \\ &&+\sum_{i,j}^{N_{c}} \phi_{12}\left ( \left | \mathbf{r}_{1i}-\mathbf{r}_{2j} \right | \right )+\sum_{i}^{N_c} \sum_{j}^{N_{d}} \Phi _{1\textrm{d}}\left ( \left | \mathbf{r}_{1i}-\mathbf{R}_{j} \right | \right )\nonumber \\ &&+\sum_{i}^{N_c} \sum_{j}^{N_{d}} \Phi_{2\textrm{d}}\left ( \left | \mathbf{r}_{2i}-\mathbf{R}_{j} \right | \right )\nonumber \\ &&+\sum_{i<j}^{N_{d}} \Phi_{\textrm{dd}}\left ( \left |\mathbf{R}_{i}-\mathbf{R}_{j} \right | \right ), \label{eqn:total-energy}\end{aligned}$$ where $k_\textrm{B}$ is the Boltzmann constant; $T$ is the temperature; $\mathbf{r}_{1i}$ and $\mathbf{r}_{2i}$ are the center-of-mass coordinates of colloid 1 and colloid 2 in dumbbell $i$, respectively; $\mathbf{R}_{j}$ is the center-of-mass coordinate of droplet $j$; $\phi_{11}, \phi_{12}$ and $\phi_{22}$ are the colloid 1-colloid 1, colloid 1-colloid 2, and colloid 2-colloid 2 pair interactions, respectively; $\Phi_{1\textrm{d}}$ and $\Phi_{2\textrm{d}}$ are the colloid 1-droplet, colloid 2-droplet pair interactions, respectively; and $\Phi_{\textrm{dd}}$ is the droplet-droplet pair interaction. The colloid-colloid pair interaction is composed of a short-ranged attractive square well and a longer-ranged repulsive Yukawa potential, $$\phi_{11}(r)=\left \{ \begin{array}{ll} \infty & r< \sigma_{1} \\ - \epsilon_{\mathrm{SW}} & \sigma_{1} <r< \sigma_{1}+\Delta \\ \epsilon_{\
{ "pile_set_name": "ArXiv" }
--- abstract: 'Using a sample of $1.31\times10^{9} ~J/\psi$ events collected with the BESIII detector, we perform a study of $J/\psi\to\gamma K\bar{K}\eta''$. The $X(2370)$ is observed in the $K\bar{K}\eta''$ invariant-mass distribution with a statistical significance of 8.3$\sigma$. Its resonance parameters are measured to be $M=2341.6\pm 6.5\text{(stat.)}\pm5.7\text{(syst.)}$ MeV/$c^{2}$ and $\Gamma = 117\pm10\text{(stat.)}\pm8\text{(syst.)}$ MeV. The product branching fractions for ${J/\psi}\to \gamma X(2370),X(2370)\to {K^{+}K^{-}}\eta''$ and ${J/\psi}\to \gamma X(2370),X(2370)\to {K_{S}^{0}K_{S}^{0}}\eta''$ are determined to be $(1.79\pm0.23\text{(stat.)}\pm0.65\text{(syst.)})\times10^{-5}$ and $(1.18\pm0.32\text{(stat.)}\pm0.39\text{(syst.)})\times10^{-5}$, respectively. No evident signal for the $X(2120)$ is observed in the $K\bar{K}\eta''$ invariant-mass distribution. The upper limits for the product branching fractions of $\mathcal{B}({J/\psi}\to\gamma X(2120)\to\gamma K^{+} K^{-} \eta'')$ and $\mathcal{B}({J/\psi}\to\gamma X(2120)\to\gamma K_{S}^{0} K_{S}^{0} \eta'')$ are determined to be $1.49\times10^{-5}$ and $6.38\times10^{-6}$ at the 90% confidence level, respectively.' author: - | M. Ablikim$^{1}$, M. N. Achasov$^{10,e}$, P. Adlarson$^{59}$, S.  Ahmed$^{15}$, M. Albrecht$^{4}$, M. Alekseev$^{58A,58C}$, A. Amoroso$^{58A,58C}$, Q. An$^{55,43}$,  Anita$^{21}$, Y. Bai$^{42}$, O. Bakina$^{27}$, R. Baldini Ferroli$^{23A}$, I. Balossino$^{24A}$, Y. Ban$^{35,l}$, K. Begzsuren$^{25}$, J. V. Bennett$^{5}$, N. Berger$^{26}$, M. Bertani$^{23A}$, D. Bettoni$^{24A}$, F. Bianchi$^{58A,58C}$, J Biernat$^{59}$, J. Bloms$^{52}$, I. Boyko$^{27}$, R. A. Briere$^{5}$, H. Cai$^{60}$, X. Cai$^{1,43}$, A. Calcaterra$^{23A}$, G. F. Cao$^{1,47}$, N. Cao$^{1,47}$, S. A. Cetin$^{46B}$, J. Chai$^{58C}$, J. F. Chang$^{1,43}$, W. L. Chang$^{1,47}$, G. Chelkov$^{27,c,d}$, D. Y. Chen$^{6}$, G. Chen$^{1}$, H. S. Chen$^{1,47}$, J. C. Chen$^{1}$, M. L. Chen$^{1,43}$, S. J. Chen$^{33}$, Y. B. Chen$^{1,43}$, W. Cheng$^{58C}$, G. Cibinetto$^{24A}$, F. Cossio$^{58C}$, X. F. Cui$^{34}$, H. L. Dai$^{1,43}$, J. P. Dai$^{38,i}$, X. C. Dai$^{1,47}$, A. Dbeyssi$^{15}$, D. Dedovich$^{27}$, Z. Y. Deng$^{1}$, A. Denig$^{26}$, I. Denysenko$^{27}$, M. Destefanis$^{58A,58C}$, F. De Mori$^{58A,58C}$, Y. Ding$^{31}$, C. Dong$^{34}$, J. Dong$^{1,43}$, L. Y. Dong$^{1,47}$, M. Y. Dong$^{1,43,47}$, Z. L. Dou$^{33}$, S. X. Du$^{63}$, J. Z. Fan$^{45}$, J. Fang$^{1,43}$, S. S. Fang$^{1,47}$, Y. Fang$^{1}$, R. Farinelli$^{24A,24B}$, L. Fava$^{58B,58C}$, F. Feldbauer$^{4}$, G. Felici$^{23A}$, C. Q. Feng$^{55,43}$, M. Fritsch$^{4}$, C. D. Fu$^{1}$, Y. Fu$^{1}$, X. L. Gao$^{55,43}$, Y. Gao$^{56}$, Y. Gao$^{35,l}$, Y. G. Gao$^{6}$, Z. Gao$^{55,43}$, I. Garzia$^{24A,24B}$, E. M. Gersabeck$^{50}$, A. Gilman$^{51}$, K. Goetzen$^{11}$, L. Gong$^{34}$, W. X. Gong$^{1,43}$, W. Gradl$^{26}$, M. Greco$^{58A,58C}$, L. M. Gu$^{33}$, M. H. Gu$^{1,43}$, S. Gu$^{2}$, Y. T. Gu$^{13}$, A. Q. Guo$^{22}$, L. B. Guo$^{32}$, R. P. Guo$^{36}$, Y. P. Guo$^{26}$, Y. P. Guo$^{9,j}$, A. Guskov$^{27}$, S. Han$^{60}$, X. Q. Hao$^{16}$, F. A. Harris$^{48}$, K. L. He$^{1,47}$, F. H. Heinsius$^{4}$, T. Held$^{4}$, Y. K. Heng$^{1,43,47}$, M. Himmelreich$^{11,h}$, T. Holtmann$^{4}$, Y. R. Hou$^{47}$, Z. L. Hou$^{1}$, H. M. Hu$^{1,47}$, J. F. Hu$^{38,i}$, T. Hu$^{1,43,47}$, Y. Hu$^{1}$, G. S. Huang$^{55,43}$, J. S. Huang$^{16}$, X. T. Huang$^{37}$, X. Z. Huang$^{33}$, N. Huesken$^{52}$, T. Hussain$^{57}$, W. Ikegami Andersson$^{59}$, W. Imoehl$^{22}$, M. 
Irshad$^{55,43}$, S. Jaeger$^{4}$, Q. Ji$^{1}$, Q. P. Ji$^{16}$, X. B. Ji$^{1,47}$, X. L. Ji$^{1,43}$, H. B. Jiang$^{37}$, X. S. Jiang$^{1,43,47}$, X. Y. Jiang$^{34}$, J. B. Jiao$^{37}$, Z. Jiao$^{18}$, D. P. Jin$^{1,43,47}$, S. Jin$^{33}$, Y. Jin$^{49}$, T. Johansson$^{59}$, N. Kalantar-Nayestanaki$^{29}$, X. S. Kang$^{31}$, R. Kappert$^{29}$, M. Kavatsyuk$^{29}$, B. C. Ke$^{1}$, I. K. Keshk$^{4}$, A. Khoukaz$^{52}$, P.  Kiese$^{26}$, R. Kiuchi$^{1}$, R. Kliemt$^{11}$, L. Koch$^{28}$, O. B. Kolcu$^{46B,g}$, B. Kopf$^{4}$, M. Kuemmel$^{4}$, M. Kuessner$^{4}$, A. Kupsc$^{59}$, M.  G. Kurth$^{1,47
{ "pile_set_name": "ArXiv" }
--- abstract: 'Changes in stoichiometric NiTi allotropes induced by hydrostatic pressure have been studied employing density functional theory. By modelling the pressure-induced transitions in a way that imitates quasi-static pressure changes, we show that the experimentally observed B19$''$ phase is (in its bulk form) unstable with respect to another monoclinic phase, B19$''''$. The lower symmetry of the B19$''''$ phase leads to unique atomic trajectories of Ti and Ni atoms (that do not share a single crystallographic plane) during the pressure-induced phase transition. This uniqueness of atomic trajectories is considered a necessary condition for the shape memory ability. The forward and reverse pressure-induced transition B19$''$[$\leftrightarrow$]{}B19$''''$ exhibits a hysteresis that is shown to originate from hitherto unexpected complexity of the Born-Oppenheimer energy surface.' author: - David Holec - Martin Friák - Antonín Dlouhý - Jörg Neugebauer title: 'Ab initio study of pressure stabilised NiTi allotropes: pressure-induced transformations and hysteresis loops' --- Introduction ============ Nickel-titanium alloys belong to the important class of shape-memory materials [@Hornbogen1991; @Saburi1998; @Van-Humbeeck1999; @Duerig1999]. Their properties include super-elasticity, excellent mechanical strength and ductility, good corrosion resistance and bio-compatibility (important for example in medical applications), and high specific electric resistance (allowing the material to be easily heated by an electric current). The shape memory effect is governed by a martensitic transformation from a high-temperature austenitic phase (cubic B2, CsCl-structure) into a low-temperature martensitic phase. X-ray experiments on single crystals[@Kudoh1985; @Michal1981] and neutron measurements on powder samples[@Buehrer1983] revealed the low temperature phase to be a monoclinic B19$'$ structure (see Fig. \[fig:B19”\], $\gamma\approx97.8{\ensuremath{^\circ}}$) with P$2_1$/m space group. In addition, a rhombohedral R-phase[@Hara1997] with P3 space group was found during multi-step martensitic transformations[@Khalil-Allafi2002; @Dlouhy2003; @Khalil-Allafi2004; @Bojda2005; @Michutta2006] under the following conditions: (i) off-stoichiometric composition, (ii) presence of substitutional or interstitial impurities, and/or (iii) formation of precipitate phases. ![Atomic geometry of the investigated B19$'$-like phases. The various structures considered in this study alternate in the lattice parameters $a$, $b$, $c$, monoclinic angle $\gamma$, and internal positions (see text for details). Larger blue spheres correspond to Ni, smaller gray spheres to Ti atoms. The highlighted planes are used to characterize the structural ability to accommodate the shape-memory effect (see Sec. \[sec:shapeMemory\]). The picture was generated using the VESTA package[@Momma2008].[]{data-label="fig:B19”"}](fig1a.eps){width="0.9\columnwidth"} Several theoretical studies on the low temperature martensitic phase of stoichiometric NiTi alloys have been performed. The intense search has been motivated in part by the fact that theoretically predicted structures do not unambiguously agree with those detected experimentally. For example @Huang2003 concluded that the B19$'$ structure is unstable with respect to a higher-symmetry base-centered orthorhombic (BCO, in some studies also termed B33) structure (see Fig. \[fig:B19”\], $\gamma\approx107{\ensuremath{^\circ}}$). 
These conclusions were based on systematically cross-checking several distinct DFT methods, functionals, and implementations (FLAPW, PAW, USPP, GGA, LDA, ABINIT, VASP, etc.). The analysis also considered a carefully selected shear transformation path connecting all three structures B2 ($\gamma=90{\ensuremath{^\circ}}$), B19$'$, and BCO, since they are characterized by a specific value of the crystallographic angle $\gamma$. Very similar results were reported by @Wagner2008 and by @GudaVishnu2010. The latter authors[@GudaVishnu2010] also predicted a new phase (B19$''$) characterized by $\gamma\approx102.5{\ensuremath{^\circ}}$ and with practically identical energy to the BCO phase. Finally, a barrier-less transformation path between the B2 and the BCO phases as a sequence of several special deformation modes was demonstrated in Ref. . Various explanations of the discrepancy between (i) the apparent stability of the B19$'$ phase as observed in low-temperature experiments and (ii) the instability of the B19$'$ phase predicted by theoretical calculations (for $T=0\,\mathrm{K}$) have been proposed: Recent theoretical works of @Sestak2011 and @Zhong2011 suggest that the B19$'$ phase may be stabilized by the presence of (nano)twins that are often experimentally observed[@Wagner2008]. As another possibility, @Huang2003 suggested that the B19$'$ structure could be stabilized by residual stresses that are frequently present in experimental samples. Since the equilibrium volume is predicted to be smaller for the B19$'$ structure than for the BCO phase[@Huang2003], one may expect the BCO structure to transform into the B19$'$ phase under compressive loads. Considering this variety of mechanisms active in NiTi and in order to understand how external strains affect the stability of the various phases, we systematically explore the potential energy surface (PES). To complement previous studies, we focus solely on martensitic phase transformations induced by volumetric changes, i.e., hydrostatic pressure. Our choice is motivated by the fact that (i) stress/strain fields in NiTi alter process-parameters of the martensitic transformations (such as, e.g., the transition temperature) and (ii) these actual stresses and strains in experimental samples are difficult to measure and are often not known. Focusing on volumetric changes, we show an unexpectedly complex PES. This complexity results in transformation mechanisms that exhibit hysteresis effects not reported in previous studies. From a methodological point of view, we also show that it is difficult to include internal variables explicitly in the PES since they are responsible for the metastability and the newly discovered hysteresis processes. Computational Details ===================== The calculations were performed using density functional theory (DFT)[@Hohenberg1964; @Kohn1965] in the generalized gradient approximation (GGA-PBE’96)[@Perdew1996] as implemented in the Vienna Ab-initio Simulation Package (VASP)[@Kresse1993; @Kresse1996]. All monoclinic structures were studied using four-atom cells with different external and internal parameters, while a two-atom cell was used for the B2 phase. As the total energy differences among different phases are rather small, it was necessary to ensure convergence of the energy below $1\,\mathrm{meV}$ per formula unit (f.u.), i.e., one Ni and one Ti atom. 
Therefore, the plane wave cutoff energy was set to $400\,\mathrm{eV}$ and a $24\times16\times18$ $\bm{k}$-point Monkhorst-Pack mesh was used to sample the Brillouin zone of the monoclinic allotropes studied. Computational Methodology: Quasi-Static Volumetric Changes ---------------------------------------------------------- The computational approach usually employed for studying the effect of hydrostatic pressure is based on determining the total energy as a function of volume. The hydrostatic pressure in the system is obtained by fitting the equation of state[@Murnaghan1944] to the calculated energy–volume data points. Because the B19$'$ and the BCO phases are structurally similar and differ only slightly in a few internal (atomic coordinates) and external (lattice constants and the angle $\gamma$) parameters, the multi-dimensional Born-Oppenheimer potential energy surface (PES) is expected to be quite complex, exhibiting many local minima. In order to explore the impact of hydrostatic pressure on phase stability and martensitic phase transformations among different NiTi allotropes, we determined the PES as a function of (i) the atomic volume, (ii) the Ni-atom $x$-axis internal coordinate, and (iii) the monoclinic angle $\gamma$ (see details below). In order to systematically map the complex PES, we adopted a quasi-static (QS) approach, within which the volume is increased/decreased gradually in an adiabatic-like manner (see detailed explanation in Appendix \[app-QS\]). This less commonly used approach allows for more realistic simulations of gradually increasing/decreasing pressures, since it closely imitates experimental conditions. Results and discussion ====================== The monoclinic allotropes under hydrostatic load ------------------------------------------------ [(a)]{} ![image](fig2.eps){width="0.9\columnwidth"} [(b)]{} ![image](fig2a.eps){width="0.9\columnwidth"} The QS simulations were initiated using the previously identified ground states for each phase (B19$'$ and BCO). Subsequently, both structures were allowed to evolve quasi-statically under applied volumetric changes. Fig. \[fig:gamma.and.xNi\] summarizes results from four
{ "pile_set_name": "ArXiv" }
--- abstract: 'The magnetic properties of various iron pnictides are investigated using first-principles pseudopotential calculations. We consider three different families, LaFePnO, BaFe$_2$Pn$_2$, and LiFePn with Pn=As and Sb, and find that the Fe local spin moment and the stability of the stripe-type antiferromagnetic phase increase from As to Sb for all of the three families, with a partial gap formed at the Fermi energy. Meanwhile, the Fermi-surface nesting is found to be enhanced from Pn=As to Sb for LaFePnO, but not for BaFe$_2$Pn$_2$ and LiFePn. These results indicate that it is not the Fermi surface nesting but the local moment interaction that determines the stability of the magnetic phase in these materials, and that the partial gap is a feature induced by a specific magnetic order.' author: - 'Chang-Youn Moon, Se Young Park, and Hyoung Joon Choi' title: 'Dominant role of local-moment interactions in the magnetism in iron pnictides: comparative study of arsenides and antimonides from first-principles' --- The iron pnictide superconductors and their fascinating physical properties have become central issues in many fields since their recent discoveries [@Kamihara2006; @Kamihara2008; @Takahashi]. The prototype materials are REFeAsO with various RE (rare earth) elements, and the superconducting transition temperature ($T_c$) is as high as 55 K in doped SmFeAsO [@Ren2053]. Other compounds with various types of insulating layers are also superconducting when doped, such as K-doped BaFe$_2$As$_2$ [@0805.4021; @0805.4630] and SrFe$_2$As$_2$ [@0806.1043; @0806.1209] with $T_c$ of 38 K, and LiFeAs with $T_c$ of 16 K [@Pitcher] or 18 K [@Wang; @Tapp]. Without doping, these materials exhibit a peculiar magnetic structure of a stripe-type antiferromagnetic (AFM) spin configuration coupled to an orthorhombic atomic structure, and either hole or electron doping destroys the AFM order and the superconductivity emerges subsequently. Hence the magnetism is considered to be closely related to the superconductivity in these materials [@Giovannetti; @Singh; @Haule; @Xu; @Mazin], and spin-fluctuation-mediated superconductivity is assumed in many theoretical works [@Mazin2; @Kuroki; @Korshunov]. Understanding the nature of magnetism in these materials is thus of crucial importance, but it is still under debate. On one hand, many theoretical [@Mazin; @Cvetkovic; @Yin; @Dong] and experimental [@Dong; @Lorenz; @Cruz; @Klauss; @Hu] works emphasize the itinerant nature of the magnetism of the spin density wave (SDW) type, since the electron and hole Fermi surfaces (FS) are separated by a commensurate nesting vector in iron pnictides, which is further supported by the reduced magnetic moment of about 0.3 $\mu_B$ [@Cruz; @Klauss] and the energy gap near the Fermi energy ($E_F$). On the other hand, there are also interpretations based on the Heisenberg-type interaction between localized spin moments [@Si; @Yang2; @Yildirim]. In this localized-moment picture, the observed stripe-type AFM ordering results from the frustrated spin configuration with the next-nearest-neighbor exchange interaction ($J_2$) larger than half of the nearest-neighbor (NN) interaction ($J_1$). The itinerant and the local-moment pictures are based on different assumptions on the electron itinerancy, but a more comprehensive mechanism might be discovered by combining the two pictures [@Wu; @Kou]. 
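To see why $J_2>J_1/2$ selects the stripe order in the localized-moment picture, one can compare the classical energies per site on the square lattice, $E_{\rm Neel}=(-2J_1+2J_2)S^2$ for the checkerboard state versus $E_{\rm stripe}=-2J_2S^2$; a minimal numerical check of this standard $J_1$-$J_2$ result (not a calculation from this paper):

```python
# Classical J1-J2 energies per site on the square lattice: the stripe
# configuration becomes the ground state once J2 exceeds J1/2.
J1, S = 1.0, 1.0
for J2 in (0.3, 0.5, 0.7):
    E_neel = (-2 * J1 + 2 * J2) * S**2    # (pi, pi) checkerboard order
    E_stripe = -2 * J2 * S**2             # (pi, 0) stripe order
    winner = "stripe" if E_stripe < E_neel else "Neel" if E_neel < E_stripe else "tie"
    print(f"J2/J1 = {J2:.1f}: E_Neel = {E_neel:+.2f}, E_stripe = {E_stripe:+.2f} -> {winner}")
```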
Recently, motivated by the great success of the As substitution for P in LaFePO in raising $T_c$, hypothetical iron antimonide compounds have been studied as candidates for a higher-$T_c$ superconductor by first-principles calculations [@Moon; @Zhang]. In these works, Sb substitution for As is found to modify the FS nesting and the magnetic stability significantly. Thus, with more variation of compounds including antimonides, a more comprehensive understanding of the nature of magnetism in iron pnictides would be possible through a systematic comparative study dealing with many different types of compounds altogether. In this study, we present our density-functional pseudopotential calculations of the electronic and magnetic properties of various iron arsenides and antimonides: LaFePnO, BaFe$_2$Pn$_2$, and LiFePn (Pn=As and Sb). We find that there is no systematic trend in the FS nesting between arsenides and antimonides, whereas the stability and the local Fe spin moment of the magnetic phase increase from arsenides to antimonides for all three types of compounds. This finding is consistent with the Heisenberg-type interaction picture in that the local Fe moment is larger for antimonides, with the enhanced Hund’s rule coupling due to their larger lattice constants. We also find that the FS reconstruction and the subsequent formation of a partial gap in the density of states (DOS) at $E_F$ can be regarded as a secondary effect caused by the magnetic ordering of local moments. Our first-principles calculations are based on the density-functional theory (DFT) within the generalized gradient approximation (GGA) for the exchange-correlation energy functional [@PBE] and the [*ab-initio*]{} norm-conserving pseudopotentials as implemented in the SIESTA code [@SIESTA]. Semicore pseudopotentials are used for Fe, La, and Ba, and electronic wave functions are expanded with localized pseudoatomic orbitals (double zeta polarization basis set), with a cutoff energy for the real-space mesh of 500 Ry. Brillouin zone integration is performed using the Monkhorst-Pack scheme [@Monkhorst] with a 12 $\times$ 12 $\times$ 6 k-point grid. First we obtain the optimized cell parameters and atomic coordinates of the compounds by total energy minimization, as listed in Table I. For the non-magnetic (NM) phase, tetragonal structures are obtained while the stripe-type AFM phase prefers the orthorhombic structure of the approximate $\sqrt{2} \times \sqrt{2}$ supercell, in agreement with experiments. The lowering of the total energy per Fe atom in the stripe-type AFM phase in the optimized orthorhombic structure relative to the NM phase in the optimized tetragonal unit cell is 354 and 706 meV for LaFeAsO and LaFeSbO, 297 and 745 meV for BaFe$_2$As$_2$ and BaFe$_2$Sb$_2$, and 153 and 523 meV for LiFeAs and LiFeSb, respectively. Along with the local magnetic moments on Fe atoms displayed in Table I, this result implies the existence of a universal trend that the magnetism is stronger for antimonides than for arsenides irrespective of the detailed material properties. Figure 1 shows the calculated FSs on the $k_z=0$ plane. To facilitate the investigation of the nesting feature, the electron and hole surfaces are drawn together in the reduced Brillouin zone for the $\sqrt{2} \times \sqrt{2}$ supercell. LaFeSbO shows an enhanced nesting between the electron and hole surfaces, which coincide with each other very isotropically with almost circular shapes, compared with LaFeAsO [@Moon]. 
For BaFe$_2$Pn$_2$, the arsenide exhibits a moderate nesting feature, while nesting looks poor for the antimonide because hole surfaces, which are present in the arsenide, are missing, so that the electron surfaces have no hole surfaces to couple with nearby. LiFeSb also shows an inefficient nesting compared with LiFeAs, with some hole surfaces missing around the $\Gamma$ point. The nesting feature can be more quantitatively estimated by evaluating the Pauli susceptibility $\chi_0({\bf q})$ as a function of the momentum ${\bf q}$ in the static limit with matrix elements ignored. The result is displayed in Fig. 2. For LaFePnO, $\chi_0$ is larger for LaFeSbO over the entire range of ${\bf q}$, especially at the nesting vector ${\bf q}=(\pi,\pi)$ where the pronounced peak is located. This peak indicates the enhanced FS nesting for LaFeSbO, consistent with the FS topology in Fig. 1. For BaFe$_2$Pn$_2$, the situation is drastically different. Although the susceptibility for BaFe$_2$As$_2$ has a similar ${\bf q}$ dependence to those for LaFePnO, the susceptibility for BaFe$_2$Sb$_2$ is larger only over a partial range of ${\bf q}$ with a very weak ${\bf q}$ dependence and, moreover, there is no peak at ${\bf q}=(\pi,\pi)$. This feature clearly reflects the poor FS nesting in BaFe$_2$Sb$_2$ due to the lack of hole surfaces, as shown in Fig. 1. Finally, LiFeSb also has a smaller $\chi_0({\bf q})$ than LiFeAs near $(\pi,\pi)$, hence LiFeSb has less effective FS nesting at $(\pi,\pi)$ than LiFeAs. Although many previous studies suggest the itinerant magnetism in iron pnictides that the stripe
{ "pile_set_name": "ArXiv" }
--- abstract: 'Let $G$ be a finite group and $R$ be a commutative ring. The Mackey algebra $\mu_{R}(G)$ shares a lot of properties with the group algebra $RG$; however, there are some differences. For example, the group algebra is a symmetric algebra and this is not always the case for the Mackey algebra. In this paper we present a systematic approach to the question of the symmetry of the Mackey algebra, by producing symmetric associative bilinear forms for the Mackey algebra. Using the fact that the category of Mackey functors is a closed symmetric monoidal category, we prove that the Mackey algebra $\mu_{R}(G)$ is a symmetric algebra if and only if the family of Burnside algebras $(RB(H))_{H\leqslant G}$ is a family of symmetric algebras with a compatibility condition. As a corollary, we recover the well-known fact that over a field of characteristic zero, the Mackey algebra is always symmetric. Over the ring of integers the Mackey algebra of $G$ is symmetric if and only if the order of $G$ is square-free. Finally, if $(K,\mathcal{O},k)$ is a $p$-modular system for $G$, we show that the Mackey algebras $\mu_{\mathcal{O}}(G)$ and $\mu_{k}(G)$ are symmetric if and only if the Sylow $p$-subgroups of $G$ are of order $1$ or $p$.' author: - Baptiste Rognerud title: 'Trace maps for Mackey algebras.' --- \[section\] \[section\] \[theo\][Proposition]{} \[theo\][Lemma]{} \[theo\][Corollary]{} \[theo\][Question]{} \[theo\][Notations]{} \[theo\][Definition]{} \[theox\][Definition]{} \[theo\][Example]{} \[theo\][Remark]{} \[theo\][Remarks]{} [*[Key words: Finite group. Mackey functor. Symmetric Algebra. Symmetric monoidal category. Burnside Ring.]{}*]{} [*[A.M.S. subject classification: 19A22, 20C05, 18D10, 16W99.]{}*]{} Trace maps for Mackey algebras. =============================== Introduction. ------------- Let $R$ be a unital commutative ring and $G$ be a finite group. The notion of Mackey functor was introduced by Green in $1971$. For him, a Mackey functor is an axiomatisation of the behaviour of the representations of a finite group. There are now several possible definitions of Mackey functors; in this paper we use the point of view of Dress, who defined Mackey functors as particular bivariant functors, and we use the Mackey algebra introduced by Thévenaz and Webb. In [@tw] they proved that a Mackey functor is nothing but a module over the so-called Mackey algebra. Numerous properties of this algebra are known: it shares a lot of properties with the group algebra. For example, the Mackey algebra is a free $R$-module, and its $R$-rank doesn’t depend on the ring $R$. If we work with a $p$-modular system which is “large enough”, there is a decomposition theory; in particular, the Cartan matrix of this algebra is symmetric. However, there are some differences: over a field of characteristic $p>0$, where $p\mid |G|$, the determinant of the Cartan matrix is not a power of the prime number $p$ in general, and as shown in [@tw] the Mackey algebra is seldom a self-injective algebra. One may wonder about a stronger property for the Mackey algebra: when is the Mackey algebra a symmetric algebra? The answer to this question depends on the ring $R$. When $R$ is a field of characteristic $0$ or of characteristic coprime to $|G|$, the Mackey algebra is semi-simple (see [@tw_simple]), so it is clearly a symmetric algebra. 
Over a field of characteristic $p>0$ which is “*large enough*”, where $p\mid |G|$, Jacques Thévenaz and Peter Webb proved that the so-called $p$-local Mackey algebra (see [@bouc_resolution]) is self-injective if and only if the Sylow $p$-subgroups of $G$ are of order $p$. However, in the same article, they proved that the $p$-local Mackey algebra is a product of matrix algebras and Brauer tree algebras. Since a Brauer tree algebra is derived equivalent to a symmetric Nakayama algebra, by [@rickard_derived] or, for a more general result, [@zimmermann_tilted_orders], all Brauer tree algebras are symmetric algebras. So the $p$-local Mackey algebra over a field of characteristic $p$ is symmetric if and only if the Sylow $p$-subgroups are of order $1$ or $p$. Now the Mackey algebra of the group $G$ is Morita equivalent to a direct product of $p$-local Mackey algebras for some sub-quotients of the group $G$ (Theorem 10.1 [@tw]), so if $p^2 \nmid |G|$, the Mackey algebra of $G$ is symmetric. However, if $(K,\mathcal{O},k)$ is a $p$-modular system for the group $G$, it is not so clear that the previous argument can be used for the valuation ring $\mathcal{O}$. In particular, the Mackey algebras over the valuation rings are rather complicated objects (see Section $6.3$ of [@these]). An $R$-algebra is a symmetric algebra if it is a projective $R$-module and if there exists a non-degenerate symmetric, associative bilinear form on this algebra. One may think that the previous argument for the symmetry of the Mackey algebra is somewhat elaborate for something as elementary as the existence of a bilinear form on this algebra. However, for the Mackey algebra it is not obvious how to specify such a bilinear form, even in the semi-simple case. In this paper we propose a systematic approach to this question: by using the so-called Burnside Trace, introduced by Serge Bouc ([@bouc_burnside_dim]), we reduce the question of the existence of such a bilinear form on the Mackey algebra to the question of the existence of a family of symmetric, associative, non-degenerate bilinear forms on Burnside algebras with an extra property. Here we denote by $RB(H)$ the usual Burnside algebra of the group $H$. Let $G$ be a finite group and $R$ be a commutative ring. Let $\phi=(\phi_{H})_{H\leqslant G}$ be a family of linear maps such that $\phi_{H}$ is a linear form on $RB(H)$. Let $b_{\phi_{H}}$ be the bilinear form on $RB(H)$ defined by $b_{\phi_{H}}(X,Y):= \phi_{H}(XY)$ for $X,Y\in RB(H)$. 1. The family $\phi$ is stable under induction if for every subgroup $H$ of $G$ and every finite $H$-set $X$ we have $\phi_{G}(Ind_{H}^{G}(X)) = \phi_{H}(X)$. 2. The family $\big(RB(H)\big)_{H\leqslant G}$ is a stable by induction family of symmetric algebras if there exists a stable by induction family of linear forms $\phi=(\phi_{H})_{H\leqslant G}$ such that the bilinear form $b_{\phi_{H}}$ on $RB(H)$ is non-degenerate for all $H\leqslant G$. The main result of the paper is the following theorem: Let $G$ be a finite group and $R$ be a commutative ring. Then the Mackey algebra $\mu_{R}(G)$ is a symmetric algebra if and only if $\big(RB(H)\big)_{H\leqslant G}$ is a stable by induction family of symmetric algebras. As a corollary, we produce various symmetric associative bilinear forms on the Mackey algebra which generalize the usual bilinear form on the group algebra. 
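For orientation, the “usual bilinear form for the group algebra” that these constructions generalize can be written down explicitly; the following standard display (not taken from the paper) records it:

```latex
% Classical symmetrizing form on the group algebra RG: the linear form
% \phi reads off the coefficient of the identity element.
\[
  \phi\Big(\sum_{g\in G} a_g\, g\Big) = a_1, \qquad b(x,y) := \phi(xy),
  \qquad\text{so that}\qquad
  b(g,h) = \begin{cases} 1 & \text{if } h = g^{-1},\\ 0 & \text{otherwise.}\end{cases}
\]
```

This form is symmetric (since $gh=1$ if and only if $hg=1$), associative ($b(xy,z)=\phi(xyz)=b(x,yz)$) and non-degenerate, exhibiting $RG$ as a symmetric algebra over any commutative ring $R$.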
Using these forms we give direct and elementary proofs of the symmetry of the Mackey algebras in the following cases: - Over the ring of integers $\mathbb{Z}$, the Mackey algebra of a finite group $G$ is symmetric if and only if the order of $G$ is square-free. - Over a field $k$ of characteristic $0$, the Mackey algebra of $G$ is symmetric. - Over a field $k$ of characteristic $p>0$, the Mackey algebra of $G$ is symmetric if and only if $p^2\nmid |G|$. - Let $p$ be a prime number such that $p\mid |G|$. Let $R$ be a ring in which all prime divisors of $|G|$, except $p$, are invertible. Then the Mackey algebra $\mu_{R}(G)$ is symmetric if and only if $p^2 \nmid |G|$. In particular, if $(K,\mathcal{O},k)$ is a $p$-modular system for $G$, then the Mackey algebras $\mu_{k}(G)$ and $\mu_{\mathcal{O}}(G)$ are symmetric if and only if $p^2 \nmid |G|$. We use the following notations: - Let $G$ be a finite group. Then $[s(G)]$
{ "pile_set_name": "ArXiv" }
--- abstract: 'The Gromov-Hausdorff distance $(d_{GH})$ proves to be a useful distance measure between shapes. In order to approximate $d_{GH}$ for compact subsets $X,Y\subset\R^d$, we look into its relationship with $d_{H,iso}$, the infimum Hausdorff distance under Euclidean isometries. As is already known for dimension $d\geq 2$, $d_{H,iso}$ cannot be bounded above by a constant factor times $d_{GH}$. For $d=1$, however, we prove that $d_{H,iso}\leq\frac{5}{4}d_{GH}$. We also show that the bound is tight. In effect, this gives rise to an $O(n\log{n})$-time algorithm to approximate $d_{GH}$ with an approximation factor of $\left(1+\frac{1}{4}\right)$.' author: - 'Sushovan Majhi[^1][^2]' - Jeffrey Vitter - 'Carola Wenk$^\dag$' bibliography: - 'main.bib' title: 'Approximating Gromov-Hausdorff Distance in Euclidean Space' --- Introduction {#sec:intro} ============ This paper grew out of our effort to compute the Gromov-Hausdorff distance between Euclidean subsets. The Gromov-Hausdorff distance between two abstract metric spaces was first introduced by M. Gromov in ICM 1979 (see Berger [@berger_encounter_2000]). The notion, although it emerged in the context of Riemannian metrics, proves to be a natural distance measure between any two (compact) metric spaces. Only in the last decade has the Gromov-Hausdorff distance received much attention from researchers in the more applied fields. In shape recognition and comparison, shapes are regarded as metric spaces that are deformable under a class of transformations. Depending on the application in question, a suitable class of transformations is chosen, and the dissimilarity between the shapes is then defined by a suitable notion of *distance measure or error* that is invariant under the desired class of transformations. For comparing Euclidean shapes under Euclidean isometry, the use of the Gromov-Hausdorff distance is proposed and discussed in [@memoli_theoretical_2005; @memoli_use_nodate; @memoli_gromov-hausdorff_2008; @memoli_properties_2012]. In this paper, we are primarily motivated by the questions pertaining to the computation of the Gromov-Hausdorff distance, particularly between Euclidean subsets. Although the distance measure puts Euclidean shape matching on a robust theoretical foundation [@memoli_theoretical_2005; @memoli_use_nodate], the question of computing the Gromov-Hausdorff distance, or even an approximation thereof, still remains elusive. In recent years, some efforts have been made to address such computational aspects. Most notably, the authors of [@agarwal_computing_2015] show an NP-hardness result for approximating the Gromov-Hausdorff distance between metric trees. For Euclidean subsets, however, the question of a polynomial time algorithm is still open. In [@memoli_properties_2012], the author shows that computing the distance is related to various NP-hard problems and studies a variant of the Gromov-Hausdorff distance. #### Background and Related Work The notion of Gromov-Hausdorff distance is closely related to the notion of Hausdorff distance. Let $(Z,d_Z)$ be any metric space. We first give a formal definition of the directed Hausdorff distance between any two subsets of $Z$. \[def:dh\] For any two compact subsets $X,Y$ of a metric space $(Z,d_Z)$, the *directed Hausdorff distance* from $X$ to $Y$, denoted $\overrightarrow{d}_H^Z(X,Y)$, is defined by $$\sup_{x\in X}\inf_{y\in Y}d_Z(x,y).$$ Unfortunately, the directed Hausdorff distance is not symmetric. 
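For finite point sets the supremum and infimum in Definition \[def:dh\] become a maximum and minimum, which makes the directed distance straightforward to evaluate; a minimal brute-force sketch (scipy also provides `scipy.spatial.distance.directed_hausdorff`, a faster randomized implementation):

```python
# Brute-force directed Hausdorff distance max_x min_y |x - y| in O(|X||Y|).
import numpy as np

def directed_hausdorff(X, Y):
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    pair = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)  # pairwise distances
    return pair.min(axis=1).max()

X = [[0.0, 0.0], [1.0, 0.0]]
Y = [[0.0, 0.0]]
print(directed_hausdorff(X, Y), directed_hausdorff(Y, X))  # 1.0 vs 0.0: asymmetric
```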
To retain symmetry, the *Hausdorff distance* is defined in the following way: \[def:h\] For any two compact subsets $X,Y$ of a metric space $(Z,d_Z)$, their *Hausdorff distance*, denoted $d_H^Z(X,Y)$, is defined by $$\max\left\{\overrightarrow{d}_H^Z(X,Y),\overrightarrow{d}_H^Z(Y,X)\right\}.$$ To keep our notations simple, we drop the superscript when it is understood that $Z$ is taken to be $\R^d$ and $X,Y$ are Euclidean subsets equipped with the standard Euclidean metric $\mod{\cdot}$. The $d_{H}$ can be computed in $O(n\log{n})$-time for finite point sets with at most $n$ points; see [@Alt1995]. We are now in a position to define the Gromov-Hausdorff distance formally. Unlike the Hausdorff distance, the Gromov-Hausdorff distance can be defined between two abstract metric spaces $(X,d_X)$ and $(Y,d_Y)$ that may not share a common ambient space. We start with the following formal definition: \[def:gh\] The *Gromov-Hausdorff distance*, denoted $d_{GH}(X,Y)$, between two metric spaces $(X,d_X)$ and $(Y,d_Y)$ is defined to be $$d_{GH}(X,Y)=\inf_{\substack{f:X\to Z \\g:Y\to Z\\ Z}}d_H^Z(f(X),g(Y)),$$ where the infimum is taken over all isometries $f:X\to Z$, $g:Y\to Z$ and metric spaces $(Z,d_Z)$. In order to present an equivalent definition of the Gromov-Hausdorff distance that is computationally viable, we first define the notion of a correspondence. \[def:cor\] A *correspondence* $\C$ between any two (non-empty) sets $X$ and $Y$ is defined to be a subset $\C\subseteq X\times Y$ with the following two properties: i) for any $x\in X$, there exists a $y\in Y$ such that $(x,y)\in\C$, and ii) for any $y\in Y$, there exists an $x\in X$ such that $(x,y)\in\C$. A correspondence $\C$ is a special *relation* that assigns to every point of $X$ and of $Y$ at least one corresponding point. If the sets $X$ and $Y$ in the above definition are equipped with metrics $d_X$ and $d_Y$, we can also define the distortion of the correspondence $\C$. Let $\C$ be a correspondence between two metric spaces $(X,d_X)$ and $(Y,d_Y)$; then its *distortion*, denoted $Dist(\C)$, is defined to be $$\sup_{(x_1,y_1),(x_2,y_2)\in\C}\mod{d_X(x_1,x_2)-d_Y(y_1,y_2)}$$ The distortion $Dist(\C)$ is sometimes called the *additive* distortion as opposed to the *multiplicative* distortion; see [@kenyon_low_2010] for a definition. For non-empty sets $X,Y$, we denote by $\C(X,Y)$ the set of all correspondences between $X$ and $Y$. We note the following relation, which can be used to give an equivalent definition of the Gromov-Hausdorff distance via correspondences. For a proof of the following, the readers are encouraged to see [@burago_course_2001]. For any two compact metric spaces $(X,d_X)$ and $(Y,d_Y)$, the following relation holds: $$d_{GH}(X,Y)=\frac{1}{2}\inf\limits_{\C\in\C(X,Y)} Dist(\C)$$ This work is primarily motivated by the question of approximating the Gromov-Hausdorff distance between compact sets $X,Y\subset\R^d$. In [@memoli_gromov-hausdorff_2008], the authors use a related notion $d_{H,iso}$ in an effort to bound the Gromov-Hausdorff distance in the Euclidean case. For any $d\geq1$, a Euclidean isometry $T:\R^d\to\R^d$ is defined to be a map that preserves the distance, i.e., $\mod{T(a)-T(b)}=\mod{a-b}~\forall a,b\in\R^d$. When $d=1$, $T$ can only be a translation or a reflection (flip). For $d=2$, a Euclidean isometry is characterized by a combination of a translation, a rotation by an angle, or a mirror-reflection. For more about Euclidean isometries, see [@artin2011algebra]. We denote by $\E(\R^d)$ the set of all isometries of $\R^d$. 
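The correspondence formulation is directly computable for small finite metric spaces; a minimal sketch that evaluates $Dist(\C)$ for a given correspondence between two distance matrices (an exhaustive search over all correspondences would then bound $d_{GH}$ for toy inputs):

```python
# Distortion of a correspondence C between finite metric spaces given by
# distance matrices dX, dY; d_GH is half the infimum over all C.
import itertools
import numpy as np

def distortion(C, dX, dY):
    return max(abs(dX[i1][i2] - dY[j1][j2])
               for (i1, j1), (i2, j2) in itertools.product(C, repeat=2))

dX = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])   # unit equilateral triangle
dY = 2 * dX                                        # the same triangle scaled by 2

C = [(0, 0), (1, 1), (2, 2)]                       # one particular correspondence
print("Dist(C) =", distortion(C, dX, dY))          # -> 1
print("hence d_GH <= Dist(C)/2 =", distortion(C, dX, dY) / 2)
```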
\[def:hiso\] For any two compact subsets $X,Y$ of $\R^d$, we define $d_{H,iso}(X,Y)$ to be $$\inf\limits_{T\in\mathcal{E}(\R^d)} d_H(X,T(Y
{ "pile_set_name": "ArXiv" }
--- abstract: 'This paper addresses the very old but burning problem of energy in General Relativity. We evaluate energy and momentum densities for the static and axisymmetric solutions. This specializes to two metrics, i.e., the Erez-Rosen and the gamma metrics, belonging to the Weyl class. We apply the four well-known prescriptions of Einstein, Landau-Lifshitz, Papapetrou and M$\ddot{o}$ller to compute the energy-momentum density components. We find that these prescriptions do not provide the same energy density; however, the momentum is constant in each case. The results can be matched under particular boundary conditions.' author: - | M. Sharif [^1] and Tasnim Fatima\ Department of Mathematics, University of the Punjab,\ Quaid-e-Azam Campus, Lahore-54590, Pakistan. title: '**Energy Distribution associated with Static Axisymmetric Solutions**' --- [**Keywords:**]{} Energy-momentum, axisymmetric spacetimes. Introduction ============ The problem of the energy-momentum of a gravitational field has always been an attractive issue in the theory of General Relativity (GR). The notion of energy-momentum for asymptotically flat spacetime is unanimously accepted. Serious difficulties in connection with this notion arise in GR. However, for gravitational fields, the energy-momentum can be made to vanish locally. Thus one is always able to find a frame in which the energy-momentum of the gravitational field is zero, while in other frames this is not true. Noether’s theorem and translation invariance lead to the canonical energy-momentum density tensor, $T_a^b$, which is conserved. $$T^b_{a;b}=0,\quad (a,b=0,1,2,3).$$ In order to obtain a meaningful expression for energy-momentum, a large number of definitions for the gravitational energy-momentum in GR have been proposed. The first attempt was made by Einstein, who suggested an expression for the energy-momentum density \[1\]. After this, many physicists including Landau-Lifshitz \[2\], Papapetrou \[3\], Tolman \[4\], Bergmann \[5\] and Weinberg \[6\] proposed different expressions for the energy-momentum distribution. These definitions of energy-momentum complexes give meaningful results when calculations are performed in Cartesian coordinates. However, the expressions given by M$\ddot{o}$ller \[7,8\] and Komar \[9\] allow one to compute the energy-momentum densities in any spatial coordinate system. An alternate concept of energy, called quasi-local energy, does not restrict one to the use of a particular coordinate system. A large number of definitions of quasi-local masses have been proposed by Penrose \[10\] and many others \[11,12\]. Chang et al. \[13\] showed that every energy-momentum complex can be associated with a distinct boundary term which gives the quasi-local energy-momentum. There is controversy about the importance of non-tensorial energy-momentum complexes, whose physical interpretation has been a problem for scientists. There is an uncertainty whether different energy-momentum complexes would give different results for a given spacetime. Many researchers considered different energy-momentum complexes and obtained encouraging results. Virbhadra et al. \[14-18\] investigated several examples of spacetimes and showed that different energy-momentum complexes could provide exactly the same results for a given spacetime. They also evaluated the energy-momentum distribution for asymptotically non-flat spacetimes and found contradictions to the previous results obtained for asymptotically flat spacetimes. 
Xulu \[19,20\] evaluated the energy-momentum distribution using the M$\ddot{o}$ller definition for the most general non-static spherically symmetric metric. He found that the results differ in general from those obtained using Einstein’s prescription. Aguirregabiria et al. \[21\] proved the consistency of the results obtained by using the different energy-momentum complexes for any Kerr-Schild class metric. On the contrary, one of the authors (MS) considered the class of gravitational waves, the G$\ddot{o}$del universe and homogeneous G$\ddot{o}$del-type metrics \[22-24\] and used the four definitions of the energy-momentum complexes. He concluded that the four prescriptions differ in general for these spacetimes. Ragab \[25,26\] obtained contradictory results for G$\ddot{o}$del-type metrics and the Curzon metric, which is a special solution of the Weyl metrics. Patashnick \[27\] showed that different prescriptions give mutually contradictory results for a regular MMaS-class black hole. In recent papers, we extended this procedure to the non-null Einstein-Maxwell solutions, the electromagnetic generalization of the G$\ddot{o}$del solution, a singularity-free cosmological model and the Weyl metrics \[28-30\]. We applied the four definitions and concluded that none of the definitions provides consistent results for these models. This paper continues the investigation of the energy-momentum distribution for the family of Weyl metrics using the four prescriptions of the energy-momentum complexes. In particular, we explore the energy-momentum of the Erez-Rosen and gamma metrics. The paper is organized as follows. In the next section, we shall describe the Weyl metrics and two of their family members, the Erez-Rosen and gamma metrics. Section 3 is devoted to the evaluation of energy-momentum densities for the Erez-Rosen metric by using the prescriptions of Einstein, Landau-Lifshitz, Papapetrou and M$\ddot{o}$ller. In section 4, we shall calculate energy-momentum density components for the gamma metric. The last section contains a discussion and summary of the results. The Weyl Metrics ================ Static axisymmetric solutions to the Einstein field equations are given by the Weyl metric \[31,32\] $$ds^2=e^{2\psi}dt^2-e^{-2\psi}[e^{2\gamma}(d\rho^2+dz^2) +\rho^2d\phi^2]$$ in the cylindrical coordinates $(\rho,~\phi,~z)$. Here $\psi$ and $\gamma$ are functions of coordinates $\rho$ and $z$. The metric functions satisfy the following differential equations $$\begin{aligned} \psi_{\rho\rho}+\frac{1}{\rho}\psi_{\rho}+\psi_{zz}=0,\\ \gamma_{\rho}=\rho(\psi^2_{\rho}-\psi^2_{z}),\quad \gamma_{z}=2\rho\psi_{\rho}\psi_{z}.\end{aligned}$$ It is obvious that Eq.(3) represents the Laplace equation for $\psi$. Its general solution, yielding an asymptotically flat behaviour, will be $$\psi=\sum^\infty_{n=0}\frac{a_n}{r^{n+1}}P_n(\cos\theta),$$ where $r=\sqrt{\rho^2+z^2},~\cos\theta=z/r$ are Weyl spherical coordinates and $P_n(\cos\theta)$ are Legendre polynomials. The coefficients $a_n$ are arbitrary real constants which are called [*Weyl moments*]{}. It is mentioned here that if we take $$\begin{aligned} \psi=-\frac{m}{r},\quad\gamma=-\frac{m^2\rho^2}{2r^4},\quad r=\sqrt{\rho^2+z^2}\end{aligned}$$ then the Weyl metric reduces to the special Curzon solution \[33\]. There are more interesting members of the Weyl family, namely the Erez-Rosen and the gamma metrics, whose properties have been extensively studied in the literature \[32,34\]. 
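To make the multipole solution for $\psi$ given above concrete, here is a minimal numerical sketch (our own illustration, not from the paper; the truncation order and the Weyl moments $a_n$ are arbitrary choices) that evaluates the expansion on Weyl coordinates:

```python
import numpy as np
from scipy.special import eval_legendre

def psi_weyl(rho, z, moments):
    """Truncated solution of the Weyl-Laplace equation:
    psi = sum_n a_n P_n(cos(theta)) / r^(n+1), with (r, theta)
    the Weyl spherical coordinates built from (rho, z)."""
    r = np.sqrt(rho**2 + z**2)
    cos_theta = z / r
    psi = np.zeros_like(r)
    for n, a_n in enumerate(moments):
        psi += a_n * eval_legendre(n, cos_theta) / r**(n + 1)
    return psi

# Keeping only the monopole moment a_0 = -m reproduces the Curzon
# potential psi = -m/r quoted in the text.
m = 1.0
print(psi_weyl(np.array([3.0]), np.array([4.0]), [-m]))  # -> -m/5 = -0.2
```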
The Erez-Rosen metric \[32\] is defined by considering a special value of the metric function $$2\psi=\ln(\frac{x-1}{x+1})+q_2(3y^2-1)[\frac{1}{4}(3x^2-1) \ln(\frac{x-1}{x+1})+\frac{3}{2}x],$$ where $q_2$ is a constant. Energy and Momentum for the Erez-Rosen Metric ============================================= In this section, we shall evaluate the energy and momentum density components for the Erez-Rosen metric by using different prescriptions. To obtain meaningful results in the prescriptions of Einstein, Landau-Lifshitz and Papapetrou, it is required to transform the metric to Cartesian coordinates. This can be done by using the transformation equations $$x=\rho \cos\theta,\quad y=\rho \sin\theta.$$ The resulting metric in these coordinates will become $$ds^2=e^{2\psi}dt^2-\frac{e^{2(\gamma-\psi)}}{\rho^2}(xdx+ydy)^2\nonumber\\ -\frac{e^{-2\psi}}{\rho^2}(xdy-ydx)^2-e^{2(\gamma-\psi)}dz^2.$$ Energy and Momentum in Einstein’s Prescription ---------------------------------------------- The energy-momentum complex of Einstein \[1\] is given by $$\Theta^b_a= \frac{1}{16 \pi}H^{bc}_{a,c},$$ where $$H^{bc}_a=\frac{g_{ad}}{\sqrt{-g}}[-g(g^{bd}g^{ce} -g^{be}g^{cd})]_{,e},\quad a,b,c,d,e = 0,1,2,3.$$ Here $\Theta^0_{0}$ is the energy density, $\Theta^i_{0}~ (i=1,2,3)$ are the momentum density components and $\Theta^0_{i}$
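As an illustration of the Einstein prescription above, the following sympy sketch (our own construction, not the authors' code) builds the superpotentials $H^{bc}_a$ and the complex $\Theta^b_a$ directly from a metric supplied as a matrix; as a sanity check, the Minkowski metric in Cartesian coordinates gives $\Theta^b_a=0$:

```python
import sympy as sp

def einstein_complex(g, coords):
    """Einstein complex Theta^b_a = H^{bc}_{a,c} / (16*pi), with
    H^{bc}_a = (g_{ad}/sqrt(-g)) * [-g (g^{bd} g^{ce} - g^{be} g^{cd})]_{,e}."""
    n = len(coords)
    ginv, detg = g.inv(), g.det()
    H = [[[sp.S.Zero] * n for _ in range(n)] for _ in range(n)]
    for a in range(n):
        for b in range(n):
            for c in range(n):
                term = sp.S.Zero
                for d in range(n):
                    for e in range(n):
                        term += g[a, d] / sp.sqrt(-detg) * sp.diff(
                            -detg * (ginv[b, d]*ginv[c, e] - ginv[b, e]*ginv[c, d]),
                            coords[e])
                H[a][b][c] = sp.simplify(term)
    Theta = sp.zeros(n, n)
    for a in range(n):
        for b in range(n):
            Theta[a, b] = sum(sp.diff(H[a][b][c], coords[c])
                              for c in range(n)) / (16 * sp.pi)
    return sp.simplify(Theta)

t, x, y, z = sp.symbols('t x y z')
eta = sp.diag(1, -1, -1, -1)                 # flat spacetime
print(einstein_complex(eta, [t, x, y, z]))   # -> zero matrix
```

For the Erez-Rosen metric one would instead pass the transformed metric of Eq. above (with $\psi$ and $\gamma$ as explicit functions of the coordinates), at the cost of much heavier symbolic simplification.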
{ "pile_set_name": "ArXiv" }
--- abstract: 'For some estimations and predictions, we solve minimization problems with asymmetric loss functions. Usually, one estimates the regression coefficients for these problems. In this paper, we do not make such an estimation, but rather give a solution by correcting any prediction so that the prediction error follows a generalized normal distribution. In our method, we can not only minimize the expected value of the asymmetric loss, but also lower the variance of the loss.' author: - 'Naoya Yamaguchi, Yuka Yamaguchi, and Ryuei Nishii' bibliography: - 'reference.bib' title: Minimizing the expected value of the asymmetric loss and an inequality of the variance of the loss --- Introduction {#S1} ============ For some estimations and predictions, we solve minimization problems with loss functions, as follows: Let $\{ (x_{i}, y_{i}) \mid 1 \leq i \leq n \}$ be a data set, where $x_{i}$ are $1 \times p$ vectors and $y_{i} \in \mathbb{R}$. We assume that the data relate to a linear model, $$y = X \beta + \varepsilon,$$ where $y = {}^{t}(y_{1}, \ldots, y_{n})$, $\varepsilon = {}^{t}(\varepsilon_{1}, \ldots, \varepsilon_{n})$, and $X$ is the $n \times p$ matrix having $x_{i}$ as the $i$th row. Let $L$ be a loss function and let $r_{i}(\beta) := y_{i} - x_{i} \beta$. Then we estimate the value: $$\begin{aligned} \hat{\beta} := \arg\min_{\beta} \left\{ \sum_{i = 1}^{n} L(r_{i}(\beta)) \right\}. \end{aligned}$$ The case of $L(r_{i}(\beta)) = r_{i}(\beta)^{2}$ is well-known (see, e.g., Refs. [@doi:10.1111/j.1751-5823.1998.tb00406.x], [@legendre1805nouvelles], and [@stigler1981]). In the case of an asymmetric loss function, we refer the reader to, e.g., Refs. [@10.2307/2336317], [@10.2307/24303995], [@10.2307/1913643], and [@10.2307/2289234]. These studies estimate the parameter $\hat{\beta}$. In this paper, however, we do not make such an estimation, but instead give a solution to the minimization problems by correcting any prediction so that the prediction error follows a generalized normal distribution. In our method, we can not only minimize the expected value of the asymmetric loss, but also lower the variance of the loss. Let $y$ be an observation value, and let $\hat{y}$ be a predicted value of $y$. We derive the optimized predicted value $y^{*} = \hat{y} + C$ minimizing the expected value of the loss under the following assumptions: 1. The prediction error $z := \hat{y} - y$ is the realized value of a random variable $Z$, whose density function is a generalized Gaussian distribution function (see, e.g., Refs. [@Dytso2018], [@doi:10.1080/02664760500079464], and [@Sub23]) with mean zero $$\begin{aligned} f_{Z}(z) := \frac{1}{2 a b \G(a)} \exp{\left( - \left\lvert \frac{z}{b} \right\rvert^{\frac{1}{a}} \right)}, \end{aligned}$$ where $\G(a)$ is the gamma function and $a$, $b \in \mathbb{R}_{> 0}$. 2. Let $k_{1}$, $k_{2} \in \mathbb{R}_{> 0}$. If there is a mismatch between $y$ and $\hat{y}$, then we suffer a loss, $$\begin{aligned} \Pe(z) := \begin{cases} k_{1} z, & z \geq 0, \\ - k_{2} z, & z < 0. \end{cases}\end{aligned}$$ That is, the solution to the minimization problem is $$\begin{aligned} C = \arg\min_{c} \left\{ \operatorname{{E}}\left[ \Pe(Z + c) \right] \right\}. \end{aligned}$$ The motivation of our research is as follows: (1) Predictions usually cause prediction errors. Therefore, it is necessary to use predictions in consideration of prediction errors. Indeed, in some cases, it is best not to act exactly as predicted because of prediction errors. 
For example, the paper [@Yamaguchi2018] formulates a method for minimizing the expected value of the procurement cost of electricity in two popular spot markets: [*day-ahead*]{} and [*intra-day*]{}, under the assumption that the expected value of the unit prices and the distributions of the prediction errors for the electricity demand traded in the two markets are known. The paper showed that if the procurement is increased or decreased from the prediction, in some cases, the expected value of the procurement cost is reduced. (2) In recent years, prediction methods have been black-boxed by big data and machine learning (see, e.g., Ref. [@10.1145/3236009]). The day will soon come when we must minimize objective functions by using predictions obtained by such black-boxed methods. In our method, even if we do not know the prediction $\hat{y}$, we can determine the parameter $C$ if we know the prediction error distribution $f_{Z}$ and the asymmetric loss function $\Pe$. To obtain $y^{*}$, we derive $\operatorname{{E}}[\Pe(Z + c)]$ for any $c \in \mathbb{R}$. Let $\G(a, x)$ and $\g(a, x)$ be the upper and the lower incomplete gamma functions, respectively (see, e.g., Ref. [@doi:10.1142/0653]). The expected value and the variance of $\Pe(Z + c)$ are as follows: \[lem:1.1\] For any $c \in \mathbb{R}$, we have $$\begin{aligned} (1)\quad \operatorname{{E}}[\Pe(Z + c)] &= \frac{(k_{1} - k_{2}) c}{2} + \frac{(k_{1} + k_{2}) \lvert c \rvert}{2 \G(a)} \g\left(a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right) + \frac{(k_{1} + k_{2}) b}{2 \G(a)} \G\left(2a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right), \\ (2)\quad \operatorname{{V}}[\Pe(Z + c)] &= \frac{(k_{1} + k_{2})^{2} c^{2}}{4} + \frac{(k_{1}^{2} - k_{2}^{2}) b c}{2 \G(a)} \G\left(2a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right) \nonumber \\ &\quad - \frac{(k_{1} + k_{2})^{2} b \lvert c \rvert}{2 \G(a)^{2}} \g\left(a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right) \G\left(2a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right) \nonumber \\ &\quad - \frac{(k_{1} + k_{2})^{2} c^{2}}{4 \G(a)^{2}} \g\left(a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right)^{2} - \frac{(k_{1} + k_{2})^{2} b^{2}}{4 \G(a)^{2}} \G\left(2a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right)^{2} \nonumber \\ &\quad + \frac{(k_{1}^{2} + k_{2}^{2}) b^{2} \G(3a)}{2 \G(a)} + \operatorname{{sgn}}(c) \frac{(k_{1}^{2} - k_{2}^{2}) b^{2}}{2 \G(a)} \g\left(3a, \left\lvert \frac{c}{b} \right\rvert^{\frac{1}{a}} \right). \nonumber\end{aligned}$$ We write the value of $c$ satisfying $\frac{d}{dc} \operatorname{{E}}[\Pe(Z + c)] = 0$ as $C$.
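As a quick cross-check of this lemma, the optimal shift $C$ can also be located numerically; the following is our own sketch (parameter values are arbitrary), using the fact that the density $f_{Z}$ above coincides with scipy's `gennorm` with shape $\beta=1/a$ and scale $b$:

```python
import numpy as np
from scipy.stats import gennorm
from scipy.optimize import minimize_scalar

a, b = 0.5, 1.0     # density parameters: f_Z = gennorm(beta=1/a, scale=b)
k1, k2 = 1.0, 3.0   # asymmetric loss slopes

def loss(z):
    # Pe(z) = k1*z for z >= 0, and -k2*z for z < 0.
    return np.where(z >= 0.0, k1 * z, -k2 * z)

rng = np.random.default_rng(0)
Z = gennorm(beta=1.0 / a, scale=b).rvs(size=1_000_000, random_state=rng)

def expected_loss(c):
    # Monte Carlo estimate of E[Pe(Z + c)], to be compared with Lemma 1.1(1).
    return loss(Z + c).mean()

res = minimize_scalar(expected_loss, bounds=(-5 * b, 5 * b), method='bounded')
print('C ~', res.x, '  E[Pe(Z+C)] ~', res.fun)
# Since k2 > k1 (underprediction is costlier), the optimal shift C is positive.
```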
{ "pile_set_name": "ArXiv" }
Introduction ============ In the theory of random Hermitian matrices [@Guhr98] two robust types of statistics are found in the limit of infinite matrix size (denoted here as ’thermodynamic limit’). The first is the Wigner-Dyson statistics, describing systems that become ergodic in the thermodynamic limit and have an incompressible, correlated spectrum and Gaussian distributed, uncorrelated amplitudes of the corresponding eigenstates (see Fig. 1a). Since we do not consider further symmetry constraints we focus on the matrix ensembles denoted as class A in the classification of [@AZ96]. A convenient representative of its ergodic limiting ensemble is given by the Gaussian unitary matrix ensemble GUE. The second robust statistics is the Poisson statistics, with eigenstates localized on certain basis states (sites), and with a compressible, uncorrelated spectrum (see Fig. 1b). Real complex quantum systems, represented by random Hermitian matrices, can show a crossover between Wigner-Dyson and Poisson statistics or, in some cases, a true quantum phase transition with novel ’critical’ statistics. A well known example is the 3D Anderson model (see Fig. 2) describing the motion of independent electrons on a 3D lattice with random uncorrelated on-site disorder. Below a certain critical value of disorder, in the thermodynamic limit, all states at the energy band center are infinitely extended (delocalized) in space, while for larger disorder all states are spatially localized. Instead of changing the disorder, one can change the energy within the energy band, keeping the disorder fixed at low values. Again, a transition from localization (band tails) to delocalization (band center) occurs. It is worth mentioning that the average density of states (DOS) is non-critical, i.e. it stays smooth across the localization-delocalization (LD) transition. Although these statements are substantiated by analytical as well as numerical work (for reviews see [@MK93; @J98]), the special structure of this matrix ensemble (composed of a sparse, but deterministic matrix and a random diagonal matrix) has prohibited, so far, a rigorous proof of these statements. Another well known system with a transition from localized to critical states is the two-dimensional (2D) quantum Hall system (for reviews see [@H94; @JVFH94]) which we will describe briefly later. Furthermore, several matrix ensembles modeling the motion of 2D disordered electrons undergoing (time-reversal symmetric) spin-orbit interactions are known to display a LD transition (see e.g. [@MJH98]). In all of these realistic matrix ensembles the statistics at criticality represents an unstable fixed point under increasing system size (i.e. matrix dimension), which means that any slight shift away from the critical value of the energy, say, will drive the system into one of the stable matrix ensembles, Wigner-Dyson for the delocalized states and Poisson for localized states. The critical ensembles are characterized by correlated spectra, but with a finite compressibility. Furthermore, critical eigenstates are multifractal and the multifractal exponents are related to the compressibility of the spectrum (for a review see [@J98]). It is desirable to study matrix ensembles with simple construction rules and to ask for necessary ingredients in order to have a LD transition. Also in quantum chaos the interest in crossover ensembles has grown [@C99]. In that context the Rosenzweig-Porter model [@RP60] was studied as a toy-model for the crossover. 
It is defined as a simple superposition of a Poissonian and a Wigner-Dyson matrix. It has been shown rigorously that, by choosing the superposition in an appropriate way, novel critical ensembles emerge, but the spectral compressibility is identical to the Poisson ensemble and states are not multifractal (see [@JVS98] and references therein). Another well studied matrix ensemble is that of random band matrices (RBM) with uncorrelated elements. The band width $B$ describes the number of diagonals with non-vanishing elements. For $B\sim N^s$, with $s > 1/2$, one recovers the Wigner-Dyson statistics. Such band matrix models have been discussed in the context of the ’quantum kicked rotor’ problem [@I90] and have been studied extensively in a series of papers by Mirlin, Fyodorov and others (for review see [@MF94; @M99]). It turned out that, in particular for $B\gg 1$, all states are localized with a localization length (in index space) $\xi\sim B^2$. For fixed $B$ one has therefore a crossover from Wigner-Dyson to Poisson statistics as $N$ is taken from values much smaller than $B^2$ to values much larger than $B^2$, and $B^2/N$ is the relevant parameter for a scaling analysis of data. Superpositions of such random band matrices with random diagonal matrices have been studied in the context of the ’two-interacting particle’ problem (see e.g. [@Sh94; @Fr98]); however, these ensembles do not show novel critical behavior as compared to the Rosenzweig-Porter model. In fact, only a few simply designed matrix ensembles are known to become critical with multifractal critical states (see [@Mir97; @Kr98]), for example ’power law’ band matrix ensembles, where the strength of (uncorrelated) matrix elements falls off in a power law fashion in the direction perpendicular to the central diagonal. The critical cases occur for the power law behavior $\sim x^{-1}$ of the typical absolute values of matrix elements [@Mir97; @BV]. It is, however, important to notice a significant difference from realistic critical ensembles: there is no LD transition within the spectrum; if parameters are fixed to critical values, all states are critical. In this paper we study correlated random band matrix (CRBM) ensembles and, with the assistance of numerical calculations, argue that these ensembles can lead to a LD transition within the spectrum. A new parameter $C(N)\sim N^t$ describing the correlation of certain matrix elements is introduced for random band matrices. For $B(N)\sim \sqrt{N}$ states are localized outside of the energy band center and a LD transition at the energy band center occurs provided $1/2 \leq t \leq 1$. A major motivation for studying these ensembles originates from the theory of the integer quantum Hall effect [@JVFH94]. The plateau-to-plateau transition in the quantum Hall effect can be captured in models of non-interacting 2D electrons in a strong magnetic field and a random potential, referred to as quantum Hall system (QHS). In the one-band Landau representation the Hamiltonian is represented as a random matrix with two characteristic features. (i) The matrix elements decay perpendicular to the main diagonal in a Gaussian way. (ii) No correlations exist between elements on distinct ’nebendiagonal’ lines, but Gaussian correlations exist along each of the nebendiagonals. These features led to the introduction of the ’random Landau model’ (RLM) to study critical properties of QHSs (see e.g. [@H94]). 
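As an aside, the uncorrelated-RBM phenomenology quoted above ($\xi\sim B^2$, scaling in $B^2/N$) is easy to reproduce numerically; the following is our own minimal sketch (sizes arbitrary), using the inverse participation ratio (IPR) as the localization diagnostic:

```python
import numpy as np

def random_band_matrix(N, B, rng):
    """Hermitian matrix with i.i.d. entries on the diagonals |k-l| < B."""
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    mask = np.abs(np.subtract.outer(np.arange(N), np.arange(N))) < B
    A *= mask
    return (A + A.conj().T) / 2.0

def mean_ipr(H):
    # IPR = sum_i |psi_i|^4: ~1/N for extended states, ~1/xi for localized.
    _, vecs = np.linalg.eigh(H)
    return (np.abs(vecs) ** 4).sum(axis=0).mean()

rng = np.random.default_rng(1)
for N in (256, 512, 1024):
    print(N, mean_ipr(random_band_matrix(N, B=8, rng=rng)))
# For fixed B the mean IPR saturates near 1/xi ~ 1/B^2 as N grows,
# signalling localization; taking B ~ sqrt(N) keeps states extended.
```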
The original purpose of constructing the RLM was to avoid explicit calculations of matrix elements starting from a randomly chosen disorder potential and to directly generate the matrix elements as random numbers that fulfill the statistical properties (i) and (ii). As will be explained in more detail below, the corresponding CRBM model simplifies the RLM further, inasmuch as a sharp band width is introduced and correlations along nebendiagonals are idealized and cut off after a finite length. Recently, matrix ensembles with correlated matrix elements attracted some interest [@FK99] in the context of the metal-insulator experiments in 2D [@Khv94ff], for which strong Coulomb interaction is believed to be a necessary ingredient. It is very interesting that in [@Izr98] 1D models with correlated disorder potentials could, to a large extent, be solved analytically and that certain correlated disorder potentials were shown to cause LD transitions in 1D. It is not obvious how to extend the method of [@Izr98] to the case of CRBM models. Usually, correlations in matrix ensembles lead to serious complications in analytical attacks. For example, in the field theoretic treatment (see e.g. [@EfeB97]) of random matrix ensembles the absence of long ranged correlations is essential to find appropriate field degrees of freedom that depend smoothly on a single site variable. In our CRBM models correlations are introduced by constraints (a number of matrix elements are taken to be identical). This may help to reduce complications in constructing a field theoretic approach for CRBM models. In Sec. 2 we give a detailed definition of the CRBM and discuss three alternative interpretations. The investigation of the LD transition is carried out in Sec. 3 by a multifractal analysis of states for an ensemble that is expected to fall into the quantum Hall universality class. Our results are in favor of this expectation. The analysis is carried over to ensembles where correlations are taken to extreme limits in Sec. 4. In Section 5 we present our conclusions. Correlated Random Band Matrix Model =================================== Let the elements of an $N\times N$ Hermitian matrix $H$ be written as $$\begin{aligned} H_{kl} &=& x_{kl}+i y_{kl}\,,\quad l > k\,,\\ H_{kk} &=& x_{kk}\,,\end{aligned}$$ where all non-vanishing real numbers $x_{kl}, y_{kl}$ are taken from the same distribution ${\cal P}$ with vanishing mean and finite variance $\sigma^2$. We take the symmetric and uniform distribution on $[-1,1]$ ($\sigma^2
{ "pile_set_name": "ArXiv" }
--- abstract: 'Representation learning focused on disentangling the underlying factors of variation in given data has become an important area of research in machine learning. However, most of the studies in this area have relied on datasets from the computer vision domain and thus have not been readily extended to music. In this paper, we present a new symbolic music dataset that will help researchers working on disentanglement problems demonstrate the efficacy of their algorithms on diverse domains. This will also provide a means for evaluating algorithms specifically designed for music. To this end, we create a dataset comprising 2-bar monophonic melodies where each melody is the result of a unique combination of nine latent factors that span ordinal, categorical, and binary types. The dataset is large enough ($\approx$ 1.3 million data points) to train and test deep networks for disentanglement learning. In addition, we present benchmarking experiments using popular unsupervised disentanglement algorithms on this dataset and compare the results with those obtained on an image-based dataset.' bibliography: - '2020-ISMIR-dMelodies.bib' title: 'dMelodies: A Music Dataset for Disentanglement Learning' --- Introduction {#sec:intro} ============ Representation learning deals with extracting the underlying factors of variation in a given observation [@bengio_representation_2013]. Learning compact and *disentangled* representations from given data, where important factors of variation are clearly separated, is considered useful for generative modeling and for improving performance on downstream tasks (such as speech recognition, speech synthesis, vision and language generation [@hsu2017unsupervised; @hsu2019disentangling; @kexin2018neural]). Disentangled representations allow a greater degree of interpretability and controllability, especially for content generation, be it language, speech, or music. In the context of Music Information Retrieval (MIR) and generative music models, learning some form of disentangled representation has been the central idea for a wide variety of tasks such as genre transfer [@brunner_midi-vae_2018], rhythm transfer [@yang2019deep; @jiang2020transformer], timbre synthesis [@luo2019learning], instrument rearrangement [@hung2019musical], manipulating musical attributes [@hadjeres_glsr-vae_2017; @pati19latent-reg], and learning music similarity [@lee2020disentangled]. Consequently, there exists a large body of research in the machine learning community focused on developing algorithms for learning disentangled representations. These span unsupervised [@higgins_beta-vae_2017; @chen_isolating_2018; @kim_disentangling_2018; @kumar_variational_2017], semi-supervised [@kingma2014semi; @siddharth2017learning; @Locatello2020Disentangling] and supervised [@lample_fader_2017; @hadjeres_glsr-vae_2017; @kulkarni_deep_2015; @donahue_semantically_2018] methods. However, a vast majority of these algorithms are designed, developed, tested, and evaluated using data from the image or computer vision domain. The availability of standard image-based datasets such as dSprites [@matthey_dsprites_2017], 3D-Shapes [@burges_3d-shapes_2020], and 3D-Chairs [@aubry_seeing_2014] among others has fostered disentanglement studies in vision. 
Additionally, having well-defined factors of variation (for instance, size and orientation in dSprites [@matthey_dsprites_2017], pitch and elevation in Cars3D [@reed_deep_2015]) has allowed systematic studies and easy comparison of different algorithms. However, this restricted focus on a single domain raises concerns about the generalization of these methods [@locatello_challenging_2019] and prevents easy adoption into other domains such as music. Research on disentanglement learning in music has often been application-oriented with researchers using their own problem-specific datasets. The factors of variation have also been chosen accordingly. To the best of our knowledge, there is no standard dataset for disentanglement learning in music. This has prevented systematic research on understanding disentanglement in the context of music. In this paper, we introduce *dMelodies*, a new dataset of monophonic melodies, specifically intended for disentanglement studies. The dataset is created algorithmically and is based on a simple and yet diverse set of independent latent factors spanning ordinal, categorical and binary attributes. The full dataset contains $\approx 1.3$ million data points which matches the scale of image datasets and should be sufficient to train deep networks. We consider this dataset as the primary contribution of this paper. In addition, we also conduct benchmarking experiments using three popular unsupervised methods for disentanglement learning and present a comparison of the results with the dSprites dataset [@matthey_dsprites_2017]. Our experiments show that disentanglement learning methods do not directly translate between the image and music domains and having a music-focused dataset will be extremely useful to ascertain the generalizability of such methods. The dataset is available online[^1] along with the code to reproduce our benchmarking experiments.[^2] Motivation {#sec:motivation} ========== In representation learning, given an observation $\mathbf{x}$, the task is to learn a representation $r(\mathbf{x})$ which “makes it easier to extract useful information when building classifiers or other predictors” [@bengio_representation_2013]. The fundamental assumption is that any high-dimensional observation $\mathbf{x} \in \mathcal{X}$ (where $\mathcal{X}$ is the data-space) can be decomposed into a semantically meaningful low dimensional latent variable $\mathbf{z} \in \mathcal{Z}$ (where $\mathcal{Z}$ is referred to as the latent space). Given a large number of observations in $\mathcal{X}$, the task of disentanglement learning is to estimate this low dimensional latent space $\mathcal{Z}$ by separating out the distinct factors of variation [@bengio_representation_2013]. An ideal disentanglement method ensures that changes to a single underlying factor of variation in the data changes only a single factor in its representation [@locatello_challenging_2019]. From a generative modeling perspective, it is also important to learn the mapping from $\mathcal{Z}$ to $\mathcal{X}$ to enable better control over the generative process. Lack of diversity in disentanglement learning --------------------------------------------- Most state-of-the-art methods for unsupervised disentanglement learning are based on the Variational Auto-Encoder (VAE) [@kingma_auto-encoding_2014] framework. The key idea behind these methods is that factorizing the latent representation to have an aggregated posterior should lead to better disentanglement [@locatello_challenging_2019]. 
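To make this idea concrete, here is a minimal sketch (ours, not from the paper) of the $\beta$-VAE objective that instantiates it: a weight $\beta>1$ on the KL term pressures the approximate posterior toward the factorized prior, the mechanism credited with encouraging disentanglement:

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    """Negative ELBO with a beta-weighted KL term (Higgins et al., 2017)."""
    # Reconstruction term (Gaussian decoder assumption).
    recon = F.mse_loss(x_recon, x, reduction='sum')
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), in closed form.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # beta > 1 strengthens the pull toward the factorized isotropic prior.
    return recon + beta * kl
```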
This is achieved using different means, e.g., imposing constraints on the information capacity of the latent space [@higgins_beta-vae_2017; @burgess_understanding_2018; @rubenstein_learning_2018], maximizing the mutual information between a subset of the latent code and the observations [@chen_infogan_2016], and maximizing the independence between the latent variables [@chen_isolating_2018; @kim_disentangling_2018]. However, unsupervised methods for disentanglement learning are sensitive to inductive biases (such as network architectures, hyperparameters, and random seeds) and consequently there is a need to properly evaluate such methods by using datasets from diverse domains [@locatello_challenging_2019]. Apart from unsupervised methods for disentanglement learning, there has also been some research on semi-supervised [@siddharth2017learning; @Locatello2020Disentangling] and supervised [@kulkarni_deep_2015; @lample_fader_2017; @connor2019representing; @engel_latent_2017] learning techniques to manipulate specific attributes in the context of generative models. In these paradigms, a labeled loss is used in addition to the unsupervised loss. Available labels can be utilized in various ways. They can help with disentangling known factors (e.g., digit class in MNIST) from latent factors (e.g., handwriting style) [@bouchacourt_multi-level_2018], or with supervising specific latent dimensions to map to specific attributes [@hadjeres_glsr-vae_2017]. However, most of these approaches are evaluated using image domain datasets. Tremendous interest from the machine learning community has led to the creation of benchmarking datasets (albeit image-based) specifically targeted towards disentanglement learning such as dSprites [@matthey_dsprites_2017], 3D-Shapes [@burges_3d-shapes_2020], 3D-chairs [@aubry_seeing_2014], MPI3D [@gondal2019transfer], most of which are artificially generated and have simple factors of variation. While one can argue that artificial datasets do not reflect real-world scenarios, the relative simplicity of these datasets is often desirable since they enable rapid prototyping. Lack of consistency in music-based studies ------------------------------------------ Representation learning has also been explored in the field of MIR. Much like images, learning better representations has been shown to work well for MIR tasks such as composer classification [@bretan15learning; @gururani2019comparison], music tagging [@choi2017transfer], and audio-to-
{ "pile_set_name": "ArXiv" }
--- author: - 'Hiroshi <span style="font-variant:small-caps;">Kunitomo</span>[^1]' title: | Space-time supersymmetry in\ WZW-like open superstring field theory --- Introduction ============ Construction of a complete action including both the Neveu-Schwarz (NS) sector representing space-time bosons and the Ramond sector representing space-time fermions is a long-standing problem in superstring field theory. While the action for the NS sector was constructed based on two different formulations, the WZW-like formulation[@Berkovits:1995ab] and the homotopy-algebra-based formulation,[@Erler:2013xta] it had been difficult to incorporate the Ramond sector in a Lorentz-covariant way. Only recently, however, a complete action has been constructed for the WZW-like formulation[@Kunitomo:2015usa], and soon afterwards for the homotopy-algebra-based formulation.[@Erler:2016ybs] Interestingly enough, in these complete actions, the string field in each sector appears quite asymmetrically. In the WZW-like formulation, for example, the string field $\Phi$ in the NS sector is in the large Hilbert space, characterizing the WZW-like formulation, but the string field $\Psi$ in the Ramond sector is in the restricted small Hilbert space defined using the picture-changing operators. Then the question is how space-time supersymmetry is realized between these two apparently asymmetric sectors. The purpose of this paper is to answer this question by explicitly constructing the space-time supersymmetry transformation in the WZW-like formulation.[^2] In the first quantized formulation, space-time supersymmetry is generated by the supercharge obtained by using the covariant fermion emission vertex,[@Friedan:1985ge] which interchanges each physical state in the NS sector with that in the Ramond sector. Therefore, it is natural to expect first that the space-time supersymmetry transformation in superstring field theory is realized as a linear transformation using this first-quantized supercharge.[@Witten:1986qs] We will see, however, that this expectation is true only for the free theory, while the action including the interaction terms is not invariant under this linear transformation. We modify it so as to be a symmetry of the complete action, and then verify whether the constructed nonlinear transformation satisfies the supersymmetry algebra. We find that the supersymmetry algebra holds, up to the equations of motion and gauge transformation, except for an extra nonlinear transformation. It is shown, however, that this extra transformation can also be absorbed into the gauge transformation up to the equations of motion at the linearized level. Under the assumption that the asymptotic condition holds also for the string field theory, this implies, at least perturbatively, that the constructed transformation acts as space-time supersymmetry on the physical states defined by the asymptotic string fields. This guarantees that supersymmetry is realized on the physical S-matrix.[^3] The rest of the paper is organized as follows. In section 2, we summarize the known results on the complete action for the WZW-like open superstring field theory. In addition, restricting the background to the flat space-time, we introduce the GSO projection operator, which is essential to make the physical spectrum supersymmetric. For later use, some basic ingredients, such as the Maurer-Cartan equations and the covariant derivatives, are extended to those based on general derivations of the string product which can be noncommutative. 
After this preparation, the space-time supersymmetry transformation is constructed in section 3. Using the first-quantized supercharge, a linear transformation is first defined so as to be consistent with the restriction in the Ramond sector. Since this transformation is only a symmetry of the free theory, we then construct the nonlinear transformation perturbatively by requiring it to keep the complete action invariant. Based on some lower-order results, we postulate the full nonlinear transformation $\delta_{\mathcal{S}}$ in closed form, and prove that it is actually a symmetry of the action. In section 4, the commutator of two transformations is calculated explicitly. We show that it provides the space-time translation $\delta_p$, up to the equations of motion and gauge transformation, except for a nonlinear transformation $\delta_{\tilde{p}}$ that can be absorbed into the gauge transformation only at the linearized level. Thus the supersymmetry algebra holds only on the physical states, and hence the physical S-matrix, defined by the asymptotic string fields under appropriate assumptions on asymptotic properties of the string fields. Although this extra symmetry is unphysical in this sense, it is nontrivial in the total Hilbert space including unphysical degrees of freedom. It produces further unphysical symmetries by taking commutators with supersymmetries or themselves successively. We have a sequence of unphysical symmetries corresponding to the first-quantized charges obtained by taking successive commutators of the supercharge and the unconventional translation charge with picture number $p=-1$. Section 5 is devoted to a summary and discussion, and two appendices are added. In Appendix A, we summarize the conventions for the $SO(1,9)$ spinor and the Ramond ground states, which are needed to identify the physical spectrum although they do not appear in this paper explicitly. The triviality of the extra transformation in the Ramond sector, which remains to be shown, is given in Appendix B. Further nonlinear transformations obtained by taking the commutator of two unphysical transformations, $[\delta_{\tilde{p}_1},\delta_{\tilde{p}_2}]$, are also discussed. All the extra symmetries obtained by taking commutators with $\delta_{\mathcal{S}}$ or $\delta_{\tilde{p}}$ repeatedly are shown to be unphysical. Complete gauge-invariant action =============================== On the basis of the Ramond-Neveu-Schwarz (RNS) formulation of superstring theory, an open superstring field is a state in the conformal field theory (CFT) consisting of the matter sector, the reparametrization ghost sector, and the superconformal ghost sector. We assume in this paper that the background space-time is ten-dimensional Minkowski space, for which the matter sector is described by string coordinates $X^\mu(z)$ and their partners $\psi^\mu(z)$ $(\mu=0,1,\cdots,9)$. The reparametrization ghost sector and superconformal ghost sector are described by a fermion pair $(b(z),c(z))$ and a boson pair $(\beta(z),\gamma(z))$, respectively. The superconformal ghost sector has another description by a fermion pair ($\xi(z)$, $\eta(z)$) and a chiral boson $\phi(z)$ [@Friedan:1985ge]. The two descriptions are related through the bosonization relation: $$\beta(z)\ =\ \partial\xi(z) e^{-\phi(z)}\,,\qquad \gamma(z)\ =\ e^{\phi(z)} \eta(z)\,.$$ The Hilbert space for the $\beta\gamma$ system is called the small Hilbert space and that for the $\xi\eta\phi$ system is called the large Hilbert space. 
The theory has two sectors depending on the boundary condition on the world-sheet fermions $\psi^\mu$, $\beta$, and $\gamma$. The sector in which the world-sheet fermion obeys an antiperiodic boundary condition is known as the Neveu-Schwarz (NS) sector, and describes the space-time bosons. The other sector in which the world-sheet fermion obeys a periodic boundary condition is known as the Ramond (R) sector, and describes the space-time fermions. We can obtain the space-time supersymmetric theory by suitably combining two sectors[@Gliozzi:1976qd]. String fields and constraints ----------------------------- In the WZW-like open superstring field theory, we use the string field $\Phi$ in the large Hilbert space for the NS sector. It is Grassmann even, and has ghost number 0 and picture number 0. Here we further impose the BRST-invariant GSO projection[^4] $$\Phi\ =\ \frac{1}{2}(1+(-1)^{G_{NS}})\, \Phi\,, $$ where $G_{NS}$ is defined by $$\begin{aligned} G_{NS}\ =&\ \sum_{r>0}(\psi^\mu_{-r}\psi_{r\mu}-\gamma_{-r}\beta_r+\beta_{-r}\gamma_r) - 1 \nonumber\\ \equiv&\ \sum_{r>0}\psi^\mu_{-r}\psi_{r\mu} + p_\phi\qquad (\textrm{mod}\ 2)\,,\end{aligned}$$ with $p_\phi=-\oint\frac{dz}{2\pi i}\partial\phi(z)$. This is necessary to remove the tachyon and makes the spectrum supersymmetric[@Gliozzi:1976qd]. For the Ramond sector, we use the string field $\Psi$ constrained on the restricted small Hilbert space satisfying the conditions[@Kunitomo:2015usa] $$\eta\Psi\ =\ 0\,,\qquad XY\Psi\ =\ \Psi\,, \label{R constraints}$$ where $X$ and $Y$ are the picture-changing operator and its inverse acting on the states in the small Hilbert space with picture numbers $-3/2$ and $-1/2$, respectively. They are defined by $$X\ =\ -\delta(\beta_0)G_0 + \delta'(\beta_0)b_0\,,\qquad Y\ =\ -c_0\delta'(\gamma_0)\,, \label{PCO}$$and satisfy $$XYX\ =\ X\,,\qquad YXY\ =\ Y\,
{ "pile_set_name": "ArXiv" }
--- abstract: 'The near-infrared spectral region is becoming a very useful wavelength range to detect and quantify the stellar population of galaxies. Models are being developed to predict the contribution of TP-AGB stars, which should dominate the NIR spectra of populations 0.3 to 2 Gyr old. When present in a given stellar population, these stars leave unique signatures that can be used to detect them unambiguously. However, these models have to be tested in a homogeneous database of star-forming galaxies, to check if the results are consistent with what is found from different wavelength ranges. In this work we performed stellar population synthesis on the nuclear and extended regions of 23 star-forming galaxies to understand how the star-formation tracers in the near-infrared can be used in practice. The stellar population synthesis shows that for the galaxies with strong emission in the NIR, there is an important fraction of young/intermediate-age population contributing to the spectra, which is probably the ionisation source in these galaxies. Galaxies that had no emission lines measured in the NIR were found to have older average ages and a smaller contribution from young populations. Although the stellar population synthesis method proved to be very effective in finding the young ionising population in these galaxies, no clear correlation between these results and the NIR spectral indexes was found. Thus, we believe that, in practice, the use of these indexes is still very limited due to observational limitations.' author: - | Lucimara P. Martins$^{1}$[^1], Alberto Rodríguez-Ardila$^2$, Suzi Diniz$^{1,3}$, Rogério Riffel$^{3}$ and Ronaldo de Souza$^{4}$\ $^{1}$NAT - Universidade Cruzeiro do Sul, Rua Galvao Bueno, 868, São Paulo, SP, Brazil\ $^{2}$Laboratório Nacional de Astrofísica/MCT, Rua dos Estados Unidos 154, CEP 37501-064. Itajubá, MG, Brazil\ $^{3}$Universidade Federal do Rio Grande do Sul - IF, Departamento de Astronomia, CP 15051, 91501-970, Porto Alegre, RS, Brasil\ $^{4}$Instituto Astronômico e Geofísico - USP, Rua do Matão, 1226, São Paulo, SP date: 'Accepted ? December ? Received ? December ?; in original form ? October ?' title: 'Spectral Synthesis of Star-forming Galaxies in the Near-Infrared' --- \[firstpage\] Stars: AGB and post-AGB, Galaxies: starburst, Galaxies: stellar content, Infrared: galaxies Introduction ============ The integrated spectrum of galaxies is sensitive to the mass, age, metallicity, dust and star formation history of their dominant stellar populations. Disentangling these stellar populations is important for understanding their formation and evolution, and the enhancement of star formation in the universe. Star formation tracers in the optical region are nowadays considerably well known and studied, and have been a fundamental tool to identify star formation in galaxies [@kennicutt88; @kennicutt92; @worthey+97; @balogh+97; @gu+06]. However, the use of this knowledge is not always possible in the case of very dusty galaxies or due to the presence of a luminous AGN. Because of these setbacks, tracers in other wavelength regions have been sought. In this sense, the near-infrared region (NIR hereafter) offers an alternative to tackle this problem. It conveys specific information that adds important constraints to stellar population studies. Except for extreme cases such as ultraluminous IRAS galaxies (Goldader et al. 1995, Lançon et al. 1996), the dominant continuum source is still stellar. 
The $K$-band light of stellar populations with ages between 0.3 and 2 Gyr is dominated by a single component, namely the thermally pulsing stars on the asymptotic giant branch (TP-AGB) [@maraston05; @marigo+08]. For populations with age larger than 3 Gyr the NIR light is dominated by stars on the red giant branch (RGB) [@origlia+93]. Their contribution stays approximately constant over large time scales [@maraston05]. By isolating the signature of these stellar evolutionary phases, one expects to gain a better understanding of the properties of the integrated stellar populations. This knowledge is of paramount importance, for example, in the study of high redshift galaxies, when most of the star formation occurred. Population synthesis models are beginning to account for these stars in a fully consistent way. As a result, they predict prominent molecular bandheads in the NIR. The spectral features of highest relevance to extragalactic studies are the ones located redward of 1 $\mu$m, which should be detectable in the integrated spectra of populations a few times 10$^8$ years old. The detection of these bandheads would be a safe indication of the presence of the TP-AGB stars. The most massive of these TP-AGB stars can be very luminous in the NIR, exceeding the luminosity of the tip of the red giant branch by several magnitudes (Melbourne et al. 2012). Models that neglect TP-AGB stars have been shown to overestimate the masses of distant galaxies by factors of two or more in comparison to models that include them [@ilbert+10]. Models from @maraston05 show that the combination of metallic indexes using these bands can quantify age, metallicity and even separate populations with single bursts from the ones with continuous star formation. However, for nearby objects, some of these bands are located in, or very close to, strong telluric absorption bands, rendering their predictions useless or strongly dependent on the S/N or observing conditions of the spectra. As soon as good quality single stellar population (SSP) models in the NIR became available, the stellar population synthesis emerged as a powerful tool to study galaxies of many different types. For example, by fitting combinations of stellar population models of various ages and metallicities @riffel+08 studied the stellar populations of the inner few hundred parsecs of starburst galaxies in the NIR. The observed spectra were best explained by stellar populations containing a sizable amount (20-56% by mass) of $\sim$1 Gyr old stars in the thermally pulsing asymptotic giant branch phase. @riffel+09 applied spectral synthesis to study the differences between the stellar populations of Seyfert 1 (Sy1) and 2 (Sy2) galaxies in the NIR. They found that the central few hundred parsecs of the studied galaxies contain a substantial fraction of intermediate-age SPs with a mean metallicity near solar. They also found that the contribution of the featureless continuum and young components tends to be higher in Sy 1 than in Sy 2. @martins+10 also used stellar population synthesis to investigate the NIR extended spectra of NGC 1068. They found an important contribution of a young stellar population at $\sim$ 100 pc south of the nucleus, which might be associated with regions where the jet encounters dense clouds, possibly inducing star formation. 
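The fitting step common to the studies just cited can be sketched as a non-negative decomposition of an observed spectrum into SSP templates. This toy version is ours (templates and spectrum are synthetic placeholders); real synthesis codes additionally fit extinction and kinematic broadening:

```python
import numpy as np
from scipy.optimize import nnls

# templates: (n_wavelengths, n_ssp) matrix of SSP spectra of various
# ages/metallicities; obs: observed flux on the same wavelength grid.
rng = np.random.default_rng(2)
n_lambda, n_ssp = 500, 6
templates = np.abs(rng.normal(1.0, 0.3, size=(n_lambda, n_ssp)))
true_weights = np.array([0.5, 0.0, 0.3, 0.0, 0.2, 0.0])  # population vector
obs = templates @ true_weights + rng.normal(0.0, 0.01, size=n_lambda)

# Non-negative least squares recovers the light-fraction population vector.
weights, residual = nnls(templates, obs)
light_fractions = weights / weights.sum()
print(np.round(light_fractions, 2))  # ~ true_weights
```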
However, if, on the one hand, we now have sophisticated models that predict the spectra of integrated populations in the NIR, on the other hand few attempts have been made to fit them to observations in a consistent way, in order to calibrate or test these predictions. Much of the work in the NIR has focused on unusual objects with either active galactic nuclei [@larkin+98; @alonso-herrero+00; @ivanov+2000; @reunanen+02; @reunanen+03; @riffel+09; @martins+10; @riffel+10; @riffel+11; @storchi-bergmann+12] or very strong star formation [@goldader+97; @burston+01; @dannerbauer+05; @engelbracht+98; @vanzi+97; @coziol+01; @reunanen+07; @riffel+08]. @kotilainen+12 recently published NIR long-slit spectroscopy for a sample of nearby inactive spiral galaxies to study the composition of their NIR stellar populations. With these galaxies they created NIR HK-band template spectra for low-redshift spiral galaxies along the Hubble sequence. They found a dependency between the strength of the absorption lines and the luminosity and/or temperature of the stars, implying that NIR spectral indices can be used to trace the stellar population of galaxies. Moreover, evolved red stars completely dominate the NIR spectra of their sample, meaning that the contribution from hot young stars is insignificant in this spectral region, although such ages play an important role in other spectral regions [@riffelRogerio+11]. However, to identify and quantify tracers of star formation in the NIR, we need galaxies known to have a significant fraction of star formation. With this in mind, we used the NIR spectral sample of star-forming galaxies of @martins+13, which are known to have star formation from their optical observations. Our objective is to test the predictions of stellar population models in this wavelength range and verify the diagnostics that can, in practice, be used as proxies in stellar population studies. In order to do this we fit the underlying continuum between 0.8 and 2.4 $\micron$ with the stellar population synthesis technique, using the same method described in @riffel+09 and @martins+10. In §2 we present the details of our observations and reduction process; in §3 we briefly describe the stellar population synthesis method; in §4 we present our results and discussions
{ "pile_set_name": "ArXiv" }
--- abstract: 'We study the quantum effects induced by bulk scalar fields in a model with a de Sitter (dS) brane in a flat bulk (the Vilenkin-Ipser-Sikivie model) in more than four dimensions. In ordinary dS space, it is well known that the stress tensor in the dS invariant vacuum for an effectively massless scalar ($m_{{\rm eff}}^2=m^2+\xi {\cal R}=0$ with ${\cal R}$ the Ricci scalar) is infrared divergent except for the minimally coupled case. The usual procedure to tame this divergence is to replace the dS invariant vacuum by the Allen-Folacci (AF) vacuum. The resulting stress tensor breaks dS symmetry but is regular. Similarly, in the brane world context, we find that the dS invariant vacuum generates ${{\langle}T_{\mu\nu}{\rangle}}$ divergent everywhere when the lowest lying mode becomes massless, except for the massless minimally coupled case. A simple extension of the AF vacuum to the present case avoids this global divergence, but ${{\langle}T_{\mu\nu}{\rangle}}$ remains divergent along a timelike axis in the bulk. In this case, singularities also appear along the light cone emanating from the origin in the bulk, although they are so mild that ${{\langle}T_{\mu\nu}{\rangle}}$ stays finite except for non-minimal coupling cases in four or six dimensions. We discuss implications of these results for bulk inflaton models. We also study the evolution of the field perturbations in the dS brane world. We find that perturbations grow linearly with time on the brane, as in the case of ordinary dS space. In the bulk, they are asymptotically bounded.' address: - '$^1$ Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto 606-8502, Japan' - '$^2$ Department of Physics, Kyoto University, Kyoto 606-8502, Japan' author: - 'Oriol Pujol[à]{}s$^1$[^1] and Takahiro Tanaka$^{2}$[^2]' title: | Massless scalar fields and infrared divergences\ in the inflationary brane world --- Introduction ============ The brane world (BW) scenario [@rsI; @rsII] has been intensively studied in recent years. Little is known yet concerning the quantum effects from bulk fields in cosmological models [@wade; @nojiri; @kks; @hs]. Quite generically, one expects that local quantities like ${{\langle}T_{\mu\nu}{\rangle}}$ or ${{\langle}\phi^2 {\rangle}}$ can be large close to the branes, due to the well known divergences appearing in Casimir energy density computations. This has been confirmed for example in [@knapman; @romeosaharian] for flat branes. These divergences are of ultraviolet (UV) nature and do not contribute to the force. Hence, they are ignored in Casimir force computations. However, they are relevant to the BW scenario since they may induce large backreaction, and are worth investigating. In this article, we shall shed light on another aspect of objects like ${{\langle}T_{\mu\nu}{\rangle}}$ in BW. We shall point out that they can suffer from infrared (IR) divergences as well. These divergences arise when there is a zero mode in the spectrum of bulk fields in brane models of RSII type with a dS brane[@rsII; @gasa]. The situation is analogous to the case in dS space without a brane. It is well known that light scalars in dS develop an IR divergence in the dS invariant vacuum. The main purpose of this article is to explore the effects of scalar fields with light modes in a BW cosmological setup of the RSII type [@rsII]. 
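For orientation, recall the standard four-dimensional result behind this statement (quoted here for reference under the stated small-mass assumption, not derived in this paper): in the dS invariant vacuum, a light scalar with $m_{{\rm eff}}\ll H$ acquires $$\langle\phi^2\rangle \simeq \frac{3H^4}{8\pi^2 m_{{\rm eff}}^2}\,,$$ which exhibits the $1/m_{{\rm eff}}^2$ infrared divergence discussed below as $m_{{\rm eff}}\to 0$.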
Considering the massless limit of a scalar field in an inflating BW is especially well motivated in the context of ‘bulk inflaton’ models [@kks; @hs; @shs; @hts; @koyama], in which the dynamics of a bulk scalar drives inflation on the brane. In the simplest realizations, the brane geometry is close to dS and the bulk scalar is nearly massless. Let us recall what happens in the usual dS case [@bida]. For light scalars $m_{{\rm eff}}\ll H$ (with $H$ the Hubble constant) in dS, ${\langle}\phi^2{\rangle}$ and ${{\langle}T_{\mu\nu}{\rangle}}$ in the dS invariant vacuum develop a global IR divergence $\sim1/m_{{\rm eff}}^2$. To be precise, this depends on whether the field is minimally coupled or not. What we have in mind is a generic situation in which the effective mass $m_{{\rm eff}}^2=m^2+\xi {\cal R}$ is small, and $\xi\neq0$. In these cases ${{\langle}T_{\mu\nu}{\rangle}}$ diverges as mentioned. The point is that in the generic massless limit, another vacuum must be chosen to avoid the global IR divergence. This process breaks dS invariance [@af], but this shall not really bother us. The simplest choice is the Allen-Folacci (AF) vacuum, in which the stress tensor is globally finite and everywhere regular. The massless minimally coupled case is special [@gaki], and it admits a different treatment which gives finite ${{\langle}T_{\mu\nu}{\rangle}}$ without violating dS invariance. In the BW scenario [@rsII], the bulk scalar is decomposed into a continuum of KK modes and bound states. Here we consider the case in which there is a unique bound state with mass $m_d$. If $m_d$ is light, ${\langle}\phi^2{\rangle}$ and ${{\langle}T_{\mu\nu}{\rangle}}$ for the dS invariant vacuum will also diverge like $1/m_d^2$. In this case, again, one will be forced to take another vacuum state like the AF vacuum. A natural question is then what the behavior of the stress tensor in such a vacuum is in the BW. Also, one might expect singularities on the light cone emanating from the center (the fixed point under the action of the dS group) if we recall that the field perturbations for a massless scalar in dS grow like ${{\langle}\phi^2 {\rangle}}\sim\chi$, where $\chi$ is the proper time in dS [@vilenkinford; @linde; @starobinsky; @vilenkin] (see also [@hawkingmoss]). The light cone in the RSII model corresponds to $\chi\to\infty$. Before we start our discussion, we should mention the previous calculation given in Ref. [@xavi]. In that paper the stress tensor for a massless minimally coupled scalar was obtained in four dimensions, in the context of open inflation. Montes showed that ${{\langle}T_{\mu\nu}{\rangle}}$ can be regular everywhere except on the bubble. As we will see, these properties hold as well in other dimensions, but only for massless minimally coupled fields. For simplicity, we consider one extremal case of the RSII model [@rsII] in which the bulk curvature and hence the bulk cosmological constant is negligible. We take into account the gravitational field of the brane by imposing Israel’s matching conditions. The resulting spacetime can be constructed by the ‘cut-and-paste’ technique. Imposing mirror symmetry, one cuts the interior of a dS brane in Minkowski and pastes it to a copy of itself (see Fig. \[fig:mink\]). Such a model was introduced in the context of bubble nucleation by Vilenkin [@v] and by Ipser and Sikivie [@is], and we shall refer to it as ‘the VIS model’. This article is organized as follows. 
In Section \[sec:vis\], we describe the VIS model and introduce a bulk scalar field with generic bulk and brane couplings. The Green’s function is obtained first for the case when the bound state is massive, $m_d>0$. The form of ${{\langle}T_{\mu\nu}{\rangle}}$ in the limit $m_d\to0$ is also obtained. In Section \[sec:massless\], we consider an exactly massless bound state $m_d=0$, and we present the divergences of the AF vacuum. The case when the bulk mass vanishes is technically simpler and explicit expressions for ${{\langle}T_{\mu\nu}{\rangle}}$ can be obtained. This is done in Section \[sec:M=0\]. With this, we describe the evolution of the field perturbations in Section \[sec:pert\], and conclude in Section \[sec:concl\]. $$\begin{array}{ccc} \includegraphics[width=5cm]{minkowski4.eps} &\qquad &\includegraphics[width=3.7cm]{VIS3.eps} \nonumber\\[-5mm] {\rm (a)}&\qquad&{\rm (b)} \end{array}$$ Scalar fields in the VIS Model {#sec:vis} ============================== In this Section we consider a generic scalar field propagating in the VIS model, describing a gravitating brane in an otherwise flat space [@v; @is]. Specifically, the space time consists of two copies of the interior of the brane glued at the brane location, as illustrated in Fig. \[fig:mink\]. In the usual Minkowski spherical coordinates the metric is $ds^2=-d{T}^2+d{R}^2+{R}^2 d\Omega_{(n)}^2$, where $d\Omega_{(n)}^2$ stands
{ "pile_set_name": "ArXiv" }
--- abstract: 'We propose a novel reinforcement learning algorithm, QD-RL, that incorporates the strengths of off-policy RL algorithms into Quality Diversity (QD) approaches. Quality-Diversity methods contribute structural biases by decoupling the search for diversity from the search for high return, resulting in efficient management of the exploration-exploitation trade-off. However, these approaches generally suffer from sample inefficiency as they call upon evolutionary techniques. QD-RL removes this limitation by relying on off-policy RL algorithms. More precisely, we train a population of off-policy deep RL agents to simultaneously maximize diversity inside the population and the return of the agents. QD-RL selects agents from the diversity-return Pareto front, resulting in stable and efficient population updates. Our experiments show that QD-RL can solve challenging exploration and control problems with deceptive rewards while being more than 15 times more sample efficient than its evolutionary counterparts.' author: | Geoffrey Cideron\ InstaDeep\ `g.cideron@instadeep.com`\ Thomas Pierrot\ InstaDeep\ `t.pierrot@instadeep.com`\ Nicolas Perrin\ CNRS, Sorbonne Université\ `perrin@isir.upmc.fr`\ Karim Beguir\ InstaDeep\ `kb@instadeep.com`\ Olivier Sigaud\ Sorbonne Université\ `olivier.sigaud@upmc.fr`\ bibliography: - 'ms.bib' title: 'QD-RL: Efficient Mixing of Quality and Diversity in Reinforcement Learning' --- Introduction ============ Despite outstanding successes in specific domains such as games [@silver2017mastering; @jaderberg2019human] and robotics [@tobin2018domain; @akkaya2019solving], Reinforcement Learning (RL) algorithms are still far from being immediately applicable to complex sequential decision problems. Among the issues, a remaining burden is the need to find the right balance between exploitation and exploration. On one hand, algorithms which do not explore enough can easily get stuck in poor local optima. On the other hand, exploring too much hinders sample efficiency and can even prevent users from applying RL to large real world problems. Dealing with this exploration-exploitation trade-off has been the focus of many RL papers [@tang2016exploration; @bellemare2016unifying; @fortunato2017noisy; @plappert2017parameter]. Among other things, having a population of agents working in parallel in the same environment is now a common recipe to stabilize learning and improve exploration, as these parallel agents collect a more diverse set of samples. This has led to two approaches, namely [*distributed RL*]{} where the agents are the same and [*population-based training*]{}, where diversity between agents further favors exploration [@jung2020population; @parker2020effective]. However, such methods certainly do not make the most efficient use of available computational resources, as the agents may collect highly redundant information. Besides, the focus on sparse or deceptive reward problems led to the realization that looking for diversity independently from maximizing rewards might be a good exploration strategy [@lehman2011abandoning; @eysenbach2018diversity; @colas2018gep]. More recently, it was established that if one can define a [*behavior space*]{} or [*outcome space*]{} corresponding to the smaller space that matters to decide if a behavior is successful or not, maximizing diversity in this space might be the optimal strategy to find the sparse reward source [@doncieux2019novelty]. 
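To make the outcome-space idea concrete, here is a small self-contained sketch (ours, not taken from the paper) of a k-nearest-neighbor novelty score in outcome space — the diversity measure underlying novelty search — together with the non-dominated (Pareto) filter over (novelty, return) pairs that QD-style selection, described next, relies on:

```python
import numpy as np

def novelty_scores(outcomes, k=5):
    """Mean distance to the k nearest neighbors in outcome space."""
    d = np.linalg.norm(outcomes[:, None, :] - outcomes[None, :, :], axis=-1)
    d_sorted = np.sort(d, axis=1)[:, 1:k + 1]  # drop the zero self-distance
    return d_sorted.mean(axis=1)

def pareto_front(novelty, returns):
    """Indices of agents not dominated in the (novelty, return) plane."""
    keep = []
    for i in range(len(novelty)):
        dominated = np.any((novelty >= novelty[i]) & (returns >= returns[i])
                           & ((novelty > novelty[i]) | (returns > returns[i])))
        if not dominated:
            keep.append(i)
    return np.array(keep)

rng = np.random.default_rng(3)
outcomes = rng.normal(size=(32, 2))  # e.g. final (x, y) position of each agent
returns = rng.normal(size=32)        # episodic return of each agent
print('selected agents:', pareto_front(novelty_scores(outcomes), returns))
```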
When the reward signal is not sparse though, one can do better than just looking for diversity. Trying to simultaneously maximize diversity and rewards has been formalized into the Quality-Diversity (QD) framework [@pugh2016quality; @cully2017quality]. The corresponding algorithms try to populate the outcome space as widely as possible with an [*archive*]{} of past solutions which are both diverse and reward efficient. To do so, they generally rely on evolutionary algorithms. Selecting diverse and reward efficient solutions is then performed using the Pareto front of the *diversity* $\times$ *reward efficiency* landscape, or by populating a grid of outcome cells with reward efficient solutions, as in the algorithm of [@mouret2015illuminating]. In principle, the QD approach offers a great way to deal with the exploration-exploitation trade-off, as it simultaneously ensures pressure towards both wide covering of the outcome space and high return efficiency. However, these methods suffer from relying on evolutionary methods. Though they have been shown to be competitive with deep RL approaches provided enough computational power [@salimans2017evolution; @colas2020scaling], they do not take advantage of the gradient’s analytical form, and thus have to sample to estimate gradients, resulting in far worse sample efficiency than their deep RL counterparts [@sigaud2019policy]. On the other hand, deep RL methods which leverage policy gradients have far better sample efficiency, but they struggle on problems that require strong exploration and are sensitive to poorly conditioned reward signals such as deceptive rewards [@colas2018gep]. This is in part because they explore in the action space, the state-action space or the policy space rather than in an outcome space. In this work, we combine the general QD framework with policy gradient methods and capitalize on the strengths of both approaches. Our algorithm explores in an outcome space and thus can solve problems that simultaneously require complex exploration and high dimensional control capabilities. We investigate the properties of QD-RL by first controlling a low dimensional agent in a maze, and then addressing a larger benchmark. We compare QD-RL to several recent algorithms which also combine a diversity objective and a return maximization method, namely the family of methods which mix evolution strategies with novelty search [@conti2017improving] and the algorithm of [@colas2020scaling], which maintains a diverse and high performing population. The latter has been shown to scale well enough to also address large benchmarks, but we show that QD-RL is several orders of magnitude more sample efficient than these competitors.

Related Work {#sec:related}
============

We consider the general context of a fully observable Markov Decision Problem (MDP) $\left( \mathcal{S}, \mathcal{A}, \mathcal{T}, \mathcal{R}, \gamma \right)$ where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $\mathcal{T}: \mathcal{S} \times \mathcal{A} \rightarrow \mathcal{S}$ is the transition function, $\mathcal{R}: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ is the reward function and $\gamma$ is a discount factor. The exploration-exploitation trade-off being central in RL, the search for efficient exploration methods is ubiquitous in the domain. We focus on the relationship between our work and two families of methods: those which introduce explicit diversity into a multi-actor deep RL approach, and those which combine distinct mechanisms for exploration and exploitation.
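To make the selection mechanism concrete, the following is a minimal sketch of extracting a diversity-return Pareto front from a population; the scalar diversity and return scores are assumed to have been measured already, and the random values below merely stand in for them.

```python
import numpy as np

def pareto_front(diversity, returns):
    """Indices of agents not dominated on both diversity and return."""
    n = len(diversity)
    front = []
    for i in range(n):
        dominated = any(
            diversity[j] >= diversity[i] and returns[j] >= returns[i]
            and (diversity[j] > diversity[i] or returns[j] > returns[i])
            for j in range(n) if j != i
        )
        if not dominated:
            front.append(i)
    return front

rng = np.random.default_rng(0)
diversity, returns = rng.random(16), rng.random(16)   # stand-in scores
print("agents kept for the population update:", pareto_front(diversity, returns))
```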
#### Diversity in multi-actor RL

Managing several actors is now a well established method to improve wall clock time and stabilize learning [@jaderberg2017population]. But including an explicit diversity criterion is a more recent trend. The algorithm of [@doan2019attraction] uses a combination of attraction and repulsion mechanisms between good agents and poor agents to ensure diversity in a population of agents trained in parallel. It shows improved performance in large continuous action benchmarks and sparse reward variants. But diversity is defined in the space of policy performance, so the drive towards novel behaviors could be strengthened. The algorithm of [@jung2020population] is an instance of population-based training where the parameters of the best actor are softly distilled into the rest of the population. To prevent the whole population from collapsing into a single agent, a simple diversity criterion is enforced so as to maintain a minimum distance between all agents. It shows good performance over a large set of continuous action benchmarks, including “delayed” variants where the reward is obtained only every $K$ time steps. However, the diversity criterion it uses is far from guaranteeing efficient exploration of the outcome space, particularly in the absence of reward, and it seems that the algorithm mostly benefits from the higher stability of population-based training. In contrast, the algorithm of [@parker2020effective] proposes a population-wide diversity criterion which consists in maximizing the volume spanned by the parameters of the agents in a latent space. This criterion better limits redundancy between the considered agents. Like our work, all these methods use a population of deep RL agents and explicitly look for diversity among these agents. However, none of them addresses deceptive reward environments such as the mazes we consider in our work. Furthermore, none of them clearly separates the two components nor searches for diversity in the outcome space as QD-RL does.

#### Separated exploration and exploitation mechanisms

One extreme case of the separation between exploration and exploitation is “exploration-only” methods. The efficiency of this approach was first put forward within the evolutionary optimization literature [@lehman2011abandoning; @doncieux2019novelty] and then imported into the reinforcement learning literature with works such as [@eysenbach2018diversity], which gave rise to several recent follow-ups [@pong2019skew; @lee2019efficient; @islam2019marginalized]. These methods have proven useful in the sparse reward case, but they are inherently limited when some reward signal can be used and maximized during exploration. A second approach is sequential combination. Similarly to us, the algorithm of [@colas2018gep] combines a diversity seeking component with a return maximization component, applying them in sequence.
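As an illustration of the volume-based criterion just mentioned, one can score a population by the determinant of a kernel matrix over agent embeddings; the RBF kernel and the toy embeddings below are assumptions for the sketch, not the cited paper's exact construction.

```python
import numpy as np

def diversity_volume(embeddings, bandwidth=1.0):
    """Determinant of an RBF kernel matrix over agent embeddings;
    values near 1 mean well-spread agents, near 0 a collapsed population."""
    sq = np.sum((embeddings[:, None, :] - embeddings[None, :, :]) ** 2, axis=-1)
    return np.linalg.det(np.exp(-sq / (2 * bandwidth ** 2)))

rng = np.random.default_rng(1)
spread = rng.normal(size=(5, 8))                      # dissimilar agents
collapsed = np.tile(rng.normal(size=(1, 8)), (5, 1))  # identical agents
print(diversity_volume(spread), diversity_volume(collapsed))  # high vs ~0
```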
{ "pile_set_name": "ArXiv" }
--- abstract: 'An improved version of the “optical bar” intracavity readout scheme for gravitational-wave antennae is considered. We propose to call this scheme “optical lever” because it can provide a significant gain in the signal displacement of the local mirror, similar to the gain which can be obtained using an ordinary mechanical lever with unequal arms. In this scheme the displacement of the local mirror can be close to the signal displacement of the end mirrors of a hypothetical gravitational-wave antenna with arm lengths equal to the half-wavelength of the gravitational wave.' author: - | F.Ya.Khalili[^1]\ [*Dept. of Physics, Moscow State University*]{},\ [*Moscow 199899, Russia*]{} title: ' The “optical lever” intracavity readout scheme for gravitational-wave antennae ' ---

Introduction
============

All contemporary large-scale gravitational-wave antennae are based on a common principle: they convert the phase shift of the optical pumping field into intensity modulation of the output light beam, which is registered by a photodetector [@Abramovici1992]. This principle allows one to obtain the sensitivity necessary to detect gravitational waves from astrophysical sources. However, its use in the next generations of gravitational-wave antennae, where substantially higher sensitivity is required, encounters serious problems. An excessively high value of the optical pumping power, which also depends sharply on the required sensitivity, is likely to be the most important one. For example, at stage II of the LIGO project the light power circulating in the interferometer arms will be increased to about 1 MWatt, in comparison with about 10 KWatt being currently used [@WhitePaper1999]. In particular, such high values of the optical power can produce undesirable non-linear effects in the large-scale Fabry-Perot cavities [@BSV_Instab2001]. This dependence of pumping power on sensitivity can be explained easily using the Heisenberg uncertainty relation. Indeed, in order to detect a displacement $\Delta x$ of a test mass $M$ it is necessary to allow a perturbation of its momentum $ \Delta p \ge \hbar/2\Delta x$. The only source of this perturbation in the interferometric gravitational-wave antennae is the uncertainty of the optical pumping energy $\Delta\mathcal{E}$. Hence, the following condition has to be fulfilled: $\Delta\mathcal{E}\propto (\Delta x)^{-1}$. If the pumping field is in a coherent quantum state then $\Delta\mathcal{E}\propto\sqrt\mathcal{E}$, and therefore $\mathcal{E}\propto (\Delta x)^{-2}$. Rigorous analysis (see [@Amaldi1999]) shows that the pumping energy stored in the interferometer has to be larger than $$\label{E_SQL} \mathcal{E} = \frac{ML^2\Omega^2\Delta\Omega}{4\omega_p\xi^2}\,,$$ where $\Omega$ is the signal frequency, $\Delta\Omega<\Omega$ is the bandwidth where the necessary sensitivity is provided, $\omega_p$ is the pumping frequency, $L=c\tau$ is the length of the interferometer arms, and $\xi<1$ is the ratio of the amplitude of the signal which can be detected to the amplitude corresponding to the Standard Quantum Limit. This problem can be alleviated by using an optical pumping field in a squeezed quantum state [@Caves1981], but it cannot be solved completely, because only modest values of the squeezing factor have been obtained experimentally yet. Estimates show that the use of squeezed states allows one to decrease $\xi$ by a factor of $\simeq 3$ for the same value of the pumping energy (see [@KLMTV2002]), and the energy still remains proportional to $\xi^{-2}$.
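For orientation, Eq. (\[E\_SQL\]) can be evaluated numerically; the parameter values below are illustrative assumptions roughly corresponding to a LIGO-scale antenna, not figures taken from the cited references.

```python
import math

M = 10.0                        # kg, test mass (assumed)
L = 4e3                         # m, arm length (assumed)
Omega = 2 * math.pi * 100.0     # rad/s, signal frequency (assumed ~100 Hz)
dOmega = Omega / 2              # rad/s, detection bandwidth (assumed)
omega_p = 2 * math.pi * 3e14    # rad/s, pumping frequency (~1 um light)
xi = 1.0                        # sensitivity at the Standard Quantum Limit

# E = M L^2 Omega^2 dOmega / (4 omega_p xi^2)
E = M * L**2 * Omega**2 * dOmega / (4 * omega_p * xi**2)
print(f"required stored pumping energy: {E:.1f} J; scales as 1/xi^2")
```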
In the article [@NonLin1996] a new principle of [*intracavity*]{} readout for gravitational-wave antennae was considered. It was proposed to directly register the redistribution of the optical field [*inside*]{} the optical cavities using a Quantum Non-Demolition (QND) measurement, instead of monitoring the output light beam. The main advantage of such a measurement is that in this case a non-classical optical field is created by the measurement process automatically. Therefore, the sensitivity of these schemes does not depend directly on the circulating power and can be improved by increasing the precision of the intracavity measurement device. The only fundamental limitation in this case is the condition $$\frac{\Delta x}L \gtrsim \frac{\Omega}{\omega_p N}\,,$$ where $N$ is the number of optical quanta in the antenna. In the articles [@OptBar1997; @SymPhot1998] two possible realizations of this principle were proposed and analyzed. Both of them are based on the pondermotive QND measurement of the optical energy proposed in the article [@JETP1977]. In these schemes the displacement of the end mirrors of the gravitational-wave antenna caused by the gravitational wave produces a redistribution of the optical energy between the two arms of the interferometer. This redistribution, in its turn, produces a variation of the electromagnetic pressure on some additional local mirror (or mirrors). This variation can be detected by a measurement device which monitors the position of the local mirror(s) relative to a reference mass placed outside the pumping field (for example, a small-scale optical interferometric meter can be used as such a device). The optical pumping field works here as a passive medium which transfers the signal displacement of the end mirrors to the displacement of the local one(s) and, at the same time, transfers the perturbation of the local mirror(s) due to the measurement back to the end mirrors. In this article we consider an improved version of the “optical bar” scheme considered in the article [@OptBar1997]. We propose to call this scheme “optical lever” because it can provide a gain in the displacement of the local mirror similar to the gain which can be obtained using an ordinary mechanical lever with unequal arms. This scheme is discussed in section \[sec:OptLeverL\]. In section \[sec:OptLeverX\] we analyse an instability which can exist in both the “optical bar” and “optical lever” schemes (namely, in the so-called X-topologies of these schemes) and which was not mentioned in the article [@OptBar1997]. We suppose in this article for simplicity that all optical elements of the scheme are ideal. This means that the reflectivities of the end mirrors are equal to unity, and all internal elements have no losses. We presume that the optical energy has been pumped into the interferometer using a very small transparency of some of the end mirrors, and that at the time scale of the gravitational-wave signal duration the scheme operates as a conservative one. It was shown in the article [@OptBar1997] that losses in the optical elements limit the sensitivity at the level $$\xi \gtrsim \frac1{\sqrt{\Omega\tau_\mathrm{opt}^*}}$$ where $\tau_\mathrm{opt}^*$ is the optical relaxation time. Taking into account that the value of $\tau_\mathrm{opt}^*$ can be as high as $\gtrsim 1\,\mathrm{s}$, one can conclude that the optical losses do not affect the sensitivity if $\xi\gtrsim 10^{-1}$.
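The loss limit quoted above is easy to check numerically; the signal frequency here is an assumption.

```python
import math

Omega = 2 * math.pi * 100.0   # rad/s, signal frequency (assumed ~100 Hz)
tau_opt = 1.0                 # s, optical relaxation time quoted in the text

xi_min = 1.0 / math.sqrt(Omega * tau_opt)
print(f"xi >~ {xi_min:.3f}")  # ~0.04, safely below the 1e-1 threshold
```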
The optical lever {#sec:OptLeverL}
=================

\[Figure \[fig:OptLeverL\]: schematic of the L-topology of the “optical lever” scheme, showing the two arms with their end mirrors, the displacements $x$ and $y$, and the local position meter.\]

One of the possible “optical lever” scheme topologies (the L-topology) is presented in Fig.\[fig:OptLeverL\] (another variant, the X-topology, is considered in the next section). It differs from the L-topology of the “optical bar” scheme [@OptBar1997] by two additional mirrors only. These mirrors, together with the end mirrors, form two Fabry-Perot cavities with the same initial lengths $L=c\tau$, coupled by means of the central mirror with small transmittance $T_C$. Exactly
{ "pile_set_name": "ArXiv" }
--- abstract: 'In this paper we continue the development of quantum holonomy theory, which is a candidate for a fundamental theory based on gauge fields and non-commutative geometry. The theory is built around the $\mathbf{QHD}(M)$ algebra, which is generated by parallel transports along flows of vector fields and translation operators on an underlying configuration space of connections, and involves a semi-finite spectral triple with an infinite-dimensional Bott-Dirac operator. Previously we have proven that the square of the Bott-Dirac operator gives the free Hamilton operator of a Yang-Mills theory coupled to a fermionic sector in a flat and local limit. In this paper we show that the Hilbert space representation, which forms the backbone of this construction, can be extended to include many-particle states.' ---

On the Fermionic Sector of Quantum Holonomy Theory

Johannes <span style="font-variant:small-caps;">Aastrup</span>$^{a}$[^1] & Jesper Møller <span style="font-variant:small-caps;">Grimstrup</span>$^{b}$[^2]\
$^{a}\,$*Mathematisches Institut, Universität Hannover,\ Welfengarten 1, D-30167 Hannover, Germany.*\
$^{b}\,$*QHT Gruppen, Copenhagen, Denmark.*\
[*This work is financially supported by Ilyas Khan,\ St. Edmund's College, Cambridge, United Kingdom and by\ Tegnestuen Haukohl & Køppen, Copenhagen, Denmark.*]{}

Introduction
============

In this paper we continue the development of [Quantum Holonomy Theory]{}, which is a candidate for a fundamental theory based on gauge fields and formulated within the framework of non-commutative geometry and spectral triples. The basic idea in Quantum Holonomy Theory is to start with an algebra that encodes the canonical commutation relations of a gauge theory in an integrated and non-local fashion. The algebra in question is called the quantum holonomy-diffeomorphisms algebra, denoted $\mathbf{QHD}(M)$, which was first presented in [@Aastrup:2014ppa] and which is generated by parallel transports along flows of vector fields and by translation operators on an underlying configuration space of gauge connections. In [@Aastrup:2015gba] it was demonstrated that this algebra encodes the canonical commutation relations of a gauge theory. Once the $\mathbf{QHD}(M)$ algebra has been identified, the question arises whether it has non-trivial Hilbert space representations. This question was answered in the affirmative in [@Aastrup:2017vrm], where we proved that separable and strongly continuous Hilbert space representations of the $\mathbf{QHD}(M)$ algebra exist in any dimension. A key feature of these Hilbert space representations is that they are non-local. They are labelled by a scale $\tau$, which we tentatively interpret as the Planck scale and which essentially serves as a UV-regulator by suppressing modes in the ultra-violet. This UV-suppression does not break any spatial symmetries, i.e. these representations are isometric. In [@Aastrup:2017atr] we constructed an infinite-dimensional Bott-Dirac operator that interacts with an algebra generated by holonomy-diffeomorphisms alone, denoted by $\mathbf{HD}(M)$, and proved that this Bott-Dirac operator together with the aforementioned Hilbert space representation forms a semi-finite spectral triple over a configuration space of connections.
In that paper we also demonstrated that the square of the Bott-Dirac operator coincides in a local and flat limit with the free Hamilton operator of a gauge field coupled to a fermionic sector, a result which opens the door to an interpretation of quantum holonomy theory in terms of a quantum field theory on a curved background. In this paper we continue the analysis of these Hilbert space representations. One feature of the Bott-Dirac operator is that it naturally introduces the CAR algebra into the construction via an infinite-dimensional Clifford algebra. This CAR algebra has a natural interpretation in terms of a fermionic sector due to the aforementioned result that the square of the Bott-Dirac operator includes the Hamilton operator of a free fermion. One drawback of the Hilbert space representation constructed in [@Aastrup:2017vrm] is that it only involves what amounts to one-particle states. In other words, the Hilbert space representation does not act on the CAR algebra itself. In this paper we construct such a Hilbert space representation of the $\mathbf{QHD}(M)$ algebra. The result that such a representation exists solidifies the interpretation that quantum holonomy theory should be understood as a quantum theory of gauge fields coupled to fermions.\ This paper is organised as follows: We begin by introducing the $\mathbf{HD}(M)$ and $\mathbf{QHD}(M)$ algebras in section 2 and the infinite-dimensional Bott-Dirac operator in section 3. We then review the Hilbert space representation constructed in [@Aastrup:2017vrm] in section 4. Finally we construct in section 5 a new Hilbert space representation where the $\mathbf{QHD}(M)$ algebra acts on the Fock space. We end with a discussion in section 6.

The $\mathbf{HD}(M)$ and $\mathbf{QHD}(M)$ algebras {#sektion2}
===================================================

In this section we introduce the algebras $\mathbf{HD}(M)$ and $\mathbf{QHD}(M)$, which are generated by parallel transports along flows of vector fields and, for the latter, also by translation operators on an underlying configuration space of connections. The $\mathbf{HD}(M)$ algebra was first defined in [@Aastrup:2012vq; @AGnew] and the $\mathbf{QHD}(M)$ algebra in [@Aastrup:2014ppa]. In the following we shall define these algebras in a local and a global version.\ Let $M$ be a compact manifold and let $\ca$ be a configuration space of gauge connections that take values in the Lie-algebra of a compact gauge group $G$. A holonomy-diffeomorphism $e^X\in \mathbf{HD}(M)$ is a parallel transport along the flow $t\to \exp_t(X)$ of a vector field $X$. To see how this works we first let $\gamma$ be the path $$\gamma (t)=\exp_{t} (X) (m)$$ running from $m$ to $m'=\exp_1 (X)(m)$. Given a connection $\nabla$ that takes values in an $n$-dimensional representation of the Lie-algebra $\mathfrak{g}$ of $G$ we then define a map $$e^X_\nabla :L^2 (M )\otimes \mathbb{C}^n \to L^2 (M )\otimes \mathbb{C}^n$$ via the holonomy along the flow of $X$ $$(e^X_\nabla \xi )(m')= \hbox{Hol}(\gamma, \nabla) \xi (m) , \label{chopin1}$$ where $\xi\in L^2(M,\mathbb{C}^n)$ and where $\hbox{Hol}(\gamma, \nabla)$ denotes the holonomy of $\nabla$ along $\gamma$. This map gives rise to an operator valued function on the configuration space $\ca$ of $G$-connections via $$\ca \ni \nabla \to e^X_\nabla , {\nonumber}$$ which we denote by $e^X$ and which we call a holonomy-diffeomorphism[^3]. For a function $f\in C^\infty (M)$ we get another operator valued function $fe^X$ on $\ca$.
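A discretized illustration of the holonomy map in Eq. (\[chopin1\]) may help: the parallel transport along a flow line is approximated by a path-ordered product of matrix exponentials. The su(2)-valued connection and the flow line below are arbitrary assumptions chosen only to make the sketch runnable (and SciPy is assumed to be available).

```python
import numpy as np
from scipy.linalg import expm

def holonomy(A, path, dt):
    """Path-ordered product approximating Hol(gamma, nabla) along a
    discretized flow line; later points act on the left."""
    H = np.eye(2, dtype=complex)
    for k in range(len(path) - 1):
        v = (path[k + 1] - path[k]) / dt      # tangent of the flow
        H = expm(dt * A(path[k], v)) @ H
    return H

sigma3 = np.array([[1, 0], [0, -1]], dtype=complex)
A = lambda x, v: 1j * float(x @ v) * sigma3   # toy su(2)-valued connection

t = np.linspace(0.0, 1.0, 200)
path = np.stack([np.cos(t), np.sin(t)], axis=1)  # integral curve of a vector field
print(holonomy(A, path, t[1] - t[0]))
```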
We call the algebra generated by all holonomy-diffeomorphisms $e^X$ the [*global*]{} holonomy-diffeomorphism algebra, denoted by $\mathbf{HD}_{\mbox{\tiny g}}(M)$, and we call the algebra generated by all holonomy-diffeomorphisms $f e^X$ the [*local*]{} holonomy-diffeomorphism algebra, denoted simply by $\mathbf{HD}(M)$.\ Furthermore, a $\mathfrak{g}$-valued one-form $\oo$ induces a transformation on $\ca$ and therefore an operator $U_\omega $ on functions on $\ca$ via $$U_\omega (\xi )(\nabla) = \xi (\nabla - \omega) ,$$ which gives us the quantum holonomy-diffeomorphism algebras, denoted either by $\mathbf{QHD}_{\mbox{\tiny g}}(M)$, which is the algebra generated by $\mathbf{HD}_{\mbox{\tiny g}}(M)$ and all the $U_\oo$ operators, or by $\mathbf{QHD}(M)$, which is the algebra generated by $\mathbf{HD}(M)$ and all the $U_\oo$ operators (see also [@Aastrup:2014ppa]).

An infinite-dimensional Bott-Dirac operator {#Bott}
===========================================

In this section we introduce an infinite-dimensional Bott-Dirac operator that acts in a Hilbert space which shall later play a key role in defining a representation of the $\mathbf{QHD}(M)$ algebras. The following formulation of an infinite-dimensional Bott-Dirac operator is due to Higson and Kasparov [@Higson] (see also [@Aastrup:2017atr]).\ Let $\ch_n= L^2(\mathbb{R}^n)$, where the measure is given by the flat metric, and consider the embedding $$\varphi_n : \ch_n\rightarrow\ch_{n+1}$$ given by $$\varphi_n(\eta)(x_1,x_2,\ldots, x_{n+1}) = \eta(x_1,x_2,\ldots, x_n)\,\psi(x_{n+1}),$$ where $\psi$ is a fixed unit-norm Gaussian in the last variable (the harmonic-oscillator ground state).
{ "pile_set_name": "ArXiv" }
--- abstract: 'We first prove a one-to-one correspondence between finding Hamiltonian cycles in cubic planar graphs and finding trees with specific properties in dual graphs. Using this information, we construct an exact algorithm for finding Hamiltonian cycles in cubic planar graphs. The worst case time complexity of our algorithm is O$(2^n)$.' author: - | **Bohao Yao\ **Charl Ras, Hamid Mokhtar\ **The University of Melbourne****** title: '**An algorithm for finding Hamiltonian Cycles in Cubic Planar Graphs**' ---

Introduction
============

A [*Hamiltonian cycle*]{} is a cycle which passes through every vertex in a graph exactly once. A [*planar graph*]{} is a graph which can be drawn in the plane such that no edges intersect one another. A [*cubic graph*]{} is a graph in which all vertices have degree 3. Finding a Hamiltonian cycle in a cubic planar graph is proven to be an $\mathcal{NP}$-Complete problem [@garey1976planar]. This implies that unless $\mathcal{P}=\mathcal{NP}$, we cannot find an efficient algorithm for this problem. Most approaches to finding a Hamiltonian cycle in a planar graph utilise the [*divide-and-conquer*]{} method, or its derivative, the [*separator theorem*]{}, which partitions the graph in polynomial time [@lipton1979separator]. Exact algorithms using such methods were found to have a complexity of O$(c^{\sqrt{n}})$ [@klinz2006exact] [@dorn2005efficient], where [*n*]{} denotes the number of vertices and [*c*]{} is a constant. In this paper, we consider only cubic planar graphs and attempt to find a new algorithm to provide researchers with a new method of approaching this problem.

The Expansion Algorithm
=======================

We first start by introducing our so-called [*Expansion Algorithm*]{}, which increases the number of vertices in a cycle at each iteration. A cycle can first be found by taking the outer facial cycle of the planar graph. We define it as the [*base cycle*]{}, $\sigma_0$. This base cycle is then expanded by the Expansion Algorithm, which will be described in detail later.

\[def1\] Consider a planar graph $G=(V,E)$. A *complementary path*, $P_e^\sigma$, is a path between 2 adjacent vertices, $v_1, v_2 \in \sigma$, connected by the edge $e$, s.t. $P_e^\sigma$ is internally disjoint from $\sigma$. Furthermore, $P_e^\sigma$ and $e$ together will form the boundary of a face in $G$.

Assuming we are not dealing with multigraphs, the complementary path will always have at least one other vertex besides $v_1, v_2$. The restriction that $P_e^\sigma$ and $e$ have to form the boundary of a face will be used later to prove Corollary \[cor1\].

\[def2\] Let $G_1=(V_1,E_1)$ and $G_2=(V_2,E_2)$. Then, $G_1+G_2 := (V_1 \cup V_2,E_1 \cup E_2)$.

\[alg1\] [**Let**]{} $\sigma_0$ be the outer facial cycle.\ [**Let**]{} $i=0$

At each iteration, the algorithm removes an edge $e$ and adds a path $P_e^\sigma$. Since there are no internal vertices on $e$ and there is at least 1 vertex on $P_e^\sigma$, the number of vertices on the cycle will always increase at each iteration. Since there is only a finite number of vertices in the graph, the algorithm will have to terminate eventually.

![Example of utilizing the Expansion Algorithm with the base cycle in blue](Expansion.eps)

The *interior* of a cycle, $C$, is the connected region lying to the left of an anticlockwise orientation of $C$.

\[lem2\] At each iteration of Algorithm \[alg1\], all the vertices of $G$ either lie in the interior of $\sigma$, or on $\sigma$.
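The following is a minimal sketch of the expansion step of Algorithm \[alg1\] on a concrete cubic planar graph (the triangular prism), assuming the interior faces are given as edge sets; the disjointness test stands in for the uniqueness property established in Corollary \[cor1\] below.

```python
def edge(a, b):
    return frozenset((a, b))

def expand(cycle, e, faces):
    """One Expansion Algorithm step: remove e from the cycle and splice
    in the complementary path P_e, i.e. the rest of the boundary of the
    interior face of e whose remaining edges avoid the current cycle."""
    face = next(f for f in faces if e in f and not (f - {e}) & cycle)
    return (cycle - {e}) | (face - {e})

# Triangular prism: outer triangle 0-1-2, inner triangle 3-4-5.
faces = [
    {edge(0, 1), edge(1, 4), edge(4, 3), edge(3, 0)},
    {edge(1, 2), edge(2, 5), edge(5, 4), edge(4, 1)},
    {edge(0, 2), edge(2, 5), edge(5, 3), edge(3, 0)},
    {edge(3, 4), edge(4, 5), edge(3, 5)},
]
sigma = {edge(0, 1), edge(1, 2), edge(0, 2)}    # base cycle sigma_0
sigma = expand(sigma, edge(0, 1), faces)        # splice in path 0-3-4-1
sigma = expand(sigma, edge(3, 4), faces)        # splice in path 3-5-4
print(sorted(tuple(sorted(e)) for e in sigma))  # a Hamiltonian cycle
```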
Let $P_i$ be the statement that all the vertices of $G$ either lie in the interior of $\sigma_i$ or on $\sigma_i$.

- $P_0$ is true, as $\sigma_0$ is the outer facial cycle, with all the vertices lying either in the interior of $\sigma_0$ or on $\sigma_0$.

- Assume $P_k$ is true. Then all the vertices either lie in the interior of $\sigma_k$ or on $\sigma_k$. Assume $\exists P_{e_k}^{\sigma_k}$; then $\sigma_{k+1}$ exists. Let $f_k$ be the face bounded by $P_{e_k}^{\sigma_k}$ and $e_k$. The interior of $f_k$ originally lies inside $\sigma_k$. But after an iteration of Algorithm \[alg1\], the interior of $f_k$ lies outside the new base cycle, $\sigma_{k+1}$. However, since there are no vertices in the interior of a face, all the vertices of $G$ still lie either in the interior of $\sigma_{k+1}$ or on $\sigma_{k+1}$. Therefore, if $\sigma_{k+1}$ exists, then $P_k \Rightarrow P_{k+1}$.

Hence, by mathematical induction, $P_i$ is true $\forall i$ as long as $\sigma_i$ exists.

\[cor1\] If $P_e^\sigma$ exists for an edge $e \in \sigma$, then $P_e^\sigma$ is unique.

By Lemma \[lem2\], none of the vertices can lie outside $\sigma$; thus, $P_e^\sigma$ also lies in the interior of $\sigma$. Any edge $e$ lies on the boundary of 2 faces. If $e \in \sigma$, then $\exists$ only 1 possible $P_e^\sigma$ such that $e$ and $P_e^\sigma$ form the boundary of a face that lies in the interior of $\sigma$ (the other face that $e$ bounds lies outside $\sigma$).

\[lem1\] If $G$ has a Hamiltonian cycle, then $\exists$ a choice of complementary paths that Algorithm \[alg1\] can use to find that Hamiltonian cycle.

Consider a Hamiltonian cycle, $C$, in $G$. If $C = \sigma_0$ then the case is trivial. If $C \ne \sigma_0$, then by Lemma \[lem2\], all vertices in $C$ lie either on or in the interior of $\sigma_0$, since $C$ contains all the vertices of the graph. Since $C$ contains all the vertices and $\sigma_0$ is a cycle and therefore must contain at least 3 vertices, $C \cap \sigma_0 \ne \emptyset$. Suppose $C \ne \sigma_0$. Let $v_1,v_2,...,v_n$ be consecutive vertices on $\sigma_0$ in the clockwise direction. Let $P_i$ be the path connecting $v_i,v_{i+1}$ that is a subpath of $C$. Let the edge between $v_i$ and $v_{i+1}$ be $e_i$, which also lies on $\sigma_0$. Since $C$ contains all the vertices in the graph, by iteratively finding $P_e^\sigma$, starting with $e = e_i$, more of the subpath $P_i$ will lie on $\sigma$ at each iteration. The algorithm will only move on when $P_i \subset \sigma$. Repeating this process $\forall i$, we will eventually end up with $\sigma = C$, at which point the algorithm will terminate.

![Illustration of the proof for Lemma \[lem1\] with $C$ in red](Graph1.eps)

The Problem in the Dual Graph
=============================

Given a cubic planar graph, $G$, and the corresponding dual graph, $\overline{G}=(\overline{V},\overline{E})$:

A *corresponding face*, $f_{\overline{v}} \in G$, is the face corresponding to the vertex $\overline{v} \in \overline{G}$.

The *outer vertex*, $\overline{v}^*$, is the vertex in $\overline{G}$ that corresponds to the outer face of $G$.

Let $e \in G$ be the shared boundary between $f_{\overline{v}_1},f_{\overline{v}_2}$. A *dual edge*, $\overline{e} \in \overline{G}$, is defined as an edge between $\overline{v}_1, \overline{v}_2 \in \overline{G}$.

![Example of $\overline{e}$ and $\overline{v}^*$](v.eps)

Consider Algorithm \[alg1\]. Since $P_e^\sigma$ is unique for an edge $e$ by Corollary \[cor1\], let $e_0,e_1,...,e_n$ be the edges chosen in the [Expansion Algorithm]{}, in order.
Let $\overline{e}_0,\overline{e}_1,...,\overline{e}_n$ be the corresponding dual edges in $\overline{G}$. ![Expansion Algorithm in the Dual Graph with $\overline{T}$ in Purple](Expansion_Dual.eps) \[lem3\] $\
{ "pile_set_name": "ArXiv" }
--- abstract: 'Earth’s climate, mantle, and core interact over geologic timescales. Climate influences whether plate tectonics can take place on a planet, with cool climates being favorable for plate tectonics because they enhance stresses in the lithosphere, suppress plate boundary annealing, and promote hydration and weakening of the lithosphere. Plate tectonics plays a vital role in the long-term carbon cycle, which helps to maintain a temperate climate. Plate tectonics provides long-term cooling of the core, which is vital for generating a magnetic field, and the magnetic field is capable of shielding atmospheric volatiles from the solar wind. Coupling between climate, mantle, and core can potentially explain the divergent evolution of Earth and Venus. As Venus lies too close to the sun for liquid water to exist, there is no long-term carbon cycle and thus an extremely hot climate. Therefore plate tectonics cannot operate and a long-lived core dynamo cannot be sustained due to insufficient core cooling. On planets within the habitable zone where liquid water is possible, a wide range of evolutionary scenarios can take place depending on initial atmospheric composition, bulk volatile content, or the timing of when plate tectonics initiates, among other factors. Many of these evolutionary trajectories would render the planet uninhabitable. However, there is still significant uncertainty over the nature of the coupling between climate, mantle, and core. Future work is needed to constrain potential evolutionary scenarios and the likelihood of an Earth-like evolution.' author: - 'Bradford J. Foley, Peter E. Driscoll' title: 'Whole planet coupling between climate, mantle, and core: Implications for the evolution of rocky planets ' --- Introduction ============ Overview -------- Recent discoveries have revealed that rocky exoplanets are relatively common [@Batalha2014]. As a consequence, determining the factors necessary for a rocky planet to support life, especially life that may be remotely observable, has become an increasingly important topic. A major requirement, that has been extensively studied, is that solar luminosity must be neither too high nor too low for liquid water to be stable on a planet’s surface; this requirement leads to the concept of the “habitable zone," the range of orbital distances where liquid water is possible [@Hart1978; @Hart1979; @Kasting1993; @Franck2000; @Kopp2014]. Inward of the habitable zone’s inner edge, the critical solar flux that triggers a runaway greenhouse effect is exceeded. The critical flux is typically estimated at $\approx 300 $ W m$^{-2}$, with variations of $\sim 10-100$ W m$^{-2}$ possible due to atmospheric composition, planet size, or surface water inventory [@Ingersoll1969; @Kasting1988; @Nakajima1992; @Abe2011; @Goldblatt2013]. In a runaway greenhouse state liquid water can not condense out of the atmosphere, so any water present would exist as steam. Furthermore a runaway greenhouse is thought to cause rapid water loss to space, and can thus leave a planet desiccated [@Kasting1988; @Hamano2013; @Wordsworth2013]. Beyond the outer edge insolation levels are so low that no amount of CO$_2$ can keep surface temperatures above freezing [@Kasting1993]. However, lying within the habitable zone does not guarantee that surface conditions will be suitable for life. 
Variations in atmospheric CO$_2$ content can lead to cold climates where a planet is globally glaciated, or hot climates where surface temperatures are higher than any known life can tolerate (i.e. above $\approx 400$ K [@Takai2008]). A hot CO$_2$ greenhouse can also cause rapid water loss to space [@Kasting1988] (though [@Wordsworth2013] argues against this), or even a steam atmosphere if surface temperatures exceed water’s critical temperature of 647 K. Moreover, the solar wind can strip the atmosphere of water and expose the surface to harmful radiation, unless a magnetic field is present to shield the planet [e.g. @Kasting2003; @griessmeier2009; @brain2014]. Atmospheric CO$_2$ concentrations are regulated by the long-term carbon cycle on Earth, such that surface temperatures have remained temperate throughout geologic time [e.g. @walker1981; @Berner2004]. The long-term carbon cycle is facilitated by plate tectonics [e.g. @Kasting2003]. Furthermore the magnetic field is maintained by convection in Earth’s liquid iron outer core (i.e. the geodynamo). As a result, interior processes, namely the operation of plate tectonics and the geodynamo, are vital for habitability. However, whether plate tectonics or a strong magnetic field is likely on rocky planets, especially those in the habitable zone where liquid water is possible, is unclear. The four rocky planets of our solar system have taken dramatically different evolutionary paths, with only Earth developing into a habitable planet possessing liquid water oceans, plate tectonics, and a strong, internally generated magnetic field. In particular the contrast between Earth and Venus, which is approximately the same size as Earth and has a similar composition yet lacks plate tectonics, a magnetic field, and a temperate climate, is striking. In this review we synthesize recent work to highlight that plate tectonics, climate, and the geodynamo are coupled, and that this “whole planet coupling" between surface and interior places new constraints on whether plate tectonics, temperate climates, and magnetic fields can develop on a rocky planet. We hypothesize that whole planet coupling can potentially explain the Earth-Venus dichotomy, as it allows two otherwise similar planets to undergo drastically different evolutions, due solely to one lying inward of the habitable zone’s inner edge and the other within the habitable zone. We also hypothesize that whole planet coupling can lead to a number of different evolutionary scenarios for habitable zone planets, many of which would be detrimental for life, based on initial atmospheric composition, planetary volatile content, and other factors. We primarily focus on habitable zone planets, as these are most interesting in terms of astrobiology, and because the full series of surface-interior interactions we describe involves the long-term carbon cycle, which requires liquid water. Each process, the generation of plate tectonics from mantle convection, climate regulation due to the long-term carbon cycle, and dynamo action in the core, is still incompletely understood and the couplings between these processes are even more uncertain. As a result, significant future work will be needed to place more quantitative constraints on the evolutionary scenarios discussed in this review. Whole planet coupling --------------------- Several basic concepts illustrate the coupling between the surface and interior (Figure \[fig:CTM\]). (1) Climate influences whether plate tectonics can take place on a planet. 
(2) Plate tectonics plays a vital role in the long-term carbon cycle, which helps to maintain a temperate climate. (3) Plate tectonics affects the generation of the magnetic field via core cooling. (4) The magnetic field is capable of shielding the atmosphere from the solar wind. Cool climates are favorable for plate tectonics because they facilitate the formation of weak lithospheric shear zones, which are necessary for plate tectonics to operate. Low surface temperatures suppress rock annealing, increase the negative buoyancy of the lithosphere, and allow for deep cracking and subsequent water ingestion into the lithosphere, all of which promote the formation of weak shear zones. When liquid water is present on a planet’s surface, silicate weathering, the primary sink of atmospheric CO$_2$ and thus a key component of the long-term carbon cycle, is active. However, silicate weathering also requires a sufficient supply of fresh, weatherable rock at the surface, which plate tectonics helps to provide via orogeny and uplift. As a result, the coupling between plate tectonics and climate can behave as a negative feedback mechanism in some cases, where a cool climate promotes the operation of plate tectonics, and plate tectonics enhances silicate weathering such that the carbon cycle can sustain cool climate conditions. ![Flow chart representing the concept of whole planet coupling. Climate influences tectonics through the role of surface temperature in a planet’s tectonic regime (i.e. stagnant lid versus plate tectonics), while the tectonic regime in turn affects climate through volatile cycling between the surface and interior. The tectonic regime also influences whether a magnetic field can be generated by dictating the core cooling rate. Finally, the strength of the magnetic field influences atmospheric escape, and therefore long-term climate evolution. []{data-label="fig:CTM"}](whole_planet_coup_flowchart.pdf){width="\linewidth"} An additional coupling comes into play via the core dynamo and the magnetic field. The magnetic field is generated by either thermal or chemical convection in the liquid iron core. Thermal convection requires a super-adiabatic heat flux out of the core, which is controlled in part by the style of mantle convection, while chemical convection is driven by light element release during inner core nucleation, which also relies on cooling of the core. Plate tectonics cools the mantle efficiently by continuously subducting cold slabs into the deep interior, thus maintaining a high heat flow out of the core. The magnetic field can in turn limit atmospheric escape, helping retain liquid surface water. The couplings between plate tectonics and the core dynamo, and between the magnetic field and the climate, complete the concept of whole planet coupling (Figure \[fig:CTM\]).
{ "pile_set_name": "ArXiv" }
--- abstract: | Smart home technology remains unpopularized because of poor product user experience, high purchasing cost, weak compatibility, and the lack of an industry standard[@avgerinakis2013recognition]. Echoing the problems above, and having relentlessly devoted ourselves to software and hardware innovation and practice, we have independently developed a complete solution based on the innovation and integration of router technology, mobile Internet technology, Internet of Things technology, communication technology, digital-to-analog conversion and codec technology, and P2P technology, among others. We have also established the relevant protocols (without relying on protocols from abroad). By doing this, we managed to build a system with a low-to-moderate price, superior performance, all-inclusive functions, easy installation, convenient portability, real-time reliability, security encryption, and the capability to manage home furnishings intelligently. Only a new smart home system like this can inject new ideas and energy into the smart home industry and thus vigorously promote the establishment of a smart home industry standard.\ author: - bibliography: - 'GG.bib' title: Promote the Industry Standard of Smart Home in China by Intelligent Router Technology ---

Smart home, router technology, industry standard

Introduction
============

This year, the smart home wave has been on the rise. This October, Nest, the Google-owned company that sells smart temperature controllers, acquired Revolv, a smart home central control equipment start-up. Xiaomi released four new smart-terminal products, including a smart plug and a smart camera. Enterprises at home and abroad are plunging one after another into the big cake of the smart home market. Smart home is opening new vistas for the Internet and household appliance industries. All View Consulting forecasts that by 2020, the domestic smart home appliance ecosystem in China will reach one trillion yuan. SAIF Partners predict that the scale of the smart home industry by traditional definition in China will reach 5.5 billion yuan in 2014, and that this number will soar to 7.5 billion yuan in 2015. However, three stumbling blocks stand in the way of the smart home industry: user experience, purchasing cost, and poor compatibility[@albuquerque2014solution]. In response, we have independently developed a solution based on smart router technology. Routers currently on the market have many problems. Firstly, wireless control based on the 315M, 433M and other frequency ranges has no network protocol and can only send simple control commands. Collisions occur when there are over three connected devices, which makes successful control harder. Secondly, control networks based on ZigBee have a small range, poor through-the-wall performance, a complex protocol, and an inordinate price, and at the same time are exclusive and incompatible with devices already on the market. The third option is a control network based on WI-FI. WI-FI has a small control range and is thus limited to only a few connected devices. Normally, when a household router is connected to more than ten devices, the network drops or other instabilities occur.
Taking the characteristics above into consideration, and to meet the needs for transmission distance, stability, and number of controlled devices, our system uses a control network based on the 433M frequency band, for which we independently developed a self-organized protocol that provides dynamic networking functions similar to those of ZigBee and supports over 100 connected devices. As a result, our system achieves the networking capabilities of ZigBee, with a high connected-device count and long-distance transmission. Moreover, our 433M protocol is open and can be made compatible with smart devices currently on the market. Therefore, the smart router will change the landscape of the smart home market and the high-end router market, and it lays a solid foundation for the establishment of an industry standard for China's smart home industry.

A set of solution aiming at promoting establishment of industry standard for China smart home market {#SEC: A set of solution aiming at promoting establishment of industry standard for China smart home market}
====================================================================================================

The set of solution includes {#SSEC: The set of solution includes}
----------------------------

Smart router, cloud server, mobile terminal, and intelligent terminal. Meanwhile, the system is an intelligent development platform which enables programmers worldwide to carry out secondary development, so that an ecological chain with sound circulation takes shape (similar to Apple's AppStore).

Feature of the whole set of solution {#SSEC: Feature of the whole set of solution}
------------------------------------

As a household smart center, the smart router is able to administer all of the connected devices, control smart devices in the house, keep the house under security surveillance, monitor the household environment in terms of temperature, humidity, PM2.5, etc., send alarms when household accidents such as smoke or gas leaks happen, and enable users to connect remotely to the smart center through devices such as cell phones and PADs at any time and anywhere to observe and supervise household appliances, while promptly watching the surveillance video in the house. Notably, the cost of this device is only a small percentage of that of a traditional smart home product. Users can learn about the household environment first hand through their cell phones in many respects, such as gas leakage, burglar break-in, illegal opening of doors or windows, abnormal temperature and humidity, smoke alarms, touching of valuables, real-time temperature and humidity detection, and PM2.5 detection. It can be seen as a safety housekeeper. At the same time, users can promptly view the surveillance video image of the house through their cell phones, which provides a more intuitive supervision of the surrounding environment. Moreover, the router can control household appliances such as the television and air conditioner, as well as all other household appliances with remote controls. Last but not least, the router is also a cloud service center where users can put personal data in the family cloud.

Introduction to specific functions {#SSEC: Introduction to specific functions}
----------------------------------

### Functions of router {#SSSEC: Functions of router}

Just like other ordinary routers, the router can access the Internet and distribute WI-FI data[@zualkernan2009infopods].
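The 433M control protocol itself is not specified in this paper; purely as a hypothetical illustration, a minimal command frame for such a self-organized network could look like the sketch below, where every field name and size is an assumption.

```python
import struct
import zlib

def build_frame(net_id: int, node_id: int, command: int, payload: bytes) -> bytes:
    """Hypothetical 433M command frame: network id, node address (one byte
    allows the >100 devices mentioned above), command code, payload, CRC32."""
    header = struct.pack("<HBBB", net_id, node_id, command, len(payload))
    body = header + payload
    return body + struct.pack("<I", zlib.crc32(body))

# Example: tell plug number 17 on network 0x0A0B to switch on.
print(build_frame(net_id=0x0A0B, node_id=17, command=0x01, payload=b"\x01").hex())
```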
### Intellisense {#SSSEC: Intellisense}

Users can receive in-house alarms through their cell phones. The smart router can adapt to every type of alarm apparatus. When an apparatus raises an alarm, the smart router will promptly send the alarm information to users' cell phones, providing security for the users. Alarm apparatuses cover gas leakage, burglar break-in, illegal opening of doors or windows, abnormal temperature and humidity, smoke alarms, and touching of valuables. Meanwhile, users can promptly check the temperature, humidity, and PM2.5 of the house.

### Intelligent surveillance {#SSSEC: Intelligent surveillance}

The smart router can be fitted with a low-priced USB camera as well as a wireless camera, making it convenient for users to obtain real-time images through their cell phones. The wireless camera can be based on the H.264 codec, which makes the image clear and smooth.

### Intelligent plug {#SSSEC: Intelligent plug}

Cell phones can remotely switch the plug on and off, which means controlling the appliances connected to the plug.

### Intelligent cloud service {#SSSEC: Intelligent cloud service}

The smart router can serve as a cloud service center that provides personal data management for family members and offers a good level of privacy.

### Intelligence remote control {#SSSEC: Intelligence remote control}

X-Router enables users to put aside all the remote controls at home and control all the household appliances through X-Router. For example, users can control lights, curtains, plugs, the television, air conditioners, the DVD player, and the STB through their cell phones.

Main technologies {#SSEC: Main technologies}
-----------------

1\) Establishment and implementation of the control communication protocol.

2\) Transfer functions of the communication protocol, to achieve intelligent transfer of transmissions and P2P.

3\) P2P technological research, to achieve low-traffic P2P.

4\) Establishment and implementation of the camera protocol, with the original server protocol integrated.

5\) Development of the cell phone software and server-side software, and establishment and implementation of the chat protocol between them.

6\) Device terminal networking.

7\) Establishment and implementation of the connection control protocol between device terminals and the router.

8\) Establishment and implementation of the control protocol for each type of device terminal. Protocols for different types of devices are different and are implemented individually.

9\) Development and production of the communication printed circuit board of the device terminal.

10\) Building the software and hardware platform for the routers.

11\) Printed circuit board hardware circuit diagram design with low consumption, high simultaneous access, and long duration.

Technical index {#SSEC: Technical index}
---------------

### Low consumption {#SSSEC: Low consumption}

The transmitted power is only around 1 mW. The device also uses a sleep mode with low power dissipation, making it use much less electricity. According to estimates, the device can run continuously for six months to two years on just two AA batteries, which other wireless devices can hardly match.

### Low cost {#SSSEC: Low cost}

The cost of the whole solution is around 100 yuan.

### Short time delay {#SSSEC: Short time delay}

The communication delay and the delay for activation from sleep mode are both very short. The delay for a typical device search is 30 ms, 15 ms for activation from sleep mode, and 15 ms for active devices to join via channels.
Thus the technology is best suited for application in wireless control that is highly demanding in time delay (such as industrial control).

### Self-organized network technique {#SSSEC:Self-organized network technique}

A star topology has the maximum capacity for
{ "pile_set_name": "ArXiv" }
--- abstract: 'Cosmic acceleration is explained quantitatively as an apparent effect due to gravitational energy differences that arise in the decoupling of bound systems from the global expansion of the universe. “Dark energy” is a misidentification of those aspects of gravitational energy which by virtue of the equivalence principle cannot be localised, namely gradients in the energy due to the expansion of space and spatial curvature variations in an inhomogeneous universe. A new scheme for cosmological averaging is proposed which solves the Sandage–de Vaucouleurs paradox. Concordance parameters fit supernovae luminosity distances, the angular scale of the sound horizon in the CMB anisotropies, and the effective comoving baryon acoustic oscillation scale seen in galaxy clustering statistics. Key observational anomalies are potentially resolved, and unique predictions made, including a quantifiable variance in the Hubble flow below the scale of apparent homogeneity.' address: 'Department of Physics and Astronomy, University of Canterbury, Private Bag 4800, Christchurch 8140, New Zealand' author: - 'David L. Wiltshire' title: Gravitational energy and cosmic acceleration ---

[March 2007. An essay which received [*Honorable Mention*]{} in the 2007 Gravity Research Foundation Essay Competition.]{}

Introduction
============

Our most widely tested “concordance model” of the universe relies on the assumption of an isotropic homogeneous geometry, in spite of the fact that at the present epoch the observed universe is anything but smooth on scales less than 150–300 Mpc. What we actually observe is a foam–like structure, with clusters of galaxies strung in filaments and bubbles surrounding huge voids. Recent surveys suggest that some 40–50% of the volume of the universe is in voids of a characteristic scale 30$h^{-1}$ Mpc, where $h$ is the dimensionless Hubble parameter, $\Hm=100h\kmsMpc$. If larger supervoids and smaller minivoids are included, then it is fair to say that our observed universe is presently void–dominated. It is nonetheless true that a broadly isotropic Hubble flow is observed, which means that a nearly smooth Friedmann–Lemaître–Robertson–Walker (FLRW) geometry must be a good approximation at some level of averaging, if our position is a typical one. In this essay, I will argue, however, that in arriving at a model of the universe which is dominated by a mysterious form of “dark energy” that violates the strong energy condition, we have overlooked subtle physical properties of general relativity in interpreting the relationship of our own measurements to the average smooth geometry. In particular, “dark energy” is a misidentification of those aspects of gravitational energy which by virtue of the equivalence principle cannot be localized. The proposed re–evaluation of cosmological measurements on the basis of a universal [*finite infinity*]{} scale determined by primordial inflation leads to a new model for the universe. This model appears to pass key observational tests, potentially resolves anomalies, and makes new quantitative predictions.

The fitting problem
===================

In an arbitrary inhomogeneous spacetime the rods and clocks of any set of observers can only reliably measure local geometry.
They give no indication of measurements elsewhere or of the universe’s global structure. By contrast, in an isotropic homogeneous universe, where ideal observers are comoving particles in a uniform fluid, measurements made locally are the same as those made elsewhere on suitable time slices, on account of global symmetries. Our own universe is somewhere between these two extremes. By the evidence of the cosmic microwave background (CMB) radiation, the universe was very smooth at the time of last scattering, and the assumption of isotropy and homogeneity was valid then. At the present epoch we face a much more complicated fitting problem [@fit1] in relating the geometry of the solar system, to that of the galaxy, to that of the local group and the cluster it belongs to, and so on up to the scale of the average observer in a cell which is effectively homogeneous. When we conventionally write down a FLRW metric $$\label{FLRW} \dd\bar{s}^2 = -\dd t^2 + \ab^2(t)\,\dd\OM^2_{k},$$ where $\dd\OM^2_{k}$ is the 3–metric of a space of constant curvature, we ignore the fitting problem. In particular, even if the rods and clocks of an ideal isotropic observer can be matched closely to the geometry (\[FLRW\]) at a volume–average position, there is no requirement of theory, principle or observation that demands that such volume–average measurements coincide with ours. The fact that we observe an almost isotropic CMB means that other observers should also measure an almost isotropic CMB, if the Copernican principle is assumed. However, it does not demand that other ideal isotropic observers measure the same mean CMB temperature as us, nor the same angular scale for the Doppler peaks in the anisotropy spectrum. Significant differences can arise due to gradients in gravitational energy and spatial curvature. In general relativity space is dynamical and can carry energy and momentum. By the strong equivalence principle, since the laws of physics must coincide with those of special relativity at a point, it is only internal energy that can be localized in an energy–momentum tensor on the r.h.s. of the Einstein equations. Thus the uniquely relativistic aspects of gravitational energy associated with spatial curvature and geometrodynamics cannot be included in the energy–momentum tensor, but are at best described by a quasilocal formulation [@quasi]. The l.h.s. of the Friedmann equation derived from (\[FLRW\]) can be regarded as the difference of a kinetic energy density per unit rest mass, $E\ns{kin}=\half{\dot\ab^2\over\ab^2}$, and a total energy density per unit rest mass, $E\ns{tot}=-\half{k\over\ab^2}$, of the opposite sign to the Gaussian curvature, $k$. Such terms represent forms of gravitational energy, but since they are identical for all observers in an isotropic homogeneous geometry, they are not often discussed in introductory cosmology texts. Such discussions appear rarely even in the cases of specific inhomogeneous models, such as the Lemaître–Tolman–Bondi (LTB) solutions. In an inhomogeneous cosmology, gradients in the kinetic energy of expansion and in spatial curvature will be manifest in the Einstein tensor, leading to variations in gravitational energy that cannot be localized. The observation that space is not expanding within bound systems implies that a kinetic energy gradient must exist between bound systems and the volume average in expanding space.
Furthermore, the fact that space within galaxies is well approximated by asymptotically flat geometries implies that if there is significant spatial curvature within our present horizon volume, then a spatial curvature gradient should also contribute to the gravitational energy difference between bound systems and the volume average. Finite infinity and boundary conditions from primordial inflation ================================================================= In his pioneering work on the fitting problem, Ellis [@fit1] suggested the notion of [*finite infinity*]{}, “[*fi*]{}$\,$”, as being a timelike surface within which the dynamics of an isolated system such as the solar system can be treated without reference to the rest of the universe. Within finite infinity spatial geometry might be considered to be effectively asymptotically flat, and governed by “almost” Killing vectors. Quasilocal gravitational energy is generally defined in terms of surface integrals with respect to surfaces of a fiducial spacetime, and for the discussions of binding energy and rotational energy to which the quasilocal approach is commonly applied, asymptotic flatness is usually assumed. I propose that to quantify cosmological gravitational energy with respect to observers in bound systems an appropriate notion of finite infinity must be used as the fiducial reference point, since bound systems can be considered to be almost asymptotically flat. To date Ellis’ 1984 suggestion [@fit1] has not been further developed, perhaps because there is no obvious way to define finite infinity in an arbitrary inhomogeneous background. To proceed I will make the crucial observation that since our universe was effectively homogeneous and isotropic at last scattering, a notion of a universal critical density scale did exist then. It was the density required for gravity to overcome the initial uniform expansion velocity of the dust fluid. I will assume, as consistent with primordial inflation, that the present horizon volume of the universe was very close to the critical density at last scattering, with scale–invariant perturbations. Since the evolution of inhomogeneities involves back–reaction we must use an averaging scheme such as that developed by Buchert [@buch1]. An important lesson of such schemes is that averaging a quantity such as the density, and then evolving the average by the Friedmann equation, is not the same as evolving the inhomogeneous Einstein equations and then taking the average. Thus even if our present horizon volume, $\cal H$, was close to critical density at last scattering, differing perhaps by a factor of $\left.\delta\rho/\rho\right|_{{\cal H}i}\sim -10^{-5}$, the present horizon volume can nonetheless have a density well below critical. Furthermore, the present [*true critical density*]{} or [*closure density*]{} which demarcates a bound system from an unbound region, can be very different from the notional critical density inferred from a FLRW model using the presently measured global Hubble constant,
{ "pile_set_name": "ArXiv" }
--- author: - | Arttu Rajantie\ Theoretical Physics, Blackett Laboratory, Imperial College, London SW7 2AZ, UK\ E-mail: bibliography: - 'monomass.bib' title: 'Mass of a quantum ’t Hooft-Polyakov monopole' --- Introduction ============ ’t Hooft-Polyakov monopoles [@'tHooft:1974qc; @Polyakov:1974ek] are topological solitons in the Georgi-Glashow model [@Georgi:1972cj] and a wide range of other gauge field theories, including super Yang Mills theories and grand unified theories. They are non-linear objects in which energy is localised around a point in space and which therefore appear as point particles, and they carry non-zero magnetic charge. It is possible that these monopoles actually exist in nature, but so far they have not been discovered[^1] despite extensive searches [@Milton:2001qj]. However, ’t Hooft-Polyakov monopoles are very important theoretically, because they provide a new way of looking at non-Abelian gauge field theories, complementary to the usual perturbative picture. In particular, this has shed more light on the puzzle of confinement [@Mandelstam:1974pi; @tHooftTalk]. So far, concrete results have been limited to supersymmetric theories. The main reason for the lack of progress in non-supersymmetric theories is the difficulty of treating the quantum corrections to the classical monopole solution. For instance, calculating the quantum correction to a soliton mass is a complicated task. Even in simple one-dimensional models, it can typically only be calculated to one-loop order [@Dashen:1974cj], and for ’t Hooft-Polyakov monopoles the situation is even worse as only the leading logarithm is known [@Kiselev:1988gf]. This difficulty is avoided in supersymmetric models, because the symmetry protects the mass from quantum corrections. In this paper, the quantum mechanical mass of a ’t Hooft-Polyakov monopole is calculated using lattice Monte Carlo simulations. The method was developed in Ref. [@Davis:2000kv] and has been used earlier [@Davis:2001mg] in a 2+1-dimensional model in which the monopoles are instanton-like space-time events rather than particle excitations. The mass is defined using the free-energy difference between sectors with magnetic charges one and zero, and the corresponding ensembles are constructed using suitably twisted boundary conditions. This method has several advantages over the alternative approaches based on creation and annihilation operators [@Frohlich:1998wq; @Belavin:2002em; @Khvedelidze:2005rv] or fixed boundary conditions [@Smit:1993gy; @Cea:2000zr]. In particular, it gives a unique, unambiguous result, since it requires neither gauge fixing, a choice of classical field configuration, nor identification of individual monopoles in the field configurations. Analogous twisted boundary conditions have been used before to compute soliton masses in simpler models, such as 1+1-dimensional scalar field theory [@Ciria:1993yx], 3+1-dimensional compact U(1) gauge theory [@Vettorazzo:2003fg] and the 2+1-dimensional Abelian Higgs model [@Kajantie:1998zn]. In the latter case, the results provided evidence for an asymptotic duality near the critical point [@Kajantie:2004vy]: The model becomes equivalent to a scalar field theory with a global O(2) symmetry, with vortices and scalar fields changing places. It is interesting to speculate whether an electric-magnetic duality might appear in the same way in the Georgi-Glashow model. These methods can, in principle, be used to test that conjecture. 
The outline of this paper is the following: The Georgi-Glashow model and the classical ’t Hooft-Polyakov solution are introduced in Section \[sect:model\]. In Section \[sect:lattice\], the model is discretised on the lattice and the lattice magnetic field is defined. The twisted boundary conditions are discussed in Section \[sect:twist\]. In Sections \[sect:classmass\] and \[sect:simu\] the classical and quantum mechanical monopole masses are computed, and the results are discussed in Section \[sect:discuss\]. Finally, conclusions are presented in Section \[sect:conclude\]. Georgi-Glashow model {#sect:model} ==================== The 3+1-dimensional Georgi-Glashow model [@Georgi:1972cj] consists of an SU(2) gauge field $A_\mu$ and a Higgs field $\Phi$ in the adjoint representation, with the Lagrangian $${\cal L}=-\frac{1}{2}{\rm Tr}F_{\mu\nu}F^{\mu\nu} +{\rm Tr}[D_\mu,\Phi][D^\mu,\Phi]-m^2{\rm Tr}\Phi^2-\lambda\left({\rm Tr}\Phi^2\right)^2,$$ where the covariant derivative $D_\mu$ and the field strength tensor are defined as $D_\mu=\partial_\mu+igA_\mu$ and $F_{\mu\nu}=[D_\mu,D_\nu]/ig$. Since $A_\mu$ and $\Phi$ are traceless, self-adjoint $2\times 2$ matrices, they can be represented as linear combinations of Pauli $\sigma$ matrices, $$\sigma_1=\left(\matrix{0 & 1 \cr 1 & 0}\right),\quad \sigma_2=\left(\matrix{0 & -i \cr i & 0}\right),\quad \sigma_3=\left(\matrix{1 & 0 \cr 0 & -1 }\right),$$ as $A_\mu=A_\mu^a\sigma^a$, $\Phi=\Phi^a\sigma^a$. At the classical level, the model has two dimensionless parameters, the coupling constants $g$ and $\lambda$, and the scale is set by $m^2$. When $m^2$ is negative, the SU(2) symmetry is broken spontaneously to U(1) by a non-zero vacuum expectation value of the Higgs field ${\rm Tr}\Phi^2=v^2/2\equiv|m^2|/2\lambda$. In the broken phase, the particle spectrum consists of a massless photon, electrically charged $W^{\pm}$ bosons with mass $m_W=gv$, a neutral Higgs scalar with mass $m_H=\sqrt{2\lambda}v$ and massive magnetic monopoles [@'tHooft:1974qc; @Polyakov:1974ek]. The terms “electric” and “magnetic” refer to the effective U(1) field strength tensor defined as [@'tHooft:1974qc] $$\label{equ:fieldstrength} {\cal F}_{\mu\nu}={\rm Tr}\hat\Phi F_{\mu\nu}-\frac{i}{2g}{\rm Tr} \hat\Phi[D_\mu,\hat\Phi][D_\nu,\hat\Phi].$$ In any smooth field configuration, the corresponding magnetic field ${\cal B}_i=\epsilon_{ijk}{\cal F}_{jk}/2$ is sourceless (i.e., $\vec\nabla\cdot \vec{\cal B}=0$) whenever $\Phi\ne 0$. This is easy to see in the unitary gauge, in which $\Phi\propto\sigma_3$, because Eq. (\[equ:fieldstrength\]) reduces to ${\cal F}_{\mu\nu}=\partial_\mu A^3_\nu-\partial_\nu A^3_\mu$ and therefore $\vec{\cal B}=\vec\nabla \times\vec{A}^3$. At zeros of $\Phi$, the divergence is $\pm 4\pi/g$ times a delta function, indicating a magnetic charge of $q_M=4\pi/g$. The classical ’t Hooft-Polyakov monopole solution [@'tHooft:1974qc; @Polyakov:1974ek] is of the form $$\begin{aligned} \Phi^a&=&\frac{r_a}{gr^2}H(gvr), \nonumber\\ A_i^a&=&-\epsilon_{aij}\frac{r_j}{gr^2}\left[1-K(gvr)\right],\end{aligned}$$ where $H(x)$ and $K(x)$ are functions that have to be determined numerically. It is easy to check that this solution is a magnetic charge in the above sense. Because the energy is localised around the origin, the solution describes a particle. Once the functions $H(x)$ and $K(x)$ have been found, it is easy to integrate the energy functional to calculate the mass of the particle, as it is simply given by the total energy of the configuration. 
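In the BPS limit $\lambda\to 0$ the profile functions are known in closed form, $H(\xi)=\xi\coth\xi-1$ and $K(\xi)=\xi/\sinh\xi$ with $\xi=m_W r$, which makes a quick numerical illustration of this last point possible. The sketch below is our own check, independent of the lattice computation of this paper; it assumes the standard radial energy functional of the hedgehog ansatz, verifies the first-order BPS equations, and integrates the energy density to recover the classical BPS mass $M_{\rm cl}=4\pi m_W/g^2$, i.e. $f(0)=1$ in the notation used below.

```python
import numpy as np
from scipy.integrate import simpson

# Dimensionless radius xi = m_W * r; closed-form BPS (lambda -> 0) profiles.
xi = np.linspace(1e-4, 200.0, 1_000_000)
H = xi / np.tanh(xi) - 1.0
K = xi / np.sinh(xi)

# First-order BPS equations:  xi K' = -K H   and   xi H' = H + (1 - K^2).
dH, dK = np.gradient(H, xi), np.gradient(K, xi)
print("BPS residuals:",
      np.abs(xi * dK + K * H).max(),
      np.abs(xi * dH - H - (1.0 - K**2)).max())

# Radial energy density in units of 4*pi*m_W/g^2 (the lambda = 0 potential
# term is dropped); integrating it gives f(0).
eps = (dK**2 + (1.0 - K**2)**2 / (2.0 * xi**2)
       + (xi * dH - H)**2 / (2.0 * xi**2) + (H * K / xi)**2)
print("f(0) =", simpson(eps, x=xi))  # -> 1, up to the truncated ~1/xi^2 tail
```

The same machinery, with the potential term restored and the profiles obtained from a boundary-value solver, would give $f(z)$ for $z>0$. 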
The energy density falls as $\rho\sim 1/(2g^2r^4)$, implying that the mass is finite but also that there is a long-range magnetic Coulomb force between monopoles, as expected. The classical monopole mass $M_{\rm cl}$ can be written as $$\label{equ:classmass} M_{\rm cl}=\frac{4\pi m_W}{g^2}f(z),$$ where $f(z)$ is a function of $z=m_H/m_W$ and is known to satisfy $f(0)=1$ [@Bogom
{ "pile_set_name": "ArXiv" }
--- author: - 'Michael Shell,  John Doe,  and Jane Doe, [^1]' title: | Bare Advanced Demo of IEEEtran.cls for\ IEEE Computer Society Journals --- Introduction {#sec:introduction} ============ This demo file is intended to serve as a “starter file” for IEEE Computer Society journal papers produced under LaTeX using IEEEtran.cls version 1.8b and later. I wish you the best of success. mds August 26, 2015 Subsection Heading Here ----------------------- Subsection text here. ### Subsubsection Heading Here Subsubsection text here. Conclusion ========== The conclusion goes here. Proof of the First Zonklar Equation =================================== Appendix one text goes here. Appendix two text goes here. Acknowledgments {#acknowledgments .unnumbered} =============== Acknowledgment {#acknowledgment .unnumbered} ============== The authors would like to thank... [1]{} H. Kopka and P. W. Daly, *A Guide to [LaTeX]{}*, 3rd ed. Harlow, England: Addison-Wesley, 1999. [Michael Shell]{} Biography text here. [John Doe]{} Biography text here. [Jane Doe]{} Biography text here. [^1]: Manuscript received April 19, 2005; revised August 26, 2015.
{ "pile_set_name": "ArXiv" }
--- abstract: 'This paper focuses on the study of certain classes of Boolean functions that have appeared in several different contexts. Nested canalyzing functions have been studied recently in the context of Boolean network models of gene regulatory networks. In the same context, polynomial functions over finite fields have been used to develop network inference methods for gene regulatory networks. Finally, unate cascade functions have been studied in the design of logic circuits and binary decision diagrams. This paper shows that the class of nested canalyzing functions is equal to that of unate cascade functions. Furthermore, it provides a description of nested canalyzing functions as a certain type of Boolean polynomial function. Using the polynomial framework one can show that the class of nested canalyzing functions, or, equivalently, the class of unate cascade functions, forms an algebraic variety which makes their analysis amenable to the use of techniques from algebraic geometry and computational algebra. As a corollary of the functional equivalence derived here, a formula in the literature for the number of unate cascade functions provides such a formula for the number of nested canalyzing functions.' address: - 'Virginia Bioinformatics Institute (0477), Virginia Tech, Blacksburg, VA 24061, USA' - 'Mathematics Department, De La Salle University, 2401 Taft Avenue, Manila, Philippines' author: - Abdul Salam Jarrah - Blessilda Raposa - Reinhard Laubenbacher title: 'Nested Canalyzing, Unate Cascade, and Polynomial Functions' --- nested canalyzing function, unate cascade function, parametrization, polynomial function, Boolean function, algebraic variety Introduction ============ Canalyzing functions were introduced by Kauffman [@Kauff1] as appropriate rules in Boolean network models of gene regulatory networks. The definition is reminiscent of the concept of “canalisation" introduced by the geneticist Waddington [@wad] to represent the ability of a genotype to produce the same phenotype regardless of environmental variability. Canalyzing functions are known to have other important applications in physics, engineering and biology. They have been used to study the convergence behavior of a class of nonlinear digital filters, called stack filters, which have applications in image and video processing [@gabbouj; @wendt; @yu]. Canalyzing functions also play an important role in the study of random Boolean networks [@Kauff1; @lynch; @stauffer; @stern], and have been used extensively as models for dynamical systems as varied as gene regulatory networks [@Kauff1], evolution [@stern], and chaos [@lynch]. One important characteristic of canalyzing functions is that they exhibit a stabilizing effect on the dynamics of a system. For example, in [@gabbouj], it is shown that stack filters which are defined by canalyzing functions converge to a fixed point called a root signal after a finite number of passes. Moreira and Amaral [@moreira] showed that the dynamics of a Boolean network which operates according to canalyzing rules is robust with regard to small perturbations. A special type of canalyzing function, the so-called *nested canalyzing functions* (NCFs), was introduced recently in [@Kauff2], and it was shown in [@Kauff] that Boolean networks made from such functions show stable dynamic behavior and might be a good class of functions to express regulatory relationships in biochemical networks. Little is known about this class of functions, however. 
For instance, there is no known formula for the number of nested canalyzing functions in a given number of variables. Another field in which special families of Boolean functions have been studied extensively is the theory of computing, in particular the design of efficient logical switching circuits. Since the 1970s, several families of Boolean functions have been investigated for use in circuit design. For instance, the family of [*fanout-free*]{} functions has been studied extensively, as well as the family of cascade functions. A subclass of these are the [*unate cascade functions*]{}; see, e.g., [@maitra; @mukho], which we focus on here. It turns out that this class of functions has some very useful properties. For instance, it was shown recently [@butler] that the class of unate cascade functions is precisely the class of Boolean functions that have good properties as binary decision diagrams. In particular, the unate cascade functions (on $n$ variables) are precisely those functions whose binary decision diagrams have the smallest average path length $(2-\frac{1}{2^{n-1}})$ among all Boolean functions of $n$ variables. The notion of average path length is one cost measure for binary decision trees, which measures the average number of steps to evaluate the function on which the tree is based. One way of assessing the relative efficacy of classes of Boolean functions for logic circuit or binary decision tree design is to look at the number of different circuits or trees that can be realized with a particular class. That is, one would like to count the number of functions in a given class. This has led to a formula for the number of unate cascade functions [@bendbut]. One of the results in this paper shows that the classes of unate cascade functions and nested canalyzing functions are identical (as classes of functions rather than as classes of logical expressions). As a result of the equivalence we will establish, this formula then also counts the number of nested canalyzing functions. A third framework for studying Boolean functions, in the context of models for biochemical networks, was introduced in [@LS]. There, a new method to reverse engineer gene regulatory networks from experimental data was proposed. The proposed modeling framework is that of time-discrete deterministic dynamical systems with a finite set of states for each of the variables. The number of states is chosen so as to support the structure of a finite field. One consequence is that each of the state transition functions can be represented by a polynomial function with coefficients in the finite field, thereby making available powerful computational tools from polynomial algebra. This class of dynamical systems in particular includes Boolean networks, when network nodes take on two states. It is straightforward to translate Boolean functions into polynomial form, with multiplication corresponding to AND, addition to XOR, and addition of the constant 1 to negation. In this paper we provide a characterization of those polynomial functions over the field with two elements that correspond to nested canalyzing (and, therefore, unate cascade) functions. Using a parameterized polynomial representation, one can characterize the parameter set in terms of a well-understood mathematical object, a common method in mathematics. This is done using the concepts and language from algebraic geometry. 
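Before turning to the precise algebraic description, a brief computational aside: although counting NCFs is hard, checking the defining property for a given function of few variables is mechanical. A variable $x_i$ is canalyzing if fixing it to some value $a$ forces the output, and the nested property asks this recursively of the restriction to $x_i=\overline{a}$ (the formal definitions follow below). The brute-force sketch below is our own illustration; the function and variable names are not from the paper.

```python
from itertools import product

def is_constant(f, n):
    return len({f(*x) for x in product((0, 1), repeat=n)}) == 1

def restrict(f, i, a):
    """Fix input i of f to the value a, returning a function of n-1 variables."""
    return lambda *x: f(*x[:i], a, *x[i:])

def canalyzing_pairs(f, n):
    """All (i, a) such that setting x_i = a forces f to a constant value."""
    return [(i, a) for i in range(n) for a in (0, 1)
            if is_constant(restrict(f, i, a), n - 1)]

def is_nested_canalyzing(f, n):
    """f is nested canalyzing if it is canalyzing in some variable x_i with
    canalyzing value a, and the restriction to x_i = 1-a is again nested
    canalyzing (a function of n variables is assumed to depend on all of them)."""
    if n == 0:
        return False                      # constants are excluded
    if n == 1:
        return not is_constant(f, 1)      # f(x) = x or NOT(x)
    return any(is_nested_canalyzing(restrict(f, i, 1 - a), n - 1)
               for i, a in canalyzing_pairs(f, n))

AND = lambda x, y: x & y
XOR = lambda x, y: x ^ y
print(canalyzing_pairs(AND, 2))   # [(0, 0), (1, 0)]
print(canalyzing_pairs(XOR, 2))   # []
print(is_nested_canalyzing(AND, 2), is_nested_canalyzing(XOR, 2))  # True False
```

For AND the sketch reports the canalyzing pairs $(i,a)=(0,0)$ and $(1,0)$ and classifies the function as nested canalyzing, while XOR is rejected, matching the examples discussed below. 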
To be precise, we describe the parameter set as an algebraic variety, that is, a set of points in an affine space that represents the set of solutions of a system of polynomial equations. This algebraic variety turns out to have special structure that can be used to study the class of nested canalyzing functions as a rich mathematical object. Boolean Nested Canalyzing and Unate Cascade Functions are Equivalent ==================================================================== Boolean Nested Canalyzing Functions ----------------------------------- Boolean nested canalyzing functions were introduced recently in [@Kauff2], and it was shown in [@Kauff] that Boolean networks made from such functions show stable dynamic behavior. In this section we show that the set of Boolean nested canalyzing functions is equivalent to the set of unate cascade functions that has been studied before in the engineering and computer science literature. In particular, this equivalence provides a formula for the number of nested canalyzing functions in a given number of variables. We begin by defining the canalyzing property. A Boolean function $f(x_1,\ldots ,x_n)$ is *canalyzing* if there exists an index $i$ and a Boolean value $a$ for $x_i$ such that $f(x_1, \ldots ,x_{i-1},a,x_{i+1},\ldots ,x_n) = b$ is constant. That is, the variable $x_i$, when given the *canalyzing value* $a$, determines the value of the function $f$, regardless of the other inputs. The output value $b$ is called the *canalyzed value*. Throughout this paper, we use the Boolean functions ${\mathrm{AND}}(x,y) = x \wedge y$, ${\mathrm{OR}}(x,y) = x\vee y$ and ${\mathrm{NOT}}(x) = \overline{x}$. The function ${\mathrm{AND}}(x,y) = x \wedge y$ is a canalyzing function in the variable $x$ with canalyzing value 0 and canalyzed value 0. The function ${\mathrm{XOR}}(x,y) := (x\vee y)\wedge \overline{(x\wedge y)}$ is not canalyzing in either variable. Nested canalyzing functions are a natural specialization of canalyzing functions. They arise from the question of what happens when the function does not get the canalyzing value as input but instead has to rely on its other inputs. Throughout this paper, when we refer to a function of $n$ variables, we mean that $f$ depends on all $n$ variables. That is, for $1 \leq i \leq n$, there exists $(a_1,\dots,a_n) \in {\mathds{F}}_2^n$ such that $f(a_1,\dots,a_{i-1},a_i,a_{i+1},\dots,a_n) \neq f(a_1,\dots,a_{i-1},\overline{a_i},a_{i+1},\dots,a
{ "pile_set_name": "ArXiv" }
--- abstract: 'The Casimir-Polder interaction of ground-state and excited atoms with graphene is investigated with the aim of establishing whether graphene systems can be used as a shield for vacuum fluctuations of an underlying substrate. We calculate the zero-temperature Casimir-Polder potential from the reflection coefficients of graphene within the framework of the Dirac model. For both doped and undoped graphene we show limits at which graphene could be used effectively as a shield. Additional results are given for AB-stacked bilayer graphene.' author: - Sofia Ribeiro - Stefan Scheel bibliography: - 'phdthesis.bib' title: Shielding vacuum fluctuations with graphene --- Introduction ============ Graphene’s extraordinary electronic and optical properties hold great promise for applications in photonics and optoelectronics. The existence of a true two-dimensional (2D) material having a thickness of a single atom was believed to be impossible for a long time, because both finite temperature and quantum fluctuations would destroy the 2D structure. However, since the first groundbreaking experiments [@Science306_666_2004], the study of graphene has become an active field in condensed matter. Theoretical reviews of graphene’s properties can be found in Refs. [@RMP_81_2009; @RMP_Peres_2010]. The technological push towards miniaturization resulted in the idea of devising small structures based on graphene. For instance, by placing graphene between different substrates or by patterning a given substrate, it is possible to create artificial materials with tunable properties [@RPP74_082501_2011]. Hybrid quantum systems which combine cold atoms with solid structures hold great promise for the study of fundamental science, creating the possibility to build devices that precisely measure gravitational, electric and magnetic fields [@NJP13_083020_2011]. For instance, many of the proposed extensions to the Standard Model of particle physics include forces, due to compactified extra dimensions, that would modify Newtonian gravity on submicrometer scales [@BookNonNewton; @ARNP53_77_2003]. By performing extremely careful force measurements near surfaces, it is hoped that more stringent limits on the presence of such forces may be obtained. With this in mind, hybrid systems in which neutral atoms and graphene are held in close proximity represent an important and attractive case to study. A quick estimate shows that the Casimir-Polder force dominates gravity by several orders of magnitude at micrometer distances. It is therefore necessary to find a system that is simple enough either to allow calculating its dispersion effect to high enough precision, or to provide a shield against the vacuum fluctuations of another (macroscopic) body. Graphene has been shown to be a strong absorber of electromagnetic radiation: it interacts strongly with light over a wide wavelength range, particularly in the far infrared and terahertz parts of the spectrum, due to its high carrier mobility and conductivity [@PRL108_047401_2012]. Considering that graphene is only one atomic layer thick, its (universal) absorption coefficient of $\eta=\pi e^2 / (\hbar c) \approx 2.3\%$ is quite remarkable [@graphenebook]. In Ref. [@nnano7_330_2012], systems made of several layers of graphene are shown to be an effective shield for terahertz radiation, while letting visible light pass. These studies brought attention to the development of transparent mid- and far-infrared photonic devices. 
With graphene’s absorption properties in mind we investigate the possibility of shielding electromagnetic vacuum fluctuations of a macroscopic body placed nearby. The purpose of this study is to investigate whether and under which circumstances the Casimir-Polder potential between an atom and a graphene-substrate system is dominated by the interaction with graphene such that the effect of the substrate does not play an important role. This knowledge will allow us to manipulate the Casimir-Polder potential of a layered system by placing the graphene at different graphene-substrate distances or by patterning it into different shapes. This article is organised as follows. After briefly introducing graphene into the formalism of macroscopic QED in Sec. \[sec:CPpotentialgraphene\], we give some numerical results for the Casimir–Polder shift of an atom near a graphene sheet in Sec. \[sec:atomgraphene\]. In Secs. \[sec:1sheet\] and \[sec:bilayer\], we study the shielding of vacuum fluctuations by single-layer and bilayer graphene, respectively, and give concluding remarks in Sec. \[sec:conclusions\]. Casimir-Polder interaction with graphene {#sec:CPpotentialgraphene} ======================================== It is well known that an atom placed near a macroscopic body will experience a dispersion force, the Casimir-Polder force, due to the presence of fluctuations of the electromagnetic field even at zero temperature [@casimirpolderpaper]. We begin by investigating the Casimir-Polder interaction of an atom next to a graphene layer at zero temperature. We adopt the Dirac model for graphene and calculate the Casimir-Polder interactions based on the formalism of macroscopic QED. Upon quantization of the electromagnetic field in the presence of absorbing bodies, and application of second-order perturbation theory, the Casimir-Polder potential for planar structures can be written as [@acta2008] $$\begin{gathered} U_{\mathrm{CP}} {\ensuremath{\left(z_{A}\right)}} = \frac{\hbar \mu_{0}}{8 \pi^{2}} \int_{0}^{\infty} d \xi \xi^{2} \alpha_{at} {\ensuremath{\left(i \xi\right)}} \nonumber \\ \times \int\limits_{0}^{\infty} d k_{\parallel} \frac{e^{-2 k_{\parallel} \gamma_{0z} z_{A}} }{\gamma_{0z}} \left[ \mathrm{R}_{\mathrm{TE}} + \mathrm{R}_{\mathrm{TM}} \left( 1- \frac{2 k_{\parallel}^{2} \gamma_{0z}^{2} c^{2} }{\xi^{2}} \right) \right] \label{eq:Ucp_1}\end{gathered}$$ where $\gamma_{iz}=\sqrt{1+\varepsilon_i(i\xi)\,\xi^{2}/(k_\|^{2}c^2)}$ is the $z$-component of the wavenumber (in units of $k_\parallel$) in the medium with permittivity $\varepsilon_i$ for imaginary frequencies (the index 0 refers to the medium in which the atom is placed) and $\alpha_{at} (\omega)$ is the isotropic atomic polarizability defined by $$\begin{aligned} \mathbf{\alpha}_{at} (\omega) = \lim_{\varepsilon \rightarrow 0} \frac{2}{\hbar} \sum_{k_{A} \neq 0_{A}} \frac{\omega_{k 0} \mathbf{d}_{0 k}\cdot \mathbf{d}_{k 0} }{\omega_{k 0}^{2} -\omega^{2} - i \omega \varepsilon } . \label{eq:atomicpol}\end{aligned}$$ This equation is valid for zero temperature. A replacement of the frequency integral by a Matsubara sum has to be performed for finite temperatures [@acta2008]. In this case, the potential is well approximated by inserting the temperature-dependent reflection coefficients in the lowest term in the Matsubara sum ($j=0$) while keeping the zero-temperature coefficients for all higher Matsubara terms [@PRB84_035446_2011]. 
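As a concrete illustration of how Eq. (\[eq:Ucp\_1\]) is evaluated in practice, the sketch below is a toy numerical example of our own (not the code behind the results of this paper); it assumes a static polarizability $\alpha_0$ and frequency-independent reflection coefficients. In the perfect-mirror limit $\mathrm{R}_{\mathrm{TM}}=1$, $\mathrm{R}_{\mathrm{TE}}=-1$ the double integral can be checked against the known retarded Casimir-Polder result $U=-3\hbar c\alpha_{0}/(32\pi^{2}\varepsilon_{0} z_{A}^{4})$.

```python
import numpy as np
from scipy.integrate import dblquad

hbar, c = 1.054571817e-34, 2.99792458e8     # SI units
eps0 = 8.8541878128e-12
mu0 = 1.0 / (eps0 * c**2)
alpha0 = 4*np.pi*eps0 * 30e-30              # static polarizability (assumed value)
zA = 1e-6                                   # atom-surface distance: 1 micron

def U_CP(zA, r_TE, r_TM):
    """Direct evaluation of Eq. (eq:Ucp_1) for constant reflection
    coefficients and alpha(i xi) = alpha0."""
    def integrand(k, xi):
        g = np.sqrt(1.0 + xi**2 / (c * k)**2)          # gamma_0z
        return (xi**2 * alpha0 * np.exp(-2.0*k*g*zA) / g
                * (r_TE + r_TM * (1.0 - 2.0*(k*g*c/xi)**2)))
    # integration limits tuned for zA ~ 1 micron; percent-level accuracy
    val, _ = dblquad(integrand, 1e10, 2e16, 1e0, 1e9)  # xi outer, k inner
    return hbar * mu0 / (8.0 * np.pi**2) * val

print("numeric U_CP:          ", U_CP(zA, r_TE=-1.0, r_TM=1.0), "J")
print("perfect-mirror formula:", -3*hbar*c*alpha0/(32*np.pi**2*eps0*zA**4), "J")
```

(At finite temperature the $\xi$-integral in this sketch would be replaced by the Matsubara sum discussed above.) 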
Only for $k_B T \gtrsim \Delta$ do thermal corrections become important [@PRA86_012515_2012] ($\Delta \approx 0.1$ eV is the gap parameter of quasiparticle excitations). In order to compute the reflection coefficients $\mathrm{R}_{\mathrm{TM}}$ and $\mathrm{R}_{\mathrm{TE}}$ for transverse magnetic (TM) and transverse electric (TE) waves from a graphene sheet, two alternative models, the hydrodynamic model and the Dirac model, exist. The first treats graphene as an infinitesimally thin positively charged flat sheet, carrying a homogeneous fluid with some mass and negative charge densities. On the other hand, the Dirac model incorporates the conical electron dispersion relation, modelling graphene as a two-dimensional gas of massless Dirac fermions whose low-energy excitations move with the Fermi velocity. The ranges of validity of these respective models are not completely resolved [@NJP13_083020_2011]. The reflection coefficients are calculated by matching the dyadic Green function of free space and its derivatives on either side of a two-dimensional conducting sheet [@PRB85_195427_2012], with the result that $$\begin{aligned} \mathrm{R}_{\mathrm{TM}} &=& \frac{\gamma_{0z} \alpha_{\parallel} (k_{\parallel}, \omega)}{1+\gamma_{0z} \alpha_{\parallel} (k_{\parallel}, \omega)} ,\nonumber\\ \mathrm{R}_{\mathrm{TE}} &=& \frac{(\omega / c k_{\parallel})^{2} \alpha_{\perp} (k_{\parallel}, \omega) }{\gamma_{0z} -(\omega / c k_{\parallel})^{2} \alpha_{\perp} (k_{\parallel}, \omega)}, \label{eq:rBo}\end{aligned}$$ where $$\alpha(\mathbf{k},\omega)=- e^2 \frac{\chi(\mathbf{k},\omega)}{ 2 \varepsilon_0 k_{\parallel}} = i \frac{\sigma (\mathbf{k},\omega) \, k_{\parallel}}{2 \varepsilon_0 \omega}$$ is given by the density-
{ "pile_set_name": "ArXiv" }
--- abstract: 'We have studied the diffusion inside the silica network of sodium atoms initially located outside the surfaces of an amorphous silica film. We have focused our attention on structural and dynamical quantities, and we have found that the local environment of the sodium atoms is close to the local environment of the sodium atoms inside bulk sodo-silicate glasses obtained by quench. This is in agreement with recent experimental results.' author: - 'Michaël Rarivomanantsoa$^{a}$, Philippe Jund$^b$ and Rémi Jullien$^c$' title: 'Sodium diffusion through amorphous silica surfaces: A molecular dynamics study' --- Introduction ============ Many characteristics of materials such as mechanical resistance, adsorption, corrosion or surface diffusion depend on the physico-chemical properties of the surface. Thus, the interactions between the surfaces and their physico-chemical environment are very important, in particular for amorphous materials, which are of great interest for a wide range of industrial and technological applications (optical fiber coating, catalysis, chromatography or microelectronics). Therefore a great number of studies have, for example, focused on the interactions between amorphous silica surfaces and water, experimentally [@exp1] and by molecular dynamics simulations [@md1; @md12; @bakaev; @md10; @md11; @md13]. On the other hand, sodium silicate glasses are of great interest due to their presence in most commercial glasses and geological magmas. They are also often used as simple models for a great number of silicate glasses with more complicated composition. The influence of sodium atoms on the amorphous silica network is the subject of numerous experimental studies: Raman spectroscopy [@brawer; @millan], IR [@brawer; @wong], XPS [@bruckner; @sprenger] and NMR [@sprenger; @silver], from which we obtain information about neighboring distances, bond angle distributions or the concentration of so-called Q$^{n}$ tetrahedra. In order to improve the insight into the sodium silicate glass structure and to obtain a good understanding of the role of the modifying Na$^+$ cations, Greaves [[*et al.* ]{}]{}have used new promising investigation techniques like EXAFS and MAS NMR [@baker; @greaves]. Despite all these efforts, the structure of sodo-silicate glasses is still a subject of debate. Another means of obtaining information about this structure is provided by simulations, either [*ab initio*]{} [@ispas] or classical [@soules; @vessal; @smith; @oviedo; @horbach; @jund] molecular dynamics (MD). In the present work, we are using classical MD simulations, but contrary to previous simulations, the sodium atoms are not placed beforehand inside the amorphous silica sample. Recent experimental studies of the diffusion of Na atoms initially placed at the surface of amorphous silica, using EXAFS spectroscopy [@mazzara], showed that the Na atoms diffuse into the vitreous silica and that, once inside the amorphous silica network, the local environment of the Na atoms is characterized by a Na - O distance $d_{{\rm Na-O}} = 2.3$ Å and by a Na - Si distance $d_{{\rm Na-Si}} = 3.8$ Å. These values are close to the distances characterizing the local environment of Na atoms in sodium silicate glasses obtained by quench. In this study we have used classical MD simulations in order to reproduce the diffusion of sodium atoms inside a silica matrix and to check that the local environment of the sodium atoms is close to what is found for quenched sodo-silicate glasses. 
The sodium atoms have been inserted at the surface of thin amorphous silica films under the form of Na$_2$O groups in order to respect the charge neutrality. Computational method ==================== To simulate the interactions between the different atoms, we use a generalized version [@kramer] of the so-called BKS potential [@bks] where the functional form of the potential remains unchanged: $${\cal{\phi}}\left(\left|\vec{r}_j-\vec{r}_i\right|\right) = \frac{q_iq_j}{\left|\vec{r}_j-\vec{r}_i\right|} -A_{ij}\exp\left(-B_{ij}\left|\vec{r}_j-\vec{r}_i\right|\right) -\frac{C_{ij}}{\left|\vec{r}_j-\vec{r}_i\right|^6}.$$ The potential parameters $A_{ij}$, $B_{ij}$, $C_{ij}$, $q_i$ and $q_j$ involving the silicon and oxygen atoms (describing the interactions inside the amorphous silica network) are extracted from van Beest [[*et al.* ]{}]{}[@bks] and remain unchanged (in particular the partial charges q$_{{\rm Si}} = 2.4\rm e$ and q$_{{\rm O}} = -1.2$e are not modified). The new parameters, devoted to describe the interactions between the sodium atoms and the silica network are given by Kramer [[*et al.* ]{}]{}[@kramer] and are adjusted on [*ab initio*]{} calculations of zeolithes except the partial charge of the sodium atoms whose value q$_{{\rm Na}} = 0.6\rm e$ is chosen in order to respect the system electroneutrality. However, this sodium partial charge does not reproduce the short-range forces and to this purpose, Horbach [[*et al.* ]{}]{}[@horbach] have proposed to vary the charge q$_{{\rm Na}}$ as follows: $$\begin{aligned} q_{{\rm Na}}(r_{ij})&=& \left\{ \begin{array}{ll} 0.6\left(1+\ln\left[C\left( r_c-r_{ij}\right)^2+1\right]\right) & r_{ij} < r_c\\ 0.6 & r_{ij} \geqslant r_c \end{array} \right.\end{aligned}$$ where $r_{ij}$ is the distance between the particles $i$ and $j$. The parameters $C$ and $r_c$ are adjusted to obtain the experimental structure factor of Na$_2$Si$_2$O$_5$ (NS2) and their values are included in Ref [@horbach]. It is important to note that using this method to model the sodium charge, the system electroneutrality is respected for large distances (in fact for distances $r \geqslant r_c$). Next we assume that the modified BKS potential describes reasonably well the system studied here, for which the sodium atoms are initially located outside the amorphous silica sample. In addition other simulations have shown that this interatomic potential is convenient for various compositions, in particular for NS2, NS3 (Na$_2$Si$_3$O$_7$) [@horbach] and NS4 (Na$_2$Si$_4$O$_9$) [@jund] and we assume it is adapted to model any concentration of modifying Na$^+$ cations inside sodo-silicate glasses. Our aim here is to obtain a sodo-silicate glass by deposition of sodium atoms at the amorphous silica surface, as it was done experimentally [@mazzara]. Using the [*modus operandi*]{} described in a previous study [@mr] we have generated Amorphous Silica Films (ASF), each containing two free surfaces perpendicular to the $z$-direction. These samples have been made by breaking the periodic boundary conditions along the $z$-direction, normal to the surface, thus creating two free surfaces located at $L/2$ and $-L/2$ with $L=35.8$ Å. In order to evaluate the Coulomb interactions, we used a two-dimensional technique based on a modified Ewald summation to take into account the loss of periodicity in the $z$-direction. For further technical details see Ref [@mr]. 
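Equations (1) and (2) are straightforward to implement; the snippet below is an illustrative sketch of our own in which the Buckingham parameters for the Na–O pair are placeholders (the actual values are those of Refs. [@kramer; @horbach] and are not quoted here). It evaluates the pair energy, as written in Eq. (1), with the distance-dependent sodium charge of Eq. (2).

```python
import numpy as np

E_COUL = 14.399645  # e^2/(4*pi*eps0) in eV*Angstrom, for the Coulomb term

def q_na(r, C, r_c):
    """Distance-dependent Na charge of Eq. (2), in units of e."""
    r = np.asarray(r, dtype=float)
    return np.where(r < r_c,
                    0.6 * (1.0 + np.log(C * (r_c - r)**2 + 1.0)),
                    0.6)

def phi_na_o(r, A, B, C6, C, r_c, q_o=-1.2):
    """Na-O pair energy of Eq. (1) as written in the text (eV, r in Angstrom)."""
    r = np.asarray(r, dtype=float)
    return E_COUL * q_na(r, C, r_c) * q_o / r - A * np.exp(-B * r) - C6 / r**6

# Placeholder parameters, for illustration only (not the published values):
pars = dict(A=1000.0, B=3.0, C6=30.0, C=1.0, r_c=5.0)
for r in (2.0, 2.3, 3.8, 6.0):
    print(f"r = {r:3.1f} A   phi_NaO = {phi_na_o(r, **pars):9.3f} eV")
```

Note how the logarithmic factor makes the effective charge, and hence the Coulomb attraction to the oxygens, grow smoothly as a sodium atom approaches a neighbor inside the cutoff $r_c$, while electroneutrality is preserved at large separations. 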
Then, instead of initially positioning the sodium atoms inside the silica matrix, as was done before [@soules; @huang; @smith; @jund; @oviedo], we have deposited 50 Na$_2$O groups inside two layers located at a distance of 4 Å from each free surface, as depicted in Fig. \[figure1\]. Within the layers, the Na$_2$O groups are assumed to be linear, with $d_{{\rm Na-O}}=2$ Å [@elliott], and arranged on a pseudoperiodic lattice represented in the zoom of Fig. \[figure1\]. Hence the system is made of 100 Na$_2$O groups for 1000 SiO$_2$ molecules, corresponding to a sodo-silicate glass of composition NS10 (Na$_2$Si$_{10}$O$_{21}$), and contains 3300 particles. Since our goal is to study the diffusion of the sodium atoms placed at the amorphous silica surfaces, we fixed the initial temperature of the whole system at 2000 K. Indeed, the simulations of Smith [[*et al.* ]{}]{}[@smith] and Oviedo [[*et al.* ]{}]{}[@oviedo] of sodo-silicate glasses have shown that there is no appreciable sodium diffusion for temperatures below $\approx$ 1500 K. On the other hand, it is worth noticing that Sunyer [[*et al.* ]{}]{}[@sunyer] have found a simulated glass transition temperature $T_g \simeq 2400$ K for a NS4 glass. Therefore we thermalized the sodium layers at 2000 K and placed them at
{ "pile_set_name": "ArXiv" }
--- abstract: 'The escape fraction, [$f_{\rm esc}$]{}, of ionizing photons from early galaxies is a crucial parameter for determining whether the observed galaxies at $z \geq 6$ are able to reionize the high-redshift intergalactic medium. Previous attempts to measure [$f_{\rm esc}$]{} have found a wide range of values, varying from less than 0.01 to nearly 1. Rather than finding a single value of [$f_{\rm esc}$]{}, we clarify through modeling how internal properties of galaxies affect [$f_{\rm esc}$]{} through the density and distribution of neutral hydrogen within the galaxy, along with the rate of ionizing photon production. We find that the escape fraction depends sensitively on the covering factor of clumps, along with the density of the clumped and interclump medium. One must therefore be cautious when dealing with an inhomogeneous medium. Fewer, high-density clumps lead to a greater escape fraction than more numerous low-density clumps. When more ionizing photons are produced in a starburst, [$f_{\rm esc}$]{} increases, as photons escape more readily from the gas layers. Large variations in the predicted escape fraction, caused by differences in the hydrogen distribution, may explain the large observed differences in [$f_{\rm esc}$]{} among galaxies. Values of [$f_{\rm esc}$]{} must also be consistent with the reionization history. High-mass galaxies alone are unable to reionize the universe, because [$f_{\rm esc}$]{} $> 1$ would be required. Small galaxies are needed to achieve reionization, with greater mean escape fraction in the past.' author: - 'Elizabeth R. Fernandez, and J. Michael Shull' title: The Effect of Galactic Properties on the Escape Fraction of Ionizing Photons --- INTRODUCTION {#sec:introduction} ============ Observations of the cosmic microwave background optical depth made with the [*[Wilkinson Microwave Anisotropy Probe (WMAP)]{}*]{} [@kogut/etal:2003; @spergel/etal:2003; @page/etal:2007; @spergel/etal:2007; @dunkley/etal:2008; @komatsu/etal:2008; @wmap7] suggest that the universe was reionized sometime in the interval $6<z<12$. Because massive stars are efficient producers of ultraviolet photons, they are the most likely candidates for the majority of reionization. However, in order for early star-forming galaxies to reionize the universe, their ionizing radiation must be able to escape from the halos, in which neutral hydrogen (H I) is the dominant source of Lyman continuum (LyC) opacity. The escape fraction, [$f_{\rm esc}$]{}, of ionizing photons is a key parameter for starburst galaxies at $z > 6$, which are believed to produce the bulk of the photons that reionize the universe [@robertson/etal:2010; @trenti/etal:2010; @Bouwens/etal:2010]. The predicted values of the escape fraction span a large range, $0.01 \lesssim f_{\rm esc} < 1$, derived from a variety of theoretical and observational studies of varying complexity. Various properties of the host galaxy, its stars, or its environment are thought to affect the number of ionizing photons that escape into the intergalactic medium (IGM). For example, @ricotti/shull:2000 studied [$f_{\rm esc}$]{} in spherical halos using a Strömgren approach. @wood/loeb:2000 assumed an isothermal, exponential disk galaxy and followed an ionization front through the galaxy using three-dimensional Monte Carlo radiative transfer. Both @wood/loeb:2000 and @ricotti/shull:2000 state that [$f_{\rm esc}$]{} varies greatly, from $<0.01$ to $1$, depending on galaxy mass, with larger galaxies giving smaller values of [$f_{\rm esc}$]{}. 
A similar dependence on galaxy mass is also seen in the simulations of @yajima/etal:2010, because larger galaxies tend to have star formation buried within dense hydrogen clouds, while smaller galaxies often have clearer paths for escaping ionizing radiation. @gnedin/etal:2008, on the other hand, ran a high resolution N-body simulation with adaptive-mesh refinement in a cosmological context. Contrary to @ricotti/shull:2000, @wood/loeb:2000 and @yajima/etal:2010, they state that lower-mass galaxies have significantly smaller [$f_{\rm esc}$]{}, as the result of a declining star formation rate. In addition, above a critical halo mass, [$f_{\rm esc}$]{} does not change much. The model of @gnedin/etal:2008 allowed for the star formation rate to increase with the mass of the galaxy at a higher rate than a linear proportionality would allow. The larger galaxies also tended to have star formation occurring in the outskirts of the galaxy, which made it easier for ionizing photons to escape. Their model included a distribution of gas within the galaxy, which created free sight-lines out of the galaxy. @wise/cen:2008 used adaptive mesh hydrodynamical simulations on dwarf galaxies. Even though their simulations covered a different mass range than the larger galaxies studied by @gnedin/etal:2008, they found much higher values of [$f_{\rm esc}$]{} than would be expected from extrapolating results from @gnedin/etal:2008 to lower masses. @wise/cen:2008 attribute this difference to the irregular morphology of their dwarf galaxies with a turbulent and clumpy interstellar medium (ISM), allowing for large values of [$f_{\rm esc}$]{}. Others have also looked at how the shape and morphology of the galaxy can affect [$f_{\rm esc}$]{}. @dove/shull:1994, using a Strömgren model, studied how [$f_{\rm esc}$]{} varies with various H I disk density distributions. In addition, many authors have found that superbubbles and shells can trap radiation until blowout, as seen in analytical models of @dove/shull/ferrara:2000 as well as in hydrodynamical simulations of @fujita/etal:2003. The analytical model by @clark/oey:2002 showed that high star formation rates can raise the porosity of the ISM and thereby increase [$f_{\rm esc}$]{}. In addition to bubbles and structure caused by supernovae, galaxies can have a clumpy ISM whose inhomogeneities affect [$f_{\rm esc}$]{}. For example, dense clumps could reduce [$f_{\rm esc}$]{} [@dove/shull/ferrara:2000]. On the other hand, @boisse:1990, @hobson/scheuer:1993, @witt/gordon:1996, and @wood/loeb:2000 all found that clumps in a randomly distributed medium cause [$f_{\rm esc}$]{} to rise, while @ciardi/etal:2002 found that the effects of clumps depend on the ionization rate. A host of other galaxy parameters have been tested analytically and with simulations. Increasing the baryon mass fraction lowers [$f_{\rm esc}$]{} for smaller halos, but increases it at masses greater than $10^8 \: M_{\sun}$ [@wise/cen:2008]. Star formation history changes the number of ionizing photons and the amount of neutral hydrogen, causing [$f_{\rm esc}$]{} to vary from $0.12$ to $0.20$ for coeval star formation and from $0.04$ to $0.10$ for a time-distributed starburst [@dove/shull/ferrara:2000]. Other galactic quantities, such as spin [@wise/cen:2008] or dust content [@gnedin/etal:2008], do not seem to affect the escape fraction. Observations have also been used to constrain [$f_{\rm esc}$]{}, especially at $z\lesssim3$. 
Searches for escaping Lyman continuum radiation at redshifts $z \lesssim 1-2$ have found escape fractions of at most a few percent [@bland/maloney:2002; @bridge:2010; @cowie/etal:2009; @tumlinson/etal:1999; @deharveng:2001; @grimes/etal:2007; @grimes:2009; @heckman/etal:2001; @Leitherer/etal:1995; @malkan:2003; @Siana/etal:2007]. @hurwitz/etal:1997 saw large variations in the escape fraction, and @hoopes:2007 and @bergvall/etal:2006 saw a relatively high escape fraction of $10\%$. @ferguson/etal:2001 observed [$f_{\rm esc}$]{} $\approx 0.2$ at $z\approx1$. @hanish/etal:2010 do not see a difference in [$f_{\rm esc}$]{} between starbursts and normal galaxies. @siana/etal:2010 also found low escape fractions at $z \approx 1.3$ and showed that no more than $8\%$ of galaxies at this redshift can have $f_{\rm esc,rel} > 0.5$. Note that $f_{\rm esc,rel}$, which the authors use to compare their results to other surveys, is defined as the ratio of escaping LyC photons to escaping 1500 $\AA$ photons. In our own Galaxy, @bland/maloney:1999 and @putman/etal:2003 found an escape fraction of only a few percent. Observations using $\gamma$-ray bursts [@chen:2007] show [$f_{\rm esc}$]{} $\approx 0.02$ at $z \approx 2$. At higher redshift ($z \approx 3$),
{ "pile_set_name": "ArXiv" }
--- abstract: 'The description of physical processes in accelerated frames opens a window to numerous new phenomena. One can encounter these effects both in the subatomic world and on a macroscale. In the present work we review our recent results on the study of the electroweak interaction of particles with accelerated background matter. In our analysis we choose the noninertial comoving frame, where matter is at rest. Our study is based on the solution of the Dirac equation, which exactly takes into account both the interaction with matter and the noninertial effects. First, we study the interaction of ultrarelativistic neutrinos, electrons and quarks with rotating matter. We consider the influence of the matter rotation on the resonance in neutrino oscillations and the generation of an anomalous electric current of charged particles along the rotation axis. Then, we study the creation of neutrino-antineutrino pairs in linearly accelerated matter. The applications of the obtained results to elementary particle physics and astrophysics are discussed.' author: - Maxim Dvornikov title: ELECTROWEAK INTERACTION OF PARTICLES WITH ACCELERATED MATTER AND ASTROPHYSICAL APPLICATIONS --- Nowadays it is understood that noninertial effects are important in various areas of modern science, such as elementary particle physics, general and special relativity, as well as condensed matter physics [@review]. Recently, in Refs. [@Dvo14; @Dvo15a; @Dvo15b], it was realized that the electroweak interaction of particles with accelerated matter leads to interesting applications in physics and astrophysics. In those works, the treatment of the particle evolution was carried out in the comoving frame, where matter is at rest, with the noninertial effects being accounted for exactly. In the present work we review our recent results on the particle interaction with accelerated matter. Our study of the fermion propagation in accelerated matter is based on the Dirac equation in a comoving frame. In this situation one can unambiguously define the interaction with background matter. It is known that motion in a noninertial frame is equivalent to the interaction with an effective gravitational field having the metric tensor $g_{\mu\nu}$. The Dirac equation for the particle bispinor $\psi$ in curved space-time has the form [@Dvo15a], $$\label{eq:Depsicurv} \left[ \mathrm{i}\gamma^{\mu}(x)\nabla_{\mu}-m \right] \psi = \gamma_{0}(x) \left\{ \frac{V_{\mathrm{L}}}{2} \left[ 1-\gamma^{5}(x) \right] + \frac{V_{\mathrm{R}}}{2} \left[ 1+\gamma^{5}(x) \right] \right\} \psi,$$ where $\gamma^{\mu}(x)$ are the coordinate dependent Dirac matrices, $\nabla_{\mu}=\partial_{\mu}+\Gamma_{\mu}$ is the covariant derivative, $\Gamma_{\mu}$ is the spin connection, $m$ is the particle mass, $V_{\mathrm{L,R}} \sim G_\mathrm{F} n_\mathrm{eff}$ are the effective potentials of the interaction of left and right chiral projections with background matter, $G_\mathrm{F}$ is the Fermi constant, $n_\mathrm{eff}$ is the effective density of background particles, $\gamma^{5}(x) = -(\mathrm{i}/4!) E^{\mu\nu\alpha\beta} \gamma_{\mu}(x) \gamma_{\nu}(x) \gamma_{\alpha}(x) \gamma_{\beta}(x)$, $E^{\mu\nu\alpha\beta} = \varepsilon^{\mu\nu\alpha\beta} / \sqrt{-g}$ is the covariant antisymmetric tensor in curved space-time, and $g=\det(g_{\mu\nu})$. In Ref. [@Dvo14] we found the solution of Eq. (\[eq:Depsicurv\]) for an ultrarelativistic neutrino moving in rotating matter. Note that, in the case of neutrinos, we should set $V_{\mathrm{R}}=0$. 
Choosing the appropriate vierbein vectors, we obtained $\psi$, which is expressed in terms of the Laguerre functions. Then we generalized our result to include different neutrino eigenstates and mixing between them. We obtained that the resonance condition is shifted by the matter rotation, contrary to our previous claim in Ref. [@Dvo10]. This effect may be relevant for explaining the great linear velocities of pulsars, since there is a correlation between the linear and angular velocities of a pulsar [@Joh05]. In Ref. [@Dvo15a], we obtained the solution of Eq. (\[eq:Depsicurv\]) for ultrarelativistic electroweakly interacting electrons and quarks in rotating matter. Using this solution we derived the nonzero electric current along the rotation axis in the form, $$\label{eq:elcurr} \mathbf{J} = \frac{q\bm{\omega}}{\pi} \left( V_{\mathrm{R}}\mu_{\mathrm{R}}-V_{\mathrm{L}}\mu_{\mathrm{L}} \right),$$ where $\bm{\omega}$ is the angular velocity, $q$ is the electric charge (including the sign) of a test fermion, and $\mu_{\mathrm{R,L}}$ are the chemical potentials of right and left fermions. The existence of the nonzero current in Eq. (\[eq:elcurr\]) is attributed in Ref. [@Dvo15a] to the new *galvano-rotational effect* (GRE). GRE is analogous to the chiral vortical effect [@Vil79], in which the induced current is $\mathbf{J} \sim \bm{\omega} (\mu_{\mathrm{L}}^2 - \mu_{\mathrm{R}}^2)$. However, in the latter case the current vanishes in equilibrium at $\mu_{\mathrm{L}} = \mu_{\mathrm{R}}$, whereas $\mathbf{J}$ in Eq. (\[eq:elcurr\]) is nonzero in this situation. GRE can be used for the generation of a toroidal magnetic field (TMF) in neutron and quark/hybrid stars. It is well known that, in a star, a purely poloidal magnetic field, which is observed by astronomers, is unstable. A toroidal component, which lies inside a star and can be of the same magnitude as a poloidal one, is required. In Ref. [@Dvo15a] we estimated the strength of the TMF generated owing to GRE as $B_\mathrm{tor}\sim |\mathbf{J}|R$, where $R\sim 10\thinspace\text{km}$ is the star radius. Using the characteristics of the background matter in a compact star, one gets that $B_\mathrm{tor} \sim 10^8\thinspace\text{G}$ can be generated [@Dvo15a]. This TMF strength is comparable with the observed magnetic fields in old millisecond pulsars [@PhiKul94]. In Ref. [@Dvo15b] we solved Eq. (\[eq:Depsicurv\]) for an ultrarelativistic neutrino interacting with linearly accelerated matter. In this case $\psi$ is expressed via the Whittaker functions. The obtained solution turned out to reveal the instability of the neutrino vacuum, leading to the creation of neutrino-antineutrino ($\nu\bar{\nu}$) pairs. This phenomenon is analogous to the well-known Unruh effect [@CriHigMat08], which consists in the emission of thermal radiation by an accelerated particle, with the effective temperature $T_\mathrm{eff} = a/2\pi$, where $a$ is the particle acceleration. Requiring that the probability of the creation of $\nu\bar{\nu}$ pairs not be suppressed, we obtained in Ref. [@Dvo15b] the upper bound on the neutrino mass, $$\label{eq:masslim} m \lesssim m_{\mathrm{cr}}, \quad m_{\mathrm{cr}}=2\sqrt{\frac{|V_\mathrm{L}|a}{\pi}}.$$ If we consider the creation of $\nu\bar{\nu}$ pairs in a core-collapse supernova (SN) at the bounce stage, one gets $m_{\mathrm{cr}} \sim 10^{-7}\thinspace\text{eV}$. The obtained upper bound is comparable with the constraint on neutrino masses established earlier in Ref. [@DvoGavGit14], where we studied the creation of $\nu\bar{\nu}$ pairs in a SN at the neutronization stage. 
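The scale of the bound in Eq. (\[eq:masslim\]) is easy to reproduce. The snippet below is our own order-of-magnitude sketch; the effective density and the bounce deceleration are assumed inputs, not values taken from Ref. [@Dvo15b].

```python
import math

GF     = 1.1663787e-5 * 1e-18   # Fermi constant in eV^-2 (1.166e-5 GeV^-2)
hbar   = 6.582119569e-16        # eV s
c      = 2.99792458e8           # m/s
fm_inv = 0.1973269804e9         # 1 fm^-1 in eV (hbar*c = 0.19733 GeV fm)

# Assumed inputs (illustrative only):
n_eff = 0.01 * fm_inv**3        # effective background density, in eV^3
a_SI  = 1e10                    # bounce-stage deceleration, in m/s^2

V_L  = GF * n_eff / math.sqrt(2)   # matter potential, eV
a    = a_SI / c * hbar             # acceleration converted to eV
m_cr = 2.0 * math.sqrt(abs(V_L) * a / math.pi)
print(f"V_L ~ {V_L:.2e} eV,  a ~ {a:.2e} eV,  m_cr ~ {m_cr:.1e} eV")
```

With these inputs the sketch returns $m_{\mathrm{cr}}\sim 10^{-7}\,\text{eV}$, consistent with the estimate quoted above; the bound scales as $\sqrt{n_{\mathrm{eff}}\,a}$, so both inputs matter equally. 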
In conclusion, we have studied various phenomena involving particles that interact electroweakly with accelerated background matter. We have considered two types of acceleration: rotation and linear acceleration. The exact solutions of the Dirac equation for a test fermion, accounting for both the matter interaction and the noninertial effects, have been found. Then we have discussed the influence of the matter rotation on the resonance in neutrino oscillations, the generation of the electric current flowing along the rotation axis, and the creation of $\nu\bar{\nu}$ pairs in linearly accelerated matter. Finally, we have considered the possible applications of our results to various astrophysical objects such as neutron and quark/hybrid stars as well as SNs. Acknowledgments {#acknowledgments .unnumbered} =============== I am thankful to the Tomsk State University Competitiveness Improvement Program and to RFBR (research project No. 15-02-00293) for partial support. [99]{} D. Alba,
{ "pile_set_name": "ArXiv" }
--- abstract: 'We develop a framework for approximating collapsed Gibbs sampling in generative latent variable cluster models. Collapsed Gibbs is a popular MCMC method, which integrates out variables in the posterior to improve mixing. Unfortunately for many complex models, integrating out these variables is either analytically or computationally intractable. We efficiently approximate the necessary collapsed Gibbs integrals by borrowing ideas from expectation propagation. We present two case studies where exact collapsed Gibbs sampling is intractable: mixtures of Student-$t$’s and time series clustering. Our experiments on real and synthetic data show that our approximate sampler enables a runtime-accuracy tradeoff in sampling these types of models, providing results with competitive accuracy much more rapidly than the naive Gibbs samplers one would otherwise rely on in these scenarios.' author: - 'Christopher Aicher[^1] and Emily B. Fox[^2]' bibliography: - 'bib.bib' title: Approximate Collapsed Gibbs Clustering with Expectation Propagation --- Introduction ============ Background {#sec:background} ========== Approximate Collapsed Gibbs Sampling ==================================== \[sec:inference\] Case Studies ============ \[sec:case\_studies\] We consider two motivating examples for the use of our EP-based approximate collapsed Gibbs algorithm. The first is a mixture of Student-$t$ distributions, which can capture heavy-tailed emissions crucial in robust modeling (i.e., reducing sensitivity to outliers). The second example is a time series clustering model. Mixture of Multivariate Student-$t$ ----------------------------------- \[sec:student\] Time Series Clustering ---------------------- \[sec:tscluster\] Experiments =========== \[sec:experiments\] Conclusion ========== We presented a framework for constructing approximate collapsed Gibbs samplers for efficient inference in complex clustering models. The key idea is to approximately marginalize the nuisance variables by using EP to approximate the conditional distributions of the variables with an individual observation removed; by approximating this conditional, the required integral becomes tractable in a much wider range of scenarios than that of conjugate models. Our use of this EP approximation takes two steps from its traditional use: (1) we approximate a (nearly) full conditional rather than directly targeting the posterior, and (2) our targeted conditional changes as we sample the cluster assignment variables. For the latter, we provided a brief analysis and demonstrated the impact of the changing target, drawing parallels to previously proposed samplers that use stale sufficient statistics. We demonstrated how to apply our EP-based approximate sampling approach in two applications: mixtures of Student-$t$ distributions and time series clustering. Our experiments demonstrate that our EP approximate collapsed samplers mix more rapidly than naive Gibbs, while being computationally scalable and analytically tractable. We expect this method to provide the greatest benefit when approximately collapsing large parameter spaces. There are many interesting directions for future work, including deriving bounds on the asymptotic convergence of our approximate sampler [@pillai2014ergodicity; @dinh2017convergence], considering different likelihood approximation update rules such as *power EP* [@minka2004power], and extending our idea of approximately integrating out variables to other samplers. 
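To fix ideas, the schematic sketch below (ours, not the authors' released code) shows one sweep of collapsed Gibbs over cluster labels in a finite mixture; `approx_log_marginal` is a hypothetical callback standing in for the predictive likelihood with cluster parameters integrated out, which is exactly the quantity this paper approximates with EP when no closed form exists.

```python
import numpy as np

def gibbs_sweep(z, x, K, alpha, approx_log_marginal, rng):
    """One sweep of (approximate) collapsed Gibbs over cluster labels z.

    approx_log_marginal(members, x_i) ~= log p(x_i | members), the predictive
    likelihood of x_i given the points currently in a cluster, with the
    cluster parameters integrated out (exact for conjugate models, an
    EP-style surrogate otherwise).
    """
    for i in range(len(x)):
        z[i] = -1                               # remove x_i from its cluster
        log_p = np.empty(K)
        for k in range(K):
            members = x[z == k]
            log_p[k] = (np.log(len(members) + alpha / K)   # prior weight
                        + approx_log_marginal(members, x[i]))
        p = np.exp(log_p - log_p.max())         # stabilized softmax
        z[i] = rng.choice(K, p=p / p.sum())     # resample the label
    return z
```

In the conjugate case the callback would return a closed-form posterior predictive; the point of the paper is a tractable EP surrogate for it in models such as mixtures of Student-$t$'s. Analyzing how the staleness of that surrogate affects the sampler's stationary distribution is one of the open directions just mentioned. 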
For the analysis, [@dehaene2015expectation] showed that EP with Gaussian approximations is exact in the large data limit; one could extend these results to consider the case of data being allocated amongst *multiple* clusters. Another interesting direction is to explore our EP-based approximate collapsing within the context of variational inference, possibly extending the set of models for which collapsed variational Bayes [@teh2007collapsed] is possible. Finally, there are many ways in which our algorithm could be made even more scalable through distributed, asynchronous implementations, such as in [@ahmed2012scalable]. Acknowledgements {#acknowledgements .unnumbered} ================ We would like to thank Nick Foti, You “Shirley” Ren and Alex Tank for helpful discussions. This paper is based upon work supported by the NSF CAREER Award IIS-1350133. This paper is an extension of our previous workshop paper [@aicher2016scalable]. **Appendix** Mixture of Multivariate Student-$t$ {#app:student} =================================== Time Series Clustering {#app:tscluster} ====================== EP Convergence {#app:ep_converge} ============== Synthetic Time Series Trace Plots {#app:traceplots} ================================= Seattle Housing Data {#app:housing_data} ==================== [^1]: Department of Statistics, University of Washington, `aicherc@uw.edu` [^2]: Department of Computer Science and Statistics, University of Washington, `ebfox@uw.edu`
{ "pile_set_name": "ArXiv" }
--- abstract: 'The notion of Kolmogorov complexity (=the minimal length of a program that generates some object) is often useful as a kind of language that allows us to reformulate some notions and therefore provide new intuition. In this survey we provide (with minimal comments) many different examples where notions and statements that involve Kolmogorov complexity are compared with their counterparts not involving complexity.' author: - 'Alexander Shen[^1]' title: Kolmogorov complexity as a language --- Introduction ============ The notion of Kolmogorov complexity is often used as a tool; one may ask, however, whether it is indeed a powerful technique or just a way to present the argument in a more intuitive way (for people accustomed to this notion). The goal of this paper is to provide a series of examples that support both viewpoints. Each example shows some statements or notions that use complexity, and their counterparts that do not mention complexity. In some cases these two parts are direct translations of each other (and sometimes the equivalence can be proved), in other cases they just have the same underlying intuition but reflect it in different ways. Hoping that most readers already know what Kolmogorov (algorithmic, description) complexity is, we still provide a short reminder to fix notation and terminology. The complexity of a bit string $x$ is the minimal length of a program that produces $x$. (The programs are also bit strings; they have no input and may produce a binary string as output.) If $D(p)$ is the output of program $p$, the complexity of string $x$ with respect to $D$ is defined as $K_D(x)=\inf\{ |p|\colon D(p)=x\}$. This definition depends on the choice of programming language (i.e., its interpreter $D$), but we can choose an optimal $D$ that makes $K_D$ minimal (up to an $O(1)$ constant). Fixing some optimal $D$, we call $K_D(x)$ the *Kolmogorov complexity* of $x$ and denote it by $K(x)$. A technical clarification: there are several different versions of Kolmogorov complexity; if we require the programming language to be self-delimiting or prefix-free (no program is a prefix of another one), we get *prefix* complexity, usually denoted by $K(x)$; without this requirement we get *plain* complexity, usually denoted by $C(x)$; they are quite close to each other (the difference is $O(\log n)$ for $n$-bit strings and usually can be ignored). *Conditional* complexity of a string $x$ given condition $y$ is the minimal length of a program that gets $y$ as input and transforms it into $x$. Again we need to choose an optimal programming language (for programs with input) among all languages. In this way we get *plain conditional complexity* $C(x|y)$; there also exists a prefix version $K(x|y)$. The value of $C(x)$ can be interpreted as the “amount of information” in $x$, measured in bits. The value of $C(x|y)$ measures the amount of information that exists in $x$ but not in $y$, and the difference $I(y:x)=C(x)-C(x|y)$ measures the amount of information in $y$ about $x$. The latter quantity is almost commutative (classical Kolmogorov – Levin theorem, one of the first results about Kolmogorov complexity) and can be interpreted as “mutual information” in $x$ and $y$.
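Although $K(x)$ is uncomputable, any concrete compressor yields an upper bound on plain complexity, up to the additive constant needed to hard-wire the decompressor into the optimal machine $D$. A tiny illustration (zlib is used here purely as a convenient stand-in for some fixed decompression algorithm):

```python
import os
import zlib

def complexity_upper_bound_bits(x: bytes) -> int:
    # Any compressor upper-bounds C(x) up to an O(1) additive constant
    # (the description of the decompressor itself).
    return 8 * len(zlib.compress(x, 9))

print(complexity_upper_bound_bits(os.urandom(1000)))  # incompressible: > 8000 bits
print(complexity_upper_bound_bits(b"01" * 500))       # regular: far fewer bits
```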
Foundations of probability theory ================================= Random sequences ---------------- One of the motivations for the notion of description complexity was to define randomness: an $n$-bit string is random if it does not have regularities that allow us to describe it much shorter, i.e., if its complexity is close to $n$. For finite strings we do not get a sharp dividing line between random and non-random objects; to get such a line we have to consider infinite sequences. The most popular definition of random infinite sequences was suggested by Per Martin-Löf. In terms of complexity one can rephrase it as follows: a bit sequence $\omega_1\omega_2\ldots$ is random if $K(\omega_1\ldots\omega_n)\ge n-c$ for some $c$ and for all $n$. (This reformulation was suggested by Chaitin; the equivalence was proved by Schnorr and Levin. See more in [@livitanyi; @uppsala-notes].) Note that in classical probability theory there is no such thing as an individual random object. We say, for example, that a randomly generated bit sequence $\omega_1\omega_2\ldots$ satisfies the strong law of large numbers (has limit frequency $\lim (\omega_1+\ldots+\omega_n)/n$ equal to $1/2$) almost surely, but this is just a measure-theoretic statement saying that the set of all $\omega$ with limit frequency $1/2$ has measure $1$. This statement (SLLN) can be proved by using Stirling's formula for factorials or the Chernoff bound. Using the notion of Martin-Löf randomness, we can split this statement into two: (1) every Martin-Löf random sequence satisfies SLLN; and (2) the set of Martin-Löf random sequences has measure $1$. The second part is a general statement about Martin-Löf randomness (and is easy to prove). The statement (1) can be proved as follows: if the frequency of ones in a long prefix of $\omega$ deviates significantly from $1/2$, this fact can be used to compress this prefix, e.g., using arithmetic coding or some other technique (Lempel–Ziv compression can also be used), and this is impossible for a random sequence according to the definition. (In fact this argument is a reformulation of a martingale proof of SLLN.) Other classical results (e.g., the law of the iterated logarithm, the ergodic theorem) can also be presented in this way. Sampling random strings ----------------------- In the proceedings of this conference, S. Aaronson proves a result that can be considered as a connection between two meanings of the word “random” for finite strings. Assume that we bought some device which is marketed as a random number generator. It has some physical source of randomness inside. The advertisement says that, being switched on, this device produces an $n$-bit random string. What could be the exact meaning of this sentence? There are two ways to understand it. First: the output distribution of this machine is close to the uniform distribution on $n$-bit strings. Second: with high probability the output string is random (=incompressible). The paper of Aaronson establishes some connections between these two interpretations (using some additional machinery). Counting arguments and existence proofs ======================================= A simple example ---------------- Kolmogorov complexity is often used to rephrase counting arguments. We give a simple example (more can be found in [@livitanyi]). Let us prove by counting that there exists an $n\times n$ bit matrix without $3\log n\times 3\log n$ uniform minors. (We obtain minors by selecting some rows and columns; the minor is *uniform* if all its elements are the same.)
**Counting**: Let us give an upper bound for the number of matrices with uniform minors. There are at most $n^{3\log n}\times n^{3\log n}$ positions for a minor (we select $3\log n$ rows and $3\log n$ columns). For each position we have $2$ possibilities for the minor (zeros or ones) and $2^{n^2-(3\log n)^2}$ possibilities for the rest, so the total number of matrices with uniform minors does not exceed $$n^{3\log n} \cdot n^{3\log n} \cdot 2 \cdot 2^{n^2-9\log^2n}=2^{n^2-3\log^2 n +1}< 2^{n^2},$$ so there are matrices without uniform minors. **Kolmogorov complexity**: Let us prove that an incompressible matrix has no uniform minors. In other words, let us show that a matrix with a uniform minor is compressible. Indeed, while listing the elements of such a matrix we do not need to specify all $9\log^2 n$ bits in the uniform minor individually. Instead, it is enough to specify the numbers of the rows of the minor ($3\log n$ numbers; each contains $\log n$ bits) as well as the numbers of columns (together this gives $6\log^2 n$ bits), and to specify the type of the minor ($1$ bit), so we need only $6\log^2 n + 1 \ll 9 \log^2 n$ bits (plus the bits outside the minor, of course). One-tape Turing machines ------------------------ One of the first results of computational complexity theory was the proof that some simple operations (checking symmetry or copying) require quadratic time when performed by a one-tape Turing machine. This proof becomes very natural if presented in terms of Kolmogorov complexity. Assume that initially some string $x$ of length $n$ is written on the tape (followed by the end-marker and empty cells). The task is
{ "pile_set_name": "ArXiv" }
--- abstract: 'We investigate a novel global orientation regression approach for articulated objects using a deep convolutional neural network. This is integrated with an in-plane image derotation scheme, DeROT, to tackle the problem of per-frame fingertip detection in depth images. The method reduces the complexity of learning in the space of articulated poses which is demonstrated by using two distinct state-of-the-art learning based hand pose estimation methods applied to fingertip detection. Significant classification improvements are shown over the baseline implementation. Our framework involves no tracking, kinematic constraints or explicit prior model of the articulated object in hand. To support our approach we also describe a new pipeline for high accuracy magnetic annotation and labeling of objects imaged by a depth camera.' bibliography: - 'egbib.bib' title: 'Rule Of Thumb: Deep derotation for improved fingertip detection' --- Introduction {#sec:intro} ============ ![ Examples from HandNet test set detections. The colors represent fingertips that are correctly located and identified. The white boxes indicate false detections with the error threshold chosen to be 1cm. The top two rows are trained and tested on non-derotated data. The bottom two are trained and tested on derotated data and then rotated back to the non-derotated space. The detections are overlaid on the IR image from the camera which is not part of the classification process. a) Successful examples for all methods. b) Representative challenging examples for which derotation enables better performance. c) Failure cases where derotation fails to improve the results.[]{data-label="fig:fingertips"}](Result.png){width="0.98\linewidth"} In this paper we propose a method for normalizing out the effects of rotation on highly articulated motion of deforming geometric surfaces such as hands observed by a depth camera. Changing the global rotation of an object directly increases the variation in appearance of the object parts. The work of [@KimHIBCOO12] physically removes this variability with a wrist-worn camera and samples only a single 3D point on each finger to perform full hand pose estimation. For markerless situations, removing variability through partial canonization can significantly reduce the space of possible images used for pose learning instead of trying to explicitly learn the variability through data augmentation. In [@LepetitLF05] the authors show that learning a derotated 2D patch instead of the original one around a feature point dramatically reduces the learning capacity required and improves the classification results while using fewer randomized trees. To develop our method we use fingertip detection as a challenging representative scenario with a propensity for self occlusion and high rotational variability relative to an imaging sensor. Many approaches in the literature use fingertip or hand part detection towards the goal of full hand pose estimation ([@KeskinKKA11],[@qian2014realtime],[@tompson14tog],[@Wang09]); however, they all approach the problem by training on datasets augmented with rotational variability. Instead, we propose to remove this hand space variability during both the training phase and run-time. To this end we propose to learn the rotation using a deep convolutional neural network (CNN) in a regression context based on a network similar to that of [@tompson14tog].
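As a rough sketch of what such an orientation-regression network might look like, consider the following; the layer sizes, the quaternion output parameterization, and the loss are our assumptions for illustration, not the architecture actually used here:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OrientationRegressor(nn.Module):
    """Hypothetical sketch: regress a unit quaternion from a 96x96 depth patch."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 24 * 24, 4)

    def forward(self, depth):                  # depth: (B, 1, 96, 96)
        q = self.head(self.features(depth).flatten(1))
        return F.normalize(q, dim=1)           # project onto unit quaternions

def quaternion_loss(q_pred, q_true):
    # abs() handles the q / -q double cover of SO(3)
    return (1.0 - (q_pred * q_true).sum(dim=1).abs()).mean()

model = OrientationRegressor()
q = model(torch.randn(8, 1, 96, 96))           # (8, 4), unit-norm rows
```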
We show how this can be used to predict full three degrees of freedom (DOF) orientation information on a database of hand images captured by a depth sensor. We combine the predicted orientation with a novel in-plane derotation scheme. The “Rule of thumb” is derived from the following insight: there is almost always an in-plane rotation which can be applied to an image of the hand that forces the base of the thumb to be on the right side of the image. This implies that the ambiguity inherent in rotationally variant features can be overcome by derotating the hand image to a canonical pose instead of augmenting a dataset with all variations of the rotational degrees of freedom as is commonly done. Figure \[fig:fingertips\] shows examples of extensive pose variation that can benefit from our approach [^1]. No currently available hand datasets ([@ZhaoCX12],[@qian2014realtime],[@tompson14tog]) include accurate full 3 DOF ground truth hand orientations on a large database of real depth images. Using joint location data from NYUHands [@tompson14tog] it is possible to extract a global hand orientation per pose. However, we found that the size of this dataset and rotational variability are not optimal for learning to predict 3 DOF orientation. A significant contribution of this paper is therefore the creation of a new, large-scale database of fully annotated depth images with 212928 unique hand poses captured by an Intel RealSense camera that we call HandNet[^2]. For the purpose of effectively annotating such a large dataset we describe a novel image annotation technique. To overcome the severe occlusion inherent in such a process we use DC magnetic trackers which are surprisingly sparsely used by the vision community considering their high accuracy, speed and robustness to occlusions. Using our deep derotation method (DeROT) we show up to 20.5% improvement in mean average precision (mAP) over our baseline results for two state-of-the-art approaches for fingertip detection in depth images, namely, a random decision tree [@KeskinKKA11] (RDT) and a deep convolutional neural network [@tompson14tog] (CNN). We also compare our results to a non-learning based method similar to PCA and show that it produces inferior results, further supporting the proposed use of DeROT. Building HandNet: Creation and annotation {#sec:database} ========================================= ![ The data capture setup. a) 2mm magnetic sensors. The larger rectangular sensors are not used. b) A fingertip sensor inside the inner seam. c) Virtual model used for planning a multi-sensor setup. We only use 5 sensors. d) The RealSense camera rigidly fixed to the TrakStar transmitter. e) The back of the wooden calibration board where the glass sensor housings are firmly pushed through. f) The front of the calibration board where the glass sensor housings are visible on the corners as seen in the inset.[]{data-label="fig:dataprocess"}](DataRecording.png){width="0.95\linewidth"} Synthetic databases such as those created using [@libhand] have a severe disadvantage in that they cannot accurately account for natural hand motion, occlusions and noise characteristics of real depth cameras. The creation of a large hand pose database of real depth images with consistent annotations is therefore of great importance, but beyond the capability of human annotators. The NYUHands database [@tompson14tog] uses a full model of the hand and a three-camera setup to annotate hand joint locations.
There are instances where fingers are obstructed and accurate orientation information is not reliable. Similarly the method of [@Wang09] uses inverse kinematics coupled with a colored glove which also has the disadvantage of not having explicitly measured orientation as well as fingertip locations which are obstructed from the depth camera. An alternative to model based systems is sparse marker systems such as those used by [@ZhaoCX12]; however, the excessive cost of a modern mocap setup such as Vicon as well as the occlusion problem make such an approach unattractive. In contrast, modern DC magnetic trackers like the TrakStar [@trakstar] are robust to metallic interference and obstruction by non-ferrous metals, and provide sub-millimeter and sub-degree accuracy for location and orientation relative to a fixed base station. Despite their almost non-existent use in modern computer vision literature, we have found them to be an excellent measurement and annotation tool. **Sensors.** To build and annotate our HandNet database we use a RealSense camera combined with $2mm$ TrakStar magnetic trackers. We affix the sensors to a user’s hand and fingertips by using tight elastic loops with sensors in sewn seam pockets. This prevents lateral and medial movement along the finger. This can be seen in Figure \[fig:dataprocess\]. The skin tight elastic loops have an additional significant benefit over gloves in that the depth profile and hand movements are not affected by the attached sensors and thus do not pollute the data. **Calibration.** Camera calibration with known correspondences is a well-studied problem [@Zhang96]. However, in our case we need to calibrate between a camera and a sensor frame. We do this by positioning the magnetic sensors on the corners of a checkerboard pattern thereby creating physical correspondence between the detected corner locations and the actual sensors. This setup can be seen in Figure \[fig:dataprocess\]. We use the extracted 2D locations of the corner points on the calibration board [@bouguet2004camera] together with the sampled sensor 3D locations to perform EPnP [@Epnp09] to determine the extrinsic configuration between the devices. ![ The available data annotations after calibration. a) Color image. Illustrates a full hand setup for this work. The color is not used. b) The RGB axes indicate the measured location and orientation of each fingertip and the back of the palm. c) IR image (not used) overlaid with the labels generated from the raycasting described in Section \[sec:database\]. d) IR image overlaid with the generated heatmaps per fingertip and the global orientation of the hand represented as an oriented bounding box (not used). []{data-label="fig:annotation"}](HandData.png){width="0.9\linewidth"} **Annotation.** We model each sensor as a 3D oriented ellipsoid. We then raycast the ellipsoid into the camera frame and set the label to be the identity of the ellipsoid closest to the camera for every pixel.
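A minimal sketch of this extrinsic-calibration step with OpenCV's EPnP solver; since the real inputs are measured sensor positions and detected checkerboard corners, the correspondences below are synthetic stand-ins fabricated from a known pose:

```python
import numpy as np
import cv2

# Synthetic stand-ins: sensor positions in the tracker frame and their
# "detected" pixel locations, fabricated by projecting through a known pose.
pts_tracker = np.array([[0.00, 0.00, 0.50], [0.10, 0.00, 0.50],
                        [0.10, 0.10, 0.50], [0.00, 0.10, 0.50],
                        [0.05, 0.05, 0.60]])
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
rvec_gt = np.array([[0.10], [-0.20], [0.05]])
tvec_gt = np.array([[0.02], [-0.01], [0.10]])
pts_image, _ = cv2.projectPoints(pts_tracker, rvec_gt, tvec_gt, K, None)

# EPnP recovers the extrinsics mapping the tracker frame into the camera frame.
ok, rvec, tvec = cv2.solvePnP(pts_tracker, pts_image, K, None,
                              flags=cv2.SOLVEPNP_EPNP)
R, _ = cv2.Rodrigues(rvec)
print(ok, rvec.ravel(), tvec.ravel())  # should reproduce the ground-truth pose
```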
{ "pile_set_name": "ArXiv" }
--- abstract: 'The critical behaviour of three-dimensional semi-infinite Ising ferromagnets at planar surfaces with (i) random surface-bond disorder or (ii) a terrace of monatomic height and macroscopic size is considered. The Griffiths-Kelly-Sherman correlation inequalities are shown to impose constraints on the order-parameter density at the surface, which yield upper and lower bounds for the surface critical exponent $\beta_1$. If the surface bonds do not exceed the threshold for supercritical enhancement of the pure system, these bounds force $\beta_1$ to take the value $\beta_1^{\text{ord}}$ of the latter system’s ordinary transition. This explains the robustness of $\beta_1^{\text{ord}}$ to such surface imperfections observed in recent Monte Carlo simulations.' address: | Fachbereich Physik, Universität - Gesamthochschule Essen,\ D-45117 Essen, Federal Republic of Germany author: - 'H. W. Diehl' title: 'Critical behaviour of three-dimensional Ising ferromagnets at imperfect surfaces: Bounds on the surface critical exponent $\beta_1$' --- epsf =10000 In a recent paper Pleimling and Selke (PS) [@PS98] reported the results of a detailed Monte Carlo analysis of the effects of two types of surface imperfections on the surface critical behaviour of $d=3$ dimensional semi-infinite Ising models with planar surfaces and ferromagnetic nearest-neighbour (NN) interactions: (i) random surface-bond disorder and (ii) a terrace of monatomic height and macroscopic size on the surface. For type (i), both the ordinary and special transitions were studied. They found that the asymptotic temperature dependence of the disorder-averaged surface magnetization on approaching the bulk critical temperature $T_c$ from below could be represented by a power law $\sim |\tau|^{\beta_1}$ with $\tau\equiv (T-T_c)/T_c$, where $\beta_1$ agreed, within the available numerical accuracy, with the respective values $\beta_1^{\text{ord}}\simeq 0.8$ and $\beta_1^{\text{sp}}\simeq 0.2$ of the pure system’s ordinary and special transitions. For type (ii), where the interaction constants were chosen such that only an ordinary transition could occur, the same value $\beta_1^{\text{ord}}$ of the perfect system was found for $\beta_1$. Their findings for the case of (i) are in conformity with the relevance/irrelevance criteria of Diehl and N[ü]{}sser [@DN90a; @Har74] according to which the pure system’s surface critical behaviour should be expected to be stable or unstable with respect to short-range correlated random surface-bond disorder depending on whether the surface specific heat $C_{11}$ [@Die86a] of the pure system remains finite or diverges at the transition. It is fairly well established [@DD83b; @DDE83] that $C_{11}$ approaches a finite constant at the ordinary transition, but has a leading thermal singularity $\sim|\tau|^{(d-1)\nu -2\Phi}$ at the special transition, where $\Phi$ is the surface crossover exponent. In the latter case, the condition for irrelevance, $\Phi<(d-1)\nu/2$, reduces to $$\label{irrelcon} \Phi<\nu$$ in $d=3$ bulk dimensions. Since various Monte Carlo simulations [@LB90a; @RDW92; @RDWW93] (though not all [@vrf]) and renewed field-theory estimates [@DS94] suggest a value of $\Phi$ between $0.5$ and $0.6$, definitely smaller than the accepted value $0.63$ of $\nu$ for $d=3$, one may be quite confident that the condition (\[irrelcon\]) holds. Thus short-range correlated surface-bond disorder should be irrelevant in the renormalization-group sense at both transitions. 
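For readers who want to experiment, here is a bare-bones Metropolis sketch of the model just described: simple cubic lattice, periodic boundary conditions in $x$ and $y$, free boundaries in $z$, and quenched random ferromagnetic bonds within the two surface layers. Sizes, temperature and the bond distribution are illustrative choices, and near criticality one would use a cluster algorithm in practice:

```python
import numpy as np

rng = np.random.default_rng(1)
L, Lz, J = 16, 16, 1.0
T = 4.51  # roughly the bulk critical temperature of the 3D Ising model (units J/k_B)
s = rng.choice(np.array([-1, 1], dtype=np.int8), size=(L, L, Lz))

# Quenched random surface bonds: each in-plane bond on the free layers z=0 and
# z=Lz-1 gets an independent coupling uniform in [0.5, 1.5]; all other bonds
# (including surface-to-bulk ones) have strength J, as in the model above.
Jsx = {z: rng.uniform(0.5, 1.5, (L, L)) for z in (0, Lz - 1)}  # (x,y,z)-(x+1,y,z)
Jsy = {z: rng.uniform(0.5, 1.5, (L, L)) for z in (0, Lz - 1)}  # (x,y,z)-(x,y+1,z)

def bond(x, y, z, axis):
    """Coupling of the bond leaving (x,y,z) in the +x (axis=0) or +y (axis=1) direction."""
    return (Jsx if axis == 0 else Jsy)[z][x, y] if z in Jsx else J

def metropolis_sweep():
    for x in range(L):
        for y in range(L):
            for z in range(Lz):
                h = bond(x, y, z, 0) * s[(x + 1) % L, y, z] \
                  + bond((x - 1) % L, y, z, 0) * s[(x - 1) % L, y, z] \
                  + bond(x, y, z, 1) * s[x, (y + 1) % L, z] \
                  + bond(x, (y - 1) % L, z, 1) * s[x, (y - 1) % L, z]
                if z + 1 < Lz: h += J * s[x, y, z + 1]   # free boundaries in z
                if z > 0:      h += J * s[x, y, z - 1]
                dE = 2.0 * s[x, y, z] * h                # E = -sum_ij J_ij s_i s_j
                if dE <= 0 or rng.random() < np.exp(-dE / T):
                    s[x, y, z] = -s[x, y, z]

for _ in range(200):
    metropolis_sweep()
print(abs(s[:, :, 0].mean()))  # surface-layer magnetization per site
```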
Irrelevance criteria of the above Harris type [@DN90a; @Har74] seem to work quite well in practice. Yet, from a mathematical point of view, they are rather weak because they are nothing but a necessary (though not sufficient) condition for stability of the pure system’s critical behaviour. In this note, I shall employ the Griffiths-Kelly-Sherman (GKS) inequalities [@Gri67a] to obtain upper and lower bounds on the surface magnetization densities of both types of imperfect systems, bounds that are given by surface magnetizations of analogous systems without such imperfections. Their known asymptotic temperature dependence near $T_c$ will then be exploited to obtain restrictions on the surface critical behaviour of the imperfect systems considered. For some cases of interest studied by PS [@PS98], the equality $\beta_1=\beta_1^{\text{ord}}$ will be rigorously established. Following these authors, let us consider an Ising model with ferromagnetic NN interactions on a simple cubic lattice of size $L_x\times L_y\times L_z$. Periodic boundary conditions will be chosen along two principal axes (the $x$ and $y$ directions), and free boundary conditions along the third one (the $z$ direction), so that the surface consists of the top layer at $z=1$ and the bottom layer at $z=L_z$. Associated with each pair of spins on NN sites $i$ and $j$ is an interaction constant $J(i,j)>0$, which we assume to have the same value $J$ whenever $i$ or $j$ (or both) belong to layers with $1<z<L_z$. In the case of surface-bond disorder, which we consider first, the $J(i,j)\equiv J^{\text{(s)}}(i,j)$ of all NN pairs of surface sites are independent random variables. The probability density $P(J_1)$ of any one of these will be assumed to have support only in the interval $[J_1^<,J_1^>]$ (with $J_1^>>J_1^<>0$). This is in conformity with, but less restrictive than, PS’s assumption that $J_1$ takes just two values $J_1^<$ and $J_1^>$, either one with probability $1/2$. We will also assume that all (bulk and surface) spins are exposed to the same magnetic field $H>0$, whose limit $H\to 0^+$ will be taken after the thermodynamic limit has been performed. Let $K\equiv J/k_BT$ and $h\equiv H/J$. Define $\bbox{r}^{(\text{s})}$ to be the set of all dimensionless surface coupling constants $J^{(\text{s})}(i,j)/J$. Let $m(i;K,\bbox{r}^{(\text{s})},h)\equiv \langle s_i\rangle$ be the thermal average of a spin at site $i$ for a given disorder configuration $\bbox{r}^{(\text{s})}$, and denote the corresponding quantity of the perfect system with uniform NN surface coupling $J_1=rJ$ as $m(i;K,r,h)$. Since all interactions are ferromagnetic, the GKS inequalities [@Gri67a] are valid. Averages of products of spin variables are monotone non-decreasing functions of all variables $J(i,j)$ and $H$. Hence, for finite $L_x,\, L_y$, and $L_z$, $m(i;K,\bbox{r}^{(\text{s})},h)$ is bounded by $m(i;K,r^<,h)$ from below and by $m(i;K,r^>,h)$ from above. We choose $i\equiv i_s$ to be a surface site, take the thermodynamic limit (first) and then let $H\to 0^+$. The bounds converge towards the respective values of $m_1(K,r,0^+)$, the spontaneous magnetization of the surface layers per site, for $r=r^<$ and $r^>$. 
Thus we obtain $$\label{Grifineq} m_1(K,r^<,0^+)\le m(i_s;K,\bbox{r}^{(\text{s})},0^+)\le m_1(K,r^>,0^+)\;.$$ The following limiting forms of $m_1$ are well established [@PS98; @Die86a; @LB90a; @rigres; @BD94]: $$\label{limform} m_1=\cases{C_1|\tau|^{\beta_1^{\text{ord}}}[1+o(\tau)]& as $\tau\to 0^-$ at fixed $r<r_c$,\cr C_1'|\tau|^{\beta_1^{\text{sp}}}[1+o(\tau)] &as $\tau\to 0^-$ at fixed $r=r_c$,\cr m_{1c}+O(\tau) &as $\tau\to 0^\pm$ at fixed $r>r_c$,}$$ where $r_c\simeq 1.50 $ [@LB90a] is the critical value associated with the special transition. The quantities $m_{1c}>0$, $C_1$, and $C'_1$ are nonuniversal,
{ "pile_set_name": "ArXiv" }
--- address: | Institute of Nuclear Physics, Catholic University of Louvain\ 2, Chemin du Cyclotron, B-1348 Louvain-la-Neuve, Belgium\ E-mail: govaerts@fynu.ucl.ac.be author: - Jan GOVAERTS title: | On the Road Towards\ the Quantum Geometer’s Universe:\ an Introduction to Four-Dimensional\ Supersymmetric Quantum Field Theory --- Introduction {#Sect1} ============ The organisers of the third edition of the COPROMAPH Workshops had thought it worthwhile to have the second series of lectures during the week-long meeting dedicated to an introduction to supersymmetric quantum field theories. An internationally renowned expert in the field had been invited, and was to deliver the course. Unfortunately, at the last minute fate decided otherwise, depriving the participants of what would have been an introduction to the subject of outstanding quality. The present author was finally found to be on hand, without being able to do full justice to the wide relevance of the topic, ranging from pure mathematics and topology to particle physics phenomenology at its utmost best in anticipation of the running of the LHC at CERN by 2007. Fate had it also that the same author had already delivered a similar series of lectures at the previous edition of the COPROMAPH Workshops,[@GovCOPRO2] which in broad brush strokes attempted to paint with vivid colours the fundamental principles of XX$^{\rm th}$ century physics, underlying all the basic conceptual advances having led to the relativistic quantum gauge field theory and classical general relativity frameworks for the present description of all known forms of elementary matter constituents and their fundamental interactions, as inscribed in the Standard Model of particle physics and Einstein’s classical theory of general relativity. At the same time, a few of the doors onto the roads winding deep into the uncharted territories of the physics that must lie well beyond were also opened. It is thus all too fitting that we get the opportunity to trace together a few steps onto one of these roads, in the embodiment of a Minkowski spacetime structure extended into a superspace including now also anticommuting coordinates in addition to the usual commuting spacetime ones. We are truly embarking on a journey onto the roads leading towards the quantum geometer’s universe! Even if only by marking the path by a few white and precious pebbles to guide us into the unknown territory when the time will have come for more solitary explorations of one’s own in the composition, with a definite African beat, of the music scores of the unfinished symphony of XXI$^{\rm st}$ century physics.[@GovCOPRO2] Even though none are based on actual experimental facts, there exist a series of theoretical and conceptual motivations for considering supersymmetric extensions of ordinary Yang–Mills theories in the quest for a fundamental unification. Spacetime supersymmetry is a symmetry that exchanges particles of integer — bosons — and half-integer — fermions — spin,[^1] enforcing specific relations between the properties and couplings of each class of particles, when supersymmetry remains manifest in the spectrum of the system.
In particular, since, as far as ultra-violet (UV) short-distance divergences of quantum field theories in four-dimensional Minkowski spacetime are concerned, fermionic fields are less ill-behaved than bosonic fields (namely, in terms of a cut-off in energy, divergences in fermionic loop amplitudes are usually only logarithmically divergent whereas those of bosonic loops are quadratically divergent), one should expect that in the presence of manifest supersymmetry, UV divergences should be better tamed for bosonic fields, being reduced to a logarithmic behaviour only as in the fermionic sector (this has important consequences which we shall not delve into here). Another aspect is that within the context of superstring and M-theory[@Strings] with bosonic and fermionic states, quantum consistency is ensured provided supersymmetries restrict the dynamics. In this sense, the existence of supersymmetry at some stage of unification beyond the Standard Model is often considered to be a natural prediction of M-theory. Besides such physics motivations just hinted at, supersymmetry has also proved to be of great value in mathematical physics, in the understanding of nonperturbative phenomena in quantum field theories and M-theory,[@Witten1; @MATH] and for uncovering deep connections between different fields of pure mathematics. The algebraic structures associated to Grassmann graded algebras are powerful tools with which to explore new limits in the concepts of geometry, topology and algebra.[@MATH] One cannot help but feel that a great opportunity would be missed if tomorrow’s quantum geometry made no use of supersymmetric algebraic structures. Since its discovery in the early 1970’s,[@SUSY1970; @WZ] applications of supersymmetry have been developed in such a diversity of directions and in so large a variety of fields of physics and mathematics, that it is impossible to do any justice to all that work in the span of any set of lectures, let alone only a few. Our aim here will thus be very modest. Namely, starting from the contents of the previous lecture notes,[@GovCOPRO2] build a bridge reaching the entry roads and the shores towards supersymmetric field theories and the fundamental concepts entering their construction. Not that the lectures delivered at the Workshop did not discuss the general superfield approach over superspace as the most efficient and transparent techniques for such constructions in the case of $\mathcal N=1$ supersymmetry, but the latter material being so widely and in such detailed form available from the literature, it is felt that rather a detailed introduction to the topics missing from Ref.  but necessary to understand supersymmetric field theories is of greater use and interest to most readers of this Proceedings volume.
With these notes, our aim is thus to equip any interested reader with a few handy concepts and tools to be added to the backpack to be carried on his/her explorer’s journey towards the quantum geometer’s universe of XXI$^{\rm st}$ century physics, in search of the new principle beyond the symmetry principle of XX$^{\rm th}$ century physics.[@GovCOPRO2] Also, for lack of space and time, even of the anticommuting type if the world happens to be supersymmetric indeed, we shall thus stop short of discussing explicitly any supersymmetric field theory in 4-dimensional Minkowski spacetime, even the simplest example of the $\mathcal N=1$ Wess-Zumino model[@WZ] that may be constructed using the hand-made tools of an amateur artist-composer in the art of supersymmetries. From where we shall leave the subject in these notes, further study could branch off into a variety of directions of wide ranging applications, beginning with general supersymmetric quantum mechanics and the general superspace and superfield techniques for $\mathcal N=1$ and $\mathcal N=2$ supersymmetric field theories with Yang–Mills internal gauge symmetries and the associated Higgs mechanism of gauge symmetry breaking, to further encompass the search for new physics at the LHC through the construction of supersymmetric extensions of the Standard Model, or also reaching towards the duality properties of supersymmetric Yang–Mills and M-theory, mirror geometry, topological string and quantum field theories,[@GovTQFT] etc., to name just a few examples.[@Strings; @Witten1; @MATH] Let us thus point out a few standard textbooks and lectures for large and diversified accounts of these classes of theories and more complete references to the original literature. Some important such material is listed in Refs.  and . In particular, the lectures delivered at the Workshop were to a significant degree inspired by the contents of Ref. . Any further search through the SPIRES database ([http://www.slac.stanford.edu/spires/hep/]{}; UK mirror: [http://www-spires.dur.ac.uk/spires/hep/]{}) will quickly uncover many more useful reviews. In Sec. \[Sec2\], we briefly recall the basic facts of relativistic quantum field theory for bosonic degrees of freedom, discussed at greater length in Ref. , in order to explain why such systems are the natural framework for describing relativistic quantum point-particles. The same considerations are then developed in Sec. \[Sec3\] in the case of fermionic degrees of freedom associated to particles of half-integer spin, based on a discussion of the theory of finite dimensional representations of the Lorentz group, leading in particular to the free Dirac equation for the description of spin 1/2 particles without interactions. Section \[Sec4\] then considers, as a simple introductory illustration of some facts essential and generic to supersymmetric field theories, and much in the same spirit as that of the discussion in Sec. \[Sec2\], the $\mathcal N=1$ supersymmetric harmonic oscillator which already displays quite a number of interesting properties.
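As a small numerical foretaste of the $\mathcal N=1$ supersymmetric harmonic oscillator of Sec. \[Sec4\], one may check its characteristic level pairing in a truncated Fock space. The conventions below, $\hbar=1$, equal bosonic and fermionic frequencies, $H=\omega(a^\dagger a+b^\dagger b)$ and $Q=\sqrt{\omega}\,a^\dagger b$, are a standard choice stated here as our assumption:

```python
import numpy as np

nb, omega = 12, 1.0
a = np.diag(np.sqrt(np.arange(1.0, nb)), k=1)   # truncated bosonic annihilator
b = np.array([[0.0, 1.0], [0.0, 0.0]])          # fermionic annihilator, b^2 = 0
Ib, If = np.eye(nb), np.eye(2)

H = omega * (np.kron(a.T @ a, If) + np.kron(Ib, b.T @ b))
Q = np.sqrt(omega) * np.kron(a.T, b)            # supercharge, Q^2 = 0

# {Q, Q+} = H holds exactly except at the truncation edge of the boson space
anti = Q @ Q.T + Q.T @ Q
print(np.allclose(anti[:-2, :-2], H[:-2, :-2]))  # True

E = np.sort(np.linalg.eigvalsh(H))
print(E[:7])  # [0. 1. 1. 2. 2. 3. 3.]
```

The single zero-energy ground state annihilated by both $Q$ and $Q^\dagger$, together with the exact twofold degeneracy of all excited levels, is the hallmark of unbroken supersymmetry.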
Section \[Sec5\] then concludes with a series of final remarks related to the actual construction of supersymmetric field theories based on the general concepts of the Lie symmetry algebraic structures inherent to such relativistic invariant quantum field theories and their manifest realisations through specific choices of field content, indeed the underlying theme to both these lectures and the previous ones.[@GovCOPRO2] Basics of Quantum Field Theory: A Compendium for Scalar Fields {#Sec2} ============================================================== Within a relativistic classical framework,[^2] material reality consists, on the one hand, of dynamical fields, and on the other hand, of point-particles. Fields act on particles through forces that they develop, such as the Lorentz force of the electromagnetic field for charged particles, while particles react back onto the fields being sources for the latter, for instance through
{ "pile_set_name": "ArXiv" }
--- abstract: 'The general lines of the derivation and the main properties of the master equations for the master amplitudes associated to a given Feynman graph are recalled. Some results for the 2-loop self-mass graph with 4 propagators are then presented.' --- [**Master Equations for Master Amplitudes $^{\star}$** ]{} [ [M. Caffo$^{ab}$, H. Czy[ż]{} $^{c}$, S. Laporta$^{b}$]{} and [E. Remiddi$^{ba}$\ ]{} ]{} - [*INFN, Sezione di Bologna, I-40126 Bologna, Italy* ]{} - [*Dipartimento di Fisica, Università di Bologna, I-40126 Bologna, Italy* ]{} - [*Institute of Physics, University of Silesia, PL-40007 Katowice, Poland* ]{} e-mail: [caffo@bo.infn.it\ czyz@usctoux1.cto.us.edu.pl\ laporta@bo.infn.it\ remiddi@bo.infn.it\ ]{} [——————————-\ PACS 11.10.-z Field theory\ PACS 11.10.Kk Field theories in dimensions other than four\ PACS 11.15.Bt General properties of perturbation theory\ ]{} $^{\star}$[Presented at the Zeuthen Workshop on Elementary Particle Physics - Loops and Legs in Gauge Theories - Rheinsberg, 19-24 April 1998. ]{} Introduction. =============== The integration by parts identities [[@CT]]{} are by now a standard tool for obtaining relations between the many integrals associated to any Feynman graph or, equivalently, for working out recurrence relations expressing the generic integral in terms of the “master integrals” or “master amplitudes” of the considered graph. A good example of the use of the integration by parts identities is given in [[@Tarasov]]{}, where the recurrence relations for all the 2-loop self-mass amplitudes are established in the arbitrary masses case.\ It has been shown in [[@ER]]{} that by that same technique one can obtain a set of linear first order differential equations for the master integrals themselves; the coefficients of the equations are ratios of polynomials with integer coefficients in all the variables; the equations are further non homogeneous, with the non homogeneous terms given by the master integrals of the simpler graphs obtained from the considered graph by removing one or more internal propagators.\ Restricting ourselves for simplicity to the self-mass case, for any Feynman graph the related integrals can in general be written in the form $$A(\alpha,p^2) = \int d^nk \ B(\alpha,p,k) \ .
{\label{1}}$$ In more detail, $ d^nk = d^nk_1...d^nk_l $ stands for the $ n $-continuous integration on an arbitrary number $ l $ of loops and $ k $ stands for the set of the corresponding loop momenta, so that there are altogether $ s=(l+1)(l+2)/2 $ different scalar products, including $ p^2 $; $ B(\alpha,p,k) $ is the product of any power of the scalar products in the numerators divided by any power of the propagators occurring in the graph (all masses will always be taken as different, unless otherwise stated); as the propagators are also simple combinations of the scalar products, simplifications might occur between numerator and denominator and as a consequence one expects quite in general $ (s-1) $ different factors altogether in the numerator and denominator, independently of the actual number of propagators present in the graph (graphs with less propagators have more factors in the numerator and [*viceversa*]{}); therefore the symbol $ \alpha $ in [Eq.(\[1\])]{} stands in fact for a set of $ (s-1) $ indices – the (integer) powers of the $ (s-1) $ factors.\ The integration by parts corresponding to the amplitudes of [Eq.(\[1\])]{} are $$\int d^nk \frac{\partial}{\partial k_{i,\mu}} \Bigl[ v_\mu B(\alpha,p,k) \Bigr] = 0 \ , \hskip 1cm i=1,...,l {\label{2}}$$ where $ v $ stands for any of the $ (l+1) $ vectors $ k $ and $ p $; there are therefore $ l(l+1) $ identities for each set of indices $ \alpha $. The identity is easily established – for small $ n $ the integral of the divergence vanishes. When the derivatives are explicitly carried out, one obtains the sum of a number of terms, all equal to a simple coefficient (an integer number or, occasionally, $ n $), times an integrand of the form $ B(\beta,k,p) $, with the set of indices $ \beta $ differing at most by a unity in two places from the set $ \alpha $.\ That set of identities is infinite; even if they are not all independent, they can be used for obtaining the recurrence relations, by which one can express each integral in terms of a few already mentioned “master amplitudes", through a relation of the form $$A(\alpha,p^2) = \sum \limits_{m} C(\alpha,m) A(m,p^2) + \sum \limits_{j} C(\alpha,j) A(j,p^2) \ , {\label{3}}$$ where the set of indices $ m $ takes the very few values corresponding to the master amplitudes, $ j $ refers to simpler master integrals in which one or more denominators are missing, and the coefficients $ C(\alpha,m), C(\alpha,j) $ are ratios of polynomials in $ n $, masses and $ p^2 $.\ Let us consider now one of the master amplitudes themselves, say the master amplitude identified by the set of indices $ m $; according to [Eq.(\[1\])]{} we can write $$A(m,p^2) = \int d^nk \ B(m,p,k) \ . {\label{4}}$$ By acting with $ p_\mu (\partial/\partial p_\mu) $ on both sides we get $$p^2 \frac{\partial}{\partial p^2} A(m,p^2) = \frac{1}{2} \int d^nk \ p_\mu \frac{\partial}{\partial p_\mu} \ B(m,p,k) \ . {\label{5}}$$ According to the discussion following [Eq.(\[2\])]{}, the [*r.h.s.*]{} is a combination of integrands; as [Eq.(\[3\])]{} applies to each of the corresponding integrals, one obtains the relations $$p^2 \frac{\partial}{\partial p^2} A(m,p^2) = \sum \limits_{m'} C(m,m') A(m',p^2) + \sum \limits_{j} C(m,j) A(j,p^2) \ , {\label{6}}$$ which are the required master equations. 
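The mechanics can be illustrated on the simplest possible case (our example, with the propagator and metric conventions left implicit): the one-loop tadpole $ T(\alpha) = \int d^nk \ (k^2-m^2)^{-\alpha} $. Carrying out the derivative in $ \int d^nk \ \partial/\partial k_\mu \bigl[ k_\mu (k^2-m^2)^{-\alpha} \bigr] = 0 $ and writing $ k^2 = (k^2-m^2)+m^2 $ gives $$(n-2\alpha)\, T(\alpha) - 2\alpha m^2\, T(\alpha+1) = 0 \ , \hskip 1cm T(\alpha+1) = \frac{n-2\alpha}{2\alpha m^2}\, T(\alpha) \ ,$$ so that every $ T(\alpha) $ reduces to the single master amplitude $ T(1) $, which is [Eq.(\[3\])]{} in its simplest incarnation.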
As in [Eq.(\[3\])]{}, $ j $ refers to simpler master integrals (in which one or more denominators are missing; they constitute the non-homogeneous part of the master equations), to be considered as known when studying the $ A(m,p^2) $.\ It is obvious from the derivation that the master equations can be established regardless of the number of loops. It is equally clear that for graphs depending on several external momenta (such as vertex or 4-body scattering graphs) one has simply to replace the single operator $ p_\mu (\partial/\partial p_\mu) $ of [Eq.(\[5\])]{} by the set of operators $ p^i_\mu (\partial/\partial p^j_\mu) $, where $ i,j $ run on all the external momenta, and with some more algebra one can obtain master equations in any desired Mandelstam variable.\ The master equations are a powerful tool for the study and the evaluation of the master amplitudes; among other things: - they provide information on the values of the master amplitudes at special kinematical points (such as $ p^2=0 $ in [Eq.(\[6\])]{}; the [*l.h.s.*]{} vanishes, as $ p^2=0 $ is a regular point, so that the [*r.h.s.*]{} is a relation among master amplitudes at $ p^2=0 $, usually sufficient to fix their values at that point); - the master equations are valid identically in $ n $, so that they can be expanded in $ (n-4) $ and solved recursively for the various terms of the expansion in $ (n-4) $, starting from the most singular (with 2-loop amplitudes one expects at most a double pole in $ (n-4) $); - when the initial value at $ p^2 = 0 $ has been obtained, the equations can be integrated by means of fast and precise numerical methods (for instance with a Runge-Kutta routine), so providing a convenient approach to their numerical evaluation; note that the numerical approach can be used both for arbitrary $ n $ or for $ n=4 $, once the expansion has been properly carried out; - the equations can be used to work out virtually any kind of expansion, in particular the large $
{ "pile_set_name": "ArXiv" }
--- abstract: | An RNA sequence is a string composed of four types of nucleotides, $A, C, G$, and $U$. Given an RNA sequence, the goal of the RNA folding problem is to find a maximum cardinality set of crossing-free pairs of the form $\{A,U\}$ or $\{C,G\}$. The problem is central in bioinformatics and has received much attention over the years. However, the current best algorithm for the problem still takes $\mathcal{O}\left(\frac{n^3}{\log^2 (n)}\right)$ time, which is only a slight improvement over the classic $\mathcal{O}(n^3)$ dynamic programming algorithm. Whether the RNA folding problem can be solved in $\mathcal{O}(n^{3-\epsilon})$ time remains an open problem. Recently, Abboud, Backurs, and Williams (FOCS’15) made the first progress by showing a conditional lower bound for a generalized version of the RNA folding problem based on a conjectured hardness of the $k$-clique problem. A drawback of their work is that they require the RNA sequence to have at least 36 types of letters, making their result biologically irrelevant. In this paper, we show that by constructing the gadgets using a lemma of Bringmann and Künnemann (FOCS’15) and surrounding them with some carefully designed sequences, the framework of Abboud et al. can be improved upon to work for the case where the alphabet size is 4, yielding a conditional lower bound for the RNA folding problem. We also investigate the Dyck edit distance problem. We demonstrate a reduction from RNA folding problem to Dyck edit distance problem of alphabet size 10, establishing a connection between the two fundamental string problems. This leads to a much simpler proof of the conditional lower bound for Dyck edit distance problem given by Abboud et al. and lowers the required alphabet size for the lower bound to work. author: - 'Yi-Jun Chang[^1]' title: Hardness of RNA Folding Problem with Four Symbols --- Keywords: RNA folding, Dyck edit distance, longest common subsequence, conditional lower bound, clique Introduction \[sec.intro\] ========================== An [*RNA sequence*]{} is a string composed of four types of nucleotides, namely $A, C, G$, and $U$. Given an RNA sequence, the goal of the [*RNA folding*]{} problem is to find a maximum cardinality set of crossing-free pairs of nucleotides, where all the pairs are either $\{A,U\}$ or $\{C,G\}$. The problem is central in bioinformatics and has found applications in many areas of molecular biology. For a more comprehensive exposition of the topic, the reader is referred to e.g. [@S15]. It is well-known that the problem can be solved in cubic time using a simple dynamic programming method [@DEKM98]. Due to the importance of RNA folding in practice, there has been a long line of research on improving the cubic time algorithm (See e.g. [@A99; @FG10; @PTZZ11; @PZTZ13; @S15; @VGF13]). Currently the best upper bound is $\mathcal{O}\left(\frac{n^3}{\log^2 (n)}\right)$ [@PZTZ13; @S15], and this can be obtained via four-Russian method or fast min-plus multiplication (based on ideas from Valiant’s CFG parser [@V75]). Whether the RNA folding problem can be solved in $\mathcal{O}(n^{3-\epsilon})$ time for some $\epsilon > 0$ is still a major open problem. Other than attempting to improve the upper bound, we should also approach the problem in the opposite direction, i.e. showing a lower bound or arguing why the problem is hard. A popular way to show hardness of a problem is to demonstrate a lower bound conditioned on some widely accepted hypothesis. 
\[Strongly Exponential Time Hypothesis (SETH)\] \[c-1\] There exists no $\epsilon, k_0 > 0$ such that $k$-SAT with $n$ variables can be solved in time $\mathcal{O}(2^{(1-\epsilon)n})$ for all $k > k_0$. \[c-2\] There exists no $\epsilon, k_0 > 0 $ such that $k$-clique on graphs with $n$ nodes can be solved in time $\tilde{\mathcal{O}}\left(n^{(\omega - \epsilon) k/3}\right)$ for all $k > k_0$, where $\omega < 2.373$ is the matrix multiplication exponent. Assuming that SETH (Conjecture \[c-1\]) holds, the following bounds are unattainable for any $\epsilon > 0$: - an $\mathcal{O}(n^{k-\epsilon})$ algorithm for the $k$-dominating set problem [@PR10], - an $\mathcal{O}(n^{2-\epsilon})$ algorithm for dynamic time warping, longest common subsequence, and edit distance [@ABV15*; @BI14; @BK15], - an $\mathcal{O}(m^{2-\epsilon})$ algorithm for ($3/2 - \epsilon$)-approximating the diameter of a graph with $m$ edges [@RV13]. As remarked in [@ABV15], it is easy to reduce the longest common subsequence problem on binary strings to the RNA folding problem as follows: Given two binary strings $X, Y$, we let $\hat{X} \in {\{A,C\}}^{|X|}$ be the string such that $\hat{X}[i] = A$ if $X[i] = 0$, $\hat{X}[i] = C$ if $X[i] = 1$, and we let $\hat{Y} \in {\{G,U\}}^{|Y|}$ be the string such that $\hat{Y}[i] = U$ if $Y[i] = 0$, $\hat{Y}[i] = G$ if $Y[i] = 1$. Then we have a 1-1 correspondence between RNA foldings of $\hat{X} \circ \hat{Y}^R$ (i.e. concatenation of $\hat{X}$ and the reversal of $\hat{Y}$) and common subsequences of $X$ and $Y$. It has been shown in [@BK15] that there is no $\mathcal{O}(n^{2-\epsilon})$ algorithm for the longest common subsequence problem on binary strings conditioned on SETH, and we immediately get the same conditional lower bound for RNA folding from the simple reduction! Very recently, based on a conjectured hardness of the $k$-clique problem (Conjecture \[c-2\]), a higher conditional lower bound was proved for a generalized version of the RNA folding problem (which coincides with the RNA folding problem when the alphabet size is 4) [@ABV15]: \[[@ABV15]\] \[thm-1\] If the generalized RNA folding problem on sequences of length $n$ with alphabet size 36 can be solved in $T(n)$ time, then $3k$-clique on graphs with $|V|=n$ can be solved in $\mathcal{O}\left(T\left(n^{k + 2} \log(n) \right)\right)$ time. Therefore, an $\mathcal{O}(n^{\omega - \epsilon})$ time algorithm for the generalized RNA folding with alphabet size at least 36 will disprove Conjecture \[c-2\], yielding a breakthrough in the parameterized complexity of the clique problem. However, the above theorem is irrelevant to the RNA folding problem in real life (which has alphabet size 4). It is unknown whether the generalized RNA folding for alphabet size $4$ admits a faster algorithm than the case for alphabet size $> 4$. In fact, there are examples of string algorithms whose running time scales with alphabet size (e.g. string matching with mismatches [@AL91] and jumbled indexing [@ACLL14; @CL15]). We also note that when the alphabet size is 2, the generalized RNA folding can be trivially solved in linear time. In this paper, we improve upon Theorem \[thm-1\] by showing the same conditional lower bound for the RNA folding problem: \[thm-2\] If the RNA folding problem on sequences in ${\{A,C,G,U\}}^n$ can be solved in $T(n)$ time, then $3k$-clique on graphs with $|V|=n$ can be solved in $\mathcal{O}\left(T\left(n^{k + 1} \log(n) \right)\right)$ time.
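The binary-LCS reduction described above is mechanical enough to transcribe directly (a sketch; the function name is ours, and the final check reuses the `nussinov` sketch given earlier):

```python
def lcs_to_rna_folding(x: str, y: str) -> str:
    # 0/1 of X become {A, C}; 0/1 of Y become {U, G}.  Only the 0-0 ({A,U})
    # and 1-1 ({C,G}) combinations can pair, so crossing-free pairings of the
    # output correspond one-to-one to common subsequences of x and y.
    xhat = x.translate(str.maketrans('01', 'AC'))
    yhat = y.translate(str.maketrans('01', 'UG'))
    return xhat + yhat[::-1]

assert nussinov(lcs_to_rna_folding('0110', '1010')) == 3  # LCS('0110','1010') = 3
```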
Note that we also get an $\mathcal{O}(n)$ factor improvement inside $T(\cdot)$, though it does not affect the conditional lower bound. The current state-of-the-art algorithm for $k$-clique, which takes $\tilde{\mathcal{O}}\left(n^{\omega k/3}\right)$ time, requires the use of fast matrix multiplication [@EG04], which does not perform very efficiently in practice. For combinatorial, non-algebraic algorithms for $k$-clique, the current best one runs in $\tilde{\mathcal{O}}\left(\frac{n^k}{\log^k (n)}\right)$ time [@V09], which is only slightly better than the trivial approach. As a result, by Theorem \[thm-2\], even an $\mathcal{O}(n^{3- \epsilon})$ time combinatorial algorithm for RNA folding will lead to an improvement for combinatorial algorithms for
{ "pile_set_name": "ArXiv" }
--- abstract: 'We study the iteration complexity of stochastic gradient descent (SGD) for minimizing the gradient norm of smooth, possibly nonconvex functions. We provide several results, implying that the classical $\mathcal{O}(\epsilon^{-4})$ upper bound (for making the average gradient norm less than $\epsilon$) cannot be improved upon, unless a combination of additional assumptions is made. Notably, this holds even if we limit ourselves to convex quadratic functions. We also show that for nonconvex functions, the feasibility of minimizing gradients with SGD is surprisingly sensitive to the choice of optimality criteria.' author: - | Yoel Drori\ Google Research - | Ohad Shamir\ Weizmann Institute of Science\ and Google Research bibliography: - 'bib.bib' title: | The Complexity of Finding Stationary Points\ with Stochastic Gradient Descent --- Introduction ============ Stochastic gradient descent (SGD) is today one of the main workhorses for solving large-scale supervised learning and optimization problems. Much of its popularity is due to its extreme simplicity: Given a function $f$ and an initialization point ${\mathbf{x}}$, we perform iterations of the form ${\mathbf{x}}_{t+1}={\mathbf{x}}_t-\eta_t {\mathbf{g}}_t$, where $\eta_t>0$ is a step size parameter and ${\mathbf{g}}_t$ is a stochastic vector which satisfies ${\mathbb{E}}[{\mathbf{g}}_t|{\mathbf{x}}_t]=\nabla f({\mathbf{x}}_t)$. For example, in the context of machine learning, $f({\mathbf{x}})$ might be the expected loss of some predictor parameterized by ${\mathbf{x}}$ (over some underlying data distribution) and ${\mathbf{g}}_t$ is the gradient of the loss w.r.t. a single data sample. For convex problems, the convergence rate of SGD to a global minimum of $f$ has been very well studied (for example, [@kushner2003stochastic; @nemirovski2009robust; @moulines2011non; @bertsekas2011incremental; @rakhlin2012making; @bottou2018optimization]). However, for nonconvex problems, convergence to a global minimum cannot in general be guaranteed. A reasonable substitute is to study the convergence to local minima, or at the very least, to stationary points. This can also be quantified as an optimization problem, where the goal is not to minimize $f({\mathbf{x}})$ over ${\mathbf{x}}$, but rather ${\|\nabla f({\mathbf{x}})\|}$. This question of finding stationary points has gained more attention in recent years, with the rise of deep learning and other large-scale nonconvex optimization methods. Compared to optimizing function values, the convergence of SGD in terms of minimizing the gradient norm is relatively less well-understood. A folklore result (see e.g., [@ghadimi2013stochastic], which we repeat in [Appendix \[sec:upperbounds\]]{} for completeness, as well as [@allen2018make]) states that for smooth (Lipschitz gradient) functions, ${\mathcal{O}}(\epsilon^{-4})$ iterations are sufficient to make the average expected gradient ${\mathbb{E}}[\frac{1}{T}\sum_{t=1}^{T}{\|\nabla f({\mathbf{x}}_t)\|}]$ (or minimal gradient ${\mathbb{E}}[\min_t {\|\nabla f({\mathbf{x}}_t)\|}]$) less than $\epsilon$, and it was widely conjectured that this is the best complexity achievable with SGD. However, this bound was recently improved in Fang et al. [@fang2019sharp], which showed a complexity bound of ${\mathcal{O}}(\epsilon^{-3.5})$ for SGD, under the following additional assumptions/algorithmic modifications: 1. 
**(Complex) aggregation.** Rather than considering the average or minimal gradient norm of the iterates, the algorithm considers the norm of a certain adaptive average of a suffix of the iterates (those which do not deviate too much from the final iterate). 2. \[as:hessian\] **Lipschitz Hessian.** The function is twice differentiable, with a Lipschitz Hessian as well as a Lipschitz gradient. 3. \[as:noise\] **“Dispersive” noise.** The stochastic noise satisfies a “dispersive” property, which intuitively implies that it is well-spread (it is satisfied, for example, for Gaussian or uniform noise in some ball). 4. \[as:dimension\] **Bounded dimension.** The dimension is bounded, in the sense that there is an explicit logarithmic dependence on it in the iteration complexity bound (in contrast, the folklore ${\mathcal{O}}(\epsilon^{-4})$ result is dimension-free). In fact, the result of Fang et al. is even stronger, as it shows convergence to a *second-order* stationary point (where the Hessian is nearly positive definite), but this will not be our focus here. Moreover, it is known that some dimension dependence is difficult to avoid when considering second-order stationary points (see [@simchowitz2017gap]). In this paper, we study the performance limits of SGD for minimizing gradients, through several variants of lower bounds under different assumptions. In particular, we wish to understand which of the assumptions/modifications above are necessary to break the $\epsilon^{-4}$ barrier. Our main take-home message is that most of these indeed appear to be needed in order to attain an iteration complexity better than ${\mathcal{O}}(\epsilon^{-4})$, in some cases even if we limit ourselves just to convex quadratic functions. In a bit more detail: - If we drop assumption \[as:dimension\] (bounded dimension), and consider the norm of the gradient at the output of some fixed, deterministic aggregation scheme (as opposed to returning, for example, an iterate with a minimal gradient norm), then perhaps surprisingly, we show that it is impossible to provide *any* finite complexity bound. This holds under mild algorithmic conditions, which extend far beyond SGD. This implies that for dimension-free bounds, we must either consider rather complicated aggregation schemes, apply randomization, or use optimality criteria which do not depend on a single point (e.g., consider the average gradient $\frac{1}{T}\sum_{t=1}^{T}{\|\nabla f({\mathbf{x}}_t)\|}$ or $\min_t {\|\nabla f(x_t)\|}$, as is often done in the literature). This result is formalized as [Thm. \[thm:infdim\]]{} in [Subsection \[subsec:fixedpoint\]]{}. - Without assumptions \[as:hessian\] (Lipschitz Hessian) and \[as:noise\] (dispersive noise), then even with rather arbitrary aggregation schemes, the iteration complexity of SGD is $\Omega(\epsilon^{-4})$. This result is formalized as [Thm. \[thm:aggregation\_step\]]{} in [Subsection \[subsec:sgdlowbound\]]{}. - Without aggregation and without assumption \[as:noise\] (dispersive noise), the iteration complexity of SGD required to satisfy ${\mathbb{E}}[\min_t {\|\nabla f({\mathbf{x}}_t)\|}]\leq \epsilon$ is $\Omega(\epsilon^{-3})$. This result is formalized as [Thm. \[thm:nonconvex\]]{} in [Subsection \[subsec:sgdlowbound\]]{}. 
- Without aggregation, the iteration complexity of SGD with “reasonable” step sizes to attain ${\mathbb{E}}[\min_t {\|\nabla f({\mathbf{x}}_t)\|}]\leq \epsilon$ is $\Omega(\epsilon^{-4})$, even for quadratic *convex* functions in moderate dimension and Gaussian noise (namely, all other assumptions are satisfied as well as convexity). This result is formalized as [Thm. \[thm:sgdlow\]]{} in Section \[sec:convex\]. It is important to note that the SGD algorithm, which is the main focus of this paper, is not necessarily an optimal algorithm (in terms of iteration complexity) for minimizing gradient norms in our stochastic optimization setting. For example, for convex problems, it is known that it is possible to achieve an iteration complexity of $\tilde{O}(\epsilon^{-2})$, strictly smaller than our $\Omega(\epsilon^{-4})$ lower bound (see [@foster2019complexity], and for a related result in the deterministic setting see [@nesterov2012make]). However, these algorithms are more complicated and less natural than plain SGD. Our results indicate that this algorithmic complexity might be a necessary price to pay in order to achieve optimal iteration complexity in some cases. Setting and Notation {#sec:setting} ==================== We let bold-face letters denote vectors, use ${\mathbf{e}}_i$ to denote the canonical unit vector, and use $[T]$ as shorthand for $\{1,2,\ldots,T\}$. We assume throughout that the objective $f$ maps ${\mathbb{R}}^d$ to ${\mathbb{R}}$, and either has an $L$-Lipschitz gradient for some fixed parameter $L>0$ or a $\rho$-Lipschitz Hessian for some $\rho>0$. We consider algorithms which use a standard stochastic first-order oracle ([@Book:NemirovskyYudin; @agarwal2009information]) in order to minimize some optimality criteria: This oracle, given a point ${\mathbf{x}}_
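As an illustration of the objects involved in the bounds above, the following is a minimal sketch of the SGD iteration ${\mathbf{x}}_{t+1}={\mathbf{x}}_t-\eta_t {\mathbf{g}}_t$ together with the two optimality criteria (average versus minimal gradient norm) compared throughout the paper. The quadratic objective, step size, horizon and noise level are illustrative assumptions only, not parameters taken from the results above.

```python
import numpy as np

def run_sgd(grad, x0, T, eta, noise_std, rng):
    """SGD with unbiased stochastic gradients g_t = grad f(x_t) + noise.
    Records the exact gradient norm ||grad f(x_t)|| along the trajectory."""
    x = np.asarray(x0, dtype=float).copy()
    norms = np.empty(T)
    for t in range(T):
        norms[t] = np.linalg.norm(grad(x))
        g = grad(x) + noise_std * rng.standard_normal(x.shape)
        x -= eta * g                    # constant step size for simplicity
    return norms

# toy convex quadratic f(x) = ||x||^2 / 2, so grad f(x) = x
rng = np.random.default_rng(0)
norms = run_sgd(lambda x: x, x0=np.ones(10), T=100_000,
                eta=1e-2, noise_std=1.0, rng=rng)
print("average criterion:", norms.mean())  # (1/T) sum_t ||grad f(x_t)||
print("minimal criterion:", norms.min())   # min_t ||grad f(x_t)||
```

Even in this toy setting the two printed criteria can differ markedly for a fixed step-size schedule, which is the sensitivity to the choice of optimality criterion that the lower bounds quantify.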
{ "pile_set_name": "ArXiv" }
--- abstract: 'In a MC study using a cluster update algorithm we investigate the finite-size scaling (FSS) of the correlation lengths of several representatives of the class of three-dimensional classical O($n$) symmetric spin models on the geometry $T^2\times\R$. For all considered models we find strong evidence for a linear relation between FSS amplitudes and scaling dimensions when applying [*antiperiodic*]{} instead of periodic boundary conditions across the torus. The considered type of scaling relation can be proven analytically for systems on two-dimensional strips with [*periodic*]{} bc using conformal field theory.' address: | Institut für Theoretische Physik, Universität Leipzig, 04109 Leipzig, Germany, and\ Institut für Physik, Johannes Gutenberg-Universität Mainz, 55099 Mainz, Germany author: - 'Martin Weigel[@mw] and Wolfhard Janke[@wj]' title: 'Universal amplitudes in the FSS of three-dimensional spin models' --- Conformal invariance of 2D systems at a critical point has turned out to be the key feature for a complete, analytical description of their critical behavior[@CardyBuch; @HenkelBuch]. In particular, conformal field theory (CFT) supplies exact FSS relations [*including the amplitudes*]{} for these 2D models. For strips of width $L$ with periodic boundary conditions, i.e. the $S^1\times\R$ geometry, Cardy[@Cardy84a] has shown that the FSS amplitudes of the correlation lengths $\xi_i$ of primary (conformally covariant) operators are entirely determined by the corresponding scaling dimensions $x_i$: $$\xi_i=\frac{A}{x_i}L, \label{amplit}$$ with a model independent overall amplitude $A=1/2\pi$. This result relies on both, the greater restrictive strength of the 2D conformal group compared with the higher dimensional cases, which is needed for the definition of the “primarity” of operators, and the fact that the considered geometry is conformally related to the corresponding flat space ${\R}^2$. Generalizing these results to more realistic 3D geometries within the CFT framework generically destroys the rich 2D group structure. Keeping at least the conformal flatness condition, Cardy[@Cardy85] arrived at a conjecture of the form (\[amplit\]) for the $S^{n-1}\times{\R},\,n>2$ geometries. Mainly for reasons of the numerical inaccessibility of these geometries Henkel[@Henkel86; @Henkel87] considered the situation where even this latter condition is cancelled: investigating the scaling behavior of the $S=\frac{1}{2}$ Ising model on 3D columns $T^2\times{\R}$ with periodic (pbc) or antiperiodic (apbc) boundary conditions across the torus via a transfer matrix calculation, he found for the correlation lengths of the magnetization and energy densities (the only primary operators in the [*2D*]{} model) in the scaling regime the ratios: $$\begin{array}{rcl} \xi_\sigma/\xi_\epsilon & = & 3.62(7) \hspace{0.5cm} \mbox{{\em periodic} bc,} \\ \xi_\sigma/\xi_\epsilon & = & 2.76(4) \hspace{0.5cm} \mbox{{\em antiperiodic} bc.} \\ \end{array}$$ Comparing this to the ratio of scaling dimensions of $x_\epsilon/x_\sigma=2.7326(16)$ a relation of the form (\[amplit\]) seems not to hold, [*unless the boundary conditions are changed to be antiperiodic*]{}. This is in qualitative agreement with numerical work done by Weston[@Weston]. In this letter, we first revisit the Ising model on the $T^2\times{\R}$ geometry trying to decide the exposed question with an independent Monte Carlo (MC) method and at an increased level of accuracy. 
The main purpose is to investigate further models – in our case O($n$)$,\,n>1$ spin models –, thus adding evidence that Henkel’s result is not just a numerical “accident” but reflects a universal property of such 3D systems. #### The model — {#the-model .unnumbered} We consider an O($n$) symmetric classical spin model with nearest-neighbor, ferromagnetic interactions in zero field with Hamiltonian $$\label{Hamilton} {\cal H} = -J \sum_{<ij>} {\bf s}_i\cdot{\bf s}_j,\;\;{\bf s}_i \in S^{n-1}.$$ The spins are located on a sc lattice of dimensions $(L_x,L_y,L_z)$ with $L_x=L_y$, modeling the $T^2$ geometry by applying periodic or antiperiodic bc along the $x$- and $y$-directions. Effects of the finite length of the lattice in the $z$-direction are minimized by choosing $L_z$ such that $L_z/\xi\gg 1$ and sticking the ends together via periodic bc. As is well known[@ZinnJustin], all of these models undergo a continuous phase transition in three dimensions, so that at the critical point the correlation length diverges linearly with the finite length $L=L_x$. Particular representatives of this class are the Ising ($n=1$), the XY ($n=2$) and the Heisenberg ($n=3$) model. #### The simulation — {#the-simulation .unnumbered} For our MC simulations we used the Wolff single-cluster update algorithm[@Wolff89] which is known to be more effective than the Swendsen-Wang[@Swendsen] update for three-dimensional systems[@WJChem]. As we want to consider antiperiodic bc for all systems in addition to the generic periodic bc case, the algorithm had to be adapted to this situation using the fact that in the case of nearest-neighbor interactions antiperiodic bc are equivalent to the insertion of a seam of antiferromagnetic bonds along the relevant boundary. The primary observables to measure are the connected correlation functions of the spin and the energy density: $$\begin{array}{rcl} G_{\sigma}^c({\bf x}_1,{\bf x}_2) & = & \langle{\bf s}({\bf x}_1)\cdot{\bf s}({\bf x}_2)\rangle-\langle{\bf s}\rangle\langle{\bf s}\rangle, \\ G_{\epsilon}^c({\bf x}_1,{\bf x}_2) & = & \langle\epsilon({\bf x}_1)\,\epsilon({\bf x}_2)\rangle-\langle\epsilon\rangle\langle\epsilon\rangle. \\ \end{array} \label{conncorr}$$ The correlation lengths $\xi_i$ in Eq. (\[amplit\]) being understood as measuring the correlations in the longitudinal $\R$-direction, one may average over estimates $\hat{G}^c({\bf x}_1,{\bf x}_2)$ such that $({\bf x}_1-{\bf x}_2)\parallel \hat{e}_z$ and $i\equiv|{\bf x}_1-{\bf x}_2|=\mbox{const}$, thus ending up at estimates $\hat{G}^{c,\parallel}(i)$. This average can be improved by considering a zero momentum mode projection[@WJ93], i.e., by correlating layer variables made up out of the sum of variables in a given layer $z=\mbox{const}$ instead of the original spins or local energies; this reduces the variance by a factor of $1/L_x^2$, the influence of transversal correlations being irrelevant for large distances $i$[@diplom]. Assuming an exponential long-distance behavior of the correlation functions (\[conncorr\]), extracting the correlation lengths via a straightforward fitting procedure requires a nonlinear three-parameter fit of the form $$G^{c,\parallel}(i)=G^{c,\parallel}(0)\exp{(-i/\xi)}+\mbox{const}, \label{corrfunction}$$ since any numerical estimation of $G^{c,\parallel}(i)$ necessarily fails to reproduce the correct long distance limit $G^{c,\parallel}(i)\rightarrow 0$ as $i\rightarrow\infty$ exactly. 
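For concreteness, a minimal sketch of this three-parameter fit of Eq. (\[corrfunction\]) on synthetic correlation data is given below; the decay length, amplitude, offset and noise level are made-up values for illustration, not simulation output.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
i = np.arange(1, 40, dtype=float)
xi_true, g0_true, c_true = 5.0, 1.0, 0.02   # assumed "true" parameters
G = g0_true * np.exp(-i / xi_true) + c_true \
    + 1e-3 * rng.standard_normal(i.size)

def model(i, g0, xi, const):
    # G(i) = G(0) exp(-i/xi) + const, cf. Eq. (corrfunction)
    return g0 * np.exp(-i / xi) + const

popt, pcov = curve_fit(model, i, G, p0=(1.0, 3.0, 0.0))
print("fitted xi =", popt[1], "+/-", np.sqrt(pcov[1, 1]))
```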
As this amounts to an investment of the gathered statistics into the determination of three parameters, two of which are completely irrelevant for our ends, we used an alternative method which intrinsically eliminates the two irrelevant parameters by using differences and ratios of $\hat{G}^{c,\parallel}(i)$ rather than the values themselves. Given the correlation function behaves as (\[corrfunction\]), estimators $\hat{\xi}_i$ for the correlation length are given by: $$\hat{\xi}_i=\Delta{\left[\ln\frac{\hat{G}^{c,\parallel}(i)-\hat{G}^{c,\parallel}(i-\Delta)} {\hat{G}^{c,\parallel}(i+\Delta)-\hat{G}^{c,\parallel}(i)}\right]}^{-1}. \label{diffmethoddelta}$$ The generic value for $\Delta$ is one, but it might be advantageous to choose $\Delta>1$ in order to enhance the local drop of $G^{c,\parallel}(i)$ between $i$ and $i+\Delta$ (the signal) against the fluctuations (the noise). Following this procedure one ends up with a set of estimators for the correlation length as a function of distance $i$ as depicted in Fig. \[fig1\] for the spin-spin correlations of the
{ "pile_set_name": "ArXiv" }
--- abstract: 'UX Orionis stars (UXors) are Herbig Ae/Be or T Tauri stars exhibiting sporadic occultation of stellar light by circumstellar dust. GM Cephei is such a UXor in the young ($\sim4$ Myr) open cluster Trumpler 37, showing prominent infrared excess, emission-line spectra, and flare activity. Our photometric monitoring (2008–2018) detects (1) an $\sim$3.43 day period, likely arising from rotational modulation by surface starspots, (2) sporadic brightening on time scales of days due to accretion, (3) irregular minor flux drops due to circumstellar dust extinction, and (4) major flux drops, each lasting for a couple of months with a recurrence time, though not exactly periodic, of about two years. The star experiences normal reddening by large grains, i.e., redder when dimmer, but exhibits an unusual “blueing” phenomenon in that the star turns blue near brightness minima. The maximum extinction during relatively short (lasting $\leq 50$ days) events is proportional to the duration, a consequence of varying clump sizes. For longer events, the extinction is independent of duration, suggestive of a transverse string distribution of clumps. Polarization monitoring indicates an optical polarization varying $\sim3\%$–8$\%$, with the level anticorrelated with the slow brightness change. Temporal variation of the unpolarized and polarized light sets constraints on the size and orbital distance of the circumstellar clumps in the interplay with the young star and scattering envelope. These transiting clumps are edge-on manifestations of the ring- or spiral-like structures found recently in young stars by infrared imaging of scattered light, or by submillimeter imaging of thermalized dust emission.' author: - 'P. C. Huang' - 'W. P. Chen' - 'M. Mugrauer' - 'R. Bischoff' - 'J. Budaj' - 'O. Burkhonov' - 'S. Ehgamberdiev' - 'R. Errmann' - 'Z. Garai' - 'H. Y. Hsiao' - 'C. L. Hu' - 'R. Janulis' - 'E. L. N. Jensen' - 'S. Kiyota' - 'K. Kuramoto' - 'C. S. Lin' - 'H. C. Lin' - 'J. Z. Liu' - 'O. Lux' - 'H. Naito' - 'R. Neuh[ä]{}user' - 'J. Ohlert' - 'E. Pakštienė' - 'T. Pribulla' - 'J. K. T. Qvam' - 'St. Raetz' - 'S. Sato' - 'M. Schwartz' - 'E. Semkov' - 'S. Takagi' - 'D. Wagner' - 'M. Watanabe' - Yu Zhang title: Diagnosing the Clumpy Protoplanetary Disk of the UXor Type Young Star GM Cephei --- Introduction {#sec:intro} ============ Circumstellar environments are constantly changing. A young stellar object (YSO), with prominent chromospheric and coronal activities, interacts intensely with the surrounding accretion disk through stellar/disk winds and outflows. The first few million years of the pre-main-sequence (PMS) evolution coincide with the epoch of possible planet formation, during which grain growth, already taking place in prestellar molecular cores up to micron sizes, continues on to centimeter sizes, and then to planetesimals [@nat07]. The detailed mechanism by which planetesimals accumulate into eventual planets is still uncertain. Competing theories include planetesimal accretion [@wei00] versus gravitational instability [@saf72; @gol73; @joh07]. Given the ubiquity of exoplanets, planet formation must be efficient enough to compete with the dissipation of PMS optically thick disks in less than 10 Myr [@mam04; @bri07; @hil08]. YSOs are known to vary in brightness.
Outbursts arising from intermittent mass accretion events are categorized into two major classes: (1) FU Ori-type stars (or FUors) showing eruptive brightening up to 6 mag from quiescence to the high state in weeks to months, followed by a slow decline over decades [@har85], and (2) EX Lup-type stars (EXors) showing brightening up to 5 mag, sometimes recurrent, with roughly the same timescale of months in both rising and fading [@her89]. Sunlike PMS objects, i.e., T Tauri stars, may also display moderate variations in brightness and colors [@her94] due to rotational modulation by magnetic/chromospheric cool spots or accretion/shock hot spots on the surface. There is an additional class, owing its variability to an extrinsic origin, of UX Ori-type stars [UXors; @her94], which displays irregular dimming caused by circumstellar dust extinction. In addition to the prototype UX Ori itself, examples of UXors include CO Ori, RR Tau, and VV Ser. The YSO dimming events can be further categorized according to the levels of extinction and the timescales. The “dippers” [@cod10], with AA Tau being the prototype [@bou99; @bou03], have short (1–5 days) and quasi-periodic events thought to originate from occultation by warps [@ter00; @cod14] or by funnel flows [@bli16] near the disk truncation radius, induced by the interaction between the stellar magnetosphere and the inner disk [@rom13]. The “faders,” with KH 15D being the prototype [@kea98; @ham01], show prolonged fading events, each lasting for months to years with typically large extinction up to several magnitudes, thought to be caused by occultation by the outer part of the disk [@bou13; @rod15; @rod16]. The target of this work, GM Cephei (hereafter GM Cep), a UXor star known to have a clumpy dusty disk [@che12], displays both dipper and fader events. As a member of Trumpler (Tr) 37, a young (1–4 Myr, [@mar90; @pat95; @sic05; @err13]) star cluster that is part of the Cepheus OB2 association, GM Cep (R.A.=21$^{\rm h}$38$^{\rm m}$17$\fs32$, Decl.=+57$\degr$31$\arcmin$22$\arcsec$, J2000) possesses observational properties typical of a T Tauri star, such as emission spectra, infrared excess, and X-ray emission [@sic08; @mer09]. [*Gaia*]{}/DR2 [@bro18] measured a parallax of $\varpi=1.21\pm0.02$ mas ($d=826_{-13}^{+14}$ pc), consistent with being a member of Tr 37 at $\sim870$ pc [@con02]. The spectral type of GM Cep reported in the literature ranges from a late F [@hua13] to a late G or early K [@sic08]. The star has been measured to have a disk accretion rate up to $10^{-6} M_\sun$ yr$^{-1}$, which is thought to be 2–3 orders of magnitude higher than the median value of the YSOs in Tr 37 and is 1–2 orders of magnitude higher than those of typical T Tauri stars [@gul98; @sic08]. The broad spectral lines suggest a rotation of $ v \sin i\sim43.2$ km s$^{-1}$, much faster than the average $v\sin i\sim10.2$ km s$^{-1}$ of the members of Tr 37 [@sic08]. @sic08 presented a comprehensive collection of data on GM Cep, including optical/infrared photometry and spectroscopy, plus millimeter line and continuum observations, along with the young stellar population in the cluster Tr 37 and the Cep OB2 association [See also @sic04; @sic05; @sic06a; @sic06b]. Limited by the time span of their light curve, @sic08 reached the incorrect conclusion that the star belonged to the EXor type. Later, with a century-long light curve derived from archival photographic plates, covering 1895 to 1993, @xia10 classified the star as a UXor, a classification confirmed by subsequent intense photometric monitoring [@che12; @sem12; @sem15; @hua18].
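As an aside, the $\sim$3.43 day rotation signal quoted in the abstract is the kind of periodicity that standard period-search tools recover from monitoring data of this sort. Below is a minimal sketch using a Lomb-Scargle periodogram; the simulated light curve, amplitude and noise level are assumptions for illustration and do not reproduce our actual photometry.

```python
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 200.0, 500))   # irregularly sampled epochs [days]
period = 3.43                               # assumed rotation period [days]
mag = (0.05 * np.sin(2 * np.pi * t / period)
       + 0.02 * rng.standard_normal(t.size))  # starspot modulation + noise

freq, power = LombScargle(t, mag).autopower(maximum_frequency=2.0)
print("best period [days]:", 1.0 / freq[np.argmax(power)])
```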
@che12 speculated on a possible recurrence time of $\sim1$ yr based on a few major brightness dimming events, but this was not substantiated by @sem15. GM Cep has been studied as a part of the Young Exoplanet Transit Initiative (YETI) project [@neu11], which combines a network of small telescopes in distributed time zones to monitor young star clusters, with the goal of finding possible transiting exoplanets [@neu11]. Any exoplanets thus identified would have been newly formed or in their earliest evolution, providing a comparative sample with the currently known exoplanets, which are almost exclusively found in the general Galactic fields and so are generally older. While so far YETI has detected only exoplanet candidates [@gar16;
{ "pile_set_name": "ArXiv" }
--- address: 'Cambridge University Engineering Dept., Trumpington St., Cambridge, CB2 1PZ U.K.\' title: Discriminative Neural Clustering for Speaker Diarisation ---
{ "pile_set_name": "ArXiv" }
--- abstract: 'We discuss different methods of calculation of the screened Coulomb interaction $U$ in transition metals and compare the so-called constraint local-density approximation (LDA) with the GW approach. We clarify that they offer complementary methods of treating the screening and, therefore, should serve for different purposes. In the *ab initio* GW method, the renormalization of on-site Coulomb interactions between $3d$ electrons (being of the order of 20-30 eV) occurs mainly through the screening by the same $3d$ electrons, treated in the random phase approximation (RPA). The basic difference of the constraint-LDA method from the GW method is that it deals with the neutral processes, where the Coulomb interactions are additionally screened by the “excited” electron, since it *continues to stay in the system*. This is the main channel of screening by the itinerant ($4sp$) electrons, which is especially strong in the case of transition metals and missing in the GW approach, although the details of this screening may be affected by additional approximations, which typically supplement these two methods. The major drawback of the conventional constraint-LDA method is that it does not allow to treat the energy-dependence of $U$, while the full GW calculations require heavy computations. We propose a promising approximation based on the combination of these two methods. First, we take into account the screening of Coulomb interactions in the $3d$-electron-line bands located near the Fermi level by the states from the subspace being orthogonal to these bands, using the constraint-LDA methods. The obtained interactions are further renormalized within the bands near the Fermi level in RPA. This allows the energy-dependent screening by electrons near the Fermi level including the same $3d$ electrons.' author: - 'I. V. Solovyev' - 'M. Imada' title: Screening of Coulomb interactions in transition metals --- \[sec:intr\]Introduction ======================== The description of electronic structure and properties of strongly correlated systems presents a great challenge for *ab initio* electronic structure calculations. The main complexity of the problem is related with the fact that such electronic systems typically bear both localized and itinerant character, where most of conventional methods do not apply. A canonical example is the local-\[spin\]-density approximation (L\[S\]DA) in the density-functional theory (DFT).[@DFT] The DFT, which is a ground-state theory, is based on the minimization of the total energy functional $E[\rho]$ with respect to the electron density $\rho$. In the Kohn-Sham (KS) scheme, which is typically employed for practical calculations, this procedure is formulated as the self-consistent solution of single-particle KS equations $$\left( -\nabla^2 + V_{\rm KS}[\rho] \right) \psi_i[\rho] = \varepsilon_i \psi_i[\rho], \label{eqn:KS}$$ which are combined with the equation for the electron density: $$\rho = \sum_i f_i |\psi_i|^2, \label{eqn:rho}$$ defined in terms of eigenfunctions ($\psi_i$), eigenvalues ($\varepsilon_i$), and the occupation numbers ($f_i$) of KS quasiparticles. The LSDA provides an explicit expression for $V_{\rm KS}[\rho]$. However, it is based on the homogeneous electron gas model, and strictly speaking applicable only for itinerant electron compounds. The recent progress, which gave rise to such directions as LDA$+$ Hubbard $U$ (Refs. ) and LDA+DMFT (dynamical mean-field theory) (Refs. ), is based on the idea of partitioning of electronic states. 
It implies the validity of the following postulates:\ (1) All solutions of KS equations (\[eqn:KS\]) in LDA can be divided (by introducing proper projection-operators) into two subgroups: $i$$\in$$I$, for which LSDA works reasonably well, and $i$$\in$$L$, for which LSDA encounters serious difficulties and needs to be improved (a typical example is the $3d$ states in transition-metal oxides and some transition metals).\ (2) Two orthogonal subspaces, $I$ and $L$, are “flexible” in the sense that they can be defined for a wider class of electron densities, which can be different from the ground-state density in LDA. This allows to “improve” LDA by adding a proper correction $\Delta\hat{\Sigma}$ (generally, an $\omega$-dependent self-energy) to the KS equations, which acts solely in the $L$-subspace but may also affect the $I$-states through the change of $\rho$ associated with this $\Delta\hat{\Sigma}$. Thus, in the KS equations, the $L$- and $I$-states remain decoupled even after including $\Delta\hat{\Sigma}$: $\langle \psi_{i \in I}[\rho] |(-\nabla^2$$+$$V_{\rm KS}[\rho]$$+$$ \Delta\hat{\Sigma})| \psi_{i \in L}[\rho] \rangle$$=$$0$. For many applications, the $L$-states are atomic or Wannier-type orbitals. In this case, the solution of the problem in the $L$-space becomes equivalent to the solution of a multi-orbital Hubbard-type model, and the formulation of the LDA$+$$U$ approach is basically a mapping of the electronic structure in LDA onto this Hubbard model. In the following, by referring to the LDA$+$$U$ we will mean not only the static version of this method, originally proposed in Ref. , but also its recent extensions designed to treat dynamics of correlated electrons and employing the same idea of partitioning of the electronic states.[@LSDADMFT; @LichtPRL01]\ (3) All physical interactions, which contribute to $\Delta\hat{\Sigma}$, can be formally derived from LDA by introducing certain constraining fields $\{ \delta \hat{V}_{\rm ext} \}$ in the subspace of $L$-states of the KS equations (i.e., in a way similar to $\Delta\hat{\Sigma}$). The purpose of including these $\{ \delta \hat{V}_{\rm ext} \}$ is to simulate the change of the electron density, $\delta \rho$, and then to extract parameters of electronic interactions from the total energy difference $E[\rho$$+$$\delta \rho]$$-$$E[\rho]$, by doing a mapping onto the Hubbard model. The total energy difference is typically evaluated in LDA,[@JonesGunnarsson] and the method itself is called the constraint-LDA (CLDA).[@UfromconstraintLSDA; @Gunnarsson; @AnisimovGunnarsson; @PRB94.2] However, despite a more than decade of rather successful history, the central question of LDA$+$$U$ is not completely solved and continues to be the subject of various disputes and controversies.[@NormanBrooks; @PRB96; @Pickett; @Springer; @Kotani; @Ferdi] This question is how to define the parameter of the effective Coulomb interaction $U$. To begin with, the Coulomb $U$ is *not* uniquely defined quantity, as it strongly depends on the property for the description of which we want to correct our LDA scheme. One possible strategy is the excited-state properties, associated with the complete removal of an electron from (or the addition of the new electron to) the system, i.e. 
the processes which are described by Koopman’s theorem in Hartree-Fock calculations and which are corrected in the GW method by taking into account the relaxation of the wavefunctions onto the created electron hole (or a new electron).[@Hedin; @FerdiGunnarsson] However the goal which is typically pursued in LDA$+$$U$ is somewhat different. Namely, one would always like to stay as close as it is possible to the description of the ground-state properties. The necessary precondition for this, which should be taken into account in the definition of the Coulomb $U$ and all other interactions which may contribute to $\Delta \hat{\Sigma}$ is the conservation of the total number of particles. In principle, similar strategy can be applied for the analysis of neutral excitations (e.g., by considering the $\omega$-dependence of $\Delta \hat{\Sigma}$), for which the total number of electrons is conserved.[@LichtPRL01] The basic difference between these two processes is that the “excited” electron in the second case continues to stay in the system and may additionally screen the Coulomb $U$. This screening may also affect the relaxation effects.[@OnidaReiningRubio] The purpose of this paper is to clarify several questions related with the definition of the Coulomb interaction $U$ in transition metals. We will discuss both the momentum (${\bf q}$) and energy ($\omega$) dependence of $U$, corresponding to the response of the Coulomb potential onto the site (${\bf R}$) and time ($t$) dependent perturbation $\delta \hat{V}_{\rm ext}$, and present a comparative analysis of the existing methods of calculations of this interaction, like CLDA and GW. We will argue that, despite a common believe, the GW method does not take into account the major effect of screening of the effective Coulomb interaction $U$ between the $3d$ electrons by the (itinerant) $4sp$ electrons, which may also contribute to the ${\bf q}$-dependence of $U$. This channel of screening is included in CLDA, although under an additional approximation separating the $3d$- and $4
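Schematically, and in our own notation rather than that of the formalism developed in the following sections, the combined scheme proposed here first extracts a static interaction $U_{\rm CLDA}$ from the constraint calculation and then dresses it within the low-energy bands in RPA, $$W(\omega) = \bigl[\, 1 - U_{\rm CLDA}\,\Pi(\omega) \,\bigr]^{-1} U_{\rm CLDA},$$ where $\Pi(\omega)$ is the polarization restricted to the $3d$-electron-like bands near the Fermi level. This is simply the RPA geometric series $W = U + U\Pi U + \cdots$, and it is what restores the energy dependence that the static CLDA value lacks.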
{ "pile_set_name": "ArXiv" }
--- abstract: | We consider the model of nonregular nonparametric regression where smoothness constraints are imposed on the regression function $f$ and the regression errors are assumed to decay with some sharpness level at their endpoints. The aim of this paper is to construct an adaptive estimator for the function $f$. In contrast to the standard model where local averaging is fruitful, the nonregular conditions require a substantial different treatment based on local extreme values. We study this model under the realistic setting in which both the smoothness degree $\beta> 0$ and the sharpness degree $\mathfrak {a}\in(0, \infty)$ are unknown in advance. We construct adaptation procedures applying a nested version of Lepski’s method and the negative Hill estimator which show no loss in the convergence rates with respect to the general risk and a logarithmic loss with respect to the pointwise risk. Optimality of these rates is proved for $\mathfrak{a}\in(0, \infty)$. Some numerical simulations and an application to real data are provided. address: - | M. Jirak\ M. Rei[ß]{}\ Institut für Mathematik\ Humboldt-Universität zu Berlin\ Unter den Linden 6\ D-10099 Berlin\ Germany\ \ - | A. Meister\ Institut für Mathematik\ Universität Rostock\ D-18051 Rostock\ Germany\ author: -   -   -   title: 'Adaptive function estimation in nonparametric regression with one-sided errors' --- , Introduction {#1} ============ In the standard model of nonparametric regression, the data $$\label{eq11} Y_j = f(x_j) + \varepsilon_j, \qquad j=1,\ldots,n$$ are observed. In this paper, in contrast to classical theory, the observation errors $(\varepsilon_j)$ are not assumed to be centred, but to have certain support properties. This is motivated from many applications where rather the support than the mean properties of the noise are known and where the regression function $f$ describes some frontier or boundary curve. Below we shall discuss concrete applications to sunspot data and annual sport records. Typical economical examples include auctions where the bidders’ private values are inferred from observed bids (see Guerre et al. [@guerre2000] or Donald and Paarsch [@paarsch1993]) and note the extension to bid and ask prices in financial data. Related phenomena arise in the context of inference for deterministic production frontiers, where it is assumed that $f$ is concave (convex) or monotone. A pioneering contribution in this area is due to Farrell [@farrell1957], who introduced data envelopment analysis (DEA), based on either the conical hull or the convex hull of the data. This was further extended by Deprins et al. [@deprins1984] to the free disposal Hull (FDH) estimator, whose properties have been extensively discussed in the literature; see, for instance, Banker [@banker1993], Korostelev et al. [@korostelev1995a], Kneip et al. [@Kneip1998; @kneip2008], Gijbels et al. [@gijbels1999], Park et al. [@park2000; @park2010], Jeong and Park [@jeong2006] and Daouia et al. [@daouiasimar2010]. The issue of stochastic frontier estimation goes back to the works of Aigner et al. [@Aigner1977] and Meeusen and van den Broeck [@meeusen1977]; see also the more recent contributions of Kumbhakar et al. [@Kumbhakar2007], Park et al. [@Park2007] and Kneip et al. [@kneip2012]. In a general nonparametric setting the accuracy of the estimator heavily depends on the average number of observations in the vicinity of the support boundary. 
The key quantity is the sharpness ${\mathfrak{a}}_{{x}}>0$ of the distribution function $F_{{x}}$ of $\varepsilon_j$ at ${x}=x_j$, which in its simplest case has polynomial tails $$\label{defstandard} F_{{x}}(y) = 1 - {\mathfrak{c}}_{{x}}' {\vert}y{\vert}^{{\mathfrak{a}}_{{x}}} + {\mathcal{O}}\bigl({\vert}y{\vert}^{{\mathfrak{a}}_{{x}} + \delta} \bigr)\qquad\mbox{with }{\mathfrak{c}}_{{x}}', \delta> 0\mbox{ as }y \to0.$$ The cases $0 < {\mathfrak{a}}_{{x}} < 1$, ${\mathfrak{a}}_{{x}} = 1$ and ${\mathfrak{a}}_{{x}} > 1$ are sometimes called *sharp boundary*, *fault-type boundary* and *nonsharp boundary*, respectively. From a theoretical point of view, noise models with ${\mathfrak{a}}_{{x}}\in(0,2)$ are nonregular (e.g., Ibragimov and Hasminskii [@ibrakhasminskii1981]) since they exhibit nonstandard statistical theory already in the parametric case. Chernozhukov and Hong [@chernozhukovhong2004] discuss extensively parametric efficiency of maximum-likelihood and Bayes estimators in this context and show their relevance in econometrics. From a nonparametric statistics point of view, Korostelev and Tsybakov [@korostelevtsyb1993] and Goldenshluger and Zeevi [@goldenshluger2006] treat a variety of boundary estimation problems. Their focus is on applications in image recovery, which is mathematically and practically substantially different from ours. The optimal convergence rate $n^{(-2 \beta)/({\mathfrak{a}}\beta+1)}$ over $\beta$-Hölder classes of regression functions $f$ depends heavily on ${\mathfrak{a}}$ (not assumed to be varying in $x$); for ${\mathfrak{a}}_{{x}}\in(0,2)$ it is faster than for local averaging estimators in standard mean regression and can even become faster than the regular squared parametric rate $n^{-1}$. Hall and van Keilegom [@hallkeilegom2009] study a local-linear estimator in a closely related nonparametric regression model and establish minimax optimal rates in $L_2$-loss if the smoothness and sharpness parameters $\beta \in(0,2]$ and ${\mathfrak{a}}>0$ are known. Earlier contributions in a related setup are due to Härdle et al. [@haerdle1995], Hall et al. [@hall1997; @hall1998] and Gijbels and Peng [@gijbels2000]. If the support of $(\varepsilon_j)$ is not one-sided, but symmetric like $[-a,a]$ and $\beta\le1$, ${\mathfrak{a}}=1$, Müller and Wefelmeyer [@muellerwefelmayer2010] have shown that mid-range estimators also attain these better rates. Recently, Meister and Reiss [@meisterreiss2013] have proved strong asymptotic equivalence in Le Cam’s sense between a nonregular nonparametric regression model for ${\mathfrak{a}}=1$ and a continuous-time Poisson point process experiment. All the references above consider a theoretically optimal bandwidth choice which depends on the unknown quantities ${\mathfrak{a}}$ and/or $\beta$. Completely data-driven adaptive procedures have rarely been considered in the literature because the nonlinear inference and the nonmonotonicity of the stochastic and approximation error terms block popular concepts from mean regression like cross-validation or general unbiased risk estimation; cf. the discussion in Hall and Park [@hall2004]. Recently, Chichignoud [@chichi2012] was able to produce a $\beta $-adaptive minimax optimal estimator, which, however, uses a Bayesian approach hinging on the assumption that the law of the errors $(\varepsilon_j)$ is perfectly known in advance (in fact, after log transform a uniform law is assumed). Moreover, a log factor due to adaptation is paid, which is natural only under pointwise loss.
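To fix ideas, here is a minimal simulation sketch of estimation from local extreme values in this model; the frontier $f$, the sharpness ${\mathfrak{a}}=2$, the sample size and the block length are arbitrary illustrative choices, and in particular the block length is chosen by hand, whereas its data-driven choice is exactly the adaptation problem studied in this paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 5000, 50                      # sample size, observations per block
x = np.arange(1, n + 1) / n
f = lambda t: np.sin(2 * np.pi * t)  # hypothetical regression frontier

a = 2.0                              # sharpness: F(y) = 1 - |y|^a near 0
eps = -rng.uniform(0.0, 1.0, n) ** (1.0 / a)   # one-sided errors, eps <= 0
y = f(x) + eps

# blockwise maxima: within each window the frontier is approached from below
fhat = y.reshape(-1, m).max(axis=1)
centers = x.reshape(-1, m).mean(axis=1)
print("max abs error:", np.abs(fhat - f(centers)).max())
```

Larger ${\mathfrak{a}}$ puts fewer observations near the boundary and degrades the local maxima, which is the trade-off behind the rate $n^{(-2\beta)/({\mathfrak{a}}\beta+1)}$ quoted above.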
It remained open whether under a global loss function like an $L_q$-norm loss adaptation without paying a log factor is possible. For regular nonparametric problems Goldenshluger and Lepski [@goldenshlugerlepski2011] study adaptive methods and convergence rates with respect to general $L_q$-loss which is much more involved in the general case $q\ge1$ than for $q=2$. It is therefore of high interest, both from a theoretical and a practical perspective, to establish a fully data-driven estimation procedure where the error distribution and the regularity of the regression function are unknown and to analyze it under local (pointwise) and global ($L_q$-norm) loss. In particular, neither ${\mathfrak{a}}$ nor $\beta$ that determine the optimal convergence rate are fixed in advance. In this paper we introduce a fully data-driven (${\mathfrak{a}},\beta $)-adaptive procedure for estimating $f$ and prove that it is minimax optimal over ${\mathfrak{a}}, \beta> 0$. To ease the presentation, we restrict to equidistant design points $x_j = j/n$ on $[0,1]$ and regression errors $(\varepsilon_
{ "pile_set_name": "ArXiv" }
--- abstract: 'For any integer $d\geq 1$ we construct examples of finitely presented algebras with intermediate growth of type $[e^{n^{d/(d+1)}}]$. We produce these examples by computing the growth types of some finitely presented metabelian Lie algebras.' author: - Dİlber Koçak bibliography: - 'metabelian.bib' title: On growth of metabelian Lie algebras --- [Introduction]{} Let $A$ be an (not necessarily associative) algebra over a field $k$ generated by a finite set $X$, and $X^n$ denote the subspace of $A$ spanned by all monomials on $X$ of length at most $n$. The *growth function* of $A$ with respect to $X$ is defined by $$\gamma_{X,A}(n)=dim_k(X^n)$$ This function depends on the choice of the generating set $X$. To remove this dependence the following equivalence relation is introduced: Let $f(n)$ and $g(n)$ be two monotone functions on $ \mathbb{N}$. We write $ f\precsim g$ if there exists a constant $C\in \mathbb{N}$ such that $f(n)\leq Cg(Cn)$ for all natural numbers $n$. If $ f\precsim g$ and $ g\precsim f$, we say $f$ and $g$ are equivalent and denote this by $f\sim g$. The equivalence class containing $f$ is denoted by $[f]$ and is called the *growth rate of f*. Set $[f]\leq [g]$ if and only if $f\precsim g$. The growth rate $[2^n]$ is called *exponential* and a growth rate strictly less than exponential is called *subexponential*. A subexponential growth which is greater than $[n^d]$ for any $d$ is called *intermediate*. The growth rate is a widely studied invariant for finitely generated algebraic structures such as groups, semigroups and algebras. The notion of growth for groups was introduced by Schwarz [@schwarz55] and independently by Milnor [@milnor68]. The study of growth of algebras dates back to the papers by Gelfand and Kirillov, [@GK661; @GK66]. A theorem of Milnor and Wolf [@milnor68; @Wolf68] states that any solvable group has a polynomial growth if it is virtually nilpotent, otherwise it has exponential growth. The description of groups of polynomial growth was obtained by Gromov in his celebrated work [@gromov81]. He proved that every finitely generated group of polynomial growth is virtually nilpotent. The situation for algebras is different from the case of groups. M. Smith [@smith76] showed that there exists an infinite dimensional solvable Lie algebra $L$ whose universal enveloping algebra $U(L)$ has intermediate growth. Later, in [@lichtman84], Lichtman proved that the universal envelope of an arbitrary finitely generated infinite dimensional virtually solvable Lie algebra has intermediate growth. In [@LiUf95], Lichtman and Ufnarovski showed that the growth rate of a finitely generated free solvable Lie algebra of derived length $k>3$ and its universal envelope are almost exponential (this means that it is less than exponential growth $[2^n]$ but greater than the growth $[2^{n^{\alpha}}]$ for any $\alpha< 1$). The first examples of finitely generated groups of intermediate growth were constructed by Grigorchuk [@Gri83; @Gri84]. It is still an open problem whether there exists finitely presented groups of intermediate growth. In contrast, there are examples of finitely presented algebras of intermediate growth. The first such example is the universal enveloping algebra of the Lie algebra $W$ with basis $\{w_{-1},w_0,w_1,w_2,\dots\}$ and brackets defined by $[w_i,w_j]=(i-j)w_{i+j}$. $W$ is a subalgebra of the generalized Witt algebra $W_{\mathbb{Z}}$ (see [@as74 p.206] for definitions). 
It was proven in [@stewart75] that $W$ has a finite presentation with two generators and six relations. It is also a graded algebra with generators of degree $-1$ and $2$. Since $W$ has linear growth, its universal enveloping algebra has growth $[e^{\sqrt{n}}]$ and it is finitely presented. Known examples of finitely presented algebras of intermediate growth have growth of type $[e^{\sqrt{n}}]$. In this note we present examples of finitely presented associative algebras of intermediate growth having different growth types. Specifically, our main result is the following: For any positive integer $d$, there exists a finitely presented associative algebra with intermediate growth of type $[e^{n^{d/(d+1)}}]$. The steps in proving the theorem are as follows: In [@bau77], Baumslag established the fact that every finitely generated metabelian Lie algebra can be embedded in a finitely presented metabelian Lie algebra. Using ideas of [@bau77] (and clarifying some arguments thereof), we embed the free metabelian Lie algebra $M$ (with $d$ generators) into a finitely presented metabelian Lie algebra $W^+$. Next we show that $W^+$ has polynomial growth of type $[n^d]$. Finally, considering the universal enveloping algebra of $W^+$ we obtain a finitely presented associative algebra of growth type $[e^{n^{d/(d+1)}}]$. Growth of a finitely generated free metabelian Lie algebra ========================================================== Let $k$ be a field and $L$ a Lie algebra over $k$ generated by a finite set $X$. Elements of $X$ are monomials of length $1$. Inductively, a monomial of length $n$ is an element of the form $[u,v]$, where $u$ is a monomial of length $i<n$ and $v$ is a monomial of length $n-i$. Every element of $L$ is a linear combination of monomials. If $a_1,\dots,a_n \in X$ then $[a_1,\dots ,a_n]$ is defined inductively by $$[a_1,\dots,a_n]=[[a_1,\dots,a_{n-1}],a_n]\;\text{for}\; n>2$$ Monomials of the form $[a_1,\dots,a_n]$ are called *left-normed*. We will frequently use the following simple lemmas in the remainder of this note. \[l0\] Let $x,\;y,\;z$ be elements of a Lie algebra and $[x,y]=0$. Then the following relations hold: $$[x,[y,z]]=[y,[x,z]]$$ $$[x,z,y]=[y,z,x].$$ This is a direct consequence of the Jacobi identity. \[l00\] Any element of a Lie algebra can be written as a linear combination of left-normed monomials. By induction on the length of monomials. A Lie algebra $L$ is called *solvable of derived length $n$* if $$L^{(n)}=0\;\text{and}\;L^{(n-1)}\neq 0$$ where $L^{(m+1)}=[L^{(m)},L^{(m)}]$ and $L^{(0)}=L$. We also denote $L^{(1)}=[L,L]$ by $L^{\prime}$ and call it the *commutator* of $L$. A solvable Lie algebra of derived length $2$ is called *metabelian*. Let $X=\{x_1,\dots,x_d\}$ be a finite set and $L_X$ be the free Lie algebra generated by $X$. $M=L_X/L_{X}^{(2)}$ is the free metabelian Lie algebra generated by $X$. The following proposition can be found in [@bokut63]. \[p1\] Let $M$ be a free metabelian Lie algebra over a field $k$ with the generating set $X=\{x_1,\dots,x_d\}$ and $x_1<\dots <x_d$ be an order on $X$. Let $\mathcal{B}$ be the set of left-normed monomials of the form $$[a_0,a_1,\dots,a_{n-1}]$$ where $a_0>a_1\leq a_2\leq\dots \leq a_{n-1}$ for $a_i\in X$ and $n\geq 1$. Then $\mathcal{B}$ forms a basis for $M$. Let $M_n$ denote the subspace of $M$ spanned by all left-normed monomials of length $n$ in $M$ ($M_0=k$ and $M_1$ is the subspace spanned by $X$).
Since the relations $[x,y]=-[y,x]$ for $x,y\in M$ and the Jacobi identity are homogeneous, $ M_k\cap M_l= \{0\} $ for $k\neq l$. By Lemma \[l00\], we get $M= \bigoplus_{n=0}^{\infty} M_n$ and $\mathcal{B}=\bigsqcup_{n=1}^{\infty}\mathcal{B}_n$ where $\mathcal{B}_n$ denotes the
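The basis of Proposition \[p1\] can be enumerated directly for small parameters, which makes the polynomial growth visible; the following brute-force sketch is purely illustrative.

```python
from itertools import combinations_with_replacement

def dim_Mn(d, n):
    """Count the basis monomials [a_0, a_1, ..., a_{n-1}] of the
    proposition above: letters in {1, ..., d} with
    a_0 > a_1 <= a_2 <= ... <= a_{n-1}."""
    if n == 1:
        return d                      # the generators themselves
    total = 0
    for tail in combinations_with_replacement(range(1, d + 1), n - 1):
        total += d - tail[0]          # admissible choices of a_0 above a_1
    return total

d = 3
dims = [dim_Mn(d, n) for n in range(1, 9)]
print(dims)       # [3, 3, 8, 15, 24, 35, 48, 63] grows like n^(d-1)
print(sum(dims))  # hence the growth function behaves like n^d
```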
{ "pile_set_name": "ArXiv" }
--- abstract: 'In this paper, we explain the importance of finite decomposition semigroups and present two theorems related to their structure.' address: 'LIPN - UMR 7030 du CNRS, 99, avenue Jean-Baptiste Clément, Université Paris 13, 93430 Villetaneuse, France' author: - 'Matthieu Deneufchâtel, Gérard H. E. Duchamp' bibliography: - 'Biblio.bib' title: Finite Decomposition Semigroups --- Introduction ============ Theories of “special sums” have highlighted different products over the indices. For example, Chen’s lemma states that the product of two iterated integrals is ruled by the shuffle product defined by $$\begin{aligned} 1 \shuffle w = w \shuffle 1 & = w ~; \\ (a u) \shuffle (b v) & = a(u \shuffle (bv)) + b ( (au) \shuffle v) \end{aligned}$$ for all words $u, \, v, \, w \in A^*$ and all letters $a,b$ of the alphabet $A$.\ Indeed (see [@JGJY1]), if $\mathscr H$ is a vector space of integrable functions over $(c_1,c_2)$ and $f_1 , \dots , f_n$ some functions of $\mathscr H$, define the following integral: $$\langle f_1 \dots f_n \rangle = \int_{c_1}^{c_2} dy_1 \int_{c_1}^{y_1} \dots \int_{c_1}^{y_{n-1}} d y_n \, f_1(y_1) \dots f_n(y_n)$$ (considered as a linear form defined on ${\mathscr H}^{\otimes n}$). If the functions $\phi_{a_i}$ are indexed by letters of the alphabet $A$, we associate to $w = a_{i_1} \dots a_{i_{|w|}}$ the integral $$\langle w \rangle = \langle \phi_{a_{i_1}} \dots \phi_{a_{i_{|w|}}} \rangle.$$ Then Chen’s lemma gives the following relation[^1], $\forall \, u , v \in X^*$: $$\label{Chen} \left\{ \begin{aligned} \langle u \rangle \langle v \rangle & = \langle u \shuffle v \rangle; \\ \langle 1 \rangle & = 1 . \end{aligned} \right.$$ Some of these iterated integrals have been thoroughly studied, for example the polyzetas: one considers the alphabet $\left\{ x_0 , x_1 \right\}$ and constructs recursively the following integrals: $\forall z \in \C \backslash \left] - \infty , 0 \right] \cup \left[ 1 , + \infty \right[$, $$\displaystyle {\rm Li}_{x_0^n}(z) = \frac{\ln^n(z)}{n!},$$ $${\rm Li}_{x_1 w} (z) = \int_0^z \frac{dt}{1-t} {\rm Li}_w(t),$$ and, $\forall w \in X^* x_1 X^*$, $${\rm Li}_{x_0 w} (z) = \int_0^z \frac{dt}{t} {\rm Li}_w(t).$$ The specialization of these functions for $z=1$ yields the Multiple Zeta Values (henceforth denoted by MZV) $\zeta({\bf s})$ where the multiindex ${\bf s}$ is obtained from $w$ with the correspondence $w = x_0^{s_1 - 1} x_1 \dots x_0^{s_k - 1} x_1 \leftrightarrow {\bf s} = ( s_1 , \dots , s_k )$. One can show that the product of two MZV’s is, like the quasi symmetric functions, ruled by the stuffle product $\stuffle$ defined by $$\begin{array}{rl} (s_1 , \dots , s_p ) \stuffle ( t_1 , \dots , t_q ) & = \\ & s_1 (s_2 , \dots , s_p ) \stuffle ( t_1 , \dots , t_q )\\ + & t_1 (s_1 , \dots , s_p ) \stuffle ( t_2 , \dots , t_q ) \\ + & (s_1 + t_1 ) (s_2 , \dots , s_p ) \stuffle ( t_2 , \dots , t_q ).
\end{array}$$ Further, coloured polyzetas ([@Kreimer; @SLC44]) need an indexation by bicompositions $\displaystyle \left( \genfrac{}{}{0pt}{}{s'_1 \dots s'_p}{s_1'' \dots s_p''} \right)$ with a product $\diamond$ given by $$\begin{array}{rl} \displaystyle \left( \genfrac{}{}{0pt}{}{s'_1 \dots s'_p}{s_1'' \dots s_p''} \right) \diamond \left( \genfrac{}{}{0pt}{}{t'_1 \dots t'_p}{t_1'' \dots t_p''} \right) & = \\ &\displaystyle \left( \left( \genfrac{}{}{0pt}{}{s'_1}{s_1''} \right) \left( \genfrac{}{}{0pt}{}{s'_2 \dots s'_p}{s_2'' \dots s_p''} \right) \diamond \left( \genfrac{}{}{0pt}{}{t'_1 \dots t'_p}{t_1'' \dots t_p''} \right) \right) \\ + & \displaystyle \left( \left( \genfrac{}{}{0pt}{}{t'_1}{t_1''} \right) \left( \genfrac{}{}{0pt}{}{s'_1 \dots s'_p}{s_1'' \dots s_p''} \right) \diamond \left( \genfrac{}{}{0pt}{}{t'_2 \dots t'_p}{t_2'' \dots t_p''} \right) \right) \\ + & \displaystyle \left( \left( \genfrac{}{}{0pt}{}{s_1'+t'_1}{s_1''+t_1''} \right) \left( \genfrac{}{}{0pt}{}{s'_2 \dots s'_p}{s_2'' \dots s_p''} \right) \diamond \left( \genfrac{}{}{0pt}{}{t'_2 \dots t'_p}{t_2'' \dots t_p''} \right) \right) . \end{array}$$ Even algebras of diagrams $\LDIAG$ ([@SLC62]), which need coding with words whose letters, belonging to an alphabet $A$, are composable, are endowed with a product $\uparrow$ of this type. These algebras contain plane bipartite graphs with multiple ordered legs which are in bijection with the elements of $(\mathfrak{MON}^+(X))^*$ where $\mathfrak{MON}^+(X)$ is the set of non void commutative monomials in the variables of the alphabet $X$; formally speaking, let $X = \left\{ x_i \right\}_{i \geq 1}$ be an alphabet; denote by $\mathfrak{MON}(X)$ (resp. $\mathfrak{MON}^+(X)$) the monoid of monomials $X^\al$ for $\al \in \N^{(X)}$ (resp. for $\al \in \N^{(X)} \setminus \left\{ 0 \right\}$). Then, the elements of the monoid $(\mathfrak{MON}^+(X))^*$ are *words of monomials* which represent some diagrams. The bilinear product $\uparrow$ of two diagrams is given on the corresponding words of monomials by $$\left\{ \begin{aligned} 1_{(\mathfrak{MON}^+(X))^*} \uparrow w & = w \uparrow 1_{(\mathfrak{MON}^+(X))^*} = w; \\ a u \uparrow b v & = a (u \uparrow bv) + b (au \uparrow v ) + (a \cdot b) ( u \uparrow v) \end{aligned} \right.$$ for all $a, \, b \in \mathfrak{MON}(X)$ and $u, \, v \in (\mathfrak{MON}(X))^*$. The dualization of the superposition law $(a,b)\rightarrow a \cdot b$ leads to the definition of coproducts given by sums over a semigroup which has the following property: each of its elements has a finite number of decompositions as a product of two elements of the semigroup. This fact motivates the study of such semigroups, called *finite decomposition semigroups*, and of their structure. Note that the law of semigroup can be deformed with a bicharacter [@Thibon96; @Hoffman] or a colour factor [@EnjalbertMinh; @SLC62; @duch
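The recursive definitions above translate directly into code. A small illustrative sketch of the shuffle and stuffle products, with words and compositions as tuples and results as dictionaries mapping words to integer coefficients:

```python
def shuffle(u, v):
    """Recursive shuffle product of two words (tuples of letters)."""
    if not u:
        return {v: 1}
    if not v:
        return {u: 1}
    out = {}
    for w, c in shuffle(u[1:], v).items():
        out[(u[0],) + w] = out.get((u[0],) + w, 0) + c
    for w, c in shuffle(u, v[1:]).items():
        out[(v[0],) + w] = out.get((v[0],) + w, 0) + c
    return out

def stuffle(s, t):
    """Stuffle on compositions: the shuffle terms plus a term merging
    the two leading parts by addition."""
    if not s:
        return {t: 1}
    if not t:
        return {s: 1}
    out = {}
    cases = (((s[0],), s[1:], t), ((t[0],), s, t[1:]),
             ((s[0] + t[0],), s[1:], t[1:]))
    for head, s2, t2 in cases:
        for w, c in stuffle(s2, t2).items():
            out[head + w] = out.get(head + w, 0) + c
    return out

print(shuffle(('a', 'b'), ('c',)))  # abc + acb + cab
print(stuffle((2,), (3,)))          # (2,3) + (3,2) + (5,)
```

Dualizing the merged-letter operation $(a,b)\mapsto a \cdot b$ appearing in the third branch is what requires the finite decomposition property studied in this paper.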
{ "pile_set_name": "ArXiv" }
--- abstract: 'We theoretically consider charge transport through two quantum dots coupled in series. The corresponding full counting statistics for noninteracting electrons is investigated in the limits of sequential and coherent tunneling by means of a master equation approach and a density matrix formalism, respectively. We clearly demonstrate the effect of quantum coherence on the zero-frequency cumulants of the transport process, focusing on noise and skewness. Moreover, we establish the continuous transition from the coherent to the incoherent tunneling limit in all cumulants of the transport process and compare this with decoherence described by a dephasing voltage probe model.' author: - 'G. Kie[ß]{}lich' - 'P. Samuelsson' - 'A. Wacker' - 'E. Sch[ö]{}ll' title: Counting statistics and decoherence in coupled quantum dots --- [*Introduction*]{}. The analysis of current fluctuations in mesoscopic conductors provides detailed insight into the nature of charge transfer [@BLA00; @NAZ03]. The complete information is available by studying the full counting statistics (FCS), i.e. by the knowledge of all cumulants of the distribution of the number of transferred charges [@LEV; @NAZ03]. As a crucial achievement, the measurement of the third-order cumulant of transport through a single tunnel junction was recently reported [@REU]. To what extent one can extract informations from current fluctuations about quantum coherence and decoherence is the subject of intense theoretical investigations: e.g. dephasing in mesoscopic cavities and Aharonov-Bohm rings [@PAL04] and decoherence in a Mach-Zehnder interferometer [@FOE05]. Quantum dots (QDs) constitute a representative system for mesoscopic conductors. Recently, the real-time tunneling of single electrons could be observed in QDs [@WEI] providing an important step towards an experimental observation of the FCS. For single dots the FCS is known to display no effects of quantum coherence [@JON96; @BAG03a]. In contrast, in serially-coupled double QDs [@WIE03] the superposition between states from both dots causes prominent coherent effects. Noise properties have been studied theoretically both in the low [@ELA02] and finite frequency range [@SUN99; @AGU04] for these structures but no FCS studies are available yet. Experimentally, the low-frequency noise has been investigated very recently in related double-well junctions [@YAU]. In this Letter we show that detailed information about quantum coherence in double QD systems can be extracted from the zero-frequency current fluctuations. For this purpose we elaborate on the FCS in the limits of coherent and incoherent transport through the QD system by means of a density matrix (DM) and master equation (ME) description. We demonstrate a smooth transition between these approaches by decoherence originating from coupling the QDs to a charge detector. The results are compared to a scattering approach, where decoherence is introduced via phenomenological voltage probes. [*Model*]{}. The central quantity in the FCS is $P(N,t_0)$, the distribution function of the number $N$ of transferred charges in the time interval $t_0$. The associated cumulant generating function (CGF) $F(\chi )$ is [@NAZ03] $$\begin{aligned} \exp{[-F(\chi )]}=\sum_NP(N,t_0)\exp{[iN\chi ]} \label{eq:CGF-general}\end{aligned}$$ Here we consider the zero frequency limit, i.e. $t_0$ much longer than the time for tunneling through the system. 
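As a toy check of Eq. (\[eq:CGF-general\]) and of the cumulants $C_k$ extracted from it just below, consider a Poissonian $P(N)$, the counting statistics of a single low-transparency tunnel junction: all cumulants then equal the mean, so the Fano factor is 1. A minimal numerical sketch (the mean and cutoff are arbitrary illustrative values):

```python
import numpy as np

mu, Nmax, h = 4.0, 80, 1e-3
N = np.arange(Nmax)
P = np.empty(Nmax)                 # Poisson weights via the pmf recurrence
P[0] = np.exp(-mu)
for n in range(1, Nmax):
    P[n] = P[n - 1] * mu / n

def F(chi):
    # exp[-F(chi)] = sum_N P(N) exp(i N chi)
    return -np.log(np.sum(P * np.exp(1j * N * chi)))

# finite-difference cumulants C_k = -(-i d/dchi)^k F(chi) at chi = 0
C1 = (1j * (F(h) - F(-h)) / (2 * h)).real
C2 = ((F(h) - 2 * F(0.0) + F(-h)) / h**2).real
print(C1, C2, "Fano factor:", C2 / C1)   # both ~ mu, Fano ~ 1
```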
From the CGF we can obtain the cumulants $C_k=-(-i\partial_\chi)^kF(\chi )\vert_{\chi=0}$ which are related to e.g. the average current $\langle I\rangle =eC_1/t_0$ and to the zero-frequency noise $S=2e^2C_2/t_0$. The Fano factor is defined as $C_2/C_1$. The skewness of the distribution of transferred charges is given by the third-order cumulant $C_3$. ![(Color online) Current statistics for $\Omega /\Gamma =0.5$ and for various dephasing rates $\Gamma_\varphi /\Gamma =$0, 5, 20; dashed lines: master equation (ME) approach, solid lines: density matrix (DM) formalism; on-resonance $\Delta \varepsilon =0$, symmetric contact coupling: $\Gamma =\Gamma_\textrm{e}=\Gamma_\textrm{c}$. $\Gamma_0\equiv (2\Gamma \Omega^2)/[ 4\Omega^2+\Gamma (\Gamma +\Gamma_\varphi )]$. Inset: Setup of the coupled QD system with (e)mitter and (c)ollector contact and mutual coupling $\Omega$.[]{data-label="fig1"}](figure1){width="45.00000%"} The setup of the coupled QD system is shown as the inset of Fig. \[fig1\]: QD1 is connected to the emitter with a tunneling rate $\Gamma_\textrm{e}$ and QD2 to the collector contact with rate $\Gamma_\textrm{c}$. Mutually they are coupled by the tunnel matrix element $\Omega$. One level in each dot, at energies $\varepsilon_1$ and $\varepsilon_2$ respectively, is assumed. We consider zero temperature and work in the limit of large bias applied between the collector and emitter, with the broadened energy levels well inside the bias window. To compare DM/ME- and scattering approaches we consider noninteracting electrons (spin degrees of freedom decouple, we give all results for a single spin direction) throughout this Letter. We note, however, that strong Coulomb blockade can be treated within the DM/ME-approaches along the same lines. [*Coherent tunneling*]{}. The FCS for coherent tunneling through coupled QDs can be obtained from the approach developed by Gurvitz and coworkers in a series of papers [@GUR96c; @ELA02] (for related work see e.g. Ref. [@RAM04]). Starting from the time dependent Schr[ö]{}dinger equation one derives a modified Liouville equation, a system of coupled first order differential equations for DM elements $\rho_{\alpha\beta}^N(t_0)$ at a given number $N$ of electrons transferred through the QD system at time $t_0$. Here $\alpha ,\beta\in \{a,b,c,d\}$, where $a,b,c$ and $d$ denote the Fock-states $|00\rangle, |10\rangle, |01\rangle,|11\rangle$ of the system, i.e., no electrons, one electron in the first dot, one in the second dot, and one in each dot, respectively. The probability distribution is then directly given by $P(N,t_0)=\rho_{aa}^N(t_0)+\rho_{bb}^N(t_0)+\rho_{cc}^N(t_0)+\rho_{dd}^N(t_0)$. The FCS is formally obtained by first Fourier transforming the DM elements as $\rho_{\alpha\beta}(\chi,t_0)=\sum_{N}\rho_{\alpha\beta}^N(t_0)e^{iN\chi}$. 
This gives the Fourier transformed equation $\dot \rho=\mathcal{L}_c(\chi)\rho$, with $$\begin{aligned} \mathcal{L}_c(\chi )=\left( \begin{array}{cccccc} -\Gamma_\textrm{e} & 0 & \Gamma_\textrm{c}e^{i\chi} & 0 & 0 & 0\\ \Gamma_\textrm{e} & 0 & 0 & \Gamma_\textrm{c}e^{i\chi} & 0 & 2\Omega\\ 0 & 0 & -2\Gamma & 0 & 0 & -2\Omega \\ 0 & 0 & \Gamma_\textrm{e} & -\Gamma_\textrm{c} & 0 & 0\\ 0 & 0 & 0 & 0 & -\Gamma & -\Delta\varepsilon\\ 0 & -\Omega & \Omega & 0 & \Delta\varepsilon & -\Gamma \end{array} \right) \label{eq:matrix-coh}\end{aligned}$$ and $\rho\equiv \big(\rho_{aa},\rho_{bb}, \rho_{cc},\rho_{dd},\textrm{Re}[\rho_{bc}],\textrm{Im}[\rho_{bc}]\big)^T$, $\Gamma\equiv (\Gamma_\textrm{e}+\Gamma_\textrm{c})/2$, $\Delta\varepsilon\equiv\varepsilon_1 -\varepsilon_2$. Note that the counting field $\chi$ enters the matrix elements in (\[eq:matrix-coh\]), where an electron jumps from QD2 into the collector contact. The CGF is then obtained as the eigenvalue of $\mathcal{L}_c$ which goes to zero for $\chi=0$, as required by probability conservation \[see Eq. (\[eq:CGF-general\])\] $$\begin{aligned} F_c(\chi)=\frac{t_0}{2}\left[2\Gamma-\left(p_1+2\sqrt{p_2^2+16\Gamma^2 \Omega^2 (e^{i\chi}-1)}\right)^{1/2}\right]
{ "pile_set_name": "ArXiv" }
--- author: - 'N. Pradel' - 'P. Charlot' - 'J.-F. Lestrade' bibliography: - '3021.bib' date: 'Received / Accepted ' title: 'Astrometric accuracy of phase-referenced observations with the VLBA and EVN' --- Introduction {#Introduction} ============ Very Long Baseline Interferometry (VLBI) narrow-angle astrometry pioneered by Shapiro et al. (1979) makes use of observations of pairs of angularly close sources to cancel atmospheric phase fluctuations between the two close lines of sight. In this initial approach, the relative coordinates between the two strong quasars and and other ancilliary parameters were adjusted by a least-squares fit of the differenced phases after connecting the VLBI phases for both sources over a multi-hour experiment. Then, [@Mar83; @Mar84] made the first phase-referenced map where structure and astrometry were disentangled for the double quasar and B. Both of these experiments demonstrated formal errors at the level of a few tens of microarcseconds or less in the relative angular separation between the two sources. Another approach was designed to tackle faint target sources by observing a strong reference source (quasar) to increase the integration time of VLBI from a few minutes to a few hours [@Les90]. This approach improves the sensitivity by the factor $$\sqrt { N_b\times\frac{T_{int}}{T_{scan}}},$$ where $N_b$ is the number of VLBI baselines, $T_{int}$ is the extended integration time permitted by phase-referencing (several hours) and $T_{scan}$ is the individual scan length (a few minutes). As this factor is very large (e.g. $> 50$ for the 45 baselines of the Very Long Baseline Array), faint target sources can be detected and their positions can be concomitantly measured with high precision. In the approach above, the VLBI phases of the strong reference source are connected, interpolated in time and differenced with the VLBI phases of the faint source that do not need to be connected. The differenced visibilities are then inverted to produce the map of the brightness distribution of the faint target source and its position is determined by reading directly the coordinates of the map peak which are relative to the a priori reference source coordinates. The map is usually highly undersampled but suffices for astrometry. This [*mapping astrometry*]{} technique is implemented in the SPRINT software [@Les90] and a similar procedure is also used within the NRAO AIPS package to produce phase-referenced VLBI maps with absolute source coordinates on the sky. While phase-referencing in this way is efficient, it still provides no direct positional uncertainty as does least-squares fitting of differenced phases [@Sha79]. In order to circumvent this problem, we have developed simulations to evaluate the impact of systematic errors in the derived astrometric results. Such simulations have been carried out for of a pair of sources observed with the Very Long Baseline Array (VLBA) and the European VLBI Network (EVN) at various declinations and angular separations. Systematic errors in station coordinates, Earth rotation parameters, reference source coordinates and tropospheric zenith delays were studied in turn. The results of the simulations are summarized below in tables that indicate positional uncertainties when considering these systematic errors either separately or altogether. Such tables can be further interpolated to determine the accuracy of any full-track experiment with the VLBA and EVN. 
Our study includes atmospheric fluctuations caused by the turbulent atmosphere above all stations. These fluctuations have been considered uniform and equivalent to a delay rate noise of $0.1$ ps/s for all stations. The impact of these fluctuations is limited if the antenna switching cycle between the two sources is fast enough. The phase structure function measured at 22 GHz above the VLA by [@Car99] provides prescriptions on this switching time. At high frequency, it can be as short as 10 s, as e.g. in @Rei03, who carried out precise 43 GHz VLBA astrometric observations of Sgr A$^*$ at a declination of $-28\degr$. Switching time in more clement conditions is typically a few minutes at 8.4 GHz for northern sources. A few applications of [*mapping astrometry*]{} are the search for extra-solar planets around radio-emitting stars [@Les94], the determination of the Gravity Probe B guide star proper motion [@Leb99], the determination of absolute motions of VLBI components in extragalactic sources, e.g. in compact symmetric objects [@Cha03] or core-jet sources [@Ros99], probing the jet collimation region in extragalactic nuclei [@Ly004], pulsar parallax and proper motion measurements [@Bri02] and the determination of parallaxes and proper motions of maser sources in the whole Galaxy as planned with the VERA project [@Kaw00; @Hon00]. Method {#Method} ====== As indicated in e.g. [@Tho86], the theoretical precision of astrometry with the interferometer phase is $$\sigma_{\alpha, \delta} = {1 \over {2 \pi}} ~ {1 \over SNR } ~ { \lambda \over B} ,$$ where $SNR$ is the signal-to-noise ratio of the observation, $\lambda$ is the wavelength and $B$ is the baseline length projected on the sky. For observations with the VLBA ($B\sim 8000$ km), $\lambda = 3.6$ cm, and a modest $SNR$ of $10$, this theoretical precision is a remarkable $\sim 15~ \mu$as. Although a single observation of the target yields an ambiguous position, multiple observations over several hours easily remove ambiguities even with a sparse u-v plane coverage [@Les90]. While the theoretical precision above might be regarded as the potential accuracy attainable with VLBI, systematic errors in the model of the phase limit narrow-angle astrometry precision to roughly ten times this level in practice [@Fom99]. An analytical study of systematic errors in phase-referenced VLBI astrometry over a single baseline is given in @Sha79 and it shows that all systematic errors are scaled by the source separation. Another error analysis in such differential VLBI measurements can be found in @Mor84. However, for modern VLBI arrays with 10 or more antennae, the complex geometry makes the analytical approach intractable. For this reason, we have estimated such systematic errors by simulating VLBI visibilities and inverting them for a range of model parameters (station coordinates, reference source coordinates, Earth Orientation parameters, and tropospheric dry and wet zenith delays) corresponding to the expected errors in these parameters. The visibilities were simulated for a pair of sources at declinations $-25\degr$, $0\degr$, $25\degr$, $50\degr$, $75\degr$, $85\degr$ and with angular separations $0.5\degr$, $1\degr$ and $2\degr$ for the VLBA, EVN and global VLBI array (VLBA+EVN). For each of these cases, we simulated visibilities every 2.5 min from source rise to set (full track) with a lower limit on elevation of 7$\degr$.
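The $\sim 15~\mu$as figure follows directly from the precision formula above; a one-line numerical check (Python):

    import math
    lam, B, snr = 0.036, 8.0e6, 10.0            # wavelength [m], baseline [m], SNR
    sigma_rad = lam / B / (2 * math.pi * snr)   # precision in radians
    print(sigma_rad * 180 / math.pi * 3600 * 1e6)   # ~ 14.8 micro-arcsec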
The adopted flux for each source (calibrator and target) was 1 Jy to make the phase thermal noise negligible in our simulations. For applications to faint target sources, one should combine the corresponding thermal astrometric uncertainty (Eq. 1) with the systematic errors derived below. The simulated visibilities were then inverted using uniform weighting to produce a phase-referenced map of the target source and estimate its position. This operation was repeated 100 times in a [*Monte Carlo*]{} analysis after slightly varying the parameters of the model based on errors drawn from a Gaussian distribution with zero mean and plausible standard deviations. We report the rms of the differences found between the known a priori position of the target source and the resulting estimated positions as a measure of the corresponding systematic errors for each of the above cases. We have adopted the usual astrometric frequency of $8.4$ GHz for this analysis. Phase model used in simulation {#model} ============================== The phase delay and group delay in VLBI are described in @Sov98. The phase $\phi = \nu \tau$ at frequency $\nu$ is related to the interferometer delay $$\tau = \tau_g + \tau_{trop} +\tau_{iono}+ \tau_R + \tau_{struc} + \tau_{clk} .$$ Specifically, the geometric delay is: $$\tau_g = [P][N][EOP] \, \frac{\vec b \cdot \vec k}{c}$$ with the precession matrix $[P]$, the nutation matrix $[N]$, the Earth Orientation Parameters matrix $[EOP]$, the baseline coordinates $\vec b$ in the terrestrial frame, the source direction coordinates $\vec k$ computed with source right ascension and declination in the celestial frame. The “retarded baseline correction” to account for Earth rotation during elapsed time $\tau_g$ must also be modelled [@Sov98]. The differential tropospheric delay $\tau _{trop}$ between the two stations is computed with a static tropospheric model and the simple mapping function $1/\sin E$ (where $E$ is the source elevation at the station) to transform the zenith delay into the delay along the line of sight.
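Schematically, the Monte Carlo procedure above reduces to drawing Gaussian parameter errors, mapping them to a target position offset, and reporting the rms of the offsets. In the Python sketch below, the error budget and the linear sensitivities are purely illustrative placeholders standing in for the full visibility simulation and map inversion:

    import numpy as np
    rng = np.random.default_rng(1)

    # assumed 1-sigma a priori errors of the model parameters
    sigma = {"station [cm]": 1.0, "EOP [mas]": 0.3,
             "ref. source [mas]": 0.3, "zenith delay [cm]": 1.0}
    # toy linear sensitivities (micro-arcsec of target offset per unit error)
    sens = {"station [cm]": 8.0, "EOP [mas]": 12.0,
            "ref. source [mas]": 20.0, "zenith delay [cm]": 15.0}

    offsets = []
    for _ in range(100):                        # Monte Carlo trials
        draws = {k: rng.normal(0.0, s) for k, s in sigma.items()}
        offsets.append(sum(sens[k] * v for k, v in draws.items()))
    print("rms positional error: %.1f micro-arcsec" % np.std(offsets))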
--- abstract: 'Flow-based generative models, conceptually attractive due to tractability of both the exact log-likelihood computation and latent-variable inference, and efficiency of both training and sampling, have led to a number of impressive empirical successes and spawned many advanced variants and theoretical investigations. Despite their computational efficiency, the density estimation performance of flow-based generative models falls significantly behind that of state-of-the-art autoregressive models. In this work, we introduce *masked convolutional generative flow* (**MaCow**), a simple yet effective architecture of generative flow using masked convolution. By restricting the local connectivity to a small kernel, MaCow enjoys fast and stable training and efficient sampling, while achieving significant improvements over Glow for density estimation on standard image benchmarks, considerably narrowing the gap to autoregressive models.' author: - | Xuezhe Ma\ Language Technologies Institute\ Carnegie Mellon University\ Pittsburgh, PA, USA\ `xuezhem@cs.cmu.edu` Eduard Hovy\ Language Technologies Institute\ Carnegie Mellon University\ Pittsburgh, PA, USA\ `hovy@cmu.edu` bibliography: - 'macow.bib' title: 'MaCow: Masked Convolutional Generative Flow' --- Introduction ============ Unsupervised learning of probabilistic models is a central yet challenging problem. Deep generative models have shown promising results in modeling complex distributions such as natural images [@radford2015unsupervised], audio [@van2016wavenet] and text [@bowman2015generating]. A number of approaches have emerged over the years, including Variational Autoencoders (VAEs) [@kingma2014auto], Generative Adversarial Networks (GANs) [@goodfellow2014generative], autoregressive neural networks [@larochelle2011neural; @oord2016pixel], and flow-based generative models [@dinh2014nice; @dinh2016density; @kingma2018glow]. Among these models, flow-based generative models have gained popularity for their capability of estimating densities of complex distributions, efficiently generating high-fidelity syntheses, and automatically learning useful latent spaces. Flow-based generative models typically warp a simple distribution into a complex one by mapping points from the simple distribution to the complex data distribution through a chain of invertible transformations whose Jacobian determinants are efficient to compute. This design guarantees that the density of the transformed distribution can be analytically estimated, making maximum likelihood learning feasible. Flow-based generative models have spawned significant interest in improvements and analyses from both theoretical and practical perspectives, and in applications to a wide range of tasks and domains. In their pioneering work, @dinh2014nice proposed *Non-linear Independent Component Estimation* (NICE), where they first applied flow-based models to modeling complex high-dimensional densities. RealNVP [@dinh2016density] extended NICE with more flexible invertible transformations and experimented on natural images. However, these flow-based generative models have much worse density estimation performance compared to state-of-the-art autoregressive models, and are incapable of synthesizing realistic-looking large images, in contrast to GANs [@karras2017progressive; @brock2018large].
Recently, @kingma2018glow proposed Glow: generative flow with invertible 1x1 convolutions, significantly improving the density estimation performance on natural images. Importantly, they demonstrated that flow-based generative models optimized towards the plain likelihood-based objective are capable of generating realistic-looking high-resolution natural images efficiently. @prenger2018waveglow investigated applying flow-based generative models to speech synthesis by combining Glow with WaveNet [@van2016wavenet]. Unfortunately, the density estimation performance of Glow on natural images still falls behind autoregressive models, such as PixelRNN/CNN [@oord2016pixel; @salimans2017pixelcnn++], Image Transformer [@parmar2018image], PixelSNAIL [@chen2017pixelsnail] and SPN [@menick2018generating]. We note in passing that there is also some work [@rezende2015variational; @kingma2016improved; @zheng2017] applying flows to variational inference. In this paper, we propose a novel architecture of generative flow, *masked convolutional generative flow* (**MaCow**), using masked convolutional neural networks [@oord2016pixel]. The bijective mapping between input and output variables can easily be established; meanwhile, computation of the determinant of the Jacobian is efficient. Compared to inverse autoregressive flow (IAF) [@kingma2016improved], MaCow has the merits of stable training and efficient inference and synthesis, achieved by restricting the local connectivity to a small “masked” kernel, and of large receptive fields, achieved by stacking multiple layers of convolutional flows and using reversed ordering masks (§\[subsec:macow\]). We also propose a fine-grained version of the multi-scale architecture adopted in previous flow-based generative models to further improve the performance (§\[subsec:multi-scale\]). Experimentally, on three benchmark datasets for images — CIFAR-10, ImageNet and CelebA-HQ — we demonstrate the effectiveness of MaCow as a density estimator by consistently achieving significant improvements over Glow on all three datasets. When equipped with the variational dequantization mechanism [@ho2018flow++], MaCow considerably narrows the density estimation gap to autoregressive models (§\[sec:experiment\]). Flow-based Generative Models {#sec:background} ============================ In this section, we first set up notation, describe flow-based generative models, and review Glow [@kingma2018glow], on which MaCow is built. Notations --------- Throughout we use uppercase letters for random variables, and lowercase letters for realizations of the corresponding random variables. Let $X \in \mathcal{X}$ be the random variables of the observed data, e.g., $X$ is an image or a sentence for image and text generation, respectively. Let $P$ denote the true distribution of the data, i.e., $X \sim P$, and $D = \{x_1, \ldots, x_N\}$ be our training sample, where $x_i, i=1,\ldots, N,$ are usually i.i.d. samples of $X$. Let $\mathcal{P} = \{P_\theta : \theta \in \Theta\}$ denote a parametric statistical model indexed by parameter $\theta \in \Theta$, where $\Theta$ is the parameter space. $p$ is used to denote the density of the corresponding distribution $P$. In the literature of deep generative models, deep neural networks are the most widely used parametric models.
The goal of generative models is to learn the parameter $\theta$ such that $P_{\theta}$ can best approximate the true distribution $P$. In the context of maximum likelihood estimation, we wish to minimize the negative log-likelihood of the parameters: $$\label{eq:mle} \min\limits_{\theta \in \Theta} \frac{1}{N} \sum\limits_{i=1}^{N} -\log p_{\theta}(x_i) = \min\limits_{\theta \in \Theta} \mathrm{E}_{\widetilde{P}(X)} [-\log p_{\theta}(X)],$$ where $\widetilde{P}(X)$ is the empirical distribution derived from training data $D$. Flow-based Models ----------------- In the framework of flow-based generative models, a set of latent variables $Z \in \mathcal{Z}$ is introduced with a prior distribution $p_{Z}(z)$, typically a simple distribution like a multivariate Gaussian. For a bijective function $f: \mathcal{X} \rightarrow \mathcal{Z}$ (with $g = f^{-1}$), the change of variables formula defines the model distribution on $X$ by: $$p_{\theta}(x) = p_{Z}(f_{\theta}(x))\left| \det(\frac{\partial f_{\theta}(x)}{\partial x})\right|,$$ where $\frac{\partial f_{\theta}(x)}{\partial x}$ is the Jacobian of $f_{\theta}$ at $x$. The generative process is defined straightforwardly as: $$\begin{array}{rcl} z & \sim & p_{Z}(z) \\ x & = & g_{\theta}(z). \end{array}$$ Flow-based generative models focus on certain types of transformations $f_{\theta}$ for which both the inverse functions $g_{\theta}$ and the Jacobian determinants are tractable to compute. By stacking multiple such invertible transformations in a sequence, which is also called a (normalizing) *flow* [@rezende2015variational], a flow is capable of warping a simple distribution ($p_{Z}(z)$) into a complex one ($p_{\theta}(x)$).
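As a concrete illustration of the change-of-variables objective, the following minimal NumPy sketch evaluates the exact negative log-likelihood for a single affine coupling layer (in the spirit of RealNVP; this is not the masked-convolution flow of MaCow, and the weight matrices are random placeholders):

    import numpy as np

    def affine_coupling(x, w_s, w_t):
        # split x; transform the second half conditioned on the first
        x1, x2 = np.split(x, 2, axis=-1)
        s = np.tanh(x1 @ w_s)              # log-scales, kept bounded
        t = x1 @ w_t                       # translations
        z = np.concatenate([x1, x2 * np.exp(s) + t], axis=-1)
        return z, s.sum(axis=-1)           # log|det Jacobian| = sum of log-scales

    def nll(x, w_s, w_t):
        z, logdet = affine_coupling(x, w_s, w_t)
        log_pz = -0.5 * (z ** 2 + np.log(2 * np.pi)).sum(axis=-1)   # Gaussian prior
        return -(log_pz + logdet)          # exact NLL via change of variables

    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 8))            # a mini-batch of 8-dim "data"
    w_s, w_t = 0.1 * rng.normal(size=(4, 4)), 0.1 * rng.normal(size=(4, 4))
    print(nll(x, w_s, w_t))                # one exact NLL value per sample

Stacking several such layers (with the roles of the two halves swapped in between) simply adds their log-determinants, which is the sense in which a flow warps $p_{Z}(z)$ into $p_{\theta}(x)$.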
--- author: - 'N. Ysard' - 'M. Juvela' - 'L. Verstraete' bibliography: - 'biblio.bib' title: Modelling the spinning dust emission from dense interstellar clouds --- Introduction ============ Discovered in the nineties, the anomalous microwave emission (AME) has aroused great interest [@Kogut1996; @Leitch1997], first because it appears in a frequency window that is optimal for the detection of the Cosmic Microwave Background (CMB) fluctuations. @DL98 proposed that AME could be caused by electric dipole emission of rapidly rotating grains: the spinning dust emission. This mechanism is now most often invoked to explain the AME, and several models have been published [@Ali2009; @Ysard2010a; @Hoang2010; @Silsbee2011]. The study of spinning dust could help in understanding the life cycle of interstellar dust grains because it may be a new tracer of the smallest grains, the interstellar Polycyclic Aromatic Hydrocarbons (PAHs). The preference of spinning dust models over other mechanisms is based on several arguments. First, AME is correlated with dust IR emission, and this correlation is particularly tight for the mid-IR emission of small grains. Second, AME is weakly polarized, as expected for PAHs, because these grains are not expected to be aligned with the interstellar magnetic field [@Battistelli2006; @Casassus2008; @Lopez2011]. Third, the shape and the intensity of the AME can be reproduced with spinning dust spectra (e.g. Watson et al. 2005; Planck Collaboration et al. 2011b, and many other references). However, the spinning dust emission depends on the local physical conditions (gas ionisation state and radiation field) and on the size distribution of small grains. Recent observations of interstellar clouds point out dissimilar morphologies in the mid-IR and in the microwave range that may be explained by local variations of the environmental conditions [@Casassus2006; @Casassus2008; @Ysard2010b; @Castellanos2011; @Vidal2011]. In this work we study the spinning dust emission of interstellar clouds including a treatment of the gas state and radiative transfer. In this context we reexamine the relationship between the AME and the dust IR emission. The paper is organised as follows. In Section \[models\] we describe the models. In Section \[gas\_properties\] we detail our method to estimate the gas properties (ionisation state and temperature). In Section \[environment\] we present the variations of the spinning dust spectrum with the gas density and with the intensity of the radiation field. We also consider variations of the cosmic-ray ionisation rate as suggested by recent observations. In Section \[radiative\_transfer\] we present the spinning dust emission with radiative transfer modelling. Finally, we present in Section \[conclusions\] our conclusions. Models ====== Current models of spinning dust [@DL98; @Ali2009; @Ysard2010a; @Hoang2010; @Silsbee2011] take into account a number of processes for the rotational excitation and damping of the grains: the emission of IR photons, the collisions with neutral and ionised gas particles, the plasma drag, the photoelectric effect, and the formation of H$_2$ molecules at the surface of the grains. The publicly available SpDust[^1] code [@Ali2009; @Silsbee2011] includes the most recent developments regarding the gas-grain interactions and the grain dynamics (rotation around non-principal axis of inertia).
The results of SpDust agree well with other models that include a more detailed treatment of the IR emission or of the gas-grain interactions [@Ysard2010a; @Hoang2010]. SpDust is fast and well-suited for coupling to other codes, especially radiative transfer codes. In the following, we use SpDust to model the spinning dust emission. In order to estimate dust emission from the mid-IR to the microwave range in a consistent way, we coupled SpDust with the dust emission model described by @Compiegne2011, DustEM[^2]. DustEM is based on the formalism of @Desert1990 and includes three dust types: interstellar PAHs, amorphous carbonaceous grains, and amorphous silicates. We used the dust populations defined by @Compiegne2011 for the diffuse, high galactic latitude interstellar medium (DHGL). For PAHs we assumed a log-normal size distribution with centroid $a_0=0.64$ nm and width $\sigma=0.4$, with a dust-to-gas mass ratio $M_{PAH}/M_H=7.8\times 10^{-4}$. In current models, the smallest grains (PAHs) carry the spinning dust emission that is sensitive to the gas density and the radiation field intensity[^3], $G_0$, but also to the ionisation state (abundance of the $\ion{H}{ii}$ and $\ion{C}{ii}$ ions, denoted $x_H$ and $x_C$, respectively). Radiative transfer calculations are performed with the CRT[^4] tool [@Juvela2003; @Juvela2005], to which we have coupled DustEM and SpDust. CRT is only used to estimate the dust temperature and the resulting dust emission from the mid-IR to the microwave range. Our treatment of the gas properties is presented in Section \[gas\_properties\]. Gas state {#gas_properties} ========= As discussed above, the dynamics of spinning dust grains involves gas-grain interactions and radiative processes. The spinning dust emission is therefore sensitive to the gas density ($n_H$) and temperature ($T_{{\rm gas}}$), and to the intensity of the UV radiation field traced by the factor $G_0$. In particular the gas-grain interactions depend on the gas ionisation state, i.e., the abundance of the major charged species (electrons, ions, etc.), which primarily depends on $n_H$, $T_{{\rm gas}}$, and $G_0$, but also on the chemistry occurring locally. Realistic modelling consequently requires a consistent treatment of the spinning motion of the grains and of the gas ionisation state. For the present work, where we consider the influence of radiative transfer on the spinning dust emission (see Section \[radiative\_transfer\]), we treat the gas ionisation state with a simplified scheme that we present below. Using this scheme, we then discuss the influence of $n_H$ and $G_0$, and look at the effect of an enhanced cosmic-ray ionisation rate as suggested by recent observations (see Section \[environment\]). When the radiation field intensity is low ($G_0 \leqslant 1$), inelastic collisions with neutral and ionised species of the interstellar gas become the dominant processes for the excitation and the damping of the grain rotation [@Ali2009; @Ysard2010a]. The ion fractions $x_H = n_{\ion{H}{ii}}/n_H$ and $x_C = n_{\ion{C}{ii}}/n_H$, where $n_H = n(\ion{H}{i}) + n(\ion{H}{ii}) + 2n({\rm H}_2)$, accordingly need to be carefully determined to perform a quantitative study of the variations of spinning dust emission with environmental properties.
Where CO has not formed (unshielded regions in which most of the gas-phase carbon is in the form of neutral or singly ionised carbon), we estimate the electron and ion fractions ($x_e=n_e/n_H$, $x_H$, and $x_C$) by simultaneously solving the hydrogen and carbon ionisation equilibria, including the recombination of carbon with H$_2$ [@Roellig2006]. Furthermore, we take into account the recombination of $\ion{C}{ii}$ with PAHs as described in @Wolfire2008. In neutral gas and neglecting the contribution of helium, the ionisation balance of hydrogen including H$_2$ reads $$\begin{aligned} ({\rm \ion{H}{i}, H_2}) + {\rm CR} & \rightleftarrows & ({\rm \ion{H}{ii}, H_2^+}) + {\rm e}^{\,-} \\ \zeta_{CR} (1-x_H) & = & x_H x_e n_H a_H,\end{aligned}$$ where $\zeta_{CR}$ is the cosmic-ray ionisation rate per second and per proton and $a_H = 3.5 \times 10^{-12} (T/300 {\rm K})^{-0.75}$ cm$^3$/s is the recombination rate [@Roellig2006]. Unless otherwise stated, we assume $\zeta_{CR} = 5 \times 10^{-17}$ s$^{-1}$H$^{-1}$. In regions where CO has not formed, we assume that $\ion{C}{ii}$ is the dominant ionised heavy element and write the electron fraction as $x_e \simeq x_H + x_C$. The $\ion{C}{ii}$ abundance thus becomes $$\label{premiere_estimation} x_C = x_e - \frac{1}{1+x_e n_H a_H / \zeta_{CR}}.$$ On the other hand, $x_C$ can be derived from the ionisation balance of carbon where we take into account the following reactions: $$\begin{aligned} {\rm \ion{C}{i}} + h\nu & \stackrel{k_i}{\longrightarrow} & {\rm \ion{C}{ii}} + {\rm e}^{\,-} \\ {\rm \ion{C}{ii}} + {\rm e}^{\,-} & \stackrel{k_r}{\longrightarrow} & {\rm \ion{C}{i}}.\end{aligned}$$
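A minimal numerical illustration of the hydrogen balance above is given in the Python/SciPy sketch below; for simplicity carbon is assumed here to remain fully ionised at a typical gas-phase abundance of $1.4\times10^{-4}$, which is an added assumption rather than the full coupled carbon equilibrium of Section \[gas\_properties\]:

    import numpy as np
    from scipy.optimize import brentq

    def ion_fractions(n_H, T_gas=100.0, zeta_CR=5e-17, x_C=1.4e-4):
        # hydrogen balance: zeta_CR * (1 - x_H) = x_H * x_e * n_H * a_H,
        # with x_e = x_H + x_C and carbon assumed fully ionised
        a_H = 3.5e-12 * (T_gas / 300.0) ** -0.75    # recombination rate [cm^3/s]
        bal = lambda x_H: zeta_CR * (1.0 - x_H) - x_H * (x_H + x_C) * n_H * a_H
        x_H = brentq(bal, 0.0, 1.0)                 # root is bracketed in [0, 1]
        return x_H, x_H + x_C

    for n_H in (10.0, 1e2, 1e4):                    # gas densities [cm^-3]
        x_H, x_e = ion_fractions(n_H)
        print(f"n_H = {n_H:8.0f}  x_H = {x_H:.2e}  x_e = {x_e:.2e}")

As expected, the hydrogen ion fraction drops with increasing density, so that at high $n_H$ the electron fraction is set by the carbon abundance.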
--- abstract: 'In order to investigate the low-energy antiferromagnetic Cu-spin correlation and its relation to the superconductivity, we have performed muon spin relaxation ($\mu$SR) measurements using single crystals of the electron-doped high-$T_{\rm c}$ cuprate Pr$_{1-x}$LaCe$_x$CuO$_4$ in the overdoped regime. The $\mu$SR spectra have revealed that the Cu-spin correlation develops in the overdoped samples where the superconductivity appears. The development of the Cu-spin correlation weakens with increasing $x$ and is negligibly small in the heavily overdoped sample where the superconductivity almost disappears. Considering that the Cu-spin correlation also exists in the superconducting electron-doped cuprates in the undoped and underdoped regimes \[T. Adachi [*et al.*]{}, J. Phys. Soc. Jpn. [**85**]{}, 114716 (2016)\], our findings suggest that the mechanism of the superconductivity is related to the low-energy Cu-spin correlation in the entire doping regime of the electron-doped cuprates.' author: - 'Malik A. Baqiya' - Tadashi Adachi - Akira Takahashi - Takuya Konno - Taro Ohgi - Isao Watanabe - Yoji Koike title: 'Muon spin relaxation study of the spin correlation in the overdoped regime of electron-doped high-$T_{\rm c}$ cuprate superconductors' --- Introduction {#sec:introduction} ============ In the research of high-$T_{\rm c}$ cuprate superconductivity, the relationship between the Cu-spin correlation and superconductivity has been the central issue in both hole-doped and electron-doped cuprates. For the hole-doped cuprate of La$_{2-x}$Sr$_x$CuO$_4$ (LSCO), neutron-scattering experiments have revealed that the commensurate Cu-spin correlation in the antiferromagnetic (AF) state of the parent compound changes to the incommensurate one with hole doping in the superconducting (SC) state, [@yamada-prb] followed by the disappearance of both the incommensurate Cu-spin correlation and the superconductivity in the heavily overdoped regime. [@wakimoto] Muon-spin-relaxation ($\mu$SR) measurements in Zn-impurity-substituted La$_{2-x}$Sr$_x$Cu$_{1-y}$Zn$_y$O$_4$ have revealed that the development of the Cu-spin correlation vanishes at the end point of the SC region in the heavily overdoped regime of the phase diagram. [@risdi-lsco] Therefore, the incommensurate Cu-spin correlation appears to be intimately related to the superconductivity. For the electron-doped cuprates, on the other hand, the commensurate Cu-spin correlation has been observed in the optimally doped regime of Nd$_{2-x}$Ce$_x$CuO$_4$ [@yamada-prl] and Pr$_{1-x}$LaCe$_x$CuO$_4$ (PLCCO). [@kang; @wilson] The relationship between the commensurate Cu-spin correlation and superconductivity has been unclear in the electron-doped cuprates. Recently, the so-called undoped (Ce-free) superconductivity in the electron-doped cuprates has attracted considerable research attention. It has been reported that the superconductivity appears even in the parent compound of $x=0$ and in a wide range of $x$ in Nd$_{2-x}$Ce$_x$CuO$_4$ thin films through the appropriate reduction annealing to remove excess oxygen from the as-grown thin films. [@tsukada; @matsumoto] The superconductivity in the parent compound has also been confirmed in the polycrystalline samples. [@asai; @takamatsu] Two possible mechanisms of the undoped superconductivity have been proposed: electron doping by the oxygen deficiency (oxygen non-stoichiometry) [@horio-nco] and the collapse of the charge-transfer gap due to the square-planar coordination of oxygen in the CuO$_2$ plane.
[@adachi-jpsj] If the latter is the case, the undoped superconductivity indicates that the phase diagram is completely different from the former one, that is, the superconductivity in the electron-doped cuprates cannot be understood in terms of carrier doping into the parent Mott insulators as in the case of the hole-doped cuprates. An important issue is whether or not the Cu-spin correlation is related to the superconductivity in the electron-doped cuprates. Through the improved reduction annealing, high-quality SC single crystals have been obtained in underdoped Pr$_{2-x}$Ce$_x$CuO$_4$ with $x \ge 0.04$ [@brinkmann] and Pr$_{1.3-x}$La$_{0.7}$Ce$_x$CuO$_{4+\delta}$ with $x \ge 0.05$. [@adachi-jpsj; @horio; @adachi-review] Previously, we performed $\mu$SR measurements of the SC parent polycrystal of La$_{1.8}$Eu$_{0.2}$CuO$_4$ and the SC underdoped single crystal of Pr$_{1.3-x}$La$_{0.7}$Ce$_x$CuO$_4$ with $x=0.10$. [@adachi-review; @adachi-jpsj2] It has been found that a short-range magnetic order is formed at low temperatures in both samples, suggesting the coexistence of superconductivity and the short-range magnetic order. The development of the Cu-spin correlation has also been confirmed in $\mu$SR measurements of the SC parent thin film of La$_{1.9}$Y$_{0.1}$CuO$_4$. [@kojima] These results suggest that a small amount of residual excess oxygen in a sample causes the development of the Cu-spin correlation and/or the formation of the short-range magnetic order, indicating that the undoped and electron-underdoped cuprates are strongly correlated electron systems. The next issue is how the Cu-spin correlation changes with electron doping, concomitantly with the weakening of the superconductivity in the overdoped regime. Inelastic neutron-scattering experiments in the overdoped PLCCO with $x \le 0.18$ have revealed that the characteristic energy of the Cu-spin correlation decreases with increasing $x$ and seems to disappear with the superconductivity. [@fujita] This is different from the results of the hole-doped cuprates, in which the characteristic energy of the Cu-spin correlation is unchanged but the spectral weight decreases with hole doping, [@wakimoto] suggesting the occurrence of a phase separation into SC and normal-state regions in a sample. [@tanabe] In earlier $\mu$SR measurements in the SC polycrystal of PLCCO with $x=0.14$, a slowing down of the Cu-spin fluctuations was observed at low temperatures without any magnetic order. [@risdi-plcco] NMR experiments of the SC single crystal of Pr$_{1.3-x}$La$_{0.7}$Ce$_x$CuO$_4$ with $x=0.15$ have also indicated the presence of AF spin fluctuations. [@yamamoto] These results suggest that, compared with the short-range magnetic order in the parent and underdoped samples, [@adachi-review; @adachi-jpsj2] the development of the Cu-spin correlation weakens with increasing $x$ but is apparently observed in the slightly overdoped regime. In order to obtain detailed information on the low-energy Cu-spin correlation in the heavily overdoped regime and its relation to the superconductivity, we have carried out $\mu$SR measurements using PLCCO single crystals in the heavily overdoped regime of $x=0.17$ and $0.20$. Experimental ============ Single crystals of PLCCO with $x=0.17$ and $0.20$ were prepared by the traveling solvent floating zone method. [@lambacher; @malik] The quality of the grown crystals was confirmed to be good by x-ray back-Laue photography and powder x-ray diffraction.
The composition of the crystals was analyzed by inductively-coupled-plasma spectrometry. The reduction annealing was carried out in a vacuum of $2 \times 10^{-4}$ Pa: a two-step annealing at 900$^{\rm o}$C for 12 h and at 500$^{\rm o}$C for 12 h was performed for $x=0.17$, while the improved one-step reduction annealing at 800$^{\rm o}$C for 24 h was performed for $x=0.20$. [@adachi-jpsj] Magnetic-susceptibility measurements were performed using an SC quantum interference device (SQUID) magnetometer (Quantum Design, MPMS). Figure 1 shows the temperature dependence of the magnetic susceptibility of PLCCO with $x=0.17$ and $0.20$ together with $x=0.13$ and $0.15$. [@malik] The SC transition temperature $T_{\rm c}$ of $x=0.17$ is $\sim 5$ K and the Meissner diamagnetism at 2 K is much smaller than those of $x=0.13$ and $0.15$, indicating that the superconductivity is weak. For $x=0.20$, the Meissner diamagnetism is unobservable, indicating a non-SC state of this sample. As shown in
--- abstract: 'X-ray observations with the ROSAT High Resolution Imager (HRI) often have spatial smearing on the order of 10$\arcsec$ (Morse 1994). This degradation of the intrinsic resolution of the instrument (5$\arcsec$) can be attributed to errors in the aspect solution associated with the wobble of the spacecraft or with the reacquisition of the guide stars. We have developed a set of IRAF/PROS and MIDAS/EXSAS routines to minimize these effects. Our procedure attempts to isolate aspect errors that are repeated through each cycle of the wobble. The method assigns a ‘wobble phase’ to each event based on the 402 second period of the ROSAT wobble. The observation is grouped into a number of phase bins and a centroid is calculated for each sub-image. The corrected HRI event list is reconstructed by adding the sub-images which have been shifted to a common source position. This method has shown a $\sim$30% reduction of the full width at half maximum (FWHM) of an X-ray observation of the radio galaxy 3C 120. Additional examples are presented.' author: - 'D.E. Harris, J.D. Silverman, G. Hasinger' - 'I. Lehmann' date: 'Received date / Accepted date' title: Spatial Corrections of ROSAT HRI Observations --- Introduction ============ Spatial analysis of ROSAT HRI observations is often plagued by poor aspect solutions, precluding the attainment of the potential resolution of about 5$\arcsec$. In many cases (but not all), the major contributions to the degradation in the effective Point Response Function (PRF) come from aspect errors associated either with the ROSAT wobble or with the reacquisition of the guide stars. To avoid the possibility of blocking sources by the window support structures (Position Sensitive Proportional Counter) or to minimize the chance that the pores near the center of the microchannel plate would become burned out from excessive use (High Resolution Imager), the satellite normally operates with a constant dither for pointed observations. The period of the dither is 402 s and the phase is tied to the spacecraft clock. Any given point on the sky will track back and forth on the detector, tracing out a line of length $\approx$ 3 arcmin with position angle of 135$^{\circ}$ in raw detector coordinates (for the HRI). Imperfections in the star tracker (see section \[sec:MM\]) can produce an erroneous image if the aspect solution is a function of the wobble track on the CCD of the star tracker. This work is similar to an analysis by Morse (1994) except that we do not rely on a direct correlation between spatial detector coordinates and phase of the wobble. Moreover, our method addresses the reacquisition problem which produces the so-called cases of “displaced OBIs”. An “OBI” is an observation interval, normally lasting for 1 ks to 2 ks (i.e. a portion of an orbit of the satellite). A new acquisition of the guide stars occurs at the beginning of each OBI and we have found that different aspect solutions often result. Occasionally a multi-OBI observation consists of two discrete aspect solutions. A recent example (see section \[sec:120B\]) showed one OBI for which the source was 10$^{\prime\prime}$ north of its position in the other 17 OBIs. Note that this sort of error is quite distinct from the wobble error. Throughout this discussion, we use the term “PRF” in the dynamic sense: it is the point response function realized in any given situation, i.e., that which includes whatever aspect errors are present. We start with an observation for which the PRF is much worse than it should be.
We seek to improve the PRF by isolating the offending contributions and correcting them if possible or rejecting them if necessary. Model and Method {#sec:MM} ================ The “model” for the wobble error assumes that the star tracker’s CCD has some pixels with a different gain from others. As the wobble moves the de-focused star image across the CCD, the centroiding of the stellar image yields the wrong value because it is based on the relative response from several pixels. If the roll angle is stable, it is likely that the error is repeated during each cycle of the wobble since the star’s path is over the same pixels (to a first approximation if the aspect ‘jitter’ is small compared to the pixel size of $\approx$ 1 arcmin). What is not addressed is the error in roll angle induced by erroneous star positions. If this error is significant, the centroiding technique with one strong source will fix only that source and its immediate environs. The correction method assigns a ‘wobble phase’ to each event, then divides each OBI (or other suitably defined time interval) into a number of wobble phase bins. The centroid of the reference source is measured for each phase bin. The data are then recombined after applying x and y offsets in order to ensure that the reference source is aligned for each phase bin. What is required is that there are enough counts in the reference source to obtain a reliable centroid. Variations of this method for sources weaker than approximately 0.1 count/s involve using all OBIs together before dividing into phase bins. This is a valid approach so long as the nominal roll angle is stable (i.e. within a few tenths of a degree) for all OBIs, and so long as major shifts in the aspect solutions of different OBIs are not present. Diagnostics =========== Our normal procedure for evaluation is to measure the FWHM (both the major and minor axes) of the observed response on a map smoothed with a 3$^{\prime\prime}$ Gaussian. For the best data, we find the resulting FWHM is close to 5.7$^{\prime\prime}$. While there are many measures of source smearing, we prefer this approach over measuring radial profiles because there is no uncertainty relating to the position of the source center; we are normally dealing with elliptical rather than circular distributions; and visual inspection of the two-dimensional image serves as a check on severe abnormalities. It has been our experience that when we are able to reduce the FWHM of the PRF, the wings of the PRF are also reduced. Wobble Errors ------------- If the effective PRF is evaluated for each OBI separately, the wobble problem manifests itself as a degraded PRF in one or more OBIs. Most OBIs contain only the initial acquisition of the guide stars, so when the PRF of a particular OBI is smeared, it is likely to be caused by the wobble error and the solution is to perform the phased ‘de-wobbling’. Misplaced OBI ------------- For those cases where each OBI has a relatively good PRF but the positions of each centroid have significant dispersion, the error cannot be attributed to the wobble. We use the term ‘misplaced OBI’ to describe the situation in which a different aspect solution is found when the guide stars are reacquired. In the worst case, multiple aspect solutions can produce an image in which every source in the field has a companion displaced by anywhere from 10 to 30 arcsec or more. When the separation is less than 10 arcsec, the source can appear to have a teardrop shape (see section \[sec:120A\]) or an egg shape.
However, depending on the number of different aspect solutions, almost any arbitrary distortion to the (circularly symmetric) ideal PRF is possible. The fix for these cases is simply to find the centroid of each OBI and shift the OBIs before co-adding (e.g., see Morse et al. 1995). IRAF/PROS Implementation ======================== The ROSAT Science Data Center (RSDC) at SAO has developed scripts to assist users in evaluating individual OBIs and performing the operations required for de-wobbling and alignment. The scripts are available from our anonftp area: sao-ftp.harvard.edu. cd to pub/rosat/dewob. An initial analysis needs to be performed to determine the stable roll angle intervals, to check for any misalignment of OBIs and to examine the guide star combinations. These factors, together with the source intensity, are important in deciding what can be done and the best method to use. OBI by OBI Method {#sec:ObyO} ----------------- If the observation contains a strong source ($\ge$ 0.1 counts/s) near the field center (i.e. close enough to the center that the mirror blurring is not important), then the preferred method is to dewobble each OBI. The data are thus divided into n $\times$ p qpoe files (n = number of OBIs; p = number of phase bins). The position of the centroid of the reference source is determined and each file is shifted in x and y so as to align the centroids from all OBIs and all phase bins. The data are then co-added or stacked to realize the final image (qpoe file). Stable Roll Angle Intervals --------------------------- For sources weaker than 0.1 counts/s, it is normally the case that there are not enough counts for centroiding when 10 phase bins are used. If it is determined that there are no noticeable shifts between OBIs, then it is possible to use many OBIs together so long as the roll angle does not change.
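The phase-binned correction of section \[sec:MM\] can be sketched in a few lines (Python; a toy event list with a deliberately injected 402 s periodic aspect error stands in for a real HRI event file, and for simplicity the centroid is taken over all events rather than over a reference-source region):

    import numpy as np

    def dewobble(t, x, y, period=402.0, nbins=10):
        # assign a wobble phase to each event, centroid each phase bin,
        # and shift the bins to a common source position before co-adding
        phase = np.floor((t % period) / period * nbins).astype(int)
        xc, yc = x.copy(), y.copy()
        for b in range(nbins):
            sel = phase == b
            if sel.any():
                xc[sel] += x.mean() - x[sel].mean()
                yc[sel] += y.mean() - y[sel].mean()
        return xc, yc

    rng = np.random.default_rng(2)
    t = rng.uniform(0.0, 402.0 * 20, 5000)        # events over 20 wobble cycles
    wob = 3.0 * np.sin(2 * np.pi * t / 402.0)     # repeating aspect error
    x = rng.normal(0.0, 2.0, 5000) + wob          # smeared detector coordinates
    y = rng.normal(0.0, 2.0, 5000)
    xc, yc = dewobble(t, x, y)
    print(x.std(), xc.std())                      # the smearing is reduced

The misplaced-OBI correction is the same recipe with OBI index in place of phase bin: centroid each OBI and shift before stacking.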
--- abstract: 'We consider a basic cache network, in which a single server is connected to multiple users via a shared bottleneck link. The server has a database of files (content). Each user has an isolated memory that can be used to cache content in a prefetching phase. In a following delivery phase, each user requests a file from the database, and the server needs to deliver users’ demands as efficiently as possible by taking into account their cache contents. We focus on an important and commonly used class of prefetching schemes, where the caches are filled with uncoded data. We provide the exact characterization of the rate-memory tradeoff for this problem, by deriving both the *minimum average rate* (for a uniform file popularity) and the *minimum peak rate* required on the bottleneck link for a given cache size available at each user. In particular, we propose a novel caching scheme, which strictly improves the state of the art by exploiting commonality among user demands. We then demonstrate the exact optimality of our proposed scheme through a matching converse, by dividing the set of all demands into types, and showing that the placement phase in the proposed caching scheme is universally optimal for all types. Using these techniques, we also fully characterize the rate-memory tradeoff for a decentralized setting, in which users fill out their cache content without any coordination.' author: - 'Qian Yu, Mohammad Ali Maddah-Ali, and A. Salman Avestimehr, [^1] [^2] [^3] [^4] [^5] [^6]' bibliography: - 'uache\_checked.bib' title: | The Exact Rate-Memory Tradeoff\ for Caching with Uncoded Prefetching --- Caching, Coding, Rate-Memory Tradeoff, Information-Theoretic Optimality Introduction ============ Caching is a commonly used approach to reduce traffic rate in a network system during peak-traffic times, by duplicating part of the content in the memories distributed across the network. In its basic form, a caching system operates in two phases: (1) a placement phase, where each cache is populated up to its size, and (2) a delivery phase, where the users reveal their requests for content and the server has to deliver the requested content. During the delivery phase, the server exploits the content of the caches to reduce network traffic. Conventionally, caching systems have been based on uncoded unicast delivery where the objective is mainly to maximize the hit rate, i.e. the chance that the requested content can be delivered locally [@sleator1985amortized; @dowdy82; @almeroth96; @dan96; @korupolu99; @meyerson01; @baev08; @borst10]. While in systems with a single cache memory this approach can achieve optimal performance, it has recently been shown in [@maddah-ali12a] that for multi-cache systems the optimality no longer holds. In [@maddah-ali12a], an information theoretic framework for multi-cache systems was introduced, and it was shown that coding can offer a significant gain that scales with the size of the network. Several coded caching schemes have been proposed since then [@DBLP:journals/corr/Chen14h; @wan2016caching; @sahraei2016k; @tian2016caching; @amiri2016fundamental; @amiri2016coded].
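To make the kind of improvement pursued in this paper concrete, the sketch below compares the peak rate of the scheme in [@maddah-ali12a] with the rate obtained once redundant multicast messages are removed when there are more users than files. The closed-form expression in `rate_uncoded_opt` is quoted in the form associated with the scheme developed here for integer $t = KM/N$; it is a sketch for orientation, not a substitute for the formal statements in later sections:

    from math import comb

    def rate_mn(K, N, t):
        # peak rate of the Maddah-Ali--Niesen scheme at memory M = t*N/K
        return (K - t) / (t + 1)

    def rate_uncoded_opt(K, N, t):
        # peak rate after removing redundant multicasts (matters when K > N)
        return (comb(K, t + 1) - comb(K - min(K, N), t + 1)) / comb(K, t)

    K, N = 5, 2                        # more users than files
    for t in range(K + 1):             # memory M = t*N/K
        print(f"M = {t * N / K:.1f}  MN: {rate_mn(K, N, t):6.3f}"
              f"  improved: {rate_uncoded_opt(K, N, t):6.3f}")

At $M=0$, for instance, the improved expression gives $\min(K,N)=2$ transmissions instead of $K=5$, since at most $N$ distinct files can ever be requested.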
The caching problem has also been extended in various directions, including decentralized caching [@maddah-ali13], online caching [@pedarsani13], caching with nonuniform demands [@niesen13; @zhang15; @ji2015order; @ramakrishnan2015efficient], hierarchical caching [@hachem14; @karamchandani14; @hachem15], device-to-device caching [@ji14b], cache-aided interference channels [@maddah2015cache; @naderializadeh2016fundamental; @hachem2016layered; @DBLP:journals/corr/HachemND16a], caching on file selection networks [@wang2015information; @DBLP:journals/corr/WangLG15; @lim2016information], caching on broadcast channels [@timo2015joint; @bidokhti2016erasure; @bidokhti2016noisy; @bidokhti2016upper], and caching for channels with delayed feedback with channel state information [@zhang2015fundamental; @zhang2015coded]. The same idea is also useful in the context of distributed computing, in order to take advantage of extra computation to reduce the communication load [@2016arXiv160407086L; @globedcd16; @li2016scalable; @7901473; @yu2017howto]. Characterizing the exact rate-memory tradeoff in the above caching scenarios is an active line of research. Besides developing better achievability schemes, there have been efforts in tightening the outer bound of the rate-memory tradeoff [@ghasemi15; @lim2016information; @sengupta15; @DBLP:journals/corr/WangLG16; @tian2016symmetry; @prem2015critical]. Nevertheless, in almost all scenarios, there is still a gap between the state-of-the-art communication load and the converse, leaving the exact rate-memory tradeoff an open problem. In this paper, we focus on an important class of caching schemes, where the prefetching scheme is required to be uncoded. In fact, almost all caching schemes proposed for the above mentioned problems use uncoded prefetching. As a major advantage, uncoded prefetching allows us to handle asynchronous demands without increasing the communication rates, by dividing files into smaller subfiles [@maddah-ali13]. Within this class of caching schemes, we characterize the exact rate-memory tradeoff for both the *average rate* for uniform file popularity and the *peak rate*, in both centralized and decentralized settings, for all possible parameter values. In particular, we first propose a novel caching strategy for the centralized setting (i.e., where the users can coordinate in designing the caching mechanism, as considered in [@maddah-ali12a]), which strictly improves the state of the art, reducing both the average rate and the peak rate. We exploit commonality among user demands by showing that the scheme in [@maddah-ali12a] may introduce redundancy in the delivery phase, and proposing a new scheme that effectively removes all such redundancies in a systematic way. In addition, we demonstrate the exact optimality of the proposed scheme through a matching converse. The main idea is to divide the set of all demands into smaller subsets (referred to as types), and derive tight lower bounds for the minimum peak rate and the minimum average rate on each type separately. We show that, when the prefetching is uncoded, the rate-memory tradeoff can be completely characterized using this technique, and the placement phase in the proposed caching scheme universally achieves those minimum rates on all types. Moreover, we extend the techniques we developed for the centralized caching problem to characterize the exact rate-memory tradeoff in the decentralized setting (i.e. 
where the users cache the contents independently without any coordination, as considered in [@maddah-ali13]). Based on the proposed centralized caching scheme, we develop a new decentralized caching scheme that strictly improves the state of the art [@maddah-ali13; @amiri2016coded]. In addition, we formally define the framework of decentralized caching, and prove matching converses given the framework, showing that the proposed scheme is optimal. To summarize, the main contributions of this paper are as follows: - Characterizing the rate-memory tradeoff for average rate, by developing a novel caching design and proving a matching information theoretic converse. - Characterizing the rate-memory tradeoff for peak rate, by extending the achievability and converse proofs to account for the worst-case demands. - Characterizing the rate-memory tradeoff for both average rate and peak rate in a decentralized setting, where the users cache the contents independently without coordination. Furthermore, in one of our recent works [@yu2017characterizing], we have shown that the achievability scheme we developed in this paper also leads to the tightest known characterization (within a factor of $2$) of the general problem with coded prefetching, for both average rate and peak rate, in both centralized and decentralized settings. The problem of caching with uncoded prefetching was initiated in [@kai2016optimality; @wan2016caching], which showed that the scheme in [@maddah-ali12a] is optimal when considering *peak rate* and *centralized caching*, if there are more files than users. Although not stated in [@kai2016optimality; @wan2016caching], the converse bound in our paper for the special case of peak rate and centralized setting could have also been derived using their approach. In this paper, however, we introduce the novel idea of demand types, which allows us to go beyond and characterize the rate-memory tradeoff for both peak rate and average rate for all possible parameter values, in both centralized and decentralized settings. Our result covers the peak-rate centralized setting and strictly improves the bounds in all other cases. More importantly, we introduce a new achievability scheme, which strictly improves the scheme in [@maddah-ali12a]. The rest of this paper is organized as follows. Section \[sec:sys\] formally establishes a centralized caching framework, and defines the main problem studied in this paper.
[**An Ignored Mechanism**]{} [**for the Longitudinal Recoil Force in Railguns and**]{} [**Revitalization of the Riemann Force Law**]{} Department of Electrical Engineering National Tsinghua University Hsinchu, Taiwan **Abstract** – The electric induction force due to a time-varying current is used to account for the longitudinal recoil force exerted on the rails of railgun accelerators. As observed in the experiments, this induction force is longitudinal to the rails and can be the strongest at the heads of the rails. Besides, for the force due to a closed circuit, it is shown that the Riemann force law, which is based on a potential energy depending on a relative speed and is in accord with Newton’s law of action and reaction, can reduce to the Lorentz force law. PACS numbers: 03.50.De, 41.20.-q [**1. Introduction**]{}\ It is known that a railgun utilizes the magnetic force to accelerate an armature to move along two parallel rails on which it is placed. Further, it has been reported that a recoil force, which is longitudinal to the rails and is exerted on them, was observed during the acceleration of the armature [@Graneau87]. Based on the Biot-Savart (Grassmann) force law, the magnetic force exerted on a wire segment of directed length $d\mathbf{l}_{1}$ and carrying a current $I_{1}$ due to another current element $I_{2}d\mathbf{l}_{2}$ is given by $$\mathbf{F}=-\frac{\mu _{0}}{4\pi }I_{1}I_{2}\frac{1}{R^{2}}\left[ \hat{R}(d\mathbf{l}_{1}\cdot d\mathbf{l}_{2})-(d\mathbf{l}_{1}\cdot \hat{R})d\mathbf{l}_{2}\right] ,\eqno (1)$$ where $\hat{R}$ is a unit vector pointing from element 2 to element 1 and $R$ is the separation distance between them. By using a vector identity it is readily seen that the magnetic force is always perpendicular to the wire segment carrying the current $I_{1}$. Thus the longitudinal force cannot be accounted for by the Biot-Savart force law. Alternatively, in some experiments the Ampère force law $$\mathbf{F}=-\frac{\mu _{0}}{4\pi }I_{1}I_{2}\frac{\hat{R}}{R^{2}}\left[ 2(d\mathbf{l}_{1}\cdot d\mathbf{l}_{2})-3(d\mathbf{l}_{1}\cdot \hat{R})(d\mathbf{l}_{2}\cdot \hat{R})\right] \eqno (2)$$ is applied to account for this longitudinal recoil force [@Graneau87], though this force law is not well accepted. From this law it seems that the longitudinal force can be expected. However, it can be shown that the force predicted from the Ampère law is identical to the one from the Biot-Savart law, when the force is due to a closed circuit with uniform current, as is ordinarily the case. Such an identity has also been proved by two elegant but similar approaches using vector identities [@Jolly; @Ternan], where the current is given by a volume density, as it actually is, and the singularity problem which occurs when the distance $R$ becomes zero for the self-action term is thereby avoided. In these derivations the magnetostatic condition, under which the divergence of the current density is zero, is assumed. A closed circuit with uniform current is a common case of this condition. Some specific analytical or numerical integrations with volume or even surface current densities [@Moyssides; @Assis96] also support the identity. Thereby, without doubt, the Ampère law is identical to the Biot-Savart law for the force due to closed circuits and hence the longitudinal recoil force can be accounted for by neither of them.
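The identity just stated is easy to verify numerically: the total force exerted by a closed loop of uniform current on an external current element comes out the same under Eqs. (1) and (2), even though the element-by-element integrands differ. A short Python sketch (the loop geometry and the test element are arbitrary illustrative choices):

    import numpy as np
    mu0 = 4e-7 * np.pi

    def f_grassmann(dl1, dl2, R):          # Eq. (1): force on element 1
        Rn = np.linalg.norm(R); Rh = R / Rn
        return -mu0 / (4 * np.pi * Rn**2) * (Rh * np.dot(dl1, dl2)
                                             - np.dot(dl1, Rh) * dl2)

    def f_ampere(dl1, dl2, R):             # Eq. (2)
        Rn = np.linalg.norm(R); Rh = R / Rn
        return -mu0 / (4 * np.pi * Rn**2) * Rh * (2 * np.dot(dl1, dl2)
                 - 3 * np.dot(dl1, Rh) * np.dot(dl2, Rh))

    # closed circular loop of unit current, discretized into line elements
    th = np.linspace(0.0, 2 * np.pi, 2001)[:-1]
    pts = np.c_[np.cos(th), np.sin(th), np.zeros_like(th)]
    dls = np.roll(pts, -1, axis=0) - pts            # directed elements dl2
    mid = 0.5 * (pts + np.roll(pts, -1, axis=0))    # element midpoints

    r1 = np.array([0.3, 0.2, 0.4])                  # test element position
    dl1 = np.array([0.0, 0.0, 1e-3])                # test element direction
    F_g = sum(f_grassmann(dl1, d, r1 - m) for d, m in zip(dls, mid))
    F_a = sum(f_ampere(dl1, d, r1 - m) for d, m in zip(dls, mid))
    print(F_g); print(F_a)   # agree to discretization accuracy

The agreement of the two sums, despite the pointwise difference of the integrands, is precisely why single-element arguments cannot discriminate between the two laws.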
In spite of these theoretical arguments, there remains controversy over the experimental observations of the railgun longitudinal force and the experimental demonstrations for the validity of the force laws \[6–11\]. In this investigation, it is pointed out that the railgun longitudinal force can be accounted for by the electric induction force, which, like the Biot-Savart magnetic force, is incorporated in the Lorentz force law. This induction force is due to a time-varying current and its direction is longitudinal to the current. This force is of the same order of magnitude as the magnetic force, but it appears to be ignored in the literature dealing with railguns. As to the Ampère force law, it has the appealing feature that it is obviously in accord with Newton’s third law of motion. This is a consequence of the fact that the Weber force law, and hence the Ampère force law, can be derived from a potential energy in which the involved velocity is the relative velocity between the two associated charged particles. In section 5 it is shown that the Riemann force law, which is derived from a potential energy where the involved velocity is also relative, can reduce to the Lorentz force law. Thus the longitudinal rail recoil force can be accounted for by a force law which is in accord both with the nowadays standard theory and with Newton’s law of action and reaction. [**2. Electric Induction Force in Railguns**]{}\ It is well known that in the presence of electric and magnetic fields, the electromagnetic force exerted on a particle of charge $q$ and velocity $\mathbf{v}$ is given by the Lorentz force law $$\mathbf{F}=q\left( \mathbf{E}+\mathbf{v}\times \mathbf{B}\right) .\eqno (3)$$ This force law and Maxwell’s equations form the fundamental equations adopted by Lorentz in the early development of electromagnetics. The Lorentz force law can be given directly in terms of the scalar and the vector potential originating from the charge and the current density, respectively. That is, $$\mathbf{F}=q\left( -\nabla \Phi -\frac{\partial \mathbf{A}}{\partial t}+\mathbf{v}\times \nabla \times \mathbf{A}\right) ,\eqno (4)$$ where $\Phi$ is the electric scalar potential and $\mathbf{A}$ is the magnetic vector potential. The term associated with the gradient of the scalar potential, with the time derivative of the vector potential, and the one with the particle velocity are known as the electrostatic force, the electric induction force, and the magnetic force, respectively. Quantitatively, the scalar and the vector potential are given explicitly in terms of the charge density $\rho$ and the current density $\mathbf{J}$ respectively by the volume integrals $$\Phi (\mathbf{r},t)=\frac{1}{4\pi \epsilon _{0}}\int \frac{\rho (\mathbf{r}^{\prime },t)}{R}dv^{\prime }\eqno (5)$$ and $$\mathbf{A}(\mathbf{r},t)=\frac{\mu _{0}}{4\pi }\int \frac{\mathbf{J}(\mathbf{r}^{\prime },t)}{R}dv^{\prime },\eqno (6)$$ where $\mu _{0}\epsilon _{0}=1/c^{2}$, $R=|\mathbf{r}-\mathbf{r}^{\prime }|$, and the time retardation $R/c$ from the source point $\mathbf{r}^{\prime }$ to the field point $\mathbf{r}$ is neglected. It is noted that, compared to the electrostatic force due to the scalar potential, both the electric induction force and the magnetic force due to the vector potential are of second order in the speed normalized to $c$.
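The magnitude of the induction term $-\partial\mathbf{A}/\partial t$ in Eq. (4) can be estimated by direct numerical quadrature of Eq. (6) over a thin-wire loop. In the sketch below (Python), the rectangular loop dimensions, the current decay rate, and the field-point offset (standing in for a nominal wire radius) are illustrative assumptions; since $\mathbf{A}$ is linear in $I$ for a fixed geometry, $-\partial\mathbf{A}/\partial t = -(dI/dt)\,\mathbf{A}(I{=}1)$:

    import numpy as np
    mu0 = 4e-7 * np.pi

    def vec_potential(r, path, I):
        # A(r) of Eq. (6) for a thin wire discretized along 'path'
        dl = np.diff(path, axis=0)
        mid = 0.5 * (path[1:] + path[:-1])
        Rn = np.linalg.norm(r - mid, axis=1)
        return mu0 / (4 * np.pi) * I * (dl / Rn[:, None]).sum(axis=0)

    # rectangular railgun loop: rails of length L along x, separation d
    L, d, n = 1.0, 0.1, 400
    xs = np.linspace(0.0, L, n)
    loop = np.vstack([
        np.c_[xs, np.zeros(n), np.zeros(n)],                       # lower rail
        np.c_[np.full(n, L), np.linspace(0, d, n), np.zeros(n)],   # armature
        np.c_[xs[::-1], np.full(n, d), np.zeros(n)],               # upper rail
        np.c_[np.zeros(n), np.linspace(d, 0, n), np.zeros(n)]])    # breech

    r = np.array([0.5, 0.005, 0.0])     # point beside the lower rail (5 mm off)
    dI_dt = -1.0e6                      # assumed current decay rate [A/s]
    E_ind = -dI_dt * vec_potential(r, loop, 1.0)   # E = -dA/dt [V/m]
    print(E_ind)                        # dominated by the x (rail) component

The induction field near a rail is indeed dominated by the component along the rail, in line with the mechanism advanced in this paper.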
In railgun accelerators, the current $I$ flowing on the loop formed by the rails, the armature, and the breech generates a magnetic vector potential $\mathbf{A}$ and a magnetic field $\mathbf{B}$. Then the current-carrying armature experiences a magnetic force, which tends to accelerate the armature along the rails. Correspondingly, there is another magnetic force exerted on the breech as a recoil force. Meanwhile, the motion of the armature results in another magnetic force on the armature itself. This force is directed along the armature and counteracts the electrostatic force, which in turn is established by an external power supply to support the current $I$. The current depends on the resultant force and hence on the speed of the armature. If the applied voltage is fixed, the current and hence the magnetic vector potential will decrease. According to the Lorentz force law, a time-varying vector potential will generate an electric induction force. The electric induction force exerted on the ions of a straight metal wire carrying a current decreasing with time is parallel to the current. Thus the net induction force exerted on each rail of a railgun will have a major component longitudinal to the rails. This force is not expected to depend significantly on the location along each rail, while the forces exerted on the respective rails are in opposite directions. As the electric induction force is proportional to the time rate of change of the current $I$, it depends on the acceleration of the armature. Another effect of the motion of the armature is to constantly introduce new current elements located on the rails just behind the armature, where the current changes abruptly from zero to $I$, as depicted in Fig.
{ "pile_set_name": "ArXiv" }
--- abstract: 'Random tessellations of the space represent a class of prototype models of heterogeneous media, which are central in several applications in physics, engineering and life sciences. In this work, we investigate the statistical properties of $d$-dimensional isotropic Poisson geometries by resorting to Monte Carlo simulation, with special emphasis on the case $d=3$. We first analyse the behaviour of the key features of these stochastic geometries as a function of the dimension $d$ and the linear size $L$ of the domain. Then, we consider the case of Poisson binary mixtures, where the polyhedra are assigned two ‘labels’ with complementary probabilities. For this latter class of random geometries, we numerically characterize the percolation threshold, the strength of the percolating cluster and the average cluster size.' author: - 'C. Larmier' - 'E. Dumonteil' - 'F. Malvagi' - 'A. Mazzolo' - 'A. Zoia' title: 'Finite-size effects and percolation properties of Poisson geometries' --- Introduction {#intro} ============ Heterogeneous and disordered media emerge in several applications in physics, engineering and life sciences. Examples are widespread and concern for instance light propagation through engineered optical materials [@NatureOptical; @PREOptical; @PREQuenched] or turbid media [@davis; @kostinski; @clouds], tracer diffusion in biological tissues [@tuchin], neutron diffusion in pebble-bed reactors [@larsen] or randomly mixed immiscible materials [@renewal], inertial confinement fusion [@zimmerman; @haran], and radiation trapping in hot atomic vapours [@NatureVapours], only to name a few. Stochastic geometries provide convenient models for representing such configurations, and have been therefore widely studied [@santalo; @torquato; @kendall; @solomon; @moran; @ren], especially in relation to heterogeneous materials [@torquato], stochastic or deterministic transport processes [@pomraning], image analysis [@serra], and stereology [@underwood]. A particularly relevant class of random media is provided by the so-called Poisson geometries [@santalo], which form a prototype process of isotropic stochastic tessellations: a portion of a $d$-dimensional space is partitioned by randomly generated $(d-1)$-dimensional hyper-planes drawn from an underlying Poisson process. The resulting random geometry (i.e., the collection of random polyhedra determined by the hyper-planes) satisfies the important property that an arbitrary line thrown within the geometry will be cut by the hyper-planes into exponentially distributed segments [@santalo]. In some sense, the exponential correlation induced by Poisson geometries represents perhaps the simplest model of ‘disordered’ random fields, whose single free parameter (i.e., the average correlation length) can be deduced from measured data [@mikhailov]. Following the pioneering works by Goudsmit [@goudsmit], Miles [@miles1964a; @miles1964b] and Richards [@richards] for $d=2$, the statistical features of the Poisson tessellations of the plane have been extensively analysed, and rigorous results have been proven for the limit case of domains having an infinite size: for a review, see, e.g., [@santalo; @moran; @ren]. An explicit construction amenable to Monte Carlo simulations for two-dimensional homogeneous and isotropic Poisson geometries of finite size has been established in [@switzer]. 
Theoretical results for infinite Poisson geometries have been later generalized to $d=3$, which is key for real-world applications but has received comparatively less attention, and to higher dimensions by several authors [@miles1969; @miles1970; @miles1971; @miles1972; @matheron; @santalo]. The two-dimensional construction for isotropic Poisson geometries has been analogously extended to three-dimensional (and in principle $d$-dimensional) domains [@serra; @mikhailov]. In this work, we will numerically investigate the statistical properties of $d$-dimensional isotropic Poisson geometries by resorting to Monte Carlo simulation, with special emphasis on the case $d=3$. Our aim is two-fold: first, we will focus on finite-size effects and on the convergence towards the limit behaviour of infinite domains. In order to assess the impact of dimensionality on the convergence patterns, comparisons to analogous numerical or exact findings obtained for $d=1$ and $d=2$ (where available) will be provided. In so doing, we will also present and discuss the simulation results for some physical observables for which exact asymptotic results are not yet known. Then, we will consider the case of ‘coloured’ Poisson geometries, where each polyhedron is assigned a label with a given probability. Such models emerge, for instance, in connection to particle transport problems, where the label defines the physical properties of each polyhedron [@pomraning; @mikhailov]. The case of random binary mixtures, where only two labels are allowed, will be examined in detail. In this context, we will numerically determine the statistical features of the coloured polyhedra, which are obtained by regrouping into clusters the neighbouring volumes sharing a common label. Attention will be paid in particular to the percolation properties of such binary mixtures for $d=3$: the percolation threshold at which a cluster will span the entire geometry, the average cluster size and the probability that a polyhedron belongs to the spanning cluster will be carefully examined and contrasted to the case of percolation on lattices [@percolation_book]. The effect of dimensionality will be again assessed by comparison with the case $d=2$, for which analogous results were numerically determined in [@lepage]. This paper is structured as follows: in Sec. \[construction\] we will recall the explicit construction for $d$-dimensional isotropic Poisson geometries, with focus on $d=3$. In Sec. \[uncolored\_geo\] we will discuss the statistical properties of Poisson geometries, and assess the convergence to the limit case of infinite domains. In Sec. \[colored\_geo\] we will extend our analysis to the case of coloured geometries and related percolation properties. Conclusions will be finally drawn in Sec. \[conclusions\]. Construction of Poisson geometries {#construction} ================================== For the sake of completeness, in this Section we will recall the strategy for the construction of Poisson geometries, spatially restricted to a $d$-dimensional box. The case $d=1$ simply stems from the Poisson point process on the line [@santalo], and will not be detailed here. The explicit construction of homogeneous and isotropic Poisson geometries for the case $d=2$ restricted to a square has been originally proposed by [@switzer], based on a Poisson point field in an auxiliary parameter space in polar coordinates.
It has been recently shown that this construction can be actually extended to $d=3$ and even higher dimensions [@mikhailov] by suitably generalizing the auxiliary parameter space approach of [@switzer] and using the results of [@serra]. In particular, such $d$-dimensional construction satisfies the homogeneity and isotropy properties [@mikhailov]. The method proposed by [@mikhailov] is based on a spatial decomposition (tessellation) of the $d$-hypersphere of radius $R$ centered at the origin by generating a random number $q$ of $(d-1)$-hyperplanes with random orientation and position. Any given $d$-dimensional subspace included in the $d$-hypersphere will therefore undergo the same tessellation procedure, restricted to the region defined by the boundaries of the subspace. The number $q$ of $(d-1)$-hyperplanes is sampled from a Poisson distribution with parameter $R \Lambda_d$, with $\Lambda_d= \lambda {\cal A}_d(1)/{\cal V}_{d-1}(1)$. Here ${\cal A}_{d}(1)=2\pi^{d/2}/\Gamma(d/2)$ denotes the surface of the $d$-dimensional unit sphere ($\Gamma(a)$ being the Gamma function [@special_functions]), ${\cal V}_{d}(1)=\pi^{d/2}/\Gamma(1+d/2)$ denotes the volume of the $d$-dimensional unit sphere, and $\lambda$ is the arbitrary density of the tessellation, carrying the units of an inverse length. This normalization of the density $\lambda$ corresponds to the convention used in [@santalo], and is such that $\lambda t$ yields the mean number of $(d-1)$-hyperplanes intersected by an arbitrary segment of length $t$. Figure: (Color online) Cutting a cube with a random plane. A cube of side $L$ is centered in $O$. The circumscribed sphere centered in $O$ has a radius $R=\sqrt{3}L/2$. The point $\mathbf M$ is defined by $\mathbf M=r {\mathbf n}$, where $r$ is uniformly sampled in the interval $[0,R]$ and ${\mathbf n}$ is a random unit vector of components ${\mathbf n}=(n_1,n_2,n_3)^T$, with $n_1=1-2\xi_1$, $n_2=\sqrt{1-n_1^2}\cos{(2 \pi \xi_2)}$ and $n_3=\sqrt{1-n_1^2}\sin{(2 \pi \xi_2)}$.
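A minimal Monte Carlo sketch of this sampling step for $d=3$ is given below (an illustration, with the cube centered at the origin and $\xi_1$, $\xi_2$ independent uniform variates, as in the figure caption above); the final loop checks the normalization convention, namely that a segment of length $t$ intersects $\lambda t$ planes on average.

```python
import numpy as np
rng = np.random.default_rng(42)

def sample_poisson_planes(L, lam):
    """Planes of an isotropic Poisson tessellation restricted to a cube
    of side L, for d = 3: here Lambda_3 = lam * A_3(1)/V_2(1) = 4*lam."""
    R = np.sqrt(3.0) * L / 2.0        # radius of the circumscribed sphere
    q = rng.poisson(R * 4.0 * lam)    # number of (d-1)-hyperplanes
    planes = []
    for _ in range(q):
        xi1, xi2 = rng.random(2)
        n1 = 1.0 - 2.0 * xi1          # isotropic unit normal n
        s = np.sqrt(1.0 - n1 * n1)
        n = np.array([n1, s * np.cos(2*np.pi*xi2), s * np.sin(2*np.pi*xi2)])
        r = R * rng.random()          # M = r n; the plane is n . x = r
        planes.append((n, r))
    return planes

# Check: the mean number of planes cut by a z-aligned segment of length t
# centered at the origin should be lam * t.
lam, L, t = 1.0, 10.0, 4.0
hits = [sum(1 for n, r in sample_poisson_planes(L, lam)
            if (n[2] * (-t/2) - r) * (n[2] * (t/2) - r) < 0)
        for _ in range(2000)]
print(np.mean(hits), "vs", lam * t)
```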
{ "pile_set_name": "ArXiv" }
--- abstract: 'We present the C++ library CppSs (C++ super-scalar), which provides efficient task-parallelism without the need for special compilers or other software. Any C++ compiler that supports C++11 is sufficient. CppSs features different directionality clauses for defining data dependencies. While the variable argument lists of the taskified functions are evaluated at compile time, the resulting task dependencies are fixed by the runtime value of the arguments and are thus analysed at runtime. With CppSs, we provide task-parallelism using merely native C++.' author: - bibliography: - 'bibliography.bib' title: 'CppSs – a C++ Library for Efficient Task Parallelism' --- ***Keywords–high-performance computing; task parallelism; parallel libraries.*** Introduction ============ Programming models implementing task-parallelism play a major role when preparing code for modern architectures with many cores per node and thousands of nodes per cluster. In high performance computing, a common approach for achieving the best parallel performance is to apply the message passing interface (MPI)[@mpi-web] for inter-node communication and a shared-memory programming model for intra-node parallelisation. This way, the communication overhead of pure MPI applications can be overcome. Shared memory models are also crucial when using single node computers as there are systems consisting of hundreds or even thousands of processing units accessing the same memory address space. These systems offer great parallelism to the developer. But utilising the processing units evenly, so that they can run efficiently, is a non-trivial task. Many scientific applications are based on processing large amounts of data. Usually, the processing of this data can be split up and some of these chunks have to be executed in a well defined order while others are independent. This is the level on which task based programming models are employed. We will call the chunks of work to be processed tasks, while the appearances in the code (e.g., if they are implemented as functions, methods or subroutines) are going to be called task instances. The dependencies between tasks can be stated explicitly by the programmer or inferred automatically by some kind of preprocessing of the code. In the case of fork-join-models (e.g. OpenMP[@openmp-web]), all tasks after a “fork” are (potentially) parallel while code after the “join” and all consecutive forks depend on them. For example, in figure \[fig:forkjoin\]a), tasks 2, 3 and 4 can run in parallel, if sufficient processing units are available. Task 5 cannot be executed before all other tasks have finished. In programming models which support nesting (e.g. Cilk[@cilk1]), the dependencies can sometimes be derived from the placement of the calls (see Figure \[fig:forkjoin\]b)). In many implementations of task based programming models, the data dependencies are specified explicitly by the programmer (e.g. SMPSs[@text-web], OMPSs[@ompss-web], StarPU[@starpu-1] and XKAAPI[@xkaapi-web]). This allows for more complex dependency graphs and therefore more possibilities to adjust the parallelisation to the code, the amount of data and the architecture. However, these implementations suffer from a number of disadvantages: - The tasks and/or task instances and their dependencies have to be marked by special directives, usually within a `#pragma` in C or using special comments in Fortran. These use keywords and syntax which is not part of the actual language and which the programmer needs to learn. 
- In order to compile the instrumented code, the programmer needs a special compiler or preprocessor. She depends on this additional software to be available on the desired platform, which is not generally the case.
- The need for special compilers also poses additional work to system administrators, who will be asked by the programmer to install the specific compiler used in the application.
- The code of the programming model implementation itself becomes more difficult to maintain, and usually at least one additional compile step is introduced when compiling the user code.

In order to avoid these inconveniences, we developed a pure C/C++ library, which allows functions to be marked as tasks and executes them asynchronously. The programmer still needs to prepare the code by looking for the parts suitable for parallelisation and separating them into functions. Also, it is still necessary to instrument the code with the CppSs API. But contrary to the implementations mentioned above, this is achieved using standard C++11 syntax instead of an “imposed” pragma language. To execute the application serially, e.g. for debugging, the programmer can define the macro `NO_CPPSS`, which bypasses the creation of additional threads and converts the task instances into normal function calls. In the following, we will illustrate the usage (Section \[sec:usage\]) and present the basic implementation of the library CppSs (Section \[sec:impl\]). Lastly, we will sum up our conclusions in Section \[sec:concl\]. (a)![Example of fork-join-parallelism. After task 1 the execution thread is forked.[]{data-label="fig:forkjoin"}](./forkjoin.png "fig:"){width=".20\textwidth"} (b)![Example of nested parallelism. Task 1 spawns tasks 2 and 5. Before task 5 is created, tasks 3 and 4 are spawned, hence the numbering.](./nested.png "fig:"){width=".22\textwidth"} CppSs - usage {#sec:usage} ============= CppSs is a library which compiles on any system with a working C++ compiler. The C++11 features necessary for CppSs are provided by the GNU compiler from version 4.6 and the Intel compiler from version 13. In order to use CppSs, the programmer only needs to include the header `CppSs.h` and link against the library `libcppss.so`. All of CppSs’ application programming interface (API) functions are declared in the namespace [CppSs]{} to avoid overlap with other libraries’ functions. In the following, the CppSs API is introduced, presenting the declaration of tasks (Section \[sec:decl\]), the initialisation and finishing of the parallel execution (Section \[sec:initfinish\]) and the setting of barriers (Section \[sec:barriers\]). Finally, we give a minimal example putting everything together in Section \[sec:example\]. Declaring Tasks {#sec:decl} --------------- Parallelisation with CppSs relies on functions with well defined directionality of their parameters. Loop parallelisation and anonymous code blocks are not supported.
To convert a function into a task, the programmer has to call the API function [MakeTask]{}, which takes the following parameters (see the listing in Figure \[lst:minimal\]):

- a pointer to the function,
- an initialiser list containing directionality specifiers for each function parameter,
- (optional) a string with the function name for debugging purposes and
- (optional) a priority level, which is ignored in the present version. Future versions will provide one or more priority queues.

```cpp
void func1(int *a1, double *a2, double *b) {
    //...
}

auto func1_task = CppSs::MakeTask(func1, {INOUT, IN, OUT}, "func1");
```

It is required that the arguments of the taskified function which are intended to cause dependencies are pointers. These can be used to access arrays, built-in types or any other data structure. However, potential overlap with other data structures is not detected. The directionality specifier must be one of [IN, OUT, INOUT]{}, [REDUCTION]{} or [PARAMETER]{}. The latter is used for arguments which are not to be interpreted as a potential dependency and must be of a built-in numerical type. The effect of each of the directionality specifiers is described in the following:

#### IN

The task treats this argument as input. It will not be executed until all task instantiations which were called before the function and which write to this argument (i.e. have an [OUT, INOUT]{} or [REDUCTION]{} specifier for the same argument value) have finished.

#### OUT

The task treats this argument as output. The content of the variable or array pointed to is (possibly) overwritten. This affects functions with an [IN]{} or [INOUT]{} specifier for the same argument value.

#### INOUT

The task intends to read from and write to this argument value. It will be dependent on the last task writing to this memory address. The following tasks reading from this memory address will be dependent on this task.

#### REDUCTION

Similar to [INOUT]{}. The task intends to read from and write to this argument value. In contrast to [INOUT]{}, the tasks with a [REDUCTION]{} clause will depend on other tasks with a [REDUCTION]{} clause on the same argument value
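For concreteness, a minimal sketch of a complete CppSs program follows. The names `CppSs::Init` and `CppSs::Finish` are placeholders for the initialisation and finishing calls of Section \[sec:initfinish\], whose exact signatures are not reproduced here, and we assume that the object returned by [MakeTask]{} is invoked like the original function, as in the minimal example of Section \[sec:example\].

```cpp
#include <cstdio>
#include "CppSs.h"

// y += (*a) * (*x); repeated calls chain INOUT dependencies on y.
void axpy(double *y, double *x, double *a) {
    *y += (*a) * (*x);
}

int main() {
    CppSs::Init(4);   // placeholder name: start the parallel execution
    auto axpy_task = CppSs::MakeTask(axpy, {INOUT, IN, IN}, "axpy");

    double y = 0.0, x = 2.0, a = 3.0;
    for (int i = 0; i < 10; ++i)
        axpy_task(&y, &x, &a);   // task instances, executed asynchronously

    CppSs::Finish();  // placeholder name: wait for all tasks, stop threads
    std::printf("y = %f\n", y);  // 60.0: the INOUT chain serializes the tasks
    return 0;
}
```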
{ "pile_set_name": "ArXiv" }
--- abstract: 'The two-dimensional one-component plasma, i.e. the system of pointlike charged particles embedded in a homogeneous neutralizing background, is studied on the surface of a cylinder of finite circumference, or equivalently in a semiperiodic strip of finite width. The model has been solved exactly by Choquard et al. at the free-fermion coupling $\Gamma=2$: in the thermodynamic limit of an infinitely long strip, the particle density turns out to be a nonconstant periodic function in space and the system exhibits long-range order of the Wigner-crystal type. The aim of this paper is to describe, qualitatively as well as quantitatively, the crystalline state for a larger set of couplings $\Gamma=2 \gamma$ ($\gamma=1,2\ldots$ a positive integer) when the plasma is mappable onto a one-dimensional fermionic theory. The fermionic formalism, supplemented by some periodicity assumptions, reveals that the density profile results from a hierarchy of Gaussians with a uniform variance but with different amplitudes. The number and spatial positions of these Gaussians within an elementary cell depend on the particular value of $\gamma$. Analytic results are supported by the exact solution at $\gamma=1$ ($\Gamma=2$) and by exact finite-size calculations at $\gamma=2,3$.' author: - 'L. [Š]{}amaj$^1$, J. Wagner$^1$, and P. Kalinay$^{1,2}$' title: ' Translation Symmetry Breaking in the One-Component Plasma on the Cylinder ' --- [**KEY WORDS:**]{} Two-dimensional jellium; semiperiodic boundary conditions; translation symmetry breaking. $^1$ Institute of Physics, Slovak Academy of Sciences, Dúbravská cesta 9, 845 11 Bratislava, Slovak Republic $^2$ Courant Institute of Mathematical Sciences, New York University, New York, NY 10012 INTRODUCTION ============ According to the laws of electrostatics, the Coulomb potential $v$ at a spatial position ${\bf r}\in R^{\nu}$ of the $\nu$-dimensional Euclidean space, induced by a unit charge at the origin ${\bf 0}$, is defined as the solution of the Poisson equation $$\label{1.1} \Delta v({\bf r}) = - s_{\nu} \delta({\bf r})$$ where $s_{\nu}$ is the surface area of the unit sphere in $R^{\nu}$. The pair interaction energy of particles with charges $q$ and $q'$, localized at the respective positions ${\bf r}$ and ${\bf r}'$, is given by $$\label{1.2} v({\bf r},q\vert {\bf r}',q') = q q' v(\vert {\bf r}-{\bf r}'\vert)$$ In one dimension (1D), $s_1=2$ and the solution of (\[1.1\]) reads $$\label{1.3} v(x) = - \vert x \vert , \quad \quad \nu=1$$ In 2D, $s_2=2\pi$ and the solution of (\[1.1\]), subject to the boundary condition $\nabla v({\bf r})\to {\bf 0}$ as $\vert {\bf r} \vert \to \infty$, reads $$\label{1.4} v({\bf r}) = - \ln \left( \frac{\vert {\bf r}\vert}{r_0} \right), \quad \quad \nu=2$$ where $r_0$ is a free length constant which fixes the zero point of the potential. The Coulomb potential defined by Eq. (\[1.1\]) exhibits in the Fourier ${\bf k}$-space the characteristic singular $\vert {\bf k}\vert^{-2}$ form. This maintains many generic properties, like the sum rules [@Martin], of “real” 3D Coulomb systems with the interaction potential $v({\bf r})=1/\vert {\bf r}\vert$, ${\bf r}\in R^3$. The present paper deals with the equilibrium properties of the classical (i.e. non-quantum) one-component plasma, sometimes called jellium, formulated in 1D or quasi-1D domains. 
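As a quick numerical aside (not part of the original text), one can check by finite differences that the logarithmic potential (1.4) indeed solves Eq. (1.1) away from the origin, where the right-hand side vanishes; the evaluation point and the step $h$ below are arbitrary choices.

```python
import numpy as np

r0, h = 1.0, 1e-3

def v(x, y):
    # 2D Coulomb potential of Eq. (1.4)
    return -np.log(np.hypot(x, y) / r0)

x, y = 0.7, -0.4  # any point away from the origin
lap = (v(x+h, y) + v(x-h, y) + v(x, y+h) + v(x, y-h) - 4*v(x, y)) / h**2
print(lap)  # ~ 0 up to O(h^2) discretization error
```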
The jellium model consists of only one mobile pointlike particle species of charge $q$ embedded in a fixed background of charge $-q$ and density $n$ such that the system as a whole is neutral. Thermodynamics of the 1D jellium has been obtained exactly a long time ago by Baxter [@Baxter]. It was proven subsequently that the 1D jellium is never in a fluid state, but forms a Wigner crystal [@Kunz; @Brascamp]. In particular, choosing the free (hard walls) boundary conditions and going to the infinite volume limit, the one-particle density becomes periodic in space with period $1/n$. This long-range order is present for all densities $n$ and all temperatures. Although the 1D jellium is not in a fluid state, it behaves as a conductor in the sense that arbitrary boundary charges are perfectly screened by means of a global transport of the particle lattice in the background, with no additional polarization in the bulk [@Lugrin]. Translation symmetry breaking was documented also on a quasi-1D system, namely the 2D one-component plasma living on the surface of a cylinder of circumference $W$ [@Choquard1]. This system is exactly solvable at the dimensionless coupling constant $\Gamma=2$ [@Choquard2]. In the thermodynamic limit of an infinitely long cylinder, the one-particle density is given by an array of equidistant identical Gaussians along the cylinder’s axis, with period $1/(nW)$. In 1D and quasi-1D Coulomb systems, the variance of the charge in an interval $I$ remains uniformly bounded as $\vert I\vert \to \infty$. The existence of periodic structures is related to this boundedness of the charge fluctuations [@Aizenman; @Jancovici1]. The present work proceeds in the study of the 2D jellium on the cylinder surface [@Choquard1; @Choquard2]. Our aim is to describe, qualitatively as well as quantitatively, the crystalline state for a larger set of couplings $\Gamma=2\gamma$ ($\gamma=1,2,\ldots$ a positive integer). At these couplings, the underlying model is shown to be mappable onto a 1D anticommuting-field theory following the method of Ref. [@Samaj1], and its density profile is expressible in terms of the corresponding field correlators. The assumption of the periodicity of the particle density in the thermodynamic limit reveals uniquely that the density profile results from a superposition of a hierarchy of nonidentical Gaussians with a uniform variance but with different amplitudes. The number and spatial positions of these Gaussians within an elementary cell depend on the particular value of $\gamma$. The analytic results for the crystalline state are supported by the exact solution at $\gamma=1$ ($\Gamma=2$) and by the exact finite-size calculations at $\gamma=2,3$. The paper is organized as follows. In Section 2, we present basic formulas for the one-component plasma living on the cylinder surface. Section 3 deals with the 1D fermionic representation of the model for the special values of the coupling constant $\Gamma=2\gamma$ ($\gamma$ a positive integer). Section 4 is devoted to a general analysis of the density profile in the thermodynamic limit; the Gaussian structure of the crystalline state is revealed for any value of $\gamma$. The analytic results are verified in Section 5 on the exact solution of the model at $\gamma=1$, and by the exact finite-size calculations at $\gamma=2$ and $\gamma=3$. THE MODEL ========= First we define the 2D one-component plasma confined to the surface of a cylinder of circumference $W$ and finite length $L$, in the canonical ensemble. 
The cylinder surface can be represented as a 2D semiperiodic rectangle domain $\Lambda$ with ${\bf r}=(x,y) \in \Lambda$ if $-L/2\le x\le L/2$ (free or hard walls boundary conditions at $x=\pm L/2$) and $-W/2\le y\le W/2$ (periodic boundary conditions at $y=\pm W/2$). It is sometimes useful to use the complex coordinates $z=x+{\rm i}y$ and ${\bar z}=x-{\rm i}y$. There are $N$ mobile pointlike particles of charge $q$ in $\Lambda$, embedded in a homogeneous background of charge density $\rho_b=-q n$ with $$\label{2.1} n = \frac{N}{L W}$$ so that the system as a whole is neutral. The interaction potential between two unit charges at ${\bf r}_1$ and ${\bf r}_2$ is given by the 2D Poisson equation (\[1.1\]) with the requirement of periodicity along the $y$-axis with period $W$. Writing the potential as a Fourier series in $y$, one gets [@Choquard1] $$\label{2.2} v({\bf r}_1,{\bf r}_2) = - \ln \left\vert 2\, {\rm sinh} \frac{\pi(z_1-z_2)}{W} \right\vert$$ At small distances $\vert {\bf r}_1-{\bf r}_2 \vert << W$, this potential behaves like the 2D Coulomb potential (\[1.4\]) with the constant $r_0 = W/(2\
{ "pile_set_name": "ArXiv" }
**HYPERCONTRACTIVE MEASURES, TALAGRAND’S INEQUALITY, AND INFLUENCES** [D. Cordero-Erausquin, M. Ledoux]{} *University of Paris 6 and University of Toulouse, France* Abstract. – [*We survey several Talagrand type inequalities and their application to influences with the tool of hypercontractivity for both discrete and continuous, and product and non-product models. The approach covers similarly by a simple interpolation the framework of geometric influences recently developed by N. Keller, E. Mossel and A. Sen. Geometric Brascamp-Lieb decompositions are also considered in this context.*]{} [**1. Introduction**]{} In the famous paper \[T\], M. Talagrand showed that for every function $f$ on the discrete cube $ X = \{-1, +1\}^N$ equipped with the uniform probability measure $\mu $, $${\rm Var}_\mu (f) = \int_X f^2 d\mu - \bigg ( \int_X f d\mu \bigg)^2 \leq C \sum_{i=1}^N { {\| D_i f\| }_2^2 \over 1+ \log \big ( {\| D_i f\| }_2 / {\| D_i f\| }_1 \big ) } \eqno (1)$$ for some numerical constant $C \geq 1$, where ${\| \cdot \| }_p$ denote the norms in $L^p(\mu )$, $1 \leq p \leq \infty$, and for every $i = 1, \ldots, N$ and every $ x = (x_1, \ldots, x_N) \in \{-1, +1\}^N$, $$D_i f(x) = f( \tau _i x) - f (x) \eqno (2)$$ with $\tau _i x = (x_1, \ldots, x_{i-1}, -x_i, x_{i+1}, \ldots , x_N)$. Up to the numerical constant, this inequality improves upon the classical spectral gap inequality (see below) $${\rm Var}_\mu (f) \leq {1 \over 4} \sum_{i=1}^N {\| D_i f\| }_2^2 \, . \eqno (3)$$ The proof of (1) is based on a hypercontractivity estimate known as the Bonami-Beckner inequality \[Bo\], \[Be\] (see below). Inequality (1) was actually devised to recover (and extend) a famous result of J. Kahn, G. Kalai and N. Linial \[K-K-L\] about influences on the cube. Namely, applying (1) to the Boolean function $f = {\bf 1}_A $ for some set $A \subset \{-1, +1\}^N$, it follows that $$\mu (A) \big ( 1 - \mu (A) \big ) \leq C \sum_{i=1}^N{ 2I_i(A) \over 1 + \log \big (1/ \sqrt {2I_i(A)} \, \big )} \eqno (4)$$ where, for each $i = 1, \ldots, N$, $$I_i (A) = \mu \big ( \{ x \in A, \tau _i x \notin A \} \big )$$ is the so-called influence of the $i$-th coordinate on the set $A$ (noticing that $\| D_i {\bf 1}_A \|^p _p = 2 I_i(A)$ for every $p \geq 1$). In particular, for a set $A$ with $ \mu (A) = a$, there is a coordinate $i$, $1 \leq i\leq N$, such that $$I_i (A) \geq {a(1-a) \over 8CN} \, \log \Big ( {N \over a(1-a)} \Big) \geq {a(1-a) \log N \over 8C N} \eqno (5)$$ which is the main result of \[K-K-L\]. (To deduce (5) from (4), assume for example that $I_i(A) \leq \big ( {a(1-a) \over N} \big) ^{1/2}$ for every $i=1, \ldots , N$, since if not the result holds. Then, from (4), there exists $i$, $1 \leq i \leq N$, such that $${a(1-a) \over CN} \leq { 2I_i(A) \over 1 + \log \big (1/\sqrt { 2I_i(A)} \, \big )} \leq { 8 I_i(A) \over 4 + \log ( N / 4 a(1-a) ) }$$ which yields (5).) Note that (5) remarkably improves by an (optimal) factor $\log N$ what would follow from the spectral gap inequality (3) applied to $ f = {\bf 1}_A$. The numerical constants like $C$ throughout this text are not sharp.
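To make the quantities in (1)–(4) concrete, here is a small exhaustive-enumeration sketch (our own illustration, not taken from \[T\]) that computes the variance, the norms $\|D_i f\|_2$, $\|D_i f\|_1$ and the right-hand side of (1) for the majority function on $N=9$ coordinates.

```python
import itertools, math
import numpy as np

N = 9
cube = np.array(list(itertools.product([-1, 1], repeat=N)))  # all 2^N points
f = np.sign(cube.sum(axis=1))        # majority function (N odd: no ties)

var = f.var()                        # Var_mu(f) under the uniform measure
weights = 1 << np.arange(N - 1, -1, -1)  # first coordinate most significant
rhs = 0.0
for i in range(N):
    flipped = cube.copy()
    flipped[:, i] *= -1              # tau_i x
    idx = (((flipped + 1) // 2) * weights).sum(axis=1)
    Df = f[idx] - f                  # D_i f(x) = f(tau_i x) - f(x), Eq. (2)
    n2 = np.sqrt((Df ** 2).mean())   # ||D_i f||_2  (always >= ||D_i f||_1)
    n1 = np.abs(Df).mean()           # ||D_i f||_1; equals 2 I_i(A) for f = 1_A
    rhs += n2 ** 2 / (1.0 + math.log(n2 / n1))

print(var, "<= C *", rhs)            # inequality (1) holds here even with C = 1
```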
The aim of this note is to amplify the hypercontractive proof of Talagrand’s original inequality (1) to various settings, including non-product spaces and continuous variables, and in particular to address versions suitable to geometric influences. It is part of the folklore indeed (cf. e.g. \[B-H\]) that an inequality similar to (1), with the same hypercontractive proof, holds for the standard Gaussian measure $\mu $ on $\rr^N$ (viewed as a product measure of one-dimensional factors), that is, for every smooth enough function $f$ on $\rr^N$ and some constant $C>0$, $${\rm Var}_\mu (f) \leq C \sum_{i=1}^N { {\| \partial_i f\| }_2^2 \over 1+ \log ( {\| \partial_i f\| }_2 / {\| \partial_i f\| }_1) } \, . \eqno (6)$$ (A proof will be given in Section 2 below.) However, the significance of the latter for influences is not clear, since its application to characteristic functions is not immediate (and requires notions of capacities). Recently, N. Keller, E. Mossel and A. Sen \[K-M-S\] introduced a notion of geometric influence of a Borel set $A$ in $\rr^N$ with respect to a measure $\mu $ (such as the Gaussian measure) simply as $ {\| \partial_i f \| }_1$ for some smooth approximation $f$ of $ {\bf 1}_A$, and proved for it the analogue of (5) (with $\sqrt {\log N}$ instead of $\log N$) for the standard Gaussian measure on $\rr^N$. It is therefore of interest to seek for suitable versions of Talagrand’s inequality involving only $\L^1$-norms $ {\| \partial_i f \| }_1$ of the partial derivatives. While the authors of \[K-M-S\] use isoperimetric properties, we show here how the common hypercontractive tool together with a simple interpolation argument may be developed similarly to reach the same conclusion. In particular, for the standard Gaussian measure $\mu $ on $\rr^N$, we will see that for every smooth enough function $f $ on $\rr^N$ such that $|f| \leq 1$, $${\rm Var}_\mu (f) \leq C \sum_{i=1}^N { {\| \partial _i f\|} _1 \big (1 + {\| \partial _i f\|} _1 \big) \over \big [ 1 + \log^+ \big (1/ {\| \partial _i f\|}_1 \big ) \big ]^{1/2} } \, . \eqno (7)$$ Applied to $f =
{ "pile_set_name": "ArXiv" }
--- author: - 'E. Tognelli' - 'S. Degl’Innocenti' - 'P. G. Prada Moroni' bibliography: - 'bibliografia\_litio.bib' date: 'Received 24 February 2012 / accepted 8 October 2012' subtitle: 'Testing theory against clusters and binary systems.' title: '$^7$Li surface abundance in pre-MS stars.' --- [The disagreement between theoretical predictions and observations for surface lithium abundance in stars is a long-standing problem, which indicates that the adopted physical treatment is still lacking in some points. However, thanks to the recent improvements in both models and observations, it is interesting to analyse the situation to evaluate present uncertainties.]{} [We present a consistent and quantitative analysis of the theoretical uncertainties affecting surface lithium abundance in the current generation of models.]{} [By means of an up-to-date and well tested evolutionary code, `FRANEC`, theoretical errors on surface $^7$Li abundance predictions, during the pre-main sequence (pre-MS) and main sequence (MS) phases, are discussed in detail. Then, the predicted surface $^7$Li abundance was tested against observational data for five open clusters, namely Ic 2602, $\alpha$ Per, Blanco1, Pleiades, and Ngc 2516, and for four detached double-lined eclipsing binary systems. Stellar models for the aforementioned clusters were computed by adopting suitable chemical composition, age, and mixing length parameter for MS stars determined from the analysis of the colour-magnitude diagram of each cluster. We restricted our analysis to young clusters, to avoid additional uncertainty sources such as diffusion and/or radiative levitation efficiency.]{} [We confirm the disagreement, within present uncertainties, between theoretical predictions and $^7$Li observations for standard models. However, we notice that a satisfactory agreement with observations for $^7$Li abundance in both young open clusters and binary systems can be achieved if a lower convection efficiency is adopted during the pre-MS phase with respect to the MS one.]{} Introduction ============ In the last two decades, a large number of $^7$Li observations have been collected for isolated stars, binary systems, and open clusters from the pre-MS to the late MS phases [see e.g. Table 1 and references therein in @jeffries00; @sestito05], showing that $^7$Li depletion is a strong function of both mass and age. A detailed and homogeneous analysis has been carried out by @sestito05, who determined surface $^7$Li abundance for a large sample of open clusters in a wide range of ages and chemical compositions, supplying a useful tool for accurately analysing the temporal evolution of surface $^7$Li abundance. Open clusters and detached double-lined eclipsing binaries (EBs) are ideal systems for testing the validity of stellar evolutionary models, since their members have the same chemical composition and age. As a consequence, they allow the different lithium depletion pattern to be investigated as a function of the stellar mass once the age and the chemical composition have been kept fixed. Besides the large amount of $^7$Li data available, a strong effort in theoretical modelling has been made in the past years, and many different theoretical scenarios have been proposed to explain the observed surface $^7$Li abundance and its temporal evolution [see e.g. 
the reviews in @deliyannis00; @pinsonneault00; @charbonnel00], both in the framework of *standard* and *non-standard models* [see e.g., @pinsonneault90; @pinsonneault94; @chaboyer95; @dantona97; @ventura98; @piau02; @dantona03; @montalban06]. *Standard models* assume a spherically symmetric structure and convection and diffusion are the only processes that mix surface elements with the interior. Although the validity of such models in reproducing the main evolutionary parameters has been largely tested against observations, they fail to reproduce the observed $^7$Li abundances. Indeed, standard models show a $^7$Li depletion during the pre-MS phase that is much stronger than observed, while the opposite occurs in the MS phase [see e.g., @jeffries00]. Moreover, they cannot fully account for the formation of the so-called lithium dip for MS stars in the temperature range $6000\,\mathrm{K}\la T_\mathrm{eff}\la 7000\,\mathrm{K}$ [@boesgaard86], see e.g. @richer93. The comparison between theory and observation is improved, in some cases, by introducing *non-standard* processes into the models, e.g. rotation, gravity waves, magnetic fields, and accretion/mass loss [@pinsonneault90; @dantona93; @chaboyer95; @talon98; @ventura98; @mendes99; @siess99; @dantona00; @charbonnel05; @baraffe10; @vick10]. All these processes produce structural changes, with a related strong effect on lithium abundance [see e.g. the reviews by @charbonnel00; @talon08; @talon10]. In particular, models with rotation-induced mixing plus gravity waves are able to reproduce $^7$Li the depletion during the MS and post MS phases [i.e. the lithium dip feature and red-giant branch abundances, see e.g., @talon10; @pace12]. A crucial point in stellar modelling, both for standard and non-standard models, concerns the treatment of the over-adiabatic convection efficiency in the stellar envelope, which is an important issue for lithium depletion, too. In evolutionary codes, the most widely used convection treatment is the simplified scheme of the *mixing length theory* [MLT, @bohm58]. In this formalism, convection efficiency depends on a free parameter to be calibrated. It is a common approach to calibrate it by reproducing the solar radius. This choice usually gives good agreement between models and photometric data; however, to reproduce the effective temperature of stars with different masses in different evolutionary phases, an ad hoc value of the mixing length parameter should be adopted, as suggested by observations [see e.g., @chieffi95; @morel00; @ferraro06; @yildiz07; @gennaro11; @piau11; @bonaca12] and detailed hydrodynamical simulations [see e.g., @ludwig99; @trampedach07]. The main goal of this paper is to re-examine the old lithium problem in light of the improvements in the adopted physical inputs and observational data and to perform a quantitative analysis of the uncertainties affecting surface lithium depletion during the pre-MS phase. The aim is to compute, by means of updated models, theoretical error bars to be applied to the comparison between predictions and data available for stars in young open cluster and binary systems, as partially done in earlier other works [see e.g., @dantona84; @swenson94; @ventura98; @piau02; @sestito06]. The paper is structured in the following way. Section \[sec:data\] presents the adopted $^7$Li data sample for the selected open clusters, followed by a brief description of present models (Sect. \[sec:models\]). In Sect. 
\[sec:error\] we evaluate the main theoretical uncertainties affecting surface lithium abundance. Finally, in Sect. \[sec:results\], the comparison between predicted and observed lithium abundances for both young open clusters and binary systems is discussed. Lithium data {#sec:data} ============ Surface $^7$Li abundances for young open clusters are taken from the homogeneous database made available by @sestito05. Here, we focus our analysis on clusters younger than about 150 - 200 Myr, in order to avoid MS depletion effects [see e.g., @sestito05], with different metallicities for which a significant number of data in a wide range of effective temperatures are available. The clusters that satisfy these criteria are, Ic 2602, $\alpha$ Per, Blanco 1, Pleiades, and Ngc 2516. Lithium abundances for young double-lined eclipsing binaries are not present in the database by @sestito05, but they have been measured by different authors, as we discuss in Sect. \[sec:binary\]. Theoretical stellar models {#sec:models} ========================== Present stellar models were computed with an updated version of the `FRANEC` evolutionary code [@deglinnocenti08], which adopts the most recent input physics, as described in detail by @tognelli11. The initial deuterium mass fraction abundance is fixed to $X_{\mathrm{D}} = 2\times 10^{-5}$ as a representative value for population I stars [see e.g. @geiss98; @linsky06; @steigman07]. The logarithmic initial lithium abundance is assumed to be $\epsilon_{\mathrm{Li}} = 3.2 \pm 0.2$ [see e.g., @jeffries06; @lodders09], which approximatively corresponds to $X_{^7\mathrm{Li}} \approx 7\times 10^{-9}$ - $1\times 10^{-8}$ in dependence on the metallicity adopted for the models[^1]. Convection is treated according to the mixing length theory, using the same formalism presented in @cox. The adopted reference value of mixing length parameter is $\alpha = 1.0$ (as suggested by present comparison with pre-
{ "pile_set_name": "ArXiv" }
--- abstract: 'In recent years we have seen that the Grover search algorithm [@grover-search], by using quantum parallelism, has revolutionized the solution of a huge class of NP problems in comparison to classical systems. In this work we explore the idea of extending the Grover search algorithm to approximate algorithms. Here we analyze the applicability of Grover search to process an unstructured database with a dynamic selection function, as compared to the static selection function in the original work [@grover-search]. This allows us to extend the application of Grover search to the field of randomized search algorithms. We further use the dynamic Grover search algorithm to define the goals for a recommendation system, and define the algorithm for a recommendation system over a binomial similarity distribution space, giving us a quadratic speedup over traditional unstructured recommendation systems. Finally, we see how the dynamic Grover search can be used to attack a wide range of optimization problems, improving the complexity over existing optimization algorithms.' author: - 'Indranil Chakrabarty, Shahzor Khan and Vanshdeep Singh' title: 'Dynamic Grover Search: Applications in Recommendation systems and Optimization problems' --- I. Introduction =============== The promise of quantum computation is to enable new algorithms for physical problems that would require exorbitant physical resources for their solution on a classical computer. There are two broad classes of algorithms. The first class is built upon *Shor’s quantum Fourier transform* [@shor] and includes remarkable algorithms for solving the discrete logarithm problem, providing a striking exponential speedup over the best known classical algorithms. The second class of algorithms is based upon Grover’s algorithm for performing *quantum searching* [@grover-search]. Apart from these two broad lines of division, the Deutsch algorithm, based on *quantum parallelism/interference* [@Deutsch], is another example which has no classical analogue; it provides a remarkable speedup over the best possible classical algorithms. With the introduction of quantum algorithms, questions were raised about proving the complexity superiority of the quantum model over the classical model [@feynman].\ Grover’s search algorithm was one of the first algorithms that opened up a class of problems solvable by quantum computation [@grover-framework] with a quadratic speedup over classical systems. Classical unstructured search, or processing of a search space, is essentially linear, as we have to process each item; using randomized search functions this can at best be optimized to $N/2$ states. In 1996, L. K. Grover gave the Grover search algorithm to search through a search space in $\mathcal{O}(\sqrt{N})$ [@grover-search]. The algorithm leverages the computational power of superposed quantum states. In its initialization step an equiprobable superposition state is prepared from the entire search space. In each iteration of the algorithm the coefficients of selected states, based on a selection function, are increased and those of the unselected states are decreased by inversion about the mean. This method increases the coefficients of selected states quadratically, and in $\mathcal{O}(\sqrt{N})$ steps we get the selected states with high probability.
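Since each Grover iteration acts on the amplitude vector by a sign flip on the selected states followed by an inversion about the mean, small instances are easy to simulate classically. The following state-vector sketch (an illustration, not taken from the original papers) runs $\lfloor (\pi/4)\sqrt{N/M} \rfloor$ iterations for a single marked item.

```python
import numpy as np

def grover(n_qubits, marked):
    """Simulate Grover search: oracle sign flip + inversion about the mean."""
    N, M = 2 ** n_qubits, len(marked)
    psi = np.full(N, 1.0 / np.sqrt(N))       # equiprobable superposition
    for _ in range(int(np.pi / 4 * np.sqrt(N / M))):
        psi[marked] *= -1.0                  # oracle: flip selected states
        psi = 2.0 * psi.mean() - psi         # inversion about the mean
    return psi

psi = grover(10, marked=[321])               # N = 1024, one solution
print(abs(psi[321]) ** 2)                    # ~0.999 success probability
```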
The unstructured search approach can be used to compute any NP problem by iterating over the search space.\ From an application perspective, quantum search algorithms have several uses: they can extract statistics, such as the minimal element of an unordered data set, more quickly than is possible on a classical computer [@min]. They have been extended to various problems such as finding measures of central tendency like the mean [@mean] and the median [@median]. They can speed up algorithms for some problems in NP, specifically those for which a straightforward search for a solution is the best algorithm known. Finally, they can speed up the search for keys to cryptographic systems such as the widely used Data Encryption Standard (DES).\ In the field of e-commerce, a recommendation system collects information on the preferences of users for a set of items. The information can be acquired explicitly (by collecting users’ ratings) or implicitly (by monitoring users’ behavior) [@lee; @nun; @choi]. It makes use of different sources of information for providing the user with predictions and recommended items, and tries to balance factors like accuracy, novelty, dispersity and stability of the recommended items. Collaborative filtering plays an important role in recommendation, although it is often combined with other filtering techniques like content-based and knowledge-based filtering. Another important approach in the recommending process is the k-nearest-neighbour approach, in which we find the k nearest neighbours of the search item. Recently, implementations of recommendation systems have increased and have been applied in diverse areas [@park], recommending topics such as music, television and books; in e-learning and e-commerce; and in applications to markets and web search [@car; @serr; @zai; @huang; @castro; @costa; @Mcnally]. Mostly these recommendations are done on structured classical databases.\ Solving NP problems [@NPProblems] with Grover search has been explored in general [@grover-framework]. By extension, optimization problems have been used to find solutions to various specific applications. The class of NP optimization problems (NPO) [@NPO] consists of combinatorial optimization problems to be solved under specific conditions. In this work we replace the static selection function of the Grover search with a dynamic selection function. This allows us to extend the application of Grover search to the field of randomized search algorithms. One such application is in recommendation systems, for which we also define the goals. Finally, we define the algorithm for a recommendation system over a binomial similarity distribution space, giving us a quadratic speedup over traditional unstructured recommendation systems. Another application is in finding an optimal search state for a given NPO problem. We note that Durr and Hoyer’s work [@min] also performs optimization, in $O(\log (N) \sqrt{N})$; however, the dynamic Grover search can achieve the same in $O(\sqrt{N})$.\ In section II we give a brief introduction to Grover search using the standard static selection function. In section III we introduce our model of dynamic Grover search, defining the algorithm over the binomial distribution space and comparing it with traditional unstructured recommendation systems.
In the last section IV we provide an application of this dynamic grover search in recommendation systems and optimization algorithms.\ II. Grover Search Algorithm =========================== In this section we briefly describe Grover search algorithm as a standard searching procedure and elaborate on the fact that how it expedites the searching process as compared to a classical search in an unstructured database[@nielsen].\ **Oracle:** Suppose we wish to search for a given element through an unstructured search space consisting of $N$ elements. For the sake of simplicity, instead of directly searching a given element we assign indices to each of these elements which are just numbers in the range of $0$ to $N-1$. Without loss of generality we assume $N=2^n$ and we also assume that there are exactly $M$ solutions ($1\leq M \leq N$) to this search problem. Further, we define a selection function $f$ which takes an input state $\ket{x}$, where the index $x$ lies in the range $0$ to $N-1$. It assigns value $1$ when the state is a solution to the search problem and the value $0$ otherwise, $$\begin{aligned} f = \begin{cases} 0 & \text{if $\ket{x}$ is not selected}, \\ 1 & \text{if $\ket{x}$ is selected}. \end{cases}\end{aligned}$$ Here we are provided with quantum oracle-black box which precisely is a unitary operator $O$ and its action on the computational basis is given by, $$\ket{x}\ket{q} \rightarrow \ket{x} \ket{q \oplus f(x)}.$$ In the above equation we have $\ket{x}$ as the index register. The symbol $\oplus$ denotes addition modulo $2$, and the oracle qubit $\ket{q}$ gets flipped if we have $f(\ket{x})=1$. It remains unchanged otherwise. This helps us to check whether $\ket{x}$ is a solution to the search problem or not as this is equivalent of checking the oracle qubit is flipped or not.\ **Algorithm:** The algorithm starts by creating a superposition of $N$ quantum states by applying a Hadamard transformation on $\ket{0}^{\otimes n}$. $$\ket{\psi} =\frac{1}{\sqrt{N}}\sum _{x=0}^{N-1}\ket{x}$$ The algorithm then proceeds to repeated application of a quantum subroutine known as the Grover iteration or as the Grover operator denoted by $G$. The grover subroutine consists of following steps:\ **Procedure: Grover Subroutine** - Apply the Oracle $O$. - Perform inversion about mean. **Algorithm: Grover Search** - Initialize the system, such that there is same amplitude for all the N states - Apply Grover Iteration $O(\sqrt{N})$ times - Sample the resulting state, where we get the expected state with probability greater than 1/2 **Geometry:** The entire process of Grover iteration can be considered as a rotation in the two dimensional space, where 1 dimension represents the solution space, and the other represents the remaining search space. These normalized states are written as, $$\begin{aligned
{ "pile_set_name": "ArXiv" }
--- abstract: '[**Abstract:**]{} We have studied some thermodynamics features of Kiselev black hole and dilaton black hole. Specifically we consider Reissner Nordström black hole surrounded by radiation and dust, and Schwarzschild black hole surrounded by quintessence, as special cases of Kiselev solution. We have calculated the products of black hole thermodynamics parameters, including surface gravities, surface temperatures, Komar energies, areas, entropies, horizon radii and the irreducible masses, at the inner and outer horizons. The products of surface gravities, surface temperature product and product of Komar energies at the horizons are not universal quantities. For Kiselev solutions products of areas and entropies at both the horizons are independent of mass of the black holes (except for Schwarzschild black hole surrounded by quintessence). For charged dilaton black hole, all the products vanish. Using the Smarr formula approach, the first law of thermodynamics is also verified for Kiselev solutions. The phase transitions in the heat capacities are also observed.' author: - Bushra Majeed - Mubasher Jamil - Parthapratim Pradhan title: '**Thermodynamic Relations for Kiselev and Dilaton Black Hole** ' --- Introduction ============ Black holes are the most exotic objects in physics and their connection with thermodynamics is even more surprising. Just like other thermodynamical systems, black holes have physical temperature and entropy. The analogy between the black hole thermodynamics and the four laws of thermodynamics was first proposed in 1970’s [@ch9670; @ch5271; @pe7771; @be3373] and the temperature ($T$) and entropy $(S)$ are analogous of the surface gravity $(\kappa)$ and area $(A)$ of the black hole event horizon respectively. Laws of black hole thermodynamics are studied in literature [@ja6111]. In [@ca0812] universal properties of black holes and the first law of black hole inner mechanics is discussed. In [@wa3114] horizon entropy sums in A(dS) spacetimes is studied. In [@cu6279] authors have discussed the spin entropy of a rotating black hole. The study of phase transition in black holes is a fascinating topic [@pa9590; @ne1865]. If a black hole has a Cauchy horizon (${\cal H}^-$) and an event horizon(${\cal H}^+$) then it is quite interesting to study different quantities like the product of areas of a black hole on these horizons. Products of thermal quantities of the rotating black holes [@cv0111; @pr8714; @prarx14; @bushra] and area products for stationary black hole horizons [@vi1413] have been studied in literature. Calculations show that sometimes these products do not depend on the ADM (Arnowitt-Deser-Misner) mass parameter but only on the charge and angular momentum. The relations that are independent of the black hole mass are of particular interest because these may turn out to be “universal” and hold for more general solutions with nontrivial surroundings too. Kiselev [@ki8703] considered Einstein’s field equation surrounded by quintessential matter and proposed new solutions, dependent on state parameter $\omega$ of the matter surrounding black hole. Recently some dynamical aspects, i.e. collision between particles and their escape energies after collision around Kiselev black hole [@ja2415] have been studied. In this work we consider the solution of Reissner Nordström(RN) black hole surrounded by energy-matter, derived by Kiselev and study the important thermodynamic features of black hole at both the horizons of the black hole. 
We also consider solution of Schwarzschild black hole surrounded by energy-matter and analyzed its different thermodynamic products. Furthermore, we have considered the charged dilaton black hole and computed its various thermodynamic products. The plan of the work is as follows: In section (II), we discuss the basic aspects of RN black hole surrounded by radiation. Results show that the products of area and entropy calculated at $\mathcal{H}^{\pm}$ are independent of mass of the black hole, while the other products are mass dependent. In subsections of (II), the first law of thermodynamics is obtained by the Smarr formula approach, later the rest mass is written in terms of irreducible mass of the black hole, also the phase transition in heat capacity of the black hole is discussed. Section (III) consists of discussions on thermodynamic aspects of RN black hole surrounded by dust. In section (IV), thermodynamics of the Schwarzschild black hole surrounded by quintessence is studied. In section (V) we have computed the thermodynamic product relations for dilaton black hole. All the work is concluded in the last section. We set $G = \hbar = c = 1$, throughout the calculations. RN Black Hole Surrounded by Radiation ===================================== The spherically symmetric and static solutions for Einstein’s field equations, surrounded by energy-matter, as investigated by Kiselev [@ki8703] can be written as: $$\begin{aligned} \label{M1} ds^2&= &-f(r)dt^2 + \frac{1}{f(r)}dr^2+r^2(d\theta^2+ \sin^2\theta d\phi^2),\end{aligned}$$ where $$\label{M01} f(r)= 1-\frac{2{\cal M}}{r}+ \frac{Q^2}{r^2} -\frac{\sigma}{r^{3\omega+1}},$$ here ${\cal M}$ and $Q$ are the mass and electric charge, of the black hole respectively, $\sigma$ is the normalization parameter and $\omega$ is the state parameter of the matter around black hole. We consider the cases when RN black hole is surrounded by radiation ($\omega= 1/3$) and dust ($\omega= 0$). For $\omega= 1/3$ two horizons of the black hole are obtained from: $$1-\frac{2{\cal M}}{r}+ \frac{Q^2}{r^2} -\frac{\sigma_r}{r^2}=0,$$i.e. $$\label{M3} r_{\pm}={\cal M}\pm \sqrt{{\cal M}^2 -Q^2 + \sigma_r}.$$ Here $\sigma_r$ denotes the normalization parameter for radiation case, with dimensions, $[\sigma_r]=L^2$, where $L$ denotes length, $r_+$ is the outer horizon named as event horizon ${\cal H}^+$ and $r_-$ is the inner horizon known as Cauchy horizon ${\cal H}^{-}$, ${\cal H}^{\pm}$ are the null surfaces of infinite blue-shift and infinite red-shift respectively [@ch83]. Using Eq. (\[M3\]) one can obtain, $$\label{M4} r_+ r_- = Q^2 -\sigma_r,$$ so product of horizons is independent of mass of the black hole but depends on electric charge and $\sigma_r$. 
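These products are straightforward to verify symbolically; the following sketch (an illustrative aside using sympy) confirms the mass-independence of $r_+ r_-$ and of the product of the horizon areas computed next, as well as the mass-dependence of the product of the surface gravities, cf. Eq. (\[M11\]) below.

```python
import sympy as sp

M, Q, sig = sp.symbols('M Q sigma_r', positive=True)
s = sp.sqrt(M**2 - Q**2 + sig)
rp, rm = M + s, M - s                        # event and Cauchy horizons

print(sp.expand(rp * rm))                    # Q**2 - sigma_r: mass drops out

A = lambda r: 4 * sp.pi * (2*M*r - Q**2 + sig)    # horizon areas, Eq. (M5)
print(sp.expand(A(rp) * A(rm)))              # 16 pi^2 (Q**2 - sigma_r)**2

kappa = lambda r: (r - M) / (2*M*r - Q**2 + sig)  # surface gravities
print(sp.simplify(kappa(rp) * kappa(rm)))    # still depends on M, Eq. (M11)
```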
Areas of both horizons of the black hole are: $$\label{M5} \mathcal{A_{\pm}}= \int^{2\pi}_0\int^\pi_0 \sqrt{g_{\theta\theta} g_{\phi \phi}}d\theta d\phi=4 \pi r_{\pm}^2= 4\pi(2{\cal M}r_{\pm} -Q^2 +\sigma_r).$$ The corresponding semi-classical Bakenstein-Hawking entropy at ${\cal H}^{\pm}$ is [@ha4471]: $$\begin{aligned} \label{M7} \mathcal{S}_{\pm}&=&\frac{\mathcal{A}_{\pm}}{4}\nonumber\\ &= &\pi(2{\cal M}r_{\pm} -Q^2 +\sigma_r).\end{aligned}$$ Hawking temperature of ${\cal H}^{\pm}$ is determined by using the formula $$\begin{aligned} \label{M9} T_{\pm}&=&\frac{1}{4 \pi}\frac{df}{dr}\mid_{r=r_{\pm}}\nonumber\\ &=&\frac{1}{4 \pi}\Big[\frac{r_{\pm}^2-Q^2+\sigma_r}{r_{\pm}^3}\Big] \nonumber\\ &=&\frac{r_{\pm}- {\cal M}}{2\pi(2{\cal M}r_{\pm} -Q^2 +\sigma_r)}.\end{aligned}$$ Surface gravity is the force required to an observer at infinity, for holding a particle in place, which is equal to the acceleration at horizon due to gravity of a black hole [@po02]: $$\label{M8}\kappa_{\pm}=\frac{1}{2}\frac{df}{dr}\mid_{r=r_{\pm}}=2\pi T_{\pm},$$ $$\kappa_{\pm}=\frac{r_{\pm}- {\cal M}}{(2{\cal M}r_{\pm} -Q^2 +\sigma_r)}.$$ The Komar energy of the black hole is defined as [@ko3459] $$E_{\pm}= 2 \mathcal{S}_{\pm} T_{\pm}={ r_{\pm}-{\cal M}}.\label{M10}$$ Products of surface gravities and surface temperatures at ${\cal H}^{\pm}$ are $$\kappa_+\kappa_-=4\pi^2 T_+ T_-= \frac{Q^2-\sigma_r-{\cal M}^2}{(Q^2-\sigma_r)^2}.\label{M11}$$ The Komar energies at ${\cal H}^{\pm}$ results in $$\label{M13} E_+E_-= {Q^
--- abstract: 'The density matrix of a graph is the combinatorial laplacian matrix of a graph normalized to have unit trace. In this paper we generalize the entanglement properties of mixed density matrices from combinatorial laplacian matrices of graphs discussed in Braunstein [*et al.*]{} Annals of Combinatorics, [**10**]{}(2006)291 to tripartite states. Then we prove that the degree condition defined in Braunstein [*et al.*]{} Phys. Rev. A [**73**]{}, (2006)012320 is sufficient and necessary for the tripartite separability of the density matrix of a nearest point graph.' author: - | Zhen Wang and Zhixi Wang\ [Department of Mathematics]{}\ [Capital Normal University, Beijing 100037, China]{}\ [wangzhen061213@sina.com,  wangzhx@mail.cnu.edu.cn]{} title: The tripartite separability of density matrices of graphs --- Introduction ============ Quantum entanglement is one of the most striking features of the quantum formalism$^{\tiny\cite{peres1}}$. Moreover, quantum entangled states may be used as basic resources in quantum information processing and communication, such as quantum cryptography$^{\tiny\cite{ekert}}$, quantum parallelism$^{\tiny\cite{deutsch}}$, quantum dense coding$^{\tiny\cite{bennett1,mattle}}$ and quantum teleportation$^{\tiny\cite{bennett2,bouwmeester}}$. So testing whether a given state of a composite quantum system is separable or entangled is in general very important. Recently, normalized laplacian matrices of graphs considered as density matrices have been studied in quantum mechanics. One can recall the definition of density matrices of graphs from [@sam1]. Ali Saif M. Hassan and Pramod Joag$^{\tiny\cite{Ali}}$ studied related issues like the classification of pure and mixed states, von Neumann entropy, separability of multipartite quantum states and quantum operations in terms of the graphs associated with quantum states. Chai Wah Wu$^{\tiny\cite{chai}}$ showed that the Peres-Horodecki positive partial transpose condition is necessary and sufficient for separability in $C^2\otimes C^q$. Braunstein [*et al.*]{}$^{\tiny\cite{sam2}}$ proved that the degree condition is necessary for the separability of density matrices of any graph and is sufficient for the separability of density matrices of nearest point graphs and perfect matching graphs. Ali Saif M. Hassan and Pramod Joag show in $\cite{Ali2}$ that the degree condition is also a necessary and sufficient condition for the separability of $m$-partite pure quantum states living in a real or complex Hilbert space. Hildebrand [*et al.*]{}$^{\tiny\cite{roland}}$ verified that the degree condition is equivalent to the PPT-criterion. They also considered the concurrence of density matrices of graphs and pointed out that there are examples on four vertices whose concurrence is a rational number. The paper is divided into three sections. In section 2, we recall the definition of the density matrix of a graph, define the tensor product of three graphs, and reconsider the tripartite entanglement properties of the density matrices of graphs introduced in [@sam1]. In section 3, we first define the partially transposed graph and then show that the degree condition introduced in [@sam2] is also a sufficient and necessary condition for the tripartite separability of the density matrices of nearest point graphs.
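Since the Peres-Horodecki criterion recurs throughout this line of work, a minimal numerical version of the test may be helpful; the `partial_transpose` helper and the Werner-state example below are ours, purely for illustration:

```python
import numpy as np

def partial_transpose(rho, dims, sys):
    """Transpose subsystem `sys` of a state on a tensor product of subsystems."""
    n = len(dims)
    r = rho.reshape(dims + dims)          # one axis per ket/bra factor
    r = r.swapaxes(sys, n + sys)          # transpose the chosen factor
    return r.reshape(rho.shape)

# Werner state on C^2 x C^2, entangled for w > 1/3
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)      # Bell state
w = 0.5
rho = w * np.outer(psi, psi) + (1 - w) * np.eye(4) / 4
print(np.linalg.eigvalsh(partial_transpose(rho, [2, 2], 0)).min())
# a negative eigenvalue certifies entanglement (PPT is violated)
```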
The tripartite entanglement properties of the density matrices of graphs ======================================================================== Recall from [@sam1] that a [*graph*]{} $G=(V(G),\ E(G))$ is defined as follows: $V(G)=\{v_1,\ v_2,\ \cdots,\ v_n\}$ is a non-empty and finite set called [*vertices*]{}; $E(G)=\{\{v_i,\ v_j\}:\ v_i,\ v_j\in V\}$ is a non-empty set of unordered pairs of vertices called [*edges*]{}. An edge of the form $\{v_i,\ v_i\}$ is called a [*loop*]{}. We assume that $E(G)$ does not contain any loops. A graph $G$ is said to be on $n$ vertices if $|V(G)|=n$. The [*adjacency matrix*]{} of a graph $G$ on $n$ vertices is an $n\times n$ matrix, denoted by $M(G)$, with rows and columns labeled by the vertices of $G$ and $ij$-th entry defined as: $$[M(G)]_{i,j}=\left\{ \begin{array}{ll} 1, & \hbox{if $(v_{i},\ v_{j})\in E(G)$;}\\ 0, & \hbox{if $(v_{i},\ v_{j})\notin E(G)$.} \end{array} \right.$$ If $\{v_i,\ v_j\}\in E(G)$, the two distinct vertices $v_i$ and $v_j$ are said to be [*adjacent*]{}. The [*degree*]{} of a vertex $v_i\in V(G)$ is the number of edges adjacent to $v_i$; we denote it by $d_G(v_i)$. $d_G=\displaystyle\sum_{i=1}^nd_G(v_i)$ is called the [*degree sum*]{}. Notice that $d_G=2|E(G)|.$ The [*degree matrix*]{} of $G$ is an $n\times n$ matrix, denoted as $\Delta(G)$, with $ij$-th entry defined as: $$[\Delta(G)]_{i,\ j}=\left\{ \begin{array}{ll} d_{G}(v_{i}), & \hbox{if $i=j$;\ }\\ 0, & \hbox{if $i\neq j$.\ } \end{array} \right.$$ The [*combinatorial laplacian matrix*]{} of a graph $G$ is the symmetric positive semidefinite matrix $$L(G)=\Delta(G)-M(G).$$ The [*density matrix*]{} of a graph $G$ is the matrix $$\rho(G)=\frac{1}{d_{G}}L(G).$$ Recall that a graph is called [*complete*]{}$^{\tiny\cite{gtm207}}$ if every pair of vertices is adjacent, and the [*complete graph*]{} on $n$ vertices is denoted by $K_n$. Obviously, $\rho(K_n)=\frac{1}{n(n-1)}(nI_n-J_n),$ where $I_n$ and $J_n$ are the $n\times n$ identity matrix and the $n\times n$ all-ones matrix, respectively. A [*star graph*]{} on $n$ vertices $\alpha_1,\ \alpha_2,\ \cdots,\ \alpha_n$, denoted by $K_{1,n-1}$, is the graph whose set of edges is $\{\{\alpha_1,\ \alpha_i\}:\ i=2,\ 3,\ \cdots,\ n\}$; we have $$\rho(K_{1,n-1}) =\frac{1}{2(n-1)} \left( \begin{array}{ccccc} n-1&-1&-1&\cdots&-1\\[3mm] -1&1&&&\\[3mm] -1&&1&&\\[3mm] \vdots&&&\ddots&\\[3mm] -1&&&&1 \end{array} \right).$$ Let $G$ be a graph which has only one edge. Then the density matrix of $G$ is pure. The density matrix of a graph is a uniform mixture of pure density matrices; that is, for a graph $G$ on $n$ vertices $v_1,\ v_2,\ \cdots,\ v_n,$ having $s$ edges $\{v_{i_1},\ v_{j_1}\},\ \{v_{i_2},\ v_{j_2}\},\ \cdots,\ \{v_{i_s},\ v_{j_s}\},$ where $1\leq i_1,\ j_1,\ i_2,\ j_2,\ \cdots,\ i_s,\ j_s\leq n,$ $$\rho(G)=\displaystyle\frac{1}{s}\sum_{k=1}^{s}\rho(H_{i_kj_k}),$$ here $H_{i_kj_k}$ is the factor of $G$ such that $$[M(H_{i_kj_k})]_{u,\ w}=\left\{ \begin{array}{ll} 1, & \hbox{if}\ u=i_k\ \hbox{and}\ w=j_k\ \hbox{or}\ w=i_k\ \hbox{and}\ u=j_k;\\ 0, & \hbox{otherwise.} \end{array} \right.$$ It is obvious that $\rho(H_{i_kj_k})$ is pure. Before we discuss the tripartite entanglement properties of the density matrices of graphs, we first briefly recall the definition of tripartite separability: [**Definition 1**]{} The state $\rho$ acting on ${\cal H}={\cal H_A}\otimes{\cal H_B}\otimes{\cal H_C}$ is called [*tripartite separable*]{} if it can be written in the form $$\rho=\sum_{i}p_i\,\rho_i^{A}\otimes\rho_i^{B}\otimes\rho_i^{C},$$ where the $\rho_i^{A}$, $\rho_i^{B}$, $\rho_i^{C}$ are density matrices on ${\cal H_A}$, ${\cal H_B}$, ${\cal H_C}$, respectively, and $p_i\geq 0$ with $\sum_i p_i=1$.
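The graph-theoretic definitions above translate directly into a few lines of numpy; this sketch (ours) builds $\rho(G)$ for the star graph $K_{1,4}$ and checks that it is a valid density matrix matching the displayed $\rho(K_{1,n-1})$ with $n=5$:

```python
import numpy as np

def density_matrix(n, edges):
    """rho(G) = L(G)/d_G with L(G) = Delta(G) - M(G) and d_G = 2|E(G)|."""
    M = np.zeros((n, n))
    for i, j in edges:
        M[i, j] = M[j, i] = 1.0
    L = np.diag(M.sum(axis=1)) - M
    return L / M.sum()            # M.sum() = sum of degrees = d_G

rho = density_matrix(5, [(0, i) for i in range(1, 5)])   # star K_{1,4}
print(np.isclose(np.trace(rho), 1.0))                    # unit trace
print(np.linalg.eigvalsh(rho).min() >= -1e-12)           # positive semidefinite
print(np.allclose(rho * 8, [[4,-1,-1,-1,-1], [-1,1,0,0,0], [-1,0,1,0,0],
                            [-1,0,0,1,0], [-1,0,0,0,1]]))  # 1/(2(n-1)) = 1/8
```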
--- abstract: 'To what extent should we expect the syzygies of Veronese embeddings of projective space to depend on the characteristic of the field? As computation of syzygies is impossible for large degree Veronese embeddings, we instead develop an heuristic approach based on random flag complexes. We prove that the corresponding Stanley–Reisner ideals have Betti numbers which almost always depend on the characteristic, and we use this to conjecture that the syzygies of the $d$-uple embedding of projective $r$-space with $r\geq 7$ should depend on the characteristic for almost all $d$.' author: - 'Caitlyn Booms, Daniel Erman, and Jay Yang' bibliography: - 'bib.bib' title: 'Heuristics for $\ell$-torsion in Veronese Syzygies' --- Introduction ============ Imagine ${\mathbb{P}}^{10}$ embedded into a larger projective space by the $d$-uple Veronese embedding, where $d$ is some large integer like $d=100$ or $d=100000$. What should we expect about the syzygies? Such questions were raised by Ein and Lazarsfeld in [@ein-lazarsfeld-asymptotic] and later in [@ein-erman-lazarsfeld-random]. While they focused on quantitative behaviors that are independent of the ground field, we ask: [*To what extent should we expect the syzygies to depend on the characteristic, if at all? Given the impossibility of computing data for large $d$, how can we make a reasonable conjecture?*]{} The central idea in this paper is the development of an heuristic—based on a random flag complex construction—for modelling the syzygies of Veronese embeddings of projective space. The resulting conjectures propose that, when it comes to dependence on the characteristic of the ground field, pathologies are the norm. Let us make this more precise. For any integers $r,d\geq 1$ and any field $k$, we may consider the $d$-uple embedding of ${\mathbb{P}}^r_k$ into ${\mathbb{P}}^{\binom{r+d}{d}-1}$; the image is given by an ideal $I\subset S$, where $S$ is a polynomial ring in $\binom{r+d}{d}$ variables over $k$. We denote the algebraic Betti numbers of the image by $\beta_{i,j}({\mathbb{P}}^r_k;d) := \dim_k \operatorname{Tor}_i^S(S/I,k)_j$. These encode the number of degree $j$ generators for the $i$’th syzygies, and a major open question is to describe the Betti table $\beta({\mathbb{P}}^r_k;d)$, which is the collection of all these Betti numbers [@green-koszul2; @castryck-et-al; @big-computation; @anderson; @bouc; @jozefiak-pragacz-weyman; @reiner-roberts; @ottaviani-paoletti; @vu; @greco-martino; @ein-lazarsfeld-asymptotic; @ein-erman-lazarsfeld-quick; @raicu]. Since each individual Betti number is invariant under flat extensions, the Betti table is determined by the integers $r,d$ and the characteristic of $k$. For a prime $\ell$, we say that $\beta({\mathbb{P}}^r;d)$ [**has $\ell$-torsion**]{} if $\beta({\mathbb{P}}^r_{\mathbb F_\ell};d) \ne \beta({\mathbb{P}}^r_{\mathbb Q};d)$, and we say that $\beta({\mathbb{P}}^r;d)$ [**depends on the characteristic**]{} if this occurs for some $\ell$.[^1] There are two known cases. - For $r=1$ and any $d$, the Betti numbers in $\beta({\mathbb{P}}^r; d)$ do not depend on the characteristic, as any rational normal curve is resolved by an Eagon-Northcott complex. - If $r\geq 7$, Andersen’s thesis [@anderson] shows that $\beta_{5,7}(\mathbb P^r;2)$ has $5$-torsion. Very little else seems to be known or even conjectured about the dependence of Veronese syzygies on the characteristic, including no known examples of $\ell$-torsion for $\ell\ne 5$. 
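The mechanism behind such torsion can be seen in miniature. The sketch below (ours, not tied to Veronese syzygies) computes the first Betti number of the classic $6$-vertex triangulation of the real projective plane over $\mathbb Q$ and over $\mathbb F_2$ by row-reducing the simplicial boundary matrices; the answers differ, which is exactly the kind of characteristic dependence at issue:

```python
from fractions import Fraction
from itertools import combinations

# minimal 6-vertex triangulation of the real projective plane RP^2
faces = [(1,2,3),(1,2,4),(1,3,5),(1,4,6),(1,5,6),
         (2,3,6),(2,4,5),(2,5,6),(3,4,5),(3,4,6)]
edges = sorted({e for f in faces for e in combinations(f, 2)})
verts = sorted({(v,) for f in faces for v in f})

def boundary(cells, subcells):
    """Simplicial boundary matrix with the usual alternating signs."""
    row = {s: i for i, s in enumerate(subcells)}
    M = [[0] * len(cells) for _ in subcells]
    for j, c in enumerate(cells):
        for k in range(len(c)):
            M[row[c[:k] + c[k+1:]]][j] = (-1) ** k
    return M

def rank(M, p=None):
    """Rank by Gaussian elimination over Q (p=None) or over F_p (p prime)."""
    A = [[Fraction(x) if p is None else x % p for x in r] for r in M]
    rk = 0
    for col in range(len(A[0])):
        piv = next((i for i in range(rk, len(A)) if A[i][col] != 0), None)
        if piv is None:
            continue
        A[rk], A[piv] = A[piv], A[rk]
        for i in range(len(A)):
            if i != rk and A[i][col] != 0:
                m = (A[i][col] / A[rk][col] if p is None
                     else A[i][col] * pow(A[rk][col], p - 2, p) % p)
                A[i] = [(a - m * b) % p if p else a - m * b
                        for a, b in zip(A[i], A[rk])]
        rk += 1
    return rk

d1, d2 = boundary(edges, verts), boundary(faces, edges)
for p in (None, 2):
    b1 = len(edges) - rank(d1, p) - rank(d2, p)   # dim ker d1 - dim im d2
    print("over", "Q" if p is None else "F_2", ": b_1 =", b1)
# b_1 = 0 over Q but b_1 = 1 over F_2, reflecting 2-torsion in H_1
```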
One key challenge in this area is the difficulty of generating good data. For instance, the syzygies of ${\mathbb{P}}^2$ under the $5$-uple embedding were only recently computed [@castryck-et-al; @big-computation]. For larger values of $d$ and $r$, computation is essentially impossible: in the case of ${\mathbb{P}}^{10}$ and $d=100$ mentioned above, the computation would involve $\approx 4.68\times 10^{13}$ variables. Heuristics can provide an alternate route for generating conjectures, especially when computation is infeasible. (Such an approach is quite common in predicting properties of the distribution of the prime numbers, for instance.) In this paper, we use an heuristic model to motivate conjectures about $\ell$-torsion in $\beta({\mathbb{P}}^r;d)$. For instance, we are led to conjecture that dependence on the characteristic should be commonplace as $d\to \infty$. \[conj:dependence\] Let $r\geq 7$. For any $d\gg 0$, the Betti table of $\mathbb P^r$ under the $d$-uple embedding depends on the characteristic. This conjecture is based upon corresponding properties of the following model for Veronese syzygies. We let $\Delta\sim \Delta(n,p)$ denote a random flag complex on $n$ vertices with attaching probability $p$. (See §\[sec:background\] for details.) For a given field $k$, we let $I_\Delta$ be the corresponding Stanley–Reisner ideal in $S=k[x_1,\dots,x_n]$. Ein and Lazarsfeld showed that if $d\gg 0$, then almost all of the Betti numbers in rows $1,\dots, r$ of $\beta({\mathbb{P}}_k^r;d)$ are nonzero (see for instance [@erman-yang Theorem 1.1]). Theorem 1.3 of [@erman-yang] shows that a similar result holds for $I_\Delta$ as long as $n^{-1/(r-1)}\ll p \ll n^{-1/r}$ and $n\gg 0$. Thus, if $p$ is in the specified range, then the Betti table $\beta(S/I_\Delta)$ as $n\to \infty$ satisfies nonvanishing properties[^2] similar to those of $\beta({\mathbb{P}}_k^r;d)$ as $d\to \infty$; in this sense, the Betti tables $\beta(S/I_\Delta)$ determined by $\Delta(n,p)$ can act as a random model for Veronese syzygies. To predict how $\beta({\mathbb{P}}^r;d)$ depends on the characteristic, we will therefore consider the corresponding questions for $\beta(S/I_\Delta)$ for various fields $k$. As with Veronese syzygies, we say that the Betti table of the Stanley–Reisner ideal of $\Delta$ **has $\ell$-torsion** if this Betti table is different when defined over a field of characteristic $\ell$ than it is over $\mathbb Q$, and we say that this Betti table **depends on the characteristic** if this occurs for some $\ell$. We prove: \[thm:Delta depend\] Let $r\geq 7$, and let $\Delta\sim \Delta(n,p)$ be a random flag complex with $n^{-1/(r-1)} \ll p \ll n^{-1/r}$. With high probability as $n\to \infty$, the Betti table of the Stanley–Reisner ideal of $\Delta$ depends on the characteristic. In other words, if $p$ is in the range where the Betti table of the Stanley–Reisner ideal of $\Delta$ behaves like $\mathbb P^{r}$—in the sense of [@erman-yang Theorem 1.3]—then this Betti table will almost always depend on the characteristic for $n\gg 0$. This theorem is the basis of Conjecture \[conj:dependence\]. Since our $r \geq 7$ hypothesis in Conjecture \[conj:dependence\] is based upon properties of the $\Delta(n,p)$ model, the fact that this hypothesis lines up with Andersen’s example appears to be a coincidence; see Remarks \[rmk:r7\] and \[rmk:r bound\] for more details.
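For readers who want to experiment with the model itself, sampling $\Delta(n,p)$ takes a few lines with networkx; here $n$ and $r$ are kept deliberately small so the clique enumeration stays cheap (the regime of the theorem needs far larger $n$):

```python
import networkx as nx

n, r = 100, 3
p = n ** (-2 / (2 * r - 1))      # lies between n^(-1/(r-1)) and n^(-1/r)

G = nx.gnp_random_graph(n, p, seed=0)
counts = {}                      # number of faces in each dimension
for clique in nx.enumerate_all_cliques(G):   # cliques = faces of the flag complex
    d = len(clique) - 1
    counts[d] = counts.get(d, 0) + 1
print(dict(sorted(counts.items())))
```

Feeding the resulting faces into a Stanley–Reisner ideal in a system such as Macaulay2 would then produce the kind of Betti tables studied here.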
Note also that, based on [@anderson], we might even find $\ell$-torsion in $\beta({\mathbb{P}}^r; d)$ for small values of $d$ as well; however, Theorem \[thm:Delta depend\] is asymptotic in nature, which motivates the $d\gg 0$ hypothesis in Conjecture \[conj:dependence\].
--- abstract: | We study modes trapped in a rotating ring carrying the self-focusing (SF) or defocusing (SDF) cubic nonlinearity and double-well potential $\cos ^{2}\theta $, where $\theta $ is the angular coordinate. The model, based on the nonlinear Schrödinger (NLS) equation in the rotating reference frame, describes the light propagation in a twisted pipe waveguide, as well as in other optical settings, and also a Bose-Einstein condensate (BEC) trapped in a torus and dragged by the rotating potential. In the SF and SDF regimes, five and four trapped modes of different symmetries are found, respectively. The shapes and stability of the modes, and transitions between them are studied in the first rotational Brillouin zone. In the SF regime, two symmetry-breaking transitions are found, of subcritical and supercritical types. In the SDF regime, an antisymmetry-breaking transition occurs. Ground-states are identified in both the SF and SDF systems. author: - 'Yongyao Li$^{1,2}$, Wei Pang$^{3}$, and Boris A. Malomed$^{1}$' bibliography: - 'apssamp.bib' title: 'Nonlinear modes and symmetry breaking in rotating double-well potentials' --- Introduction ============ The concept of the spontaneous symmetry breaking (SSB) in nonlinear systems was introduced in Ref. [@Chris]. Its significance has been later recognized in various physical settings, including numerous ones originating in nonlinear optics [@Snyder]-[@photo], Bose-Einstein condensates (BECs) [@Milburn]-[@Arik], and degenerate fermionic gases [Padua]{}. A general analysis of the SSB phenomenology was developed too [misc]{}, which is closely related to the theory of bifurcations in nonlinear systems [@Bif]. Fundamental manifestations of the SSB occur in nonlinear systems based on symmetric double-well potentials (DWPs) or dual-core configurations. A paradigmatic example of the latter in nonlinear optics is the twin-core nonlinear fiber, which may serve as a basis for the power-controlled optical switching [@Snyder]. DWP settings in optics were analyzed theoretically and implemented experimentally in photorefractive crystals [@photo]. In the realm of matter waves, main effects predicted in DWPs are Josephson oscillations [@Zapata], the asymmetric self-trapping of localized modes [@Warsaw], and similar effects in binary mixtures [@Mazzarella; @Ng]. Both the Josephson and self-trapping regimes were implemented in the atomic condensate with contact repulsive interactions [@Markus]. The SSB was also analyzed in one- and two-dimensional (1D and 2D) models of BEC trapped in dual-core configurations [@Arik]. Another dynamical setting which has produced a number of interesting effects in media with the intrinsic nonlinearity, especially in BEC, is provided by rotating potentials. It is well known that stirring the self-repulsive condensate typically leads to the formation of vortex lattices [vort-latt]{}, although it was found experimentally [@Cornell] and demonstrated theoretically [@giant] that giant vortices, rather than lattices, may also be formed under special conditions (when the centrifugal force nearly compensates the trapping harmonic-oscillator potential). On the other hand, the rotation of self-attractive condensates gives rise to several varieties of stable localized modes, such as vortices, “crescents" (mixed-vorticity states), and the so-called center-of-mass modes (quasi-solitons) [@rotating-trap]. 
Further development of this topic was achieved by the consideration of rotating lattice potentials, which can be implemented experimentally as an optical lattice induced in BEC by a broad laser beam transmitted through a revolving sieve [@sieve], or using *twisted* photonic-crystal fibers in optics [@twistedPC]. In these systems, quantum BEC states and vortex lattices have been studied [@in-sieve], as well as solitons and solitary vortices depinning from the lattice when its rotation velocity exceeds a critical value [@HS]. A specific implementation of the latter settings is provided by the quasi-1D lattice, or a single quasi-1D potential well, revolving about its center in the 2D geometry [@Barcelona]. In particular, the rotation makes it possible to create fundamental and vortical solitons in the self-repulsive medium, where, obviously, nonrotating quasi-1D potentials cannot maintain bright solitons [@Kon]. Furthermore, the rotation of a DWP gives rise to azimuthal Bloch bands [@Ueda; @Stringari]. As mentioned above, the static DWP and its limit form reducing to dual-core systems are fundamental settings for the onset of the SSB [@Snyder]-[@Arik]. A natural problem, which is the subject of the present work, is the SSB and related phenomenology, i.e., the existence and stability of symmetric, antisymmetric, and asymmetric modes, in *rotating* DWPs (recently, a revolving DWP configuration was considered in a different context in Ref. [@WenLuo], as a stirrer generating vortex lattices). To analyze basic features of the phenomenology, we here concentrate on the one-dimensional DWP placed onto a rotating ring. As shown in Fig. \[fig\_1\], in optics this setting may be realized as a hollow pipe waveguide twisted with pitch $2\pi /\omega $, while the azimuthal modulation of the refractive index, which gives rise to the effective potential $V(\theta )$, is written into the material of the pipe. Alternatively, a *helical* potential structure can be created in a straight sheath waveguide by means of optical-induction techniques, using pump waves with the ordinary polarization in a photorefractive material (while the probe wave is to be launched in the extraordinary polarization [@Moti]), or the method of the electromagnetically-induced transparency (EIT) [@Fleischhauer], including its version recently proposed for supporting spatial solitons [@Yongyao]. In the latter case, one can make the pipe out of Y$_{2}$SiO$_{5}$ crystal doped by Pr$^{3+}$ (Pr:YSO) ions [@Kuznetsova]. In either case of the use of the photorefractive material or EIT, the helical structure may be induced by a superposition of a pair of co-propagating *vortical* pump waves, with equal amplitudes, a small mismatch of the propagation constants $k_{1,2}$ ($\Delta k\equiv k_{1}-k_{2}\ll k_{1}$), and opposite vorticities $\left( \pm S\right) $, which will give rise to an effective potential profile, $$\begin{aligned} V(\theta ,z)\sim r^{S}\cos \left( \Delta k\cdot z+2S\theta \right) , \label{V}\end{aligned}$$where $z\ $is the propagation distance, while $r$ and $\theta $ are the polar coordinates in the transverse plane. In terms of the BEC, a similar setting may be based on ring-shaped (toroidal) traps, which have been created in experiments [@torus] and investigated in various contexts theoretically [@Salasnich]. In that case, the rotating periodic potential can be added to the toroidal trap [@sieve], which is equivalent to the consideration of the rotating ring [@Stringari; @rotating-ring].
In this work, we study basic types of trapped modes and their SSB phenomenology in the 1D rotating ring, in both cases of the self-focusing and self-defocusing (SF and SDF) cubic nonlinearities. In Sec. II we formulate the model and present analytical results, which predict a boundary between the symmetric and asymmetric modes, the analysis being possible for the small-amplitude potential and the rotation rate close to $\omega =1/2$. Numerical results are reported in a systematic form, and are compared to the analytical predictions, in Secs. III and IV for the SF and SDF nonlinearities, respectively. The paper is concluded by Sec. V. The model and analytical considerations ======================================= As said above, we consider the limit of a thin helical shell, which implies a fixed value of the radius in Eq. (\[V\]), $r=r_{0}$, that we normalize to be $r_{0}=1$. Taking the harmonic periodic potential in the form of Eq. (\[V\]) with $S=1$, $V(\theta ,z)=2A\cos ^{2}(\theta -\omega z)$, the corresponding scaled nonlinear Schrödinger equation is $$i{\frac{\partial }{\partial z}}\psi =\left[ -{\frac{1}{2}}{\frac{\partial ^{2}}{\partial \theta ^{2}}}+V(\theta ,z)-\sigma |\psi |^{2}\right] \psi , \label{Eq5}$$where $\sigma =+1$ and $-1$ refer to the SF and SDF nonlinearities, respectively. Then, we rewrite Eq. (\[Eq5\]) in the helical coordinate system, with $\theta ^{\prime }\equiv \theta -\omega z$: $$i{\frac{\partial }{\partial z}}\psi =\left[ -{\frac{1}{2}}{\frac{\partial ^{2}}{\partial \theta ^{\prime }{}^{2}}}+i\omega {\frac{\partial }{\partial \theta ^{\prime }}}+2A\cos ^{2}(\theta ^{\prime })-\sigma |\psi |^{2}\right] \psi , \label{Eq5p}$$where the solution domain is defined at $-\pi \leq \theta ^{\prime }\leq +\pi $. For the narrow toroidal BEC
--- abstract: 'In this paper, the dynamical attractor and heteroclinic orbit have been employed to make the late-time behaviors of the model insensitive to the initial condition and thus alleviate the fine-tuning problem in the torsion cosmology. The late-time de Sitter attractor indicates that torsion cosmology is an elegant scheme and the scalar torsion mode is an interesting geometric quantity for physics. The numerical solutions obtained by Nester et al. are not periodic solutions, but are quasi-periodic solutions near the focus for the coupled nonlinear equations.' author: - 'Xin-zhou Li' - 'Chang-bo Sun' - Ping Xi title: Torsion cosmological dynamics --- INTRODUCTION ============ The current observations, such as SNeIa (Supernovae type Ia), CMB (Cosmic Microwave Background) and large scale structure, converge on the fact that a spatially homogeneous and gravitationally repulsive energy component, referred to as dark energy, accounts for about $70$ % of the energy density of the universe. Some heuristic models that roughly describe the observable consequences of dark energy were proposed in recent years, a number of them stemming from an underlying physical theory [@Padmanabhan01] and the others being purely phenomenological [@Copeland02]. About thirty years ago, the bouncing cosmological model with torsion was suggested in Ref. [@Kerlick], but the torsion was imagined as playing a role only at high densities in the early universe. Goenner et al. made a general survey of the torsion cosmology [@Goenner], in which the equations for all the PGT (Poincaré Gauge Theory of gravity) cases were discussed, although they only solved in detail a few particular cases. Recently some authors have begun to study torsion as a possible origin of the accelerating universe [@Boeheretal]. Nester and collaborators [@shie03] consider an accounting for the accelerated universe in terms of PGT: dynamic scalar torsion. With the usual assumptions of homogeneity and isotropy in cosmology, they find that the torsion field could play the role of dark energy. This elegant model has only a few adjustable parameters, so scalar torsion may be easily falsified as “dark energy”. The fine-tuning problem should be one of the most important issues for the cosmological models, and a good model should limit the fine-tuning as much as possible. The dynamical attractor of the cosmological system has been employed to make the late-time behaviors of the model insensitive to the initial condition of the field and thus alleviate the fine-tuning problem [@Hao04]. In this paper, we study the attractor and the heteroclinic orbit in the torsion cosmology. We show that the late-time de Sitter behaviors cover a wide range of the parameters. This attractor indicates that torsion cosmology is an elegant scheme and the scalar torsion mode is an interesting geometric quantity for physics. Furthermore, there are only exact periodic solutions for the linearized system, which just correspond to the critical line (line of centers). The numerical solutions in Ref. [@shie03] are not periodic, but are quasi-periodic solutions near the focus for the coupled nonlinear equations. AUTONOMOUS EQUATIONS ==================== PGT [@Hehl05], based on a Riemann-Cartan geometry, allows for dynamic torsion in addition to curvature.
The affine connection of the Riemann-Cartan geometry is $$\label{PGT} \Gamma_{\mu\nu}{}^\kappa=\overline{\Gamma}_{\mu\nu}{}^\kappa+\frac{1}{2}(T_{\mu\nu}{}^\kappa+T^\kappa{}_{\mu\nu} +T^\kappa{}_{\nu\mu})\,,$$ where $\bar{\Gamma}_{\mu\nu}{}^{\kappa}$ is the Levi-Civita connection and $T_{\mu\nu}^{\kappa}$ is the torsion tensor. Meanwhile, the Ricci curvature and scalar curvature can be written as $$\begin{aligned} &&R_{\mu\nu} = \overline{R}_{\mu\nu} + \overline{\nabla}_\nu T_\mu +\frac{1}{2} (\overline{\nabla}_\kappa - T_\kappa)(T_{\nu\mu}{}^\kappa+T^\kappa{}_{\mu\nu}+T^\kappa{}_{\nu\mu})\nonumber\\ &&+\frac{1}{4}(T_{\kappa\sigma\mu}T^{\kappa\sigma}{}_\nu+2T_{\nu \kappa \sigma}T^{\sigma \kappa}{}_\mu)\,,\\ &&R=\overline{R} + 2\overline{\nabla}_\mu T^\mu+\frac{1}{4}(T_{\mu\nu \kappa}T^{\mu\nu \kappa} +2T_{\mu\nu \kappa}T^{\kappa\nu\mu}-4T_\mu T^\mu),\nonumber\\\end{aligned}$$ where $\bar{R}_{\mu\nu}$ and $\bar{R}$ are the Riemannian Ricci curvature and scalar curvature, respectively, and $\bar{\nabla}$ is the covariant derivative with the Levi-Civita connection and $T_{\mu}\equiv T^{\nu}_{\mu\nu}$. Following Ref. [@shie03], we take the restricted form of torsion in this paper $$\begin{aligned} T_{\mu\nu\rho}=\frac{2}{3}T_{[\mu}g_{\nu]\rho} \label{restrictedT}\end{aligned}$$ therefore, the gravitational Lagrangian density for the scalar mode is (For a detailed discussion see Ref. [@Nester2]) $$\begin{aligned} L_g &=& -\frac{a_0}{2}R +\frac{b}{24}R^2\nonumber\\ &&+\frac{a_1}{8}(T_{\nu\sigma\mu}T^{\nu\sigma\mu} +2T_{\nu\sigma\mu}T^{\mu\sigma\nu}-4T_\mu T^\mu)\,,\label{Lg0+mode}\end{aligned}$$ Since current observations favor a flat universe, we will work in the spatially flat Robertson-Walker metric. According to the homogeneity and isotropy, the torsion $T_{\mu}$ should be only time dependent, so one can let $T_{t}(t)\equiv\Phi(t)$ and the spatial parts vanish since we have taken the restricted form (\[restrictedT\]) of torsion. For the general form, the torsion tensor has two independent components [@Goenner]-[@Boeheretal]. From the field equations one can finally obtain the necessary equations to integrate for the matter-dominated era (for a detailed discussion see Ref. [@shie03]) $$\begin{aligned} \dot{H}&=&\frac{\mu}{6a_1}R-\frac{\rho}{6a_1}-2H^2\,,\label{dtH}\\ \dot{\Phi}&=&-\frac{a_0}{2a_1}R-\frac{\rho}{2a_1}-3H\Phi +\frac{1}{3}\Phi^2\,,\label{dtphi}\\ \dot{R}&=&-\frac{2}{3}\left(R+\frac{6\mu}{b}\right)\Phi\,,\label{dtR}\end{aligned}$$ where $\mu= a_1-a_0$ and the energy density of the matter component is $$\begin{aligned} &&\rho=\frac{b}{18}(R+\frac{6\mu}{b})(3H-\Phi)^2-\frac{b}{24}R^2-3a_1H^2 \,.\label{fieldrho} \end{aligned}$$ One can scale the variables and the parameters as $$\begin{aligned} &&t\rightarrow l_{p}^{-2}H_{0}^{-1}t,\,\, H\rightarrow l_{p}^{2}H_{0} H, \,\, \Phi\rightarrow l_{p}^{2}H_{0}\Phi,\,\, R\rightarrow l_{p}^{4}H_{0}^{2}R,\nonumber\\ &&a_0\rightarrow l_{p}^{2}a_0,\,\, a_1\rightarrow l_{p}^{2}a_1,\,\, \mu\rightarrow l_{p}^{2}\mu,\,\, b\rightarrow l_{p}^{-2}H_{0}^{-2}b,\label{scale}\end{aligned}$$ where $H_0$ is the present value of the Hubble parameter and $l_p\equiv\sqrt{8\pi G}$ is the Planck length. Under the transform (\[scale\]), Eqs. (\[dtH\])-(\[dtR\]) remain unchanged. After the transform, the new variables $t$, $H$, $\Phi$ and $R$, and the new parameters $a_0$, $a_1$, $\mu$ and $b$ are all dimensionless. Furthermore, the Newtonian limit requires $a_0=-1$. Obviously, Eqs.
(\[dtH\])-(\[dtR\]) form an autonomous system, so we can use the qualitative methods of ordinary differential equations. It is worth noting that in the analysis of critical points, Copeland et al. [@Copeland] introduced the elegant compact variables which are defined from the Friedmann equation constraint, but in our case, the Friedmann equation cannot be written in the ordinary form, so the compact variables are not convenient here. Therefore, we will analyze the system of Eqs. (\[dtH\])-(\[dtR\]) using the variables $H$, $\Phi$ and $R$ under the transform (\[scale\]). LATE TIME DE SITTER ATTRACTOR ============================= In the case of scalar
--- abstract: 'We calculate QCD corrections to the transversely polarized Drell-Yan process at a measured $Q_T$ of the produced lepton pair in the dimensional regularization scheme. The $Q_T$ distribution is discussed, resumming soft gluon effects relevant for small $Q_T$.' address: - | Radiation Laboratory, RIKEN\ 2-1 Hirosawa, Wako, Saitama 351-0198, JAPAN\ kawamura@rarfaxp.riken.jp - | Theory Division, High Energy Accelerator Research Organization (KEK)\ 1-1 OHO, Tsukuba 305-0801, JAPAN - 'Department of Physics, Juntendo University, Inba-gun, Chiba 270-1695, JAPAN' author: - HIROYUKI KAWAMURA - JIRO KODAIRA and HIROTAKA SHIMIZU - KAZUHIRO TANAKA title: '$Q_T$ RESUMMATION IN TRANSVERSELY POLARIZED DRELL-YAN PROCESS [^1]' --- Hard processes with polarized nucleon beams enable us to study the spin-dependent dynamics of QCD and the spin structure of the nucleon. The helicity distribution $\Delta q(x)$ of quarks within the nucleon has been measured in polarized DIS experiments, and $\Delta G(x)$ of gluons has also been estimated from their scaling violations. On the other hand, the transversity distribution $\delta q(x)$, i.e. the distribution of transversely polarized quarks inside a transversely polarized nucleon, cannot be measured in inclusive DIS due to its chiral-odd nature,[@RS:79] and remains the last unknown distribution at the leading twist. The transversely polarized Drell-Yan (tDY) process is one of the processes where the transversity distribution can be measured, and has been undertaken in the RHIC-Spin experiment. We compute the 1-loop QCD corrections to tDY at a measured $Q_T$ and azimuthal angle $\phi$ of the produced lepton in the dimensional regularization scheme. For this purpose, the phase space integration in $D$ dimensions, separating out the relevant transverse degrees of freedom, is required to extract the $\propto \! \cos(2\phi)$ part of the cross section characteristic of the spin asymmetry of tDY.[@RS:79] The calculation is rather cumbersome compared with the corresponding calculation in the unpolarized and longitudinally polarized cases, and has not been performed so far. We obtain the NLO ${\cal O}(\alpha_s)$ corrections to the tDY cross section in the $\overline{\rm MS}$ scheme. We also include soft gluon effects by all-order resummation of logarithmically enhanced contributions at small $Q_T$ (“edge regions of the phase space”) up to next-to-leading logarithmic (NLL) accuracy, and obtain the first complete result of the $Q_T$ distribution for all regions of $Q_T$ at the NLL level. We first consider the NLO ${\cal O}(\alpha_s)$ corrections to tDY: $h_1(P_1,s_1)+h_2(P_2,s_2)\rightarrow l(k_1)+\bar{l}(k_2)+X$, where $h_1,h_2$ denote nucleons with momenta $P_1,P_2$ and transverse spins $s_1,s_2$, and $Q=k_1+k_2$ is the 4-momentum of the DY pair. The spin dependent cross section $\Delta_T d \sigma \equiv (d \sigma (s_1 , s_2) - d \sigma (s_1 , - s_2))/2$ is given as a convolution $$\Delta_T d\sigma = \int d x_1 d x_2\, \delta H (x_1 \,,\,x_2 ; \mu_F)\, \Delta_T d \hat{\sigma} (s_1\,,\,s_2 ; \mu_F),$$ where $\mu_F$ is the factorization scale, and $$\delta H (x_1 \,,\,x_2 ; \mu_F)\, = \sum_i e_i^2 [\delta q_i(x_1 ; \mu_F)\delta \bar{q}_i(x_2 ; \mu_F) +\delta \bar{q}_i(x_1 ; \mu_F)\delta q_i(x_2 ; \mu_F)]$$ is the product of the transversity distributions of the two nucleons, and $\Delta_T d \hat{\sigma}$ is the corresponding partonic cross section. Note that, at the leading twist level, the gluon does not contribute to the transversely polarized process due to its chiral-odd nature.
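To make the leading-order kinematics concrete, the toy sketch below (the shape of $\delta q$ and all numerical values are invented for illustration, not fits to data) evaluates the momentum fractions $\sqrt{\tau}e^{\pm y}$ at which $\delta H$ enters the $\cos(2\phi)$ asymmetry at LO:

```python
import numpy as np

def delta_q(x):                       # toy transversity shape, illustration only
    return x**0.5 * (1 - x)**3

def delta_H(x1, x2):
    """sum_i e_i^2 [dq_i(x1) dqbar_i(x2) + dqbar_i(x1) dq_i(x2)],
    toy version with dqbar = dq and u, d flavours only."""
    squared_charges = (4/9, 1/9)
    return sum(e * 2 * delta_q(x1) * delta_q(x2) for e in squared_charges)

S, Q2, y = 200.0**2, 8.0**2, 0.5      # RHIC-like kinematics, illustrative
tau = Q2 / S
x1, x2 = np.sqrt(tau) * np.exp(y), np.sqrt(tau) * np.exp(-y)  # LO fractions
print(x1, x2, delta_H(x1, x2))
```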
We compute the one-loop corrections to $\Delta_T d \hat{\sigma}$, which involve the virtual gluon corrections and the real gluon emission contributions, e.g., $q (p_1 , s_1) + \bar{q} (p_2 , s_2) \to l (k_1) + \bar{l} (k_2) + g$, with $p_i = x_i P_i$. We regularize the infrared divergence in $D=4 - 2 \epsilon$ dimension, and employ naive anticommuting $\gamma_5$ which is a usual prescription in the transverse spin channel.[@WV:98] In the $\overline{\rm MS}$ scheme, we eventually get,[@KKST; @KKST2] to NLO accuracy, $$\begin{aligned} \frac{\Delta_T d \sigma}{d Q^2 d Q_T^2 d y d \phi} = N\, \cos{(2 \phi )} \left[ X\, (Q_T^2 \,,\, Q^2 \,,\, y) + Y\, (Q_T^2 \,,\, Q^2 \,,\, y) \right], \label{cross section}\end{aligned}$$ where $N = \alpha^2 / (3\, N_c\, S\, Q^2)$ with $S=(P_1 +P_2 )^2$, $y$ is the rapidity of virtual photon, and $\phi$ is the azimuthal angle of one of the leptons with respect to the initial spin axis. For later convenience, we have decomposed the cross section into the two parts: the function $X$ contains all terms that are singular as $Q_T \rightarrow 0$, while $Y$ is of ${\cal O}(\alpha_s)$ and finite at $Q_T=0$. Writing $X = X^{(0)} + X^{(1)}$ as the sum of the LO and NLO contributions, we have[@KKST; @KKST2] $X^{(0)} = \delta H (x_1^0\,,\,x_2^0\,;\, \mu_F )\ \delta (Q_T^2)$, and $$\begin{aligned} X^{(1)} &=& \frac{\alpha_s}{2 \pi} C_F\ \Biggl\{ \delta H (x_1^0\,,\,x_2^0\,;\, \mu_F ) \left[\, 2\, \left( \frac{\ln Q^2 / Q_T^2}{Q_T^2} \right)_+ - \frac{3}{(Q_T^2)_+} + \left(\, - 8 + \pi^2 \right) \delta (Q_T^2) \right] \nonumber\\ \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!+&& \!\!\!\!\!\!\!\!\left( \frac{1}{(Q_T^2)_+} + \delta (Q_T^2) \ln \frac{Q^2}{\mu_F^2} \right)\!\!\! \left[ \int^1_{x_1^0} \frac{d z}{z} \delta P_{qq}^{(0)} (z)\ \delta H \left( \frac{x_1^0}{z}, x_2^0 ;\ \mu_F \right) + ( x_1^0 \leftrightarrow x_2^0 ) \right] \Biggr\} , \label{eq:x}\end{aligned}$$ where $x_1^0 = \sqrt{\tau}\ e^y , x_2^0 =\sqrt{\tau}\ e^{-y}$ are the relevant scaling variables with $\tau =Q^2/S$, and $\delta P_{qq}^{(0)} (z) = 2 z/(1 - z)_+ + (3/2)\, \delta (1 - z)$ is the LO transverse splitting function.[@AM:90] In (\[eq:x\]), the terms involving $\delta(Q_T^2 )$ come from the virtual gluon corrections, while the other terms represent the recoil effects due to the real gluon emissions. For the analytic expression of $Y$, see Ref.[@KKST2]. Eq. (\[cross section\]) gives the first NLO result in the $\overline{\rm MS}$ scheme. We note that there has been a similar NLO calculation of tDY cross section in massive gluon scheme.[@VW:93] We also note
--- author: - | Maxim Naumov$^*$, John Kim$^\dagger$, Dheevatsa Mudigere$^\ddagger$, Srinivas Sridharan, Xiaodong Wang,\ Whitney Zhao, Serhat Yilmaz, Changkyu Kim, Hector Yuen, Mustafa Ozdal, Krishnakumar Nair,\ Isabel Gao, Bor-Yiing Su, Jiyan Yang and Mikhail Smelyanskiy\ Facebook, 1 Hacker Way, Menlo Park, CA bibliography: - 'refs.bib' title: 'Deep Learning Training in Facebook Data Centers: Design of Scale-up and Scale-out Systems' ---
--- author: - 'Simon Taylor, Chris Sherlock, Gareth Ridall and Paul Fearnhead' bibliography: - 'MUNErefs.bib' date: 11th April 2018 title: Motor Unit Number Estimation via Sequential Monte Carlo --- Abstract {#abstract .unnumbered} ======== A change in the number of motor units that operate a particular muscle is an important indicator of the progress of a neuromuscular disease and the efficacy of a therapy. Inference for realistic statistical models of the typical data produced when testing muscle function is difficult, and estimating the number of motor units from these data is an ongoing statistical challenge. We consider a set of models for the data, each with a different number of working motor units, and present a novel method for Bayesian inference, based on sequential Monte Carlo, which provides estimates of the marginal likelihood and, hence, a posterior probability for each model. To implement this approach in practice we require sequential Monte Carlo methods that have excellent computational and Monte Carlo properties. We achieve this by leveraging the conditional independence structure in the model, where given knowledge of which motor units fired as a result of a particular stimulus, parameters that specify the size of each unit’s response are independent of the parameters defining the probability that a unit will respond at all. The scalability of our methodology relies on the natural conjugacy structure that we create for the former and an enforced, approximate conjugate structure for the latter. A simulation study demonstrates the accuracy of our method, and inferences are consistent across two different datasets arising from the same rat tibial muscle. Keywords {#keywords .unnumbered} ======== Motor Unit Number Estimation; Sequential Monte Carlo; Model Selection Introduction {#sec:Intro} ============ Motor unit number estimation (MUNE) is a continuing challenge for clinical neurologists. An ability to determine the number of motor units (MUs) that operate a particular muscle provides important insights into the progression of various neuromuscular ailments such as amyotrophic lateral sclerosis [@She06; @Bro07], and aids the assessment of the efficacy of potential therapy treatments [@Cas10]. A MU is the fundamental component of the neuromuscular system and consists of a single motor neuron and the muscle fibres whose contraction it governs. Restriction of a MU’s operation may be a result of impaired communication between the motor neuron and muscle fibres, abnormality in their function, or atrophy of either cell type. A direct investigation into the number of MUs via a biopsy, for example, is not helpful since this only determines the presence of each MU, not its functionality. Electromyography (EMG) provides a set of electrical stimuli of varying intensity to a group of motor neurons; each stimulus artificially induces a twitch in the targeted muscle, providing an *in situ* measurement of the functioning of the MUs. The effect on the muscle may be measured by recording either the minute variation in muscle membrane potential or the physical force the muscle exerts [@Maj05]. The generic methods developed in this article are applicable to either type of measurement. Since our data consist of whole muscle twitch force (WMTF) measurements, we henceforth describe the response in these terms.
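Before turning to the data, it may help to see this generative picture in code. The sketch below (all thresholds, twitch forces and noise levels are invented for illustration; the modelling assumptions are spelled out in the next section) simulates a stimulus-response experiment with a handful of motor units:

```python
import numpy as np

rng = np.random.default_rng(1)
n_units = 5
thr_mean = rng.uniform(10, 30, n_units)    # mean firing thresholds (mA)
thr_sd   = rng.uniform(0.2, 1.0, n_units)  # threshold spreads
mutf     = rng.uniform(30, 50, n_units)    # mean single-unit twitch forces (mN)

def wmtf(stim, noise=2.0, baseline=1.0):
    """One whole-muscle twitch force measurement for a given stimulus."""
    fired = stim > rng.normal(thr_mean, thr_sd)          # all-or-nothing firing
    return rng.normal(baseline, noise) + np.sum(rng.normal(mutf, noise) * fired)

stimuli = np.linspace(5, 35, 300)
forces = np.array([wmtf(s) for s in stimuli])   # a staircase-like curve
```

Units whose thresholds overlap reproduce the alternation phenomenon discussed below: nearby stimuli can recruit different subsets of units.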
In a healthy subject, the stimulus-response curve is typically sigmoidal [@Hen06], illustrating the smooth recruitment of additional MUs as the stimulus increases; however, the relatively low number of MUs in a patient with impaired muscle function may manifest within the stimulus-response relationship as large jumps in WMTF measurements. Figure \[fig:RatData\] shows the two data sets that will be described and analysed in detail in Section \[sec:CaseStudy\], with the large jumps clearly visible. The histograms of absolute differences in response for adjacent stimuli show two main modes: one near 0mN, corresponding to noise, and the other around 40mN, indicating that different MUs fired. The noise arises primarily because of small variations in the contribution to the WMTF provided by any particular MU, whenever it fires. The second general source of noise, visible in isolation at very low stimuli when no MUs are firing, is called the baseline noise. This arises from respiration movements and pulse pressure waves, and particular care is taken to minimise such influences, for example by earthing the subject and equipment, restraining the limb, digitally resetting the force signal prior to each stimulus, synchronising stimuli with the pulse cycle and using highly sensitive measurement devices. MUNE uses the observed stimulus-response pattern to estimate the number of functioning MUs. Techniques for MUNE generally form two classes: the averaging and comprehensive approaches. The most common averaging approach is the incremental technique of [@McC71], which assumes that the MUs can be characterised by an ‘average’ MU with a particular single motor unit twitch force (MUTF), estimated as the average of the magnitudes of the observed stepped increases in twitch force. A large stimulus, known as the supramaximal stimulus, is applied in order to cause all MUs to react. The quotient of the WMTF arising from the supramaximal stimulus and the average MUTF provides a count estimate. However, there is no guarantee that a particular single-stepped increase in response corresponds to a new, previously latent, MU, since it may instead be due to a phenomenon called alternation [@Bro76]. This occurs when two or more MUs have similar activation thresholds such that different combinations of MUs may fire in reaction to two identical stimuli. Consequently, the incremental technique tends to underestimate the average MUTF and hence overestimate the number of MUs. A number of improvements both experimentally [@Kad76; @Sta94 e.g.] and empirically [@Dau95; @Maj07 e.g.] have been proposed to try to deal with the alternation problem but, despite these improvements, each method oversimplifies the data generating mechanism and there is no gold-standard averaging approach; @Bro07 and @Goo14 provide thorough discussions on these approaches to MUNE. ![Stimulus-response curve from a rat tibial muscle using 10sec (left) and 50sec duration stimuli. Histogram inserts represent the frequency in the absolute difference of twitch forces when ordered by stimulus.[]{data-label="fig:RatData"}](figures/SRcurve_rat.pdf){width="80.00000%"} Motor units are more diverse than simple replicates of the ‘average’ MU, with many factors influencing their function. A desire for a more complete model for the data generating mechanism motivated the comprehensive approach to MUNE in @Rid06, which proposed three assumptions: - MUs fire independently of each other and of previous stimuli in an all-or-nothing response.
Each MU fires precisely when the stimulus intensity exceeds a random threshold whose distribution is unique to that MU, with a sigmoidal cumulative distribution function, called an excitability curve. - The firing of a MU is characterised by a MUTF which is independent of the size of the stimulus that caused it to fire, and has a Gaussian distribution with an expectation specific to that MU and a variance common to all MUs. - The measured WMTF is the superposition of the MUTFs of those MUs that fired, together with a baseline component which has a Gaussian distribution with its own mean and variance. From these assumptions, @Rid06 proposed a set of similar statistical models, each of which assumed a different *fixed* number of MUs. MUNE thus reduced to selection of a best model, for which the Bayesian information criterion was used. The class of methods which performs MUNE within a Bayesian framework is commonly referred to as Bayesian MUNE. In a subsequent paper, @Rid07 extended the method by constructing a reversible jump Markov chain Monte Carlo (RJMCMC) algorithm [@Gre95] to sample from the MU-number posterior mass function directly. However, its implementation is highly challenging, with slow and uncertain convergence, particularly when the studied muscle has many MUs. This is partly attributed to difficulty in defining efficient and meaningful transitions between models, with transition rates found to be 0.5–2% [@And07]. The between-model transition rate was improved in @Dro14, where it was noticed that under Assumption A1, for a given stimulus, the majority of MUs are either almost certain to fire or almost certain not to fire. Approximating this near certainty by absolute certainty led to a substantial reduction in the size of the sample space. The approximate sample space for the firing events was sufficiently small to permit marginalisation in the calculation of between-model transition probabilities, increasing the acceptance rate to 9.2% with simulated examples. Nevertheless, substantial issues over convergence remain, as the parameter posterior distributions for models with more than the true number of MUs are multimodal. In this paper, slight alterations of the neuromuscular assumptions permit the development of a fully adapted sequential Monte Carlo (SMC) filter, leading to SMC-MUNE, the first Bayesian MUNE method compatible with real-time analysis. As in @Rid06, the principal inference targets are separate estimates of the marginal likelihood for models with $u=1, \ldots, {u_{\max}}$ MUs, for some maximum size ${u_{\max}}$. The paper proceeds as follows. Section \[sec:Model\] presents the neuromuscular model of @Rid06 for a fixed number of MUs and defines the priors for the model parameters. Section \[sec:Method\] describes the SMC-MUNE method. Due to the complexity of the problem that MUNE addresses, this section is broken into three parts: inference for the firing events and associated parameters; inference for the parameters of the baseline and MUTF processes; and estimation of the marginal likelihood so as to evaluate the posterior mass function for MU-number. Section \[sec:SimStudy\]
--- author: - | [**Jian Gao,  Linzhi Shen,  Fang-Wei Fu** ]{}\ [Chern Institute of Mathematics and LPMC, Nankai University]{}\ [Tianjin, 300071, P. R. China]{}\ title: '[**Skew Generalized Quasi-Cyclic Codes Over Finite Fields**]{}' --- [ In this work, we study a class of generalized quasi-cyclic (GQC) codes called skew GQC codes. By the factorization theory of ideals, we give the Chinese Remainder Theorem over the skew polynomial ring, which leads to a canonical decomposition of skew GQC codes. We also focus on some characteristics of skew GQC codes in detail. For a $1$-generator skew GQC code, we define the parity-check polynomial, determine the dimension and give a lower bound on the minimum Hamming distance. The skew quasi-cyclic (QC) codes are also discussed briefly.]{} [ Skew cyclic codes; Skew GQC codes; $1$-generator skew GQC codes; Skew QC codes]{} [**Mathematics Subject Classification (2000)** ]{} 11T71 $\cdot$ 94B05 $\cdot$ 94B15 [**1 Introduction**]{} Recently, it has been shown that codes over finite rings are a very important class of codes and many types of codes with good parameters can be constructed over rings [@Aydin; @Abualrub2; @Siap2]. Skew polynomial rings are an important class of non-commutative rings. More recently, applications in the construction of algebraic codes have been found [@Abualrub1; @Bhaintwal2; @Boucher1; @Boucher2; @Boucher3], where codes are defined as ideals or modules in the quotient ring of skew polynomial rings. The principal motivation for studying codes in this setting is that polynomials in skew polynomial rings have more factorizations than in the commutative case. This suggests that it may be possible to find good or new codes in the skew polynomial ring with larger minimum Hamming distance. Some researchers have indeed shown that such codes in skew polynomial rings have resulted in the discovery of many new linear codes with better minimum Hamming distances than any previously known linear codes with the same parameters [@Abualrub1; @Boucher1]. Quasi-cyclic (QC) codes over commutative rings constitute a remarkable generalization of cyclic codes [@Aydin; @Bhaintwal1; @Conan; @Ling2; @Siap2]. More recently, many codes have been constructed over finite fields which meet the best known values of the minimum distance for the given length and dimension [@Aydin; @Siap2]. In [@Abualrub1], Abualrub et al. have studied skew QC codes over finite fields as a generalization of classical QC codes. They have introduced the notion of similar polynomials in skew polynomial rings and shown that parity-check polynomials for skew QC codes are unique up to similarity. They also constructed some skew QC codes with minimum Hamming distances greater than those of the previously best known linear codes with the given parameters. In [@Bhaintwal2], Bhaintwal studied skew QC codes over Galois rings. He gave a necessary and sufficient condition for skew cyclic codes over Galois rings to be free, and presented a distance bound for free skew cyclic codes. Furthermore, he also discussed the sufficient condition for 1-generator skew QC codes to be free over Galois rings. A canonical decomposition and the dual codes of skew QC codes were also given. The notion of generalized quasi-cyclic (GQC) codes over finite fields was introduced by Siap and Kulhan [@Siap1] and some further structural properties of such codes were studied by Esmaeili and Yari [@Esmaeili].
Based on the structural properties of GQC codes, Esmaeili and Yari gave some construction methods of GQC codes and obtained some optimal linear codes over finite fields. In [@Cao1], Cao studied GQC codes of arbitrary length over finite fields. He investigated the structural properties of GQC codes and gave an explicit enumeration of all $1$-generator GQC codes. As a natural generalization, GQC codes over Galois rings were introduced by Cao and structural properties and explicit enumeration of GQC codes were also obtained in [@Cao2]. To the best of our knowledge, however, the problem of studying skew GQC codes over finite fields has not yet been considered. Let $\mathbb{F}_{q}$ be a finite field, where $q=p^m$, $p$ is a prime number and $m$ is a positive integer. The Frobenius automorphism $\theta$ of $\mathbb{F}_{q}$ over $\mathbb{F}_p$ is defined by $\theta (a)=a^p$, $a\in\mathbb{F}_{q}$. The automorphism group of $\mathbb{F}_{q}$ is called the Galois group of $\mathbb{F}_{q}$. It is a cyclic group of order $m$ and is generated by $\theta$. Let $\sigma$ be an automorphism of $\mathbb{F}_{q}$. The *skew polynomial ring* $R=\mathbb{F}_q[x, \sigma]$ is the set of polynomials over $\mathbb{F}_q$, where the addition is defined as the usual addition of polynomials and the multiplication is defined by the following basic rule $$(ax^i)(bx^j)=a\sigma^i(b)x^{i+j},~~a,b\in\mathbb{F}_q.$$ From the definition one can see that $R$ is a non-commutative ring unless $\sigma$ is the identity automorphism. Let $\mid \sigma\mid$ denote the order of $\sigma$ and assume $\mid \sigma\mid=t$. Then there exists a positive integer $d$ such that $\sigma=\theta^d$ and $m=td$. Clearly, $\sigma$ fixes the subfield $\mathbb{F}_{p^d}$ of $\mathbb{F}_q$. Let $Z(\mathbb{F}_q[x,\sigma])$ denote the center of $R$. For $f, g\in R$, $g$ is called a *right divisor* (resp. *left divisor*) of $f$ if there exists $r\in R$ such that $f=rg$ (resp. $f=gr$). In this case, $f$ is called a *left multiple* (resp. *right multiple*) of $g$. Left and right division are defined similarly. Then $\bullet$  If $g, f \in Z(\mathbb{F}_q[x, \sigma])$, then $g\cdot f=f\cdot g$. $\bullet$  Over finite fields, a skew polynomial ring is both a right Euclidean ring and a left Euclidean ring. Let $f, g \in R$. A polynomial $h$ is called a *greatest common left divisor* (gcld) of $f$ and $g$ if $h$ is a left divisor of $f$ and $g$; and if $u$ is another left divisor of $f$ and $g$, then $u$ is a left divisor of $h$. A polynomial $e$ is called a *least common left multiple* (lclm) of $f$ and $g$ if $e$ is a right multiple of $f$ and $g$; and if $v$ is another right multiple of $f$ and $g$, then $v$ is a right multiple of $e$. The *greatest common right divisor* (gcrd) and *least common right multiple* (lcrm) of polynomials $f$ and $g$ are defined similarly. The main aim of this paper is to study the structural properties of skew generalized quasi-cyclic (GQC) codes over finite fields. The rest of this paper is organized as follows. In Section 2, we survey some well known results on skew cyclic codes and give the BCH-type bound for skew cyclic codes. By the factorization theory of ideals, we give the Chinese Remainder Theorem in skew polynomial rings. In Section 3, using the Chinese Remainder Theorem, we give a necessary and sufficient condition for a code to be a skew GQC code. This leads to a canonical decomposition of skew GQC codes.
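To get a concrete feel for the rule $(ax^i)(bx^j)=a\sigma^i(b)x^{i+j}$, here is a minimal sketch (ours; the encoding of $\mathbb{F}_4$ and all names are illustrative) that multiplies skew polynomials over $\mathbb{F}_4=\mathbb{F}_2[t]/(t^2+t+1)$ with $\sigma$ the Frobenius $a\mapsto a^2$, and shows that $x\cdot t\neq t\cdot x$:

```python
# F_4 = {0, 1, t, t+1} encoded as 0, 1, 2, 3
MUL_TABLE = [[0, 0, 0, 0], [0, 1, 2, 3], [0, 2, 3, 1], [0, 3, 1, 2]]
mul = lambda a, b: MUL_TABLE[a][b]
add = lambda a, b: a ^ b              # characteristic 2: addition is XOR
sigma = lambda a: mul(a, a)           # Frobenius a -> a^2, of order 2

def skew_mul(f, g):
    """Product in F_4[x, sigma]; f[i] is the coefficient of x^i."""
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            sb = b
            for _ in range(i):
                sb = sigma(sb)        # (a x^i)(b x^j) = a sigma^i(b) x^{i+j}
            h[i + j] = add(h[i + j], mul(a, sb))
    return h

t = 2
print(skew_mul([0, 1], [t]))          # x * t = sigma(t) x  ->  [0, 3]
print(skew_mul([t], [0, 1]))          # t * x = t x         ->  [0, 2]
```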
In Section 4, we mainly describe some characteristics of $1$-generator GQC codes including parity-check polynomials, dimensions and the minimum Hamming distance bounds. In Section 5, we discuss a special class of skew GQC codes called skew QC codes. [**2 Skew cyclic codes** ]{} Let $\sigma$ be an automorphism of the finite field $\mathbb{F}_q$ and $n$ be a positive integer such that the order of $\sigma$ divides $n$. A linear code $C$ of length $n$ over $\mathbb{F}_q$ is called *skew cyclic code* or *$\sigma$-cyclic code* if for any codeword $(c_0, c_1, \ldots, c_{n-1})\in C$, the vector $(\sigma(c_{n-1}), \sigma(c_0), \ldots, \sigma(c_{n-2}))$ is also a codeword in $C$. In polynomial representation, a linear code of length $n$ over $\mathbb{F}_q$ is a skew cyclic code if and only if it is a *left ideal* of the ring $R/(x^n-1)$, where $(x^n-1)$ denotes the *two-sided ideal* generated by $x^n-1$. In general, if $
--- author: - 'Aimeric Colléaux $^{1,}$ [^1] , Sergio Zerbini $^{2,}$' title: | Modified Gravity Models\ Admitting Second Order Equations of Motion --- **Abstract** : The aim of this paper is to find higher order geometrical corrections to the Einstein-Hilbert action that can lead to only second order equations of motion. The metric formalism is used, and static spherically symmetric and Friedmann-Lemaître space-times are considered, in four dimensions. The FKWC basis is introduced in order to consider all the possible invariant scalars, and both polynomial and non-polynomial gravities are investigated.\ Introduction ============ Most of the equations of motion describing physical effects are second order, that is, we need to specify either an initial and a final position in space-time to describe the dynamics between them, or we need an initial position and velocity to describe how the system will evolve. Concerning General Relativity (GR), for which the gravitational field $g_{\mu\nu}(x)$ is encoded into the geometry of space-time: $$\begin{aligned} ds^2 = g_{\mu\nu}(x) dx^{\mu}dx^{\nu} \,,\end{aligned}$$ the Einstein field equations, describing the dynamics of the geometry, are also second order ones. However, it is well known that two of the simplest solutions of GR, the Schwarzschild metric and the Friedmann-Lemaître one, suffer from the existence of singularities. When one is dealing with ordinary matter, this is a general fact. Furthermore, this theory alone is not able to describe dark energy, even though the inclusion of a suitable cosmological constant is sufficient. But then other problems arise, like the cosmological constant problem [@1] and the coincidence problem. Therefore, one can think about modifying the Einstein equations, in the hope of describing dark energy and curing singularities. In order to do so, one can add higher order invariant scalars, like $R_{\mu\nu}R^{\mu\nu}$, to the Einstein-Hilbert action to have high energy corrections that could describe what really happens around the singularities [@2; @3]. With regard to the dark energy issue, see [@4; @5; @6; @7; @8]. Within a higher order modified gravity model, the equations of motion will no longer be second order ones: there will be more than two initial conditions to specify in order to find the dynamics; so, to keep the physical sense of what an equation of motion is, one needs to introduce new fields to which these additional initial conditions apply, such that, at the end, the theory involves two dynamical fields, with second order equations of motion for both of them. By doing so, we face an important problem, that is the presence of Ostrogradsky instabilities (see, for example, [@4]): the new field defined in this way can carry negative kinetic energy such that the Hamiltonian of the theory is not bounded from below and can reach arbitrarily negative energies, which would make this theory impossible to quantize in a satisfying way [@4]. There are no general rules to avoid this problem, although a well known class of modified gravity, equivalent to GR plus a scalar field, the $f \big( R \big)$ one, might not suffer from this problem [@9]. Moreover, with a new field involved in the dynamics of gravity, the latter would not be a fully geometrical theory anymore, which is yet one of the most important implications of General Relativity. Nevertheless, it is possible to find second order equations of motion from the addition into the Einstein-Hilbert action of higher order scalars [@10].
In this way, the Ostrogradsky instability may be avoided and no additional fields are involved in the dynamics, so these corrections can be said to be “geometrical” ones. Modifications of this kind are the Lovelock scalars, but it turns out that in four dimensions, the only higher order scalar made of contractions of curvature tensors (only) that leads to second order equations [@10] is the so-called Gauss-Bonnet invariant: $$\begin{aligned} \mathcal{E}_4 = R^2 - 4 R_{\alpha \beta} R^{\alpha \beta} + R_{\alpha\beta\gamma}^{\ \ \ \ \delta}R^{\alpha\beta\gamma}_{\ \ \ \ \delta} ,\end{aligned}$$ which is however a total derivative in four dimensions [@11], and therefore does not contribute to the equations of motion: $$\begin{aligned} \sqrt{-g}\mathcal{E}_4 = \partial_{\alpha} \Bigg( -\sqrt{-g} \, \epsilon^{\alpha \beta \gamma \delta} \; \epsilon_{\rho \sigma}^{\ \ \mu \nu} \Gamma_{\mu \beta}^{\ \ \ \rho} \Big( \frac{1}{2} R_{\delta\gamma\nu}^{\ \ \ \ \sigma} - \frac{1}{3} \Gamma_{\lambda \gamma}^{\ \ \ \sigma}\Gamma_{\nu \delta}^{\ \ \ \lambda} \Big) \Bigg).\end{aligned}$$ This result is background independent, which means that if we want to find a second order correction for all possible metrics in four dimensions, then this unique term does not contribute to the dynamics. That is why, in order nonetheless to find significant corrections to General Relativity that could cure some of its problems, we will search for additional terms that give second order equations only for some specific metrics: the most studied ones, which suffer from singularities, namely the FLRW space-time describing the large scale dynamics of the universe, and the static spherically symmetric space-time describing neutral non-rotating stars and black holes. We note however that our way to find second order corrections is not at all the only possible one. The metric formulation of GR, in which the equations of motion are found by varying the action with respect to the metric field only, is not the only one. In the spirit of gauge theories, one can also vary the action with respect to the connections and independently with respect to the metric. Then, it is possible to find second order corrections with no a priori background structures [@12]. In some sense, our approach is similar to Horndeski’s theory, which is the most general one leading to second order equations of motion for gravity described by a metric $g_{\mu\nu}$ coupled with a scalar field $\phi $ and its first two derivatives [@13]. This theory involves non-linear higher order derivatives of the scalar field, like $\big( \Box \phi \big)^2 $, and yet leads to second order equations. Moreover, if all the matter fields are minimally coupled with the same metric $\widetilde{g}_{\mu \nu} \big( g_{\mu\nu} , \phi \big)$, one can expect the equivalence principle to hold [@14], which is also a fundamental feature of GR that one wants to keep. Briefly, the outline of the paper is the following. First we consider all the independent scalar invariants built from the metric field and its derivatives, for example of the form $\big( \Box R \big)^2$, and see if some linear combinations of them, or, in the spirit of [@15] and [@16], some roots of these combinations, could lead to second order differential equations for the FLRW space-time and the static spherically symmetric one. The bases of independent scalars that are needed have been presented in [@17], but for specific backgrounds, we will show that these bases may be reduced.
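The total-derivative property quoted above can be checked explicitly on a spatially flat FLRW background. The following sympy sketch is an illustration added for orientation, not part of the original analysis; the closed-form curvature invariants for flat FLRW (in cosmic time, with Hubble rate $H=\dot a/a$) are standard textbook expressions that are simply quoted here:

```python
import sympy as sp

t = sp.symbols('t')
a = sp.Function('a', positive=True)(t)
H = sp.diff(a, t) / a            # Hubble rate
Hd = sp.diff(H, t)

# Standard curvature invariants for spatially flat FLRW (signature -,+,+,+):
R = 6 * (Hd + 2 * H**2)                          # Ricci scalar
Ric2 = 12 * (Hd**2 + 3 * H**2 * Hd + 3 * H**4)   # R_{mu nu} R^{mu nu}
Riem2 = 12 * ((Hd + H**2)**2 + H**4)             # Kretschmann scalar

# Gauss-Bonnet combination E_4 = R^2 - 4 Ric^2 + Riem^2
E4 = sp.simplify(R**2 - 4 * Ric2 + Riem2)
print(E4)   # -> 24 H^2 (H' + H^2) = 24 a'' a'^2 / a^3

# sqrt(-g) = a^3 for flat FLRW; check the total-derivative property:
density = sp.simplify(a**3 * E4)
candidate = sp.diff(8 * sp.diff(a, t)**3, t)     # d/dt (8 a'^3)
print(sp.simplify(density - candidate))          # -> 0
```

The last line, confirming that $a^3 \mathcal{E}_4$ is a total time derivative, is the FLRW shadow of the four-dimensional identity displayed above.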
Furthermore, we will start to exhibit, order by order for FLRW, the existence of polynomial and non-polynomial gravity models that give second order equations and polynomial corrections to the Friedmann equation. Finally, we will investigate the static spherically symmetric space-times. Order 6 FKWC-basis ================== The basis of all independent invariant geometrical scalars involving $2n$ derivatives of the metric is separated into different classes, depending on how many covariant derivatives act on curvature tensors. For order 6 ($n=3$), the first class, which does not explicitly involve covariant derivatives, from $\mathcal{L}_1$ to $\mathcal{L}_8$, is denoted by $\mathcal{R}_{6,3}^0$: these scalars are built with six derivatives of the metric and by the contraction of 3 curvature tensors. The two other classes, $\mathcal{R}_{\left\{ 2,0 \right\}}^0$ and $\mathcal{R}_{\left\{ 1,1 \right\}}^0$, contain scalars that involve, respectively, a curvature tensor contracted with two covariant derivatives acting on another curvature tensor (from $\mathscr{L}_1$ to $\mathscr{L}_4$), and two covariant derivatives, each acting on one curvature tensor (from $\mathscr{L}_5$ to $\mathscr{L}_8$): $\left\{ \begin{array}{l} \mathcal{L}_1=R^{\mu\nu\alpha\beta}R_{\alpha\beta\sigma\rho}R^{\sigma\rho}_{\;\,\;\,\,\mu\nu} \quad\;\;\;\; , \;\;\quad \mathcal{L}_2=R^{\mu\nu}_{\;\,\;\,\,\alpha\beta}R^{\alpha\sigma}_{\;\,\;\,\,\nu\rho}R^{\beta\rho}_{\;\,\;\,\,\mu\sigma} \\ \mathcal{L}_3=R^{\mu\nu\alpha\beta}R_{\alpha\beta\nu\sigma}R^{\
{ "pile_set_name": "ArXiv" }
--- abstract: 'We present a Monte Carlo simulation of the perturbative Quantum Chromodynamics (pQCD) shower developing after a hard process embedded in a heavy-ion collision. The main assumption is that the cascade of branching partons traverses a medium which (consistent with standard radiative energy loss pictures) is characterized by a local transport coefficient $\hat{q}$ that measures the virtuality per unit length transferred to a parton propagating in this medium. This increase in parton virtuality alters the development of the shower and in essence leads to extra induced radiation and hence a softening of the momentum distribution in the shower. After hadronization, this leads to the concept of a medium-modified fragmentation function. On the level of observables, this is manifest as the suppression of high transverse momentum ($P_T$) hadron spectra. We simulate the soft medium created in heavy-ion collisions by a 3-d hydrodynamical evolution and average the medium-modified fragmentation function over this evolution in order to compare with data on single inclusive hadron suppression and extract the $\hat{q}$ which characterizes the medium. Finally, we discuss possible uncertainties of the model formulation and argue that the data in the soft momentum region show evidence of qualitatively different physics which presumably cannot be described by a medium-modified parton shower.' author: - Thorsten Renk title: 'Parton shower evolution in a 3-d hydrodynamical medium' --- Introduction ============ Jet quenching, i.e. the energy loss of hard partons created in the first moments of a heavy ion collision due to interactions with the surrounding soft medium, has long been regarded as a promising tool to study properties of the soft medium [@Jet1; @Jet2; @Jet3; @Jet4; @Jet5; @Jet6]. The basic idea is to study the changes induced by the medium to a hard process which is well-known from p-p collisions. A number of observables are available for this purpose, among them suppression in single inclusive hard hadron spectra $R_{AA}$ [@PHENIX_R_AA], the suppression of back-to-back correlations [@Dijets1; @Dijets2] or single hadron suppression as a function of the emission angle with respect to the reaction plane [@PHENIX-RP]. Calculations have now reached a high degree of sophistication. Different energy loss formalisms are used together with a 3-d hydrodynamical description of the medium [@Hydro3d] in central and noncentral collisions to determine the pathlength dependence of energy loss [@HydroJet1; @HydroJet2; @HydroJet3; @HydroJet4]. Some of these models have also been employed successfully to describe the suppression of hard back-to-back hadron correlations [@Dihadron1; @Dihadron2; @Dihadron3]. The existing formulations of energy loss can roughly be divided into two groups: some compute the energy loss from a leading parton [@Jet2; @Jet5; @QuenchingWeights], whereas others compute an in-medium fragmentation function by following the evolution of a parton shower [@HydroJet2; @HBP]. Recently, the Monte Carlo (MC) code JEWEL [@JEWEL] has also been developed, which simulates the evolution of a parton shower in the medium in a non-analytic way. This model builds on the success of MC shower generators like PYTHIA [@PYTHIA; @PYSHOW] or HERWIG [@HERWIG] for showers in vacuum. In the present work, we follow an approach which is very similar to the one taken with JEWEL, i.e. we modify an MC code for vacuum showers to account for medium effects.
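Before turning to the differences from JEWEL, the basic idea, namely that extra virtuality picked up in the medium induces additional branchings and softens the parton spectrum, can be illustrated with a deliberately oversimplified toy shower. This is a hypothetical sketch for orientation only, not the PYSHOW-based algorithm described in section \[S-MMFF\]; the splitting rule and parameter values are made up:

```python
import random

def shower(E, Q2, Q2_min=1.0, qhat=0.0, length=0.0):
    """Toy virtuality-ordered shower; returns the list of final parton energies.

    Each propagating parton picks up extra virtuality qhat*length from the
    medium before branching, mimicking medium-induced radiation.
    """
    Q2 = Q2 + qhat * length              # medium feeds virtuality to the parton
    if Q2 <= Q2_min or E * E < Q2:
        return [E]                       # parton can no longer branch
    z = random.uniform(0.1, 0.9)         # crude momentum-sharing fraction
    # daughters branch at reduced virtuality scales
    return (shower(z * E, z * z * Q2, Q2_min, qhat, length) +
            shower((1 - z) * E, (1 - z) ** 2 * Q2, Q2_min, qhat, length))

random.seed(1)
vac = [len(shower(100.0, 100.0)) for _ in range(2000)]
med = [len(shower(100.0, 100.0, qhat=5.0, length=4.0)) for _ in range(2000)]
print(sum(vac) / len(vac), sum(med) / len(med))  # medium -> more, softer partons
```

Even this caricature reproduces the qualitative statement above: switching on $\hat{q}$ raises the final multiplicity and hence softens the momentum distribution in the shower.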
However, while JEWEL so far chiefly implements elastic scattering with medium constituents and includes radiative energy loss only in a schematic way, we rather wish to focus on radiative energy loss in the following. This is based on the observation that elastic energy loss has the wrong pathlength dependence to account properly for the suppression of back-to-back correlations [@Elastic] and hence cannot be a large contribution to the total energy loss of light quarks. In particular, we assume that partons traversing the medium pick up additional virtuality which induces additional branchings in the shower, thus softening the parton spectrum, but that there is no transfer of longitudinal momentum from the hard parton to the scatterers in the medium. This work is organized as follows: First, we outline the computation of hadron spectra in the formalism, starting from the hard process. The key ingredient of our model, the medium-modified fragmentation function (MMFF) is described in detail in section \[S-MMFF\] where we outline the MC simulation of showers in vacuum and present how the algorithm is modified to simulate showers in medium. We present various observables which show the expected modification of the jet by the medium. In section \[S-Data\] we present a comparison of the suppression calculated using the MMFF in a 3-d hydrodynamical model for the medium evolution [@Hydro3d] with the measured nuclear suppression in central AuAu collisions at 200 AGeV and use this result to extract an estimate for the medium transport coefficient $\hat{q}$. We follow with a discussion of the model uncertainties. Finally the limits of the approach in the light of the patterns seen in semi-hard and soft correlations with a hard trigger hadron are discussed. The hard process ================ We aim at a description of the production of high $P_T$ hadrons both in p-p and in Au-Au collisions. The underlying hard process can be computed in leading order (LO) pQCD. We assume in the following that the actual hard process is not influenced by the fact that a soft medium is created in Au-Au collisions, but that the subsequent parton shower (which extends to timescales at which a medium is relevant) is modified by the presence of such a medium, whereas hadronization itself takes place sufficiently away from the medium such that it can be assumed to take place as in vacuum. In this section, we describe the computation of the hard process itself. The production of two hard back to back partons $k,l$ with momentum $p_T$ in a p-p or A-A collision in LO pQCD is described by $$\label{E-2Parton} \frac{d\sigma^{AB\rightarrow kl +X}}{d p_T^2 dy_1 dy_2} \negthickspace = \sum_{ij} x_1 f_{i/A} (x_1, Q^2) x_2 f_{j/B} (x_2,Q^2) \frac{d\hat{\sigma}^{ij\rightarrow kl}}{d\hat{t}}$$ where $A$ and $B$ stand for the colliding objects (protons or nuclei) and $y_{1(2)}$ is the rapidity of parton $k(l)$. The distribution function of a parton type $i$ in $A$ at a momentum fraction $x_1$ and a factorization scale $Q \sim p_T$ is $f_{i/A}(x_1, Q^2)$. The distribution functions are different for the free protons [@CTEQ1; @CTEQ2] and protons in nuclei [@NPDF; @EKS98]. The fractional momenta of the colliding partons $i$, $j$ are given by $ x_{1,2} = \frac{p_T}{\sqrt{s}} \left(\exp[\pm y_1] + \exp[\pm y_2] \right)$. Expressions for the pQCD subprocesses $\frac{d\hat{\sigma}^{ij\rightarrow kl}}{d\hat{t}}(\hat{s}, \hat{t},\hat{u})$ as a function of the parton Mandelstam variables $\hat{s}, \hat{t}$ and $\hat{u}$ can be found e.g. 
in [@pQCD-Xsec]. Inclusive production of a parton flavour $f$ at rapidity $y_f$ is found by integrating over either $y_1$ or $y_2$ and summing over appropriate combinations of partons, $$\label{E-1Parton} \begin{split} \frac{d\sigma^{AB\rightarrow f+X}}{dp_T^2 dy_f} = \int d y_2 \sum_{\langle ij\rangle, \langle kl \rangle} \frac{1}{1+\delta_{kl}} \frac{1}{1+\delta_{ij}} &\Bigg\{ x_1 f_{i/A}(x_1,Q^2) x_2 f_{j/B}(x_2,Q^2) \bigg[ \frac{d\sigma^{ij\rightarrow kl}}{d\hat{t}}(\hat{s}, \hat{t},\hat{u}) \delta_{fk} + \frac{d\sigma^{ij\rightarrow kl}}{d\hat{t}}(\hat{s}, \hat{u},\hat{t}) \delta_{fl} \bigg]\\ +&x_1 f_{j/A}(x_1,Q^2) x_2 f_{i/B}(x_2,Q^2) \bigg[ \frac{d\sigma^{ij\rightarrow kl}}{d\hat{t}}(\hat{s}, \hat{u},\hat{t}) \delta_{fk}
{ "pile_set_name": "ArXiv" }
--- abstract: 'Disentanglement is the process which transforms a state $\rho$ of two subsystems into an unentangled state, while not affecting the reduced density matrices of each of the two subsystems. Recently Terno [@Terno98] showed that an arbitrary state cannot be disentangled into a [*tensor product*]{} of its reduced density matrices. In this letter we present various novel results regarding disentanglement of states. Our main result is that there are sets of states which cannot be successfully disentangled (not even into a separable state). Thus, we prove that a universal disentangling machine cannot exist.' author: - 'Tal Mor[^1]' title: On the Disentanglement of States --- [2]{} Entanglement plays an important role in quantum physics [@Peres93]. Due to its peculiar non-local properties, entanglement is one of the main pillars of non-classicality. The creation of entanglement and the destruction of entanglement via general operations are still under extensive study [@entanglement]. Here we concentrate on the process of disentanglement of states. For simplicity, we focus on qubits in this letter, and on the disentanglement of two subsystems. Let there be two two-level systems “X” and “Y”. The state of each such system is called a quantum bit (qubit). A pure state which is a tensor product of two qubits can always be written as $|0({\rm X})0({\rm Y})\rangle$ by an appropriate choice of basis, $|0\rangle$ and $|1\rangle $ for each qubit. For convenience, we drop the index of the subsystem (whenever it is possible), and order them so that “X” is at the left side. By an appropriate choice of the basis $|0\rangle$ and $|1\rangle$, and using the Schmidt decomposition (see [@Peres93]), an entangled pure state of two qubits can always be written as $ | \psi \rangle = \cos \phi |00\rangle + \sin \phi |11\rangle $ or, using a density matrix notation $\rho = |\psi\rangle \langle \psi|$, $$\rho = [ \cos \phi |00\rangle + \sin \phi |11\rangle ] [ \cos \phi \langle 00| + \sin \phi \langle 11|] \ .$$ The reduced density matrix of each of the qubits is $\rho_{\rm X} = {\rm Tr}_{\rm Y} [\rho({\rm XY})] $ and $\rho_{\rm Y} = {\rm Tr}_{\rm X} [\rho({\rm XY})] $. In the basis used for the Schmidt decomposition the two reduced density matrices are $$\label{reduced-state} \rho_{\rm X} = \rho_{\rm Y} = \left( \begin{array}{cc} \cos^2\phi & 0 \\ 0 & \sin^2 \phi \end{array} \right) \ .$$ Following Terno [@Terno98] and Fuchs [@Fuchs], let us provide the following two definitions (note that the second is an interesting special case of the first): [*Definition*]{}.— Disentanglement is the process that transforms a state of two (or more) subsystems into an unentangled state (in general, a mixture of product states) such that the reduced density matrices of each of the subsystems are unaffected. [*Definition*]{}.— Disentanglement into a tensor product state is the process that transforms a state of two (or more) subsystems into a tensor product of the two reduced density matrices. We notice that, according to these definitions, when a successful disentanglement is applied to any pure product state, the state must be left unmodified. That is, $$\label{pure-ps} |00\rangle \longrightarrow |00\rangle$$ (in an appropriate basis). This fact proved very useful in the analysis we report here. The main goal of this letter is to show that a universal disentangling machine cannot exist. A universal disentangling machine is a machine that could disentangle any state which is given to it as an input.
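The definitions can be made concrete with a short numerical sketch (an illustration added here, not part of the proofs below): starting from the Schmidt-form state above, one computes the reduced density matrices by partial tracing and checks that the tensor-product output $\rho_{\rm X}\otimes\rho_{\rm Y}$ indeed leaves both of them unaffected:

```python
import numpy as np

phi = 0.3
# Schmidt-form entangled pure state cos(phi)|00> + sin(phi)|11>
psi = np.zeros(4)
psi[0], psi[3] = np.cos(phi), np.sin(phi)
rho = np.outer(psi, psi)                  # rho = |psi><psi| on X (x) Y

def reduced(rho, keep):
    """Partial trace of a two-qubit density matrix; keep=0 gives rho_X."""
    r = rho.reshape(2, 2, 2, 2)           # indices: x, y, x', y'
    return (np.trace(r, axis1=1, axis2=3) if keep == 0
            else np.trace(r, axis1=0, axis2=2))

rho_X, rho_Y = reduced(rho, 0), reduced(rho, 1)
print(np.round(rho_X, 3))                 # diag(cos^2 phi, sin^2 phi), Eq. (2)

# The tensor-product disentangler output rho_X (x) rho_Y is unentangled
# and, by construction, leaves both reduced matrices untouched:
out = np.kron(rho_X, rho_Y)
assert np.allclose(reduced(out, 0), rho_X)
assert np.allclose(reduced(out, 1), rho_Y)
```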
In order to prove that such a machine cannot exist, it is enough to find [*one*]{} set of states that cannot be disentangled if the data (regarding which state is used) is not available. To analyze the process of disentanglement, consider the following experiment involving two subsystems “X” and “Y”, and a sender who sends [*both systems*]{} to the receiver who wishes to disentangle the state of these two subsystems: Let the sender (Alice) and the disentangler (Eve) define a finite set of states $|\psi_i\rangle$; let Alice choose one of the states at random, and let it be the input of the disentangling machine designed by Eve. Eve does not get from Alice the data regarding [*which*]{} of the states Alice chose, so Eve’s aim is to design a machine that will succeed in disentangling any of the possible states $|\psi_i\rangle$. In the same sense that an arbitrary state cannot be cloned (a universal cloning machine does not exist [@WZ82]), it was recently shown by Terno [@Terno98] that an arbitrary state cannot be disentangled into a tensor product of its reduced density matrices. Note that this novel result of [@Terno98] proves that [*universal disentanglement into product states*]{} is impossible, and it leaves open the more general question of whether a [*universal disentanglement*]{} is impossible (that is, disentanglement into separable states). We extend the investigation of the process of disentanglement well beyond Terno’s novel analysis in several ways. First, we find a larger class (than the one found by Terno) of states which cannot be disentangled into product states. Then, we show that there are non-trivial sets of states that [*can*]{} be disentangled. In particular, we present a set of states that cannot be disentangled into tensor product states, [*but*]{} can be disentangled into separable states. Finally, we present our most important result: a set of states that [*cannot be disentangled*]{}. The existence of such a set of states proves that a universal disentangling machine cannot exist. Using the terminology of [@WZ82] we can say that our letter shows that [*a single quantum cannot be disentangled*]{}. Consider a set of states containing only one state. Since the state is known, obviously it can be disentangled. For example, it can be replaced by the appropriate tensor product state. We first prove that there are infinitely many sets of states that [*cannot*]{} be disentangled into product states. Our proof here follows from Terno’s method, with the addition of using the Schmidt decomposition to analyze a larger class of states.
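Numerically, the Schmidt decomposition used below amounts to a singular value decomposition of the coefficient matrix of the two-qubit state. A minimal sketch (illustrative only, with arbitrarily chosen angles):

```python
import numpy as np

def schmidt(psi):
    """Schmidt coefficients of a two-qubit pure state via SVD."""
    c = psi.reshape(2, 2)        # 4-vector -> coefficient matrix c_{xy}
    return np.linalg.svd(c, compute_uv=False)

# a state entangled in a rotated basis still has two Schmidt coefficients
theta, phi = 0.4, 0.3
v0 = np.array([np.cos(theta), np.sin(theta)])
v1 = np.array([np.sin(theta), -np.cos(theta)])
psi = np.cos(phi) * np.kron(v0, v0) + np.sin(phi) * np.kron(v1, v1)
print(schmidt(psi))              # -> [cos(0.3), sin(0.3)]
```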
The most general form of two entangled states can always be presented (by an appropriate choice of bases) as: $$\begin{aligned} \label{the-states} |\psi_0 \rangle &=& \cos \phi_0 |00\rangle + \sin \phi_0 |11\rangle \nonumber \\ |\psi_1 \rangle &=& \cos \phi_1 |0'0'\rangle + \sin \phi_1 |1'1'\rangle \ .\end{aligned}$$ To prove that there are states for which disentanglement into tensor product states is impossible, let us restrict ourselves to the simpler subclass $$\begin{aligned} |\psi_0 \rangle &=& \cos \phi |00\rangle + \sin \phi |11\rangle \nonumber \\ |\psi_1 \rangle &=& \cos \phi |0'0'\rangle + \sin \phi |1'1'\rangle \ .\end{aligned}$$ There exists some basis $$|0''\rangle = {1 \choose 0} ; |1''\rangle = {0 \choose 1}$$ such that the bases vectors $|0\rangle;|1\rangle$ and $|0'\rangle;|1'\rangle$ become $$|0\rangle = {\cos \theta \choose \sin \theta} ; |1\rangle = {\sin \theta \choose -\cos \theta} \ ,$$ and $$|0'\rangle = {\cos \theta \choose -\sin \theta} ; |1'\rangle = {\sin \theta \choose \cos \theta}$$ respectively, in that basis. The states (\[the-states\]) are now $$\begin{aligned} |\psi_0 \rangle &=& c_\phi {c_\theta \choose s_\theta} {c_\theta \choose s_\theta} + s_\phi {s_\theta \choose - c_\theta} {s_\theta \choose - c_\theta} \nonumber \\ |\psi_1 \rangle &=& c_\phi {c_\theta \choose - s_\theta} {c_\theta \choose - s_\theta} + s_\phi {s_\theta \choose c_\theta} {s_\theta \choose c_\theta} \ ,\end{aligned}$$ with $c_\phi \equiv \cos \phi$, etc. The overlap of the two states is ${\rm OL}= \langle \psi_0 | \psi_1 \rangle =
{ "pile_set_name": "ArXiv" }
--- abstract: 'In a D-brane model of space-time foam, there are contributions to the dark energy that depend on the D-brane velocities and on the density of D-particle defects. The latter may also reduce the speeds of photons [*linearly*]{} with their energies, establishing a phenomenological connection with astrophysical probes of the universality of the velocity of light. Specifically, the cosmological dark energy density measured at the present epoch may be linked to the apparent retardation of energetic photons propagating from nearby AGNs. However, this nascent field of ‘D-foam phenomenology’ may be complicated by a dependence of the D-particle density on the cosmological epoch. A reduced density of D-particles at redshifts $z \sim 1$ - a ‘D-void’ - would increase the dark energy while suppressing the vacuum refractive index, and thereby might reconcile the AGN measurements with the relatively small retardation seen for the energetic photons propagating from GRB 090510, as measured by the Fermi satellite.' author: - John Ellis - 'Nick E. Mavromatos' - 'Dimitri V. Nanopoulos' title: 'D-Foam Phenomenology: Dark Energy, the Velocity of Light and a Possible D-Void' --- Introduction to D-Phenomenology =============================== The most promising framework for a quantum theory of gravity is string theory, particularly in its non-perturbative formulation known as M-theory. This contains solitonic configurations such as D-branes [@polchinski], including D-particle defects in space-time. One of the most challenging problems in quantum gravity is the description of the vacuum and its properties. At the classical level, the vacuum may be analyzed using the tools of critical string theory. However, we have argued [@emn1] that a consistent approach to quantum fluctuations in the vacuum, the so-called ‘space-time foam’, needs the tools of non-critical string theory. As an example, we have outlined an approach to this problem in which D-branes and D-particles play an essential role [@Dfoam2; @Dfoam]. Within this approach, we have identified two possible observable consequences, which may give birth to an emergent subject of ‘D-foam phenomenology’. One possible consequence is a linear energy-dependence of the velocity of light [@aemn; @nature; @Farakos] due to the interactions of photons with D-particle defects, which would also depend linearly on the space-time density of these defects [@emnnewuncert; @mavro_review2009]. Another possible consequence is a contribution to the vacuum energy density (dark energy) that depends on the velocities of the D-branes and, again, the density of D-particle defects [@emninfl]. Therefore, between them, measurements of the dark energy and of the velocities of energetic photons could in principle constrain the density of D-particle defects and the velocities of D-branes. The experimental value of the present dark energy density $\Lambda$ is a fraction $\Omega_\Lambda = 0.73 (3)$ of the critical density $1.5368 (11) \times 10^{-5} h^2$GeV/cm$^3$, where $h = 0.73 (3)$, and the matter density fraction $\Omega_M = 0.27 (3)$. The available cosmological data are consistent with the dark energy density being constant, but some non-zero redshift dependence of $\Lambda$ cannot be excluded, and would be an interesting observable in the context of D-foam phenomenology, as we discuss later. The observational status of a possible linear energy dependence of the velocity of light is less clear. As shown in Fig. 
\[fig:data\], observations of high-energy emissions from AGN Mkn 501 [@MAGIC2] and PKS 2155-304 [@hessnew] are compatible with photon velocities $$v \; = \; c \times \left( 1 - \frac{E}{M_{QG}} \right) , \label{linear}$$ where $\Delta t/E_\gamma = 0.43 (0.19) \times K(z)$ s/GeV, corresponding to $M_{QG} =(0.98^{+0.77}_{-0.30}) \times 10^{18}$ GeV [@emnnewuncert2]. This range is also compatible with Fermi satellite observations of GRB 09092B [@grb09092B], from which at least one high-energy photon arrived significantly later than those of low energies, and of GRB 080916c [@grbglast]. On the other hand, as also seen in Fig. \[fig:data\], Fermi observations of GRB 090510 [@grb090510] seem to allow only much smaller values of the retardation $\Delta t$, and hence only values of $M_{QG} > M_P = 1.22 \times 10^{19}$ GeV. However, these data probe different redshift ranges. In this first combined exploration of D-foam phenomenology, we start by reviewing the general connection between dark energy and a vacuum refractive index in the general framework of D-branes moving through a gas of D-particle defects in a 10-dimensional space. As we discuss, there are various contributions to the dark energy that depend in general on the density of defects and on the relative velocity of the D-branes. On the other hand, the magnitude of the vacuum refractive index (\[linear\]) is proportional to the density of D-particle defects. We then discuss the ranges of D-brane velocity and D-particle density that are compatible with the measurements of $\Lambda$ and delays in the arrivals of photons from AGN Mkn 501 and PKS 2155-304. As seen in Fig. \[fig:data\], these AGNs are both at relatively low redshift $z$, where the experimental measurement of $\Lambda$ has been made, so the same D-particle density is relevant to the two measurements. However, it is not yet clear whether this value of $\Lambda$, and hence the same D-particle density, also applied when $z \sim 1$. If the density of D-particles was suppressed when $z \sim 1$ - a [*D-void*]{} - this could explain [@mavro_review2009] the much weaker energy dependence of the velocity of light allowed by the Fermi observations of GRB 090510, which has a redshift of $0.903 (3)$, as also seen in Fig. \[fig:data\]. In this case, the value of $\Lambda$ should also have varied at that epoch - a [*clear experimental prediction of this interpretation of the data*]{}. On the other hand, only a very abrupt resurgence of the D-particle density could explain the retardation seen by Fermi in observations of GRB 09092b. Alternatively, this retardation could be due to source effects - which should in any case also be allowed for when analyzing the retardations seen in emissions from other sources. A D-Brane Model of Space-Time Foam and Cosmology ================================================ As a concrete framework for D-foam phenomenology, we use the model illustrated in the left panel of Fig. \[fig:recoil\] [@Dfoam2; @Dfoam]. In this model, our Universe, perhaps after appropriate compactification, is represented as a Dirichlet three-brane (D3-brane), propagating in a bulk space-time punctured by D-particle defects [^1]. As the D3-brane world moves through the bulk, the D-particles cross it. To an observer on the D3-brane the model looks like ‘space-time foam’ with defects ‘flashing’ on and off as the D-particles cross it: this is the structure we term ‘D-foam’. As shown in the left panel of Fig. 
\[fig:recoil\], matter particles are represented in this scenario by open strings whose ends are attached to the D3-brane. They can interact with the D-particles through splitting and capture of the strings by the D-particles, and subsequent re-emission of the open string state, as illustrated in the right panel of Fig. \[fig:recoil\]. This set-up for D-foam can be considered either in the context of type-IIA string theory [@emnnewuncert], in which the D-particles are represented by point-like D0-branes, or in the context of the phenomenologically more realistic type-IIB strings [@li], in which case the D-particles are modelled as D3-branes compactified around spatial three-cycles (in the simplest scenario), since the theory admits no D0-branes. For the time being, we work in the type-IIA framework, returning later to the type-IIB version of D-foam phenomenology. ![*Left: schematic representation of a generic D-particle space-time foam model, in which matter particles are treated as open strings propagating on a D3-brane, and the higher-dimensional bulk space-time is punctured by D-particle defects. Right: details of the process whereby an open string state propagating on the D3-brane is captured by a D-particle defect, which then recoils. This process involves an intermediate composite state that persists for a period $\delta t \sim \sqrt{\alpha '} E$, where $E$ is the energy of the incident string state, which distorts the surrounding space time during the scattering, leading to an effective refractive index but *not* birefringence.*[]{data-label="fig:reco
{ "pile_set_name": "ArXiv" }
--- abstract: 'Let $2\leq m \leqs n$ and $q \in (1,\infty)$; we denote by $W^mL^{\frac nm,q}(\mathbb H^n)$ the Lorentz–Sobolev space of order $m$ in the hyperbolic space $\mathbb H^n$. In this paper, we establish the following Adams inequality in the Lorentz–Sobolev space $W^m L^{\frac nm,q}(\mathbb H^n)$ $$\sup_{u\in W^mL^{\frac nm,q}(\mathbb H^n),\, \|\nabla_g^m u\|_{\frac nm,q}\leq 1} \int_{\mathbb H^n} \Phi_{\frac nm,q}\big(\beta_{n,m}^{\frac q{q-1}} |u|^{\frac q{q-1}}\big) dV_g \leqs \infty$$ for $q \in (1,\infty)$ if $m$ is even, and $q \in (1,n/m)$ if $m$ is odd, where $\beta_{n,m}^{q/(q-1)}$ is the sharp exponent in the Adams inequality under the Lorentz–Sobolev norm in the Euclidean space. To our knowledge, much less is known about the Adams inequality under the Lorentz–Sobolev norm in the hyperbolic spaces. We also prove an improved Adams inequality under the Lorentz–Sobolev norm provided that $q\geq 2n/(n-1)$ if $m$ is even and $2n/(n-1) \leq q \leq \frac nm$ if $m$ is odd, $$\sup_{u\in W^mL^{\frac nm,q}(\mathbb H^n),\, \|\na_g^m u\|_{\frac nm,q}^q -\lam \|u\|_{\frac nm,q}^q \leq 1} \int_{\mathbb H^n} \Phi_{\frac nm,q}\big(\beta_{n,m}^{\frac q{q-1}} |u|^{\frac q{q-1}}\big) dV_g \leqs \infty$$ for any $0\leqs \lambda \leqs C(n,m,n/m)^q$ where $C(n,m,n/m)^q$ is the sharp constant in the Lorentz–Poincaré inequality. Finally, we establish a Hardy–Adams inequality in the unit ball when $m\geq 3$, $n\geq 2m+1$ and $q \geq 2n/(n-1)$ if $m$ is even and $2n/(n-1) \leq q \leq n/m$ if $m$ is odd $$\sup_{u\in W^mL^{\frac nm,q}(\mathbb H^n),\, \|\na_g^m u\|_{\frac nm,q}^q -C(n,m,\frac nm)^q \|u\|_{\frac nm,q}^q \leq 1} \int_{\mathbb B^n} \exp\big(\beta_{n,m}^{\frac q{q-1}} |u|^{\frac q{q-1}}\big) dx \leqs \infty.$$' author: - Van Hoang Nguyen title: 'The sharp Adams type inequalities in the hyperbolic spaces under the Lorentz-Sobolev norms' --- [^1] [^2] [^3] Introduction ============ It is well known that the Sobolev embedding theorems play important roles in analysis, geometry, partial differential equations, etc. Let $m\geq 1$; we traditionally use the notation $$\na^m = \begin{cases} \Delta^{\frac m2} &\mbox{if $m$ is even,}\\ \na \Delta^{\frac{m-1}2} &\mbox{if $m$ is odd} \end{cases}$$ to denote the $m$-th derivatives. For a bounded domain $\Om\subset \R^n, n\geq 2$ and $1\leq p \leqs \infty$, we denote by $W^{m,p}_0(\Om)$ the usual Sobolev space, which is the completion of $C_0^\infty(\Om)$ under the Dirichlet norm $\|\na^m u\|_{L^p(\Om)} = \Big(\int_\Om |\na^m u|^p dx \Big)^{\frac1p}$. The Sobolev inequality asserts that $W^{m,p}_0(\Om) \hookrightarrow L^q(\Om)$ for any $q \leq \frac{np}{n-mp}$ provided $mp \leqs n$. However, in the limiting case $mp = n$ the embedding $W^{m,\frac nm}_0(\Om) \hookrightarrow L^\infty(\Om)$ fails. In this situation, the Moser–Trudinger inequality and Adams inequality are perfect replacements. The Moser–Trudinger inequality was proved independently by Yudovic [@Yudovic1961], Pohozaev [@Pohozaev1965] and Trudinger [@Trudinger67]. This inequality was then sharpened by Moser [@Moser70] in the following form $$\label{eq:Moserineq} \sup_{u\in W^{1,n}_0(\Om), \|\nabla u\|_{L^n(\Om)} \leq 1} \int_\Om e^{\alpha |u|^{\frac n{n-1}}} dx \leqs \infty$$ for any $\al \leq \al_{n}: = n \om_{n-1}^{\frac 1{n-1}}$ where $\om_{n-1}$ denotes the surface area of the unit sphere in $\R^n$. Furthermore, the inequality is sharp in the sense that the supremum above will be infinite if $\al \geqs \al_n$.
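As a quick numerical sanity check of the sharp constant (an illustration added here, not part of the proofs), $\al_n$ can be evaluated directly from its definition; for $n=2$ it reduces to the familiar value $4\pi$:

```python
from math import pi, gamma

def alpha_n(n):
    """Sharp Moser constant alpha_n = n * omega_{n-1}^{1/(n-1)}."""
    omega_n1 = 2 * pi ** (n / 2) / gamma(n / 2)  # area of the unit sphere S^{n-1}
    return n * omega_n1 ** (1 / (n - 1))

print(alpha_n(2), 4 * pi)   # in two dimensions alpha_2 = 4*pi
print(alpha_n(4))           # sharp exponent for W^{1,4}_0 in R^4
```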
The inequality was generalized to higher order Sobolev spaces $W^{m,\frac nm}_0(\Om)$ by Adams [@Adams] in the following form $$\label{eq:AMT} \sup_{u \in W^{m,\frac nm}_0(\Om), \, \int_\Om |\na^m u|^{\frac nm} dx \leq 1} \int_\Om e^{\al |u|^{\frac n{n-m}}} dx \leqs \infty,$$ for any $$\al \leq \al_{n,m}: = \begin{cases} \frac 1{\si_n}\Big(\frac{\pi^{n/2} 2^m \Gamma(\frac m2)}{\Gamma(\frac{n-m}2)}\Big)^{\frac n{n-m}} &\mbox{if $m$ is even},\\ \frac 1{\si_n}\Big(\frac{\pi^{n/2} 2^m \Gamma(\frac {m+1}2)}{\Gamma(\frac{n-m+1}2)}\Big)^{\frac n{n-m}} &\mbox{if $m$ is odd}, \end{cases}$$ where $\si_n = \om_{n-1}/n$ is the volume of the unit ball in $\R^n$. Moreover, if $\al \geqs \al_{n,m}$ then the supremum above becomes infinite though all integrals are still finite. The Moser-Trudinger inequality and Adams inequality play the role of the Sobolev embedding theorems in the limiting case $mp = n$. They have many applications in analysis, geometry, partial differential equations, etc., such as the Yamabe equation, the $Q-$curvature equations, and especially problems in partial differential equations with exponential nonlinearity. There have been many generalizations of the Moser–Trudinger inequality and Adams inequality in the literature. For example, the Moser–Trudinger inequality and Adams inequality were established on Riemannian manifolds in [@YangSuKong; @ManciniSandeep2010; @AdimurthiTintarev2010; @ManciniSandeepTintarev2013; @Bertrand; @Karmakar; @LuTang2013; @DongYang] and on sub-Riemannian manifolds in [@CohnLu; @CohnLu1; @Balogh]. The singular version of the Moser–Trudinger inequality and Adams inequality was proved in [@AdimurthiSandeep2007; @LamLusingular]. The Moser–Trudinger inequality and Adams inequality were extended to unbounded domains and whole spaces in [@Ruf2005; @LiRuf2008; @RufSani; @AdimurthiYang2010; @LamLuHei; @Adachi00; @LamLuAdams; @LamLunew], and to fractional order Sobolev spaces in [@Martinazzi; @FM1; @FM2]. Improved versions of the Moser–Trudinger inequality and Adams inequality were given in [@AdimurthiDruet2004; @Tintarev2014; @WangYe2012; @Nguyenimproved; @LuYangAiM; @Nguyen4; @delaTorre; @Mancini; @Yangjfa; @DOO; @NguyenCCM; @LuZhu; @LuYangHA; @LiLuYang]. An interesting question concerning the Moser–Trudinger inequality and Adams inequality is whether or not the extremal functions exist. For this interesting topic, the reader may consult the papers [@Carleson86; @Flucher92; @Lin96; @Ruf2005; @LiRuf2008;
{ "pile_set_name": "ArXiv" }
--- abstract: 'The general solution of Einstein’s gravity equation in $D$ dimensions for an anisotropic and spherically symmetric matter distribution is calculated in a bulk with position-dependent cosmological constant. Results for $n$ concentric $(D-2)-$branes with arbitrary mass, radius, and pressure, with different cosmological constants between branes, are found. It is shown how the different cosmological constants contribute to the effective mass of each brane. It is also shown that the equation of state of each brane influences the dynamics of the branes, which can be divided into eras according to the dominant matter. This scenario can be used to model the universe in the $D=5$ case, which may present a phenomenology richer than the current models. The evolution law of the branes is studied, and the anisotropic pressure that removes divergences is found. The Randall-Sundrum metric in the region outside the branes is also derived in the flat-brane limit.' author: - 'I. C. Jardim' - 'R. R. Landim' - 'G. Alencar' - 'R. N. Costa Filho' title: 'Construction of a multiple spherical brane cosmological scenario' --- Introduction ============ The general model for the cosmos is based on the description of the universe as a perfect fluid that admits a global cosmic time. This scenario has a space-time with constant curvature given by the Friedmann-Robertson-Walker metric, where its dynamics is determined by a cosmological scale factor that depends on the fluid equation of state. Despite the success of that model in the description of primordial nucleosynthesis and the cosmic microwave background, it has failures that led to the emergence of new models. Among the major flaws of the current model are the problem of dark energy, reflected in the accelerated expansion of the currently observed universe, and that of dark matter, namely the discrepancy between the rotation of the halos of some galaxies and the amount of matter contained in them according to gravitational dynamics [@weinberg:cosmology]. Cosmological models with extra dimensions appeared first in Kaluza-Klein theories and later in Randall-Sundrum scenarios [@Randall:1999vf; @Randall:1999ee]. These models describe the observed universe as a brane universe in a higher-dimensional space-time. Although the Friedmann-Robertson-Walker metric does not determine the geometry of the observed universe, the majority of studies have focused on flat geometry. Because of its simplicity, this geometry is not able to change the dynamics of the universe and thus cannot solve the problem of dark matter or the initial singularity. Although the first studies describing the universe as a spherical shell date back to the 80’s [@Rubakov:1983bb; @Visser:1985qm; @Squires:1985aq], the spherical brane-universe has shown very rich phenomenology in the past decade [@Gogberashvili:1998iu; @Boyarsky:2004bu]. Besides being compatible with the observational data [@Tonry:2003zg; @Luminet:2003dx; @Overduin:1998pn], these models provide an explanation for the isotropic runaway of galaxies (isotropic expansion), the existence of a preferred frame, and a cosmic time. They show how the introduction of different cosmological constants in each region of the bulk can change the dynamics of the cosmological scale factor so as to make it compatible with the observed dynamics [@Knop:2003iy; @Riess:2004nr] without the introduction of dark energy [@Gogberashvili:2005wy].
Similar to other models with extra dimensions, the spherical shell models open the possibility of obtaining an energy scale in order to solve the hierarchy problem [@Gogberashvili:1998vx] and can be used as a basis for systems with varying speed of light in the observed universe [@Gogberashvili:2006dz]. The introduction of other branes and different cosmological constants can modify the overall dynamics of the observed universe. Local density fluctuations can change the local dynamics, such as galactic dynamics (since the field of other branes interacts gravitationally with the matter of the brane-universe), without dark matter. In this work we extend and generalize the scenario of the world as one expanding shell [@Gogberashvili:1998iu] to multiple concentric spherical $(D-2)$-branes in a $D$ dimensional space-time. For this, we solve Einstein’s equation in $D$ dimensions for $n$ $(D-2)-$branes with different masses in a space with different cosmological constants between the branes. A previous study considered a continuous distribution of matter. However, only one cosmological constant was used [@Das:2001md]. We solve the $D-$dimensional case, but for a cosmological model we limit ourselves to the case $D=5$, since the observed universe has only three spatial dimensions. This work is organized as follows: In the second section we review Einstein’s equations in $D$ dimensions with a cosmological constant for a spherically symmetric matter distribution. In the third section we solve this set of equations for $n$ shells with different cosmological constants $\Lambda$ between them. In Sec. 4, the energy-momentum tensor conservation law is used to determine the possible anisotropic pressure which removes the divergences in the brane evolution equation. In the fifth section we particularize the solution found to take the flat brane limit in order to obtain the Randall-Sundrum metric in the exterior region. In the last section we discuss the conclusions and possible consequences. Static and Spherically Symmetric Space-time in $D$ Dimensions ============================================================= To learn about the gravitational effect of a distribution of matter we must determine the geometry of space-time. For this we need to know the $D(D+1)/2$ independent components of the metric by solving Einstein’s equation. However, it is possible to use the symmetry of the problem to reduce these components to just two, given by the invariant line element [@Gogberashvili:1998iu], $$ds^{2} = -A(r,t)dt^{2} +B(r,t)dr^{2} +r^{2}d\Omega^{2}_{D-2}$$ where $\Omega_{D-2}$ is the element of solid angle in $D$ dimensions, formed by $D-2$ angular variables. Therefore we are left only with two functions, $A(r,t)$ and $B(r,t)$, to be determined by Einstein’s equation in $D$ dimensions $$\label{einD} R_{\mu}^{\nu} -\frac{1}{2}R\delta_{\mu}^{\nu} +\Lambda\delta_{\mu}^{\nu} = \kappa_{D}T_{\mu}^{\nu},$$ where $\Lambda$ is the cosmological constant, which depends on $r$ and possibly on $t$. Also $\kappa_{D}$ is the gravitational coupling constant in $D$ dimensions.
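As a consistency check (an added illustration, not part of the derivation), one can verify with sympy that the familiar Schwarzschild–Tangherlini–de Sitter form of $B(r)$ solves the static vacuum $^0_0$ component of Eq. (\[einD\]), which is written out explicitly just below. The closed form used here is an assumption, namely the standard $D$-dimensional vacuum solution:

```python
import sympy as sp

r, Lam, C = sp.symbols('r Lambda C', positive=True)
D = sp.Symbol('D', positive=True)

# assumed trial solution (Schwarzschild-Tangherlini-de Sitter form):
f = 1 - 2 * Lam * r**2 / ((D - 1) * (D - 2)) - C / r**(D - 3)
B = 1 / f                                 # g_rr metric function

# static vacuum 0-0 equation, T^0_0 = 0; note that B'/B^2 = -(1/B)'
lhs = -(D - 2) / (2 * r**2) * ((D - 3) * (1 - 1 / B)
                               + r * sp.diff(B, r) / B**2) + Lam
print(sp.simplify(lhs))                   # -> 0
```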
Due to the symmetries of the problem we only have four non null independent components of the Einstein’s equation (\[einD\]), which are $$\begin{aligned} \kappa_{D}T_{0}^{0} &=&-\frac{D-2}{2r^{2}}\left[(D-3)\left(1 -B^{-1}\right) +\frac{rB'}{B^{2}}\right] +\Lambda \label{ein00}, \\ \kappa_{D}T_{1}^{1} &=& -\frac{D-2}{2r^{2}}\left[(D-3)\left(1 -B^{-1}\right) -\frac{rA'}{AB}\right] +\Lambda \label{ein11},\\ \kappa_{D}T^{1}_{0} &=& \frac{D-2}{2r}\frac{\dot{B}}{B^{2}}, \label{ein10}\\ \kappa_{D}T_{2}^{2} &=& \frac{1}{4A}\left[\frac{\dot{A}\dot{B}}{AB} +\frac{\dot{B}^{2}}{B^{2}} -\frac{2\ddot{B}}{B}\right] +\frac{(D-3)(D-4)}{2Br^{2}} - \nonumber \\&&-\frac{2(D-3)(D-4)}{r^{2}} +\frac{(D-3)}{2Br}\left(\frac{A'}{A} -\frac{B'}{B}\right) + \nonumber \\&& +\frac{1}{4B}\left[\frac{2A''}{A} -\frac{A'^{2}}{A^{2}} -\frac{A'B'}{AB}\right] +\Lambda \label{ein22},\end{aligned}$$ where the prime means derivation with respect to $r$ and the dot is the derivative with respect to $t$. We can see that if we know $T^{0}_{0}$, $T^{1}_{1}$ and $\Lambda$ we can, from (\[ein00\]) and (\[ein11\]), completely determine the solutions with two boundary conditions. This comes from the fact that we have two first order differential equations. In this case the remaining equations determine the flow of energy $T^{1}_{0} $, and the tangential stresses $ T^{2}_{2}$. To find the exact solution we need to specify the form of matter $T^{\mu}_{\nu}$ which we use. General Solution for Thin Spherical Branes =========================================== The cosmological scenario we shall consider consists of $n$
{ "pile_set_name": "ArXiv" }
--- abstract: 'We study indirect CPT violating effects in $B_d$ meson decays and mixing, taking into account the recent constraints on the CPT violating parameters from the Belle collaboration. The lifetime difference of the $B_d$ meson mass eigenstates, expected to be negligible in the standard model and many of its CPT conserving extensions, could be sizeable ($\sim$ a few percent of the total width) due to a breakdown of this fundamental symmetry. The time evolution of the direct CP violating asymmetries in one amplitude dominated processes (inclusive semileptonic $B_d$ decays, in particular) turns out to be particularly sensitive to this effect.' --- DO-TH 02/10\ hep–ph/0209090\ May 2002 \ Amitava Datta[^1]\ \ Emmanuel A. Paschos[^2]\ \ and L.P. Singh[^3]\ \ The suggestion for two distinct lifetimes for the $B_d$ or $B_s$ meson mass eigenstates originated in parton model calculations [@ref1], which, at that time, were limited by numerous uncertainties of hadronic ($f_B$, the bag parameter, top quark mass, ...) and weak parameters (CKM matrix elements). Many of these, however, cancel in the ratio $$\left(\frac{\Delta m}{\Delta\Gamma}\right)_d = \frac{8}{9\pi} \left( \frac{\eta_t}{\eta}\right)\, \left(\frac{m_t}{m_b}\right)^2 f(x_t)$$ where $\Delta m_d(\Delta\Gamma_d)$ is the mass (width) difference of the $B_d$ meson mass eigenstates, $\eta_t,\,\eta$ are calculable perturbative QCD corrections, $x_t = \frac{m_t}{m_w}$ and $$f(x) = \frac{3}{2} \frac{x^2}{(1-x)^3}\, \ln x- \left( \frac{1}{4}+\frac{9}{4}\frac{1}{(1-x)}\, - \frac{3}{2}\, \frac{1}{(1-x)^2}\right)\, .$$ Following the discovery of mixing in the $B_d$ system [@ref2], $\Delta m_d$ was measured and $m_t$ was the only major source of uncertainty in the ratio. Using the then lower bound on $m_t$ it was shown [@ref3; @ref4] that $\Delta\Gamma_d$ is indeed very small, while $\Delta\Gamma_s$, the width difference of the $B_s$ meson mass eigenstates, could be rather large, as is indicated by the scaling law [@ref4] $$\left(\frac{\Delta\Gamma}{\Gamma}\right)_s = \left( \frac{X_{Bs}}{X_{Bd}}\right)\cdot \left|\frac{V_{ts}}{V_{td}}\right|^2\cdot \left(\frac{\Delta\Gamma}{\Gamma}\right)_d$$ where $V_{ij}$’s are the elements of the CKM matrix and $$X_{B_q}=\langle B_q|\left[ \bar{q}\gamma^{\mu} (1-\gamma_5)b\right]^2|\bar{B}_q\rangle\, .$$ In the meantime, many advances have taken place with the discovery of the top quark and the determination of its mass [@ref5] and more precise values for CKM matrix elements. Combining the new values with the above scaling laws, the width difference among the $B_d$ states is $(\Delta\Gamma/\Gamma)_d \approx 0.0012$, which is unobservable, but for $B_s$ eigenstates $(\Delta\Gamma/\Gamma)_s\approx 0.045$. More recent calculations using heavy quark effective theory and improved QCD corrections [@ref6; @ref7] suggest that calculations based on the absorptive parts of the box diagram improved by QCD corrections give reasonable estimates for both $B_d$ and $B_s$ systems. Nevertheless the possibility that there are loopholes in the above calculations cannot be totally excluded. For example, $\Delta \Gamma_q$ (q = d or s) is determined by only those channels which are accessible to both $B_q$ and $\bar{B}_q$ decays. Its computation in the parton model may not be as reliable as the calculation of $\Gamma_q$, the total width, which depends on fully inclusive decays and for which quark–hadron duality is valid.
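For orientation, the ratio above is easy to evaluate numerically. The following sketch uses assumed input masses (illustrative values, not numbers quoted in this paper) and sets the QCD-correction ratio $\eta_t/\eta$ to unity:

```python
from math import log, pi

def f(x):
    """Loop function from the displayed formula for f(x)."""
    return (1.5 * x**2 / (1 - x)**3 * log(x)
            - (0.25 + 2.25 / (1 - x) - 1.5 / (1 - x)**2))

# assumed illustrative inputs (GeV):
m_t, m_W, m_b = 174.0, 80.4, 4.8
x_t = m_t / m_W

# (Delta m / Delta Gamma)_d with eta_t/eta set to 1:
ratio = 8.0 / (9.0 * pi) * (m_t / m_b) ** 2 * f(x_t)
print(f(x_t), ratio)   # magnitude of a few hundred; the sign is conventional
```

A ratio of this magnitude, combined with the measured $\Delta m_d$, is what makes $(\Delta\Gamma/\Gamma)_d$ so small in the standard model.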
In addition to the expected phenomena, one should, therefore, be prepared for unexpected effects and the final verdict on this subject should wait for experimental determination of $\Delta\Gamma$ from the B–factories, B–TeV or LHC–B. Many different suggestions for measuring $\Delta\Gamma_s$ have been put forward [@ref3; @ref4; @ref8]. It is believed that $(\dg)_d \sim 0.1$ can be measured at B–factories [@ref9] while $(\dg)_d \sim 0.001$ [@ref10] might be accessible at the LHC. In this article we wish to emphasize that apart from dynamical surprises in the decay mechanism, a possible breakdown of the CPT symmetry contributes to $\dg$. The currently available constraints on CPT violating parameters [@opal; @belle] certainly allow this possibility. If this happens its effect will be more visible and detectable in the $B_d$ system which, in the electroweak theory, is expected to have negligible $(\dg)_d$. In other words the scenario with $(\dg)_d$ large not only due to hitherto unknown dynamics but also due to a breakdown of CPT is quite an open possibility. In the case of $(\dg)_s$ CPT violation may act in tandem with the already known electroweak dynamics to produce an even larger effect.\ There are several motivations for drawing out a strategy to test CPT symmetry. From the experimental point of view all symmetries of nature must be scrutinized as accurately as possible, irrespective of the prevailing theoretical prejudices. It may be recalled that before the discovery of CP violation, there was very little theoretical argument in its favour. There are purely theoretical motivations as well. First of all the CPT theorem is valid for local, renormalizable field theories with well defined asymptotic states. It is quite possible that the theory we are dealing with is an effective theory and involving small nonlocal/ nonrenormalizable interactions. Further the concept of asymptotic states is not unambiguous in the presence of confined quarks and gluons. It has been suggested that physics at the string scale may indeed induce nonlocal interactions in the effective low energy theory leading to CPT violation [@ref11]. Moreover, modification of quantum mechanics due to gravity may also lead to a breakdown of CPT [@ref12]. One of the major goals of the B–factories running at KEK or SLAC is to reveal CP violation in the B system. The discrete symmetry CPT has not yet been adequately tested for the B meson system, although there are many interesting suggestions to test it [@ref13; @ref14]. In all such works, however, the correlation between $\D \gm$ and CPT violation was either ignored or not adequately emphasized. It will be shown below that $\D \gm$ can in general be numerically significant even if CPT violation is not too large. We consider the time development of neutral mesons $M^0$ (which can be $K^0$ or $D^0$ or $B_d^0$ or $B_s^0$) and their antiparticles $\bar{M}^0$. The time development is determined by the effective Hamiltonian $H_{ij} = M_{ij}-\frac{i}{2}\Gamma_{ij}$ with $M_{ij}$ and $\Gamma_{ij}$ being the dispersive and absorptive parts of the Hamiltonian, respectively [@ref15]. CPT invariance relates the diagonal elements $$M_{11} = M_{22}\quad\quad {\rm and} \quad\quad \Gamma_{11} = \Gamma_{22}\, .$$ A measure of CPT violation is, therefore, given by the parameter $$\delta = \frac{H_{22}-H_{11}}{\sqrt{H_{12}H_{21}}}$$ which is phase convention independent. In order to keep the discussion simple we shall study the consequences of indirect CPT violation only. 
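Anticipating the analysis below, the effect of a nonzero $\delta$ on the width difference can be previewed by diagonalizing a toy $2\times 2$ effective Hamiltonian numerically; all inputs here are illustrative, not fitted values:

```python
import numpy as np

# Toy H = M - (i/2)Gamma in the (B0, B0bar) basis, in units of the total
# width; M12 >> Gamma12 as assumed in the text, CP phases ignored.
M12, G12 = 0.385, 0.001
H12 = M12 - 0.5j * G12
H21 = M12 - 0.5j * G12

def delta_gamma(delta):
    """|Delta Gamma|/Gamma induced by the CPT-violating parameter delta."""
    h0 = -0.5j                                  # common diagonal part (M set to 0)
    split = 0.5 * delta * np.sqrt(H12 * H21)    # (H22 - H11)/2 from the definition
    H = np.array([[h0 - split, H12], [H21, h0 + split]])
    lam = np.linalg.eigvals(H)
    return abs(2 * (lam[0] - lam[1]).imag)      # Gamma_i = -2 Im(lambda_i)

print(delta_gamma(0.0))                          # SM-like: ~ 2*Gamma12
print(delta_gamma(0.3 * np.exp(0.25j * np.pi)))  # a complex delta enhances it
```

Because $M_{12}\gg\Gamma_{12}$, even a modest $\delta$ with a generic phase feeds the large dispersive mixing into the width difference, which is the effect exploited below.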
Since indirect CPT violation is a cumulative effect involving summations over many amplitudes, it is likely that its magnitude would be much larger than that of direct violation in a single decay amplitude. It is further assumed that CPT violation does not affect the off–diagonal elements of $H_{ij}$. These assumptions can be justified in specific string models [@ref11], where terms involving both flavour and CPT violations receive negligible corrections due to string scale physics. A further consequence of this assumption is that the usual SM inequality $M_{12}\gg \Gamma_{12}$ holds even in the presence of CPT violation. The eigenfunctions of the Hamiltonian are defined as $$|M_1\rangle = p_1|M^0\rangle + q_1|\bar{M}^0\rangle \quad\quad {\rm and} \quad\quad |M_2\rangle = p_2|M^0\rangle
{ "pile_set_name": "ArXiv" }
=1
{ "pile_set_name": "ArXiv" }
--- author: - '**S.A. Grebenev**' title: '**THE ORIGIN OF THE BIMODAL LUMINOSITY DISTRIBUTION OF ULTRALUMINOUS X-RAY PULSARS**' --- [*to be published in Astronomy Letters, 2017, v. 43, n. 7, pp. 464–470*]{}\ The mechanism that can be responsible for the bimodal luminosity distribution of super-Eddington X-ray pulsars in binary systems is pointed out. The transition from the high to the low state of these objects is explained by accretion flow spherization due to the radiation pressure at certain (high) accretion rates. The transition between the states can be associated with a gradual change in the accretion rate. The complex behavior of the recently discovered ultraluminous X-ray pulsars M 82 X-2, NGC 5907 ULX-1, and NGC 7793 P13 is explained by the proposed mechanism. The proposed model also naturally explains the measured spinup of the neutron star in these pulsars, which is several times slower than expected. [**DOI:**]{} 10.1134/S1063773717050012 [**Keywords:**]{} ultraluminous X-ray sources, supercritical accretion, X-ray pulsars, neutron stars, bimodality. ------------------------------------------------------------------------ \ [$^*$ E-mail $<$sergei@hea.iki.rssi.ru$>$]{} INTRODUCTION {#introduction .unnumbered} ============ The discovery (Bachetti et al. 2014) of X-ray pulsations with a mean period $P_s\simeq 1.37$ s from the ultraluminous X-ray (ULX) source M 82 X-2 (=NuSTAR J095551+6940.8) and its sinusoidal modulation with a period $P_b\simeq2.5$ days (the orbital period of the binary system) drastically changed our views of the nature of ULX sources. Previously, it had been assumed that a high observed isotropic X-ray luminosity of such sources $L_{\rm iso}\ga 10^{40}\ \mbox{erg s}^{-1}$ could be reached only during accretion onto a black hole with a moderately large, $\sim10^3\ M_{\odot}$, or at least stellar, $\sim10\ M_{\odot}$, mass (provided a relativistic jet is formed, with the associated strong radiation anisotropy). It has now become clear that such a luminosity can also be reached during accretion onto a neutron star with a strong magnetic field and a mass of only $M_*\sim1.4\ M_{\odot}$. Such binary systems must be widespread and may even dominate the population of ULX sources (Shao and Li 2015). The discoveries of the ultraluminous X-ray pulsars NGC7793 P13 and NGC5907 ULX-1 by the XMM-Newton satellite (Israel et al. 2017a, 2017b) shortly afterward confirmed this point of view and gave hope for the detection of other objects of this type. Note that NGC5907 ULX-1 has a record peak luminosity even for ULX sources; in particular, it exceeds the maximum detected luminosity of M 82 X-2 by several times (see the table). The discovery of ULX pulsars has thrown down a serious challenge to theorists. For example, it is still unclear, though widely discussed, how such a high luminosity is reached, which exceeds the Eddington one for spherically symmetric accretion onto a neutron star by hundreds of times: $$\label{led} L_{\rm ed}=\frac{4\pi GM_*m_p c}{\sigma_{\rm es}}\simeq 1.9\times 10^{38} \left(\frac{\sigma_{\rm T}}{\sigma_{\rm es}}\right) \left(\frac{M_*}{1.4\ M_{\odot}}\right)\ \mbox{erg s}^{-1}.$$ Here, $\sigma_{\rm es}$ is the electron scattering cross section, $\sigma_{\rm T}$ is the Thomson cross section, $G$ is the gravitational constant, $m_p$ is the proton mass, and $c$ is the speed of light. Of course, the accretion onto a neutron star with a strong magnetic field is far from spherically symmetric.
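For orientation, Eq. (\[led\]) and the corotation radius quoted in the table are straightforward to evaluate; the following sketch (with standard cgs constants, an illustration added here) reproduces their magnitudes:

```python
from math import pi

G, c = 6.674e-8, 2.998e10           # cgs units
m_p, sigma_T = 1.673e-24, 6.652e-25
M_sun = 1.989e33

def L_edd(M_star, sigma_es=sigma_T):
    """Eddington luminosity of Eq. (led) for spherical accretion."""
    return 4 * pi * G * M_star * m_p * c / sigma_es

def R_corotation(M_star, P_s):
    """Radius where the Keplerian angular velocity equals the spin rate."""
    return (G * M_star * P_s**2 / (4 * pi**2)) ** (1.0 / 3.0)

M = 1.4 * M_sun
print(L_edd(M))                      # ~1.8e38 erg/s, cf. Eq. (led)
print(R_corotation(M, 1.37) / 1e5)   # ~2.1e3 km, close to R_c for M 82 X-2
```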
As early as 1976, having considered a realistic accretion flow geometry at a supercritical accretion rate, Basko and Sunyaev (1976) showed that the isotropic luminosity of a pulsar could exceed $L_{\rm ed}$ by more than an order of magnitude (see below). Nevertheless, this is still insufficient to explain the observations of ULX pulsars. Many authors (e.g., Lyutikov 2014; Tong 2015; Eksi et al. 2015; Tsygankov et al. 2016a; Israel et al. 2017a, 2017b) are inclined toward the assumption of an extreme magnetic field strength of the neutron star in ULX systems ($B_*\ga 10^{14}$ G), which reduces the electron scattering cross section $\sigma_{\rm es}$ and, thus, raises the Eddington limit. Others (e.g., Kluzniak and Lasota 2015) think that a high luminosity is reached precisely because of the reduced (to $B_*\sim10^{9}$ G) magnetic field strength (compared to the values $B_*\sim10^{12}-10^{13}$ G typical for X-ray pulsars). Because of the weak magnetic field, the accretion disk almost reaches the neutron star surface and radiates in the same way as during super-Eddington accretion onto a black hole. In both cases, the limiting observed luminosity of ULX pulsars, $L_{\rm iso}\sim 10^{41}\ \mbox{erg s}^{-1}$, still cannot be explained and one has to appeal to a strong anisotropy of their radiation (dall’Osso et al. 2015; Chen 2017).

\begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c|c}
Source & $P_{b}$, days & $P_{s}$, s & $\dot{P}_{-10}$, s\,s$^{-1}$ & $\gamma_{\rm c}$ & $\mu_{3}$ & $R_{\rm c}$, km & $\dot{M}_{20}$, g\,s$^{-1}$ & $R_{\rm s}$, km & $L_{39}$, erg\,s$^{-1}$ & $R_{\rm m}$, km & $L_{39}$, erg\,s$^{-1}$ & $R_{\rm ms}$, km\\ \hline
M82 X-2 & 2.5 & 1.37 & $-2.0$ & 4 & 3 & 2080 & 0.59 & 860 & 37 & 900 & 0.28 & 890\\
NGC 5907 ULX-1 & 5.3 & 1.13 & $-8.1$ & 6 & 12 & 1830 & 1.06 & 1550 & 100 & 1670 & $<0.3$ & 1640\\
NGC 7793 P13 & & 0.42 & $-0.4$ & 2 & 1.4 & 950 & 0.41 & 610 & 13 & 640 & 0.3 & 630
\end{tabular}

The nature of the bimodal luminosity distribution of ULX pulsars pointed out by Tsygankov et al. (2016a) and Israel et al. (2017a, 2017b) also remains a puzzle. In addition to the state with a very high X-ray luminosity (hereafter the high state), periods during which the luminosity dropped to $\la 3\times10^{38}\ \mbox{erg s}^{-1}$ (hereafter the low state) have been detected for all three sources (see the table). Tsygankov et al. (2016a) and Israel et al. (2017a) assumed the bimodality of the luminosity distribution to be associated with the action of centrifugal forces, which inhibit accretion and are capable of expelling an excess of accreting matter from the system (the propeller effect; Illarionov and Sunyaev 1975; see also Corbet 1996). This effect begins to manifest itself as soon as the magnetospheric radius of the neutron star $R_m$ during the evolution of the system (for example, after a temporary decrease in the accretion rate) exceeds the corotation radius $R_c$ (otherwise the surface rotation velocity of the magnetosphere would exceed the Keplerian velocity). In this case, the accretion onto the neutron star ceases, and only the radiation from the outer disk region $R>R_m$ is observed. In order for the propeller effect to operate in the systems being discussed, it is necessary that the neutron stars in them possess a very strong magnetic field $B_*\sim10^{14}-10^{15}$ G similar to the field of magnetars (Tsygankov et al. 2016a). Although the very existence of the propeller effect is beyond doubt and it has come into wide use by astrophysicists, i.e., it is used to explain the observed luminosity jumps in millisecond (LMXBs, Campana et al.
2008, 2014) and ordinary (HMXBs, Corbet et al. 1996; Campana et al. 2002; Tsygankov et al. 2016b; Postnov et al. 2017) X-ray pulsars, the existence of “equilibrium” pulsar periods (van den Heuvel 1984; Corbet 1986), the out
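The propeller criterion $R_m > R_c$ invoked above can be made quantitative with standard order-of-magnitude expressions for the two radii. The sketch below is not based on formulas quoted in this paper: the Alfven-type estimate for $R_m$, the coupling factor $\xi \sim 0.5$, and the assumed neutron-star radius are common conventions, while the spin period and accretion rate are taken from the M82 X-2 row of the table.

```python
import math

G = 6.674e-8               # cm^3 g^-1 s^-2
M = 1.4 * 1.989e33         # neutron-star mass, g
R_star = 1e6               # assumed neutron-star radius, cm

def r_corotation(P_s):
    """Corotation radius (cm): Keplerian angular velocity equals the spin."""
    return (G * M * P_s**2 / (4 * math.pi**2)) ** (1.0 / 3.0)

def r_magnetospheric(B, mdot, xi=0.5):
    """Alfven-type magnetospheric radius (cm) for a dipole surface field
    B (gauss) and accretion rate mdot (g/s); xi ~ 0.5 is conventional."""
    mu = B * R_star**3                     # magnetic dipole moment, G cm^3
    return xi * (mu**4 / (2 * G * M * mdot**2)) ** (1.0 / 7.0)

# M82 X-2 numbers from the table: P_s = 1.37 s, Mdot = 0.59e20 g/s
P_s, mdot = 1.37, 0.59e20
print(f"R_c = {r_corotation(P_s)/1e5:.0f} km")   # ~2070 km, cf. the R_c column
for B in (1e12, 1e14):
    rm = r_magnetospheric(B, mdot)
    print(f"B = {B:.0e} G -> R_m = {rm/1e5:.0f} km, "
          f"propeller: {rm > r_corotation(P_s)}")
```

With these inputs only a magnetar-strength field ($B_*\ga10^{14}$ G) pushes $R_m$ beyond $R_c$, which is exactly the requirement attributed to the propeller interpretation in the text.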
{ "pile_set_name": "ArXiv" }
---
abstract: 'Based on network analysis of hierarchical structural relations among Chinese characters, we develop an efficient learning strategy of Chinese characters. We regard a learning method as more efficient if one learns the same number of useful Chinese characters in less effort or time. We construct a node-weighted network of Chinese characters, where character usage frequencies are used as node weights. Using this hierarchical node-weighted network, we propose a new learning method, the distributed node weight (DNW) strategy, which is based on a new measure of nodes’ importance that takes into account both the weight of the nodes and the hierarchical structure of the network. Chinese character learning strategies, particularly their learning order, are analyzed as dynamical processes over the network. We compare the efficiency of three theoretical learning methods and two commonly used methods from mainstream Chinese textbooks, one for Chinese elementary school students and the other for students learning Chinese as a second language. We find that the DNW method significantly outperforms the others, implying that the efficiency of current learning methods of major textbooks can be greatly improved.'
author:
- 'Xiaoyong Yan$^{1,2}$, Ying Fan$^{1,3}$, Zengru Di$^{1,3}$, Shlomo Havlin$^{4}$, Jinshan Wu$^{1,3,\dag}$'
bibliography:
- 'characters.bib'
title: Efficient learning strategy of Chinese characters based on network approach
---

[**[Introduction]{}**]{}. It is widely accepted that learning Chinese is much more difficult than learning western languages, and the main obstacle is learning to read and write Chinese characters. However, students who have learned a certain amount of Chinese characters and gradually come to understand the intrinsic coherent structure of the relations between them quite often find that it is not that hard to learn Chinese [@Bellassen]. Unfortunately, such experiences remain at the individual level. To date there is no textbook that has systematically exploited the intrinsic coherent structures to form a better learning strategy. We explore here such relations between Chinese characters systematically and use them to form an efficient learning strategy.

Complex networks theory has been found useful in diverse fields, ranging from social systems and economics to genetics, physiology and climate systems [@Watts; @Strogatz; @Albert; @Newman; @Wu; @Costa; @Fortunato]. An important challenge in studies of complex networks in different disciplines is how network analysis can improve our understanding of the function and structure of complex systems [@Costa; @Fortunato; @Chen]. Here we address the question of whether and how a network approach can improve the efficiency of learning Chinese.

Differing from western languages such as English, Chinese characters are non-alphabetic and are rather ideographic and orthographical [@Branner]. A straightforward example is the relation among the Chinese characters ‘木’, ‘林’ and ‘森’, representing tree, woods and forest, respectively. These characters appear as one tree, two trees and three trees. The connection between the composition forms of these characters and their meanings is obvious. Another example is ‘本’ (root), which is also related to the character ‘木’ (tree): A bar near the bottom of a tree refers to the tree root.
Such relations among Chinese characters are common, though sometimes they are not easy to recognize intuitively or, even worse, may have become fuzzy after a few thousand years of evolution of the Chinese characters. However, the overall forms and meanings of Chinese characters are still closely related [@Qiu; @Bai; @Bellassen]: Usually, combinations of simple Chinese characters are used to form complex characters. Most Chinese users and learners eventually notice such structural relations, although quite often only implicitly, through accumulated knowledge of and intuition about Chinese characters [@Lam1]. Making use of such relations explicitly might be helpful in turning rote learning into meaningful learning [@Novak:Cmap], which could improve the efficiency of students’ Chinese learning. In the above example of ‘木’, ‘林’, and ‘森’, instead of memorizing all three characters individually by rote, one just needs to memorize one simple character ‘木’ and then use the logical relation among the three characters to learn the other two. However, such structural relations among Chinese characters have not yet been fully exploited in practical Chinese teaching and learning. As far as we know, among all mainstream Chinese textbooks, the textbook of Bellassen et al. [@Bellassen] is the only one that has partially taken the structure information into consideration. However, considerations of such relations in teaching Chinese in their textbook are, at best, at the individual-character level and focus on the details of using such relations to teach some characters one by one. With the network analysis tool at hand, we are able to analyze this relation at a system level. The goal of the present manuscript is to perform such a system-level network analysis of Chinese characters and to show that it can be used to significantly improve Chinese learning.

Major aspects of strategies for teaching Chinese include character set choices, the teaching order of the chosen characters, and details of how to teach every individual character. Although our investigation is potentially applicable to all three aspects, we focus here only on the teaching order question. The learning order of English words is a well-studied and well-established question [@English_Order]. However, there are almost no such explicit studies for Chinese characters. In this work, the character choice is taken to be the set of the most frequently used characters, with $99\%$ accumulated frequency [@Frequency]. To demonstrate our main point, namely how network analysis can improve Chinese learning, we focus here on the issue of Chinese character learning order. Although some researchers have applied complex network theory to study the Chinese character network [@Li; @Lee], they mainly focus on the network’s structural properties and/or evolution dynamics, but not on learning strategies. A recent work studied the evolution of relative word usage frequencies and its implications for the coevolution of language and culture [@Petersen]. Different from these studies, our work considers the whole structural Chinese character network and, more importantly, the value of the network for developing efficient Chinese character learning strategies. We find that our approach, based on both usage frequency and network analysis, provides a valuable tool for efficient language learning.

[**[Data and methods.]{}**]{} Although nearly a hundred thousand Chinese characters have been used throughout history, modern Chinese no longer uses most of them.
For a common Chinese person, knowing $3,000 - 4,000$ characters will enable him or her to read modern Chinese smoothly. In this work, we thus focus only on the most used $3500$ Chinese characters, extracted from a standard character list provided by the Ministry of Education of China [@Characters]. According to statistics [@Frequency], these 3500 characters account for more than $99\%$ of the accumulated usage frequency in the modern Chinese written language.

![\[fig1\] Chinese character decomposing and network construction. The numerical values in the figure represent learning cost, which will be discussed later.](Wu_fig1.pdf){width="8.4cm"}

Most Chinese characters can be decomposed into several simpler sub-characters [@Qiu; @Bai]. For instance, as illustrated in Fig. \[fig1\], the character ‘添’ (meaning ‘add’) is made from ‘忝’ (ashamed) and ‘氵’ (water); ‘忝’ can then be decomposed into ‘天’ (head, or sky) and ‘㣺’ (heart), and ‘天’ can be decomposed into ‘一’ (one) and ‘大’ (a person standing up, or big). The characters ‘一’, ‘大’, ‘氵’ and ‘㣺’ cannot be decomposed any further, as they are all radical hieroglyphic symbols in Chinese. There are general principles about how simple characters form compound characters, the so-called “Liu Shu” (six ways of creating Chinese characters). Ideally, when, for example, two characters are combined to form another character, the compound character should be connected to its sub-characters either via their meanings or via their pronunciations. We have illustrated those principles using the characters listed in Fig. \[fig1\]. See [**[Supporting Online Material]{}**]{} for more details. While certain decompositions are structurally meaningful and intuitive, others are not that obvious, at least with the current Chinese character forms [@Bai]. In this work, we are not concerned with the question of to what extent Chinese character decompositions are reasonable, the so-called Chinese character rationale [@Qiu], but rather with the existing structural relations (sometimes called character-formation rationale or configuration rationale) among Chinese characters and how to extract useful information from these relations to learn Chinese. Our decompositions are based primarily on Refs. [@ShuoWen; @Qiu; @Bai]. Following the general principles shown in the above example and the information in Refs. [@ShuoWen; @Qiu; @Bai], we decompose all 3500 characters and construct a network by connecting character $B$ to $A$ (an adjacency matrix element $a_{BA}=1$, otherwise it is zero) through a directed link if $B$ is a “direct” component of $A$. Here, “direct” means that characters are connected hierarchically (see Fig. \[fig1\]): Assuming $B$ is part of $A$, if $C$ is part of $B$ and thus in principle $C$ is also part of $A$, we connect only $B$ to $A$ and $C$ to $B$, but NOT $C$ to $A$. In constructing this network, there are further considerations on including characters which are not within the list of most-used $3500$ characters but are used as radicals of characters in the list. More technical details can be found in the [**[Supporting Online Material]{}**]{}. Decomposing characters and building up links in this way, the network is a Directed Acyclic Graph (DAG), which has a giant component of $3687$ nodes (see [**[Supporting Online Material]{}
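To make the construction concrete, here is a minimal sketch of the node-weighted DAG for the Fig. \[fig1\] example, together with a frequency-aware topological learning order. The edges follow the decomposition described above, but the usage frequencies are made-up illustrative weights (not the data of Ref. [@Frequency]), and the greedy traversal is only a crude stand-in for the DNW strategy, whose exact definition is not reproduced in this excerpt.

```python
import networkx as nx

edges = [              # (component B, compound A): B is a direct part of A
    ("一", "天"), ("大", "天"),
    ("天", "忝"), ("㣺", "忝"),
    ("忝", "添"), ("氵", "添"),
]
freq = {"一": 3e-2, "大": 1e-2, "天": 8e-3, "添": 1e-4,
        "忝": 1e-6, "氵": 1e-7, "㣺": 1e-7}   # hypothetical usage frequencies

g = nx.DiGraph()
g.add_edges_from(edges)
assert nx.is_directed_acyclic_graph(g)   # hierarchical decomposition => DAG

# Importance of a character: its own usage plus that of every character
# built from it (its descendants in the DAG).
gain = {n: freq[n] + sum(freq[d] for d in nx.descendants(g, n)) for n in g}

def learning_order(g, gain):
    """Kahn-style traversal: always learn next the 'ready' character
    (all of its components already known) with the largest gain."""
    indeg = dict(g.in_degree())
    ready = {n for n, d in indeg.items() if d == 0}
    order = []
    while ready:
        n = max(ready, key=gain.get)
        ready.remove(n)
        order.append(n)
        for m in g.successors(n):
            indeg[m] -= 1
            if indeg[m] == 0:
                ready.add(m)
    return order

print("".join(learning_order(g, gain)))  # -> 一大天㣺氵忝添
```

By construction every character appears after all of its components, which is the minimal requirement any structure-aware learning order must satisfy.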
{ "pile_set_name": "ArXiv" }
[**[Vector-like quarks in a “composite” Higgs model]{}**]{} **[Abstract]{}** Vector-like quarks are a common feature of “composite” Higgs models, where they intervene in cutting off the top-loop contribution to the Higgs boson mass and may, at the same time, affect the Electroweak Precision Tests (EWPT). A model based on $SO(5)/SO(4)$ is here analyzed. In a specific non minimal version, vector-like quarks of mass as low as 300-500 GeV are allowed in a thin region of its parameter space. Other models fail to be consistent with the EWPT. Introduction ============ The great success of the Standard Model (SM) in predicting the electroweak observables leaves many theoretical open questions. One of them is the famous “naturalness problem” of the Fermi scale: one looks for a non-accidental reason that explains why the Higgs boson is so light relatively to any other short distance scale in Physics. In order to keep the Higgs boson mass near the weak-scale expectation value $v$ with no more than $10 \%$ finetuning it is necessary to cut-off the top, gauge, and scalar loops at a scale $\Lambda_{nat} \lesssim 1-2$ TeV. This fact tells us that the SM is not natural at the energy of the Large Hadron Collider (LHC), and more specifically new physics that cuts-off the divergent loops has to be expected at or below 2 TeV. In a weakly coupled theory this means new particles with masses below 2 TeV and related to the SM particles by some symmetry. For concreteness, the dominant contribution comes from the top loop. Thus naturalness arguments predict new multiplet(s) of top-symmetry-related particles that should be easily produced at the LHC, which has a maximum available energy of 14 TeV. The possibilities in extending the SM are many. Here we focus on a model (see [@contino2]) in which the Higgs particle is realized as a pseudo-Goldstone boson associated to the breaking $SO(5)\rightarrow SO(4)$ at a scale $f > v$. In some sense this extension is “minimal” since we add only one field in the scalar sector. The Higgs mass will then be protected from self-coupling corrections, and the cutoff scale can be raised up to $3$ TeV. Following the approach of [@barbieri1], the $SO(5)$ symmetry has then to be extended to the top sector by adding new vector-like quarks in order to reduce the UV sensitivity of $m_h$ to the top loop. In principle new heavy vectors should also be included in order to cut-off the gauge boson loops, however here only the quark sector will be studied because the dominant contribution comes from the top. Moreover, from a phenomenological point of view, heavy quark searches at the LHC may be easier than heavy vector searches (as pointed out in [@barbieri2]). In enlarging the fermion sector it is necessary to fulfill the requirements of the Electro Weak Precision Tests (EWPT). More specifically, as shown in Figure \[figuraewpt\], the composite nature of the Higgs boson and the physics at the cutoff produce two corrections to the $S$ and $T$ parameters of the SM. For this reason, in order to be consistent with data, one can look for a positive contribution to $T$ coming from the fermion sector. Another experimental constraint comes from the modified bottom coupling to the $Z$ boson. The main virtues of this model are minimality and effectiveness. That is we concentrate on the fermion resonances, which can be lighter than the new gauge bosons and play a central role in reducing the sensitivity of the Higgs boson mass to the new physics. 
Moreover we do so introducing the least possible number of new particles and parameters. In fact there are models which can be compatible with EWPT data and have the same scalar sector, but since they start from 5d considerations they are forced to introduce many more new fields (see e.g. [@contino] and [@carena-santiago]). In section \[modelloSO5\] a summary of some relevant previous works is reported. In section \[extendedmodel\] I work out a non-minimal model which can be consistent with data. In section \[othermodels\] two examples are given of other models ruled out by the EWPT.

![The experimentally allowed region in the $ST$ plane, including contributions “from scalars” and “from cutoff” (see [@barbieri1], section 2). The dashed arrow shows that an extra positive contribution to $T$ is needed in order to make the model consistent with data. In section \[extendedmodel\] it will be shown that such a contribution may come from a suitably extended top sector. This figure is taken from [@barbieri1].[]{data-label="figuraewpt"}](./EWPT){width="60.00000%"}

Summary of previous works {#modelloSO5}
=========================

Making reference to [@contino2] and [@barbieri1] for a detailed description of the model, here I concentrate on quarks. The fermion sector has to be enlarged in such a way that the top is ($SO(5)$ symmetrically) given the right mass $m_t = 171$ GeV, and the new heavy quarks are vector-like in the $v/f \rightarrow 0$ limit. The bottom quark can be considered massless at this level of approximation, while lighter quarks are completely neglected. The minimal way to do this is to enlarge the left-handed top-bottom doublet $q_L$ to a vector (one for each colour) $\Psi_L$ of $SO(5)$, which under $SU(2)_L \times SU(2)_R$ breaks up as $(2,2)+1$. The SM gauge group $G_{SM}=SU(2)_L \times U(1)$ is here given by the $SU(2)_L$ and the $T_3$ of the $SU(2)_R$ of a fixed subgroup $SO(4)=SU(2)_L \times SU(2)_R \subset SO(5)$. The full fermionic content of the third quark generation is now: $$\Psi_L= \left( q= \left( \begin{array}{c} t \\ b\end{array}\right) ,\, X= \left( \begin{array}{c} X^{5/3} \\ X \end{array}\right) , \, T \right)_L \, , \, t_R, X_R= \left( \begin{array}{c} X^{5/3} \\ X \end{array}\right)_R, T_R ,$$ where the needed right-handed states have been introduced in order to give mass to the new fermions. Hypercharges are fixed in order to obtain the correct values of the electric charges. Note that the upper component of the “exotic” $X$ has electric charge $5/3$. In the next section an extended model with fermions in the fundamental representation will be examined. The spinor representation (see e.g. [@contino2]) is ruled out by requiring that the physical left-handed b-quark is a true doublet of $SU(2)_L$ and not an admixture of doublet and singlet, as noted in [@barbieri1] or in [@contino-lett]. The requirement that there be no left-handed charge $-\frac{1}{3}$ singlet to mix with $b_L$ is a sort of “custodial symmetry” which protects the $Zb\overline{b}$ coupling from large corrections ([@agashe-contino]).
The Yukawa Lagrangian of the fermion sector consists of an $SO(5)$ symmetric mass term for the top (this guarantees the absence of quadratic divergences in the contribution to $m_h$, as shown by equation \[diverglogaritm5q\]) and the most general (up to redefinitions) gauge invariant mass terms for the heavy $X$ and $T$: $$\label{lagr5iniziale} \mathcal{L}_{top}= \lambda_1 \overline{\Psi}_L \phi t_R + \lambda_2 f \overline{T}_L T_R + \lambda_3 f \overline{T}_L t_R +M_X \overline{X}_L X_R + h.c.,$$ where $\phi$ is the scalar 5-plet containing the Higgs field. Note that the adjoint representation of $SO(5)$ splits into the adjoint representation of $SO(4)$ plus a $(4)$ of $SO(4)$: this fact guarantees that the Goldstone bosons of the $SO(5)\rightarrow SO(4)$ breaking have the quantum numbers of the Higgs doublet. Up to rotations that preserve all the quantum numbers, and with a convenient definition of the various parameters, we can rewrite \[lagr5iniziale\] in the form: $$\label{yukawaminimal} \mathcal{L}_{top}=\overline{q}_L H^c (\lambda_t t_R + \lambda_T T_R) + \overline{X}_L H (\lambda_t t_R + \lambda_T T_R) + M_T \overline{T}_L T_R + M_X \overline{X}_L X_R + h.c.$$ Through diagonalization of the mass matrix we obtain the physical fields, in terms of which it is possible to evaluate the physical quantities. For example, let us check the cancellation of the quadratically divergent contribution to
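The diagonalization step can be illustrated numerically. The $3\times3$ charge-$2/3$ mass matrix below is one possible reading of \[yukawaminimal\] after electroweak symmetry breaking with $\langle H\rangle \to v$; the normalization convention and the parameter values are assumptions chosen for illustration, not numbers from this paper.

```python
import numpy as np

v = 174.0                      # GeV, assumed EW vev normalization
lam_t, lam_T = 1.0, 1.2        # illustrative Yukawa couplings
M_T, M_X = 800.0, 600.0        # heavy vector-like masses, GeV

# Rows: (t_L, X_L, T_L); columns: (t_R, X_R, T_R), read off the Yukawa
# Lagrangian above: both t_L and X_L couple to t_R and T_R via the Higgs,
# while M_T and M_X are the vector-like mass terms.
M = np.array([
    [lam_t * v, 0.0, lam_T * v],
    [lam_t * v, M_X, lam_T * v],
    [0.0,       0.0, M_T      ],
])

# Physical masses are the singular values: numpy returns U, s, V^dagger with
# M = U @ diag(s) @ V^dagger, where U, V rotate to the mass eigenstates.
u_rot, masses, vh_rot = np.linalg.svd(M)
print("mass eigenvalues (GeV):", np.sort(masses))
```

The lightest eigenvalue plays the role of the top, and its sensitivity to $\lambda_t$, $\lambda_T$, $M_T$, $M_X$ can be scanned in the same way.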
{ "pile_set_name": "ArXiv" }
---
abstract: 'The quantum Zakharov system in three spatial dimensions, an associated Lagrangian description, and its basic conservation laws are derived. In the adiabatic and semiclassical case, the quantum Zakharov system reduces to a quantum modified vector nonlinear Schrödinger (NLS) equation for the envelope electric field. The Lagrangian structure for the resulting vector NLS equation is used to investigate the time dependence of the Gaussian-shaped localized solutions, via the Rayleigh-Ritz variational method. The formal classical limit is considered in detail. The quantum corrections are shown to prevent the collapse of localized Langmuir envelope fields, in both two and three spatial dimensions. Moreover, the quantum terms can produce an oscillatory behavior of the width of the approximate Gaussian solutions. The variational method is shown to preserve the essential conservation laws of the quantum modified vector NLS equation.'
author:
- 'F. Haas'
- 'P. K. Shukla'
title: Quantum and classical dynamics of Langmuir wave packets
---

Introduction
============

The Zakharov system [@Zakharov], describing the coupling between Langmuir and ion-acoustic waves, is one of the basic plasma models; see Refs. [@Goldman; @Thornhill] for reviews. Recently [@Garcia], a quantum modified Zakharov system was derived by means of the quantum plasma hydrodynamic model [@Haas]–[@HaasQMHD]. In this context, enhancement of the quantum effects was then shown, [*e.g.*]{}, to suppress the four-wave decay instability. Subsequently [@Marklund], a kinetic treatment of the quantum Zakharov system showed that the modulational instability growth rate can be increased in comparison to the classical case, for partially coherent Langmuir wave electric fields. Also [@Haasvar], a variational formalism was obtained and used to study the radiation of localized structures described by the quantum Zakharov system. Bell-shaped electric field envelopes of electron plasma oscillations in dense quantum plasmas obeying Fermi statistics were analyzed in Ref. [@Shukla]. More mathematically oriented works on the quantum Zakharov equations concern their Lie symmetry group [@Tang] and the derivation of exact solutions [@Abdou]–[@Yang]. Finally, there is evidence of hyperchaos in the reduced temporal dynamics arising from the quantum Zakharov equations [@Misra]. All these papers refer to the quantum Zakharov equations in one spatial dimension only. In the present work, we extend the quantum Zakharov system to fully three-dimensional space, allowing also for the magnetic field perturbation.

In the classical case, both heuristic arguments and numerical simulations indicate that the ponderomotive force can produce finite-time collapse of Langmuir wave packets in two or three dimensions [@Goldman], [@Zakharov2; @Zakharov3]. This is in contrast to the one-dimensional case, whose solutions are smooth for all time. A dynamic rescaling method was used for the time evolution of electrostatic self-similar and asymptotically self-similar solutions in two and three dimensions, respectively [@Landman]. Allowing for transverse fields shows that singular solutions of the resulting vector Zakharov equations are weakly anisotropic, for a large class of initial conditions [@Papanicolaou]. The electrostatic nonlinear collapse of Langmuir wave packets in ionospheric and laboratory plasmas has been observed [@Dubois; @Robinson].
Also, the collapse of Langmuir wave packets in beam-plasma experiments verifies the basic concepts of strong Langmuir turbulence, as introduced by Zakharov [@Cheung]. The coupled longitudinal and transverse modes in classical strong Langmuir turbulence have been less studied [@Alinejad]–[@Li], as has the intrinsically magnetized case [@Pelletier], which can lead to upper-hybrid wave collapse [@Stenflo]. Finally, Zakharov-like equations have been proposed for the electromagnetic wave collapse in a radiation background [@Marklund2].

It is expected that the ponderomotive force causing the collapse of localized solutions in two or three space dimensions could be weakened by the inclusion of quantum effects, making the dynamics less violent. This conjecture is checked after establishing the quantum Zakharov system in higher-dimensional space and using its variational structure in association with a (Rayleigh-Ritz) trial function method. The manuscript is organized in the following fashion. In Section 2, the quantum Zakharov system in three spatial dimensions is derived by means of the usual two-time-scale method applied to the fully 3D quantum hydrodynamic model. In Section 3, the 3D quantum Zakharov system is shown to be described by a Lagrangian formalism. The basic conservation laws are then also derived. When the density fluctuations are slow enough in time that an adiabatic approximation is possible, and treating the quantum term of the low-frequency equation as a perturbation, a quantum modified vector nonlinear Schrödinger equation for the envelope electric field is obtained. In Section 4, the variational structure is used to analyze the temporal dynamics of localized (Gaussian) solutions of this quantum NLS equation, through the Rayleigh-Ritz method, in two spatial dimensions. Section 5 follows the same strategy, extended to fully 3D space. Special attention is paid to the comparison between the classical and quantum cases, which show considerable qualitative and quantitative differences. Section 6 contains the conclusions.

Quantum Zakharov equations in $3+1$ dimensions
==============================================

The starting point for the derivation of the electromagnetic quantum Zakharov equations is the quantum hydrodynamic model for an electron-ion plasma, Equations (20)-(28) of Ref. [@HaasQMHD]. For the electron fluid pressure $p_e$, consider the equation of state for spin $1/2$ particles at zero temperature, $$\label{e1} p_e = \frac{3}{5}\,\frac{m_{e}v_{Fe}^2 \,n_{e}^{5/3}}{n_{0}^{2/3}} \,,$$ where $m_e$ is the electron mass, $v_{Fe}$ is the Fermi electron thermal speed, $n_e$ is the electron number density and $n_0$ is the equilibrium particle number density, both for electrons and ions. Pressure and quantum effects are neglected for the ions, due to their larger mass. Also due to the larger ion mass, it is possible to introduce a two-time-scale decomposition, $n_e = n_0 + \delta n_s + \delta n_f$, $n_i = n_0 + \delta n_s$, ${\bf u}_e = \delta{\bf u}_s + \delta{\bf u}_f$, ${\bf u}_i = \delta{\bf u}_s$, ${\bf E} = \delta{\bf E}_s + \delta{\bf E}_f$, ${\bf B} = \delta{\bf B}_f$, where the subscripts $s$ and $f$ refer to slowly and rapidly changing quantities, respectively. Also, ${\bf u}_e$ is the electron fluid velocity, $n_i$ the ion number density, ${\bf u}_i$ the ion fluid velocity, ${\bf E}$ the electric field, and ${\bf B}$ the magnetic field.
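For orientation, the equation of state (\[e1\]) and the basic plasma scales are easy to evaluate numerically. The sketch below works in CGS units with an illustrative dense-plasma density; the relations $E_F = (\hbar^2/2m_e)(3\pi^2 n_0)^{2/3}$ and $v_{Fe} = (2E_F/m_e)^{1/2}$ are standard degenerate-gas conventions assumed here.

```python
import math

hbar = 1.055e-27   # erg s
m_e = 9.109e-28    # g
q_e = 4.803e-10    # statcoulomb
n0 = 1e23          # cm^-3, assumed equilibrium density

# Fermi energy and Fermi speed of a zero-temperature electron gas
E_F = (hbar**2 / (2 * m_e)) * (3 * math.pi**2 * n0) ** (2.0 / 3.0)
v_Fe = math.sqrt(2 * E_F / m_e)

def p_e(n_e):
    """Electron pressure of Eq. (e1): (3/5) m_e v_Fe^2 n_e^{5/3} / n0^{2/3}."""
    return 0.6 * m_e * v_Fe**2 * n_e ** (5.0 / 3.0) / n0 ** (2.0 / 3.0)

omega_pe = math.sqrt(4 * math.pi * n0 * q_e**2 / m_e)  # electron plasma frequency
print(f"v_Fe = {v_Fe:.3e} cm/s, omega_pe = {omega_pe:.3e} rad/s")
print(f"p_e(n0) = {p_e(n0):.3e} erg/cm^3")
```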
Notice that it is assumed that there is no slow contribution to the magnetic field, a restriction which allows one to get ${\bf B} = (m_{e}/e)\,\nabla\times\delta{\bf u}_f$ (see Equation (2.21) of Ref. [@Thornhill]), where $-e$ is the electron charge. Including a slow contribution to the magnetic field could be an important improvement, but this is outside the scope of the present work. Following the usual approximations [@Thornhill; @Garcia], the quantum corrected 3D Zakharov equations read $$\begin{aligned} \label{e2} 2i\omega_{pe}\frac{\partial{\bf\tilde{E}}}{\partial t} &-& c^2\, \nabla\times(\nabla\times{\bf\tilde{E}}) + v_{Fe}^2 \nabla(\nabla\cdot{\bf\tilde{E}}) = \nonumber \\ &=& \frac{\delta n_s}{n_0} \,\omega_{pe}^2 \,{\bf\tilde{E}} + \frac{\hbar^2}{4m_{e}^2}\nabla\left[\nabla^2 (\nabla\cdot{\bf\tilde{E}})\right] \,, \\ \label{e3} \frac{\partial^2 \delta n_s}{\partial t^2} &-& c_{s}^2 \,\nabla^2 \delta n_s - \frac{\varepsilon_0}{4m_i}\nabla^2 (|{\bf\tilde{E}}|^2) + \frac{\hbar^2}{4m_e m_i} \,\nabla^4 \delta n_s = 0 \,.\end{aligned}$$ Here ${\bf\tilde{E}}$ is the slowly varying envelope electric field defined via $${\bf E}_f = \frac{1}{2}\,({\bf\tilde{E}} \, e^{-i\omega_{pe}t} + {\bf\tilde{E}}^{*} \, e^{i\omega_{pe}t}) \,,$$ where $\omega_{pe}$ is the electron plasma frequency. Also, in Eqs. (\[e2\]–\[e3\]) $c$ is the speed of light in vacuum, $\hbar$ the reduced Planck constant,
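One immediate consequence of the low-frequency equation (\[e3\]) can be read off by switching off the ponderomotive source and inserting a plane wave: the quantum term turns the ion-acoustic dispersion into $\omega^2 = c_s^2 k^2 + \hbar^2 k^4/(4 m_e m_i)$. The sketch below locates the wavenumber where the quantum correction becomes comparable to the classical one; the parameter values (protons, an assumed $c_s$) are illustrative only.

```python
import math

hbar = 1.055e-27
m_e = 9.109e-28
m_i = 1.673e-24          # proton mass assumed
c_s = 1e8                # illustrative ion-acoustic speed, cm/s

def omega(k):
    """Quantum-modified ion-acoustic dispersion implied by Eq. (e3)."""
    return math.sqrt(c_s**2 * k**2 + hbar**2 * k**4 / (4 * m_e * m_i))

for k in (1e8, 1e9, 1e10):   # cm^-1
    w = omega(k)
    quantum = hbar**2 * k**4 / (4 * m_e * m_i)
    print(f"k = {k:.0e}: omega = {w:.3e} rad/s, "
          f"quantum fraction = {quantum / w**2:.2e}")
# The crossover sits at k ~ 2 c_s sqrt(m_e m_i) / hbar, ~7e9 cm^-1 here.
```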
{ "pile_set_name": "ArXiv" }
---
abstract: 'We undertake a regularity analysis of the solutions to initial/boundary value problems for the (third-order in time) Moore-Gibson-Thompson (MGT) equation. The key to the present investigation is that the MGT equation falls within a large class of systems with memory, with an affine term depending on a parameter. For this model equation a regularity theory is provided, which is also of independent interest; it is shown in particular that the effect of boundary data that are square integrable (in time and space) is the same as that displayed by wave equations. Then, a general picture of the (interior) regularity of solutions corresponding to homogeneous boundary conditions is specifically derived for the MGT equation in various functional settings. This confirms the gain of one unit in space regularity for the time derivative of the unknown, a feature that sets the MGT equation apart from other PDE models for wave propagation. The adopted perspective and method of proof enable us to attain as well the (sharp) regularity of boundary traces.'
address:
- ' Francesca Bucci, Università degli Studi di Firenze, [*Dipartimento di Matematica e Informatica*]{}, [Via S. Marta 3, 50139 Firenze, ITALY]{} '
- 'Luciano Pandolfi, Politecnico di Torino, [*Dipartimento di Scienze Matematiche “Giuseppe Luigi Lagrange”*]{}, [Corso Duca degli Abruzzi 24, 10129 Torino, ITALY]{} '
author:
- Francesca Bucci
- Luciano Pandolfi
title: 'On the regularity of solutions to the Moore-Gibson-Thompson equation: a perspective via wave equations with memory'
---

Introduction
============

The Jordan-Moore-Gibson-Thompson equation is a nonlinear Partial Differential Equation (PDE) model which describes the acoustic velocity potential in ultrasound wave propagation; the use of the constitutive Cattaneo law for the heat flux, in place of the Fourier law, accounts for its being of third order in time. The quasilinear PDE is $$\label{Eq:quasilineare} \tau \psi_{ttt} + \psi_{tt}-c^2\Delta \psi - b\Delta \psi_t= \frac{\partial}{\partial t}\Big(\frac1{c^2}\frac{B}{2A}\psi^2_t+|\nabla \psi|^2\Big)$$ in the unknown $\psi=\psi(t,x)$, which is the acoustic velocity potential (so that $-\nabla \psi$ is the acoustic particle velocity), $A$ and $B$ being suitable constants; [*cf.*]{} Moore & Gibson [@moore-gibson_1960], Thompson [@thompson_1972], Jordan [@jordan_2009]. For a brief overview of nonlinear acoustics, along with a list of relevant references, see the recent paper by Kaltenbacher [@kalt_2015]. Aiming at the understanding of the nonlinear equation, a great deal of attention has recently been devoted to its linearization—referred to in the literature as the Moore-Gibson-Thompson (MGT) equation—whose mathematical analysis is also of independent interest, as it already poses several questions and challenges. Let $\Omega\subset \mathbb{R}^n$ be a region with smooth ($C^2$) boundary $\Gamma:=\partial\Omega$. (It is a natural conjecture that existence results for wave equations in non-smooth domains ([*cf.*]{} [@Grisvard]) can be extended to wave equations with memory and to the MGT equation, by using the methods we present in this paper.) We consider the MGT equation $$\label{e:mgt} \tau u_{ttt}+\alpha u_{tt} -c^2 \Delta u -b \Delta u_t =0 \qquad \text{in $(0,T)\times\Omega$}$$ in the unknown $u=u(t,x)$, $t\ge 0$, $x\in \Omega$, representing the acoustic velocity potential or, alternatively, the acoustic pressure (see [@kalt-las-posp_2012] for a discussion of this issue).
The coefficients $c$, $b$, $\alpha$ are constant and positive; they represent the speed and diffusivity of sound ($c$, $b$) and a viscosity parameter ($\alpha$), respectively. For simplicity we set $\tau=1$ throughout the paper. Equation is supplemented with initial and boundary conditions: $$\begin{aligned} & u(0,\cdot)=u_0\,,\; u_t(0,\cdot)=u_1\,,\; u_{tt}(0,\cdot)=u_2(x)\,, & \text{in $\Omega$} \label{e:IC} \\[1mm] & {{\mathcal T}}u(t,\cdot) =g(t,\cdot) & \text{on $(0,T)\times\Gamma$}; \label{e:BC}\end{aligned}$$ ${{\mathcal T}}$ denotes here a boundary operator, which—for the sake of simplicity—associates to a function either its trace on $\Gamma$ or its outward normal derivative $\frac{\partial}{\partial \nu}\big|_\Gamma$ (it would be the [*conormal*]{} derivative in the case of a more general elliptic operator than the Laplacian).

The original studies of the MGT equation with homogeneous (Dirichlet or Neumann) boundary data carried out in Kaltenbacher [*et al.*]{} [@kalt-etal_2011] and Marchand [*et al.*]{} [@marchand-etal_2012] establish appropriate functional settings for semigroup well-posedness, as well as stability and spectral properties of the dynamics, depending on the parameter values. They obtain, in particular,

1. that assuming $b>0$ the linear dynamics is governed by a strongly continuous [*group*]{} in the function space $H^1_0(\Omega)\times H^1_0(\Omega)\times L^2(\Omega)$ (Dirichlet BC), or $H^1(\Omega)\times H^1(\Omega)\times L^2(\Omega)$ (Neumann BC);

2. that in the case $b=0$ the associated initial/boundary value problems are ill-posed ([*cf.*]{} Remark \[r:role-of-b\]);

3. that the parameter $\gamma=\alpha - \tau c^2/b$ is a threshold of stability/instability: it must be positive if the property of uniform stability is required.

The critical role of $\gamma$ for a dissipative behaviour was recently pointed out also in Dell’Oro and Pata [@delloro-pata_2016], within the framework of viscoelasticity. (We add that linear and true nonlinear variants of the MGT equation including an [*additional*]{} memory term have been the object of recent investigation; see [@las-jee_2017] and references therein.)

Our interest lies in studying the regularity of the mapping $$(u_0,u_1,u_2,g)\longmapsto u$$ that associates to initial and boundary data—taken in appropriate spaces—the corresponding solution $u=u(t,x)$ to the initial/boundary value problem (IBVP) --. (We note that the time and, more often, the space variable $x$ will generally not be explicit, unless needed for the sake of clarity.) As will be shown in the paper, it is the embedding of the equation in a general class of integro-differential equations (depending on a parameter) that sparks our method of proof for the regularity analysis of the associated initial/boundary value problems. Indeed, the MGT equation is a special instance of the following wave equation with persistent memory, $$\label{e:memory} u_{tt}-b \Delta u=-b\gamma \int_0^t N(t-s) \Delta u(s)\,ds + F(t)\xi\,,$$ which displays an affine term depending on a suitable $\xi$, and which will be supplemented with (initial and boundary) data $$\label{eq:dataDIe:memory} u(0)=u_0\,,\ u_t(0)=u_1\,, \qquad {{\mathcal T}}u=g\,.$$ The assumptions on the real-valued functions $N(t)$, $F(t)$ and on $\xi$ are specified later; see Theorem \[t:sample\].
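The threshold in item 3 above can be probed with a simple per-mode eigenvalue computation. Replacing $\Delta$ by $-k^2$ (Dirichlet eigenfunctions on an interval, say) turns the MGT equation with $\tau=1$ into the scalar ODE $u'''+\alpha u''+bk^2u'+c^2k^2u=0$, whose Routh-Hurwitz condition is exactly $\alpha b > c^2$, i.e. $\gamma>0$. The following sketch (illustrative coefficients, not values from the paper) confirms this numerically.

```python
import numpy as np

def spectral_abscissa(alpha, b, c, modes=range(1, 200)):
    """Largest real part of the eigenvalues over the given Fourier modes
    for u''' + alpha u'' + b k^2 u' + c^2 k^2 u = 0 (Delta -> -k^2, tau = 1)."""
    worst = -np.inf
    for k in modes:
        k2 = float(k * k)
        A = np.array([              # companion matrix in (u, u', u'')
            [0.0, 1.0, 0.0],
            [0.0, 0.0, 1.0],
            [-c**2 * k2, -b * k2, -alpha],
        ])
        worst = max(worst, np.linalg.eigvals(A).real.max())
    return worst

b, c = 1.0, 1.0
for alpha in (1.5, 1.0, 0.5):       # gamma = alpha - c^2/b = 0.5, 0, -0.5
    s = spectral_abscissa(alpha, b, c)
    print(f"gamma = {alpha - c**2 / b:+.1f}: max Re(lambda) = {s:+.4f}")
# gamma > 0 gives a negative abscissa (uniform decay), gamma = 0 is marginal,
# gamma < 0 produces growing modes, matching the stated threshold.
```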
As it will be apparent below, the parameter $\xi$ includes the component $u_2$ of initial data $(u_0,u_1,u_2)$ for the MGT equation, while - reduces to the MGT equation (with -) when $$N(t)=F(t)=e^{-\alpha t}\,,\qquad \xi=u_2-b\Delta u_0\,.$$ The obtained regularity results will follow combining the (interior and trace) regularity theory for wave equations with non-homogenous boundary data—the Neumann case being the most challenging (see [@las-trig_wave1], and the optimal result of [@tataru_1998])—with the methods developed in [@PandLIBRO] for equations with persistent memory. In order to carry out a regularity analysis of the model equation with memory we shall use the trick of MacCamy [@maccamy_1977] and the theory of Volterra equations. For equations with memory of the
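Since the reduction to the memory form is exact, it can be checked numerically mode by mode. The sketch below replaces $\Delta$ by $-k^2$ (a single Dirichlet Fourier mode), converts the convolution with $N(t)=e^{-\alpha t}$ into the auxiliary ODE $w'=u-\alpha w$, $w(0)=0$, and integrates both formulations side by side; $\tau=1$ as in the text, and the parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, b, c, k2 = 2.0, 1.0, 1.0, 4.0
gamma = alpha - c**2 / b              # the parameter gamma of the text
u0, u1, u2 = 1.0, 0.3, -0.2
xi = u2 + b * k2 * u0                 # xi = u_2 - b*Delta*u_0 with Delta -> -k^2

def mgt(t, y):
    """MGT per mode: u''' + alpha u'' + b k^2 u' + c^2 k^2 u = 0."""
    u, du, ddu = y
    return [du, ddu, -alpha * ddu - b * k2 * du - c**2 * k2 * u]

def memory(t, y):
    """Memory form per mode, with w(t) = int_0^t e^{-alpha(t-s)} u(s) ds."""
    u, du, w = y
    ddu = -b * k2 * u + b * gamma * k2 * w + np.exp(-alpha * t) * xi
    return [du, ddu, u - alpha * w]

t_eval = np.linspace(0.0, 10.0, 201)
sol_a = solve_ivp(mgt, (0, 10), [u0, u1, u2], t_eval=t_eval,
                  rtol=1e-10, atol=1e-12)
sol_b = solve_ivp(memory, (0, 10), [u0, u1, 0.0], t_eval=t_eval,
                  rtol=1e-10, atol=1e-12)
print("max |u_MGT - u_memory| =", np.abs(sol_a.y[0] - sol_b.y[0]).max())
# The discrepancy is at integrator-tolerance level, confirming the identity
# N(t) = F(t) = exp(-alpha t), xi = u_2 - b*Delta*u_0 stated above.
```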
{ "pile_set_name": "ArXiv" }