--- abstract: 'Video image datasets are playing an essential role in the design and evaluation of traffic vision algorithms. Nevertheless, a longstanding inconvenience concerning image datasets is that manually collecting and annotating large-scale diversified datasets from real scenes is time-consuming and prone to error. For that reason, virtual datasets have begun to function as a proxy for real datasets. In this paper, we propose to construct large-scale artificial scenes for traffic vision research and generate a new virtual dataset called “ParallelEye”. First of all, street map data is used to build a 3D scene model of Zhongguancun Area, Beijing. Then, computer graphics, virtual reality, and rule modeling technologies are utilized to synthesize large-scale, realistic virtual urban traffic scenes, in which the fidelity and geography match the real world well. Furthermore, the Unity3D platform is used to render the artificial scenes and generate accurate ground-truth labels, e.g., semantic/instance segmentation, object bounding box, object tracking, optical flow, and depth. The environmental conditions in the artificial scenes can be controlled completely. As a result, we present a viable implementation pipeline for constructing large-scale artificial scenes for traffic vision research. The experimental results demonstrate that this pipeline is able to generate photorealistic virtual datasets with low modeling time and highly accurate labeling.' author: - 'Xuan Li, Kunfeng Wang, *Member, IEEE*, Yonglin Tian, Lan Yan, and Fei-Yue Wang, *Fellow, IEEE*[^1][^2][^3][^4][^5][^6]' title: 'The ParallelEye Dataset: Constructing Large-Scale Artificial Scenes for Traffic Vision Research' --- Introduction ============ Publicly available video image datasets have received much attention in recent years, due to their indispensability in the design and evaluation of computer vision algorithms [@Geiger2013]. 
In general, a computer vision algorithm needs a large amount of labeled images for training and evaluation. Datasets can be divided into two types: unlabeled datasets used for unsupervised learning and labeled datasets used for supervised learning. However, manually annotating images is time-consuming and labor-intensive, and participants often lack professional knowledge, making some annotation tasks difficult to execute. Experts are always scarce and must be properly identified. As is well known, human annotators are subjective, and their annotations should be re-examined if two or more annotators disagree about the label of one entity. By contrast, the computer is objective in processing data and particularly good at batch processing, so why not let the computer annotate the images automatically? At present, most publicly available datasets are obtained from real scenes. As the computer vision field enters the big data era, researchers have begun to look for better ways to annotate large-scale datasets [@Handa2014]. At the same time, the development of virtual datasets has a long history, starting at least from Bainbridge’s work [@Bainbridge2007]. Bainbridge used Second Life and World of Warcraft as two distinct examples of virtual worlds to predict the scientific research potential of virtual worlds, and introduced virtual worlds into many research fields that scientists are now exploring, including sociology, computer science, and anthropology. In fact, synthetic data has been used for decades to benchmark the performance of computer vision algorithms. The use of synthetic data has been particularly significant in object detection \[4\], \[5\] and optical flow estimation \[6\]-\[8\], but most virtual data are not photorealistic or akin to real-world data, and lack sufficient diversity [@Ros2015]. The fidelity of some virtual data is close to the real world [@Prendinger2013]. 
However, the synthesized virtual worlds are seldom equivalent to the real world in geographic position, and the virtual images are seldom annotated automatically. Richter *et al.* [@Richter2016] used a commercial game engine to extract virtual images, with no access to the source code or the content. The SYNTHIA dataset [@Ros2016] provided a realistic virtual city as well as synthetic images with automatically generated pixel-level annotations, but that dataset lacks other annotations such as object bounding box and object tracking. Gaidon *et al.* [@Gaidon2016] proposed a virtual dataset called “Virtual KITTI" as a proxy for tracking algorithm evaluation. While this dataset was cloned from “KITTI", it cannot be extended easily to arbitrary traffic networks. Due to the above limitations, new virtual datasets that match the real world and provide detailed ground truth annotations are still desirable. ![image](fig/fig1.pdf){width="7in"} Manually annotating pixel-level semantics for images is time-consuming and not accurate enough. For example, annotating high-quality semantics with 10-20 categories in one image usually takes 30-60 minutes [@Kundu2014]. This is known as the “curse of dataset annotation” [@Xie2016]. The more detailed the semantics, the more labor-intensive the annotation process. As a result, many datasets do not provide semantic segmentation annotations. For example, ImageNet [@Karpathy2014],[@Russakovsky2015] has 14 million images, of which more than one million have a definite class and are annotated with object bounding boxes for object recognition. However, ImageNet does not have semantic segmentation annotations. Some datasets provide only limited semantic segmentation annotations. 
For example, NYU-Depth V2 [@Silberman2012] has 1449 densely labelled images, KITTI [@Geiger2013] has 547 images, CamVid [@Brostow2009],[@Browstow2008] has 600 images, Urban LabelMe [@Russell2008] has 942 images, and Microsoft COCO [@Lin2014] has three hundred thousand images. These datasets play an important role in the study of semantic segmentation. However, they cannot be used directly in intelligent transportation, especially in automobile navigation, because the number of labeled images is insufficient and the annotated categories differ across datasets. Currently, computer vision algorithms that exploit context for pattern recognition would benefit from datasets with many annotated categories embedded in images from complex scenes. Such datasets should contain a wide variety of environmental conditions, with annotated object instances co-occurring in the same scenes. However, real scenes are unrepeatable and the captured images are expensive to annotate, making it difficult to obtain large-scale, diversified datasets with precise annotations. To solve these problems, this paper proposes a pipeline for constructing artificial scenes and generating virtual images. First of all, we use map data to build the 3D scene model of Zhongguancun Area, Beijing. Then, we use computer graphics, virtual reality, and rule modeling technologies to create a realistic, large-scale virtual urban traffic scene, in which the fidelity and geographic information match the real world well. Furthermore, we use the Unity3D development platform to render the scene and automatically annotate ground truth labels including pixel-level semantic/instance segmentation, object bounding box, object tracking, optical flow, and depth. The environmental conditions in the artificial scenes can be controlled completely. As a result, we generate a new virtual image dataset, called “ParallelEye" (see Fig. 1). 
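Several of the ground-truth modalities listed above can be derived mechanically from the rendered images, which is what makes automatic annotation attractive. As one illustration, object bounding boxes can be read off an instance-segmentation mask. The following sketch assumes a simple integer-id mask encoding; it is an illustrative reconstruction, not the authors' Unity3D implementation:

```python
import numpy as np

def boxes_from_instance_mask(mask):
    """Derive axis-aligned bounding boxes from an instance-id mask.

    mask: 2D integer array; 0 = background, each positive id = one object.
    Returns {id: (x_min, y_min, x_max, y_max)} in pixel coordinates.
    """
    boxes = {}
    for obj_id in np.unique(mask):
        if obj_id == 0:          # skip background pixels
            continue
        ys, xs = np.nonzero(mask == obj_id)
        boxes[int(obj_id)] = (int(xs.min()), int(ys.min()),
                              int(xs.max()), int(ys.max()))
    return boxes

# Toy 4x6 mask with two object instances.
m = np.array([[0, 1, 1, 0, 0, 0],
              [0, 1, 1, 0, 2, 2],
              [0, 0, 0, 0, 2, 2],
              [0, 0, 0, 0, 0, 0]])
print(boxes_from_instance_mask(m))  # {1: (1, 0, 2, 1), 2: (4, 1, 5, 2)}
```

Because the renderer knows the exact identity of every pixel, such labels are exact by construction, with none of the subjectivity of human annotation discussed earlier.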
We will build a website and make this dataset publicly available before the publication of this paper. The experimental results demonstrate that our proposed implementation pipeline is able to generate photorealistic virtual images with low modeling time and high fidelity. ![Basic framework and architecture for parallel vision [@KWang2016].[]{data-label="fig_sim"}](fig/fig2.pdf){width="3.3in"} The rest of this paper is organized as follows. Section II introduces the significance of parallel vision and virtual datasets. Section III presents our approach to constructing artificial scenes and generating virtual images with ground-truth labels. Section IV reports the experimental results and analyzes the performance. Finally, concluding remarks are made in Section V. Parallel Vision and Virtual Dataset =================================== Parallel vision \[23\]-\[25\] is an extension of the ACP (Artificial systems, Computational experiments, and Parallel execution) theory \[26\]-\[30\] into the computer vision field. In parallel vision, photo-realistic artificial scenes are used to model and represent complex real scenes, computational experiments are utilized to learn and evaluate a variety of vision models, and parallel execution is conducted to online optimize the vision system and realize perception and understanding of complex scenes. The basic framework and architecture for parallel vision [@KWang2016] is shown in Fig. 2. Based on the parallel vision theory, this paper constructs a large-scale virtual urban network and synthesizes a large number of realistic images. The first stage of parallel vision is to construct photorealistic artificial scenes by simulating a variety of environmental conditions occurring in real scenes, and accordingly to synthesize large-scale diversified datasets with precise annotations generated automatically. 
Generally speaking, the construction of artificial scenes can be regarded as “video game design", i.e., using computer-animation-like techniques to model the artificial scenes. The main technologies used in this stage include computer graphics, virtual reality, and micro-simulation. Computer graphics and computer vision, on the whole, can be thought of as a pair of forward and inverse problems. The goal of computer graphics is to synthesize image measurements given the description of world parameters according to physics-based image formation principles (forward inference), while the focus of computer vision is to map the pixel measurements to 3D scene parameters and semantics (inverse inference). Their goals appear opposite, but they can converge to a common point: parallel vision. From the parallel vision perspective, we design the ParallelEye dataset. ParallelEye is synthesized by referring to the urban network of Zhongguancun Area, Beijing. Using OpenStreetMap (OSM), an urban network with length 3km and width
--- abstract: 'Traditional indoor scene synthesis methods often take a two-step approach: object selection and object arrangement. Current state-of-the-art object selection approaches are based on convolutional neural networks (CNNs) and can produce realistic scenes for a single room. However, they cannot be directly extended to synthesize style-compatible scenes for multiple rooms with different functions. To address this issue, we treat the object selection problem as combinatorial optimization based on a Labeled LDA (L-LDA) model. We first calculate occurrence probability distribution of object categories according to a topic model, and then sample objects from each category considering their function diversity along with style compatibility, while regarding not only separate rooms, but also associations among rooms. User study shows that our method outperforms the baselines by incorporating multi-function and multi-room settings with style constraints, and sometimes even produces plausible scenes comparable to those produced by professional designers.' author: - Yu He - Yun Cai - Yuanchen Guo - Zhengning Liu - Shaokui Zhang - Songhai Zhang - Hongbo Fu - Shengyong Chen bibliography: - 'sample-bibliography.bib' title: 'Style-compatible Object Recommendation for Multi-room Indoor Scene Synthesis' --- =1 <ccs2012> <concept> <concept\_id>10010147.10010178.10010187.10010197</concept\_id> <concept\_desc>Computing methodologies Spatial and physical reasoning</concept\_desc> <concept\_significance>500</concept\_significance> </concept> <concept> <concept\_id>10010147.10010371.10010396</concept\_id> <concept\_desc>Computing methodologies Shape modeling</concept\_desc> <concept\_significance>500</concept\_significance> </concept> </ccs2012> ![image](images/page1.jpg){width="7in"}
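The selection step described in this abstract, drawing object categories according to the occurrence distribution produced by the topic model before sampling concrete objects, can be sketched as follows. Everything here (the room types, category names, probabilities, and the plain categorical sampling) is an illustrative assumption standing in for the paper's actual L-LDA pipeline, not its implementation:

```python
import random

# Hypothetical per-room occurrence probabilities of object categories,
# standing in for the distributions an L-LDA topic model would produce.
room_topics = {
    "bedroom":     {"bed": 0.45, "wardrobe": 0.25, "desk": 0.20, "sofa": 0.10},
    "living_room": {"sofa": 0.40, "tv_stand": 0.30, "coffee_table": 0.30},
}

def sample_categories(room, k, seed=0):
    """Sample k object categories for a room, weighted by topic probability."""
    rng = random.Random(seed)
    cats, probs = zip(*room_topics[room].items())
    return rng.choices(cats, weights=probs, k=k)

print(sample_categories("bedroom", 3, seed=1))
```

In the paper's full method, the sampled objects would additionally be filtered for style compatibility across rooms; that combinatorial step is omitted here.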
[**DUALITY AND FACTORIZATION THEOREM IN QCD[^1]**]{} I. V. Anikin$^{1 \dag}$, I. O. Cherednikov$^{1, 3}$, N. G. Stefanis$^{2}$ and O. V. Teryaev$^{1}$ [(1) [*Bogoliubov Laboratory of Theoretical Physics, JINR, 141980 Dubna, Russia* ]{}\ (2) [*Institut für Theoretische Physik II, Ruhr-Universität Bochum, D-44780 Bochum, Germany* ]{}\ (3) [*INFN Gruppo collegato di Cosenza, I-87036 Rende, Italy* ]{}\ $\dag$ [*E-mail: anikin@theor.jinr.ru* ]{}]{} **Abstract** We find that in “two-photon”-like processes in the scalar $\varphi^3_E$ model and also in hadron-pair production arising from the collisions of a real (transversely polarized) and a highly virtual, longitudinally polarized, photon in QCD, there is duality between two distinct nonperturbative mechanisms. These two mechanisms, one involving a twist-$3$ Generalized Distribution Amplitude, the other employing a leading-twist Transition Distribution Amplitude, are associated with different regimes of factorization. In the kinematical region, where the two mechanisms overlap, duality is observed for the scalar $\varphi^3_E$ model, while in the QCD case the appearance of duality turns out to be sensitive to the particular nonperturbative model applied and can, therefore, be used as a tool for selecting the most appropriate one. Introduction {#sec:intro} ============ The only known method today of applying QCD in a rigorous way is based on the factorization of the dynamics and the isolation of a short-distance part that becomes accessible to perturbative techniques of quantum field theory (see, [@Efremov-Radyushkin; @Bro-Lep; @Col-Sop-Ste89] and for a review, for instance, [@Ste99] and references cited therein). Then, the conventional systematic way of dealing with the long-distance part is to parametrize it in terms of matrix elements of quark and gluon operators between hadronic states (or the vacuum). 
These matrix elements stem from nonperturbative phenomena and have to be either extracted from experiment or be determined on the lattice. In many phenomenological applications they are usually modeled in terms of various nonperturbative methods or models. Generically, the application of QCD to hadronic processes involves the consideration of hard parton subprocesses and (unknown) nonperturbative functions to describe binding effects. Prominent examples are hard exclusive hadronic processes which involve hadron distribution amplitudes (DAs), generalized distribution amplitudes (GDAs), and generalized parton distributions (GPDs) [@Diehl:2003; @Bel-Rad; @NonforRad; @GPV]. Applying such a framework, collisions of a real and a highly-virtual photon provide a useful tool for studying a variety of fundamental aspects of QCD. Recently, nonperturbative quantities of a new kind were introduced—transition distribution amplitudes (TDAs) [@Frank-Pol; @Pire-Szym; @LPS06]—which are closely related to the GPDs. In contrast to the GDAs, the TDAs appear in the factorization procedure when the Mandelstam variable $s$ is of the same order of magnitude as the large photon virtuality $Q^2$, while $t$ is rather small. Remarkably, there exists a reaction where both amplitude types, GDAs and TDAs, can overlap. This can happen in the fusion of a real and transversely polarized photon with a highly-virtual longitudinally polarized photon, giving rise to a final state which comprises a pair of pions. The key feature of this reaction is that it can potentially follow either path: proceed via twist-$3$ GDAs, or go through the leading-twist TDAs, as illustrated in Fig. \[GDAvsTDA\]. Such an antagonism of alternative factorization mechanisms in this reaction seems extremely interesting both theoretically and phenomenologically and deserves to be studied in detail. 
The intimate relation between these two mechanisms in the production of a vector-meson pair was analyzed in [@PSSW] and it was found that these mechanisms can be selected by means of the different polarizations of the initial-state photon. In contrast, for (pseudo)scalar particles, such as the pions, this effect is absent, enabling us to access the overlap region of both mechanisms and their duality as opposed to their additivity. In this talk, we will report on the possibility for duality between these antagonistic mechanisms of factorization, associated either with GDAs or with TDAs, in the regime where *both* Mandelstam variables $s$ and $t$ are rather small compared to the large photon virtuality $Q^2$. ![Two ways of factorization: via the GDA mechanism and via the TDA mechanism.[]{data-label="GDAvsTDA"}](gamma-gamma-pi-new-blobs3.eps){width="40.00000%"} Regimes of Factorization within the $\varphi^3_E$-model {#sec:fact1} ======================================================= Consider first the factorization of the scalar $\varphi^3_E$ model in Euclidean space. To study the four-particle amplitude in detail, it is particularly useful to employ the $\alpha$-representation—see [@NonforRad]. Then, the contribution of the leading “box” diagram can be written as (details can be found in [@An-Dual]) $$\begin{aligned} \label{Amp1} {\cal A}(s,t,m^2) =-\frac{g^4}{16\pi^2} \int\limits_{0}^{\infty} \frac{\prod\limits_{i=1}^4 d\alpha_i}{D^2} \exp \biggl[ - \frac{1}{D} \left( Q^2 {\alpha_1\alpha_2} + s \alpha_2\alpha_4 + t {\alpha_1\alpha_3} + m^2 D^2 \right)\biggr],\end{aligned}$$ where $m^2$ serves as an infrared (IR) regulator, $s>0$, $t>0$ are the Mandelstam variables in the Euclidean region, and $D=\sum\limits_{i=1}^4 \alpha_i$. Assuming that $q^2=Q^2$ is large compared to the mass scale $m^2$ (which simulates here the typical scale of soft interactions), the amplitude (\[Amp1\]) can indeed be factorized. 
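Note that the representation (\[Amp1\]) possesses an exact $s\leftrightarrow t$ symmetry: relabeling the integration variables $\alpha_1\leftrightarrow\alpha_2$ and $\alpha_3\leftrightarrow\alpha_4$ leaves the measure and $D$ invariant, while the exponent transforms as $$Q^2\,\alpha_1\alpha_2 + s\,\alpha_2\alpha_4 + t\,\alpha_1\alpha_3 \;\longrightarrow\; Q^2\,\alpha_1\alpha_2 + t\,\alpha_2\alpha_4 + s\,\alpha_1\alpha_3,$$ i.e., $s$ and $t$ simply exchange their roles, so that ${\cal A}(s,t,m^2)={\cal A}(t,s,m^2)$. This symmetry makes the correspondence between the GDA-type and TDA-type regimes of factorization manifest already at the level of the full amplitude.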
As regards the other two kinematic variables $s$ and $t$, one can identify three distinct regimes of factorization: (a) $s\ll Q^2$ while $t$ is of order $Q^2$; (b) $t\ll Q^2$ while $s$ is of order $Q^2$; (c) $s,~t\ll Q^2$. **Regime (a)**: The process is going through the s-channel. In this regime, the main contribution in the integral in Eq. (\[Amp1\]) arises from the integration over $\alpha_1$ when $\alpha_1\sim 0$: $$\begin{aligned} \label{GDA-alpha} {\cal A}_{\rm GDA}^{\rm as}(s,t,m^2) =-\frac{g^4}{16\pi^2} \int\limits_{0}^{\infty} \frac{d\alpha_2 \,d\alpha_3 \,d\alpha_4}{D^2_0} \ \exp \left( - s \frac{\alpha_2\alpha_4}{D_0}- m^2 D_0 \right) \left[Q^2 \frac{\alpha_2}{D_0} + t \frac{\alpha_3}{D_0} + m^2 \right]^{-1}\, .\end{aligned}$$ Schematically this means that the propagator, parametrized by $\alpha_1$, can be associated with the partonic (hard) subprocesses, while the remaining propagator constitutes the soft part of the considered amplitude, i.e., the scalar version of the GDA. **Regime (b)**: Here we have to eliminate from the exponential in Eq. (\[Amp1\]) the variables $Q^2$ and $s$, which are large. This can be achieved by integrating over the region $\alpha_2\sim 0$. Performing similar manipulations as in regime (a), we find that the scalar TDA amplitude can be related to the scalar GDA via $ {\cal A}_{\rm TDA}^{\rm as}(s, t, m^2) = {\cal A}_{\rm GDA}^{\rm as}(t, s, m^2) $. **Regime (c)**: The relevant regime to investigate duality is when it happens that both variables $s$ and $t$ are simultaneously small compared to $Q^2$, i.e., when $s,\, t \ll Q^2$. In this case, there are two possibilities to extract the leading $Q^2$-asymptotics, notably, we can either integrate over the region $\alpha_1 \
--- abstract: 'Online learning algorithms are designed to perform in non-stationary environments, but generally there is no notion of a dynamic [*state*]{} to model constraints on current and future actions as a function of past actions. State-based models are common in stochastic control settings, but commonly used frameworks such as Markov Decision Processes (MDPs) assume a known stationary environment. In recent years, there has been a growing interest in combining the above two frameworks and considering an MDP setting in which the cost function is allowed to change arbitrarily after each time step. However, most of the work in this area has been algorithmic: given a problem, one would develop an algorithm almost from scratch. Moreover, the presence of the state and the assumption of an arbitrarily varying environment complicate both the theoretical analysis and the development of computationally efficient methods. This paper describes a broad extension of the ideas proposed by Rakhlin et al. to give a general framework for deriving algorithms in an MDP setting with arbitrarily changing costs. This framework leads to a unifying view of existing methods and provides a general procedure for constructing new ones. Several new methods are presented, and one of them is shown to have important advantages over a similar method developed from scratch via an online version of approximate dynamic programming.' author: - 'Peng Guan[^1]' - 'Maxim Raginsky[^2]' - 'Rebecca M. Willett[^3]' title: | Relax but stay in control: from value to algorithms\ for online Markov decision processes[^4] --- Introduction ============ Markov decision processes, or MDPs for short [@AC_MDP_survey; @Puterman; @LermaLasserreMDP], are a popular framework for sequential decision-making in a dynamic environment. In an MDP, we have states and actions. 
At each time step of the sequential decision-making process, the agent observes the current state and chooses an action, and the system transitions to the next state according to a fixed and known Markov law. The costs incurred by the agent depend both on his action and on the current state. Traditional theory of MDPs deals with the case when both the transition law and the state-action cost function are known in advance. In this case, there are two ways of designing policies [@Bertsekas] – via dynamic programming (where the construction of an optimal policy revolves around the computation of a relative value function), or via the linear programming (LP) approach [@Manne; @Borkar], which reformulates the MDP problem as a “static” linear optimization problem over the so-called state-action polytope [@Puterman]. However, [*a priori*]{} known costs are typically unavailable in practical settings. When neither the transition probability nor the cost functions are known in advance, various reinforcement learning (RL) methods, such as the celebrated $Q$-learning algorithm [@Watkins_Dayan_QLearning; @Tsitsiklis_QLearning] and its variants, can be used to learn an optimal policy in an online regime. However, the key assumptions underlying RL are that the agent is operating in a stochastically stable environment, and that the state-action costs (or at least their expected values with respect to any environmental randomness) do not vary with time. In this paper, instead of considering a fixed or stochastic cost function, we study Markov decision processes where the cost functions are chosen arbitrarily and allowed to change with time. More specifically, we are interested in the [*online MDP*]{} problem: just as in the usual online learning framework [@Robbins_compound; @Hannan; @PLG], the one-step cost functions form an arbitrarily varying sequence, and the cost function corresponding to each time step is revealed to the agent after an action has been taken. 
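The dynamic-programming route mentioned above, computing a relative value function for a known average-cost MDP, can be illustrated with a minimal relative value iteration. This is a standard textbook scheme shown only as background for the online setting studied in the paper; the toy transition and cost arrays below are invented for illustration:

```python
import numpy as np

def relative_value_iteration(P, c, iters=500):
    """Relative value iteration for an average-cost MDP with known dynamics.

    P: array (A, S, S) with P[a, x, y] = probability of moving x -> y under action a.
    c: array (S, A) with c[x, a] = one-step cost.
    Returns (average-cost estimate, relative value function h).
    """
    h = np.zeros(c.shape[0])
    gain = 0.0
    for _ in range(iters):
        # Bellman backup: q[x, a] = c[x, a] + E[h(next state)].
        q = c + np.einsum('axy,y->xa', P, h)
        h_new = q.min(axis=1)
        gain = h_new[0]          # normalize at reference state 0
        h = h_new - gain
    return gain, h

# Toy 2-state, 2-action MDP: action 0 drives the system to state 1,
# where it can stay forever at zero cost, so the optimal average cost is 0.
P = np.zeros((2, 2, 2))
P[0, :, 1] = 1.0   # action 0: always go to state 1
P[1, :, 0] = 1.0   # action 1: always go to state 0
c = np.array([[1.0, 2.0],
              [0.0, 3.0]])
print(relative_value_iteration(P, c))   # average cost converges to 0.0
```

The online MDP problem drops the assumption that `c` is fixed and known, which is exactly what makes such offline computations insufficient.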
The objective of the agent is to minimize regret relative to the best stationary Markov policy that could have been selected with full knowledge of the cost function sequence over the horizon of interest. The assumption of arbitrary time-varying cost functions makes sense in highly uncertain and complex environments whose temporal evolution may be difficult or costly to model, and it also accounts for collective (and possibly irrational) behavior of any other agents that may be present. The regret minimization viewpoint then ensures that the agent’s [*online*]{} policy is robust against these effects. Online MDP problems can be viewed as [*online control problems*]{}. The online aspect is due to the fact that the cost functions are generated by a dynamic environment under no distributional assumptions, and the agent learns the current state-action cost only after committing to an action. The control aspect comes from the fact that the choice of an action at each time step influences future states and costs. Taking into account the effect of past actions on future costs in a dynamic distribution-free setting makes online MDPs hard to solve. To the best of our knowledge, only a few methods have been developed in this area over the past decade [@McMahan; @EvenDar; @Yu; @onlineMDP_bandits; @AroraTewari; @onlineMDP_full; @Yadkori; @Zimin; @DickCsaba]. Most research in this area has been algorithmic: given a problem, one would present a method and prove a guarantee (i.e., a regret bound) on its performance. There are two distinct lines of methods: the algorithms presented by [@EvenDar; @Yu; @onlineMDP_bandits] require the computation of relative value functions at each time step, while the algorithms in [@Zimin; @DickCsaba] reduce the online MDP problem to an online linear optimization problem and solve it by online learning methods. These two lines of methods correspond to the two above-mentioned different ways of designing policies for MDPs. 
From a theoretical and conceptual standpoint, it is desirable to provide a unifying view of existing methods and a general procedure for constructing new ones. In this paper, we present such a general framework for online MDP problems that subsumes the above two approaches. This general framework not only enables us to recover known algorithms, but it also gives us a generic toolbox for deriving new algorithms from a more principled perspective rather than from scratch. The online MDP setting we are considering was first defined and studied in the work of [@EvenDar] and [@Yu], which deals with MDPs with arbitrarily varying rewards. Like these authors, we assume a full information feedback model and known stochastic state transition dynamics. (However, it should be pointed out that these assumptions have been relaxed in some recent works — for example, [@onlineMDP_bandits] and [@AroraTewari] assume only bandit-type feedback, while [@Yadkori] prove regret bounds for MDPs with arbitrarily varying transition models and cost functions. An extension of our framework to these settings is an interesting avenue for future research.) Our general approach is motivated by recent work of Rakhlin et al. [@RakhlinRL], which gives a principled way of deriving online learning algorithms (and bounding their regret) from a minimax analysis. Of course, many online learning algorithms have been developed in various settings over the past few decades, but a comprehensive and systematic treatment was still lacking prior to [@RakhlinRL]. Starting from a general formulation of online learning as a (stateless) repeated game between a learner and an adversary, Rakhlin et al. [@RakhlinRL] analyze the minimax regret (value) of this online learning game, which is the regret (relative to a fixed competing strategy) that would be achieved if both the learner and the adversary play optimally. 
It was known before the work of [@RakhlinOR] that one could derive sublinear upper bounds on the minimax value in a nonconstructive manner. However, algorithm design was done on a case-by-case basis, and custom analysis techniques were needed in each case to derive performance guarantees matching these upper bounds. The work of [@RakhlinRL] bridges this gap between minimax value analysis and algorithm design: They have shown that, by choosing appropriate relaxations of a certain recursive decomposition of the minimax value, one can recover many known online learning algorithms and give a general recipe for developing new ones. In short, the framework proposed by [@RakhlinRL] can be used to convert an upper bound on the value of the game into an algorithm. Our main contribution is an extension of the framework of [@RakhlinRL] to online MDPs. Since online learning problems are studied in a state-free setting, it is not straightforward to generalize the ideas of [@RakhlinRL] to the case when the system has a state, and the technical nature of the arguments involved in online MDPs is significantly heavier than their state-free counterpart. We formulate the online MDP problem as a two-player repeated game with state variables and study its minimax value. We introduce the notion of an online MDP [*relaxation*]{} and show how it can be used to recover existing methods and to construct new algorithms. More specifically, we present two distinct approaches of moving from the original dynamic setting, where the state evolves according to a controlled Markov chain, to simpler static settings and constructing corresponding relaxations. The first approach uses Poisson inequalities for MDPs [@MeynTweedie] to reformulate the original dynamic setting as a static setting, where each possible state is associated with a separate online learning algorithm. 
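The first approach just described ties a separate online learning algorithm to each state. A minimal caricature of this idea, one exponential-weights learner per state, is sketched below. This is our illustrative reconstruction under a simplifying assumption (only the currently visited state's learner is updated), not the relaxation-based derivation of the paper nor the exact algorithm of [@EvenDar], which aggregates losses more carefully:

```python
import math
import random

class PerStateExperts:
    """One exponential-weights learner per state: the state observed at
    time t selects which learner chooses the action, and that learner is
    updated with the revealed cost vector."""

    def __init__(self, n_states, n_actions, eta=0.1, seed=0):
        self.w = [[1.0] * n_actions for _ in range(n_states)]
        self.eta = eta
        self.rng = random.Random(seed)

    def act(self, state):
        weights = self.w[state]
        total = sum(weights)
        return self.rng.choices(range(len(weights)),
                                weights=[v / total for v in weights])[0]

    def update(self, state, costs):
        # costs[a] = revealed one-step cost of action a at this state.
        self.w[state] = [v * math.exp(-self.eta * c)
                         for v, c in zip(self.w[state], costs)]

# Usage: if state 0 persistently favors action 0, its weight grows dominant.
learner = PerStateExperts(n_states=2, n_actions=2)
for _ in range(100):
    learner.update(0, [0.0, 1.0])
print(learner.act(0))
```

The difficulty the paper addresses is precisely that this naive per-state decomposition ignores how today's action shapes which states (and hence which learners) are visited tomorrow.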
We show that the algorithm proposed by [@EvenDar] arises from a particular relaxation, and we also derive a new algorithm in the spirit of [@Yu] which exhibits improved regret bounds. The second approach moves from the dynamic setting to a static setting by reducing the online MDP problem to an online linear optimization problem. After the reduction, we can directly capitalize on the framework
--- abstract: 'Within convex analysis, a rich theory with various applications has been evolving since the proximal average of convex functions was first introduced over a decade ago. When one considers the subdifferential of the proximal average, a natural averaging operation of the subdifferentials of the averaged functions emerges. In the present paper we extend the reach of this averaging operation to the framework of monotone operator theory in Hilbert spaces, transforming it into the resolvent average. The theory of resolvent averages contains many desirable properties. In particular, we study a detailed list of properties of monotone operators and classify them as dominant or recessive with respect to the resolvent average. As a consequence, we recover a significant part of the theory of proximal averages. Furthermore, we shed new light on the proximal average and present novel results and desirable properties the proximal average possesses which have not been previously available.' author: - 'Sedi Bartz[^1], Heinz H. Bauschke[^2], Sarah M. Moffat[^3], and Xianfu Wang[^4]' date: 'May 11, 2015' title: | [The resolvent average of monotone operators:\ dominant and recessive properties]{} --- [**2010 Mathematics Subject Classification:**]{} Primary 47H05, 52A41, 90C25; Secondary 15A09, 26A51, 26B25, 26E60, 47H09, 47A63. [**Keywords:**]{} Convex function, Fenchel conjugate, Legendre function, monotone operator, paramonotone, positive semidefinite operator, proximal average, rectangular, resolvent, resolvent average, strong convexity, strong monotonicity, strong smoothness, subdifferential operator, uniform convexity, uniform smoothness. Introduction {#intro} ============ The proximal average of two convex functions was first considered in [@BMR]. 
Since then, in a series of papers, the definition of the proximal average was refined and its useful properties were studied and employed in various applications, revealing a rich theory with promising potential for further evolution and applications. One of the latest forms of the proximal average we refer to in the present paper is given in Definition \[proximal average def\] below. Some other cornerstones in the study of the proximal average include: [@BGLW], where many useful properties and examples were presented; [@BLT], where it was demonstrated that the proximal average defines a homotopy on the class of convex functions (unlike other, classical averages); and, also, a significant application [@BW], where the proximal average was employed in order to explicitly construct *autoconjugate* representations of monotone operators, also known as *self-dual* Lagrangians, the importance of which in variational analysis is demonstrated in detail in the monograph [@Gho]. A recent application of the proximal average in the theory of *machine learning* is [@Yu]. When subdifferentiating the proximal average, we obtain an averaging operation of the subdifferentials of the underlying functions (see equation  below). Monotone operators are fundamentally important in analysis and optimization [@AusTeb], [@BC2011], [@Borwein], [@BV], [@BurIus], [@RockWets], [@Simons2]. In the present paper, we analyze the resolvent average (see Definition \[resolvent average def\] below), which significantly extends the above averaging operation of subdifferentials to the general framework of monotone operator theory. (See also [@bmow2013], [@Wang], and [@Moffat] for some earlier works on the resolvent average.) We present powerful general properties that the resolvent average possesses and then focus on the study of more specific inheritance properties of the resolvent average. 
Namely, we go through a detailed list of attractive properties of monotone operators and classify them as *dominant* or *recessive* with respect to the resolvent average by employing the following notions: Let $C$ be a set and let $I$ be an index set. Suppose that $\mathcal{AVE}:C^I\to C$. Then a property $(p)$ is said to be 1. **dominant** with respect to $\mathcal{AVE}$ if for each $(c_i)\in C^I$, the existence of $i_0\in I$ such that $c_{i_0}$ has property $(p)$ implies that $\mathcal{AVE}((c_i))$ has property $(p)$; 2. **recessive** with respect to $\mathcal{AVE}$ if $(p)$ is not dominant and for each $(c_i)\in C^I$, for each $i\in I$, $c_i$ having property $(p)$ implies that $\mathcal{AVE}((c_i))$ has property $(p)$. We also provide several examples of the resolvent average (mainly in order to prove the recessive nature of several properties) of mappings which are monotone but are not subdifferential operators. As a consequence, the resolvent average is now seen to be a natural and effective tool for averaging monotone operators which avoids many of the domain and range obstacles standing in front of classical averages such as the arithmetic average. The resolvent average is also seen to be an effective averaging technique when one wishes the average to possess specific properties, especially when the desired properties are dominant. When we restrict our attention to monotone linear relations, our current study extends the one in [@bmw-res], where the resolvent average was considered as an average of positive semidefinite and definite matrices. When we restrict our attention to subdifferential operators, we recover a large part of the theory of the proximal average [@BGLW]. Moreover, we present several novel results regarding the inheritance of desired properties of the proximal average which have not been previously available.
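For intuition (an illustration we add here, not taken from the paper), the matrix setting of [@bmw-res] gives a formula one can compute with directly: for positive semidefinite $A_i$ and convex weights $w_i$, take $\mathcal{R}=\bigl(\sum_i w_i (A_i+\operatorname{Id})^{-1}\bigr)^{-1}-\operatorname{Id}$. The sketch below averages a positive definite matrix with a singular positive semidefinite one and observes that the result is positive definite, the behaviour one expects of a dominant property:

```python
import numpy as np

def resolvent_average(mats, weights):
    """Matrix resolvent average: (sum_i w_i (A_i + I)^{-1})^{-1} - I.
    (Formula assumed here for illustration, following the matrix case in the text.)"""
    I = np.eye(mats[0].shape[0])
    S = sum(w * np.linalg.inv(A + I) for w, A in zip(weights, mats))
    return np.linalg.inv(S) - I

A1 = np.diag([2.0, 1.0])   # positive definite
A2 = np.diag([0.0, 3.0])   # positive semidefinite and singular
R = resolvent_average([A1, A2], [0.5, 0.5])
eigs = np.linalg.eigvalsh((R + R.T) / 2)
print(eigs)   # both eigenvalues strictly positive
```

Running this yields eigenvalues $0.5$ and $5/3$, so the average of a definite and a merely semidefinite matrix comes out definite in this instance.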
In summary, *the resolvent average provides a novel technique for generating new maximally monotone operators with desirable properties*. The rest of the paper is organized as follows: In the remainder of Section \[intro\] we present the basic definitions, notations and the relations between them which we will employ throughout the paper. We also collect all preliminary facts necessary for our presentation. In Section \[basic\] we present basic properties of the resolvent average. In Section \[dominant\] we study dominant properties while Section \[recessive\] deals with recessive properties. Finally, in Section \[neither\] we consider combinations of properties, properties which are neither dominant nor recessive, and other observations and remarks. Before we start our analysis, let us recall the following concepts and standard notation from monotone operator theory and convex analysis: Throughout this paper, ${\ensuremath{\mathcal H}}$ is a real Hilbert space with inner product ${\langle{{\cdot},{\cdot}}\rangle}$, induced norm $\|\cdot \|$, and identity mapping ${\ensuremath{\operatorname{Id}}}$, and we set $q=\frac{1}{2}\|\cdot\|^2$. We denote the interior of a subset $C$ of ${\ensuremath{\mathcal H}}$ by ${\ensuremath{\operatorname{int}}}C$. Let $A:{\ensuremath{\mathcal H}}{\ensuremath{\rightrightarrows}}{\ensuremath{\mathcal H}}$ be a set-valued mapping. We say that $A$ is *proper* when the *domain* of $A$, the set ${\ensuremath{\operatorname{dom}}}A=\{x\in{\ensuremath{\mathcal H}}{\ensuremath{\;|\;}}Ax\neq\varnothing\}$, is nonempty. The *range* of $A$ is the set ${\ensuremath{\operatorname{ran}}}A = A({\ensuremath{\mathcal H}})=\bigcup_{x \in {\ensuremath{\mathcal H}}} Ax$, the *graph* of $A$ is the set ${\ensuremath{\operatorname{gra}}}A = \{(x,u)\in {\ensuremath{\mathcal H}}\times {\ensuremath{\mathcal H}}{\ensuremath{\;|\;}}u \in Ax\}$, and the inverse of $A$ is the mapping $A^{-1}$ satisfying $x\in A^{-1}u\Leftrightarrow u\in Ax$.
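These graph-level notions can be made concrete in a finite toy model (purely illustrative, not from the paper): a set-valued mapping is just its graph, a set of pairs $(x,u)$, and the domain, range and inverse are read off exactly as defined above.

```python
# A set-valued mapping A on the reals, stored via its graph gra A:
# here A(0) = {0} and A(1) = {1, 2}, so A is not single-valued.
gra_A = {(0.0, 0.0), (1.0, 1.0), (1.0, 2.0)}

def dom(gra):
    # dom A = {x : A x is nonempty}
    return {x for x, _ in gra}

def ran(gra):
    # ran A = union over x of the sets A x
    return {u for _, u in gra}

def inv(gra):
    # x in A^{-1} u  <=>  u in A x: swap the pairs of the graph
    return {(u, x) for x, u in gra}

print(dom(gra_A), ran(gra_A))
```

In particular, inverting twice recovers the original graph, mirroring $(A^{-1})^{-1}=A$.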
$A$ is said to be *monotone* if $$(\forall (x,u) \in {\ensuremath{\operatorname{gra}}}A)(\forall (y,v) \in {\ensuremath{\operatorname{gra}}}A)\quad {\langle{{x-y},{u-v}}\rangle} \geq 0.$$ $A$ is said to be *maximally monotone* if there exists no monotone operator $B$ such that ${\ensuremath{\operatorname{gra}}}A$ is a proper subset of ${\ensuremath{\operatorname{gra}}}B$. The *resolvent* of $A$ is the mapping $J_A=(A+{\ensuremath{\operatorname{Id}}})^{-1}$. We say that $A$ is a linear relation if ${\ensuremath{\operatorname{gra}}}A$ is a linear subspace of ${\ensuremath{\mathcal H}}\times {\ensuremath{\mathcal H}}$. $A$ is said to be a maximally monotone linear relation if $A$ is both maximally monotone and a linear relation. The mapping $T: {\ensuremath{\mathcal H}}\to {\ensuremath{\mathcal H}}$ is said to be *firmly nonexpansive* if $$(\forall x\in {\ensuremath{\mathcal H}})(\forall y\in {\ensuremath{\mathcal H}}) \quad \|Tx-Ty\|^2 + \|({\ensuremath{\operatorname{Id}}}-T)x-({\ensuremath{\operatorname{Id}}}-T)y\|^2 \leq \|x-y\|^2.$$ Obviously, if $T$ is firmly nonexpansive, then it is *nonexpansive*, that is, [Lipschitz continuous]{} with constant $1$, where a Lipschitz continuous mapping with constant $L$ is a mapping $T:{\ensuremath{\mathcal H}}\to{\ensuremath{\mathcal
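A standard concrete example (an illustration we add here, not a claim from the text): the projection onto a closed interval is the resolvent of the normal-cone operator of that interval, and hence firmly nonexpansive. A quick numerical check of the defining inequality on random pairs:

```python
import numpy as np

def proj(x, lo=0.0, hi=1.0):
    # Projection onto [lo, hi]; as the resolvent of a maximally monotone
    # operator (the normal cone of the interval), it is firmly nonexpansive.
    return float(np.clip(x, lo, hi))

rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.uniform(-3, 3, size=2)
    Tx, Ty = proj(x), proj(y)
    lhs = (Tx - Ty) ** 2 + ((x - Tx) - (y - Ty)) ** 2
    assert lhs <= (x - y) ** 2 + 1e-12   # ||Tx-Ty||^2 + ||(Id-T)x-(Id-T)y||^2 <= ||x-y||^2
print("firm nonexpansiveness verified on 1000 random pairs")
```

The same check with $T$ replaced by a merely nonexpansive map (e.g. $x\mapsto -x$) fails, which separates the two notions numerically.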
--- abstract: 'We obtain one solution of the Hamiltonian constraint equation in the local sense. The form of the state is suggested by the up-to-down method in our previous work. The up-to-down method works in a different way depending on how the general metrics are treated. In the mini-superspace approach an additional constraint appears in the 4-dimensional quantum gravity Hilbert space. However, in the general treatment of the metrics this method works only as a solving technique.' author: - Shintaro Sawayama title: 'One Local Solution of the Wheeler-DeWitt equation' --- Introduction {#sec1} ============ There are many treatments of quantum gravity, e.g. string theories [@AL], the mini-superspace approach [@Hart], and loop gravity [@Rov; @As; @Thi; @As2]. A complete theory of quantum gravity has not yet been constructed. The theory of canonical quantum gravity is based on the ADM decomposition. From the ADM decomposition [@ADM], we obtain constraint equations, i.e. the Hamiltonian and diffeomorphism constraint equations. Solving these constraint equations is the orthodox way of canonical quantum gravity. The Hamiltonian constraint is the generator of time translations and the diffeomorphism constraints are the generators of space translations [@Di2]. The theory of quantum gravity contains many unsolved problems, including the problem of time and the problem of the norm. However, the most important problem is the difficulty of the constraint equations, i.e. the Wheeler-DeWitt equation [@De]. Our motivation is simple: to find at least one local solution of the Wheeler-DeWitt equation. Our work is not motivated by higher-dimensional gravity. Although the Wheeler-DeWitt equation is difficult to solve, because it is a second-order functional differential equation with diffeomorphism constraints, we have created a method to solve it that we call the up-to-down method [@Sa]. The introduced method contains many problems which are peculiar to quantum gravity.
Some long-standing problems, e.g. the problem of the norm and the problem of time, have not been solved yet. In this paper we show one special local solution of the quantum gravity state for general spacetime metrics on the basis of the up-to-down method. The obtained state is a solution which satisfies the Hamiltonian constraint. The main progress of the up-to-down method is the fact that we can find at least one local solution of the Hamiltonian constraint equation. In this paper, we reconstruct a technical method to solve the Wheeler-DeWitt equation, which we call the up-to-down method. The up-to-down method consists of the following steps. First we add another dimension as an external time to the usual 4-dimensional metric and create an artificial functional space which has support on the spacetime metrics; then we reduce this quantum state to the physical 4-dimensional state, so that we can simply solve the usual 3+1 Wheeler-DeWitt equation. The same method, however, does not work for Klein-Gordon systems. The way this method works is different from the mini-superspace models if we treat the Wheeler-DeWitt equation in the general sense. If we treat the mini-superspace models, an additional constraint appears. The simplification comes from the embedding of the 4-dimensional metric in an arbitrary 5-dimensional metric. The ideas of the up-to-down method come from work on dynamical horizons [@AK; @Sa2] and the problem of time. The problem of time is used inversely: if we add an additional dimension and treat it as an external time, then all the components of the additional dimension must vanish. Although we use the up-to-down method as a derivation, the obtained state does not depend on the derivation. In section \[sec2\] we introduce local quantum gravity and reconstruct the up-to-down method, which is a solving technique for the Wheeler-DeWitt equation.
In section \[sec3\] we derive one local solution of the Hamiltonian constraint equation without fixing the spacetime metrics. In section \[sec4\] we summarize the obtained results and comment on the problems of quantum gravity. Local Quantum Gravity and Up-to-down Method {#sec2} =========================================== The local quantum gravity introduced in this section is considered in order to treat the Wheeler-DeWitt equation simply. The method of local quantum gravity starts from the decomposition of the Einstein-Hilbert action as $$\begin{aligned} S=\int RdM=\sum_i \int R_i[g^{(i)}_{\mu\mu}]dS_i.\end{aligned}$$ Here $S_i$ is a subset of the constant-time hypersurface $\Sigma$, and $S_i$ is defined such that the metric becomes diagonal by a local coordinate transformation. Local quantum gravity proceeds by decomposing $R_i$ in the usual 3+1 sense. Then we obtain the Hamiltonian constraint and the diffeomorphism constraints only in terms of the diagonal metric components. Although local quantum gravity only uses the diagonal components of the metrics, a boundary condition appears, and this condition is still not well defined. We introduce what we call the up-to-down method in a self-contained way and more rigorously than in the previous paper. Some mistakes are corrected in this section, and some sentences are the same as in the previous paper. We start by introducing an additional dimension which is an external euclidean time with positive signature, and thus create an artificial functional space corresponding to this external time. We write this external dimension as $s$. We start with an artificial 5-dimensional action whose metric is built from the usual 4-dimensional metric components and arbitrary additional-dimensional components as $$\begin{aligned} S=\int _{M\times s}{}^{(5)}RdMds,\end{aligned}$$ where ${}^{(5)}R$ is the 5-dimensional Ricci scalar.
Although we start from a higher-dimensional action, we are not motivated by higher-dimensional gravity. Rewriting the action by a 4+1 slicing of the 5-dimensional spacetime with the lapse functionals given by the $s$ direction, we obtain the 4+1 Hamiltonian constraint and the diffeomorphism constraints as $$\begin{aligned} \hat{H}_S\equiv \hat{R}-\hat{K}^2+\hat{K}^{ab}\hat{K}_{ab} \\ \hat{H}_V^a\equiv \hat{\nabla} _b(\hat{K}^{ab}-\hat{K}\hat{g}^{ab}),\end{aligned}$$ where a hat means 4-dimensional, e.g. $\hat{K}_{ab}$ is the extrinsic curvature defined by $\hat{\nabla}_a s_b$ and $\hat{K}$ is its trace, while $\hat{R}$ is the 4-dimensional Ricci scalar, and $\hat{\nabla} _a$ is the 4-dimensional covariant derivative.\ \ [*Definition.* ]{} The artificial functional state is defined by $\hat{H}_S|\Psi^{5} (g)\rangle =\hat{H}_V^a|\Psi^{5} (g)\rangle =0$, where $g$ is the 4-dimensional spacetime metric $g_{\mu\nu}$ with ($\mu =0,\cdots ,3$). We write this functional space as ${\cal H}_5$.\ Here, the definition of the canonical momentum $P$ is different from the usual one: it is not defined by $\partial {\cal L}/(\partial dg/dt)$ but by $\partial {\cal L}/(\partial dg/ds)$, where ${\cal L}$ is the 5-dimensional Lagrangian. Note in fact that the above state in ${\cal H}_5$ is not the usual 5-dimensional quantum gravity state, because the 4+1 slicing is along the $s$ direction. Whether this state space is a Hilbert space or an $l^2$ norm space is an open question, but this problem does not matter below, because what we would like to treat is the physical 4-dimensional quantum gravity state. In addition, we impose that 4-dimensional quantum gravity must be recovered from the above 5-dimensional action.
The 3+1 Hamiltonian constraint and diffeomorphism constraint are, $$\begin{aligned} H_S\equiv {\cal R}+K^2-K^{ab}K_{ab} \\ H_V^a\equiv D_b(K^{ab}-Kq^{ab}).\end{aligned}$$ Here $K_{ab}$ is the usual extrinsic curvature defined by $D_at_b$ and $K$ is its trace, while ${\cal R}$ is the 3-dimensional Ricci scalar, and $D_a$ is the 3-dimensional covariant derivative. Then we can define a subset of the auxiliary Hilbert space on which the wave functional satisfies the usual 4-dimensional constraints. In order to relate the 4 and 5 dimensional spaces we should define projections.\ \ [*Definition.*]{} The subset of ${\cal H}_5$ in which the five dimensional quantum state satisfies the extra constraints $H_SP|\Psi ^5(g)\rangle=H_V^aP|\Psi ^5(g)\rangle =0$ is called ${\cal H}_{5lim}$, where $P$ is the projection defined by $$\begin{aligned} P:{\cal H}_5 \to L^2_4 \ \ \ \{ P|\Psi^5(g)\rangle=|\Psi^5(g_{0\mu}={\rm const})\rangle \} ,\end{aligned}$$ where $L^2_4$ is a functional space. And ${\cal H}_4$ is the usual four dimensional state with the restriction that $H_S|\Psi^4(q
--- abstract: 'I present some reminiscences, both personal and scientific, over a lifetime of admiration and friendship with one of the Grandmasters of our subject.' address: | Walter Burke Institute for Theoretical Physics,\ California Institute of Technology, Pasadena, CA 91125;\ Physics Department, Brandeis University, Waltham, MA 02454;\ [deser@brandeis.edu]{} author: - Stanley Deser title: 'Julian Schwinger — Recollections from many decades' --- {#section .unnumbered} Dear students, friends, and admirers of Julian Schwinger, or all three. We are here to celebrate and commemorate a century since Julian’s birth. He only lived three quarters of that period, unfortunately dying far too young at 76, but left us a great legacy. Being in this Conference’s history section, I will try to discuss the life and work as I saw it, minus technicalities. I knew Julian for three-fifths of his life, a reasonable fraction. It began when I arrived as a graduate student in the fall of 1949. I didn’t know much physics, nor did I know who Julian was, but I was soon educated on the latter. In fact, I sat in on three of his quantum mechanics courses, all different. Like everybody else who wanted to do theory, I was convinced that Julian should be my mentor. He was willing to accept just about anybody, but he was chary with his time, as all his students know. Since he has saved my life so many times, I feel I should begin by giving some examples. The system at Harvard in my day, if you wanted to do theory, required you to take a qualifying exam, usually in something called Math and Mechanics, which covered various sins. One was supposed to bone up on that during one’s second academic year; a jury of one’s would-be advisor plus two other people was then convened. 
The day duly came and Julian arrived, flanked by Abe Klein and Bob Karplus, two up-and-coming assistant professors whose careers depended critically on sufficiently impressing Julian so they could get good positions elsewhere — one didn’t get promoted from within. They had discovered something in their latest calculations, some particularly uninteresting but technical stuff called dilogarithms, which are now, I suppose, taught in kindergarten but in those days unknown to anyone — certainly to me. They proceeded to show Julian how brilliant and clever they were, at my expense, so that after the first few words, I was totally excluded from everything, and after an hour and a half of this, they turned to me and pityingly asked me a question like what two plus two was, at which point I couldn’t even have answered one plus one. And so this terrible ordeal ended, I walked out, and two minutes later Julian came and said, “you realize you failed your qualifying exam,” and I said “yes,” and there was a little pause and he went on, “don’t worry about it.” I think this miracle (and miracle it was — no one else failed M&M) may have been due to my performance on an advanced electrodynamics course I had just taken with him. Then I started on my thesis. I think I probably saw Julian for a total — just on the upper limit — of about ten hours during those two years. One day, in the spring of my fourth year, I asked Julian when I could possibly think of finishing up. When he replied “right now if you want” — this was shortly before the strict Harvard deadline for submitting a thesis — I was not going to let this opportunity slide; somehow it all got done and typed on Bible paper, only available in one place in the world, and bound in one particular way, and all the rest of it. Although the thesis was mediocre, I was handed my Ph.D. by James Bryant Conant, in his last year of a long tenure as president of the university. Rescue number two was a bit more indirect.
In those days, Julian would simply phone Oppenheimer at the Institute for Advanced Study, tell him who his latest graduates were and Oppenheimer would take them, no applications or recommendations. Unfortunately, the year before mine, Julian’s choice at that point was a very strange guy, we’ll not name names, who was found in his first year at the Institute climbing the wall of some estate in Princeton, something frowned upon in such a rich community. The whole thing was handled very well, all airbrushed out. He disappeared, and I’m told became a successful psychoanalyst, but that could be apocryphal. In any case, Oppenheimer was taking no chances, so he told Julian that his two picks, Roger Newton and I, had better show up and pass a psychiatric exam. In those days Oppenheimer still had his clearance, so two FBI agents were guarding his files; I walked past with trepidation, but all Oppenheimer did was ask me what my thesis was about, the title of which I told him. He immediately told me (a) what was in the thesis and (b) why it was wrong. He was way off the mark, at least on point (a); he had no idea whatsoever, but that was in his style. At least I didn’t have any obvious tics. I was vetted also by the younger permanent people at the Institute, and Roger also passed with flying colors. We were installed at the Institute where I had my two years, and not so much contact with Julian. However he saved me because when I arrived at the Institute not too sure what to do, I was immediately pounced on by Murph Goldberger and Walther Thirring who were both visiting there. They said, “you must know all of Julian’s tricks, so let’s get moving and apply them to the following project.” Of course I didn’t know Julian’s tricks, but the project in fact provided my first successful extra-Ph.D. experience and I did use some of them after all. I should mention — going back a bit — that before you start on a thesis, you’re given a little test problem by your advisor.
Julian gave me the little test problem, and I had no idea whatsoever what to do; the reason was that this little problem was the beginning of his celebrated National Academy of Sciences series that to this very day is a standard tool. So when he showed me what he had done, I realized why I hadn’t a clue as to what to do. Well, that too he accepted. I learned from that that one should do unto others and give would-be graduate students a certain amount of leeway, perhaps not as much as he gave me, but still. Then came my second postdoc stage. After the two years at the Institute, I went off for two years to the Niels Bohr Institute in Denmark, which was a difficult period for me. I only wrote one paper, which was furthermore wrong, although wrong in an interesting way. In any case, in those days especially, I hadn’t realized that, once you go into exile, you no longer exist in the United States, because you’re not in any loop. Fortunately, Julian came by that summer, visiting Denmark with Clarice, and he again saved my life by offering me one year as his assistant, as an Instructor at Harvard, while I found my footing back home. That was truly critical because, being married and having a baby, it was clear that I needed some sort of a job. He then also recommended me for my first faculty position at Brandeis. So, this was the support I got from Julian: his faith in me was truly beyond any requirement. His greatest confidence in me occurred much later. I was an invited visitor to UCLA, where Julian had moved, and used to stay in his house. Once when I came, it was during one of those oil embargos when you couldn’t get any gas for your car, especially in California. Julian lived in Bel Air, which had, and has still, for all I know, one and only one gas station, at some chi-chi little shopping center.
It was going to open at 7AM until the gas ran out by 7:30, and Julian was of course in a terrible quandary because 7AM is too late for staying up and far too early for getting up. I was still on Eastern time, so 7AM suited me fine, but would he entrust his precious sports car? He agonized all evening and then finally handed me the keys, gave me a three-hour lecture on how to drive, and I’m sure had a very restless night. I arrived at 7AM, surrounded by all the neighborhood Bentleys and Rolls-Royces, chauffeurs waiting in line, but I did manage to snag sufficient gas for the next period and avoided having any dents in the car, which Julian inspected carefully. Our relations became more even with time. In particular, after the birth of supergravity in ’76, Julian asked me to come for a weekend tutorial for him and his entourage at UCLA. So there, on a Saturday at some ungodly early hour like 10AM, we started on a full Soviet-style two day session; Julian would say, “I don’t understand what this is,” and I replied “come on, Julian, you invented it all,” and reminded him of the Rarita–Schwinger equation, which he did indeed vaguely remember — they had actually a fairly ugly form for it — but they had found it. I suggested that in fact Julian should have discovered super
--- abstract: | We investigate an infinite, linear system of ordinary differential equations that models the evolution of fragmenting clusters. We assume that each cluster is composed of identical units (monomers) and we allow mass to be lost, gained or conserved during each fragmentation event. By formulating the initial-value problem for the system as an abstract Cauchy problem (ACP), posed in an appropriate weighted $\ell^1$ space, and then applying perturbation results from the theory of operator semigroups, we prove the existence and uniqueness of physically relevant, classical solutions for a wide class of initial cluster distributions. Additionally, we establish that it is always possible to identify a weighted $\ell^1$ space on which the fragmentation semigroup is analytic, which immediately implies that the corresponding ACP is well posed for any initial distribution belonging to this particular space. We also investigate the asymptotic behaviour of solutions, and show that, under appropriate restrictions on the fragmentation coefficients, solutions display the expected long-term behaviour of converging to a purely monomeric steady state. Moreover, when the fragmentation semigroup is analytic, solutions are shown to decay to this steady state at an explicitly defined exponential rate.\ *Keywords:* discrete fragmentation, positive semigroup, analytic semigroup, long-time behaviour, Sobolev towers\ *Mathematics Subject Classification (2010):* 47D06; 34G10, 80A30, 34D05 author: - 'Lyndsay Kerr, Wilson Lamb and Matthias Langer' title: | Discrete Fragmentation Systems in\ Weighted $\ell^1$ Spaces --- Introduction {#Introduction} ============ There are many diverse situations arising in nature and industrial processes where clusters of particles can merge together (coagulate) to produce larger clusters, and can break apart (fragment) to produce smaller clusters. 
Particular examples can be found in polymer science, [@aizenman1979convergence; @ziff1980kinetics; @ziffmcgrady1985kinetics], in the formation of aerosols, [@drake1972aerosol], and in the powder production industry, [@verdurmen2004simulation; @wells2018thesis]. It is often appropriate when modelling such processes to regard cluster size as a discrete variable, with a cluster of size $n$, an $n$-mer, composed of $n$ identical units (monomers). By scaling the mass, we can assume that each monomer has unit mass and so an $n$-mer has mass $n$. The aim is to use the mathematical model to obtain information on how clusters of different sizes evolve. In this paper we restrict our attention to the case when no coagulation occurs, and consequently the evolution of clusters can be described by a linear, infinite system of ordinary differential equations. With the number density of clusters of size $n$ (i.e. mass $n$) at time $t$ denoted by $u_n(t)$, this fragmentation system is given by $$\label{full frag system} \begin{split} u_n'(t)&=-a_nu_n(t)+\sum\limits_{j=n+1}^{\infty} a_jb_{n,j}u_j(t), \qquad t>0; \\ u_n(0)&=\mathring{u}_n, \qquad n=1,2,\ldots, \end{split}$$ where $a_n$ is the rate at which clusters of size $n$ are lost, $b_{n,j}$ is the rate at which clusters of size $n$ are produced when a larger cluster of size $j$ fragments and $\mathring{u}_n$ is the initial density of clusters of size $n$ at time $t=0$. Equation was first introduced in [@ziffmcgrady1985kinetics] to deal with the case of binary fragmentation, where it is assumed that each fragmentation event results in the creation of exactly two daughter clusters. As in [@banasiak2011irregular; @banasiakjoelshindin2019_onlinefirst; @mcbride2010strongly; @smith2012discrete], we consider the more general case, where each fragmentation event can result in the creation of two or more clusters. 
Since is an infinite system, it is convenient to express solutions as time-dependent sequences of the form $u(t) \coloneqq (u_n(t))_{n=1}^{\infty}$. Throughout this paper we need various assumptions on the fragmentation coefficients $a_n$ and $b_{n,j}$. We list these assumptions here and will refer to them in the sequel when required. \[A1.1\] ------------------------------------------------------------------------ For all $n \in \mathbb{N}$, $$\label{fragmentation rate assumption} a_n \ge 0.$$ For all $n,j \in \mathbb{N}$, $$\label{a_b_nonnegative} b_{n,j} \ge 0 \qquad\text{and}\qquad b_{n,j} = 0 \quad \text{when} \ n \ge j.$$ The total mass of daughter clusters resulting from the fragmentation of a $j$-mer is given by $\sum_{n=1}^{j-1} nb_{n,j}$. In most papers that have dealt with discrete fragmentation systems it is assumed that $$\label{local_mass_non_increasing} \sum\limits_{n=1}^{j-1} nb_{n,j} \le j \qquad\text{for all} \ j=2,3,\ldots,$$ i.e. there is no increase in mass at fragmentation events. If there is strict inequality in , then mass is lost by some other mechanism. However, for most of our results we do not assume that holds; this means that mass could even be gained at fragmentation events. 
We can specify the local mass loss or mass gain with real parameters $\lambda_j$, $j=2,3,\ldots$, such that $$\label{local mass conservation lambda} \sum\limits_{n=1}^{j-1} nb_{n,j} = (1-\lambda_j)j, \qquad j=2,3,\ldots.$$ In terms of the densities $u_n(t)$, the total mass of all clusters in the system at time $t$ is given by the first moment, $M_1(u(t))$, of $u(t)$, where $$\label{total mass} M_1\bigl(u(t)\bigr) \coloneqq \sum\limits_{n=1}^{\infty} nu_n(t).$$ A formal calculation establishes that if $u$ is a solution of , then $$\label{massode} \frac{{\mathrm{d}}}{{\mathrm{d}}t}M_1\bigl(u(t)\bigr) = - a_1u_1(t) - \sum_{j=2}^\infty j \lambda_j a_ju_j(t).$$ The expression in gives the rate at which mass may be lost from the system or gained, and also shows that, at least formally, the total mass is conserved when $a_1=0$ and $\lambda_j=0$ for all $j=2,3,\ldots$, i.e. when $$\label{mass_conserved} a_1 = 0 \qquad\text{and}\qquad \sum_{n=1}^{j-1} n b_{n,j} = j \quad \text{for all} \ j=2,3,\ldots.$$ Note that monomers cannot fragment to produce smaller clusters, and hence the case when $a_1 > 0$ is interpreted as a situation in which monomers are removed from the system. In this paper, the approach we use to investigate relies on the theory of semigroups of bounded linear operators, and entails formulating as an abstract Cauchy problem (ACP) in an appropriate Banach space. The existence and uniqueness of solutions to the ACP are established via the application of perturbation results for operator semigroups. Of particular relevance is the Kato–Voigt perturbation theorem for substochastic semigroups [@banasiak2001extension; @voigt1987onsubstochastic] that was first applied to in [@mcbride2010strongly], and subsequently in similar semigroup-based investigations into , such as [@banasiak2012global; @smith2012discrete]. We use a refined version of this theorem proved by Thieme and Voigt in [@thieme2006stochastic]. 
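Before passing to the functional-analytic formulation, the system and the formal mass identity can be sanity-checked numerically on a finite truncation. In the sketch below every concrete choice is our own illustration, not the paper's: rates $a_1=0$, $a_n=n$ for $n\ge 2$, the uniform binary daughter distribution $b_{n,j}=2/(j-1)$, and the truncation size $N$. These coefficients satisfy $\sum_{n=1}^{j-1} n b_{n,j}=j$ with $a_1=0$, i.e. the mass-conservation condition, so the first moment should remain constant along the computed solution:

```python
import numpy as np
from scipy.integrate import solve_ivp

N = 40                                                     # truncation size (illustrative)
a = np.array([0.0] + list(range(2, N + 1)), dtype=float)   # a_1 = 0, a_n = n for n >= 2
# Uniform binary daughters: b_{n,j} = 2/(j-1) for n < j, so that
# sum_{n=1}^{j-1} n b_{n,j} = j, i.e. local mass conservation (lambda_j = 0).
b = np.zeros((N, N))
for j in range(2, N + 1):
    b[: j - 1, j - 1] = 2.0 / (j - 1)

def rhs(t, u):
    # u_n' = -a_n u_n + sum_{j>n} a_j b_{n,j} u_j, truncated at size N
    return -a * u + b @ (a * u)

u0 = np.zeros(N)
u0[-1] = 1.0                                               # all clusters initially N-mers
sol = solve_ivp(rhs, (0.0, 3.0), u0, rtol=1e-10, atol=1e-12)
weights = np.arange(1, N + 1)
mass = weights @ sol.y                                     # M_1(u(t)) at each output time
```

Repeating the experiment with $a_1>0$, or with daughter distributions for which $\lambda_j>0$, makes the computed first moment decay, in line with the rate equation for $M_1$ above.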
In previous studies, including [@mcbride2010strongly; @smith2012discrete], the ACP associated with the fragmentation system has been formulated in the space $$\label{X1space} X_{[1]} \coloneqq \biggl\{f=(f_n)_{n=1}^{\infty}: f_n \in \mathbb{R} \ \text{for all} \ n \in \mathbb{N} \ \text{and} \ \sum\limits_{n=1}^{\infty} n|f_n|<\infty\biggr\}.$$ Equipped with the norm $$\label{X1norm} \Vert f \Vert_{[1]} = \sum\limits_{n=1}^{\infty} n|f_n|, \qquad f \in X_{[1]},$$ $X_{[1]}$ is a Banach space, and $$\label{X_1functional} \Vert f \Vert_{[1]} = M_1(f)$$ if $
--- abstract: 'Quark-lepton compositeness is a well-known beyond the Standard Model (SM) scenario with heavy exotic particles like leptoquarks (LQs) and leptogluons (LGs) etc. These particles can couple to leptons and jets simultaneously. In this letter, we use the recent CMS scalar LQ search data in the $eejj$ and $eej$ channels to probe this scenario. We recast the data in terms of a color octet partner of the SM electron (or a first generation spin-1/2 LG) that couples to an electron and a gluon via a dimension five operator suppressed by the quark-lepton compositeness scale ($\Lm$). By combining different production processes of the color octet electron ($e_8$) at the LHC, we use the CMS 8TeV data to obtain a simultaneous bound on $\Lm$ and the mass of the $e_8$ ($M_{e_8}$). We also study the reach of the 13 TeV LHC to discover the $e_8$ and interpret the required luminosity in terms of $M_{e_8}$ and $\Lm$.' author: - Tanumoy Mandal - Subhadip Mitra - Satyajit Seth title: | [\ ]{}Probing Compositeness with the CMS $eejj$ & $eej$ Data --- Introduction {#sec:intro} ============ The idea of quark-lepton compositeness [@Pati:1974yy; @Terazawa:1976xx; @Neeman:1979wp; @Harari:1979gi; @Shupe:1979fv; @Terazawa:1979pj; @Harari:1980ez; @Fritzsch:1981zh] goes along with our intention to describe nature in terms of its most fundamental building blocks. As its name suggests, in the models with quark-lepton compositeness, the Standard Model (SM) fermions are not elementary but rather have finer substructures. Similarities between the SM lepton and quark sectors (like, both come with three flavors and behave similarly under the $SU(2)_{\rm L}\times U(1)_{\rm Y}$ gauge symmetry with the same weak coupling) can be explained if they are assumed to be different bound states of some fundamental constituents. 
These fundamental constituents, called preons by Pati and Salam [@Pati:1974yy], are charged under some new strong force which confines them below a certain scale $\Lm$, known as the compositeness scale. As we have hadrons in QCD, in this scenario one expects a host of new excited preonic condensates. Some of these condensates would be quite exotic, as they would carry both $SU(3)_{\rm c}$ color charges and lepton numbers, like the bosonic leptoquarks (LQs or ${\ell_q}$’s) that transform as triplets under $SU(3)_{\rm c}$ [@Buchmuller:1986zs; @Hewett:1997ce; @Kramer:1997hh] or the leptogluons (LGs or ${\ell_8}$’s) that are color-octet fermions [@Harari:1985cr; @Baur:1985ud; @Nir:1985ah; @Rizzo:1985dn; @Rizzo:1985ud; @Streng:1986my]. Because of their color charges, if these exotic condensates have TeV-range masses, they would be produced copiously at the Large Hadron Collider (LHC), making it possible to probe this scenario experimentally. The LHC has already put some constraints on the masses of scalar LQs decaying to SM quarks and leptons [@Aad:2015caa; @Khachatryan:2015vaa; @Khachatryan:2015bsa; @Khachatryan:2015qda]. Of these, we look at the most recent CMS search for the first and second generations of scalar LQs in the $\ell\ell jj$ and the $\ell\n_\ell jj$ channels with 19.7 fb$^{-1}$ of integrated luminosity at the 8 TeV LHC [@Khachatryan:2015vaa]. With pair production, the 95% confidence level (CL) exclusion limit on the mass of the first (second) generation scalar LQ now stands at $M_{{\ell_q}} = 1005$ (1080) GeV assuming it always decays to an electron (a muon) and a jet. Note that unless specified otherwise, we do not distinguish between any particle and its anti-particle. Hence, an electron here could mean a positron as well. In the first generation search, mild excesses of events compared to the SM background were observed in both the $eejj$ and the $eej$ channels for $M_{{\ell_q}}\sim$ 650 GeV. 
These excesses have attracted considerable attention in the literature. CMS has also performed a dedicated search for the single productions of the first two generations of LQs in the $\ell\ell j$ channels [@Khachatryan:2015qda]. However, unlike the mostly QCD mediated pair production, the single productions depend strongly on an unknown coupling $\lm$, the ${\ell_q}$-$\ell$-$q$ coupling. Hence, the exclusion limits from this search are $\lm$ dependent. For the first generation, the exclusion limit goes from 895 GeV to 1730 GeV when $\lm$ goes from 0.4 to 1.0, and for the second generation the data exclude $M_{{\ell_q}}$ below 530 GeV for $\lm=1.0$. In this letter, we recast the CMS 8 TeV $eejj$ [@Khachatryan:2015vaa] and $eej$ [@Khachatryan:2015qda] data in terms of the first generation spin-1/2 LG carrying unit electric charge, [*i.e.*]{}, the color octet partner of the SM electron ($e_8$), to probe the composite quark-lepton scenarios and obtain the most stringent limits available on the $e_8$. This is possible because a LG can also decay to a lepton and a jet (gluon) just like a LQ. Hence, the pair production of $e_8$’s would have $eejj$ final states.[^1] Earlier, there have been other phenomenological studies on LGs [@Celikel:1998dj; @Sahin:2010dd; @Akay:2010sw; @Jelinski:2015epa; @Acar:2015wxp] and the CMS 7 TeV $eejj$ data [@Chatrchyan:2012vza] were used to infer bounds on $M_{e_8}$ [@Goncalves-Netto:2013nla; @Mandal:2012rx]. Considering the pair production, Ref. [@Goncalves-Netto:2013nla] put the mass exclusion limit at about 1.2-1.3 TeV. Similarly, an $e_8$ could be produced singly in association with an electron and give rise to an $eej$ final state. Interestingly, the single productions of LGs open up a way to probe the compositeness scale. 
This is because, at the leading order (LO), the ${\ell_8}$-$\ell$-$g$ interaction comes from an effective operator of dimension five that is suppressed by the compositeness scale $\Lm$ [@Agashe:2014kda; @Mandal:2012rx] (see the next section). This is unlike the LQ interactions, where the LO terms are of dimension four and hence, apparently insensitive to $\Lm$. In a recent paper [@Mandal:2015vfa], we pointed out that the single productions of LQs can also lead to the $eejj$ final state and, similarly, events from the pair productions could also pass the signal selection criteria of the single production search in the $eej$ channel. Combining these production processes in the signal simulations can provide better limits in the $M_{{\ell_q}}$-$\lm$ plane from both the $eejj$ and the $eej$ channels. The same argument applies for LGs too. Hence, following Ref. [@Mandal:2015vfa], here we systematically combine both the pair and the single production processes of the $e_8$ while reinterpreting the CMS $eejj$ and $eej$ data and obtain exclusion limits in the $M_{e_8}$-$\Lm$ plane. This way, we obtain the mass exclusion limits as well as the limits on the compositeness scale from both the $eejj$ and the $eej$ data and compare them. Our presentation is organized as follows. In the next section we discuss the details of the signal we consider; in section \[sec:three\] we present the results of our recast analysis; in section \[sec:futpros\] we investigate the prospect of discovering the color octet electron at the 13 TeV LHC; and in section \[sec:last\] we conclude. Leptogluon (Combined) Signals {#sec:two} ============================= If we assume $M_{e_8}$ is smaller than $\Lm$ and there is no violation of lepton flavor, we can write a generic effective Lagrangian for the $e_8$ allowed by the SM gauge symmetry as [@Mandal:2012rx], $$\mathcal{L} = \bar{e}_{8}^{\,a}\, i\gamma^{\mu}\left(\partial_{\mu}\delta^{ac} + g_s f^{abc} G^{b}_{\mu}\right) e_{8}^{c} + \cdots$$
--- abstract: | On 26 May 1999, one of the Sloan Digital Sky Survey (SDSS) fiber–fed spectrographs saw astronomical first light. This was followed by the first spectroscopic commissioning run during the dark period of June 1999. We present here the first hour of extra–galactic spectroscopy taken during these early commissioning stages: an observation of the Coma cluster of galaxies. Our data samples the Southern part of this cluster, out to a radius of 1.5 degrees ($1.8\,h^{-1}$ Mpc, approximately to the virial radius) and thus fully covers the NGC 4839 group. We outline in this paper the main characteristics of the SDSS spectroscopic systems and provide redshifts and spectral classifications for 196 Coma galaxies, of which 45 redshifts are new. For the 151 galaxies in common with the literature, we find excellent agreement between our redshift determinations and the published values, [*e.g.*]{}, for the largest homogeneous sample of galaxies in common (63 galaxies observed by Colless & Dunn 1996) we find a mean offset of 3 ${\rm km\,s^{-1}}$ and an RMS scatter of only 24 ${\rm km\,s^{-1}}$. As part of our analysis, we have investigated four different spectral classification algorithms: measurements of the spectral line strengths, a principal component decomposition, a wavelet analysis and the fitting of spectral synthesis models to the data. We find that these classification schemes are in broad agreement and can provide physical insight into the evolutionary histories of our cluster galaxies. We find that a significant fraction (25%) of our observed Coma galaxies show signs of recent star–formation activity and that the velocity dispersion of these active galaxies (emission–line and post–starburst galaxies) is 30% larger than the absorption–line galaxies. We also find no active galaxies within the central (projected) $200\,h^{-1}$ Kpc of the cluster. 
The spatial distribution of our Coma active galaxies is consistent with that found at higher redshift for the CNOC1 cluster survey. Beyond the core region, the fraction of bright active galaxies appears to rise slowly out to the virial radius, and these galaxies are randomly distributed within the cluster with no apparent correlation with the potential merger or post-merger of the NGC 4839 group. We briefly discuss possible origins of this recent galaxy star-formation. author: - 'Francisco J. Castander, Robert C. Nichol, Aronne Merrelli, Scott Burles, Adrian Pope, Andrew J. Connolly, Alan Uomoto, James E. Gunn, John E. Anderson, James Annis, Neta A. Bahcall, William N. Boroski, Jon Brinkmann, Larry Carey, James H. Crocker, István Csabai, Mamoru Doi, Joshua A. Frieman, Masataka Fukugita, Scott D. Friedman, Eric J. Hilton, Robert B. Hindsley, Željko Ivezić, Steve Kent, Donald Q. Lamb, R. French Leger, Daniel C. Long, Jon Loveday, Robert H. Lupton, Harvey MacGillivray, Avery Meiksin, Jeffrey A. Munn, Matt Newcomb, Sadanori Okamura, Russell Owen, Jeffrey R. Pier, Constance M. Rockosi, David J. Schlegel, Donald P. Schneider, Walter Seigmund, Stephen Smee, Yehuda Snir, Larry Starkman, Chris Stoughton, Gyula P. Szokoly, Christopher Stubbs, Mark SubbaRao, Alex Szalay, Aniruddha R. Thakar, Christy Tremonti, Patrick Waddell, Brian Yanny and Donald G. York' title: 'The First Hour of Extra–galactic Data of the Sloan Digital Sky Survey Spectroscopic Commissioning: The Coma Cluster.' --- Introduction ============ The Coma cluster is the richest cluster of galaxies in our local universe and has thus attracted considerable attention over the last century (see reviews by Biviano 1998 & West 1998). In the optical, for example, Godwin, Metcalfe & Peach (1983; hereafter GMP83) have published an extensive photometric study of the cluster providing accurate positions, colors, magnitudes and ellipticities for 6724 bright galaxies over 2.63 square degrees centered on Coma. 
Several authors have explored the fainter dwarf galaxy population of Coma (see Bernstein et al. 1995; Kashikawa et al. 1998; Adami et al. 1998 & 2000). The dynamics of the cluster have also been well studied. Kent & Gunn (1982) assembled approximately 300 optical redshifts from the literature to determine the cluster mass distribution. This initial work was extended by Colless & Dunn (1996; CD96), who collected 556 redshifts (based on new and literature redshifts), and Geller, Diaferio & Kurtz (1999), who have extended the dynamical study of Coma to large radii (10 degrees) and larger numbers (1693 redshifts), thus measuring the density profile of Coma well beyond the virial radius of the cluster. Hughes (1989) measured the total mass of Coma using early X-ray observations of the cluster. More recently, ROSAT observations of Coma have provided unprecedented detail of the intracluster gas morphology (e.g., [@bri92]; [@whi93]). ASCA observations of the cluster have provided important information on the temperature structure of the X–ray emitting gas (Honda et al. 1996), which have recently been complemented by XMM-Newton observations (Briel et al 2000; Arnaud et al 2000; Neumann et al 2000). The Coma cluster has long been regarded as the archetypal relaxed massive cluster of galaxies. However, recent studies of the cluster have shown otherwise. The current view of Coma is that the cluster is the product of one recent, and one ongoing, cluster–group merger. A group centered on NGC 4839, in the southwest region of the cluster, is falling into (CD96), or has just passed through ([@bur94]), the main body of the cluster and may have triggered new star–formation in galaxies in Coma (Caldwell & Rose 1997). The velocity dispersion of this southwest group is approximately $\frac{1}{3}$ of that of the main cluster. 
Meanwhile, the core of Coma has two dominant galaxies, NGC 4874 and NGC 4889, which seem to be the relic central galaxies of previous groups that have merged into the current cluster. The X–ray and dynamical data reveal that both of these dominant galaxies do not appear to sit at the bottom of the cluster potential (CD96; [@whi93]). The lack of a cooling flow and the existence of an extended radio halo support this merging history of the cluster. The Coma cluster was thus an ideal first target for the Sloan Digital Sky Survey (SDSS; [@yor00]) spectroscopic commissioning program because of its location (the North Galactic Pole), the pre–existence of wide–field galaxy photometry (e.g. GMP83), and the high–density of known redshifts for comparison and testing. Moreover, there remain interesting scientific questions that can be addressed using the unique SDSS spectroscopic hardware, [*e.g.*]{}, the influence of cluster merger events on the star–formation rates of galaxies. The quality and quantity of SDSS spectral data will allow us to study such problems in great detail, and in this paper we start by outlining and comparing the different analysis techniques for classifying SDSS galaxy spectra as well as quantifying their star–formation rates. Robust, automated, spectral classification methods are necessary to help us understand the distribution and evolution of the galaxies’ physical properties that define and characterize their spectral energy distributions. The problem of galaxy spectral classification has been treated extensively in the astronomical literature. In general, methods are based on the measurement of spectral continuum and line features (e.g., the Lick/IDS system: Faber et al 1985; Burstein, Faber & González 1986) which are then used to classify and derive the galaxies’ physical properties. 
Stellar population synthesis models can, for example, be compared to these measurements, or to the entire spectrum using template fitting, to provide a physical understanding of the galaxy properties. Recently, other techniques have been investigated. Amongst them, principal component analysis has attracted a lot of interest (e.g., Connolly et al 1995; Bromley et al 1998; Folkes et al 1999; Ronen et al 1999). The technique is based on decomposing the spectra into a basis that highlights the galaxy differences. Such a decomposition can therefore be used to classify the spectra. Different implementations change the way the basis is constructed or how the resulting coefficients of the decomposition are used to generate a classification method. Some authors, for instance, use artificial neural networks to build a classification scheme from the principal component coefficients (e.g., Folkes, Lahav & Maddox 1996). Wavelets also provide an orthogonal basis in which the spectra can be decomposed and therefore can in principle be used as a classification method. Along these lines, Pando & Fang (1996) and Theuns & Zaroubi (2000) have used wavelets to study quasar spectra. In this paper, we present the first hour of extra–galactic data taken by the SDSS spectroscopic system. Only one of the ten plates originally designed in the Coma region has been observed, producing nearly 200 Coma redshifts and thus illustrating the capabilities of this new instrumentation. These spectra will allow us to investigate several classification schemes and test their applicability to the future SDSS dataset. The paper is structured as follows. In §2, we briefly highlight the main characteristics of the SDSS spectroscopic system. In §3, we describe the selection of galaxy targets in the Coma cluster region. The observations of the Coma cluster plates are presented in §
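As a toy illustration of the PCA-based classification approach discussed above (entirely synthetic; the wavelength grid, basis shapes, and mixing parameter are invented for the example and are not SDSS data), one can check that the leading principal component of a set of simulated spectra tracks the strength of an emission-line feature:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: 50 "spectra" on a common wavelength grid, each a red
# continuum plus an Halpha-like Gaussian emission line of variable strength.
wave = np.linspace(4000.0, 7000.0, 300)
continuum = (wave / 5000.0) ** -1.5
line = np.exp(-0.5 * ((wave - 6563.0) / 15.0) ** 2)
strength = rng.uniform(0.0, 1.0, size=50)            # line-strength parameter
spectra = continuum + strength[:, None] * line
spectra += rng.normal(0.0, 0.01, spectra.shape)      # crude photon noise

# PCA via SVD of the mean-subtracted spectra: rows of vt are eigenspectra,
# and the projection coefficients can serve as classification variables.
residual = spectra - spectra.mean(axis=0)
u, s, vt = np.linalg.svd(residual, full_matrices=False)
coeff1 = residual @ vt[0]

# The first component should be dominated by the emission-line variation,
# so |corr(coeff1, strength)| should be close to 1.
corr = abs(np.corrcoef(coeff1, strength)[0, 1])
```

The sign of an eigenspectrum is arbitrary, which is why the absolute value of the correlation is taken.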
--- author: - | Christian Kleiber\ Universität Basel Achim Zeileis\ Universität Innsbruck bibliography: - 'rootograms.bib' title: Visualizing Count Data Regressions Using Rootograms --- Introduction {#sec:introduction} ============ The area of count data regression has experienced rapid growth over the last two decades. More often than not, the standard Poisson model from the generalized linear model (GLM) toolbox does not suffice in empirical work. Specifically, many data sets are plagued by some form of overdispersion – often resulting from unobserved heterogeneity, which can potentially be handled by models with additional shape parameters such as the negative binomial distribution – or by an excess of zeros, for which hurdle and zero-inflation models are available [@rootograms:Mullahy:1986; @rootograms:Lambert:1992]. While various diagnostic tests of dispersion are also available – see, e.g., [@rootograms:Cameron+Trivedi:1990] or [@rootograms:Dean:1992] for some popular tests and [@rootograms:Cameron+Trivedi:2013] for an overview – they typically only identify general issues with model fit and rarely provide clear indications regarding the source of the problems. Suitable graphical tools can guide the search for more appropriate specifications, thereby supplementing and enhancing more formal approaches. If count data regressions are visualized at all, this is currently mainly done in the form of barplots of observed and expected frequencies; see, e.g., Figures 3.1 and 6.4 in [@rootograms:Cameron+Trivedi:2013] for examples and also Figure \[fig:CrabSatellites-comparison\] below. In the present paper, we explore the use of rootograms for assessing the fit. Rootograms are associated with the work of John W. Tukey on exploratory data analysis (EDA) and statistical graphics, culminating in [@rootograms:Tukey:1977]. However, rootograms do not figure prominently there. 
Instead, early applications, all confined to continuous data, appear in selected contributions to collected volumes and conference proceedings [@rootograms:Tukey:1965; @rootograms:Tukey:1972], which were often not easily available prior to the publication of Tukey’s collected works in the 1980s. Nonetheless, the ideas pertaining to rootograms were known in some circles at an early stage [@rootograms:Healy:1968], and an early paper popularizing the concept is [@rootograms:Wainer:1974]. For further information on the history of statistical graphics we refer to [@rootograms:Friendly+Denis:2001]. The following section introduces a generalized version of the rootogram for regression models (as opposed to univariate distributions) and allowing for weights that can be applied to new data or (weighted) subsamples of a data set. This is useful for assessing in-sample fits as well as out-of-sample predictions and also for situations with survey weights or model-based weights. Several styles of the rootogram, namely standing, hanging, and suspended versions, are briefly described. We also provide some guidelines for interpretation using simulated data. Section \[sec:example\] provides an empirical example, presenting a case where a hurdle model adjusts for excess zeros and also for overdispersion, while the final section \[sec:disc\] discusses how rootograms could be included in routine applications of count data regressions. In supplementary materials, we present two further examples, one involving a finite mixture model requiring the rootogram version with model-based weights mentioned above, the other involving underdispersed data. All analyses are run in [@rootograms:R:2016], and we briefly describe an implementation of our tools in the package in an appendix. 
Rootograms {#sec:rootograms} ========== Given observations $y_i$ ($i = 1, \dots, n$) we want to assess the goodness of fit of some parametric model $F(\cdot; \alpha_i)$, with corresponding density or probability mass function $f(\cdot; \alpha_i)$. For classic rootograms [see e.g., @rootograms:Friendly:2000 Chapter 2] the parameter vector $\alpha_i$ is the same for all observations $i = 1, \dots, n$. Here, we allow it to be observation-specific, e.g., through dependence on some covariates $x_i$ – a leading case being the GLM with $\alpha_i = g(x_i^\top \beta)$ for some monotonic function $g(\cdot)$. In practice, these parameters are typically unknown and have to be estimated from data. Hence, in the following we assume that we have fitted parameters $\hat \alpha_i$, where estimation may have been carried out on the same observations $i = 1, \dots, n$ (i.e., corresponding to an in-sample assessment) or on a different data set (i.e., out-of-sample evaluation). The estimation procedure itself may be fully parametric or semiparametric etc. as long as it yields fitted parameters $\hat \alpha_i$ for all observations of interest. To judge the goodness of fit of a model with estimated parameters $\hat \alpha_i$ to observations $y_i$ ($i = 1, \dots, n$), a natural idea is to assess whether observed frequencies match expected frequencies from the model. In the case of discrete observations, frequencies for the observations themselves could be considered, while somewhat more generally frequencies for intervals of observations may be used. Tukey’s original work often considered goodness of fit to the normal distribution on the basis of binned observations; see, e.g., his example involving the heights of 218 volcanos [@rootograms:Tukey:1972]. In this paper, we focus on discrete distributions. 
For assessing the goodness of fit in regression models, practitioners routinely check some type of residuals, i.e., (weighted) deviations of the observations $y_i$ from the corresponding predicted means. However, this focuses on the first moment of the fitted distribution only, while for count data, which are non-negative and typically skewed, further aspects of the distribution are also of interest. Relevant aspects include the amount of (over-)dispersion, skewness (or further aspects of shape), and whether there are excess zeros. Hence, it is natural to consider observed and expected values for a range of counts $0, 1, 2, \dots$ in order to assess the entire fitted distribution. Specifically, in the case of count data with possible outcomes $j = 0, 1, 2, \dots$, the observed and expected frequencies for each integer $j$ are given by $$\begin{aligned} \text{obs}_j & = & \sum_{i = 1}^n I(y_i = j) , \\ \text{exp}_j & = & \sum_{i = 1}^n f(j; \hat \alpha_i) ,\end{aligned}$$ where $I(\cdot)$ is an indicator variable. More generally, one can use a set of breaks $b_0, b_1, b_2, \dots$ that span (a suitable subset of) the support of $y$. Here, we additionally allow for observation-specific weights $w_i$ ($i = 1, \dots, n$); the observed and expected frequencies are then given by $$\begin{aligned} \text{obs}_j & = & \sum_{i = 1}^n w_i \, I(y_i \in (b_{j}, b_{j + 1}]) , \\ \text{exp}_j & = & \sum_{i = 1}^n w_i \, \{ F(b_{j + 1}; \hat \alpha_i) - F(b_{j}; \hat \alpha_i) \} .\end{aligned}$$ The weights are needed for survey data and also for situations with model-based weights. For example, the latter may represent class membership in mixture models, a case that is relevant in one of our supplementary examples. Styles of Rootograms {#subsec:styles} -------------------- The rootogram compares observed and expected values graphically by plotting histogram-like rectangles or bars for the observed frequencies and a curve for the fitted frequencies, all on a square-root scale. 
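These quantities can be sketched numerically in a few lines (a made-up Poisson example, not the paper's implementation): draw counts $y_i$ with observation-specific means $\hat\mu_i$ playing the role of the fitted parameters $\hat\alpha_i$, then tabulate $\text{obs}_j$ and $\text{exp}_j$ together with the bar coordinates of a hanging rootogram on the square-root scale.

```python
import math
import numpy as np

rng = np.random.default_rng(42)

# Toy "fitted model": observation-specific Poisson means mu_i, standing in
# for the fitted parameters alpha_hat_i; counts y_i drawn from that model.
n = 500
mu = np.exp(rng.uniform(-0.5, 1.0, size=n))
y = rng.poisson(mu)

# Unweighted frequencies for counts j = 0, ..., J-1:
#   obs_j = #{i : y_i = j},   exp_j = sum_i f(j; mu_i)
J = 12
obs = np.array([(y == j).sum() for j in range(J)])
exp_ = np.array([np.sum(np.exp(-mu) * mu**j / math.factorial(j))
                 for j in range(J)])

# Hanging rootogram: bars hang from sqrt(exp_j) down to
# sqrt(exp_j) - sqrt(obs_j), aligning all deviations along the axis.
top = np.sqrt(exp_)
bottom = top - np.sqrt(obs)
```

Since the simulated counts really are Poisson, the `bottom` values scatter close to zero here; for overdispersed or zero-inflated data they would show the systematic "wave" pattern the paper diagnoses.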
The square roots rather than the untransformed observations are employed to approximately adjust for scale differences across the $j$ values or intervals. Otherwise, deviations would only be visible for $j$’s with large observed/expected frequencies. Different styles of rootograms have been suggested, see Figure \[fig:styles\]: - *Standing:* The standing rootogram simply shows rectangles/bars for $\sqrt{\text{obs}_j}$ and a curve for $\sqrt{\text{exp}_j}$. To assess deviations across the $j$’s, the expected curve needs to be followed as the deviations are not aligned. - *Hanging:* To align all deviations along the horizontal axis, the rectangles/bars are drawn from $\sqrt{\text{exp}_j}$ to $\sqrt{\text{exp}_j} - \sqrt{\text{obs}_j}$ so that they are “hanging” from the curve representing expected frequencies, $\sqrt{\text{exp}_j}$. - *Suspended:* To emphasize mainly the deviations (rather than the observed frequencies), a third alternative is to draw rectangles/bars for the differences between expected and observed frequencies, $\sqrt{\text{exp}_j} - \sqrt{\text{obs}_j
--- address: 'Department of Mathematics, University of Notre Dame, Notre Dame, IN 46556, USA' author: - 'Juan C. Migliore' title: The Geometry of Hilbert Functions --- Introduction ============ The title of this paper, “The geometry of Hilbert functions," might better be suited for a multi-volume treatise than for a single short article. Indeed, a large part of the beauty of, and interest in, Hilbert functions derives from their ubiquity in all of commutative algebra and algebraic geometry, and the unexpected information that they can give, very much of it expressible in a geometric way. Most of this paper is devoted to describing just one small facet of this theory, which connects results of Davis (e.g. [@davis]) in the 1980’s, of Bigatti, Geramita and myself (cf. [@BGM]) in the 1990’s, and of Ahn and myself (cf. [@AM]) very recently. On the other hand, we have an alphabet soup of topics that play a role here: UPP, WLP, SLP, ACM at the very least. It is interesting to see the ways in which these properties interact, and we also try to illustrate some aspects of this. There are almost as many different notations for Hilbert functions as there are papers on the subject. We will use the following notation. If $I$ is a homogeneous ideal in a polynomial ring $R$, we write $$h_{R/I}(t) := \dim (R/I)_t.$$ If $I$ is a saturated ideal defining a subscheme $V$ of $\mathbb P^n$ then we also write this function as $h_V(t)$ or $h_{R/I_V}(t)$. So where is the geometry? Of course $\dim R_t = \binom{t+n}{n}$, so the information provided by the Hilbert function is equivalent to giving the dimension of the degree $t$ component of $I$. This dimension is one more than the dimension of the linear system of hypersurfaces of degree $t$ defined by $I_t$ (since this latter dimension is projective). What is the base locus of this linear system? Of course $V$ is contained in this base locus, but it may contain more. The results in this paper (e.g. 
Theorem \[BGM general results\], Theorem \[AM general\], Theorem \[BGM UPP results\] and Theorem \[AM UPP results\]) can be viewed as describing the dimension, irreducibility and reducedness of this base locus, based on information about the Hilbert function, and other basic properties, of $V$. We will see that under some situations, just knowing the dimension of this linear system in two consecutive degrees can force the base locus to contain a hypersurface, or anything smaller. (We will concentrate on the curve case.) An important starting point for us (and indeed for almost any discussion of Hilbert functions of standard graded algebras) is Macaulay’s theorem bounding the growth of the Hilbert function. Once we have this, we need Gotzmann’s results about what happens when Macaulay’s bound is achieved. These are both discussed in Section \[preliminary section\], as are several other results related to these. In Section \[WLP section\] we recall the notions of the Uniform Position Property (UPP) and the Weak Lefschetz Property (WLP) and some of their connections. Subsequent sections, especially Section \[UPP results\], continue the discussion of UPP. WLP, while often less visible, lurks in the background of many of the results and computations of this paper, and in fact is an important object of study. We include a short discussion of the behavior of WLP in families of points in Section \[WLP section\], including a new example (Example \[WLP in families\]) showing how, for fixed Hilbert function, WLP can hold in one component of the postulation Hilbert scheme and not hold in another. See also Theorem \[delta 2\]. The focus in this article is the situation where the first difference of the Hilbert function of a set of points, $Z$, in projective space $\mathbb P^n$ attains the same value in two consecutive degrees: $\Delta h_Z(d) = \Delta h_Z(d+1) = s$. 
Depending on the relation between $d$, $s$ and certain invariants of $Z$, we will get geometric consequences for the base locus. In Section \[set stage\] we describe these relations, setting the stage for the main results. These main results are given in Sections \[general results\] and \[UPP results\]. Here we see that under certain assumptions on $d$, the condition $\Delta h_Z(d) = \Delta h_Z(d+1) = s$ guarantees that the base locus of the linear system $|I_d|$ is a curve of degree $s$. This comes from work in [@davis], [@BGM] and [@AM]. Other results follow as well. What is surprising here is that the central condition of [@AM], namely that $d > r_2(R/I_Z)$ (see Section \[preliminary section\] for the definition), is much weaker than the central assumption of the comparable results in \[Bigatti-Geramita-Migliore\], namely $d \geq s$, but the results are very similar. Section \[general results\] focuses on the general results, while Section \[UPP results\] turns to the question of what can be said about this base locus when the points have UPP. There are some differences in the results of \[Bigatti-Geramita-Migliore\] and [@AM] as a result of the differences in these assumptions. Section \[example section\] studies these, and gives examples to show that they are not accidental omissions. Some very surprising behavior is exhibited here. I am grateful to Irena Peeva for asking me to write this paper, which I enjoyed doing. In part it is a greatly expanded version of a talk that I gave in the Algebraic Geometry seminar at Queen’s University in the fall of 2004, and I am grateful to Mike Roth and to Greg Smith for their kind invitation. I would like to thank Jeaman Ahn, Chris Francisco, Hal Schenck and especially Tony Iarrobino for helpful comments. And of course I am most grateful to my co-authors Anna Bigatti and Tony Geramita ([@BGM]) and Jeaman Ahn ([@AM]) for their insights and for the enjoyable times that we spent in our collaboration. 
During the writing of this paper, and some of the work described here, I was sponsored by the National Security Agency (USA) under Grant Number MDA904-03-1-0071. Maximal growth of the Hilbert function {#preliminary section} ====================================== We first collect the notation that we will use throughout this paper. Let $k$ be a field of characteristic zero and let $R = k[x_1,\dots, x_n]$. Let $Z \subset \mathbb P^{n-1}$ be any closed subscheme with defining (saturated) ideal $I = I_Z$. - The [*Hilbert function of $Z$*]{} is the function $$h_Z(t) = \dim(R/I_Z)_t$$ We also may write $h_{R/I}(t)$ for this function. If $A$ is Artinian then we write $$h_A(t) = \dim A_t$$ for its Hilbert function. - We say that $Z$ is [*arithmetically Cohen-Macaulay*]{} (ACM) if the coordinate ring $R/I_Z$ is a Cohen-Macaulay ring. Note that if $Z$ is a zero-dimensional scheme then it is automatically ACM. If $F$ is a homogeneous polynomial, by abuse of notation we will also denote by $F$ the hypersurface of $\mathbb P^{n-1}$ defined by $F$. \[alpha definition\] For a homogeneous ideal $I$ we define $$\alpha = \min \{ t \ | \ I_t \neq 0 \},$$ i.e. $\alpha$ is the [*initial degree*]{} of $I$. If $A = R/I$ is a standard graded $k$-algebra, then there is a famous bound, due to Macaulay (cf. [@fsmacaulay]), that describes the maximum possible growth of the Hilbert function of $A$ from any degree to the next. To give this bound, we need a little preparation. \[ibinomexp\] The [*$i$-binomial expansion*]{} of the integer $c$ ($i, c >0$) is the unique expression $$c = \binom{m_i}{i} + \binom{m_{i-1}}{i-1} + \dots + \binom{m_j}{j},$$ where $m_i > m_{i-1} > \dots > m_j \geq j \geq 1$. Note that the assertion that this representation is unique is something that has to be checked! If $c \in {\mathbb Z}$ ($c>0$) has $i$-binomial expansion as in Definition \[ibinomexp\], then we set $$c^{\langle i \rangle} = \binom{m_i+1}{i+1} + \binom{m_{i-1}+1}{i} + \dots + \binom{m_j
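The $i$-binomial expansion can be computed greedily: take $m_i$ maximal with $\binom{m_i}{i}\le c$, subtract, and recurse with $i-1$ until the remainder vanishes. A short sketch (the helper names are ours), which also computes the quantity $c^{\langle i\rangle}$ appearing in Macaulay's growth bound:

```python
from math import comb

def binomial_expansion(c, i):
    """Return [(m_i, i), (m_{i-1}, i-1), ..., (m_j, j)] with
    c = sum_k comb(m_k, k) and m_i > m_{i-1} > ... > m_j >= j >= 1."""
    terms = []
    while c > 0:
        m = i
        while comb(m + 1, i) <= c:    # greedy: largest m with comb(m, i) <= c
            m += 1
        terms.append((m, i))
        c -= comb(m, i)
        i -= 1
    return terms

def macaulay_growth(c, i):
    """c^<i> = sum_k comb(m_k + 1, k + 1): Macaulay's bound on the value of
    the Hilbert function in degree i + 1, given value c in degree i."""
    return sum(comb(m + 1, k + 1) for m, k in binomial_expansion(c, i))

# Example: 11 = comb(5,3) + comb(2,2), so 11^<3> = comb(6,4) + comb(3,3) = 16.
```

The greedy choice is what forces $m_i > m_{i-1} > \dots$, and uniqueness of the expansion follows from the same argument.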
--- abstract: 'We generalize to topologically non-trivial gauge configurations the description of the Einstein–Yang–Mills system in terms of a noncommutative manifold, as was done previously by Chamseddine and Connes. Starting with an algebra bundle and a connection thereon, we obtain a spectral triple, a construction that can be related to the internal Kasparov product in unbounded KK-theory. In the case that the algebra bundle is an endomorphism bundle, we construct a $PSU(N)$-principal bundle for which it is an associated bundle. The so-called internal fluctuations of the spectral triple are parametrized by connections on this principal bundle and the spectral action gives the Yang–Mills action for these gauge fields, minimally coupled to gravity. Finally, we formulate a definition for a topological spectral action.' address: 'Institute for Mathematics, Astrophysics and Particle Physics, Faculty of Science, Radboud University Nijmegen, Heyendaalseweg 135, 6525AJ Nijmegen, The Netherlands' author: - 'Jord Boeijink and Walter D. van Suijlekom' title: 'The noncommutative geometry of Yang–Mills fields' --- Introduction ============ One of the main applications of noncommutative geometry to theoretical physics is in deriving the Yang–Mills action from purely geometrical data [@ChamseddineConnes]. In fact, the full Lagrangian of the Standard Model of high-energy physics – including the Higgs potential – can be derived by starting with a noncommutative Riemannian spin manifold [@CCM07]. It is interesting to confront this with the geometrical approach to Yang–Mills theory (*cf*. [@Ati79]), using the language of principal fiber bundles and connections thereon. It turns out that the noncommutative geometrical description of [@CC97] corresponds to topologically trivial $SU(N)$-principal bundles. It is the goal of this paper to generalize this to topologically non-trivial gauge configurations. 
As a matter of fact, we derive the Yang–Mills action for gauge fields defined on a non-trivial principal bundle from a noncommutative Riemannian spin manifold, that is, from a spectral triple. Since spectral triples – and more generally, (unbounded) KK-theory – form a natural setting for doing index theory, our construction has potential applications to [*e.g.*]{} the study of moduli spaces of instantons in noncommutative geometry. Our construction will naturally involve algebra bundles and connections thereon, for which – after some preliminaries – we will give their definition in Section \[sect:algebrabundles\]. There, we will also construct a spectral triple from this data. The above connection plays the same role as it does in the internal Kasparov product in KK-theory and we will explore this relation in some detail in Section \[sect:KK\]. In the case that the algebra bundle has typical fiber $M_N(\C)$ – [*i.e.*]{} it is an endomorphism bundle – it is possible to construct a $PSU(N)$-principal bundle, with the algebra bundle as an associated bundle. We will explore this case in Section \[sect:ym\]. The so-called internal fluctuations of the above spectral triple are parametrized by connections on this principal bundle. Finally, we show that the spectral action principle applied to the spectral triple gives the Yang–Mills action on a topologically non-trivial $PSU(N)$-principal bundle, minimally coupled to gravity. In the concluding section, we sketch the definition of a so-called topological spectral action. Acknowledgements {#acknowledgements .unnumbered} ---------------- We thank Simon Brain for a careful proofreading of the manuscript, as well as valuable suggestions and remarks. Preliminaries ============= Spectral triples and the spectral action principle -------------------------------------------------- Spectral triples, as they are introduced in [@Connes] are at the heart of noncommutative geometry. 
In fact, they generalize $spin^c$-structures to the noncommutative world. A *spectral triple* $(\mathcal{A}, \mathcal{H}, D)$ is given by an involutive algebra $\mathcal{A}$ represented faithfully on the Hilbert space $\mathcal{H}$, together with a densely defined, self-adjoint operator $D$ on $\mathcal{H}$ with the following properties: - The resolvent operators $(D - \lambda)^{-1}$ are compact on $\mathcal{H}$ for all $\lambda \notin \mathbb{R}$, - For all $a \in \mathcal{A}$ the operator $[D,a]$ extends to a bounded operator defined on $\mathcal{H}$. \[dfn:spectraltriple\] The triple is said to be *even* if there exists an operator $\Gamma$ on $\mathcal{H}$ with the properties $$\Gamma^*=\Gamma, \quad \Gamma^2=1, \quad \Gamma D + D \Gamma = 0, \quad \Gamma a - a \Gamma = 0.\label{eq:eventriple}$$ If such an operator does not exist, then the triple is said to be *odd*. \[ex:canonicaltriple\] The motivating example for the definition of a spectral triple is formed by the *canonical triple* $$({C^{\infty}(M)}, L^2(M,S), {{D\mkern-11.5mu/\,}})$$ associated to any compact Riemannian spin-manifold $M$.[^1] The Hilbert space $L^2(M,S)$ consists of square-integrable sections of the spinor bundle $S \to M$. The operator ${{D\mkern-11.5mu/\,}}$ is the Dirac operator on the spinor bundle. For even-dimensional spin-manifolds there exists a grading $\gamma$ on $L^2(M,S)$. A spectral triple can have additional structure such as reality. A *real structure* on a spectral triple $({\mathcal}{A}, {\mathcal}{H},D)$ is an anti-unitary operator $J: {\mathcal}{H} \rightarrow {\mathcal}{H}$, with the property that $$J^2 = {\varepsilon}, \quad JD = {\varepsilon}' DJ, \quad \text{and} \quad J\gamma = {\varepsilon}'' \gamma J \text{ (even case)},$$ where the numbers ${\varepsilon}$, ${\varepsilon}'$, ${\varepsilon}''$ are $\pm 1$. 
Moreover, there are the following relations between $J$ and elements of ${\mathcal}{A}$: $$[a,b^0] = 0, \qquad [[D,a], b^0] = 0 \text{ for all } a,b \in \mathcal{A}. \label{eq:jreq1}$$ where $b^0 = J b^* J^{-1} \text{ for all } b \in \mathcal{A}$. A spectral triple $(\mathcal{A},\mathcal{H},D)$ endowed with a real structure $J$ is called a *real spectral triple*. \[dfn:realstructure\] The signs ${\varepsilon}, {\varepsilon}'$ and ${\varepsilon}''$ determine the so-called KO-dimension (modulo 8) of the real spectral triple (see [@ConnesGravity] for more details). \[ex:canonical\] For a spin-manifold and a given spinor bundle $S$ there exists an operator $J_M$ – called charge conjugation – on $L^2(M,S)$ such that $$({C^{\infty}(M)}, L^2(M,S), {{D\mkern-11.5mu/\,}}, J_M) $$ is a real spectral triple. Here the $KO$-dimension is equal to the dimension of the spin-manifold $M$. For more details on the construction of $J_M$ the reader is referred to [*e.g.*]{} [@Varilly]. When the dimension $n$ is even, the inclusion of the grading operator $\gamma$ of Example \[ex:canonicaltriple\] to the datum $$({C^{\infty}(M)}, L^2(M,S), {{D\mkern-11.5mu/\,}}, J_M, \gamma) \label{eq:canonicaltripleJeven}$$ yields a real and even spectral triple. Note that the existence of a real structure $J$ turns ${\mathcal}{H}$ into a bimodule over ${\mathcal}{A}$. Indeed, condition (\[eq:jreq1\]) implies that the right action of ${\mathcal}{A}$ on ${\mathcal}{H}$ defined by $$\xi a := Ja^*J^* \xi, \quad (\xi \in {\mathcal}{H}, a \in {\mathcal}{A})$$ commutes with the left action of ${\mathcal}{A}$. ### Spectral triples and gauge theories In this subsection we show how noncommutative spectral triples naturally give rise to gauge theories, following [@ConnesGravity]. First of all, note that the most natural notion of equivalence of (unital) noncommutative ($C^*$-)algebras is Morita equivalence ([@Rieffel]). 
A unital algebra ${\mathcal}{A}$ is [*Morita equivalent*]{} to a unital algebra ${\mathcal}{B}$ if and only if there exists a ${\mathcal}{B}-{\mathcal}{A}$-module ${\mathcal}{E}$ which is finitely generated and projective as an ${\mathcal}{A}$-module such that ${\mathcal}{B} = \text{End}_{{\mathcal}{A}} {\mathcal}{E}$. Commutative algebras are Morita equivalent if and only if they are isomorphic, justifying this notion of equivalence for noncommutative algebras. If
--- abstract: 'We experimentally explore solutions to a model Hamiltonian dynamical system recently derived to study frequency cascades in the cubic defocusing nonlinear Schrödinger equation on the torus. Our results include a statistical analysis of the evolution of data with localized amplitudes and random phases, which supports the conjecture that energy cascades are a generic phenomenon. We also identify stationary solutions, periodic solutions in an associated problem and find experimental evidence of hyperbolic behavior. Many of our results rely upon reframing the dynamical system using a hydrodynamic formulation.' address: - 'Department of Mathematics, University of Toronto' - 'Department of Mathematics, University of North Carolina, Chapel Hill' - 'Department of Mathematics, Princeton University' - 'School of Mathematics, University of Minnesota' author: - 'James E. Colliander' - 'Jeremy L. Marzuola' - Tadahiro Oh - Gideon Simpson bibliography: - 'ToyModel.bib' title: Behavior of a Model Dynamical System with Applications to Weak Turbulence --- Introduction {#s:intro} ============ Recent investigations in [@CKSTT] reduced the study of the nonlinear Schrödinger equation (NLS), $$\begin{aligned} \label{e:dcnls} i u_t + \Delta u - |u|^2 u = 0, \ \ u(0,x) = u_0(x) \ \text{for} \ x \in \mathbb{T}^2,\end{aligned}$$ to the “Toy Model” dynamical system given by the equation $$\label{e:toy_model} -i\dt b_j (t) = -\abs{b_j(t)}^2 b_j(t) + 2 b_{j-1}^2 \overline{b_j}(t) + 2 b_{j+1}^2 \overline{b_j}(t)$$ for $j = 1,\ldots, N$, with boundary conditions $$\label{e:dirichletbc} b_0(t) = b_{N+1}(t) = 0.$$ The $b_j$’s approximate the energy of families of resonantly interacting frequencies to be described below. The main purpose of this paper is to study the evolution equation , both to gain additional insight into and for its own sake. 
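The Toy Model is straightforward to integrate numerically, and a useful sanity check on any such integration is conservation of the total mass $\sum_j |b_j(t)|^2$, which is preserved exactly by the flow. A minimal fixed-step RK4 sketch (not the integrator used in our experiments; the initial data, step size, and $N$ below are purely illustrative):

```python
def toy_rhs(b):
    """db_j/dt = i(-|b_j|^2 b_j + 2(b_{j-1}^2 + b_{j+1}^2) conj(b_j)),
    with Dirichlet boundary conditions b_0 = b_{N+1} = 0."""
    N = len(b)
    out = []
    for j in range(N):
        left = b[j - 1] if j > 0 else 0.0
        right = b[j + 1] if j < N - 1 else 0.0
        out.append(1j * (-abs(b[j]) ** 2 * b[j]
                         + 2.0 * (left ** 2 + right ** 2) * b[j].conjugate()))
    return out

def rk4_step(b, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = toy_rhs(b)
    k2 = toy_rhs([x + 0.5 * dt * k for x, k in zip(b, k1)])
    k3 = toy_rhs([x + 0.5 * dt * k for x, k in zip(b, k2)])
    k4 = toy_rhs([x + dt * k for x, k in zip(b, k3)])
    return [x + dt / 6.0 * (a + 2 * p + 2 * q + r)
            for x, a, p, q, r in zip(b, k1, k2, k3, k4)]

def mass(b):
    return sum(abs(x) ** 2 for x in b)

# Mass concentrated at j = 1, small perturbations with assorted phases elsewhere.
b = [1.0 + 0.0j, 0.02 + 0.01j, 0.01 - 0.02j, 0.005 + 0.005j, 0.01 + 0.0j]
m0 = mass(b)
for _ in range(2000):
    b = rk4_step(b, 1e-3)
assert abs(mass(b) - m0) < 1e-9  # total mass is conserved by the flow
```

Any drift in `mass(b)` beyond the RK4 truncation error signals a coding mistake rather than genuine dynamics.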
In addition to showing how approximates , a key result of [@CKSTT] is the construction of a solution to which transfers mass from low index $j$ to high $j$. In the underlying NLS problem, this implies there exist arbitrarily large, but finite, energy cascades. Thus, [@CKSTT] showed that Hamiltonian dispersive equations posed on tori can have “weakly turbulent dynamics,” the phenomenon by which arbitrarily high index Sobolev norms can grow to be arbitrarily large in finite time. The question of energy cascades in infinite dimensional dynamical systems was considered by Bourgain [@B04], who asked if there was a solution to with an initial condition $u_0 \in H^s$, $s > 1$, such that $$\label{e1} \limsup_{t \to \infty} \|u(t)\|_{H^s} = \infty.$$ This corresponds to a weakly turbulent dynamic, as there is growth in high Sobolev norms, but no finite time singularity. Indeed, since is defocusing it has a bounded $H^1$ norm. One can view this behavior as an “infinite-time blowup.” Although the result in [@CKSTT] does not answer Bourgain’s question, it makes significant progress. The result says that given a threshold $K\gg1$ and $\delta >0$ there exists $u_0 \in H^s$ with $\| u_0\|_{H^s} \leq \delta$ and $T>0$ such that $\|u(T)\|_{H^s} \geq K$, where $u$ is the solution to the NLS with $u(0) = u_0$. This establishes $$\label{e2} \inf_{\delta > 0} \bigg\{ \limsup_{t \to \infty} \Big(\sup_{\|u_0\|_{H^s} \leq \delta }\|u(t)\|_{H^s}\Big) \bigg\} = \infty,$$ but not . This is one of the first rigorous results exhibiting the shift of energy from low to high frequencies for a nonlinear Hamiltonian PDE viewed as an infinite-dimensional Hamiltonian dynamical system; see also work by Kuksin [@Ku1]. The works Carles-Faou [@Carles:2012jv], Hani [@H11], and Guardia-Kaloshin [@GK12] have also recently treated . A particular achievement of these newer works is their careful construction of error estimates on the non-resonant terms. The dynamics in [@CKSTT] were not shown to be generic. 
Rather, the authors constructed a single solution with the desired properties. The stability of this solution to the flow is unknown. One purpose of this note is to explore this question of “genericity”, by investigating ensembles of data for , and finding that, on average, there is a transfer of energy from low to high indices. In addition to this statistical study, we seek out other interesting dynamics in . Notable behaviors we found include: - Compactly supported, time harmonic, structures; - Spatially and temporally periodic solutions subject to the adoption of periodic boundary conditions, $$\label{e:periodicbc} b_0(t) = b_N(t), \quad b_{N+1}(t) = b_1(t);$$ - Nonlinear hyperbolic behavior with both rarefactive waves and dispersive shock waves. Many of these solutions are obtained by going to the hydrodynamic formulation of the problem. Making the Madelung transformation, $$b_j(t) = \sqrt{\rho_j(t)}\exp(i\phi_j(t))$$ with $\rho_j \geq 0$ and $\phi_j \in \mathbb{R}$, we obtain evolution equations for $\rho_j$ and $\phi_j$: \[e:toy\_model\_hydro\] $$\begin{aligned} \dot\phi_j & = -\rho_j + 2 \rho_{j-1} \cos\bracket{2(\phi_{j-1}-\phi_j)} + 2 \rho_{j+1} \cos\bracket{2(\phi_{j+1}-\phi_j)} , \\ \dot\rho_j & = -4 \rho_j \rho_{j-1} \sin\bracket{2(\phi_{j-1}-\phi_j)} -4 \rho_j \rho_{j+1} \sin\bracket{2(\phi_{j+1}-\phi_j)}. \end{aligned}$$ From this perspective, it is clear that phase interactions play a key role in the dynamics. Properties of the Toy Model {#s:properties} =========================== In this section, we briefly review the connection between and , and review some important structural properties of . Relationship to NLS ------------------- First, we summarize the argument from [@CKSTT] which relates NLS to the Toy Model. 
This begins by studying NLS in Fourier space, $$u(t,x) = \sum_{n \in \mathbb{Z}^2} a_n (t) e^{i n \cdot x + |n|^2 t}.$$ After a choice of gauge eliminating certain trivial interactions, the Fourier amplitudes $\{a_n\}$ are seen to evolve according to $$\label{e:fnls} -i \partial_t a_n = -a_n |a_n|^2 + \sum_{ n_1, n_2, n_3 \in \Gamma (n)} a_{n_1} \bar{a}_{n_2} a_{n_3} e^{i \omega_4 t},$$ where $$\begin{gathered} \omega_4 = |n_1|^2 -|n_2|^2+|n_3|^2-|n|^2,\\ \Gamma (n) = \left\{ (n_1,n_2,n_3) \in (\mathbb{Z}^2)^3 | n_1 - n_2 + n_3 = n, \ n_1 \neq n, \ n_3 \neq n \right\}.\end{gathered}$$ For any $n$, the most significant contributions in the summation will be the elements of $\Gamma(n)$ belonging to the resonant set, $$\Gamma_{\rm res} (n) = \left\{ (n_1,n_2,n_3) \in \Gamma (n) \mid |n_1|^2 - |n_2|^2 + |n_3|^2 - |n|^2 = 0 \right\}.$$ Restricting to the resonant modes, we have $$\label{e:resfnls} -i \partial_t r_n = -r_n |r_n|^2 + \sum_{ n_1, n_2, n_3 \in \Gamma_{\rm res} (n)} r_{n_1} \bar{r}_{n_2} r_{n_3}.$$ A union of disjoint sets,
--- abstract: 'A new era of directly imaged extrasolar planets has produced a three-planet system [@2008M], where the masses of the planets have been estimated by untested cooling models. We point out that the nominal circular, face-on orbits of the planets lead to a dynamical instability in $\sim$$10^5$ yr, a factor of at least $100$ shorter than the estimated age of the star. Reduced planetary masses produce stability only for unreasonably small planets ($\lesssim 2$ $M_{\rm Jup}$). Relaxing the face-on assumption, but still requiring circular orbits while fitting the observed positions, makes the instability time even shorter. A promising solution is that the inner two planets have a 2:1 commensurability between their periods, and they avoid close encounters with each other through this resonance. That the inner resonance has lasted until now, in spite of the perturbations of the outer planet, leads to a limit $\lesssim 10 M_{\rm Jup}$ on the masses unless the outer two planets are *also* engaged in a 2:1 mean-motion resonance. In a double resonance, which is consistent with the current data, the system could survive until now even if the planets have masses of $\sim20$ $M_{\rm Jup}$. Apsidal alignment can further enhance the stability of a mean-motion resonant system. A completely different dynamical configuration, with large eccentricities and large mutual inclinations among the planets, is possible but finely tuned.' author: - 'Daniel C. Fabrycky and Ruth A. Murray-Clay' bibliography: - 'ms.bib' title: ' Stability of the directly imaged multiplanet system HR 8799: resonance and masses ' --- Introduction {#sec:intro} ============ The method of direct imaging for the discovery of extrasolar planets has yielded spectacular first results over the last several years [@2004C; @2008L; @2008M; @2008K; @2009Lagrange]. 
Direct imaging is a method for discovering planets located far from their host stars, an as-yet unexplored region of parameter space, and it promises new opportunities to characterize the planets using their own radiation. However, because the gravitational influence of directly-imaged planets is not measured and the astrometric orbital arcs obtained so far are short, determining the planetary masses and orbital architectures of these systems is challenging. In the newly-discovered planetary system HR 8799 ($=$ HD 218396), three planets have been imaged at projected separations of 24, 38, and 68 AU from their host star [@2008M]. The best current estimate of their masses is derived from the planetary luminosities, measured in the infrared. Because these planets are young and massive, they are still radiating prodigiously as they contract, cool, and become more gravitationally bound. The masses are estimated using untested models of this contraction and cooling process. One class of such models, the “hot-start” models, provides the largest luminosity possible at a certain mass and age, given assumptions about opacities in the planetary atmosphere. Hot-start models have initially extended envelopes and a large entropy per baryon; even hotter models converge to a common track after a few Myr [@2002B]. Therefore, for a given age and luminosity, these models should provide a lower limit on the mass. For HR 8799, the lower-limit masses are $5$-$11$, $7$-$13$, and $7$-$13$ $M_{\rm Jup}$ for planets b, c, and d, respectively, based on a rather uncertain stellar age of $30$-$160$ Myr[^1], which is presumably also roughly the age of the planets. The following simple calculation illustrates why a lower mass limit can be inferred from a planet’s contraction luminosity. 
For HR 8799, the planetary luminosities have been measured to be $L \simeq 10^{-5} L_\odot$, and radii of $R \simeq 1.2 R_{\rm Jup}$ were derived from the objects’ temperatures, measured by fitting photometry with a variety of synthetic spectral energy distributions [@2008M]. Because the objects are cooling, they were more luminous in the past, so they have radiated at least $L t_{\rm age} \gtrsim 4 \times 10^{43}$ erg. Their current binding energy, which supplied this luminosity, is $\simeq G M^2 R^{-1} \simeq 3 \times 10^{43}(M/M_{\rm Jup})^2$ erg, where the radius is roughly independent of the mass for Jupiter-mass objects. Consequently, $M > 1 M_{\rm Jup}$. Cooling models also take into account that $L$ diminishes with time, and thus arrive at a considerably larger mass. Whether this larger calculated mass is a robust lower limit depends on the accuracy of the model. Recently, [@2009D] measured the dynamical masses for a system of brown dwarfs (both of mass $\approx 57 M_{\rm Jup}$) and showed that cooling models overpredict the component masses by $\sim$25%. If energy is lost during the process of planet formation, then an even larger planet mass would be needed to generate the currently observed luminosity. For example, in the planetary core-accretion models of [@2007M], considerable luminosity is radiated in the accretion stream and shock, and that energy is not internalized by the planet. At the end of formation the planet has less gravitational potential energy to later supply its luminosity. The integrated luminosity since formation would not account for the planet’s current binding energy, so the mass needed to supply an observed luminosity at a given age may be much bigger. The HR 8799 system has survived an order of magnitude longer than the primordial gas disk, which, if typical of disks of A stars, lasted $\lesssim3$ Myr [@1993H; @2005H]. The system has therefore had time to dynamically evolve in the absence of gas. 
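The back-of-the-envelope mass bound sketched above is easy to verify numerically. A sketch in cgs units, using the standard physical constants and the nominal age and luminosity quoted in the text:

```python
# Lower bound on planet mass: energy radiated so far cannot exceed the
# current gravitational binding energy ~ G M^2 / R.  All quantities in cgs.
G     = 6.674e-8      # gravitational constant [cm^3 g^-1 s^-2]
M_jup = 1.898e30      # Jupiter mass [g]
R_jup = 7.149e9       # Jupiter radius [cm]
L_sun = 3.828e33      # solar luminosity [erg/s]
yr    = 3.156e7       # seconds per year

L_planet = 1e-5 * L_sun        # measured planetary luminosity
t_age    = 30e6 * yr           # conservative (young) stellar age
E_rad    = L_planet * t_age    # lower bound on energy radiated, ~4e43 erg

R = 1.2 * R_jup                # radius inferred from photometry
M_min = (E_rad * R / G) ** 0.5 # binding energy G M^2 / R must exceed E_rad
# M_min / M_jup comes out near 1.1: the planets must exceed a Jupiter mass
```

As the text notes, full cooling models sharpen this crude bound considerably, because they account for the decline of $L$ with time.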
Though the planets orbiting HR 8799 are separated by tens of AU, the inferred minimum masses of the planets are large enough that their mutual gravitational interactions are important. For example, a planet with mass $M_p = 10$ $M_{\rm Jup}$ orbiting a star of mass $M_* = 1.5$ $M_\odot$ at semi-major axis $a = 40$ AU dominates gravitational dynamics within its Hill radius of size $R_H = a(M_p/3M_*)^{1/3} = 5$ AU. Because $R_H$ is a large fraction of the planetary separation, gravitational interactions among the planets can substantially modify the dynamical evolution of the system. In fact, the nominal orbits reported in the discovery paper [@2008M] are unstable. We integrated the Newtonian equations of motion of the proposed system using the Bulirsch-Stoer (BS) algorithm of the [*Mercury*]{} [@1999C] package (version 6.2), with an accuracy parameter of $10^{-12}$. The planets are assigned circular, face-on orbits, and we used the nominal masses for all four bodies: $7$, $10$, $10$ $M_{\rm Jup}$ for planets b, c, and d, respectively, and $1.5$ $M_\odot$ for the star[^2]. Figure \[fig:ae\] shows the results for the semi-major axis and maximum radial excursion of each planet as a function of time. A close encounter between planets c and d at $0.298$ Myr (i.e., they come within one Hill radius of one another) leads to a brief interval of strong scattering which ejects planet b at $0.316$ Myr (i.e., it reaches $>500$ AU with positive energy, and is removed from the simulation). Planets c and d swap orbits and finish in a stable configuration, with no further semi-major axis evolution, but they exhibit a regular secular eccentricity cycle with a period of $1.5$ Myr. This evolution is not unique in its details since the orbital evolution is chaotic. However, qualitatively similar evolutions are common for simulated planetary systems constructed to match the discovery data: instability usually sets in well before the star’s age of $\gtrsim 30$ Myr. 
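The Hill-radius estimate quoted above can be reproduced in a few lines, using the same nominal values as in the text:

```python
# Hill radius R_H = a * (M_p / (3 M_*))**(1/3) for the nominal middle planet.
M_jup_over_M_sun = 9.543e-4          # Jupiter-to-solar mass ratio

a      = 40.0                        # semi-major axis [AU]
M_p    = 10.0 * M_jup_over_M_sun     # planet mass [M_sun]
M_star = 1.5                         # stellar mass [M_sun]

R_H = a * (M_p / (3.0 * M_star)) ** (1.0 / 3.0)
# R_H comes out near 5 AU -- a large fraction of the ~14-30 AU planet spacings
```

Because $R_H$ scales only as the cube root of the mass ratio, even substantial revisions to the planetary masses change this estimate by modest factors.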
The goal of this paper is to determine orbits that are consistent with the astrometric data, the inferred planetary masses, *and* with dynamical stability over the system’s age. Neglecting stability considerations, there is a large amount of freedom in fitting orbits to the discovery data, because (1) the measured astrometric arcs cover only $\sim$2% of the middle orbit and $\sim$1% of the outer orbit, (2) the velocity of the inner planet is almost entirely unconstrained, and (3) the line-of-sight positions and velocities of the planets relative to the star are unknown. [*A priori*]{}, two classes of orbital architectures are possible—those in which the planets occupy roughly coplanar orbits and those with large mutual inclinations. Since planets form in disks, it is likely that they initially occupy nearly coplanar orbits, and systems that remain stable indefinitely are likely to stay roughly coplanar. Alternatively, the system may not be indefinitely stable. While old compared to the lifetime of the protoplanetary disk, the current age of the planetary system is probably less than one tenth the main-sequence lifetime of the star ($\sim$$1.5$ Gyr; @1967I). Without further analysis, it is thus possible that the planets are in the process of scattering off of one another, currently have large eccentricities and mutual inclinations, and will not be stable over the lifetime of the star. In fact, current models predict that
--- abstract: 'We show that higher-order nonlinear indices ($n_4$, $n_6$, $n_8$, $n_{10}$) provide the main defocusing contribution to self-channeling of ultrashort laser pulses in air and Argon at 800 nm, in contrast with the previously accepted mechanism of filamentation where plasma was considered as the dominant defocusing process. Taking them into account allows us to reproduce experimentally observed intensities and plasma densities in self-guided filaments.' author: - 'P. Béjot$^{1,2}$' - 'J. Kasparian$^{1}$' - 'S. Henin$^1$' - 'V. Loriot$^{2}$' - 'T. Vieillard$^{2}$' - 'E. Hertz$^{2}$' - 'O. Faucher$^{2}$' - 'B. Lavorel$^{2}$' - 'J.-P. Wolf$^{1}$' title: 'Higher-order Kerr terms allow ionization-free filamentation in gases' --- The filamentation of ultrashort laser pulses in gases [@BraunKLDSM95] has attracted considerable attention in recent years because of its fundamental interest as well as its potential applications [@KasparianW08; @BergeSNKW07; @CouaironM07; @ChinHLLTABKKS05]. Filaments are self-channeled structures propagating over many Rayleigh lengths without diffraction. They are generally considered to stem from a dynamic balance between Kerr focusing and defocusing by the plasma generated at the non-linear focus. Numerical simulations based on this balance report a core intensity of several 10$^{13}$ W/cm$^{2}$ and typical electron densities of several 10$^{16}$ cm$^{-3}$ [@BergeSNKW07; @CouaironM07]. Consequently, plasma ionization is generally regarded as necessary for an ultrashort pulse to experience self-channeling in gases. But the plasma density provided by this description of filamentation appears to be overestimated compared with experimental measurements. 
As reviewed in [@KasparianSC00], such measurements are dispersed over several orders of magnitude, especially due to different focusing conditions and divergent assumptions about the core diameter of the filaments, but the electron density in a filament generated by a slightly focused beam is more likely to amount to $10^{14} - 10^{15}$ cm$^{-3}$ [@KasparianSC00]. This value, as well as the discrepancy by more than one order of magnitude with numerical simulations, was recently confirmed [@ThebergeLSBC06]. The observation of so-called plasma-free filamentation [@MechainCADFPTMS04; @DubietisGTT04], as well as the consideration that a balance between the instantaneous Kerr term and the time-integrated plasma contribution implies strongly asymmetric pulse shapes [@StibenzZS06], has periodically led researchers to challenge the role of plasma in laser filamentation. However, up to now, no other process has seriously challenged plasma as the main defocusing process balancing the Kerr self-focusing. Nurhuda et al. proposed that the saturation of the nonlinear susceptibility $\chi^{(3)}$ should be taken into account [@NurhudaSM08]. Such saturation can be described as negative higher-order Kerr terms. The nonlinear index of air induced by high-power femtosecond laser pulses can be written as $\Delta\text{n}_\text{Kerr}=n_2I+n_4I^2+n_6I^3+n_8I^4+\dots$, where $I$ is the incident intensity and the $n_{2*j}$ coefficients are related to $\chi^\text{(2*j+1)}$ susceptibilities. This nonlinear index is generally truncated after its first term, $n_2$ [@KasparianW08; @BergeSNKW07; @CouaironM07; @ChinHLLTABKKS05], mostly because of the lack of data about the values of the subsequent terms. Numerical works have investigated the influence of the quintic nonlinear response on the propagation dynamics in gases, although without knowledge of its value [@AkozbekSBC01; @Couairon03; @VincotteB04; @FibichI04; @Centurion05]. 
They showed that $n_4$ is negative, *i.e.* the $\chi^{(5)}$ susceptibility is a defocusing term. It tends to stabilize the propagation of ultrashort laser pulses in air and to decrease both the electron density and the maximal on-axis intensity. Consequently, the losses due to multiphoton absorption (MPA), which lead to the end of the filamentation, are reduced and pulse self-channeling is sustained over longer distances. However, plasma generation still appeared as necessary for filament stabilization. Moreover, the value of $n_4$ was set arbitrarily, which limits the conclusiveness of these studies. Finally, the lack of data prevented any evaluation of a possible effect of the further-order nonlinear refractive indices. However, the higher-order Kerr indices have recently been measured in N$_2$, O$_2$ and Ar by Loriot *et al.* [@Loriot09]. The reader is referred to this work for a detailed description of this experimental determination. In this Letter, we investigate their influence on numerical simulations of laser filamentation. We show that their values are sufficient to provide the dominant contribution to the defocusing terms of self-channeling. Their implementation in numerical simulations yields the experimentally observed plasma density. As a consequence, contrary to previously held beliefs, a plasma is not required for the observation of filamentation. Rather, plasma generation can be considered as a by-product of the self-guiding of laser filaments. We implemented these nonlinear coefficients into a numerical model describing the propagation of ultrashort high power pulses [@BejotBBW07]. We consider a linearly polarized incident electric field at $\lambda_0$=$800$ nm with cylindrical symmetry around the propagation axis $z$. 
The scalar envelope $\varepsilon(r,t,z)$, assumed to vary slowly in time and along $z$, evolves according to the propagation equation: $$\begin{aligned} \begin{aligned} \label{Equation3} &\partial_z\varepsilon =\frac{i}{2k_0}\triangle_{\bot}\varepsilon-i\frac{k''}{2}\partial_t^2\varepsilon+i\frac{k_0}{n_0}\left(\sum_{j=1}^{4}{n_{2*j}|\varepsilon|^{2*j}}\right)\varepsilon \\ &-i\frac{k_0}{2n_0^2\rho_c}\rho\varepsilon-\frac{\varepsilon}{2}\sum_{l=\mathrm{O}_2,\mathrm{N}_2}{\left(\sigma_l\rho+\frac{W_l(|\varepsilon|^2)U_l}{|\varepsilon|^2}(\rho_{at_l}-\rho)\right)} \end{aligned}\end{aligned}$$ where $k_0$=${2\pi n_0}/{\lambda_0}$ and $\omega_0={2\pi c}/{\lambda_0}$ are the wavenumber and the angular frequency of the carrier wave respectively, $n_0$ is the linear refractive index at $\lambda_0$, $k''=\frac{\partial^2 k}{\partial\omega^2}|_{\omega_0}$ is the second order dispersion coefficient, $\rho_{at}$ the neutral atoms density, $\rho$ the electron density, $\rho_c=\epsilon_0 m \omega_0^2/e^2$ is the critical electron density, $m$ being the electron mass and $e$ its charge. $W_l(|\varepsilon|^2)$ and $\sigma_l$ are the photoionization probability and the inverse Bremsstrahlung cross-section of species $l$ respectively (with ionization potential $U_l$), and $t$ refers to the retarded time in the reference frame of the pulse. The right-hand terms of Eq.(\[Equation3\]) account for spatial diffraction, second order group-velocity dispersion (GVD), instantaneous nonlinear effects (*i.e.* the nonlinear refractive index of air, up to the $n_8$ term), plasma defocusing, inverse Bremsstrahlung and multiphoton absorption respectively. As compared with previously published data [@Loriot09], we used values of the higher-order refractive indices (Table \[tab1\]) incorporating the correction for the coherent artifact [@Oudar82], *i.e.* adequately substracting its electronic contribution at play in the original measurement of Ref. [@Loriot09]. 
This correction results in dividing each $n_{2*j}$ term by $j+1$. Owing to the short pulse duration ($30\ fs$) used in the simulations, the delayed orientational response is disregarded. The propagation dynamics of the electric field is coupled with the density of the electrons originating from the ionization of both O$_2$ and N$_2$: $\rho=\rho_{\mathrm{O}_2}+\rho_{\mathrm{N}_2}$. This density is governed by the multi-species generalized Keldysh-PPT (Perelomov, Popov, Terent’ev) formulation [@KasparianSC00; @BergeSNKW07]. --------- ----------------------- ----------------------- ----------------------- ----------------------- $n_2\ (10^\text{-19}$ $n_4\ (10^\text{-33
--- abstract: 'The BabyAI platform is designed to measure the sample efficiency of training an agent to follow grounded-language instructions. BabyAI 1.0 presents baseline results of an agent trained by deep imitation or reinforcement learning. BabyAI 1.1 improves the agent’s architecture in three minor ways. This increases reinforcement learning sample efficiency by up to $3 \times$ and improves imitation learning performance on the hardest level from $77 \%$ to $90.4 \%$. We hope that these improvements increase the computational efficiency of BabyAI experiments and help users design better agents.' author: - 'David Yu-Tung Hui' - 'Maxime Chevalier-Boisvert' - Dzmitry Bahdanau - Yoshua Bengio bibliography: - 'references.bib' title: 'BabyAI 1.1' --- Introduction ============ The BabyAI platform [^1] is an environment designed to evaluate how well an agent follows grounded-language instructions. The quality of an agent is measured with two metrics: its success rate at following instructions and the number of episodes or demonstrations required to train it. BabyAI 1.0, [@babyai_iclr19], presents results of a baseline agent trained by reinforcement and imitation learning (RL and IL) methods. In this technical report, we present three modifications that significantly improved the baseline results. Two modifications are to the network’s architecture and the third to the representation of the visual input. The network is modified by removing maxpooling at lower levels in the visual encoder and adding residual connections around FiLM layers [@perez_film:_2017]. The visual representation is modified to use learned embeddings in a Bag-of-Words fashion [@mikolov2013efficient]. 
Proposed Architectural and Representational Modifications {#sec:arch} ========================================================= This section describes the network architecture and BabyAI 1.0 visual representation before detailing the two architectural modifications and two alternate visual representations. The BabyAI platform has nineteen levels which can be categorised into two types: small and big [@babyai_iclr19]. Small levels are single-room but big levels are usually $3 \times 3$ rooms. The BabyAI 1.0 baseline agent has two architectures used on the small and big levels. These architectures have the same structure and are illustrated by Figure \[fig:models\].a. The architecture takes two inputs, a visual input and a linguistic instruction. We use FiLM to combine the outputs of a convolutional ‘visual encoder’ with a GRU [@cho_learning_2014] embedding of the instruction. We refer the reader to Appendix \[section:arch\] for more details concerning the distinction between ‘big’ and ‘small’. Figures \[fig:models\].b and \[fig:models\].c respectively present two architectural modifications: removing pooling in the visual encoder and adding residual connections around the image convolution and the FiLM layers. To ensure that the shape of the visual encoder is consistent after pooling is removed, we change filter size from $2\times2$ to $3\times3$. We expect these changes to improve sample efficiency because they enable more information to be transmitted to higher layers. At every timestep, the agent receives visual information about a $7 \times 7$ grid of tiles which are immediately in the direction it is facing. BabyAI 1.0 represents a tile by a triple-integer value. The first integer describes the type of object in the tile and the second integer the object’s color. The third integer is only used if the object is a door, and describes whether it is open, closed or locked. BabyAI 1.0 represents a visual input by concatenating all tile representations together. 
This results in a tensor of size $7 \times 7 \times 3$. A gridworld tile (and thus the visual input) can also be represented in two other ways: as a “bag of words” or by an RGB image. In the Bag-of-Words (“BOW”) approach a set of symbols that describes the tile is embedded in a trainable lookup table. This approach is commonly used in gridworlds such as [@leike2017ai], [@rajendran2015attend] and [@SchraderSokoban2018]. Because a tile in BabyAI can be represented by three integers, we use three look-up tables and use each integer as a key. A tile is then represented by the mean of the three looked-up feature vectors. As with the BabyAI 1.0 visual representation, the BOW representation is formed by combining all tile representations into a 3D tensor. As we set the dimensionality of a feature vector to 128, the dimensionality of the BOW visual representation is $7 \times 7 \times 128$. This is depicted in Figure \[fig:models\].d. The contents of a tile can also be represented by a 3-channel Red-Green-Blue (RGB) image. As we choose the size of an image to be $8$ pixels, a tile is thus represented by an image stored in an $8 \times 8 \times 3$ tensor. The entire $7 \times 7$ visual input can then be represented as a 3-channel RGB image with dimensionality $56 \times 56 \times 3$. An architecture using this visual representation is illustrated in Figure \[fig:models\]. Architecture names are structured in two parts. The first part is either “original”, “bow” or “pixels”, indicating which visual representation is used. The second part is optional and describes whether “\_endpool” (because the only source of pooling is at the end) or “\_res” (adding residual connections) is present in the architecture. Experiments =========== To determine the best architecture and visual representation, we follow BabyAI 1.0 and experiment on the six easiest BabyAI levels. These six levels consist of five single-room levels and one multi-room level. 
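The Bag-of-Words tile encoding described above can be sketched as follows. This is a minimal illustration, not the BabyAI implementation: the lookup tables here are random stand-ins for the trainable embeddings, and the per-channel vocabulary size `VOCAB` is an assumed upper bound (the real BabyAI integer ranges differ per channel).

```python
import numpy as np

GRID = 7          # 7x7 visible tiles
CHANNELS = 3      # (object type, color, door state)
VOCAB = 16        # assumed upper bound on integer values per channel
EMB_DIM = 128     # feature dimensionality used in the paper

rng = np.random.default_rng(0)
# One lookup table per integer channel (trainable in the real agent).
tables = [rng.normal(size=(VOCAB, EMB_DIM)) for _ in range(CHANNELS)]

def bow_encode(obs):
    """obs: (7, 7, 3) integer array -> (7, 7, 128) float array."""
    # Look up each channel's integer in its own table, then average.
    looked_up = [tables[c][obs[..., c]] for c in range(CHANNELS)]
    return np.mean(looked_up, axis=0)

obs = rng.integers(0, VOCAB, size=(GRID, GRID, CHANNELS))
features = bow_encode(obs)
print(features.shape)  # (7, 7, 128)
```

Averaging the three looked-up vectors keeps the output shape independent of how many symbols describe a tile, which is what lets the same convolutional encoder consume the BOW tensor in place of the raw $7 \times 7 \times 3$ input.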
Then, we present IL performance benchmarks on all levels. Finding the Best Architecture ----------------------------- We measure RL sample and computational efficiency and IL performance with varying numbers of demonstrations. RL experiments were structured in two stages. The first set of experiments investigated architectural modifications. Results in Table \[tbl:arch\] showed that removing pooling in the visual encoder significantly improved sample efficiency, whereas adding residual connections produced both increases and decreases. Nevertheless, we adopted residual connections for further experiments because the sample efficiency increase on PutNextLocal greatly outweighed the total decrease on GoToLocal and GoTo. The second set of experiments investigated visual representations. Results in Table \[tbl:visual\] do not show a wide variation in sample efficiency. Because training from pixels was difficult on the two hardest levels (GoTo, PutNextLocal), we halved the learning rate ($\alpha$ in Adam [@kingma_adam:_2015], from $1 \times 10^{-4}$ to $5 \times 10^{-5}$) and reran the second set of experiments. The resulting statistics in Table \[tbl:visual2\] do not show much variation between the three visual representations. We now consider the computational efficiency of training each of these five architectures. Training from pixels has a slower throughput than the other visual representations (Table \[tbl:fps\]). Because of this and no clear advantage in RL sample efficiency (Tables \[tbl:visual\], \[tbl:visual2\]), we drop further experiments on pixels. Now, we investigate whether changing the visual representation to BOW and the two architectural modifications improve IL performance. @babyai_iclr19 measures sample efficiency using an interpolated function fitted with a Gaussian Process (GP) [@rasmussen_gaussian_2005]. 
In our experiments we found that an infeasibly large number of training runs would be required to obtain a sufficiently confident sample efficiency estimate from the GP. Instead, we follow Table 6 in [@zolna2020combating], who evaluate IL by observing the success rate of agents trained with varying numbers of demonstrations. @zolna2020combating use $1/64$^th^, $1/8$^th^ and all of 1 million demonstrations. We use 5, 10, 50, 100 and 500 thousand demonstrations, which correspond to $1/200$^th^, $1/100$^th^, $1/20$^th^, $1/10$^th^ and $1/2$ of the total 1 million demonstrations. IL results in Table \[tbl:il\] show that training from BOW is advantageous compared to the original BabyAI 1.0 visual representation. Interestingly, we find that for hard levels with few demonstrations, the architectural modifications alone are not beneficial for training. This is offset by changing the visual representation to BOW. Benchmarking the Best Modifications ----------------------------------- Having constructed the BabyAI 1.1 agent, we benchmark its performance over all nineteen BabyAI levels. Table \[tbl:baseline\] shows that the modifications found in the previous section yielded improvements in performance over all levels. Four more levels (Unlock, Putnext, Synth and SynthLoc) were solved, and success rate on the hardest level (BossLevel) increased by $13.4$ percentage points, from $77 \%$ to $90.4 \%$. Conclusion ========== As BabyAI was intended to be a lightweight experimental platform, BabyAI 1.0 used a specific hand-crafted representation rather than a more realistic pixel-based representation. We have shown that training from other visual representations (BOW and pixels) is feasible, and is sometimes more sample efficient (Table \[tbl:il\]). However, learning from pixels took longer to compute (Table \[tbl:fps\]) and was more sensitive to hyperparameters. Besides,
--- abstract: 'A search for RR Lyrae stars has been conducted in the publicly available data of the Northern Sky Variability Survey ([*NSVS*]{}). Candidates have been selected by the statistical properties of their variation: the standard deviation, skewness and kurtosis, with appropriate limits determined from a sample of 314 known RRab and RRc stars listed in the GCVS. From the period analysis and light curve shape of over 3000 candidates, 785 RR Lyrae stars have been identified, of which 188 are previously unknown. The light curves were examined for the Blazhko effect and several new stars showing it were found. Six double-mode RR Lyrae stars were also found, of which two are new discoveries. Some previously known variables have been reclassified as RR Lyrae stars and, similarly, some RR Lyrae stars have been found to be other types of variable, or not variable at all.' author: - | Patrick Wils$^{1}$, Christopher Lloyd$^{2}$, Klaus Bernhard$^{3,4}$\ $^{1}$Vereniging voor Sterrenkunde, Belgium, email: patrick.wils@cronos.be\ $^{2}$Space Science & Technology Department, Rutherford Appleton Laboratory, Chilton, Didcot, Oxon. OX11 0QX, UK, email: cl@astro1.bnsc.rl.ac.uk\ $^{3}$A-4030 Linz, Austria, email: klaus.bernhard@liwest.at\ $^{4}$Bundesdeutsche Arbeitsgemeinschaft für Veränderliche Sterne e.V. (BAV), Munsterdamm 90, D-12169 Berlin, Germany date: 'Accepted 2006 February 23. Received 2006 February 21; in original form 2006 January 16' title: A Catalogue of RR Lyrae Stars from the Northern Sky Variability Survey --- \[firstpage\] stars: variables: other - stars: Population II Introduction ============ Statistical studies of the relative numbers of the different classes of RR Lyrae variable stars, and especially the incidence rate of multi-periodicity, may give indications of the metallicity of different stellar systems and of their evolution [see e.g. @mos]. 
Exhaustive studies have been done in the Magellanic Clouds by the [*MACHO*]{} collaboration [@macho and 2003] and the [*OGLE*]{} survey [@oglelmc]. Other studies have searched for RR Lyrae stars in the Galaxy, such as [*QUEST*]{} [@quest] and also [*OGLE*]{} [@collinge]. Most of the stars found in these studies are faint, and limited to a small region of sky (the galactic bulge and the equator for [*OGLE*]{} and [*QUEST*]{} respectively). The Robotic Optical Transient Search Experiment [[*ROTSE-1*]{}, @rotse] found a fairly large number of previously unknown bright (magnitude $< 15$) RR Lyrae stars in part of the sky. This paper sets out to extend this search for field galactic RR Lyrae stars to the whole northern sky in the [*ROTSE-1*]{} data, made publicly available via the Internet [Northern Sky Variability Survey - [*NSVS*]{}, @skydot]. Methodology =========== With only photometric data, and no spectral information, the type of a short period variable can only be determined once the (phased) light curve is known, and hence only once the period is known. However, determining the period from sparse data is a computationally demanding process. Therefore it was decided to limit the number of objects by statistical parameters involving much less computation: the standard deviation, skewness, kurtosis and the mean square of successive differences of the magnitude data. Limiting conditions were then derived from the values of these statistics for the RR Lyrae stars listed in the General Catalogue of Variable Stars [GCVS, @gcvs and its online edition]. The aim was to set these limits as strict as possible, so that not too many objects needed to be checked, while still including the majority of RR Lyrae stars. The GCVS stars that do not meet the criteria then provide an estimate of the completeness of the survey. 
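The statistical pre-selection described above can be sketched as follows. The function and variable names are ours, and no cut values are shown because the paper calibrates its limits on the GCVS control sample; this only illustrates how the four statistics are computed from one light curve.

```python
import numpy as np

def candidate_stats(mags):
    """Selection statistics for one light curve (array of magnitudes)."""
    mags = np.asarray(mags, dtype=float)
    z = (mags - mags.mean()) / mags.std()
    return {
        "std": mags.std(ddof=1),
        # RRab light curves skew negative in magnitude (brief bright excursions)
        "skew": np.mean(z ** 3),
        "kurtosis": np.mean(z ** 4) - 3.0,   # excess kurtosis
        # mean square of successive differences of the magnitude data
        "msd": np.mean(np.diff(mags) ** 2),
    }

# A crude RRab-like magnitude distribution: mostly faint, briefly bright.
mags = np.concatenate([np.full(90, 12.5), np.full(10, 12.0)])
print(candidate_stats(mags)["skew"])  # negative (about -2.7) for this distribution
```

Computing these moments costs a single pass over each light curve, which is why they make a cheap filter compared with running a period search on all twenty million objects.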
Data ---- The [*ROTSE-1*]{} was an unfiltered CCD survey of the sky from the north pole to declination $\sim -38$, reaching magnitude $\sim 15$ with varying levels of completeness. The survey lasted nominally for one year but depending on the circumstances coverage of individual objects may be significantly less than this. Objects typically have 100 to 500 measurements with a median photometric accuracy of 0.02 mag for 10th magnitude stars and a positional accuracy of 2". The spatial resolution of 14" compromises the photometry in crowded fields, typically those with $|b| < 20$, but also at higher galactic latitudes for stars with companions within $\sim 45"$. The data are publicly available from the Sky Database for Objects in Time-Domain (SkyDOT) web site [@skydot] and it is possible to select data with respect to 8 extraction flags and 7 photometric correction flags. The default selection for good data sets all but one, [PATCH]{}, of the photometric correction flags and only one, [SATURATED]{}, of the extraction flags. However, experience of working with the data has shown that observations with the extraction flag [APINCOMPL]{} set are often completely out of range and should be rejected. On the other hand, data with the photometric correction flag [RADECFLIP]{} set are often indistinguishable from the other data. So the data have been selected with the [SATURATED]{} and [APINCOMPL]{} flags set and the [PATCH]{} and [RADECFLIP]{} flags unset. It was also decided that only stars with 100 or more good [*NSVS*]{} observations were to be considered, in order to get good statistics and reliable period determinations, and possibly to detect multiperiodicity (double-mode pulsation or a Blazhko effect). With fewer data points, the statistics may be influenced substantially by erroneous observations, such as those introduced by e.g. a close companion. 
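The flag-based point selection described above amounts to a bitmask test. The bit positions below are placeholders for illustration only; the actual flag encoding is defined by the SkyDOT database. Rejecting `SATURATED` and `APINCOMPL` while tolerating `PATCH` and `RADECFLIP` might look like:

```python
# Assumed bit positions, for illustration only.
SATURATED = 1 << 0   # extraction flag: saturated image
APINCOMPL = 1 << 1   # extraction flag: incomplete aperture
PATCH     = 1 << 2   # photometric correction flag (tolerated here)
RADECFLIP = 1 << 3   # photometric correction flag (tolerated here)

def keep_observation(flags):
    """Keep a data point unless SATURATED or APINCOMPL is set;
    PATCH and RADECFLIP are deliberately not used for rejection."""
    return (flags & (SATURATED | APINCOMPL)) == 0

def usable_object(per_point_flags, min_points=100):
    """Retain an object only if it has at least 100 good points."""
    return sum(keep_observation(f) for f in per_point_flags) >= min_points
```

Note the parentheses around the bitwise test: in Python `==` binds tighter than `&`, so `flags & mask == 0` would silently compare `mask` to zero first.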
Also, it is then not always possible to derive the correct period: in view of the sampling frequency and the rather short total time span of the available data (less than a year), alias frequencies will be more important. RR Lyrae stars are fairly blue stars (spectral types A and F). The [*NSVS*]{} survey, however, observed only in one colour (unfiltered CCD), so colour information has to be retrieved from another source, such as [*2MASS*]{} [@2mass]. The [*NSVS*]{} positions are not very accurate, however, and matching them to [*2MASS*]{} coordinates may be troublesome in crowded fields. In view of this, and of reddening, it was decided not to use colour information as a filter. Control group: GCVS stars ------------------------- The GCVS stars to be considered for the control group had to be well known and have an accurate position. Therefore only the GCVS types RRab or RRc were taken (no RR, RR:, RRab: or RRc: stars, i.e. the GCVS classification should be precise enough to give the exact subtype). Because of the limited number of RRd stars in the GCVS (the RR(B) class), these were not taken into account either. For practical purposes, as far as their statistical parameters are concerned, these double-mode stars can be considered to be RRc stars. The known RR Lyrae stars in the GCVS were further limited to the constellations And to Ori, for which precise positions had been determined by the GCVS team at the time this study started. Many stars in other constellations did not have accurate enough coordinates, which could lead to misidentifications with the [*NSVS*]{} stars. It is, however, still possible that a faint RR Lyrae star unobservable by the [*ROTSE*]{} camera lies close to a brighter (constant) companion, leading to a false identification. Because of this, the success rate for finding an RR Lyrae star may be underestimated. 
On the other hand, especially at fainter magnitudes, some stars which should have been detectable will not have been registered, thus overestimating the success rate. With the above restrictions imposed, 582 [*NSVS*]{} objects were identified as GCVS RR Lyrae stars by their [HTM]{} identification [see @skydot]. [*NSVS*]{} synonyms, the same object observed in overlapping [*NSVS*]{} fields, have been counted separately here. This is also done for the remainder of this section, as it does not change the statistics very much. The further restriction that there needed to be at least 100 good data points limited the sample to 314 objects (273 RRab and 41 RRc), or 54% of the total number of stars identified. Compare this to the overall 42% of objects with at least 100 good points (8393519 out of a total of 19995106 [*NSVS*]{} objects). 60% of the GCVS stars that are on average brighter than magnitude 14 have more than 100 observations, and 68% if only objects north of the equator are counted. Skewness -------- RRab stars have a typical light curve, spending more time near minimum than near maximum, while RRc stars have more symmetric light curves. This asymmetry distinguishes them from eclipsing binaries, by far the most common type of variable star found in the [*NSVS*]{} database. As a result, the distribution of magnitudes of an RRab star shows a negative skewness, while those of an RR
--- abstract: 'Before the launch of the Rossi X-ray Timing Explorer ([*RXTE*]{}) it was recognized that neutron star accretion disks could extend inward to very near the neutron star surface, and thus be governed by millisecond timescales. Previous missions lacked the sensitivity to detect them. The kilohertz quasi-periodic oscillations (QPO) that [*RXTE*]{} discovered are often, but not always, evident in the X-ray flux. In 8 years [*RXTE*]{} has found kilohertz signals in about a fourth of 100 low-mass X-ray binaries (LMXB) containing neutron stars. The observed power spectra have simple dominant features: the two kilohertz oscillations, a low frequency oscillation, and band-limited white noise. They vary systematically with changes in other source properties and offer the possibility of comparison with model predictions. New information from the millisecond pulsars resolves some questions about the relations of the QPO and the spin. Coherence, energy spectrum and time lag measurements have indicated systematic behaviors, which should constrain mechanisms.' author: - Jean Swank bibliography: - 'swankj.bib' title: 'Quasi-Periodic Oscillations from Low-mass X-Ray Binaries with Neutron Stars' --- [ address=[Goddard Space Flight Center, Greenbelt, MD 20740]{} ]{} A Brief History of LMXB QPO =========================== Soon after the discoveries of Sco X–1 and Cyg X–2, it was realized that accretion onto a neutron star in a binary was a likely source of the X-ray emission. But while clear pulsations were seen in the flux from Her X–1, these sources exhibited no periodic signal. The possibility was raised that accretion over a long lifetime had spun up the neutron star to frequencies higher than could have been measured in the early observations [@alpar82]. Successive missions strove to increase their sensitivity to higher frequencies. [*EXOSAT*]{} and [*Ginga*]{} pushed the frontier to about 200 Hz. The world before [*RXTE*]{} is below 200 Hz. 
[*EXOSAT*]{} discovered timing signals, but quasi-periodic signals rather than the coherent clock of the neutron star. QPO were found in many X-ray sources in the galactic bulge. The frequencies varied in the 1-50 Hz range. [*EXOSAT*]{} proportional counter data provided spectral information at the same time. @HasvdK89 showed that the spectral variations fell into two categories, denoted “Z” and “Atoll”, and that the QPO frequencies depended on the source’s position in a plot of hard versus soft “colors” or energy ratios. The widths and amplitudes also varied in systematic ways. @Wijnands01 compiled the [*RXTE*]{} version of a figure summarizing the properties of both Z and Atoll sources. At first the frequency appeared to be positively correlated with the X-ray luminosity and a simple explanation was attractive: the magnetic beat frequency model [@alpar85]. The accretion flow through the disc should eventually be stopped by the neutron star's magnetosphere, but much closer to the star than in classical pulsars because the magnetic field is much weaker. The Kepler frequency of gas at the inner edge of the disk would beat with the spin frequency of the neutron star to cause brightness oscillations. Changing the accretion rate would change the magnetospheric radius, the Kepler frequency at the boundary, and thus the beat frequency. It implied spin rates of 50-350 Hz in several cases [@GL92]. However, the model was not a satisfactory fit to the data from several sources and there was evidence that the luminosity was not a good measure of the accretion rate. The character of bursts and their recurrence rate changed in 4U 1636–53 as it moved through the “Atoll” pattern [@vdK90], while the luminosity did not increase smoothly. In Cyg X–2 [@Has90; @Vrtilek90] and Sco X–1 [@Vrtilek91] UV emission decreased as the X-ray flux increased, while the magnetospheric beat frequency model implied it should increase [@Has90]. 
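The magnetic beat-frequency picture just described is straightforward to express numerically. The sketch below uses a Newtonian Kepler frequency; the mass, radius and spin values are illustrative assumptions, not fits to any source.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg

def kepler_frequency(r_m, mass_kg):
    """Newtonian orbital (Kepler) frequency in Hz at radius r_m (metres)."""
    return math.sqrt(G * mass_kg / r_m ** 3) / (2.0 * math.pi)

def beat_frequency(r_magnetosphere_m, spin_hz, mass_kg=1.4 * M_SUN):
    """Beat-frequency model: Kepler frequency at the inner disk edge minus spin."""
    return kepler_frequency(r_magnetosphere_m, mass_kg) - spin_hz

# e.g. a 1.4 solar-mass star, inner edge at 30 km, 300 Hz spin (illustrative)
print(beat_frequency(30e3, 300.0))  # ~118 Hz for these numbers
```

Shrinking the magnetospheric radius with increasing accretion rate raises the Kepler term while the spin stays fixed, which is the sense in which the model predicted QPO frequency should track luminosity.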
Nevertheless, coherent oscillations were sought [@Vaughan94] and upper limits of less than 0.5 % were achieved for frequencies below 200 Hz. The idea that the magnetic fields of the neutron stars are $10^{2} - 10^{4}$ times lower than the $10^{12}$ G of “classical” pulsars was advanced to explain the failure to detect the strongly channeled accretion flow that should show up as pulsations at the higher frequencies. Kluzniak and Wagoner saw that accretion disks around low magnetic field neutron stars could be very interesting if the equation of state of nuclear matter meant that neutron stars were inside the innermost stable orbit of orbiting material. The accretion disk could extend down to the innermost stable orbit and be truncated there rather than at the magnetosphere. A signal might even indicate the Kepler frequency of the innermost stable orbit [@Kluz85; @Kluz90]. These papers foresaw that signals bearing the imprint of General Relativity could come from these sources. In 1996, [*RXTE*]{} began observing, and the first observations of the Atoll source 4U 1728–34 by @Stroh96 and the Z source Sco X-1 by @vdK96 showed signals with frequencies in the range that orbits close to neutron stars would have. Figure 1 shows several of the important aspects of the kilohertz QPO discovered in the flux from 21 LMXB. As the count rates vary, the QPO center frequencies vary significantly compared to the widths of the features. The features in the Atoll and the Z source are very similar. The phenomena and the physical models that have been explored during the 8 yr since the discovery have been described in several review articles [See @vdK00; @Wijnands01]. Looking back at why [*RXTE*]{} could detect the signals while previous missions did not, the increase in sensitivity came from several factors. 
The number of sigmas of the detection of a QPO feature can be expressed as $$n_{\sigma} = \frac{1}{2}\,\frac{S^{2}}{S+B}\left(\frac{rms}{S}\right)^{2}\sqrt{\frac{T}{\Delta \nu}}$$ Here, $S$ is the source count rate, $B$ the background rate, $rms$ the root mean square variance in $S$, $T$ the duration of the observation and $\Delta \nu$ the width of the QPO feature. This sensitivity scales with the detector area. The PCA has observed with a maximum of 6250 cm$^2$, compared to [*EXOSAT*]{}’s 1600 cm$^2$; [*Ginga*]{} had 4000 cm$^2$ but did not detect kilohertz oscillations because of insufficient time resolution. Other factors - background, noise, dead time, low duty cycle of observations - have influenced the sensitivity to these phenomena. So far, [*RXTE*]{} has been the only instrument to detect them. Observed Characteristics of the Two KHz QPO =========================================== Frequency Range --------------- Low-mass X-ray binaries have a wide range in X-ray luminosity, from apparently exceeding the Eddington limit for a neutron star of 1.4 M$_{\odot}$ down to 0.5 % of it. Yet for sources at both extremes the frequencies observed for the upper of the two kilohertz oscillations range from approximately 300 Hz to 1100 Hz. This was apparent early in the exploration of QPOs and remains true now [@SSZ98; @vdK00]. (The highest frequency, although only 2.6 $\sigma$, is 1330 Hz for 4U 0614+09 [@vdK00; @Straaten00]). @Zhang97 deduced from this that the frequency must depend only on properties of the neutron star, independent of the mass accretion rate. It could either be the radius of the neutron star or the radius of the innermost stable circular orbit (ISCO). It seemed more likely to be the ISCO than the radius, in that surface behavior would be more likely to depend on the accretion rate. @Kaaret97 also pointed out that if the ISCO was responsible for the peak frequency, it was a test of General Relativity. 
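The detection-significance formula earlier in this section can be evaluated directly. The count rates, rms amplitude and exposure below are made-up illustrative values, not measurements of any source.

```python
import math

def n_sigma(S, B, rms_frac, T, dnu):
    """Detection significance of a QPO feature.

    S        : source count rate (counts/s)
    B        : background count rate (counts/s)
    rms_frac : fractional rms amplitude of the QPO (rms/S)
    T        : observation duration (s)
    dnu      : QPO width (Hz)
    """
    return 0.5 * (S ** 2 / (S + B)) * rms_frac ** 2 * math.sqrt(T / dnu)

# Illustrative numbers: a bright source, a weak 1% rms QPO, a 10 ks exposure.
print(n_sigma(S=5000.0, B=100.0, rms_frac=0.01, T=10000.0, dnu=10.0))  # ~7.8
```

Because $S$ scales with collecting area, doubling the area (doubling both $S$ and $B$) doubles $n_{\sigma}$, which is the sense in which the PCA's larger effective area made these weak signals detectable.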
As the number of sources accumulated, @Ford00 showed that this independence of the frequency range from the luminosity continued to hold, using fits to the simultaneous spectral data to determine more accurate luminosities. Figure 2 shows that there is a slight trend for the lower luminosity bursters to exhibit highest frequencies a little higher than those of the Z sources at high luminosity. Interestingly, none of the bright Atoll sources (e.g. GX 3+1, GX 9+9, GX 9+1, GX 13+1) have shown oscillations. They fill in the luminosity range between the brightest of the bursters, 4U 1820–30, and the Z sources. [![Frequencies of upper kilohertz oscillations for LMXB over the range of luminosities. These values are for a subset of all the observations. The highest frequency points for 4U 1636-53 are of disputed significance.[]{data-label="fig:2"}](swankj_f2.ps "fig:")]{} We now believe we know the rotation period P for 16 of the LMXB which either have coherent oscillations or have oscillations during bursts. For some of these we have a good estimate of the distance and therefore the X-ray luminosity, L$_X$. The accretion rate onto the neutron star could be through the disk or from a corona, so that for the accretion rate in the disk, $dM/dt \leq L_X/(GM/R)$. If we suppose, as did @White97, that adiabatic
--- abstract: 'In this research, we apply ensembles of Fourier encoded spectra to capture and mine recurring concepts in a data stream environment. Previous research showed that compact versions of Decision Trees can be obtained by applying the Discrete Fourier Transform to accurately capture recurrent concepts in a data stream. However, in highly volatile environments where new concepts emerge often, the approach of encoding each concept in a separate spectrum is no longer viable due to memory overload and thus in this research we present an ensemble approach that addresses this problem. Our empirical results on real world data and synthetic data exhibiting varying degrees of recurrence reveal that the ensemble approach outperforms the single spectrum approach in terms of classification accuracy, memory and execution time.' author: - 'Sripirakas Sakthithasan^\*^ , Russel Pears^\*^, Albert Bifet^\#^ and Bernhard Pfahringer^\#^' title: Use of Ensembles of Fourier Spectra in Capturing Recurrent Concepts in Data Streams --- Introduction ============ In many real world applications, patterns or concepts recur over time. Machine learning applications that model, capture and recognize concept re-occurrence gain significant efficiency and accuracy advantages over systems that simply re-learn concepts each time they re-occur. When such applications include safety and time critical requirements, the need for concept re-use to support decision making becomes even more compelling. Auto-pilot systems sense environmental changes and take appropriate action (classifiers, in the supervised machine learning context) to avoid disasters and to fly smoothly. As environmental conditions change, appropriate actions must be taken in the shortest possible time in the interest of safety. 
Thus for example, a situation that involves the occurrence of a sudden low pressure area coupled with high winds (a concept that would be captured by a classifier) would require appropriate action to keep the aircraft on a steady trajectory. A machine learning system that is coupled to a flight simulator can learn such concepts in the form of classifiers and store them in a repository for timely re-use when the aircraft is on live flying missions. In live flying mode the autopilot system can quickly re-use the stored classifiers when such situations re-occur. Additionally, in live flying mode, new potentially hazardous situations not experienced in simulator mode can also be learned and stored as classifiers in the repository for future use. In a real world setting, there is an abundance of applications that exhibit such recurring behavior, such as stock and sales applications where timely decision making results in improved productivity. Our research setting is a data stream environment where we seek to capture concepts as they occur, store them in highly compressed form in a repository, and re-use such concepts for classification when the need arises in the future. A number of challenges need to be overcome. Firstly, a compression scheme that captures concepts using minimal storage is required: in a highly volatile, high-dimensional environment, memory overhead will be a prime concern, as the number of concepts will grow continuously in time given the unbounded nature of data streams. Secondly, in real-world environments, concepts rarely, if ever, occur in exactly their original form, so a mechanism is needed to recognize partial re-occurrence of concepts. Thirdly, the concept encoding scheme needs to be efficient in order to support high speed data stream environments. In order to meet the above challenges, we extend the work proposed in [@sak:mrc] in a number of ways. 
In [@sak:mrc] concepts were initially captured using decision trees and the Discrete Fourier Transform (DFT) was applied to encode them into spectra yielding compressed versions of the original decision trees. Firstly, instead of encoding each concept using its own Fourier spectrum, we use an ensemble approach to aggregate individual spectra into a single unified spectrum. This has two advantages, the first of which is reduction of memory overhead. Memory is further reduced as Fourier coefficients that are common between different spectra can be combined into a single coefficient, thus eliminating redundancy. The second advantage arises from the use of an ensemble: new concepts that manifest as a combination of previously occurring concepts already present in the ensemble have a higher likelihood of being recognized, resulting in better accuracy and stability over large segments of the data stream. Secondly, we devise an efficient scheme for spectral energy thresholding that directly controls the degree of compression that can be obtained in encoding concepts in the repository. Thirdly, we optimize the DFT encoding process by removing the need for computing a potentially expensive inner product operation on vectors. Related Research {#sec:relatedresearch} ================ While a vast literature on concept drift detection exists [@pea:dci], only a small body of work exists so far on exploitation of recurrent concepts. The methods that exist fall into two broad categories. Firstly, methods that store past concepts as models and then use a meta-learning mechanism to find the best match when a concept drift is triggered [@joa:trc], [@gom:trc]. Secondly, methods that store past concepts as an ensemble of classifiers. The method proposed in this research belongs to the second category where ensembles remember past concepts. An algorithm called REDDLA is presented in [@pli:mrc]. This algorithm is designed to handle recurring concepts with unlabeled data instances. 
One of the key issues is that explicit domain knowledge about the concept recurrence interval is required. The other issue is high memory overhead. Lazarescu in [@laz:aml] proposed an evidence forgetting mechanism based on a multiple window approach and a prediction module that adapts classifiers based on an estimate of the future rate of change. Whenever the difference between the observed and estimated rates of change is above a threshold, a classifier that best represents the current concept is stored in a repository. Experimentation on the STAGGER data set showed that the proposed approach outperformed the FLORA method on classification accuracy when previous concepts re-emerged in the stream. Ramamurthy and Bhatnagar [@ram:trc] use an ensemble approach based on a set of classifiers in a global set G. An ensemble of classifiers is built dynamically from a collection of classifiers in G if none of the existing individual classifiers is able to meet a minimum accuracy threshold based on a user defined acceptance factor. Whenever the ensemble accuracy falls below the accuracy threshold, G is updated with a new classifier trained on the current chunk of data. Another ensemble based approach by Katakis et al. is proposed in [@kat:aeo]. A mapping function is applied to data stream instances to form conceptual vectors which are then grouped together into a set of clusters. A classifier is incrementally built on each cluster and an ensemble is formed from the set of classifiers. Experimentation on the Usenet data set showed that the ensemble approach produced better accuracy than a simple incremental version of the Naive Bayes classifier. Gomes et al. [@gom:trc] used a two layer approach with the first layer consisting of a set of classifiers trained on the current concept, while the second layer contains classifiers created from past concepts. A concept drift detector flags when a warning state is triggered and incoming data instances are buffered to prepare a new classifier. 
If the number of instances in the warning window is below a threshold, the classifier in layer 1 is used instead of re-using classifiers in layer 2. One major issue with this method is the validity of the assumption that explicit contextual information is available in the data stream. Gama and Kosina also proposed a two layered system in [@joa:trc] which is designed for delayed labelling, similar in some respects to the Gomes et al. [@gom:trc] approach. In their approach, Gama and Kosina pair a base classifier in the first layer with a referee in the second layer. Referees learn the regions of feature space where their corresponding base classifier predicts accurately and are thus able to express a level of confidence in their base classifier with respect to a newly generated concept. The base classifier which receives the highest confidence score is selected, provided that it is above a user defined hit ratio parameter; if not, a new classifier is learnt. Just-in-Time classifiers are the solution proposed by Alippi et al. [@ali:jit] to deal with recurrent concepts. Concept change detection is carried out on the classification accuracy as well as by observing the distribution of input instances. The drawback is that this model is designed for abrupt drifts and is weak at handling gradual changes. Recently, Sakthithasan and Pears in [@sak:mrc] used the Discrete Fourier Transform (DFT) to encode decision trees into a highly compressed form for future use. They showed that DFT encoding is very effective in improving classification accuracy, memory usage and processing time in general. It maintains a pool of Fourier spectra and a decision tree forest in parallel. The decision tree forest dominates the model when none of the existing Fourier spectra matches the current concept; otherwise classification is done by the best performing Fourier spectrum. 
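The spectral energy thresholding this paper proposes for controlling compression can be sketched as follows. This is our own illustration, not the authors' exact procedure: retain the highest-energy coefficients until a target fraction of the total spectral energy is reached, and store the result sparsely.

```python
import numpy as np

def threshold_spectrum(coeffs, energy_fraction=0.95):
    """Keep the largest-magnitude Fourier coefficients whose cumulative
    energy reaches `energy_fraction` of the total.

    coeffs: 1-D array of Fourier coefficients -> sparse dict {index: coeff}.
    """
    energy = np.abs(coeffs) ** 2
    order = np.argsort(energy)[::-1]          # largest energy first
    cumulative = np.cumsum(energy[order])
    k = int(np.searchsorted(cumulative, energy_fraction * energy.sum())) + 1
    return {int(i): coeffs[i] for i in order[:k]}

spectrum = np.array([4.0, 0.1, -2.0, 0.05, 0.02])
compact = threshold_spectrum(spectrum, 0.95)
print(sorted(compact))  # indices of retained coefficients: [0, 2]
```

The energy fraction directly trades storage for fidelity, and a sparse index-to-coefficient map also makes it cheap to merge spectra in an ensemble, since coefficients sharing an index can be combined into one entry.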
Application of the Discrete Fourier Transform on Decision Trees {#sec:dftapplication}
===============================================================

The Discrete Fourier Transform (DFT) has a vast area of application in diverse domains such as time series analysis, signal processing, image processing and so on. It turns out, as Park [@par:kdf] and Kargupta et al. [@hhil:afs] show, that the DFT is very effective in terms of classification when applied to a decision tree model. Kargupta et al. [@hhil:afs], working in the domain of distributed data mining, showed that the Fourier spectrum fully captures a decision tree in algebraic form, meaning that the Fourier representation preserves the same classification power as the original decision tree.

Transforming Decision Tree into Fourier Spectrum
------------------------------------------------

A decision tree can be represented in compact algebraic form by applying the DFT to the paths of the tree. Each Fourier coefficient $\omega_{j}$ is given by: $$\\ \small \label
---
author:
- |
  O. L. Creevey, T. S. Metcalfe, M. Schultheis, D. Salabert,\
  M. Bazot, F. Thévenin, S. Mathur, H. Xu, R. A. García
bibliography:
- 'seismic.bib'
date: 'Received ; accepted'
title: 'Characterizing solar-type stars from full-length *Kepler* data sets using the Asteroseismic Modeling Portal'
---

Introduction\[sec1\]
====================

---------- ------------------ --------------------- ------------------- ------------------- ------------------ ------
KIC ID     $T_{\rm eff}$      [\[M/H\]]{}           $K_s$               $A_{K_S}$           P$_{\rm ROT}$      Ref.
           (K)                (dex)                 (mag)               (mag)               (days)
1435467    6326 $\pm$ 77      $+$0.01 $\pm$ 0.10    7.718 $\pm$ 0.009   0.011 $\pm$ 0.004   6.68 $\pm$ 0.89    1,A
2837475    6614 $\pm$ 77      $+$0.01 $\pm$ 0.10    7.464 $\pm$ 0.023   0.008 $\pm$ 0.002   3.68 $\pm$ 0.36    1,A
3427720    6045 $\pm$ 77      $-$0.06 $\pm$ 0.10    7.826 $\pm$ 0.009   0.020 $\pm$ 0.019   13.94 $\pm$ 2.15   1,B
3656476    5668 $\pm$ 77      $+$0.25 $\pm$ 0.10    8.008 $\pm$ 0.014   0.022 $\pm$ 0.050   31.67 $\pm$ 3.53   1,A
3735871    6107 $\pm$ 77      $-$0.04 $\pm$ 0.10    8.477 $\pm$ 0.016   0.018 $\pm$ 0.027   11.53 $\pm$ 1.24   1,A
4914923    5805 $\pm$ 77      $+$0.08 $\pm$ 0.10    7.935 $\pm$ 0.017   0.017 $\pm$ 0.029   20.49 $\pm$ 2.82   1,A
5184732    5846 $\pm$ 77      $+$0.36 $\pm$ 0.10    6.821 $\pm$ 0.005   0.012 $\pm$ 0.007   19.79 $\pm$ 2.43   1,A
5950854    5853 $\pm$ 77      $-$0.23 $\pm$ 0.10    9.547 $\pm$ 0.017   0.002 $\pm$ 0.004                      1
6106415    6037 $\pm$ 77      $-$0.04 $\pm$ 0.10    5.829 $\pm$ 0.017   0.003 $\pm$ 0.020                      1
6116048    6033 $\pm$ 77      $-$0.23 $\pm$ 0.10    7.121 $\pm$ 0.009   0.013 $\pm$ 0.020   17.26 $\pm$ 1.96   1,A
6225718    6313 $\pm$ 76      $-$0.07 $\pm$ 0.10    6.283 $\pm$ 0.011   0.003 $\pm$ 0.001                      1
6603624    5674 $\pm$ 77      $+$0.28 $\pm$ 0.10    7.566 $\pm$ 0.019   0.008 $\pm$ 0.008                      1
6933899    5832 $\pm$ 77      $-$0.01 $\pm$ 0.10    8.171 $\pm$ 0.015   0.023 $\pm$ 0.017                      1
7103006    6344 $\pm$ 77      $+$0.02 $\pm$ 0.10    7.702 $\pm$ 0.015   0.007 $\pm$ 0.010   4.62 $\pm$ 0.48    1,A
7106245    6068 $\pm$ 102     $-$0.99 $\pm$ 0.19    9.419 $\pm$ 0.006   0.015 $\pm$ 0.029                      4
7206837    6305 $\pm$ 77      $+$0.10 $\pm$ 0.10    8.575 $\pm$ 0.011   0.004 $\pm$ 0.005   4.04 $\pm$ 0.28    1,A
7296438    5775 $\pm$ 77      $+$0.19 $\pm$ 0.10    8.645 $\pm$ 0.009   0.012 $\pm$ 0.018   25.16 $\pm$ 2.78   1,A
7510397    6171 $\pm$ 77      $-$0.21 $\pm$ 0.10    6.544 $\pm$ 0.009   0.018 $\pm$ 0.010                      1
7680114    5811 $\pm$ 77      $+$0.05 $\pm$ 0.10    8.673 $\pm$ 0.006   0.011 $\pm$ 0.013   26.31 $\pm$ 1.86   1,A
7771282    6248 $\pm$ 77      $-$0.02 $\pm$ 0.10    9.532 $\pm$ 0.010   0.005 $\pm$ 0.001   11.88 $\pm$ 0.91   1,A
7871531    5501 $\pm$ 77      $-$0.26 $\pm$ 0.10    7.516 $\pm$ 0.017   0.023 $\pm$ 0.021   33.72 $\pm$ 2.60   1,A
7940546    6235 $\pm$ 77      $-$0.20 $\pm$ 0.10    6.174 $\pm$ 0.011   0.023 $\pm$ 0.009   11.36 $\pm$ 0.95   1,A
7970740    5309 $\pm$ 77      $-$0.54 $\pm$ 0.10    6.085 $\pm$ 0.011   0.003 $\pm$ 0.013   17.97 $\pm$ 3.09   1,A
8006161    5488 $\pm$ 77      $+$0.34 $\pm$ 0.10    5.670 $\pm$ 0.015   0.009 $\pm$ 0.006   29.79 $\pm$ 3.09   1,A
8150065    6173 $\pm$ 101     $-$0.13 $\pm$ 0.15    9.457 $\pm$ 0.014   0.010 $\pm$ 0.013                      4
8179536    6343 $\pm$ 77      $-$0.03 $\pm$ 0.10    8.278 $\pm$ 0.009   0.005 $\pm$ 0.016   24.55 $\pm$ 1.61   1,A
8379927    6067 $\pm$ 120     $-$0.10 $\pm$ 0.15    5.624 $\pm$ 0.011   0.004 $\pm$ 0.012   16.99 $\pm$ 1.35   2,A
8394589    6143 $\pm$ 77      $-$0.29 $\pm$ 0.10    8.226 $\pm$ 0.016   0.013 $\pm$ 0.010                      1
8424992    5719 $\pm$ 77      $-$0.12 $\pm$ 0.10    8.843 $\pm$ 0.011   0.016 $\pm$ 0.018                      1
8694723    6246 $\pm$ 77      $-$0.42 $\pm$ 0.10    7.663 $\pm$ 0.007   0.003 $\pm$ 0.001                      1
8760414    5873 $\pm$ 77      $-$0.92 $\pm$ 0.10    8.173 $\pm$ 0.009   0.016 $\pm$ 0.012                      1
8938364    5677 $\pm$ 77      $-$0.13 $\pm$ 0.10    8.636 $\pm$ 0.016   0.003 $\pm$ 0.009                      1
9025370    5270 $\pm$ 180     $-$0.12 $\pm$ 0.18    7.372 $\pm$ 0.025   0.041 $\pm$ 0.030                      3
9098294    5852 $\pm$ 77      $-$0.
---
author:
- Maximilien Pindao
- Daniel Schaerer
- 'Rosa M. González Delgado'
- Grażyna Stasińska
date: 'Received 20 June 2002 / Accepted 12 August 2002'
title: 'VLT observations of metal-rich extragalactic HII regions. I. Massive star populations and the upper end of the IMF [^1] '
---

Introduction {#s_intro}
============

Wolf-Rayet (WR) stars are the descendants of the most massive stars. Although they live only a short time (Maeder & Conti 1994), these stars have been detected in young stellar systems, such as extragalactic HII regions (Kunth & Schild 1986) and the so-called WR galaxies (Conti 1991, Schaerer  1999b). They are recognized by the presence of broad stellar emission lines at optical wavelengths, mainly at 4680 Å (known as the blue WR bump) and at 5808 Å (the red WR bump). The blue bump is a blend of the N [v]{} $\lambda\lambda$4604,4620, N [iii]{} $\lambda\lambda$4634,4641, C [iii/iv]{} $\lambda\lambda$4650,4658 and [He [ii]{} $\lambda$4686]{} lines, which are produced in WR stars of the nitrogen (WN) and carbon (WC) sequences. In contrast, the red bump is formed only by [C [iv]{} $\lambda$5808]{} and is mainly produced by WC stars. The detection of these features in the integrated spectrum of a stellar system provides a powerful tool to date the onset of the burst, and it constitutes the best direct measure of the upper end of the initial mass function (IMF). Thus, if WR features are found in the spectra of star-forming systems, stars more massive than $M_{\rm WR}$, where $M_{\rm WR} \sim$ 25  for solar metallicity, must have been formed in the burst. The IMF is one of the fundamental ingredients for studies of stellar populations and has an important bearing on many astrophysical studies ranging from cosmology to the understanding of the local Universe.
In particular, the value of the IMF slope and the upper mass cut-off () strongly influence the mechanical, radiative, and chemical feedback from massive stars to the ISM, such as the UV light, the ionizing radiation field, and the production of heavy elements. A picture of a universal IMF has emerged from numerous works performed in the last few years (e.g. Gilmore & Howell 1998 and references therein). Indeed, these studies derive a slope of the IMF close to the Salpeter value for a mass range between 5 and 60 . This result seems to hold for a variety of objects and metallicities, from very metal-poor up to solar metallicity, with the possible exception of a steeper field IMF (Massey  1995, Tremonti  2002). However, the IMF in high-metallicity (12+log (O/H) $\ga$ (O/H)$_\odot \approx$ 8.92) systems is much less well constrained. Different indirect methods to derive the slope and  give contradictory results. The detection of strong wind resonance UV lines in the integrated spectrum of high-metallicity nuclear starbursts clearly indicates the formation of massive stars (Leitherer 1998; Schaerer 2000; González Delgado 2001). In contrast, the analysis of the nebular optical and infrared lines of IR-luminous galaxies and high-metallicity [H [ii]{}]{} regions indicates a softness of the ionizing radiation field that has been interpreted as due to the lack of stars more massive than $\sim$ 30  (Goldader  1997; Bresolin  1999; Thornley  2000; Coziol  2001). However, the interpretation of these indirect probes relies strongly on a combination of models for stellar atmospheres and interiors, evolutionary synthesis, and photoionisation, each with several potential shortcomings/difficulties (cf. García-Vargas 1996, Schaerer 2000, Stasińska 2002).
For example, González Delgado  (2002) have recently shown that the above conclusion could be an artifact of the failure of WR stellar atmosphere models to correctly predict the ionizing radiation field of high-metallicity starbursts (see also Castellanos 2001, Castellanos  2002b). A more direct investigation of the stellar content of metal-rich nuclear starbursts has been performed by Schaerer  (2000, hereafter SGIT00), using the detection of WR features to constrain . They found that the observational data are compatible with a Salpeter IMF extending to masses  $\ga$ 40 . Most recently, a similar conclusion has been obtained by Bresolin & Kennicutt (2002, hereafter BK02) from observations of high-metallicity HII regions in M83, NGC 3351 and NGC 6384. Here, we present a direct attempt to determine  based on the detection of WR features in metal-rich [H [ii]{}]{} regions of a sample of spiral galaxies. To obtain statistically significant conclusions about  and the slope of the IMF, a large sample of [H [ii]{}]{} regions needs to be observed. For coeval star formation with a Salpeter IMF and =120  at metallicities above solar, $\sim$ 60 to 80 % (depending on the evolutionary scenario and age of the region) of the [H [ii]{}]{} regions are expected to exhibit WR signatures (Meynet 1995; Schaerer & Vacca 1998, hereafter SV98). Thus, to find $\ga$ 40 regions with WR stars (our initial aim), a sample of at least 5-7 galaxies with $\ga$ 10 [H [ii]{}]{} regions per galaxy needs to be observed. Spectra of high S/N (at least 30) in the continuum are also required to obtain an accurate measure of the WR features. For this purpose, we have selected the nearby spiral galaxies NGC 3351, NGC 3521, NGC 4254, NGC 4303 and NGC 4321, which have a sufficient number of high-metallicity disk [H [ii]{}]{} regions, as known from earlier studies. Our observations have indeed allowed us to find a large number of metal-rich WR [H [ii]{}]{} regions.
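The sample-size reasoning above can be made explicit with back-of-the-envelope arithmetic; the 60-80 % WR-detection fraction is the model expectation cited from Meynet (1995) and SV98, and the helper below is simply our illustrative restatement of it, assuming each region independently shows WR features.

```python
def expected_wr_regions(n_galaxies, regions_per_galaxy, wr_fraction):
    """Expected number of HII regions showing WR signatures, assuming each
    region independently exhibits WR features with probability wr_fraction."""
    return n_galaxies * regions_per_galaxy * wr_fraction

# 5 galaxies of 10 regions each just reaches the ~40-region target at the
# optimistic fraction, while 7 galaxies exceed it even at the pessimistic one.
low = expected_wr_regions(5, 10, 0.8)    # 40.0
high = expected_wr_regions(7, 10, 0.6)   # 42.0
```

This is why a sample of at least 5-7 galaxies with about ten regions each is the minimum consistent with the initial aim.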
The analysis of their massive star content is the main aim of the present paper. Quite independently of the detailed modeling undertaken below, our sample combined with additional WR regions from Bresolin & Kennicutt (2002) allows us to derive a fairly robust [*lower limit*]{} on the upper mass cut-off of the IMF in these metal-rich environments (see Sect. \[s\_imf\]). The structure of the paper is as follows: The sample selection, observations and data reduction are described in Sect. \[s\_obs\]. The properties of the [H [ii]{}]{} regions are derived in Sect. \[s\_props\]. Section \[s\_wroh\] discusses the trends of the WR populations with metallicity. Detailed comparisons of the observed WR features with the evolutionary synthesis models are presented in Sect. \[s\_models\]. More model-independent constraints on  are derived in Sect. \[s\_imf\]. Our main results and conclusions are summarised in Sect. \[s\_conclude\].

Sample selection, observations and reduction {#s_obs}
============================================

---------- ----------------------- ------------------ ------------------ ----------------- ----------
Galaxy     NED type and activity   $\alpha$ (J2000)   $\delta$ (J2000)   $v_r$             distance
                                                                         \[km s$^{-1}$\]   \[Mpc\]
NGC 3351   SB(r)b, HII Sbrst       10h43m57.8s        +11d42m14s         778               10.0
NGC 3521   SAB(rs)bc, LINER        11h05m48.6s        -00d02m09s         805               7.2
NGC 4254   SA(s)c                  12h18m49.5s        +14d24m59s         2407              16.
NGC 4303   SAB(rs)bc, HII Sy2      12h21m54.9s        +04d28m25s         1566              16.
NGC 4321   SAB(s)bc, LINER HII     12h22m54.9s        +15d49m21s         1571              15.21
---------- ----------------------- ------------------ ------------------ ----------------- ----------

Selection of the HII regions
----------------------------

Our target galaxies (see Table \[tab\_sample\]) are selected among nearby spiral galaxies where a sufficient number of disk [H [ii]{}]{} regions of high metallicity are known from the previous studies of Shields et al. (1991), Oey & Kennicutt (1993), and Zaritsky et al. (1994).
Inspection of spectra from the latter two studies, kindly made available to us, showed that the vast majority of their spectra are not deep enough to allow the detection of WR or other stellar signatures in the continuum. Metallicities $12 + \log({\rm O/H})$ of all known regions were estimated from the published  and  intensities using the
---
author:
- 'E. Moraux'
- 'C. Clarke'
title: Kinematics of stars and brown dwarfs at birth
---

Introduction
============

So far, most studies of the star formation process have dealt with the formation of stellar systems, but with the discovery of brown dwarfs in 1995 (Nakajima et al. 1995; Rebolo et al. 1995) new perspectives have opened regarding the formation of condensed objects in molecular clouds. Today more than a hundred brown dwarfs (BDs) in various environments are known. However, their mode of formation is still controversial and the theoretical framework describing the stellar and substellar formation process(es) is not completely satisfactory. Regions of high density ($n(H_{2})\sim 10^{7}$ cm$^{-3}$) in molecular clouds are needed to form proto-[brown dwarfs ]{}but then, for their mass to remain substellar, their reservoir of gas has to be small or the accretion not very efficient. Two main competing scenarios have been proposed so far to account for the formation of substellar objects. One assumes that brown dwarfs form like solar-mass stars, by gravitational collapse of small, dense molecular cloud cores and subsequent accretion. The supporting argument is that in the opacity-limited regime the Jeans mass can be as low as a few Jupiter masses (Low & Lynden-Bell 1976). The alternative view assumes that brown dwarfs are ejected “stellar embryos”, as proposed by Reipurth & Clarke (2001). In this scenario, molecular cloud cores fragment to form unstable protostellar multiple systems which decay dynamically. The lowest-mass fragments are ejected from their birth place and, deprived of surrounding gas to accrete, remain substellar objects. The brown dwarf properties predicted by these two different formation scenarios may in principle be quite different.
In the former case, both stars and brown dwarfs form predominantly as single or binary ($N=2$) systems; in this case there are no obvious reasons why properties such as the binary fraction or kinematics should depend on mass. In the latter scenario, by contrast, the dominant formation mechanism (again for both stars and brown dwarfs) is in small-$N$ ($N>2$) clusters, and the gravitational interplay that precedes the break-up of the system into stable entities implies a potentially strong mass dependence for resulting properties like the binary fraction and kinematics. In particular it has been suggested (Reipurth and Clarke 2001) that low mass objects (e.g. brown dwarfs) ejected from such clusters would have a higher velocity dispersion than higher mass objects. Reipurth and Clarke’s initial suggestion - that brown dwarfs in star forming regions may have a detectably higher velocity dispersion than stars - has [*not*]{} been borne out by radial velocity studies (Joergens & Guenther 2001). Meanwhile, successive simulations have modified the predictions of small-$N$ cluster models. Delgado et al. (2003) and Sterzik and Durisen (2003) have emphasised that, in their simulations, the main difference in velocity dispersion is between single stars and binaries, and that brown dwarfs attain rather larger velocities - with respect to their parent cores - because they are more likely to be ejected as single objects. This dependence of ejection speed on binarity may readily be understood, since one binary is typically formed in each cluster in these simulations: this binary is able to eject the remaining stars from the cluster by sling-shot gravitational encounters, whilst itself remaining close to the center of mass of the natal cluster. In the turbulent fragmentation calculations of Bate, Bonnell and Bromm (2003) and Delgado, Clarke and Bate (2004), by contrast, more than one binary is formed per cluster and so binaries are able to eject each other from the natal cluster.
Consequently, in these simulations, the kinematics of the resulting objects do not depend strongly on either mass [*or*]{} binarity. Evidently, the relative kinematics of stars and brown dwarfs and of single stars and binaries can shed some light on the conditions in star forming cores and could ultimately answer the question of whether stars (and brown dwarfs) are formed as isolated single and binary systems, as small-$N$ aggregates containing typically one binary, or as aggregates containing more than one binary. \[Note that this question is not easy to answer by direct observations, since the timescale for the break-up of putative small clusters implies that this process occurs in the deeply embedded phase. However, high resolution imaging of the driving sources of Herbig-Haro objects by Reipurth (2000) suggests that the multiplicity of stars in deeply embedded regions is indeed high.\] Direct observations of the kinematics of young stars and brown dwarfs are unlikely to be fruitful, however. The differences in velocity dispersion predicted by theoretical models are small (of the order of a km/s). When one bears in mind that these velocities are measured with respect to star forming cores, which are themselves in relative motion at $\sim1$ km/s, it is unsurprising that the study of Joergens and Guenther (2001) - involving small numbers of objects, with velocity resolution of $\sim0.2$ km/s and a rather small dynamic range in mass - did not detect any differences. In this paper, we propose another approach that could potentially detect any mass dependence of the kinematics of stars (and brown dwarfs) at birth. Here we examine the statistical consequences of such an effect on the spatial distribution of stars and brown dwarfs in clusters. This approach has the advantage that one can work with large samples of stars and brown dwarfs, whose positions and masses are known with high accuracy.
On the other hand, we cannot predict how initial variations in velocity dispersion affect the spatial distributions of stars and brown dwarfs in a cluster at a given age without further N-body modeling. This is partly because two-body relaxation leads to mass segregation in older clusters, even in the absence of a mass-dependent initial velocity dispersion. Our purpose in this paper, therefore, is to use ‘toy’ N-body models (in which brown dwarfs are introduced with a velocity dispersion that is a variable multiple of the stellar velocity dispersion) in order to establish under what circumstances one could detect a higher velocity dispersion at birth for low mass objects. We stress that these toy models for the kinematics are not supposed to correspond to the outcome of any particular numerical star formation model but are designed to provide a ready parameterization of the problem. We also underline that in no model are any sudden discontinuities in kinematic properties expected at the hydrogen burning mass limit. We use the Pleiades as the testbed for our calculations. This is because the brown dwarf population of the Pleiades has been the subject of intensive scrutiny in recent years (Moraux et al. 2003, Dobbie et al. 2002, Pinfield et al. 2000, Zapatero-Osorio et al. 1999, Bouvier et al. 1998), so that the present-day mass function in this cluster is reasonably well constrained. We shall proceed by first placing an upper limit on the initial velocity dispersion of brown dwarfs in the Pleiades, based on the broad similarity between the relative numbers of stars and brown dwarfs in the Pleiades and in the field, which limits the number of brown dwarfs that can have left the cluster to date. We shall then explore whether the radial distribution of brown dwarfs in the cluster can place meaningful limits on their initial velocity distribution.
Numerical simulations
=====================

We performed numerical simulations of the dynamical evolution of a Pleiades-like cluster using the code [Nbody2]{} (Aarseth 2001) on a Sun workstation. This code is an algorithm for direct integration of the N-body problem based on the neighbour scheme of Ahmad & Cohen (1973), and it employs a softened potential $\phi$ of the form $$\phi = -\frac{m}{(r^{2}+\epsilon^{2})^{1/2}}$$ to reduce the effects of close encounters. The cluster model we used is defined as follows. At time $t=0$ the stellar density $n(r,t)$ conforms to Plummer’s model $$n(r,0) = \frac{3}{4\pi r_{0}^{3}} N \left[ 1 + \left( \frac{r}{r_{0}} \right)^{2} \right]^{-5/2}$$ where $N=1900$ is the number of cluster members and $r_{0}=2.2$ pc is a scale factor determining the dimensions of the cluster. It is related to the half-mass radius $r_{h}$ by $r_{h}\simeq1.3\, r_{0}=2.86$ pc (Aarseth & Fall 1980). This leads to an overall initial central density $n(0,0)=42.6$ objects/pc$^{3}$. Initially, the stellar population is assumed to be in virial equilibrium with an everywhere isotropic velocity distribution (cf. Aarseth, Hénon & Wielen 1974 for a practical scheme for the generation of the initial positions and velocities). Note that the true initial conditions of a cluster are not very well known and are likely to be very complex. Isothermal models are sometimes preferred to describe the initial density states of open clusters; however, Plummer models are also used and have already been shown to reproduce the Pleiades cluster reasonably well (Kroupa et al. 2001). In section \[radii\] we compare our results to observational data and find reasonable agreement. The system is assumed to be isolated. No external potential is included, but any object which reaches the cluster tidal radius would in reality be stripped by the galactic tide and lost from the cluster.
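The initial conditions above can be checked and realized numerically. The sketch below uses our own helper functions (not part of Nbody2): it reproduces the quoted central density of 42.6 objects/pc$^{3}$ for $N=1900$ and $r_{0}=2.2$ pc, and samples radii by inverse-transform sampling of Plummer's enclosed-mass fraction, whose median sits near the half-mass radius $r_{h}\simeq1.3\,r_{0}$.

```python
import math
import random

def plummer_central_density(N, r0):
    """Central number density of a Plummer sphere: n(0,0) = 3N / (4*pi*r0^3)."""
    return 3 * N / (4 * math.pi * r0 ** 3)

def sample_plummer_radius(r0, rng=random):
    """Draw a radius from the Plummer profile by inverting the enclosed-mass
    fraction m(r) = (r/r0)^3 [1 + (r/r0)^2]^(-3/2), which gives
    r = r0 / sqrt(u^(-2/3) - 1) for u uniform in (0, 1)."""
    u = rng.random()
    return r0 / math.sqrt(u ** (-2.0 / 3.0) - 1.0)

# Parameters of the paper: N = 1900 members, r0 = 2.2 pc.
n0 = plummer_central_density(1900, 2.2)  # ~42.6 objects per cubic parsec
```

Setting $u=0.5$ in the sampler returns the half-mass radius directly, which is a convenient sanity check on the inversion.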
For simplicity, and in order to focus on how the initial kinematics affects the spatial distribution of the cluster population, our model does not include gas. We assume that the gas has already left the cluster when we start the
---
abstract: |
  We propose to describe higher spins as invariant subspaces of the Casimir operators of the Poincaré Group, $P^{2}$, and the squared Pauli-Lubanski operator, $W^{2}$, in a properly chosen representation, $\psi (\mathbf{\mathbf{p}})$ (in momentum space), of the Homogeneous Lorentz Group. The resulting equation of motion for any field with $s\neq0$ is then just a specific combination of the respective covariant projectors. We couple electromagnetism minimally to this equation and show that the corresponding wave fronts of the classical solutions propagate causally. Furthermore, for $(s,0)\oplus(0,s)$ representations, the formalism predicts the correct gyromagnetic factor, $g_{s}=\frac{1}{s}$. The advocated method allows one to describe any higher spin without auxiliary conditions and by one covariant matrix equation alone. This master equation is only quadratic in the momenta and its dimensionality is that of $\psi(\mathbf{\mathbf{p}})$. We prove that the suggested master equation avoids the Velo-Zwanziger problem of superluminal propagation of higher spin waves and points toward a consistent description of higher spin quantum fields.
author:
- Mauro Napsuciale
- Mariana Kirchbach
title: 'Avoiding superluminal propagation of higher spin waves via projectors onto $W^{2}$ invariant subspaces. '
---

Introduction.
=============

The field-theoretical description of interacting particles with spin $>1$ is a long-standing problem. The interaction of a spin-$\frac{3}{2}$ Rarita-Schwinger (RS) field minimally coupled to an external electromagnetic field was shown to be inconsistent more than forty years ago [@sudarshan1]. Later on, Velo and Zwanziger observed superluminal propagation of the RS wave front in the presence of a minimally coupled electromagnetic field [@VZ1] and also studied the conditions under which the Proca field interacting with an external electromagnetic field propagates causally [@VZ2].
After these works many authors have addressed the above problem from different perspectives and for different interactions [@todos], and the general feeling seems to be that it is not possible to construct a consistent quantum theory for massive particles with $s >$ 1. At several decades of distance, looking afresh at the equations of motion can lead to a different understanding of this fundamental problem. Weinberg emphasizes in his textbook on quantum field theory [@Weinberg:mt] that the equation of motion satisfied by the Dirac field is nothing but a record of the way one puts together the two irreducible representations, (1/2,0) and (0,1/2), of the proper orthochronous Lorentz group to form a field that transforms invariantly under parity. In a wider understanding, this means that the equations of motion satisfied by a field are just a consequence of the properties of the representations of the Homogeneous Lorentz Group (HLG) chosen by us to accommodate the field and the discrete symmetries we require to be realized in this space. Closely related arguments can be found, among others, in [@WKT], [@ryder], [@MK97], and [@prinind]. More recently, Refs. [@MK03; @Gaby] studied covariant projectors onto invariant subspaces of the squared Pauli-Lubanski operator in the representation space of the four-vector–spinor and showed that the associated equations are free from the Velo-Zwanziger problem. The corresponding projectors for the $(s,0)\oplus(0,s)$ representation space were studied in [@MC], where it was shown that under minimal coupling a particle in this representation has the correct value of the spin gyromagnetic factor, $g_{s}=\frac{1}{s}$, thus proving Belinfante’s conjecture [@belinfante] from 1953.
In this work we explore the projectors onto the invariant subspaces of the Poincaré Casimir operators, the squared four-momentum and the squared Pauli-Lubanski operator, for any $s$, and study the propagation of the corresponding wave fronts along the lines of Refs. [@VZ1; @VZ2]. The paper is organized as follows. In the next Section we recall in brief the current description of higher spins and its relation to the Poincaré group. In Section III we suggest describing higher spins as invariant subspaces of the Poincaré Casimirs. In Section IV we show that particles within this framework propagate causally in the presence of an electromagnetic field, thus avoiding the classical Velo-Zwanziger problem. The paper closes with a brief Summary.

Current description of fields and its relation to Poincaré group representations.
=================================================================================

The primary classification of elementary systems is usually done by identifying them (up to form factors) with the irreducible representations (irreps) of the Poincaré group ($PG$). If so, then one necessarily has to consider particles as invariant spaces of the Casimir operators of this group – the squared four-momentum $P^{2}$, on the one side, and the squared Pauli-Lubanski operator $W^{2}$, on the other side – and label them by their respective eigenvalues, $p^{2}$ and $-p^{2}s(s+1)$, as $|p^{2},s(s+1)>$. Further quantum numbers can be associated with the Casimir invariants of the underlying Homogeneous Lorentz Group (HLG), $SO(1,3)$, and are approached by the reduction chain $PG\supset SO(1,3)$. For finite dimensional representations, the Casimir invariants of $SO(1,3)$ are frequently expressed in terms of the two $SU(2)$ Casimirs, in turn denoted by $\mbox{\bf S}_{L}^{2}$ and $\mbox{\bf S}_{R}^{2}$, of $SU(2)_{L}\otimes SU(2)_{R}$, a group that is locally isomorphic to $SL(2,C)$, the universal covering of the HLG.
The two additional quantum labels gained in this manner are the well-known left- and right-handed “angular momenta”, $s_{L}$ and $s_{R}$, respectively. Therefore, a covariant state labeling can be introduced as: $|p^{2},s(s+1);s_{L},s_{R}>$, with $s=|s_{L}-s_{R}|,...,s_{L}+s_{R}$. In so doing one encounters essentially two types of finite dimensional HLG representations.

1. The first ones contain just one $W^{2}$ invariant subspace, and correspond to the case when one of the $s_{L}$, $s_{R}$ labels vanishes (i.e. either $(s_{L},0)$, or $(0,s_{R})$), and $s_{R}=s_{L}$. In such a case, $s_{L/R}(s_{L/R}+1)=s(s+1)$, equals the $\left( -\frac{1}{m^{2}}W^{2}\right) $ eigenvalue in the space under consideration (see Eq. (\[W2\_rest\]) below) and $W^{2}$ – and $\mbox{\bf S}_{L/R}^{2}$ invariant spaces coincide. Irreps of the above type are suggestive of replacing $W^{2}$– by $SU(2)$ spin labels. As long as the basic fields in physics are precisely of the above type (the Dirac field is $(1/2,0)\oplus(0,1/2)$, the electromagnetic field strength tensor is $(1,0)\oplus(0,1)$, and scalars are just $(0,0)$), identifying Poincaré labels with $SU(2)$ spins works out without any harm.

2. The second ones are HLG irreps containing several $W^{2}$ invariant subspaces. In this case, both $s_{L}$ and $s_{R}$ are non-vanishing, and the irreps are of the type $\left( s_{L},s_{R}\right)$ with $s_{L}\not =0$ and $s_{R}\not =0$. Examples are the vector– and tensor gauge fields, $(1/2,1/2)$ and $(1,1)$, respectively. In the rest frame, $W^{2}=-m^{2}\mbox{\bf S}^{2}$, hence $W^{2}$ and $\mbox{\bf S}^{2}$ invariant sub-spaces coincide. However, beyond the rest frame, in flight, $W^{2}$ and $\mbox{\bf S}^{2}$ invariant sub-spaces are no longer identical, a situation caused by the property of the boost to mix up SU(2) spins differing by one unit.
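The spin content $s=|s_{L}-s_{R}|,\dots,s_{L}+s_{R}$ quoted above can be verified by dimension counting: the irrep $(s_{L},s_{R})$ has dimension $(2s_{L}+1)(2s_{R}+1)$, which must equal $\sum_{s}(2s+1)$ over its $W^{2}$ invariant subspaces. A small illustrative sketch (function names are ours):

```python
from fractions import Fraction

def spins_in_rep(sL, sR):
    """SU(2) spin content s = |sL-sR|, ..., sL+sR of the HLG irrep (sL, sR)."""
    s = abs(sL - sR)
    spins = []
    while s <= sL + sR:
        spins.append(s)
        s += 1
    return spins

def check_dimension(sL, sR):
    """(2sL+1)(2sR+1) must equal the summed dimensions of the spin blocks."""
    return (2 * sL + 1) * (2 * sR + 1) == sum(2 * s + 1 for s in spins_in_rep(sL, sR))

# The four-vector (1/2,1/2) decomposes into spins 0 and 1: dim 4 = 1 + 3;
# the tensor (1,1) into spins 0, 1, 2: dim 9 = 1 + 3 + 5.
```

For $(1/2,1/2)$ this reproduces exactly the two $W^{2}$ invariant subspaces (spin 0 and spin 1) mentioned above, while $(s,0)$ and $(0,s)$ trivially return a single subspace.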
Often, Lorentz representations that contain as building blocks irreps of the second type appear attractive for the description of higher spins, the classical examples being the totally symmetric rank-$K$ Lorentz tensors with Dirac spinor components, generically denoted by $\psi_{\mu_{1}...\mu_{K}}$. They are exploited for the description of fields that have been labeled in the rest frame by the highest spin $J=K+1/2$. The separation between Lorentz and spinor indices inherent to such tensors makes them especially appealing for the construction of covariant fermion-boson vertices. However, one has to face the problem of how to pick out the favored degrees of freedom and exclude interference with the unwanted ones. It seems inevitable to return to the Poincaré invariants if one wishes to distinguish all the degrees of freedom contained in $\psi_{\mu_{1}...\mu_{K}}$ in a covariant and translationally invariant fashion. Yet, for one reason
---
abstract: 'Carrying out clinical diagnosis of retinal vascular degeneration using Fluorescein Angiography (FA) is a time-consuming process and can pose significant adverse effects on the patient. Angiography requires insertion of a dye that may cause severe adverse effects and can even be fatal. Currently, there are no non-invasive systems capable of generating Fluorescein Angiography images. However, retinal fundus photography is a non-invasive imaging technique that can be completed in a few seconds. In order to eliminate the need for FA, we propose a conditional generative adversarial network (GAN) to translate fundus images to FA images. The proposed GAN consists of a novel residual block capable of generating high-quality FA images. These images are important tools in the differential diagnosis of retinal diseases without the need for an invasive procedure with possible side effects. Our experiments show that the proposed architecture outperforms other state-of-the-art generative networks. Furthermore, our proposed model achieves qualitative results that are indistinguishable from real angiograms.'
bibliography:
- 'egbib.bib'
title: 'Fundus2Angio: A Novel Conditional GAN Architecture for Generating Fluorescein Angiography Images from Retinal Fundus Photography'
---

Introduction {#sec:intro}
============

For a long time, Fluorescein Angiography (FA) combined with retinal funduscopy has been used for diagnosing retinal vascular and pigment epithelial-choroidal diseases [@mary2016retinal]. The process requires the injection of a fluorescent dye which appears in the optic vein within 8-12 seconds, depending on the age and cardiovascular structure of the eye, and stays up to 10 minutes [@mandava2004fluorescein]. Although the procedure is generally considered safe, there have been reports of mild to severe complications due to allergic reactions to the dye [@kwiterovich1991frequency; @brockow2014hypersensitivity; @torres20091].
Frequent side effects can range from nausea and vomiting to anaphylaxis, heart attack, anaphylactic shock, and death [@lira2007adverse; @kwan2006fluorescein; @lieberman2005diagnosis; @el1996anaphylactic; @fineschi1999fatal]. In addition, leakage of fluorescein at the intravenous site is common. However, the concentration of the fluorescein solution does not have a direct impact on the adverse effects mentioned above [@yannuzzi1986fluorescein]. Given the complications and the risks associated with this procedure, a non-invasive, affordable, and computationally effective alternative is imperative. The only current alternatives to fluorescein angiography (FA) are based on Optical Coherence Tomography combined with basic image processing techniques, and these systems are generally quite expensive. Without a computationally effective and financially viable mechanism to generate reliable and reproducible fluorescein angiograms, the only alternative is to utilize retinal funduscopy for differential diagnosis. Although automated systems consisting of image processing and machine learning algorithms have been proposed for diagnosing underlying conditions and diseases from fundus images [@gurudath2014machine; @fu2018disc; @poplin2018prediction; @lira2007adverse], there has not been an effective effort to generate FA images from retinal photographs. In this paper, we propose a novel conditional Generative Adversarial Network (GAN) called Fundus2Angio, capable of synthesizing fluorescein angiograms from retinal fundus images. The procedure is completely automated and does not require any human intervention. We use both qualitative and quantitative metrics for testing the proposed architecture. We compare the proposed architecture with other state-of-the-art conditional GANs [@wang2018high; @isola2017image; @zhu2017unpaired]. Our model outperforms these networks in terms of quantitative measurements.
For qualitative results, expert ophthalmologists were asked to distinguish fake angiograms from a random set of balanced real and fake angiograms over two trials. Results show that the angiograms generated by the proposed network are quite indistinguishable from real FA images. Literature Review ================= Generative adversarial networks have revolutionized many image manipulation tasks such as image editing [@zhu2016generative; @dekel2018sparse], image styling [@chen2018sketchygan; @sangkloy2017scribbler], and image style transfer [@zhu2017unpaired; @wang2018high; @xian2018texturegan]. Multi-resolution architectures are common practice in computer vision, and coupled architectures have the capability to combine fine and coarse information from images [@burt1983laplacian; @brown2003recognising]. Recently, techniques on conditional [@huang2017stacked; @denton2015deep] and unconditional GANs [@chen2017photographic; @zhang2017stackgan] have explored the idea of combined resolutions within the architecture for domain-specific tasks. Inspired by this, we propose an architecture that extracts features at different scales. Some approaches have also used multi-scale discriminators for style transfer [@wang2018high; @karras2017progressive; @zhang2018densely]. However, they attach discriminators only to the generator that deals with fine features, while ignoring discriminators for the coarse generator completely. In order to learn useful features at the coarsest scale, separate multi-scale discriminators are necessary. Our proposed architecture employs them for both the coarse and fine generators. For high-quality image synthesis, a pyramid network with multiple pairs of discriminators and generators has also been proposed, termed SinGAN [@shaham2019singan]. Though it produces high-quality synthesized images, the model works only on unpaired images. To add to this problem, each generator’s input is the synthesized output produced by the previous generator.
As a result, it cannot be employed for pair-wise image training that satisfies a condition. To alleviate this problem, a connection needs to be established that can propagate features from the coarse to the fine generator. In this paper, we propose such an architecture, with a feature appending mechanism between the coarse and fine generators, making it a two-level pyramid network with multi-scale discriminators as illustrated in Fig. \[fig1\]. ![Proposed Generative Adversarial Network[]{data-label="fig1"}](Fig1.png){width="9cm"} The Proposed Methodology ======================== This paper proposes a new conditional generative adversarial network (GAN) comprising a novel residual block for producing realistic FA images from retinal fundus images. First, we introduce the residual block in section \[subsec:residualblock\]. We then delve into the proposed conditional GAN, consisting of fine and coarse generators and four multi-scale discriminators, in sections \[subsec:generators\] and \[subsec:discriminators\]. Lastly, in section \[subsec:objective\], we discuss the objective function and loss weight distributions for each of the components that form the proposed architecture. Novel Residual Block {#subsec:residualblock} -------------------- Recently, residual blocks have become the norm for implementing many image classification, detection, and segmentation architectures [@he2016deep; @he2016identity]. Generative architectures have employed these blocks in interesting applications ranging from image-to-image translation to super-resolution [@johnson2016perceptual; @wang2018high; @ledig2017photo]. In its atomic form, a residual unit consists of two consecutive convolution layers. The output of the second layer is added to the input, allowing for deeper networks. Computationally, regular convolution layers are expensive compared to a newer convolution variant, called separable convolution [@chollet2017xception].
Separable convolution performs a depth-wise convolution followed by a point-wise convolution. This, in turn, helps to extract and retain depth and spatial information through the network. It has been shown that interspersing convolutional layers allows for more efficient and accurate networks [@opticnet19]. We incorporate this idea to design a novel residual block that retains both depth and spatial information, decreases computational complexity, and ensures efficient memory usage, as shown in Table \[table1\].

  Residual Block   Equation                                                                   Activation                     No. of Parameters$^{1}$
  ---------------- -------------------------------------------------------------------------- ------------------------------ -------------------------
  Original         $\big[R_{i} \circledast F_{Conv} \circledast F_{Conv} \big] + R_{i}$       ReLU (Pre) [@he2016identity]   18,688
  Proposed         $\big[R_{i} \circledast F_{Conv} \circledast F_{SepConv} \big] + R_{i}$    Leaky-ReLU (Post)              10,784

  : Comparison between Original and Proposed Residual Block[]{data-label="table1"}

\
$^1$ $F_{Conv}$ and $F_{SepConv}$ have kernel size $K=3$, stride $S=1$, padding $P=0$, and number of channels $C=32$.

![Proposed Residual Block[]{data-label="fig2"}](Fig2.png){width="9cm"}

As illustrated in Fig. \[fig2\], we replace the last convolution operation with a separable convolution. We also use batch normalization [@ioffe2015batch] and Leaky-ReLU as a post-activation mechanism after both the convolution and separable convolution layers. For better results, we incorporate reflection padding as opposed to zero-padding before each convolution operation. The entire operation can be formulated as shown in Eq. \[eq1\]: $$\begin{split} R_{i+1} &= \big[R_{i} \circledast F_{Conv} \circledast F_{SepConv} \big] + R_{i} \\ &= F(R_{i}) + R_{i} \label{eq1} \end{split}$$
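The parameter saving from swapping the second convolution for a separable convolution can be sanity-checked with a short count. The sketch below is ours, not the authors': it counts only convolution weights and biases under stated conventions, and the totals in Table \[table1\] may include additional bookkeeping (e.g. normalization parameters), so only the relative comparison is meaningful.

```python
# Illustrative parameter count for the two residual-block variants
# (conv -> conv vs. conv -> separable conv) with kernel K = 3 and C = 32
# channels. Counts cover convolution weights and biases only.

def conv_params(c_in, c_out, k=3, bias=True):
    """Standard 2D convolution: c_out filters of shape c_in x k x k."""
    return c_out * c_in * k * k + (c_out if bias else 0)

def sep_conv_params(c_in, c_out, k=3, bias=True):
    """Depthwise k x k convolution followed by a pointwise 1 x 1 convolution."""
    depthwise = c_in * k * k + (c_in if bias else 0)
    pointwise = c_out * c_in + (c_out if bias else 0)
    return depthwise + pointwise

C = 32
original = 2 * conv_params(C, C)                       # conv + conv
proposed = conv_params(C, C) + sep_conv_params(C, C)   # conv + separable conv
print(original, proposed)  # 18496 10624 under these conventions
```

Under any reasonable bookkeeping the separable variant needs roughly half the parameters of the second standard convolution, consistent with the reduction reported in Table \[table1\].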
--- abstract: 'We investigate angular correlations in multi-jet final states at high-energy colliders and discuss their sensitivity to initial-state showering effects, including QCD coherence and corrections to collinear ordering [@url].' author: - | F. Hautmann$^1$ and H. Jung$^2$\ 1 - Oxford University, Theoretical Physics Department\ Oxford OX1 3NP, UK\ 2 - Deutsches Elektronen Synchrotron\ Hamburg D-22603, Germany\ title: ' Dijet azimuthal distributions and initial-state parton showers' --- 0.5 cm 0.8 cm [*Presented at the Workshop DIS08, University College London, April 2008*]{} 0.5 cm Events with multiple hadronic jets are central to many aspects of the LHC physics program and their analysis will require realistic Monte Carlo simulations. See e.g. [@alwalletal]. In a multi-jet event the correlation in the azimuthal angle $\Delta \phi$, defined to be between the two hardest jets, provides a useful measurement, sensitive to how well QCD multiple-radiation effects are described, and has been used to tune shower Monte Carlo event generators [@albrow]. The Tevatron $\Delta \phi$ measurements [@d02005] admit a reasonable description by Monte Carlo, see   and   results in Fig. \[Fig:d0az\] [@d02005]. In particular the data are sensitive to initial-state showering parameters and have been used for re-tuning of these parameters in  [@albrow]. On the other hand, the [HERA]{} $\Delta \phi$ measurements [@h1deltaphi; @zeus1931] are not well described by the standard   and   Monte Carlo showers in most of the data kinematic range (see below). At the LHC, measurements of $\Delta \phi$ distributions in multi-jet events may become accessible relatively early. Such complex hadronic final states at LHC energies are potentially sensitive to corrections to the collinear ordering implemented in standard parton showers [@ictppaper]. 
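For concreteness, the observable can be sketched in a few lines: given jet azimuthal angles, $\Delta\phi$ of the two hardest jets is their azimuthal separation folded into $[0,\pi]$; back-to-back dijets give $\Delta\phi \approx \pi$, while extra hard radiation pushes $\Delta\phi$ lower. The function names and the `(pt, phi)` jet representation below are illustrative, not taken from any of the cited analyses.

```python
import math

def delta_phi(phi1, phi2):
    """Azimuthal separation of two jets, folded into [0, pi]."""
    dphi = abs(phi1 - phi2) % (2.0 * math.pi)
    return 2.0 * math.pi - dphi if dphi > math.pi else dphi

def leading_delta_phi(jets):
    """jets: iterable of (pt, phi) pairs; Delta-phi of the two hardest jets."""
    (pt1, phi1), (pt2, phi2) = sorted(jets, reverse=True)[:2]
    return delta_phi(phi1, phi2)

# A back-to-back dijet pair gives approximately pi; a hard third jet
# decorrelates the two leading jets and lowers Delta-phi.
print(leading_delta_phi([(120.0, 0.3), (110.0, 0.3 + math.pi), (30.0, 1.0)]))
```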
In particular, for jets of given $E_T$ the partonic momentum fraction $x$ is reduced as the energy increases, and angular correlations probe coherence effects in the spacelike branching [@hj_angjet], associated with non-collinear radiation at $ x \ll 1$ and not included in   or . Monte Carlo generators designed to take these effects into account are based (see e.g. [@jepp06; @hann04] and early studies in [@mw]) on transverse-momentum dependent parton distributions and matrix elements, defined via high-energy factorization [@hef]. General formulations for these distributions in initial-state showers are studied in [@collinszu]. Ref. [@hj_angjet] investigates the effects of corrections to collinear-ordered showers on correlations in multi-jet final states, using the precise ep measurements [@zeus1931] that have recently become available. These measurements are characterized by large phase space available for jet production and by small $x$ kinematics, potentially relevant for extrapolation of initial-state showering effects to the LHC. In Fig. \[Fig:azz\] we report results [@hj_angjet] for the azimuthal $\Delta \phi$ distribution in two-jet and three-jet cross sections. In Fig. \[Fig: 3\] we give results for the $\Sigma p_t$ and $\Delta p_t$ distributions [@zeus1931; @hj_angjet] measuring the transverse-momentum imbalance between the leading jets. ![\[Fig:d0az\] Dijet azimuthal correlations measured by D0 along with the   and   results [@d02005].](d0az.eps){width="0.35\columnwidth"} These results show that the shape of the distributions is different for  and for the k$_\perp$-shower Monte Carlo  [@jung02], with the largest differences occurring at small $\Delta \phi$ and small $\Delta p_t$, where the two highest $E_T$ jets are far from back to back and one has effectively three hard, well-separated jets. Ref. 
[@hj_angjet] also analyzes the angular distribution of the third jet and finds significant contributions from regions where the transverse momenta in the initial state shower are not ordered. The description of the measurement by the k$_\perp$-shower is good, whereas the collinear-based   shower is not sufficient to describe it. ![\[Fig:azz\] Azimuthal correlations [@hj_angjet] by the k$_\perp$-shower  and by  compared with the ep data [@zeus1931]: (left) two-jet cross section; (right) three-jet cross section.](az_2jet.eps "fig:"){width="0.45\columnwidth"} ![\[Fig:azz\] Azimuthal correlations [@hj_angjet] by the k$_\perp$-shower  and by  compared with the ep data [@zeus1931]: (left) two-jet cross section; (right) three-jet cross section.](az_3jet.eps "fig:"){width="0.45\columnwidth"} The physical picture underlying the k$_\perp$-shower method involves both transverse momentum dependent pdfs and matrix elements [@ictppaper]. The angular and momentum correlations of Figs. \[Fig:azz\],\[Fig: 3\] are found [@hj_angjet; @hjradcor] to be sensitive in particular to the large-k$_\perp$ tail in the hard matrix elements [@hef]. More detailed studies of these off-shell contributions are currently underway, including comparisons with results of next-to-leading order (NLO) event generators, see single-jet and di-jet distributions in Fig. \[Fig: 4\]. Here we see in particular that the dijet $p_t$ spectrum at high $p_t$ is close for the NLO calculation and the k$_\perp$-shower (at low $p_t$ we see the effect of the Sudakov form factor in the shower). Ref. [@hj_angjet] illustrates that the collinear approximation to the matrix element does not describe the shape of the angular distribution at small $\Delta \phi$. We note that the inclusion of the perturbatively computed high-k$_\perp$ correction distinguishes the calculation [@hj_angjet] of multi-jet cross sections from other shower approaches (see e.g. 
[@hoeche]) that include transverse momentum dependence in the pdfs but not in the matrix elements. ![\[Fig: 3\] Transverse momentum correlations [@hj_angjet] by the k$_\perp$-shower  and by   compared with the 3-jet data [@zeus1931]. The variables $\Sigma p_t$ (left) and $\Delta p_t$ (right) are as defined in [@zeus1931; @hj_angjet].](zeus-ptsum-3jet_may5.eps "fig:"){width="0.45\columnwidth"} ![\[Fig: 3\] Transverse momentum correlations [@hj_angjet] by the k$_\perp$-shower  and by   compared with the 3-jet data [@zeus1931]. The variables $\Sigma p_t$ (left) and $\Delta p_t$ (right) are as defined in [@zeus1931; @hj_angjet].](zeus-deltapt-3jet_1.eps "fig:"){width="0.45\columnwidth"} It is worth emphasizing that the coherence effects in the angular distributions computed above are associated with multi-gluon radiation terms to the initial-state shower that become non-negligible at high energy (small $x$) and small $\Delta \phi$. These can be incorporated using the formulation at fixed transverse momentum. Near the back-to-back region of large $\Delta \phi$ [@delenda], corrections due to mixed Coulomb/radiative terms may also become important and affect the basic picture: see recent studies in [@0708pap]. See also [@manch] for a related discussion of Coulomb contributions. More general issues on unintegrated pdfs in parton showers are discussed in [@ictppaper; @collinszu; @endp]. Applications to semi-inclusive processes and spin asymmetries are reviewed in [@murgiarev]. 
![\[Fig: 4\] Comparison of the k$_\perp$-shower   with the NLO di-jet calculation : (left) single-jet distributions; (right) di-jet distributions.](jjbar-etjets.eps "fig:"){width="0.45\columnwidth"} ![\[Fig: 4\] Comparison of the k$_\perp$-shower   with the NLO di-jet calculation : (left) single-jet distributions; (right) di-jet distributions.](jjbar-ptsum.eps "fig:"){width="0.45\columnwidth"} Besides jet final states, the corrections to collinear-ordered showers discussed in this article will also be relevant to heavy particle production [@hann04; @hef; @hgs], including phenomenological studies of small-$x$ broadening in W and Z $p_\perp$ distributions [@cpyuan1],
--- author: - | Balázs Hidasi [^1]\ Gravity R&D Inc.\ Budapest, Hungary\ `balazs.hidasi@gravityrd.com` Alexandros Karatzoglou\ Telefonica Research\ Barcelona, Spain\ `alexk@tid.es` Linas Baltrunas [^2]\ Netflix\ Los Gatos, CA, USA\ `lbaltrunas@netflix.com` Domonkos Tikk\ Gravity R&D Inc.\ Budapest, Hungary\ `domonkos.tikk@gravityrd.com` bibliography: - 'citations.bib' title: | Session-based Recommendations with\ Recurrent Neural Networks --- ### Acknowledgments {#acknowledgments .unnumbered} The work leading to these results has received funding from the European Union’s Seventh Framework Programme (FP7/2007-2013) under CrowdRec Grant Agreement n$^\circ$ 610594. [^1]: The author spent 3 months at Telefonica Research during the research of this topic. [^2]: This work was done while the author was a member of the Telefonica Research group in Barcelona, Spain
--- author: - 'Chao Yeh Chen and Kristen Grauman [^1]' bibliography: - 'strings.bib' - 'ref.bib' title: 'Efficient Activity Detection in Untrimmed Video with Max-Subgraph Search' --- Acknowledgment {#acknowledgment .unnumbered} ============== We thank the anonymous reviewers for their feedback, and Sudheendra Vijayanarasimhan for helpful discussions. This research is supported in part by ONR PECASE N00014-15-1-2291. [^1]:
--- abstract: 'We introduce a generalization of the Temperley–Lieb algebra. This generalization is defined by adding certain relations to the algebra of braids and ties. A specialization of this last algebra corresponds to one small ramified partition algebra; this fact is the motivation for the name of our generalization.' address: 'Departamento De Matemáticas, Universidad de Valparaíso, Gran Bretaña 1091, Valparaíso, Chile.' author: - Jesús Juyumaya date: 'April 1, 2013' title: 'A Partition Temperley–Lieb Algebra' --- [^1] Introduction {#introduction .unnumbered} ============ The Temperley–Lieb algebra appeared originally in statistical mechanics as well as in knot theory, quantum groups, and subfactors of von Neumann algebras. This algebra was discovered by Temperley and Lieb in the construction of transfer matrices [@tl]. It was later rediscovered by V. Jones [@jo83], who used it in the construction of his polynomial invariant for knots, known as the Jones polynomial [@jo]. From a purely algebraic point of view, the Temperley–Lieb algebra is the quotient of the Iwahori–Hecke algebra by the two–sided ideal generated by the Steinberg elements $h_{ij}$ associated to the $h_i$’s, where $\vert i-j \vert =1$ and the $h_i$’s denote the usual generators of the Iwahori–Hecke algebra; see p. 35 of [@gohajo]. In other words, the Temperley–Lieb algebra can be defined by the usual presentation of the Iwahori–Hecke algebra together with the added relations $h_{ij}=0$, for all $\vert i-j \vert =1$. From this point of view, there are several generalizations of the Temperley–Lieb algebra, e.g. see [@fan; @gojula]. This paper proposes a generalization of the Temperley–Lieb algebra by adding relations of Steinberg type to the [*algebra of braids and ties*]{} [@aj; @ry]. The algebra of braids and ties ${\mathcal E}_n(u)$, where $u$ is a parameter and $n$ denotes a positive integer, can be regarded as a generalization of the Hecke algebra, and recently E. O.
Banjo proved that ${\mathcal E}_n(1)$ is isomorphic to a small ramified partition algebra; see Theorem 4.2 of [@ba]. A possible connection between ${\mathcal E}_n(u)$ and the partition algebras [@joPA; @mar1] was first suggested by S. Ryom–Hansen [@ry]. The algebra ${\mathcal E}_n(u)$ is defined by two sets of generators and relations. One set of generators $T_1,\ldots , T_{n-1}$ reflects the braid generators of the Yokonuma–Hecke algebra [@yo; @th; @chda] of type $A$, and the other set of generators $E_1, \ldots ,E_{n-1}$ reflects the behavior of the monoid $\mathsf{P}_n$ associated to the set partitions of $\{1, \dots , n\}$. Thus, ${\mathcal E}_n(u)$ can also be thought of as a $u$–deformation of an amalgam of the symmetric group on $n$ symbols and $\mathsf{P}_n$. In short, in this paper we define and study the [*Partition Temperley–Lieb algebra*]{}, denoted ${\rm PTL}_n(u)$, which is defined by adding to the presentation of ${\mathcal E}_n(u)$ mentioned above the following relations $$E_iE_jT_{ij}=0 \quad \text{for all}\quad \vert i-j\vert=1$$ where $T_{ij}$ is the Steinberg element associated to the $T_i$’s. This work is organized as follows. In Section 1 we fix notation and recall the definition of the Jimbo representation. In Section 2 we recall the definition of the algebra ${\mathcal E}_n(u)$; we also include some results from [@ry] which are used in the paper. In Section 3 we construct a non–faithful tensor representation of the algebra ${\mathcal E}_n(u)$, which is used in Section 4 for the definition of our Partition Temperley–Lieb algebra ${\rm PTL}_n(u)$. Section 5 gives two presentations of ${\rm PTL}_n(u)$. Using one of these presentations, we construct a linear spanning set of ${\rm PTL}_n(u)$, which we conjecture to be a basis for the Partition Temperley–Lieb algebra.
Finally, based on a conjecture that the algebra ${\mathcal E}_n(u)$ supports a Markov trace, we prove in Section 7 under which conditions this trace passes to ${\rm PTL}_n(u)$. Preliminaries ============= Throughout the paper, algebra means a unital associative algebra, with unity $1$, over the field of rational functions $K:={\Bbb C}(\sqrt{u})$ in the variable $\sqrt{u}$. Consequently, we put $u = (\sqrt{u})^2$. Let $ {\rm H}_n = {\rm H}_n(u)$ be the Iwahori–Hecke algebra of type $A$, that is, the algebra presented by generators $1, h_1, \ldots , h_{n-1}$ subject to the braid relations among the $h_i$’s and the quadratic relation $h_i^2 = u + (u-1)h_i$, for all $i$. We now recall the Jimbo representation of the Hecke algebra. Let $V$ be the $K$–vector space with basis $\{v_1, v_2\}$. Denote by ${\bf J}$ the endomorphism of $V\otimes V$ defined through the mapping $$\begin{array}{ccl} {\bf J}(v_i\otimes v_j) & = & -v_i \otimes v_j \qquad \text{for } \quad i=j\\ {\bf J}(v_1\otimes v_2) & = & (u-1)\,v_1\otimes v_2 + \sqrt{u}\, v_2\otimes v_1 \\ {\bf J} (v_2\otimes v_1) & = & \sqrt{u}\, v_1\otimes v_2. \end{array}$$ The Jimbo representation of ${\rm H}_n$ in $V^{\otimes n}$ is defined by mapping $h_i\mapsto {\bf J}_i$, where ${\bf J}_i$ acts as the identity, with the exception of the factors $i$ and $i+1$, on which it acts by ${\bf J}$. \[kerJ\] The kernel of the Jimbo representation is the two–sided ideal generated by the $h_{ij}$, where $\vert i-j\vert =1$ and $$h_{ij}:= 1 + h_i + h_j + h_ih_j + h_jh_i + h_ih_jh_i.$$ It is well known that the Temperley–Lieb algebra can be defined as the quotient of the Iwahori–Hecke algebra by the kernel of the Jimbo representation. Thus, the Temperley–Lieb algebra can be defined by adding extra non–redundant relations to the above presentation of the Hecke algebra. More precisely, we have the following definition.
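The defining relations above can be checked numerically for a sample value of the parameter. The sketch below is illustrative (ours, not from the paper): it builds the matrix of ${\bf J}$ on $V\otimes V$ in the ordered basis $(v_1\otimes v_1,\, v_1\otimes v_2,\, v_2\otimes v_1,\, v_2\otimes v_2)$ for $u=2$ and verifies the quadratic relation, the braid relation on $V^{\otimes 3}$, and the vanishing of the Steinberg element $h_{12}$ in the representation, as the lemma asserts.

```python
# Numeric sanity check of the Jimbo representation at the sample value u = 2.
import math

u = 2.0
r = math.sqrt(u)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def kron(A, B):  # Kronecker product: row index i*len(B)+k, col index j*len(B[0])+l
    return [[A[i][j] * B[k][l]
             for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def add(*Ms):
    return [[sum(M[i][j] for M in Ms) for j in range(len(Ms[0][0]))]
            for i in range(len(Ms[0]))]

def scale(c, A):
    return [[c * x for x in row] for row in A]

def eye(n):
    return [[float(i == j) for j in range(n)] for i in range(n)]

# The matrix of J, read off from the defining mapping above.
J = [[-1.0, 0.0, 0.0, 0.0],
     [0.0, u - 1.0, r, 0.0],
     [0.0, r, 0.0, 0.0],
     [0.0, 0.0, 0.0, -1.0]]

# Quadratic Hecke relation: J^2 = u*I + (u - 1)*J
lhs, rhs = matmul(J, J), add(scale(u, eye(4)), scale(u - 1.0, J))
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(4) for j in range(4))

# J_1, J_2 on V^{(x)3}: braid relation and vanishing of h_{12}
I2 = eye(2)
J1, J2 = kron(J, I2), kron(I2, J)
b1, b2 = matmul(matmul(J1, J2), J1), matmul(matmul(J2, J1), J2)
assert all(abs(b1[i][j] - b2[i][j]) < 1e-12 for i in range(8) for j in range(8))

# h_{12} = 1 + h_1 + h_2 + h_1 h_2 + h_2 h_1 + h_1 h_2 h_1 maps to zero
h12 = add(eye(8), J1, J2, matmul(J1, J2), matmul(J2, J1), b1)
assert all(abs(h12[i][j]) < 1e-12 for i in range(8) for j in range(8))
```

The vanishing of $h_{12}$ reflects the fact that the simultaneous $u$-eigenspace of ${\bf J}_1$ and ${\bf J}_2$ (a $q$-analogue of antisymmetric 3-tensors on a 2-dimensional space) is zero.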
The Temperley–Lieb algebra ${\rm TL}_n = {\rm TL}_n(u)$ is the algebra generated by $1, h_1, \ldots , h_{n-1}$ subject to the following relations: $$\begin{aligned} h_i^2 & = & u + (u-1)h_i \qquad \text{ for all $i$}\label{tl1}\\ h_ih_j & = & h_j h_i \qquad \text{ for $\vert i - j\vert >1$}\label{tl2}\\ h_ih_jh_i & = & h_jh_i h_j \qquad \text{ for $\vert i - j\vert =1$}\label{tl3}\\ h_{ij} & = & 0\qquad \text{ for $\vert i - j\vert =1$}.\label{tl4}\end{aligned}$$ It is well known that the dimension of ${\rm TL}_n$ is the $n$th Catalan number $C_n := \frac{1}{n+1}\binom{2n}{n}$ [@jo83] and that ${\rm TL}_n$ has a (reduced) presentation with idempotent generators. Indeed, setting $$f_i := \frac{1}{1+u}(1+h_i)$$ we have the following proposition. \[pretl\] ${\rm TL}_n$ can be presented by generators $1, f_1, \ldots ,f_{n-1}$ satisfying the following relations $$\begin{aligned} f_i^2 & = & f_i \qquad \text{ for all $i$}\label{pretl1}\\ f_if_j & = & f_j f_i \qquad \text{ for $\vert i - j\vert >1$}\label{pretl2}\end{aligned}$$
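As a quick illustration (ours, not from the paper), the stated dimension formula can be evaluated directly; the first few values reproduce the familiar Catalan numbers.

```python
from math import comb

def tl_dimension(n):
    """Dimension of TL_n: the n-th Catalan number C_n = binom(2n, n) / (n + 1)."""
    return comb(2 * n, n) // (n + 1)

print([tl_dimension(n) for n in range(1, 6)])  # [1, 2, 5, 14, 42]
```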
--- abstract: 'An explicit CMC Schwarzschildean line element is derived near the critical point of the foliation, the lapse is shown to decay exponentially, and the coefficient in the exponent is calculated.' address: - '$^{*}$ ESI, A-1090 Wien, Boltzmangasse 9, Austria.' - ' Institute of Physics, Jagiellonian University, 30-059 Cracow, Reymonta 4, Poland.' - '$^{+}$ Physics Department, University College, Cork, Ireland.' author: - 'Edward Malec$^{*,**}$ and Niall Ó Murchadha$^{*,+}$' title: '**Constant mean curvature slices in the extended Schwarzschild solution and collapse of the lapse. Part II**' --- Introduction ------------ This is a sequel to our previous work on the constant mean curvature (CMC) slices of the extended Schwarzschild geometry. Here we obtain CMC foliations by solving the Einstein equations in a particular gauge. A crucial role is played by a condition (Eq. (\[5\]) below) imposed on the lapse. While this method is completely equivalent to the other, more geometric approach (see [@Niall]), it seems to be more straightforward and technically simpler. We focus on a concise derivation of the explicit CMC foliation near the critical point of the CMC foliation. The final result is identical to the result derived in [@Niall]. Constant mean curvature foliations have recently been investigated numerically in the simulation of a single spherically symmetric black hole [@CMC2]. We hope that our analytic results will prove helpful in the verification of the numerical schemes. CMC slicing of the Schwarzschild spacetime ------------------------------------------ The notation is the same as in the preceding paper [@Niall]. We define $$(pR)^2=4\left[ 1 -{2m \over R} + \left( {KR\over 3}- {C\over R^2} \right)^2\right] , \label{1}$$ $$\gamma(R,t) =1+ 8\partial_tC\int_{R}^{\infty }dr{1\over r^5p^3}. \label{2}$$ and $$N =\gamma {pR\over 2}.
\label{3}$$ Here $m$ is the mass, $K$ (the trace of the extrinsic curvature) is a constant, and $C$ is a time-dependent parameter which measures the transverse part of the extrinsic curvature. The Schwarzschild line element, expressed in terms of coordinates adapted to the constant mean curvature foliation, is given by [@Iriondo] $$\begin{aligned} ds^2&=&-dt^2\Biggl( N^2 -\gamma^2\Bigl( {KR\over 3}- {C\over R^2}\Bigr)^2\Biggr) +4N {{C\over R^3}-{K\over 3}\over p^2R}dtdR+ {4\over (pR)^2}dR^2+ R^2d\Omega^2. \label{4}\end{aligned}$$ The hypersurfaces of constant time are CMC slices, asymptotic to the CMC slices of Minkowskian geometry. Elliptic slicing condition --------------------------- A minimal surface is a locus of points defined by the condition $p =0$. Choose a CMC Cauchy hypersurface $\Sigma_C$ of the extended Schwarzschild manifold corresponding to a parameter $C$ and let $R_0$ be an areal radius corresponding to a simple zero of $p^2$; that is, $p^2(R_0)=0$ but $\partial_{R}p^2|_{R_0}\ne 0$. Furthermore, assume that $${\partial_rN \over \sqrt{a}}|_{R_0}=0 \label{5}$$ at $R_0$. The condition (\[5\]) yields $$\partial_tC={1\over 8I(R_0)}. \label{6}$$ Here $$I(R_0)\equiv \int_{R_0}{dr\over pr}{6{C^2\over r^4}+{K^2r^2\over 3}\over \Biggl( 2m+{2KC\over 3}+{2K^2r^2\over 9} -{4C^2\over r^3} \Biggr)^2}. \label{7}$$ The value of the lapse function $N$ at the minimal surface, that is, at the areal radius $R_0$, can be shown (using Eqs. (\[1\] – \[3\])) to equal $$N = {dC\over dt} {1\over m+{KC\over 3} +{K^2R^3_0\over 9}-2{C^2\over R^3_0}}. \label{8}$$ The lapse $N $ is strictly positive at the minimal surface corresponding to a simple zero $R_0$. Eqs. (\[1\] – \[3\]) imply that $N (R)> N (R_0)$ if $R>R_0$, and therefore the lapse exists on all of $\Sigma_C$. Equation (\[6\]) dictates the rate of change of the parameter $C$.
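As a numeric illustration (not part of the derivation), a minimal surface $R_0$ can be located by bisecting Eq. (\[1\]) for a zero of $p^2$. The parameter values below are illustrative; in the limit $K = C = 0$ the slice reduces to a $t = \mathrm{const}$ Schwarzschild slice and the minimal surface sits at the horizon, $R_0 = 2m$.

```python
# Locate the minimal surface R_0, a zero of p^2 from Eq. (1), by bisection.

def p_squared(R, m=1.0, K=0.0, C=0.0):
    """p^2 = (pR)^2 / R^2 with (pR)^2 taken from Eq. (1)."""
    bracket = 1.0 - 2.0 * m / R + (K * R / 3.0 - C / R**2) ** 2
    return 4.0 * bracket / R**2

def find_R0(lo, hi, m=1.0, K=0.0, C=0.0, tol=1e-12):
    """Bisect for a zero of p^2 in [lo, hi]; a sign change is assumed."""
    assert p_squared(lo, m, K, C) * p_squared(hi, m, K, C) < 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if p_squared(lo, m, K, C) * p_squared(mid, m, K, C) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

R0 = find_R0(1.0, 3.0)  # expect R_0 = 2m = 2 for m = 1, K = C = 0
```

A simple zero like this one is precisely the situation in which condition (\[5\]) can be imposed; the construction breaks down when the zero degenerates, as discussed below.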
It is clear that one can uniquely construct a foliation of a part of the extended Schwarzschild geometry by imposing the condition (\[5\]) at the minimal surfaces on all slices to the future of a given one. The leaves of the resulting foliation connect the two null infinities of the extended Schwarzschild spacetime. This gives us a curve $R_0(t)$ of zeroes of the mean curvature $p$. It is evident, just by inspecting the explicit solution presented above, that the line running along the locations of the minimal surfaces $R_0(t)$ can be arranged to be smooth. It can be chosen to coincide with the ‘vertical’ $t = 0$ axis in standard Schwarzschild coordinates. This construction breaks down when $R_0$ ceases to be a simple zero of $p^2$, since the expressions appearing in Eqs. (\[6\]) and (\[8\]) become unboundedly large. The goal of this paper is to show the asymptotic behaviour of the lapse at the critical minimal surface. The evolution of C near the critical point ------------------------------------------ Let $C_*$ and $R_*$ be degenerate, that is, such that the zero of $p^2$ ceases to be simple. In this case both $p$ and its derivative $\partial_Rp$ vanish; that means that $$\begin{aligned} && 1- {2m\over R_*} -{2KC_*\over 3R_*}+{K^2R^2_*\over 9}+{C_*^2\over R^4_*}=0, \nonumber\\ && 2m +{2KC_*\over 3} + {2K^2R^3_*\over 9} -{4C^2_*\over R^3_*}=0. \label{9}\end{aligned}$$ One can easily show that if $C_*$ and $R_*$ are critical, then the sign of $$\beta \equiv -2C_*+{2\over 3}KR^3_* \label{10}$$ is the same as the sign of $-C_*$. There exist critical values of $C_*$ that are positive ($C_*^+$) or negative ($C_*^-$). For definiteness we shall consider only the case when $C(t=0) > C_*^-$; therefore the only limiting case we consider is that with $C\rightarrow C_*^+$. (That choice corresponds to a foliation formed by leaves connecting the two null infinities which moves forward in time; see the discussion in Sec. IV of [@Niall].)
For simplicity we will drop the $^+$ suffix, and $C_*$ will mean a positive critical parameter. It follows from the dynamical equation (\[6\]) that $C$ can only increase. Next, let us introduce the notation $$\begin{aligned} &&\epsilon\equiv C_*-C \nonumber\\ && R_0\equiv R_* +\delta . \label{11}\end{aligned}$$ where both $\delta $ and $\epsilon $ are positive and small. The equation $p(R_0)=0$ yields a nonlinear algebraic equation whose truncation gives $$\delta^2 A +\epsilon \beta =0. \label{12}$$ Here $A \equiv 2R^2_*+K^2R^4_*$. Eq. (\[12\]) is in fact the Lyapunov–Schmidt reduced equation constructed according to the standard rules [@Trenogin]. Therefore, in the vicinity of the critical point we have $$\delta =\sqrt{-\beta \epsilon \over A}. \label{13}$$ The function $p$ can be expressed in the form $${pr\over 2} =\sqrt{1-{R_0\over r}}\Biggl[ {\kappa \delta \over R_0}+ {K^2\over 9}(rR_0+r^2-2R_0^2 )-{C^2\over R^4_0} ({R_0\over r} + {R_0^2\over r^2} +{R_0^3\over r^3}-3)\Biggr]^{1/2}$$
--- abstract: 'A stochastic calculus is given for processes described by stochastic integrals with respect to fractional Brownian motions and Rosenblatt processes, somewhat analogous to the stochastic calculus for Itô processes. The processes for this stochastic calculus arise naturally from a stochastic chain rule for functionals of Rosenblatt processes, and some Itô-type expressions are given here. Furthermore, there is some analysis of these results for their applications to problems using Rosenblatt noise.' address: - 'University of Kansas, Department of Mathematics, 1460 Jayhawk Blvd., Lawrence, 66045, Kansas, USA' - 'Charles University, Faculty of Mathematics and Physics, Sokolovská 83, Prague 8, 186 75, Czech Republic' author: - Petr Čoupek - 'Tyrone E. Duncan' - 'Bozenna Pasik-Duncan' title: A Stochastic Calculus for Rosenblatt Processes --- Rosenblatt process ,stochastic calculus ,Itô formula ,Skorokhod integral ,forward integral 60H05 ,60H07 ,60G22 *This paper is dedicated to the memory of Larry Shepp.* Introduction ============ Self-similar stochastic processes, that is, processes whose distributions are invariant under suitable scalings, can be used as mathematical models of various physical phenomena. These processes have been used for modeling in hydrology, biophysics, geophysics, telecommunication, turbulence, cognition, and finance. Typically, these self-similar processes exhibit long-range dependence; that is, their autocorrelations decay more slowly than exponentially. Bibliographical guides that provide applications of self-similar stochastic processes and many references are given by Taqqu [@Taqqu86] and by Willinger, Taqqu, and Erramili [@WilTaqErr96]. The family of fractional Brownian motions is among the most studied self-similar stochastic processes.
Fractional Brownian motion indexed by the Hurst parameter $0<H<1$, that is denoted by $B^H$ here, is a centered Gaussian stochastic process whose covariance function is given by $$\mathbb{E} B^H_sB_t^H = \frac{1}{2}\left(|s|^{2H}+|t|^{2H}-|s-t|^{2H}\right), \quad s,t\in\mathbb{R}.$$ There are at least two reasons why fractional Brownian motions are of interest. First, these processes are self-similar, have stationary increments, and exhibit long-range dependence for $\sfrac{1}{2}<H<1$. These properties make them very attractive for practical modeling and applications. The second reason is the fact that they are Gaussian processes which makes some mathematical models using fractional noise feasible for analysis. In fact, stochastic calculus for fractional Brownian motions is fairly developed, e.g. [@AlosMazNua01; @AlosNua03; @BiaHuOksZha08; @DecrUstu99; @DunJakDun06; @DunHuDun00]. However, non-Gaussian data with fractal features have also been observed empirically, e.g. [@Dom15] where control error in single-input-single-output (SISO) loops is analyzed. Domański has shown from data of some physical systems that the Gaussian assumption is not always appropriate. In such cases, it does not seem reasonable to use a Gaussian process such as a fractional Brownian motion as a model for these physical phenomena and the use of a Rosenblatt process can provide a useful alternative. A Rosenblatt process with the Hurst parameter $\sfrac{1}{2}<H<1$, denoted here as $R^H$, can arise as a non-Gaussian limit of suitably normalized sums of long-range dependent random variables in a non-central limit theorem, see e.g. [@DobMaj79; @Ros61; @Taqq79]. This process admits a version with Hölder continuous sample paths (up to order $H$), has stationary increments, and is $H$-self-similar with long-range dependence. Moreover, its covariance function is the same as that of the fractional Brownian motion $B^H$. 
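Two properties quoted above can be checked directly from the covariance function. The following sketch (with arbitrary illustrative values of $H$, $c$, $s$, $t$) verifies $H$-self-similarity, $\mathbb{E}\,B^H_{cs}B^H_{ct} = c^{2H}\,\mathbb{E}\,B^H_sB^H_t$, and the fact that $H = \sfrac{1}{2}$ recovers the standard Brownian covariance $\min(s,t)$.

```python
# Direct numeric checks of the fractional Brownian motion covariance.

def fbm_cov(s, t, H):
    """Covariance of fractional Brownian motion with Hurst parameter H."""
    return 0.5 * (abs(s) ** (2 * H) + abs(t) ** (2 * H) - abs(s - t) ** (2 * H))

H, c, s, t = 0.7, 3.0, 1.5, 4.0

# H-self-similarity: Cov(B_{cs}, B_{ct}) = c^{2H} Cov(B_s, B_t)
assert abs(fbm_cov(c * s, c * t, H) - c ** (2 * H) * fbm_cov(s, t, H)) < 1e-10

# H = 1/2 recovers standard Brownian motion: Cov(B_s, B_t) = min(s, t)
assert abs(fbm_cov(s, t, 0.5) - min(s, t)) < 1e-12
```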
However, unlike the family of fractional Brownian motions, the family of Rosenblatt processes is not Gaussian. A detailed history, construction, and many properties of Rosenblatt processes are given in the survey article of Taqqu [@Taqqu11]. Some stochastic analysis of Rosenblatt processes is given by Tudor in [@Tud08] and some properties of Rosenblatt processes are given in [@AbrPip06; @Alb98; @Pip04]. Furthermore, stochastic (partial) differential equations with additive Rosenblatt noise have also been studied, e.g. [@BonTud11; @Cou18; @CouMas17; @CouMasOnd18]. However, despite the considerable attention that Rosenblatt processes have received, there is only a limited development of a stochastic calculus and especially of Itô-type formulas for these processes. In the pioneering work of Tudor [@Tud08], a representation of Rosenblatt processes on a finite time interval is given and used to construct both Wiener-type and stochastic integrals in which Rosenblatt processes appear as the integrators. Furthermore, an Itô-type formula for functionals of a Rosenblatt process is given under some general conditions. However, these conditions seem to be difficult to verify in specific cases; in fact, in [@Tud08] the conditions are only verified for the square and the cube of a Rosenblatt process. In [@Arr15], a stochastic calculus with respect to Rosenblatt processes is developed by means of white-noise theory [@HidaKuoPottStr94]. In this framework, an Itô-type formula for functionals of a Rosenblatt process is proved. However, this formula is given as an infinite series that involves derivatives of all orders and white-noise integrals with respect to stochastic processes obtained from the Rosenblatt process. 
An important contribution is made by Arras in [@Arr16], which improves the results in [@Tud08] and provides an Itô-type formula for functionals of Rosenblatt processes by means of Malliavin calculus on the white-noise probability space, which allows the use of techniques from white-noise distribution theory. This Itô-type formula is valid for infinitely differentiable functionals with at most polynomial growth. In the approach used here, some methods of [@Arr16] are used without relying on the white-noise setting. Not only functionals of Rosenblatt processes but also functionals of stochastic integrals with respect to them are considered. A main result of this paper is an Itô-type formula for $\mathscr{C}^3$ functionals with at most polynomial growth of the stochastic processes with second-order fractional differential of the form $$\label{eq:x_t_intro} x_t = x_0+ \int_0^t\vartheta_s\,\mathrm{d}{s} + 2c_{H}^{B,R}\int_0^t\varphi_s\delta B_s^{\frac{H}{2}+\frac{1}{2}} + \int_0^t\psi_s\delta R_s^H$$ that have Hölder continuous sample paths of an order greater than $\sfrac{1}{2}$. Here, $c_H^{B,R}$ is a suitable normalizing constant. The integrals are defined using (multiple) Skorokhod integrals with respect to a Wiener process and suitable (fractional) transfer operators similar to [@Tud08]. This formula generalizes the results of [@Tud08] and [@Arr16]. There are two noteworthy properties of the obtained Itô-type formula: - The formula shows that the form of the process $x$ is preserved under compositions with $\mathscr{C}^3$ functions. - There is a term that involves the third derivative. Both of these properties result from the second-order nature of Rosenblatt processes; that is, from the fact that Rosenblatt processes are defined as second-order Wiener-Itô integrals. 
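For comparison (a standard fact recalled here, not a result of this paper), the classical Itô formula for an Itô process $\mathrm{d}x_t=\vartheta_t\,\mathrm{d}t+\sigma_t\,\mathrm{d}W_t$ and $f\in\mathscr{C}^2$ reads

```latex
f(x_t) = f(x_0) + \int_0^t f'(x_s)\,\vartheta_s\,\mathrm{d}s
       + \int_0^t f'(x_s)\,\sigma_s\,\mathrm{d}W_s
       + \frac{1}{2}\int_0^t f''(x_s)\,\sigma_s^2\,\mathrm{d}s.
```

The formula obtained here has an analogous structure but, because the Rosenblatt integrator is a second-order Wiener-Itô integral, the correction terms involve the second and third derivatives of $f$ as well as an integral with respect to the related fractional Brownian motion $B^{\frac{H}{2}+\frac{1}{2}}$.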
As suggested in [@Arr16], it seems plausible that the method used to obtain the general Itô-type formula can be employed to obtain analogous formulas for Hermite processes of any order $k$, see [@Tud13 Definition 3.1], and it is conjectured that stochastic integrals with respect to related Hermite processes up to order $k$ (in the way that $B^{\frac{H}{2}+\frac{1}{2}}$ is related to $R^H$), as well as derivatives up to order $k+1$, appear in such formulas. Further discussion of this phenomenon can be found in [@Arr15 p. 548] and in [@Tud08 Remark 8]. The method used to obtain the Itô-type formula has already been used in the literature, see e.g. [@BiaOks08] for the case of fractional Brownian motions and [@Tud08; @Arr16] for the case of Rosenblatt processes. It can be briefly outlined as follows: 1. \[step:intro\_1\] Initially, two types of integrals with respect to fractional Brownian motions and Rosenblatt processes are defined: a pathwise forward integral defined by regularization of the integrator, see [@RusVal93], and a Skorokhod-type integral defined by means of Malliavin calculus. These definitions are given in , and in and , respectively. Moreover,
--- abstract: 'Several models of dark matter suggest the existence of hidden sectors consisting of $SU(3)_C \times SU(2)_L \times U(1)_Y$ singlet fields. The interaction between the ordinary and hidden sectors could be transmitted by new Abelian $U'(1)$ gauge bosons $A'$ (dark or hidden photons) mixing with ordinary photons. If such $A'$’s have masses below the $\pi^0$ meson mass, they would be produced through $\gamma - A'$ mixing in the $\pi^0\to \gamma \gamma$ decays and be observed via the decays $A' \to {e^+e^-}$. Using bounds from the SINDRUM experiment at the Paul Scherrer Institute, which searched for an excess of ${e^+e^-}$ pairs in $\pi^- p$ interactions at rest, the exclusion area for the $\gamma - A'$ mixing $\epsilon \gtrsim 10^{-3}$ in the $A'$ mass region $ 25 \lesssim M_{A'} \lesssim 120$ MeV is derived.' author: - 'S.N. Gninenko' title: 'Constraints on dark photons from $\pi^0$ decays' --- The origin of dark matter is still a great puzzle in particle physics and cosmology. Several models dealing with this problem suggest the existence of ‘hidden’ sectors consisting of $SU(3)_C \times SU(2)_L \times U(1)_Y$ singlet fields. These sectors do not interact with our world directly and couple to it by gravity. It is also possible that there exist new very weak forces between the ordinary and dark worlds transmitted by new Abelian $U'(1)$ gauge bosons $A'$ (dark or hidden photons for short) mixing with our photons [@hop], as discussed first by Okun in his model of paraphotons [@okun]. In a class of interesting recent models the $\gamma-A'$ mixing strength may be large enough to be experimentally tested. This makes searches for $A'$’s very attractive; for a recent review see [@jr] and references therein. 
It should be noted that many models of physics beyond the Standard Model (SM), such as GUTs [@1], superstring models [@2] (see also Ref.[@khlop]), supersymmetric models [@3], and models including the fifth force [@carl], also predict an extra $U'(1)$ factor and the corresponding new gauge $X$ boson. The $X$’s could interact directly with quarks and/or leptons. If the $X$ mass is below the pion mass, the $X$ could be effectively searched for in the decays $P\to \gamma X$, where $P = \pi^{0},\eta$, or $\eta^{\prime}$. This is due to the fact that the decay rate of $P\to \gamma~+~$ $\it any~new~particles~with~spin~0~or~\frac{1}{2}$ is proved to be negligibly small [@di]. Hence, an observation of these decay modes could unambiguously signal the discovery of a new spin-1 boson, in contrast with other searches for new light particles in rare $K$, $\pi$ or $\mu$ decays [@di; @md; @gkx2]. The allowed $\gamma - A'$ interaction is given by the kinetic mixing [@okun; @jr; @holdom; @foot1] $$L_{int}= -\frac{1}{2}\epsilon F_{\mu\nu}A'^{\mu\nu} \label{mixing}$$ where $F^{\mu\nu}$, $A'^{\mu\nu}$ are the ordinary and the dark photon field strengths, respectively, and $\epsilon$ is their mixing strength. In some recent dark matter models the dark photon could be massless; see, e.g. Refs.[@Cline:2012is; @Cline:2012ei]. If the $A'$ has a mass, the kinetic mixing of Eq.(\[mixing\]) can be diagonalized, resulting in a nondiagonal mass term and $\gamma - A'$ mixing. Hence, any $\gamma$-source could produce a kinematically allowed massive $A'$ boson according to the appropriate mixings. Then, if the mass difference is small, ordinary photons may oscillate into dark photons, similarly to neutrino oscillations, or, if the mass difference is large, dark photons could decay, e.g. into ${e^+e^-}$ pairs. 
Experimental constraints on dark photons in the meV-keV mass range can be derived from searches for the fifth force [@okun; @c1; @c2], from experiments based on the photon regeneration technique [@phreg; @bober; @sik; @rs; @vanb], and from astrophysical considerations [@seva1; @seva2]. For example, the results of experiments searching for solar axions [@cast1; @cast2] can be used to set limits on the ${{\gamma }}- {{A'}}$ mixing in the keV part of the solar spectrum of dark photons [@jr1; @jr2; @gr; @st]. Stringent bounds on low-mass $A'$s could be obtained from astrophysical considerations [@blin]-[@david]. There are plans to test the existence of sub-eV dark photons at new facilities, such as, for example, SHIPS [@ships] and IAXO [@igor]. The $A'$’s with masses in the sub-GeV range, see e.g. [@bpr; @rw; @will], can be searched for through their $A'\to {e^+e^-}$ decays in beam-dump experiments [@jdb; @e137; @brun; @e141; @e774; @apex], or in particle decays [@bes; @kloe; @babar; @mami]. Recently, stringent bounds on the mixing $\epsilon$ have been obtained from searches for the decay modes $\pi^0,\eta,\eta' \to \gamma A'(X)$, $A'(X)\to {e^+e^-}$ with existing data of neutrino experiments [@sngpi0; @sngeta]. These limits are valid for relatively long-lived $A'$s with a mixing strength in the range $10^{-4}\lesssim \epsilon \lesssim 10^{-7}$. The goal of this note is to show that new bounds on the decay $\pi^0 \to \gamma A'$ of neutral pions into a photon and a short-lived $A'$, followed by the rapid decay $A'\to {e^+e^-}$ due to the relatively large $\gamma-A'$ mixing, can be obtained from the results of sensitive searches for an excess of single isolated ${e^+e^-}$ pairs from decays of the weakly interacting neutral boson $X$ by the SINDRUM Collaboration at the Paul Scherrer Institute (PSI, Switzerland) [@sindrum]. 
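The interplay between the mixing and the lifetime can be estimated in a few lines of code. The sketch below assumes the standard tree-level width $\Gamma(A'\to e^+e^-)=\frac{1}{3}\alpha\,\epsilon^2 M_{A'}\sqrt{1-4m_e^2/M_{A'}^2}\,\bigl(1+2m_e^2/M_{A'}^2\bigr)$, which is not written out in this note and is quoted here as an assumption; it checks that a mixing $\epsilon\sim 10^{-3}$ at $M_{A'}=50$ MeV gives a lifetime well inside the SINDRUM sensitivity window of roughly $10^{-23}$ to $10^{-11}$ s quoted below:

```python
import math

ALPHA = 1 / 137.035999    # fine-structure constant
HBAR = 6.582119569e-22    # MeV * s
M_E = 0.51099895          # electron mass, MeV

def ap_lifetime(eps, m_ap):
    """Lifetime (s) of A' -> e+e- from the tree-level width formula
    assumed above (the formula is not given explicitly in this note)."""
    r = (M_E / m_ap) ** 2
    width = (1 / 3) * ALPHA * eps**2 * m_ap * math.sqrt(1 - 4 * r) * (1 + 2 * r)
    return HBAR / width

tau = ap_lifetime(1e-3, 50.0)   # eps = 1e-3, M_A' = 50 MeV
print(tau)                       # ~5e-15 s, inside the 1e-23 .. 1e-11 s window
```
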
The SINDRUM experiment, specifically designed to search for rare particle decays with the SINDRUM magnetic spectrometer, was performed by using $\pi^- p$ interactions at rest as the source of $\pi^0$’s. The $\pi^0$’s were produced in the charge-exchange reaction $\pi^- p \to \pi^0 n$ of 95 MeV/c $\pi^-$’s stopped in a small liquid-hydrogen target in the center of the SINDRUM magnetic spectrometer. The magnetic field was 0.33 T, resulting in a transverse-momentum threshold of roughly 17 MeV/c for particles reaching the scintillator hodoscope surrounding the target. The trigger required an ${e^+e^-}$ pair with an opening angle in the plane perpendicular to the beam axis of at least 35$^o$; this corresponds to a lower threshold in the invariant mass of 25 MeV/c [@sindrum]. A total of 98 400 ${\pi^0 \to \gamma {e^+e^-}}$ decays were observed. The signature of the $X\to {e^+e^-}$ decay would be a peak in the continuous ${e^+e^-}$ invariant-mass distribution. ![ The 90 % C.L. area (shaded) in the $\bigl(M_{X}; Br(\pi^0\to \gamma X, X\to {e^+e^-})\bigr)$ plane excluded by the SINDRUM experiment (from Ref.[@sindrum]).[]{data-label="limit"}](sindrum.eps){width="50.00000%"} No such peak was found, and upper limits on the branching ratio $Br(\pi^0\to \gamma X, X\to {e^+e^-})=\frac{\Gamma(\pi^0\to \gamma X, X\to {e^+e^-})}{\Gamma(\pi^0\to \gamma \gamma)}$ in the range $\simeq 10^{-6}-10^{-5}$ have been placed for the $X$-mass region $25 \lesssim M_X \lesssim 120$ MeV. The corresponding 90% C.L. exclusion area in the $\bigl(M_{X}; Br(\pi^0\to \gamma X, X\to {e^+e^-})\bigr)$ plane is shown in Fig.\[limit\]. The limits were obtained assuming the $X$ lifetime to be in the range $$10^{-23} \lesssim \tau_{X} \lesssim 10^{-11} ~{\rm s}. \label{lifetime}$$ For lower values of $\tau_X$ in Eq.(\[lifetime\]) the ${e^+e^-}$ mass peak would be smeared out beyond recognition; for larger values most $X$’s would
--- abstract: 'A new method is presented for the construction of a natural continuous wavelet transform on the sphere. It incorporates the analysis and synthesis with the same wavelet and the definition of translations and dilations on the sphere through the spherical harmonic coefficients. We construct a couple of wavelets as an extension of the flat [*Mexican Hat Wavelet*]{} to the sphere and we apply them to the detection of sources on the sphere. We remark that no projections are used with this methodology.' address: | $^1$ Instituto de Fí[sica]{} de Cantabria (CSIC-UC), 39005, Santander, Spain\ email: sanz@ifca.unican.es\ $^2$ Departamento de Fí[sica]{} Moderna, Universidad de Cantabria, 39005, Santander, Spain\ $^3$ Departamento de Matemáticas, Universidad de Oviedo, 33007, Oviedo, Spain title: | Wavelets on the sphere.\ Application to the detection problem --- Introduction ============ Multiscaling analysis techniques dealing with the analysis/synthesis of nD-images defined on intervals of $R^n$ have been applied in many fields of physics in the last 15 years. For instance, in the case $n = 1$ one has electronics and audio signals, in the case $n = 2$ one has optical or infrared images, whereas for $n = 3$ one deals with fluid dynamics or the large-scale structure of the universe as 3D-images. However, there are data given on other manifolds like the circle $S_1$ (e.g. scanning the microwave sky along circles) and the sphere $S_2$ (e.g. geophysics). In this paper, we are interested in data distributed on the sphere. Trivially, for the study of local properties (e.g. detection of objects) one can project on the tangent plane at any point on the sphere to make this type of analysis, but when global properties are taken into account the curvature of the sphere cannot be neglected. A first approach to deal with these global properties is to make some global projection of all the points of the sphere. 
The stereographic projection has recently been used for the continuous wavelet transform. In this case, to get the wavelet coefficient at any point on the sphere, one projects from the opposite point onto the local tangent plane. In \[1\] a connection to group theory is made. The translations and dilations in the wavelet have their definition on the plane. Clearly, such a projection does not take into account the topological structure of the sphere. Some applications to cosmology, in particular the study of the anisotropies of the cosmic microwave background radiation, have been made by several authors (\[3\],\[7\],\[10\]) using the projection of the [*Mexican hat wavelet*]{}. A drawback of such a projection is the obvious deformation of the pixels and wavelets near the projection pole. We remark that the synthesis can be done in terms of another biorthogonal wavelet \[11\]. Another approach uses analyzing wavelet functions that are defined in terms of spherical harmonics \[5\], with a definition of the dilation operator and conditions on the wavelets chosen so as to get a synthesis formula. The drawbacks of this methodology are that the dilations do not in general satisfy the appropriate flat limit, and that some examples of wavelet functions are poorly localized (e.g. Abel-Poisson wavelets). A different approach assumes from the beginning discrete wavelets incorporating tensor-product approaches in polar coordinates; then the two poles are singular points regarding approximation/stability properties (\[4\],\[6\]). Another approach is adapted to arbitrary point systems or triangulations on the sphere, but then there is no efficient tool such as fast wavelet algorithms. In the approach of \[8\], bases are defined on a quasi-uniform icosahedral triangulation of the sphere, allowing for a fast algorithm. 
However, biorthogonal wavelets are needed and a lifting scheme for the multiresolution is applied, avoiding the concepts of translations and dilations, and it is also not clear whether the construction leads to a stable $L_2 (S_2)$ basis. Haar-type wavelets have been developed using different pixel combinations (\[2\],\[7\],\[9\]). The first case uses the lifting scheme weighting by the area of the pixels, whereas in the other two cases an equal-area pixelization is used but the Haar-type transform is only applied on regions of the sphere covering $\frac{1}{12}$ of the total area. Clearly, with any pixelization the symmetry of the sphere is lost. In this paper we consider a continuous approach: we introduce a methodology that incorporates the analysis and synthesis of any function defined on the sphere $S_2$ using the same circularly-symmetric wavelet, and we also introduce the generalization of translations/dilations. In this sense we follow Freeden’s approach, working with spherical harmonics. Examples will be given that have the appropriate flat limit. Finally, the application to the detection of a spot is given, studying the concentration of the wavelet coefficients. Properties of the wavelet ========================= We will consider a circularly-symmetric filter defined on the sphere $ S_2$ $$\label{eq:cc} \Psi(\vec{n}\cdot \vec{\gamma}; R),$$ where $\vec{n}$ is a fixed direction. $\vec{\gamma}$ is another fixed, but arbitrary, direction; therefore $\vec{n}\cdot \vec{\gamma}$ will represent a rotation on the sphere with respect to the direction $\vec{n}$ defined by the angle $\theta$ ($\cos(\theta)\equiv \vec{n}\cdot \vec{\gamma})$. $R>0$ will represent a dilation, which will be defined later on through the spherical harmonics. 
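The analysis below expands such a circularly-symmetric filter in Legendre polynomials, with coefficients $\Psi_l = (l+\frac{1}{2})\int_{-1}^{1}P_l(y)\,\Psi(y)\,\mathrm{d}y$. As a numerical sketch (the toy constant filter and the helper name `legendre_coeffs` are ours, not from the paper), these coefficients can be computed by Gauss-Legendre quadrature:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

def legendre_coeffs(filt, lmax, npts=200):
    """Legendre coefficients Psi_l = (l + 1/2) * int_{-1}^{1} P_l(y) filt(y) dy,
    evaluated by Gauss-Legendre quadrature on [-1, 1]."""
    y, w = leggauss(npts)
    vals = filt(y)
    return np.array([(l + 0.5) * np.sum(w * Legendre.basis(l)(y) * vals)
                     for l in range(lmax + 1)])

# Sanity check: the constant filter Psi(y) = 1 has Psi_0 = 1 and Psi_l = 0
# for l >= 1, by orthogonality of the Legendre polynomials.
c = legendre_coeffs(lambda y: np.ones_like(y), lmax=5)
print(c)   # ~ [1, 0, 0, 0, 0, 0]
```
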
We assume the following properties of the filter: \(i) the analysis of any function $f(\vec{n})$ will be done with the wavelets $\Psi(\vec{n}\cdot \vec{\gamma}; R)$, \(ii) the synthesis of any function $f(\vec{n})$ will be done with the wavelets coefficients and the wavelets $\Psi(\vec{n}\cdot \vec{\gamma}; R)$, \(iii) it will incorporate the definition of translation and dilation on the sphere. We remark that no assumption about compensation of the filter (i. e. $\int d\Omega (\vec{n})\,\Psi(\vec{n}\cdot \vec{\gamma}; R) = 0$) and projection from $R^2$ to $S_2$ is imposed. Analysis with the filter $\Psi$ =============================== We define the wavelet coefficients associated to the translation $\vec{\gamma}$ and dilation $R$ for the function $f(\vec{n})$ defined on $S_2$ $$\label{eq:cd} w(R, \vec{\gamma} ) = \int d\Omega (\vec{n})\,f(\vec{n}) \Psi(\vec{n}\cdot \vec{\gamma}; R).$$ Let us assume the standard decomposition of $f(\vec{n})$ in spherical harmonics $Y_{lm}(\vec{n})$ $$\label{eq:ce} f(\vec{n}) = \sum_{lm} f_{lm}Y_{lm}(\vec{n}), \ \ \ f_{lm} = \int d\Omega (\vec{n})\,f(\vec{n})Y_{lm}^*(\vec{n}).$$ By introducing Eq.(\[eq:ce\]) into Eq.(\[eq:cd\]) and taking into account that $Y_{lm}(\vec{n})$ is an orthonormal base of $S_2$, we obtain $$\label{eq:cf} w(R, \vec{\gamma} ) = \sum_{lm}(\frac{4\pi}{2l+1})f_{lm}\Psi_l(R)Y_{lm}(\vec{\gamma}),$$ where the Legendre coefficients associated to the circularly-symmetric filter $\Psi$ are given by $$\begin{aligned} \Psi(\vec{n}\cdot \vec{\gamma}; R) & = & \sum_l \Psi_l(R)P_l(\vec{n}\cdot \vec{\gamma}),\nonumber \\ \label{eq:cg} \Psi_l(R) & = & (l+\frac{1}{2})\int_{-1}^1dy\,P_l(y)\Psi(y; R).\end{aligned}$$ Synthesis with the filter $\Psi$ ================================ Now, let us show that in order to have a reconstruction equation, i. e. 
$f(\vec{n})$ as a functional integral of the wavelet coefficients and the wavelet base $\Psi$ one can impose the condition $$\label{eq:ch} \Psi_l(R) \equiv (\frac{2l+1}{4\pi})\psi (lR),$$ i. e. $\Psi_l(R)$ depends on the product $lR$ and $\psi(l)$ satisfies the admissibility condition $$\label{eq:cl} C_{\psi} \equiv \int_0^{\infty} \frac{dl}{l}\psi^2(l) < \infty,$$ where $l$ runs in the interval $[0, \infty)$. We remark that the analogous condition to have a reconstruction on the plane by substituting $l \rightarrow q$, $q$ being the wave
{ "pile_set_name": "ArXiv" }
null
null
--- author: - 'M.-C. ARNAUD [^1] [^2] [^3]' title: '[ Green bundles, Lyapunov exponents and regularity along the supports of the minimizing measures ]{}' --- Keywords: Minimizing orbits and measures, Lyapunov exponents, weak KAM theory, Green bundles, regularity of solutions to Hamilton-Jacobi equations. [**Abstract**]{} In this article, we study the minimizing measures of Tonelli Hamiltonians. More precisely, we explain the relationships that exist between the Green bundles and various notions such as: 1. the Lyapunov exponents of minimizing measures; 2. weak KAM solutions. We deduce, for example, that if all the Lyapunov exponents of a minimizing measure $\mu$ are zero, then the support of this measure is $C^1$-regular at $\mu$-almost every point. MSC: 37J50, 35D40, 37C40, 34D08, 35D65 Introduction ============ In this article, $M$ is a closed $n$-dimensional manifold and $\pi~: T^*M\rightarrow M$ its cotangent bundle. We consider a Tonelli Hamiltonian $H~: T^*M\rightarrow {\mathbb {R}}$, i.e. a $C^2$ function that is strictly $C^2$-convex and superlinear in the fiber. The Hamiltonian flow associated with such a function is denoted by $(\varphi_t)_{t\in{\mathbb {R}}}$ or $(\varphi_t^H)_{t\in{\mathbb {R}}}$. To such a Hamiltonian, there corresponds a Lagrangian function $L~: TM\rightarrow {\mathbb {R}}$ that has the same regularity as $H$ and is also superlinear and strictly convex in the fiber. The corresponding Euler-Lagrange flow is denoted by $(f_t)_{t\in{\mathbb {R}}}$. 
For such a Hamiltonian system, it is usual to study its “minimizing objects”; more precisely, a piece of orbit $(\varphi_t(q,p))_{t\in [a,b]}=(q_t, p_t)_{t\in[a,b]}$ is minimizing if the arc $(q_t)_{t\in[a, b]}$ minimizes the action functional $A_L$ defined by $A_L(\gamma )=\int_a^bL(\gamma (t), \dot\gamma (t))dt$ among the $C^2$-arcs joining $q_a$ to $q_b$. More generally, if $I$ is an interval and $(\varphi_t)_{t\in I}=(q_t, p_t)_{t\in I}$ is an orbit piece, we say that it is minimizing if for every segment $[a,b]\subset I$, its restriction to $[a,b]$ is minimizing. Then we call the set of points of $T^*M$ whose (complete) orbit is minimizing the [*Mañé set*]{}. We denote it by ${\mathcal {N}}^*(H)$ and its projection, the [*projected Mañé set*]{}, is denoted by ${\mathcal {N}}(H)=\pi ({\mathcal {N}}^*(H))$. The Mañé set is nonempty, compact and invariant under the Hamiltonian flow (see [@Fa1]). The first proof of the non-emptiness of the Mañé set is due to J. Mather: he proved in the 90’s in [@mather1] the existence of minimizing measures. We are interested in invariant subsets of the Mañé set, i.e. subsets that are the union of some minimizing orbits. More precisely, we would like to know if we can say something about the regularity of such subsets (we will be more precise soon; it is a kind of differentiability) and particularly if there is a link between the dynamics of the flow restricted to such a set and the regularity of the set. The oldest result in this direction concerns the time-dependent case: considering a symplectic twist map of the annulus $T^*{\mathbb {S}}$, G. Birkhoff proved in the 1920’s that any essential invariant curve is the graph of a Lipschitz map (see [@Bir1] or [@He1]). It is easy to prove that such a curve is action minimizing. In higher dimensions, M. Herman proved in [@He2] that any $C^0$-Lagrangian graph of $T^*{\mathbb {T}}^n$ that is invariant by a symplectic twist map is, in fact, the graph of a Lipschitz map. 
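The minimizing property defining these objects is easy to illustrate numerically; a toy sketch (free Lagrangian $L(q,v)=\frac{1}{2}v^2$, which is not an example from the paper) comparing the discretized action of the straight arc with that of a perturbed arc having the same endpoints:

```python
import numpy as np

def action(q, dt):
    """Discretized action A_L(gamma) = sum L(q_t, qdot_t) * dt for L(q, v) = v**2 / 2."""
    v = np.diff(q) / dt
    return np.sum(0.5 * v**2) * dt

n, dt = 100, 0.01
t = np.linspace(0.0, 1.0, n + 1)
straight = t.copy()                 # straight arc joining q_a = 0 to q_b = 1
bump = 0.1 * np.sin(np.pi * t)      # perturbation vanishing at the endpoints
print(action(straight, dt), action(straight + bump, dt))
# The straight arc has action 1/2; any nontrivial perturbation increases it.
```
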
A related result in the autonomous case is that any $C^1$ Hamilton-Jacobi solution of a Tonelli Hamiltonian is, in fact, $C^{1,1}$ (see [@Fa2]). As Rademacher’s theorem tells us that any Lipschitz function is differentiable Lebesgue almost everywhere, these results are a kind of regularity result. In [@Arna2], we in fact improved these regularity results in the autonomous case, proving that if a $C^0$-Lagrangian graph is invariant by a Tonelli flow, and if one of the two following hypotheses is satisfied: 1. $\dim M=2$ and all the singularities of $H$ are nondegenerate; 2. the dynamics of the restriction of the flow to the invariant graph is Lipschitz conjugate to a flow of translations; then the invariant graph is, in fact, $C^1$ almost everywhere (this is stronger than just differentiable). Let us point out that either of the two previous hypotheses implies that the dynamics of the flow restricted to the graph is soft in a certain sense (our arguments are not very precise, but we only want to give a certain intuition of the forthcoming result); indeed, when $\dim M=2$, if we reduce the dynamics modulo the vector field, we obtain a one-dimensional dynamics, and it is known, at least in the differentiable case, that the Lyapunov exponents of a dynamics on the circle are zero. The same is true for any dynamics that is Lipschitz conjugate to a translation. We gave similar results for the invariant curves of twist maps of the annulus in [@Arna1], proving that Birkhoff’s result can be improved: any essential invariant curve of a symplectic twist map of the annulus $T^*{\mathbb {S}}$ is the graph of a Lipschitz map that is $C^1$ Lebesgue almost everywhere. Hence, it seems reasonable to try to find a relationship between the Lyapunov exponents of any minimizing measure and the regularity of its support, where an invariant measure is [*minimizing*]{} if its support is contained in the Mañé set. 
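Lyapunov exponents along an orbit can be approximated by QR iteration on the linearized cocycle; a toy sketch (a constant hyperbolic cocycle, far from the Tonelli setting of the paper) whose exponents are $\pm\log 2$:

```python
import numpy as np

def lyapunov_exponents(mats):
    """Approximate Lyapunov exponents of a matrix cocycle by repeated QR
    factorization (the diagonal of R accumulates the growth rates)."""
    d = mats[0].shape[0]
    q = np.eye(d)
    sums = np.zeros(d)
    for a in mats:
        q, r = np.linalg.qr(a @ q)
        sums += np.log(np.abs(np.diag(r)))
    return np.sort(sums / len(mats))[::-1]

A = np.array([[2.0, 0.0], [0.0, 0.5]])    # hyperbolic: exponents are +/- log 2
exps = lyapunov_exponents([A] * 500)
print(exps)    # ~ [0.693, -0.693]
```
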
For a twist map of the annulus $T^*{\mathbb {S}}$, we studied the ergodic minimizing measures in [@Arna3] and proved that the $C^1$-regularity (we will be more precise soon) of the support is equivalent to the fact that the Lyapunov exponents are zero. Hence, in a certain way, in this case, “$C^1$-irregularity” is equivalent to non-vanishing Lyapunov exponents. The question that we now ask ourselves is the following: what can we say in higher dimensions? Is the irregularity (in a sense we will soon specify) of the support of a minimizing ergodic measure equivalent to non-vanishing exponents? A first and obvious answer is: no. Indeed, let us consider the following example: $(\psi_t)$ is an Anosov flow defined on the cotangent bundle $T^*{\mathcal{S}}$ of a closed surface ${\mathcal{S}}$. Let ${\mathcal {N}}=T^*_1{\mathcal{S}}$ be its unitary cotangent bundle, which is a 3-manifold invariant by $(\psi_t)$. Then a method due to Mañé (see [@Man1]) allows us to define a Tonelli Hamiltonian $H$ on $T^*{\mathcal {N}}$ such that the restriction of its flow $(\varphi_t)$ to the zero section ${\mathcal {N}}$ is $(\psi_t)$: the Lagrangian $L$ associated with $H$ is defined by $L(q,v)=\frac{1}{2}\| \dot\psi (q)-v\|^2$, where $\| .\|$ is any Riemannian metric on ${\mathcal {N}}$. In this case, the zero section is very regular (even $C^\infty$), but the Lyapunov exponents of every invariant measure whose support is contained in ${\mathcal {N}}$ are nonzero (except two, the one corresponding to the flow direction and the one corresponding to the energy direction). Hence, it may happen that some exponents are nonzero while the support of the measure is very regular…\ In fact, the other implication is true: we will see that the vanishing of the Lyapunov exponents implies the regularity of the support of the considered measure.\ Let us now explain in detail the kind of regularity in which we are interested: Let $A$ be a subset of a manifold $M$ and let $a$ belong to $A$. 
The contingent cone to $A
--- author: - 'Christian Kanzow$^{\dagger}$' - 'Daniel Steck [^1]' bibliography: - 'VI\_ALMinf.bib' date: 'March 27, 2018' title: ' On Error Bounds and Multiplier Methods for Variational Problems in Banach Spaces [^2] ' --- **Abstract.** This paper deals with a general form of variational problems in Banach spaces which encompasses variational inequalities as well as minimization problems. We prove a characterization of local error bounds for the distance to the (primal-dual) solution set and give a sufficient condition for such an error bound to hold. In the second part of the paper, we consider an algorithm of augmented Lagrangian type for the solution of such variational problems. We give some global convergence properties of the method and then use the error bound theory to provide estimates for the rate of convergence and to deduce boundedness of the sequence of penalty parameters. Finally, numerical results for optimal control, Nash equilibrium problems, and elliptic parameter estimation problems are presented. **Keywords.** Variational problem, variational inequality, error bound, augmented Lagrangian method, local convergence, global convergence, Nash equilibrium problem. **AMS subject classifications.** 49K, 49M, 65K, 90C. Introduction {#Sec:Intro} ============ This paper deals with the following variational problem: $$\label{Eq:VI} \text{Find }x\in M\text{ such that}\quad{\mleft\langle F(x),v \mright\rangle}\ge 0 \quad\forall v\in{\mathcal{T}_{M}(x)},$$ where $M\subseteq X$ is a nonempty closed set, $X$ a real Banach space, and $F:X\to X^*$ a given mapping. The set ${\mathcal{T}_{M}(x)}$ denotes the (Bouligand) tangent cone [@Bonnans2000] to $M$ at $x$. If $M$ is additionally convex, then this problem is equivalent to $$\label{Eq:VI_Convex} \text{Find }x\in M\text{ such that}\quad{\mleft\langle F(x),y-x \mright\rangle}\ge 0 \quad\forall y\in M,$$ which is often regarded as the standard form of a variational inequality (VI). 
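In the convex case, the VI admits the classical fixed-point characterization $x = P_M(x-\tau F(x))$, which suggests a simple projection iteration; a minimal sketch (a strongly monotone $F$ on a box constraint, not an example from the paper):

```python
import numpy as np

def solve_vi(F, proj, x0, tau=0.1, iters=500):
    """Fixed-point iteration x <- P_M(x - tau * F(x)) for the convex VI:
    x solves <F(x), y - x> >= 0 for all y in M  iff  x = P_M(x - tau * F(x))."""
    x = x0
    for _ in range(iters):
        x = proj(x - tau * F(x))
    return x

# Example: F(x) = x - b with M = [0, 1]^2 (a box); the solution is P_M(b).
b = np.array([2.0, -0.5])
proj = lambda z: np.clip(z, 0.0, 1.0)
x = solve_vi(lambda z: z - b, proj, np.zeros(2))
print(x)   # -> [1.0, 0.0]
```
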
Throughout this paper, we will use the terms “variational inequality” and “variational problem” interchangeably, and often refer to the problem above as a VI. Note that, in the absence of convexity, this is the canonical formulation of variational problems; in particular, this form encompasses first-order necessary conditions for nonlinear optimization problems of the type $$\label{Eq:Opt} \min\ f(x) \quad\text{s.t.}\quad x\in M$$ by choosing $F:=f'$. Throughout this paper, we assume that $M$ is given in the form $$\label{Eq:M} M=\{ x\in X: g(x)\in K \},$$ where $g:X\to H$ is a given mapping, $H$ a real Hilbert space, and $K\subseteq H$ a nonempty closed convex set (not necessarily a cone). We make no blanket convexity assumptions on $g$ (although some of our results do pertain to the convex case). Hence, the set $M$ is nonconvex in general, and the general formulation above is the natural framework for our setting. Variational inequalities are a well-known and popular class in both finite and infinite-dimensional optimization since they unify various problem types such as constrained minimization and equilibrium-type problems, in particular Nash and (certain) generalized Nash equilibrium problems [@Facchinei2007; @Facchinei2010; @Fischer2014; @Hintermueller2015; @Kanzow2017a]. This opens up a broad spectrum of applications including optimal control, parameter estimation, differential games, and problems in mechanics or shape optimization. Many further applications are given in [@Baiocchi1984; @Glowinski2015; @Glowinski1981; @Kinderlehrer2000]. As a result, VIs have gained considerable attention in the literature and a variety of algorithms have been developed for their solution, e.g. [@Facchinei2003; @Fortin1983; @Glowinski2008; @Ulbrich2011]. 
On the other hand, the augmented Lagrangian method (ALM, also called multiplier-penalty method or simply multiplier method) is one of the classical methods for nonlinear optimization, see [@Conn1991; @Hestenes1969; @Powell1969; @Rockafellar1973; @Rockafellar1974] and the textbooks [@Bertsekas1982; @Nocedal2006]. In recent years, ALMs have seen a certain resurgence [@Andreani2007; @Andreani2008; @Birgin2012; @Birgin2010; @Birgin2014] in the form of modified methods which use a slightly different update of the Lagrange multiplier and turn out to have very strong global convergence properties [@Birgin2014]. A comparison of the classical and modified ALMs is given in [@Kanzow2017]. We also note that ALMs have been generalized to VIs in finite dimensions [@Andreani2008] and to infinite-dimensional optimization problems in certain restricted settings [@Hintermueller2006; @Ito1990a; @Ito1990b; @Ito2000; @Ito2008; @Kanzow2016; @Wierzbicki1977]. However, most of these papers either consider rather specific problem settings [@Hintermueller2006; @Ito1990a; @Ito1990b; @Ito2000; @Ito2008] or deal with global convergence properties only [@Kanzow2016]. The main purpose of the present paper is to analyze the local convergence properties of ALMs for variational inequalities in the general (possibly infinite-dimensional) setting . To accomplish this, we will need certain elements of perturbation and error bound theory for generalized equations and KKT systems, some of which are refinements of the corresponding results in finite dimensions [@Ding2017; @Dontchev1998; @Fischer2002; @Izmailov2012a]. Using these, we will prove that, given a KKT point which admits a primal-dual error bound, the ALM converges locally to this point with a rate of convergence that is essentially $1/\rho_k$ (where $\rho_k$ is the penalty parameter), and that $\{\rho_k\}$ remains bounded if updated suitably. 
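The multiplier update and the $1/\rho_k$-type contraction can be seen on a toy equality-constrained quadratic program; the sketch below (an exact $x$-subproblem solve on a problem of our own choosing, only a stand-in for the general method) minimizes $\frac{1}{2}\|x\|^2$ subject to $a^\top x = b$ and updates $\lambda \leftarrow \lambda + \rho\,(a^\top x - b)$:

```python
import numpy as np

def alm_eq_qp(a, b, rho=10.0, iters=30):
    """Augmented Lagrangian method for: min 0.5*||x||^2  s.t.  a.x = b.
    The x-subproblem min_x 0.5||x||^2 + lam*(a.x - b) + 0.5*rho*(a.x - b)^2
    has the closed-form solution x = -mu * a with mu as below."""
    lam = 0.0
    for _ in range(iters):
        mu = (lam - rho * b) / (1.0 + rho * (a @ a))
        x = -mu * a
        lam = lam + rho * (a @ x - b)   # multiplier update
    return x, lam

a = np.array([1.0, 1.0])
x, lam = alm_eq_qp(a, 1.0)
print(x, lam)   # -> x ~ [0.5, 0.5], lam ~ -0.5
# The multiplier error contracts by the factor 1/(1 + rho*||a||^2) ~ 1/rho per step.
```
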
Sufficient conditions for the primal-dual error bound include a suitable second-order sufficient condition (SOSC) together with a strict version of the Robinson constraint qualification (see Section \[Sec:Prelims\]). These assumptions are akin to those used in [@Birgin2012] for ALMs in finite-dimensional nonlinear programming (NLP), where the authors obtain results similar to ours. Interestingly, however, it turns out that these results (for standard NLP) can be established under SOSC only [@Fernandez2012] by using the specific structure of the constraints. In particular, when transferred to our notation, the set $K$ arising from NLP is polyhedral and this yields, roughly speaking, the dual part of the error bound without any constraint qualification [@Fernandez2012; @Izmailov2012a]. However, apart from the NLP setting, polyhedrality is a rare property which is usually violated, e.g. in optimal control or semidefinite programming. As a result, SOSC alone does not yield a primal-dual error bound, see the example in Section \[Sec:ErrorBounds\]. We solve this issue by using SOSC together with a suitable constraint qualification. The paper is organized as follows. We start with some preliminary material in Section \[Sec:Prelims\] and give some results on primal-dual error bounds in Section \[Sec:ErrorBounds\]. Section \[Sec:Method\] contains a precise statement of our algorithm and we continue with some global convergence results in Section \[Sec:GlobalConv\]. In Section \[Sec:LocalConv\], we prove the main results of this paper, i.e. local convergence of the ALM under the error bound hypothesis. We then give some numerical results in Section \[Sec:Applic\] and final remarks in Section \[Sec:Final\]. **Notation:** Throughout the paper, $X$ is always a real Banach space, $H$ a real Hilbert space, and their duals are denoted by $X^*$ and $H^*$, the latter of which we usually identify with $H$. 
Fréchet-derivatives are denoted by a prime $'$ or by $D_x$ if the variable is emphasized, and we use the abbreviation lsc for lower semicontinuity. Strong and weak convergence are denoted by $\to$ and ${\rightharpoonup}$, respectively. Duality pairings are written as ${\mleft\langle \cdot,\cdot \mright\rangle}$, scalar products as ${\mleft( \cdot,\cdot \mright)}$, and norms are denoted by $\|\cdot\|$ with an appropriate subscript to emphasize the corresponding space (e.g. $\|\cdot\|_X$). If $S$ is a nonempty subset of some normed space, we write $d_S={\operatorname{dist}(\cdot,S)}$ for the distance to $S$. Additionally, if $S\subseteq H$ is closed and convex, we write $P_S$ for the projection onto $S$. Preliminaries {#Sec:Prelims} ============= This section is dedicated
--- abstract: 'We describe the far from equilibrium non-local transport in a diffusive superconducting wire with a Zeeman splitting, taking into account the different spin relaxation mechanisms. We demonstrate that due to the Zeeman splitting an injection of a current in a superconducting wire creates a spin accumulation that can only relax via thermalization. In addition the Zeeman splitting also causes a suppression of the spin-orbit and spin-flip scattering rates. These two effects lead to long-range spin and charge accumulations detectable in the non-local signal. Our model explains the main qualitative features of recent experimental results in terms of realistic parameters and predicts a strong dependence of the non-local signal on the orbital depairing effect from an induced magnetic field.' author: - 'M. Silaev' - 'P. Virtanen' - 'F.S. Bergeret' - 'T.T. Heikkilä' title: 'Long-range spin and charge accumulation in mesoscopic superconductors with Zeeman splitting' --- Hybrid ferromagnetic/superconducting (FS) structures reveal a rich physics originating from the interplay between magnetism and superconductivity [@BuzdinRMP; @Bergeret2001]. While most of the research activity has been focused on the study and detection of proximity induced triplet superconducting correlations in an equilibrium situation [@Bergeret2001; @triplet], more recent experiments addressed the problem of spin and charge accumulation in superconducting wires[@Fukuma2011; @HanleSuper; @Poli; @SpinInjectionNb; @Aprili2013; @Beckman2012; @Beckman2014]. Figure \[Fig:Sketch\] shows a typical experimental setup, in which a spin accumulation is generated by a spin-polarized current injected from a ferromagnetic electrode. This spin accumulation observed in the experiments can be quite large. 
Two puzzling findings motivate this Letter: First, in superconductors with a strong Zeeman splitting, the induced spin accumulation has been detected at distances from the injector much larger than the spin-relaxation length in the normal state [@Aprili2013; @Beckman2012; @Beckman2014]. Second, the non-local conductance $g_{nl}$ depends drastically on the origin of the Zeeman splitting. Such a splitting can be caused either by an applied (strong) external magnetic field [@Aprili2013; @Beckman2012] or by the proximity of a ferromagnetic insulator [@Beckman2014proximity]. In this Letter, we develop a microscopic model based on the well-established Keldysh kinetic equations for superconductors extended to spin-dependent phenomena, and solve this puzzle. In particular we show that: (i) The observed long-range spin accumulation can be understood as a thermoelectric effect for Bogoliubov quasiparticles. The heating of a superconducting wire, originated for example from an injected current, produces a spin accumulation which can be detected as an electric signal by a spin-filter detector. The spin accumulation created in such a way can relax only due to the thermalization of injected quasiparticles and therefore the spin relaxation length is determined by inelastic electron-phonon and electron-electron scattering that can well exceed the usual spin diffusion length. (ii) Besides generating a large thermoelectric effect the Zeeman splitting also suppresses the spin-flip and spin-orbital scattering which are the main sources of charge imbalance relaxation in superconductors at low temperatures [@PairBreakingChImb]. Hence the different behaviors observed for the non-local conductance $g_{nl}$ as a function of the injection voltage $V_{inj}$, depend on the value of the orbital depairing parameter $\alpha_{orb}$ defined below. 
For large enough values of $\alpha_{orb}$, at large applied fields the contribution from the charge imbalance to the non-local conductance is suppressed and the $g_{nl}(V_{inj})$ dependence is almost antisymmetric with respect to $V_{inj}$ [@Aprili2013; @Beckman2012; @Beckman2014]. In contrast, if the Zeeman splitting is caused by the proximity of a ferromagnetic insulator [@Beckman2014proximity], $\alpha_{orb}$ is small and the charge imbalance contribution to $g_{nl}$ becomes important. In this case, we predict a qualitative change of the non-local conductance as a function of the injected current, which can be tested experimentally. We consider the nonlocal spin valve shown in Fig. \[Fig:Sketch\]. A spin-polarized current is injected into the superconducting wire from a ferromagnetic electrode with polarization ${\bm P_{inj}}$, pointing in the direction of the magnetization. The detector is also a ferromagnet with a polarization vector ${\bm P_{det}}$ and located at a distance $L_{det}$ from the injector. Both the injector and the detector are coupled to the wire via tunnel contacts. A magnetic field ${\bm B}$ is applied in the $z$ direction. ![\[Fig:Sketch\] (Color online) Schematic view of the setup for nonlocal conductance measurements. Here we assume that the polarizations of the magnetic contacts are collinear to the magnetic field, ${\bm P_{inj}}\parallel {\bm P_{det}}\parallel {\bm B}$. ](Sketch5.eps){width="1.0\linewidth"} When ${\bm P_{inj}}\parallel {\bm B}\parallel {\bm P_{det}}$ (for the non-collinear case, see Ref. [@silaevup14]), the tunnelling current at the detector is given by $$\label{Eq:ZeroCurrentYGen} R_{det}I_{det}= \mu + P_{det} \mu_z$$ where $R_{det}$ is the detector interface resistance in the normal state, $\mu$ is the charge imbalance and $\mu_z$ the spin imbalance. Here we assume that the detector current is measured at zero bias $V_{det}=0$. The nonlocal differential conductance measured in the experiment is $ g_{nl}= d I_{det}/d V_{inj} $. 
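Equation (\[Eq:ZeroCurrentYGen\]) implies that the charge and spin contributions to the detector signal can be disentangled by reversing the detector polarization, since $\mu$ is even and $P_{det}\mu_z$ is odd under $P_{det}\to -P_{det}$. A small numerical sketch (the values of $\mu$, $\mu_z$, $P_{det}$, $R_{det}$ are illustrative placeholders, in arbitrary units):

```python
def detector_current(mu, mu_z, p_det, r_det):
    """Eq. (1): R_det * I_det = mu + P_det * mu_z, at zero detector bias."""
    return (mu + p_det * mu_z) / r_det

# Illustrative values (arbitrary units):
mu, mu_z, p, r = 0.10, 0.20, 0.30, 1.0
i_plus = detector_current(mu, mu_z, +p, r)   # detector polarization along B
i_minus = detector_current(mu, mu_z, -p, r)  # detector polarization reversed

charge_part = 0.5 * (i_plus + i_minus) * r      # recovers mu
spin_part = 0.5 * (i_plus - i_minus) * r / p    # recovers mu_z
print(round(charge_part, 12), round(spin_part, 12))  # -> 0.1 0.2
```

The symmetric combination isolates the charge imbalance $\mu$ and the antisymmetric one the spin accumulation $\mu_z$, which is the logic behind comparing measurements with parallel and antiparallel injector/detector magnetizations.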
The charge imbalance $\mu$ and spin accumulation $\mu_z$ can be expressed in terms of the Keldysh quasiclassical Green function (GF) as $ \mu = \int_{0}^\infty {\rm Tr}(g^K) d\varepsilon/16 $ and $ \mu_{z} = \int_{0}^\infty {\rm Tr}[\tau_3 \sigma_3 (g^K-g^K_{eq})] d\varepsilon/16 $. Here $\tau_3$ ($\sigma_3$) is the third Pauli matrix in Nambu (spin) space, $g^K$ is the (4$\times$4 matrix) Keldysh component of the quasiclassical GF matrix $\check{g} = \left( \begin{array}{cc} g^R & g^K \\ 0 & g^A \\ \end{array} \right)$, and $g^{R(A)}$ is the retarded (advanced) GF. Here $g^K_{eq}$ denotes the value of $g^K$ in the equilibrium state. The matrix GF satisfies the normalization condition $\check g^2=1$, which allows writing the Keldysh component as $ g^K= g^R \hat f - \hat f g^A$, where $\hat f$ is the distribution function with a general spin structure $ \hat f= f_L +f_T\tau_3 + f_{T3} \sigma_3+ f_{L3} \tau_3\sigma_3$[@SupplMat]. With the help of the above notations we obtain the expressions for the charge and spin imbalance in the superconductor (here and below, $\hbar=k_B=1$) $$\begin{aligned} \label{Eq:ChPot0} \mu = \frac{1}{2}\int_{0}^{\infty} d\varepsilon ( N_+ f_T+ N_- f_{L3}) \\\label{Eq:ChPotZ} \mu_z = \frac{1}{2}\int_{0}^{\infty} d\varepsilon [ N_+ f_{T3}+ N_-(f_{L}-n_0)], \end{aligned}$$ where $N_+$ is the total density of states (DOS), $N_-$ is the DOS difference between the spin subbands, and $n_0(\varepsilon) = \tanh(\varepsilon/2T)$. According to Eq. (\[Eq:ChPotZ\]) there are two contributions to the spin signal. One is generated from the longitudinal component $f_L$. This contribution is only finite in the presence of a Zeeman splitting of the DOS ($N_-\neq 0$). The second contribution is described by the first term in the integrand of Eq. (\[Eq:ChPotZ\]) and it is finite even in the absence of an exchange field. While this latter contribution has been analyzed in Ref. 
[@Beckman2012], we show below that in several cases it is the longitudinal contribution that dominates the spin signal due to its long-range character. In order to obtain the kinetic equations in a diffusive spin-polarized superconductor we start from the general Usadel equation [@Bergeret2001] $$\label{Eq:Usadel1} D\nabla\cdot(\check{g}\nabla\check{g})+ [\check\Lambda - \check\Sigma_{so} - \check\Sigma_{sf} - \check\Sigma_{orb}, \check{g}] =0.$$
--- abstract: 'We prove that NIP valued fields of positive characteristic are henselian. Furthermore, we partially generalize the known results on dp-minimal fields to dp-finite fields. We prove a dichotomy: if $K$ is a sufficiently saturated dp-finite expansion of a field, then either $K$ has finite Morley rank or $K$ has a non-trivial ${\operatorname{Aut}}(K/A)$-invariant valuation ring for a small set $A$. In the positive characteristic case, we can even demand that the valuation ring is henselian. Using this, we classify the positive characteristic dp-finite pure fields.' author: - Will Johnson bibliography: - 'mybib.bib' title: 'Dp-finite fields I: infinitesimals and positive characteristic' --- Introduction ============ The two main conjectures for NIP fields are - The *henselianity conjecture*: any NIP valued field $(K,\mathcal{O})$ is henselian. - The *Shelah conjecture*: any NIP field $K$ is algebraically closed, real closed, finite, or admits a non-trivial henselian valuation. By generalizing the arguments used for dp-minimal fields (for example in Chapter 9 of [@myself]), we prove the henselianity conjecture in positive characteristic, and the Shelah conjecture for positive characteristic dp-finite fields. This yields the positive-characteristic part of the expected classification of dp-finite fields. We also make partial progress on dp-finite fields of characteristic zero. Let $(K,+,\cdot,\ldots)$ be a sufficiently saturated dp-finite field, possibly with extra structure. Then either - $K$ has finite Morley rank, or - There is an ${\operatorname{Aut}}(K/A)$-invariant non-trivial valuation ring on $K$ for some small set $A$. Unfortunately, we can only prove henselianity of this valuation ring in positive characteristic. Following the approach used for dp-minimal fields, there are three main steps to the proof: 1. Construct a type-definable group of infinitesimals. 2. Construct a valuation ring from the infinitesimals. 3. Prove henselianity. 
We discuss each of these steps, explaining the difficulties that arise when generalizing from rank 1 to rank $n$. Constructing the infinitesimals ------------------------------- Mimicking the case of dp-minimal fields, we would like to define the group $I_M$ of $M$-infinitesimals as $$\bigcap_{X \text{ ``big'' and $M$-definable}} \{\delta \in K ~|~ X \cap (X + \delta) \text{ is ``big''}\}$$ for some notion of “big.” In the dp-minimal case, “big” was “infinite.” By analyzing the proof for dp-minimal fields, one can enumerate a list of desiderata for bigness: 1. Non-big sets should form an ideal. 2. Bigness should be preserved by affine transformations. 3. \[definability-condition\] Bigness should vary definably in families. 4. The universe $K$ should be big. 5. \[mininf-condition\] If $X, Y$ are big, the set $\{\delta ~|~ X \cap (Y + \delta) \text{ is big}\}$ should be big. 6. Bigness should be coherent on externally definable sets, to the extent that: - If $X$ is $M$-definable and big for a small model $M \preceq K$, if $Y$ is $K$-definable, and if $X(M) \subseteq Y$, then $Y$ is big. - If $X$ is big and $X \subseteq Y_1 \cup \cdots \cup Y_n$ for externally definable sets $Y_1, \ldots, Y_n$, then there is a big definable subset $X' \subseteq X$ such that $X' \subseteq Y_i$ for some $i$. The intuitive guess is that for a field of dp-rank $n$, “big” should mean “rank $n$.” But there is no obvious proof of the definability condition (\[definability-condition\]), as noted in §3.2 of Sinclair’s thesis [@sinclair]. An alternative, silly guess is that “big” should mean “infinite.” This fails to work in some of the simplest examples, such as $({\mathcal{C}},+,\cdot,{\mathbb{R}})$. However, the silly guess *nearly* works; the only requirement that can fail is (\[mininf-condition\]). 
The key insight that led to the present paper was the realization that in rank 2, any failure of (\[mininf-condition\]) for the silly option (“big”=“infinite”) fixes (\[definability-condition\]) for the intuitive option (“big”=“rank 2”). Indeed, if $X, Y$ are infinite sets but $X \cap (Y + \delta)$ is finite for almost all $\delta$, then - By counting ranks, $X, Y$ must be dp-minimal. - The map $X \times Y \to X - Y$ is almost finite-to-one. - By a theorem of Pierre Simon [@surprise], “rank 2” is definable on $X \times Y$.[^1] This ensures that “rank 2” is definable on $X - Y$. - A definable set $D \subseteq K$ has rank 2 if and only if some translate of $D$ has rank 2 intersection with $X - Y$. So, in the rank-2 setting, one can first try “big”=“infinite,” and if that fails, take “big”=“rank 2.” A prototype of this idea appears in Peter Sinclair’s thesis [@sinclair]. In his §3.3, he observes that the machinery of infinitesimals goes through when “rank $n$”=“infinite,” and conjectures that this always holds in the pure field reduct. By extending this line of thinking to higher ranks, we obtain a notion of *heavy* sets satisfying the desired properties.[^2] See §\[sec:heavy-light\] for details; the technique is reminiscent of Zilber indecomposability in groups of finite Morley rank. Once heavy and light sets are defined, the construction of infinitesimals is carried out in §\[sec:infinitesimals\] via a direct generalization of the argument for dp-minimal fields. For certain non-triviality properties, we need to assume that $K$ does not have finite Morley rank. The relevant dichotomy is proven in §\[sec:likeTT\]; it is closely related to Sinclair’s Large Sets Property (Definition 3.0.3 in [@sinclair]), but with heaviness replacing full dp-rank. For ranks greater than 2, we need to slightly upgrade Simon’s results in [@surprise]. We do this in §\[sec:broad-narrow\]. 
Say that an infinite definable set $Q$ is *quasi-minimal* if ${\operatorname{dp-rk}}(D) \in \{0, {\operatorname{dp-rk}}(Q)\}$ for every definable subset $D \subseteq Q$. The main result is the following: Let $M$ be an NIP structure eliminating $\exists^\infty$. Let $Q_1, \ldots, Q_n$ be quasi-minimal sets, and $m = {\operatorname{dp-rk}}(Q_1 \times \cdots \times Q_n)$. Then “rank $m$” is definable in families of definable subsets of $Q_1 \times \cdots \times Q_n$. This is a variant of Corollary 3.12 in [@surprise]. Note that dp-minimal sets are quasi-minimal, and quasi-minimal sets are guaranteed to exist in dp-finite structures. Getting a valuation ------------------- Say that a subring $R \subseteq K$ is a *good Bezout domain* if $R$ is a Bezout domain with finitely many maximal ideals, and ${\operatorname{Frac}}(R) = K$. This implies that $R$ is a finite intersection of valuation rings on $K$. Ideally, we could prove the following conjecture: *The $M$-infinitesimals $I_M$ are an ideal in a good Bezout domain $R \subseteq K$.* Assuming the Conjecture, one can tweak $R$ and arrange for $I_M$ to be the Jacobson radical of $R$. This probably implies that - The ring $R$ is $\vee$-definable, and so are the associated valuation rings. - The canonical topology is a field topology, and has a definable basis of opens. This would put us in a setting where we could generalize the infinitesimal-based henselianity proofs from the dp-minimal case, modulo some technical difficulties in characteristic zero. The strategy would be to prove that $R$ is the intersection of just one valuation ring; this rules out the possibility of $K$ carrying two definable valuations, leading easily to proofs of the henselianity and
--- abstract: 'Results obtained with the HADES dielectron spectrometer at GSI are discussed, with emphasis on dilepton production in elementary reactions.' address: - 'Institut de Physique Nucléaire, CNRS/IN2P3-Université Paris Sud, F-91406 Orsay Cedex, France' - | [$^1$Istituto Nazionale di Fisica Nucleare - Laboratori Nazionali del Sud, 95125 Catania, Italy,\ $^{2}$LIP-Laboratório de Instrumentação e Física Experimental de Partículas , 3004-516 Coimbra, Portugal\ $^3$Smoluchowski Institute of Physics, Jagiellonian University of Cracow, 30-059 Kraków, Poland,\ $^4$GSI Helmholtzzentrum für Schwerionenforschung, 64291 Darmstadt, Germany,\ $^5$Institut für Strahlenphysik, Forschungszentrum Dresden-Rossendorf, 01314 Dresden, Germany,\ $^6$Joint Institute of Nuclear Research, 141980 Dubna, Russia,\ $^7$Institut für Kernphysik, Johann Wolfgang Goethe-Universität, 60438  Frankfurt, Germany,\ $^8$II.Physikalisches Institut, Justus Liebig Universität Giessen, 35392 Giessen, Germany,\ $^9$Istituto Nazionale di Fisica Nucleare, Sezione di Milano, 20133 Milano, Italy,\ $^10$Institute for Nuclear Research, Russian Academy of Science, 117312 Moscow, Russia,\ $^{11}$Physik Department E12, Technische Universität München, 85748 München, Germany,\ $^{12}$Department of Physics, University of Cyprus, 1678 Nicosia, Cyprus,\ $^{13}$Institut de Physique Nucléaire , CNRS/IN2P3 - Université Paris Sud, F-91406 Orsay Cedex, France,\ $^{14}$Nuclear Physics Institute, Academy of Sciences of Czech Republic, 25068 Rez, Czech Republic,\ $^{15}$Dep. de Física de Partículas, Univ. 
de Santiago de Compostela, 15706 Santiago de Compostela, Spain,\ $^{16}$Instituto de Física Corpuscular, Universidad de Valencia-CSIC, 46971 Valencia, Spain,\ $^a$Also at Dipartimento di Fisica e Astronomia, Università di Catania, 95125 Catania, Italy,\ $^b$Also at ISEC Coimbra,  Coimbra, Portugal,\ $^c$Also at Technische Universität Dresden, 01062 Dresden, Germany,\ $^d$Also at Dipartimento di Fisica, Università di Milano, 20133 Milano, Italy,\ $^e$Also at Panstwowa Wyzsza Szkola Zawodowa , 33-300 Nowy Sacz, Poland.]{} author: - 'B. Ramstein' - '[G. Agakichiev$^{\,8}$, C. Agodi$^{\,1}$, A. Balanda$^{\,3,e}$, G. Bellia$^{\,1,a}$, D. Belver$^{\,15}$, A. Belyaev$^{\,6}$, A. Blanco$^{\,2}$, M. Böhmer$^{\,11}$, J. L. Boyard$^{\,13}$, P. Braun-Munzinger$^{\,4}$, P. Cabanelas$^{\,15}$, E. Castro$^{\,15}$, T. Christ$^{\,11}$, M. Destefanis$^{\,8}$, J. Díaz$^{\,16}$, F. Dohrmann$^{\,5}$, A. Dybczak$^{\,3}$, L. Fabbietti$^{\,11}$, O. Fateev$^{\,6}$, P. Finocchiaro$^{\,1}$, P. Fonte$^{\,2,b}$, J. Friese$^{\,11}$, I. Fröhlich$^{\,7}$, T. Galatyuk$^{4}$, J. A. Garzón$^{\,15}$, R. Gernhäuser$^{\,11}$, A. Gil$^{\,16}$, C. Gilardi$^{\,8}$, M. Golubeva$^{\,10}$, D. González-Díaz$^{\,4}$, E. Grosse$^{\,5,c}$, F. Guber$^{\,10}$, M. Heilmann$^{\,7}$, T. Hennino$^{\,13}$, R. Holzmann$^{\,4}$, A. Ierusalimov$^{\,6}$, I. Iori$^{\,9,d}$, A. Ivashkin$^{\,10}$, M. Jurkovic$^{\,11}$, B. Kämpfer$^{\,5}$, K. Kanaki$^{\,5}$, T. Karavicheva$^{\,10}$, D. Kirschner$^{\,8}$, I. Koenig$^{\,4}$, W. Koenig$^{\,4}$, B. W. Kolb$^{\,4}$, R. Kotte$^{\,5}$, A. Kozuch$^{\,3,e}$, A. Krása$^{\,14}$, F. Křížek$^{\,14}$, R. Krücken$^{\,11}$, W. Kühn$^{\,8}$, A. Kugler$^{\,14}$, A. Kurepin$^{\,10}$, J. Lamas-Valverde$^{\,15}$, S. Lang$^{\,4}$, J. S. Lange$^{\,8}$, K. Lapidus$^{\,10}$, L. Lopes$^{\,2}$, M. Lorenz$^{\,7}$, T. Liu$^{\,13}$, L. Maier$^{\,11}$, A. Mangiarotti$^{\,2}$, J. Marín$^{\,15}$, J. Markert$^{\,7}$, V. Metag$^{\,8}$, B. Michalska$^{\,3}$, J. Michel$^{\,7}$, D. Mishra$^{\,8}$ E. 
Morinière$^{\,13}$, J. Mousa$^{\,12}$, C. Müntz$^{\,7}$, L. Naumann$^{\,5}$, R. Novotny$^{\,8}$, J. Otwinowski$^{\,3}$, Y. C. Pachmayer$^{\,7}$, M. Palka$^{\,4}$, Y. Parpottas$^{\,12}$, V. Pechenov$^{\,8}$, O. Pechenova$^{\,8}$, T. Pérez Cavalcanti$^{\,8}$, J. Pietraszko$^{\,4}$, W. Przygoda$^{\,3,e}$, A. Reshetin$^{\,10}$, A. Rustamov$^{\,4}$, A. Sadovsky$^{\,10}$, P. Salabura$^{\,3}$, A. Schma$^h{\,11}$, R. Simon$^{\,4}$, Yu.G. Sobolev$^{\,14}$, S. Spataro$^{\,8}$, B. Spruck$^{\,8}$, H. Ströbele$^{\,7}$, J. Stroth$^{\,7,4}$, C. Sturm$^{\,7}$, M. Sudol$^{\,13}$, A. Tarantola$^{\,7}$, K. Teilab$^{\,7}$, P. Tlustý$^{\,14}$, M. Traxler$^{\,4}$, R. Trebacz$^{\,3}$, H. Tsertos$^{\,12}$, I. Veretenkin$^{\,10}$, V. Wagner$^{\,14}$, M. Weber$^{\,11}$, M. Wisniowski$^{\,3}$, J. Wüstenfeld$^{\,5}$, S. Yurevich$^{\,4}$, Y.V. Zanevsky$^{\,6}$, P. Zhou$^{\,4}$, P. Zumbruch$^{\,5}$]{}' title: 'Study of elementary reactions with the HADES dielectron spectrometer [^1]' --- Introduction ============ The main objective of the High-Acceptance di-Electron Spectrometer at GSI is the study of in-medium modifications of $\rho$ and $\omega$ vector mesons in hot and/or dense baryonic matter. Despite the challenging instrumental requirements, the dilepton probe provides the most direct information on the hadronic matter. Being complementary to the ones performed at higher energy facilities (SPS,RHIC) or looking for effects at normal density with photon or proton beams (JLab, KEK), the HADES experiments explore the 1-2 AGeV energy domain, where moderate temperatures (T$ <$ 100 MeV)
--- abstract: 'We explore a few-body mixture of two bosonic species confined in quasi-one-dimensional parabolic traps of different length scales. The ground state phase diagrams in the three-dimensional parameter space spanned by the harmonic length scale ratio, inter-species coupling strength and particle number ratio are investigated. As a first case study we use the mean-field ansatz (MF) to perform a detailed analysis of the separation mechanism. It allows us to derive a simple and intuitive rule predicting which of the immiscible phases is energetically more favorable at the miscible-immiscible phase boundary. We estimate the critical coupling strength for the miscible-immiscible transition and perform a comparison to correlated many-body results obtained by means of the Multi-Layer Multi-Configuration Time Dependent Hartree method for bosonic mixtures (ML-X). At a critical ratio of the trap frequencies, determined solely by the particle number ratio, the deviations between MF and ML-X are very pronounced and can be attributed to a high degree of entanglement between the components. As a result, we evidence the breakdown of the effective one-body picture. Additionally, when many-body correlations play a substantial role, the one-body density is in general not sufficient for deciding upon the phase at hand which we demonstrate exemplarily.' author: - Maxim Pyzh - Peter Schmelcher title: 'Phase separation of a Bose-Bose mixture: impact of the trap and particle number imbalance' --- Introduction {#sec:intro} ============ Binary mixtures of ultra-cold gases have been extensively studied over the past years. They represent a unique platform for the investigation of complex interacting many-body quantum systems in a well controlled environment. 
In particular, it is experimentally possible to shape the geometry of the trap [@TailoredTraps2000], to reduce the dimensionality of the relevant motion [@1Dgases2008; @1Dgases2011], to tune the inter-particle interactions [@Feshbach2010; @CIR1998; @CIR2000; @CIR2003; @CIR2010] and prepare samples of only a few atoms [@fewbody2012; @fewbody2019]. Numerous experiments have been conducted with different hyperfine states [@Myatt1997; @Hall1998; @Ketterle1999; @Inguscio2000; @Aspect2001; @Hall2007; @Hirano2010; @Becker2008; @Engels2011; @Oberthaler2015; @Hirano2016collision; @Hirano2016quench; @dropletsCabrera2018; @dropletsInguscio2018], different elements [@Inguscio2002; @Weidemuller2002; @Inguscio2008; @Ospelkaus2008; @Cornish2011; @Nagerl2011; @Grimm2013; @Nagerl2014; @Cornish2014; @Arlt2015; @Wang2015a; @Proukakis2018; @Wang2015b; @Minardi2010] or different isotopes [@Papp2008; @Takahashi2011] to reveal how the interplay between two condensates impacts their stationary properties and non-equilibrium dynamics. Highlights of these explorations include among others the phase separation between the components and symmetry-breaking phenomena [@Hall1998; @Papp2008; @Hirano2010; @Wang2015b; @Proukakis2018], the observation of Efimov physics [@Minardi2010] and creation of deeply bound dipolar molecules [@Ospelkaus2008; @Nagerl2014; @Cornish2014], as well as dark-bright solitary waves [@Becker2008; @Engels2011] and quantum droplets [@dropletsCabrera2018; @dropletsInguscio2018]. One of the key properties, which makes the multi-component systems attractive and their physics very rich, is the miscibility, which has significant implications for sympathetic cooling [@Aspect2001; @Weidemuller2002], coarse graining dynamics [@coarseGraining2004; @coarseGraining2008; @coarseGraining2010; @coarseGraining2014] and vortex formation [@vortex2003; @vortex2011] to name a few. 
In very early theoretical investigations, a very rich phase diagram for the ground state of the Bose-Bose mixture has been identified. These investigations [@TFAShenoy1996; @cGPEBigelow1998; @phasesChui2003; @phasesOhberg1999; @phasesTrippenbach2000] are based on the one-body densities obtained from solving the underlying mean-field equations, commonly known as Gross-Pitaevskii equations. In the case of weak inter-component coupling one finds a miscible phase with a high spatial overlap between the components. For a sufficiently large repulsive coupling there are three types of segregated phases with a rather small overlap. Two of them are core-shell phases with one component being symmetrically surrounded by the other component, whereas the third is an asymmetrical phase, where the rotational or parity symmetry of the underlying trapping potential is broken. Neglecting the kinetic energy (Thomas-Fermi approximation), a simple separation criterion for the miscible-immiscible transition has been derived [@separationRuleTimmermans1998; @separationRuleChui1998; @separationRuleEsry1997]. It depends solely on the intra-species and inter-species interaction strengths, which are easily adjustable by Feshbach or confinement induced resonances [@Feshbach2010; @CIR1998; @CIR2000; @CIR2003; @CIR2010]. However, it has been shown that this separation criterion, while valid in homogeneous systems, should be applied with care in inhomogeneous geometries. There, system parameters such as trap frequency, particle numbers or mass ratio also have an impact on the miscible-immiscible phase boundary [@brokenRuleTrapKevrekidis2009; @brokenRuleTrapHu2012; @brokenRuleTrapProukakis2016; @brokenRuleMassBoronat2018; @brokenRuleImbalanceZhang2020]. Intuitively, the trap pressure favors miscibility, since it costs energy to extend in space. Thus, it requires stronger inter-component repulsion for the species to separate. 
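For reference, the homogeneous Thomas-Fermi criterion referred to here is usually written as $g_{12} > \sqrt{g_{11}\,g_{22}}$ for phase separation, where $g_{11}, g_{22}$ are the intra-species and $g_{12}$ the inter-species coupling strengths. A minimal check (the coupling values below are illustrative, and — as the text stresses — in a trap this is only a first estimate):

```python
import math

def immiscible_homogeneous(g11, g22, g12):
    """Standard homogeneous (Thomas-Fermi) criterion: the two components
    phase-separate when g12 > sqrt(g11 * g22).  Trap geometry, particle
    numbers, and mass ratio shift this boundary in inhomogeneous systems."""
    return g12 > math.sqrt(g11 * g22)

print(immiscible_homogeneous(1.0, 1.0, 0.8))  # -> False (miscible)
print(immiscible_homogeneous(1.0, 1.0, 1.2))  # -> True  (immiscible)
```

Note that in the limit of vanishing intra-species interactions considered later in this work, the right-hand side of the criterion vanishes, so the homogeneous rule alone cannot locate the trapped-system phase boundary — one motivation for the more careful analysis below.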
However, there are still open questions regarding the impact of different length scales, the characterization of boundaries between the immiscible phases and what type of separation will occur once the critical coupling is reached. Another relevant topic affecting the critical coupling strength for a transition as well as the resulting type of phase are the inter-species correlations, which generate entanglement between the components and lead to bunching of particles of the same species. Although a mean-field treatment is often justified in experimental setups, a very thorough numerical analysis of 1D few-body systems has revealed that an asymmetric immiscible phase is one of the two possible configurations of an entangled many-body state, the other one being the mirror image. The one body densities of this so-called composite fermionization phase [@phasesZollner2008; @phasesHao2008; @phasesPolls2014; @phasesZinner2015; @phasesPyzh2018] preserve parity symmetry of the underlying trapping potential and have a high spatial overlap, which is uncharacteristic for an immiscible phase. Nevertheless, the components are indeed separated, which is encoded in the inter-species two-body density matrix. In experiments, the single-shots do not represent one-body densities but are projections on one of the two mutually exclusive configurations. An averaging procedure would reveal a parity preserving density, unless the Hamiltonian itself violates that symmetry, such as not coinciding trap centers of the one-body potentials. Apart from composite fermionization, there is a whole class of so called spin-chain phases with an even higher degree of entanglement [@spinchain2014; @spinchain2015; @spinchain2016]. When all interactions in the system become nearly-resonant, many states become quasi-degenerate and particles, being bosons, gain fermionic features like the Pauli exclusion principle. Considering the above, our work addresses three different points. 
First, we characterize the phase diagram in a three-dimensional parameter space spanned by the ratio of the harmonic trap lengths, the inter-species coupling strength and the particle number ratio. We switch off intra-component interactions to reduce the complexity and gain a better understanding of the separation process. A very rich phase diagram is revealed admitting two tri-critical points, where three phases may coexist. Second, within the framework of a mean-field approximation, we perform a detailed analysis of the separation mechanism. Equipped with this knowledge we derive a selection rule for phase separation processes and a simple algorithm to estimate the miscible-immiscible phase boundary. Finally, we investigate the deviations of the mean-field picture to a many-body approach. For this we use the Multi-Layer Multi-Configuration Time-Dependent Hartree method for bosonic mixtures [@MLX2013; @MLX2013b; @MLX2017]. We find that in the vicinity of the high-entanglement regime the phase diagram is indeed greatly affected. The symmetry-broken phase is replaced by the composite fermionization, while the onset of symmetry-breaking is linked to the degree of entanglement reaching a certain threshold. Furthermore, the location of this beyond mean-field regime strongly depends on the harmonic length scale ratio and the particle number ratio. We also find that the one-body density is in general not sufficient to distinguish between a core-shell phase and the composite fermionization. This work is organized as follows. In Sec. \[sec:general\_setup\] we introduce our physical setup and in Sec. \[sec:methodology\] our computational approach
--- abstract: '[ We define a class of orthosymplectic $osp(m;j|2n;\omega)$ and unitary $sl(m;j|n;\epsilon)$ superalgebras which may be obtained from $osp(m|2n)$ and $sl(m|n)$ by contractions and analytic continuations, in a similar way as the special linear, orthogonal and symplectic Cayley-Klein algebras are obtained from the corresponding classical ones. Casimir operators of Cayley-Klein superalgebras are obtained from the corresponding operators of the basic superalgebras. Contractions of $sl(2|1)$ and $osp(3|2)$ are considered as examples. ]{}' author: - | N. A. GROMOV, I. V. KOSTYAKOV, V. V. KURATOV\ Department of Mathematics,\ Syktyvkar Branch of IMM UrD RAS,\ Chernova st., 3a, Syktyvkar, 167982, Russia\ E-mail: gromov@dm.komisc.ru title: ON CONTRACTIONS OF CLASSICAL BASIC SUPERALGEBRAS --- Introduction ============ Since its discovery [@1], [@2], [@3] in 1971, supersymmetry has been used in different physical theories such as Kaluza–Klein supergravity [@W-86], supersymmetric field theories of the Wess–Zumino type [@K-75], and massless higher-spin field theories [@Vas-90]. Recently the secret theory [@B-96] (or S-theory), which includes superstring theory and its super p-brane and D-brane [@BIK] generalizations, was discussed. All these theories are built algebraically upon some underlying superalgebra. In this work we wish to present a wide class of Cayley–Klein (CK) superalgebras which may be used for the construction of different supersymmetric models. For ordinary Lie groups (or algebras), the name CK was initially used as a short name for the set of motion groups of spaces of constant curvature. It is well known that there are $3^n$ $n$-dimensional spaces of constant curvature and that their motion groups may be obtained from the orthogonal group $SO(n+1)$ with the help of contractions and analytical continuations [@NG]. Later the notion of CK was extended to the case of unitary and symplectic groups (algebras) [@JMP].
The typical (and attractive) property of CK groups is that all of them depend on the same number of independent parameters as the corresponding simple classical group. On the level of Lie algebras this means that all CK algebras of the same type have equal dimensions. The basic superalgebras include simple algebras as their even subalgebras, so it looks quite natural to introduce a new class of superalgebras with CK algebras as even subalgebras. A superalgebra as an algebraic structure contains (as compared with a Lie algebra) an additional operation, namely the $Z_2$-grading. So under contraction of a superalgebra this $Z_2$-grading must be preserved. To our knowledge, the contraction of an orthosymplectic superalgebra to the superkinematics was first considered in [@Rem]. A detailed investigation of a class of contractions of $osp(1|2)$ and $osp(1|4)$ to the kinematical Poincaré and Galilei superalgebras was made in [@Val-99]. The contraction of the unitary superalgebra $Gsu(2)=sl(2|1)$, as well as its representations, was described in [@Pat]. Later the notion of contraction was generalized [@MP] to the case of a Lie algebra with an arbitrary finite grading group and is known as graded contractions. Nevertheless, the particular case of the simplest $Z_2$-grading deserves independent interest. Preliminary results were reported in [@GKK]. The paper is organized as follows. In section 2 the orthogonal, symplectic and special linear CK groups and algebras are briefly reviewed. Section 3 is devoted to the orthosymplectic CK superalgebras. CK unitary superalgebras are considered in section 4. Casimir operators of the CK unitary and orthosymplectic superalgebras are described in section 5. Orthogonal, symplectic and special linear Cayley-Klein algebras ================================================================ Special linear $sl(m),$ orthogonal $so(m)$ and symplectic $sp(2n)$ algebras are even subalgebras of the classical basic superalgebras.
On the other hand all of them may be contracted and analytically continued to the set of CK algebras. Lie groups and algebras are closely related. The CK group $SO(m;j)$ is defined as the set of transformations of the vector space ${\bf R}_m(j),$ which preserve the quadratic form $x^2(j)=x^t(j)x(j) =x_1^2+\sum_{k=2}^{m}(1,k)^2x^2_k, $ where $ (i,k)=\prod^{\max(i,k)-1}_{p=\min(i,k)}j_p, \, (i,i)=1, $ each parameter $j_k=1,\iota_k,i,$ where the $\iota_k$ are nilpotent $ \iota^2_k=0,$ commutative $\iota_k\iota_p=\iota_p\iota_k \neq 0$ generators of the Pimenov algebra ${\bf P}(\iota).$ The Cartesian components of a vector $x(j)\in {\bf R}_m(j)$ are $x^t(j)=(x_1,j_1x_2, \ldots ,(1,m)x_m)^t, $ as easily follows from $x^2(j).$ For an $m\times m$ matrix $g(j) \in SO(m;j)$ the transformation $g(j): {\bf R}_m(j) \rightarrow {\bf R}_m(j)$ means that the vector $x'(j)=g(j)x(j)$ has exactly the same distribution of the parameters $j$ among its components as $x(j).$ This requirement makes it possible to obtain the distribution of the parameters $j$ among the elements of the matrix $g(j),$ i.e. to build the fundamental representation of the CK group $SO(m;j)$ starting from the quadratic form. It is remarkable that the same distribution of the parameters $j$ holds for the CK Lie algebra $so(m;j),$ namely $A_{ik}=(i,k)a_{ik},$ for $A \in so(m;j).$ The set of transformations $ L(j): {\bf R}_m(j)\to {\bf R}_m(j) $ with the property $ \det L(j)=1 $ forms the CK special linear group $ SL(m;j),$ and the corresponding CK algebras $ sl(m;j)$ are given by the $m \times m $ matrices $l(j),$ tr $ l(j)=0.$ Let us stress that in the Cartesian basis all matrices from $ SL(m;j), SO(m;j), sl(m;j), so(m;j) $ have an identical distribution of the parameters $j$ among their elements, i.e.
they are of the same type as matrices with elements from the Pimenov algebra $P(j).$ The CK symplectic group $Sp(2n;\omega)$ is defined as the set of transformations of ${\bf R}_n(\omega) \times {\bf R}_n(\omega),$ which preserve the bilinear form $S(\omega)=S_1+ \sum_{k=2}^{n}[1,k]^2S_k,$ where $S_k(y,z)=y_kz_{n+k}-y_{n+k}z_k, \, [i,k]=\prod^{\max(i,k)-1}_{p=\min(i,k)} \omega_p, \, [i,i]=1, \, \omega_k=1,\xi_k,i, \, \xi^2_k=0, \, \xi_k\xi_p=\xi_p\xi_k.$ The distribution of the parameters $\omega_k$ among the matrix elements of the fundamental representation $M(\omega)=\left( \begin{array}{cc} H(\omega) & E(\omega) \cr F(\omega) & -H^t(\omega) \end{array} \right)$ of the CK symplectic algebra $sp(2n;\omega)$ may be obtained as for the orthogonal CK algebras and is as follows: $B_{ik}=[i,k]b_{ik},$ where $B=H(\omega),E(\omega),F(\omega).$ Orthosymplectic superalgebras $osp(m;j|2n;\omega)$ =================================================== Let $e_{IJ} \in M_{m+2n}$, with $(e_{IJ})_{KL}=\delta_{IK}\delta_{JL}$, be the elementary matrices. One defines the following graded matrix [@Fra] $$G=\left ( \begin{array}{c|c} I_m & 0 \cr \hline 0 & 0 \quad I_n \cr & -I_n \quad 0 \end{array} \right ) \label{0}$$ where $I_m,I_n$ are identity matrices. Let $i,j,\ldots=
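The contraction parameters above take the values $1, \iota_k, i$, with the $\iota_k$ nilpotent but mutually commuting generators of the Pimenov algebra ${\bf P}(\iota)$. As a hedged illustration (the class and helper names below are invented for this sketch), such elements can be modeled as truncated polynomials, and the defining relations $\iota_k^2=0$, $\iota_k\iota_p=\iota_p\iota_k\neq 0$, together with the Cayley-Klein labels $(i,k)=\prod_{p=\min(i,k)}^{\max(i,k)-1} j_p$, can be checked directly:

```python
from functools import reduce

class Pimenov:
    """Truncated polynomials in commuting nilpotent generators iota_k (iota_k**2 = 0).

    Terms are stored as {frozenset_of_generator_indices: coefficient}."""
    def __init__(self, terms=None):
        self.terms = {s: c for s, c in (terms or {}).items() if c != 0}

    @classmethod
    def iota(cls, k):
        return cls({frozenset([k]): 1})

    @classmethod
    def one(cls):
        return cls({frozenset(): 1})

    def __mul__(self, other):
        out = {}
        for s1, c1 in self.terms.items():
            for s2, c2 in other.terms.items():
                if s1 & s2:              # a repeated generator gives iota_k**2 = 0
                    continue
                key = s1 | s2
                out[key] = out.get(key, 0) + c1 * c2
        return Pimenov(out)

    def __eq__(self, other):
        return self.terms == other.terms

    def is_zero(self):
        return not self.terms

def ck_label(i, k, j):
    """(i,k) = product of j_p for p from min(i,k) to max(i,k)-1; (i,i) = 1."""
    lo, hi = min(i, k), max(i, k)
    return reduce(lambda a, b: a * b, (j[p] for p in range(lo, hi)), Pimenov.one())

# contract j_1 -> iota_1 and j_2 -> iota_2, keep j_3 = 1:
j = {1: Pimenov.iota(1), 2: Pimenov.iota(2), 3: Pimenov.one()}
```

Contraction of $SO(m)$ to a CK group then amounts to substituting such nilpotent values into the entries $A_{ik}=(i,k)a_{ik}$; any label whose square vanishes drops out of the quadratic form $x^2(j)$.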
--- abstract: | The purpose of this paper is to calculate explicitly the volumes of Siegel sets which are coarse fundamental domains for the action of ${\mathrm{SL} _n (\mathbb{Z})}$ in $\mathrm{SL} _n (\mathbb{R})$, so that we can compare these volumes with those of the fundamental domains of ${\mathrm{SL} _n (\mathbb{Z})}$ in $\mathrm{SL} _n (\mathbb{R})$, which are also computed here, for any $n\geq 2$. An important feature of this computation is that it requires keeping track of normalization constants of the Haar measures. We conclude that the ratio between volumes of fundamental domains and volumes of Siegel sets grows super-exponentially fast as $n$ goes to infinity. As a corollary, we obtain that this ratio gives a super-exponential lower bound, depending only on $n$, for the number of intersecting Siegel sets. We were also able to give an upper bound for this number by applying some results on the heights of intersecting elements in ${\mathrm{SL} _n (\mathbb{Z})}$.\ **Keywords:** Arithmetic Groups, Siegel Sets, Coarse Fundamental Domains, Volumes. author: - Gisele Teixeira Paula title: 'Comparison of Volumes of Siegel Sets and Fundamental Domains for $\mathrm{SL}_n (\mathbb{Z})$ ' --- [Correspondence to be sent to: e-mail: giseletp@impa.br]{} Introduction {#intro} ============ Siegel sets were first introduced in the study of quadratic forms by Siegel [@siegel2] in 1939, with some results following from previous works of Hermite and Korkine-Zolotareff. In a fundamental paper [@borelharish], Borel and Harish-Chandra generalised this notion and used Siegel domains to prove finiteness of covolumes of non-cocompact arithmetic subgroups. The simple structure of Siegel sets, compared to that of the actual fundamental domains, makes them appealing for applications. For example, in his recent paper [@young], R.
Young exploited their properties to obtain new results in geometric group theory. Still, very little is known about the geometry of Siegel sets in general. In his book [@morris], Morris gives algebraic descriptions of examples of Siegel sets not only for $\mathrm{SL} _n (\mathbb{R})$, with $n\geq 2$, but also in the case of any semisimple Lie group $G$ with a given Iwasawa decomposition. In this paper we recall one of the main properties of Siegel sets – the finiteness of their volumes. We evaluate these volumes explicitly in the basic case of Siegel sets for ${\mathrm{SL} _n (\mathbb{Z})}$ in $\mathrm{SL} _n (\mathbb{R})$ for any $n\geq 2$. We then compare these volumes with the actual covolumes of ${\mathrm{SL} _n (\mathbb{Z})}$. To this end, we have to deal with an essential difficulty related to the normalization of the Haar measure. For calculating the volumes of Siegel sets, the main difficulty is to find a nice way to describe the region of integration, which we solve with an appropriate change of coordinates. Most of the volume computations that followed Siegel’s original approach were not careful about the normalization constants, just noting that they are computable and could be calculated from the proof. In Section \[domfund\], we follow Garret’s notes on Siegel’s method [@garret] to compute the volumes of the quotients ${\mathrm{SL} _n (\mathbb{Z})}\backslash \mathrm{SL} _n (\mathbb{R})$ for $n\geq 3$ using induction and the volume of $\mathrm{SL}_2({\mathbb{Z}}) \backslash\mathrm{SL}_2({\mathbb{R}})$, which is computed in [@garret]. Our main goal here is to keep careful track of the normalization constants. The main tools we use are the Poisson Summation formula, the Iwasawa decomposition of $G$ and the choice of a good Haar measure normalization on each group.
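As a small sanity check in the spirit of the section (using the classical hyperbolic normalization $dx\,dy/y^2$ on the upper half-plane, not the paper's Haar-measure normalization on $\mathrm{SL}_2(\mathbb{R})$ itself), the standard fundamental domain of $\mathrm{SL}_2(\mathbb{Z})$, given by $|x|\le 1/2$ and $x^2+y^2\ge 1$, has hyperbolic area $\pi/3$; a one-line numerical integration confirms this:

```python
import numpy as np

def fundamental_domain_area(n=200_000):
    """Area of {|x| <= 1/2, x^2 + y^2 >= 1} for the measure dx dy / y^2.

    The inner integral of 1/y^2 from sqrt(1 - x^2) to infinity equals
    1/sqrt(1 - x^2), so the area reduces to a 1D integral over x,
    evaluated here by the midpoint rule."""
    h = 1.0 / n
    xs = np.linspace(-0.5, 0.5, n, endpoint=False) + 0.5*h   # midpoints
    return np.sum(1.0/np.sqrt(1.0 - xs**2)) * h

print(fundamental_domain_area(), np.pi/3)   # both ~ 1.047198
```

The exact antiderivative is $\arcsin x$, giving $\arcsin(1/2)-\arcsin(-1/2)=\pi/3$; the group-level volume of $\mathrm{SL}_2(\mathbb{Z})\backslash\mathrm{SL}_2(\mathbb{R})$ differs from this by a constant depending on the chosen Haar measure, which is exactly the normalization issue the section emphasizes.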
At the end of the section we discuss the relation between the normalization of the measure we used and the canonical normalization that comes from the metric associated to the Killing form on $\mathfrak{sl}_n({\mathbb{R}})$. By comparing the volumes of Siegel sets and the volumes of fundamental domains of ${\mathrm{SL} _n (\mathbb{Z})}$, we conclude that, somewhat surprisingly, the ratio between them grows super-exponentially fast with $n$. As an application of the computations presented here, in Section \[morr\] we show that, given a Siegel set $\Sigma$ of ${\mathrm{SL} _n (\mathbb{Z})}$, we have an explicit lower bound for the number of elements $\gamma \in {\mathrm{SL} _n (\mathbb{Z})}$ such that $\gamma \Sigma$ intersects $\Sigma$. This bound is given by the ratio between $\mathrm{vol}(\Sigma)$ and $\mathrm{vol}(\mathrm{SL}_n({\mathbb{Z}}) \backslash\mathrm{SL}_n({\mathbb{R}}))$ – see Corollary \[corol1\]. We also give a proof that this result is consistent with a recent work of M. Orr [@martinorr], which generalizes a previous result of P. Habegger and J. Pila [@habegger] on the height of such elements $\gamma$, motivated by the study of Shimura varieties and their unlikely intersections. More precisely, Orr’s result gives, as a corollary, an upper bound for the number of intersecting Siegel sets while our work provides a lower bound for this number (see Corollary \[final\]). It would be interesting to compute the volumes of Siegel sets in other cases, for example for the action of the well-known Bianchi groups $\Gamma_d = \mathrm{SL}_2(\mathcal{O}_d)$ on the hyperbolic three-dimensional space $\mathbb{H}^3$. In this case we would have to deal with another difficulty when describing Siegel sets, due to the fact that as $d$ grows the quotients $\Gamma_d \backslash \mathbb{H}^3$ have a growing number of cusps. It would be worth doing these computations in the future, and then comparing them to the results obtained in this paper. The Iwasawa decomposition of $\mathrm{SL}_n(\mathbb{R})$.
{#iwasawa} ============================== Let $n\geq 2$, $G=\mathrm{SL}_n(\mathbb{R})$ and $\Gamma = \mathrm{SL}_n(\mathbb{Z})$. Consider the action of $\Gamma$ by left translations on $G$ and let $$K = \mathrm{SO}_n;$$ $$A =\left\{\mbox{diag}(a_1,\ldots ,a_n); \displaystyle{ \prod_{i=1}^n{a_i} = 1} ; a_i > 0, \mbox{ for any } i=1,\ldots, n\right\};$$ $$N =\left\{(n_{ij})_{i,j} \in G ; n_{ii}=1 \mbox{ and } n_{ij}= 0 \mbox{ for } i>j\right\}.$$ The product map $$\Phi: K\times A \times N \longrightarrow G$$ $$(k,a,n)\mapsto kan$$ is a homeomorphism. We can construct an inverse map for $\Phi$ by using the Gram-Schmidt orthonormalization process. Take $g\in G$ and let $x_1, \ldots ,x_n$ be its columns. Then define inductively $y_1, \ldots ,y_n$ by $$y_1 = \frac{x_1}{\left\|x_1\right\|};$$ $$y_i = \frac{\widetilde{y}_i}{\left\|\widetilde{y}_i\right\|}, \mbox{ where } \widetilde{y}_i = x_i - \displaystyle{\sum_{l=1}^{i-1}{\left\langle x_i,y_l\right\rangle}y_l}; \mbox{ for } i = 2, \ldots, n.$$ Let $e_1, \ldots ,e_n$ be the standard orthonormal basis of $\mathbb{R}^n$. Then there exists an unique $k\in {\mathrm{SO} _n}$ such that $k(y_i) = e_i$, $\mbox{ for any } i = 1, \ldots n$. Therefore $$k(\widetilde{y}_i) = k(\left\|\widetilde{y}_i\right\| y_i) = \left\|\widetilde{y}_i\right\| k(y_i) = \left\|\widetilde{y}_i\right\|e_i,\mbox{ for any } i=1, \ldots , n.$$ So there is a diagonal matrix $a = \mbox{diag}(\left\|\widetilde{y}_1\right\|, \ldots, \left\|\widetilde{y}_n\right\|)$, such that $$k(\widetilde{y}_i) =a(e_i), \mbox{ for any } i=1, \ldots , n.$$ Also, it is easy to see that $y_i \in \left\langle x
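The Gram-Schmidt construction above is exactly what a QR factorization computes, so the Iwasawa decomposition $g = kan$ can be sketched numerically (the function names here are illustrative, not from the paper):

```python
import numpy as np

def iwasawa(g):
    """Split g in SL_n(R) as g = k a n with k in SO_n, a positive diagonal,
    and n unit upper triangular, via QR (Gram-Schmidt on the columns of g)."""
    q, r = np.linalg.qr(g)
    s = np.sign(np.diag(r))
    q, r = q * s, s[:, None] * r        # force diag(r) > 0; q*s rescales columns
    a = np.diag(np.diag(r))
    n = np.linalg.solve(a, r)           # n = a^{-1} r has unit diagonal
    return q, a, n

rng = np.random.default_rng(0)
g = rng.standard_normal((3, 3))
g /= np.cbrt(np.linalg.det(g))          # rescale so det(g) = 1
k, a, n = iwasawa(g)
```

The diagonal entries of $a$ are precisely the Gram-Schmidt norms $\|\widetilde{y}_i\|$ appearing in the construction above, and $\det g = 1$ with $\det a > 0$, $\det n = 1$ forces $\det k = 1$, i.e. $k \in \mathrm{SO}_n$.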
--- abstract: 'We analyze the spatial and velocity distributions of confirmed members in five massive clusters of galaxies at intermediate redshift ($0.5 < z < 0.9$) to investigate the physical processes driving galaxy evolution. Based on spectral classifications derived from broad- and narrow-band photometry, we define four distinct galaxy populations representing different evolutionary stages: red sequence (RS) galaxies, blue cloud (BC) galaxies, green valley (GV) galaxies, and luminous compact blue galaxies (LCBGs). For each galaxy class, we derive the projected spatial and velocity distribution and characterize the degree of subclustering. We find that RS, BC, and GV galaxies in these clusters have similar velocity distributions, but that BC and GV galaxies tend to avoid the core of the two $z\approx0.55$ clusters. GV galaxies exhibit subclustering properties similar to RS galaxies, but their radial velocity distribution is significantly platykurtic compared to the RS galaxies. The absence of GV galaxies in the cluster cores may explain their somewhat prolonged star-formation history. The LCBGs appear to have recently fallen into the cluster based on their larger velocity dispersion, absence from the cores of the clusters, and different radial velocity distribution than the RS galaxies. Both LCBG and BC galaxies show a high degree of subclustering on the smallest scales, leading us to conclude that star formation is likely triggered by galaxy-galaxy interactions during infall into the cluster.' author: - 'Steven M. Crawford' - 'Gregory D. Wirth' - 'Matthew A. Bershady' title: Spatial and Kinematic Distributions of Transition Populations in Intermediate Redshift Galaxy Clusters --- We thank the referee for the careful reading of our manuscript and the constructive criticism that improved our paper. S.M.C. acknowledges the South African Astronomical Observatory and the National Research Foundation of South Africa for support during this project. M.A.B. 
acknowledges support from NSF grant AST-1009471. This work made use of IRAF, a software package distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under cooperative agreement with the National Science Foundation. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. [*Facilities:*]{}
--- abstract: 'We have developed a formalism to study non-adiabatic, non-radial oscillations of non-rotating compact stars in the frequency domain, including the effects of thermal diffusion in the framework of general relativistic perturbation theory. When a general equation of state depending on temperature is used, the perturbations of the fluid result in heat flux which is coupled with the spacetime geometry through the Einstein field equations. Our results show that the frequency of the first pressure ($p$) and gravity ($g$) oscillation modes is significantly affected by thermal diffusion, while that of the fundamental ($f$) mode is basically unaltered due to the global nature of that oscillation. The damping time of the oscillations is generally much smaller than in the adiabatic case (more than two orders of magnitude for the $p-$ and $g-$modes) reflecting the effect of thermal dissipation. Both the isothermal and adiabatic limits are recovered in our treatment and we study in more detail the intermediate regime. Our formalism finds its natural astrophysical application in the study of the oscillation properties of newly born neutron stars, neutron stars with a deconfined quark core phase, or strange stars which are all promising sources of gravitational waves with frequencies in the band of the first generation and advanced ground-based interferometric detectors.' address: - | $^1$ Dipartimento di Fisica “G.Marconi", Universit\` a di Roma “La Sapienza"\ and Sezione INFN ROMA1, piazzale Aldo Moro 2, I-00185 Roma, Italy - | $^2$ Departament de Física Aplicada, Universitat d’Alacant,\ Apartat de correus 99, 03080 Alacant, Spain - '$^3$ Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA' author: - 'L. Gualtieri$^1$, J.A. Pons $^2$, and G. 
Miniutti $^3$' title: 'Non-adiabatic oscillations of compact stars in general relativity' --- Introduction ============ The theory of stellar oscillations has been a fundamental tool in the study of stellar interiors and stellar properties for decades. For Newtonian stars the theory is well established and observationally tested. In some areas such as helioseismology, very high precision measurements of the normal oscillation frequencies allow for a detailed understanding of the properties of the solar interior (see [@ChDa] for a review). It is usually assumed that stellar pulsations are adiabatic because the thermal relaxation timescale in stellar interiors is orders of magnitude larger than the pulsation periods. However, in some particular situations, energy transfer in the external regions of stars is fast enough to affect the pulsation properties, and some work has been devoted to studying, for example, non-adiabatic oscillations of white dwarfs [@LB93] or the coupling between the different non-linear modes in non-adiabatic situations [@vH94]. Concerning relativistic stars, the study of pulsation properties of neutron stars or compact objects such as strange stars or hybrid stars has become more popular in the last decade (see e.g. the reviews in Ref. [@rev1; @rev2; @rev3]), mainly due to the expectations of detecting the gravitational emission from pulsating nearby compact stars with the current or next generation ground-based interferometric detectors (LIGO, VIRGO, GEO600, TAMA). The theory of non-adiabatic relativistic stellar pulsations is not as well developed as its Newtonian cousin, due to the higher complexity of the formalisms, but also due to the fact that most attention has been paid to the study of old neutron stars, in which the thermal conductivity is too small to produce visible non-adiabatic effects. Nevertheless, there might be situations in which this is not entirely true.
For example, in a newly born neutron star the thermal structure is determined by neutrino diffusion [@BL86; @Pon99], instead of electron conduction as in old neutron stars [@BHY01], and the effects of non-adiabaticity are likely to be relevant in the outer layers. Another interesting possibility is the existence of deconfined quark matter in neutron star cores, or in the form of strange stars. There seems to be common agreement that, if strange matter exists, it has to be in a color superconducting phase. It has been recently pointed out [@SE02] that the thermal conductivity of such exotic matter is many orders of magnitude larger than in normal neutron star matter, and for sufficiently high temperatures the timescale for thermal relaxation can be as short as $10^{-4}$ s, comparable with typical oscillation periods of compact objects. If this or any other exotic scenario (hyperons, kaon condensates) happens to be true, our common belief that compact star pulsations are adiabatic must be modified, and some care must be taken to understand how non-adiabatic effects change the oscillation properties and therefore the predicted gravitational wave signals of pulsating compact stars. With this motivation, we have derived a formalism that includes the effects of heat transfer and chemical diffusion (important in proto-neutron stars, where lepton diffusion is the driving force of thermodynamical changes) in a relativistic analysis of stellar perturbations. Some work in this line has been done in the past for radial oscillations [@MMIH]. We have considered the case of non-radial oscillations, since this is the case of interest for gravitational wave emission, by extending in a simple way the formalism of Lindblom and Detweiler [@LD; @DL], and complementing the system of equations in the frequency domain with the additional equations for thermal or chemical diffusion.
In the present study we focused on the effects of heat transfer and chemical diffusion, neglecting the rotation of the star. The paper is structured as follows. In Sec. II we derive the equations of non-adiabatic stellar perturbations in general relativity. In Sec. III we describe the equation of state we have used and we define the thermodynamical quantities we need in our derivation. The additional equations (transport of energy) that close the system are discussed in section IV. Section V is devoted to the numerical implementation of the complete set of equations. In Sec. VI we discuss in detail the results of the numerical integrations and in Sec. VII we draw the main conclusions and comment on possible future extensions of this work. Derivation of the equations =========================== The stress–energy tensor of a non–perfect fluid[^1], including heat flux but without viscosity, has the general form [@MTW] $$T_{\alpha\beta} = (\rho+p)u_{\alpha}u_{\beta}+p g_{\alpha\beta} +u_{\alpha}q_{\beta}+u_{\beta}q_{\alpha}$$ where $\rho$ is the energy density, $p$ is the pressure, $u^{\alpha}$ is the matter four–velocity, and $q^{\alpha}$ is the heat flux which satisfies $u_{\alpha}q^{\alpha}=0\,.$ In addition, we will also consider the conservation equation for the baryon number density $n$ (equation of continuity) $$\label{cont} (nu^{\alpha})_{;\alpha}=0.$$ Background configuration ------------------------ Hereafter, we will neglect the heat flux in the background configuration. This is justified if the background is assumed to be in thermal equilibrium. Even in the case that thermal or chemical gradients are present, the assumption of stationary background is valid if the timescale for global thermodynamical changes is much larger than the timescale of variation of the perturbations. For example, in newly born neutron stars, structural changes happen on timescales of the order of 0.1-1 second, while we are interested in oscillations of periods of the order of milliseconds.
Therefore, labeling background quantities by the superscript $(0)$, in the following we assume that $$q^{(0)}_{\mu}=0$$ such that our background is a perfect-fluid, spherically symmetric star, described by the well known TOV equations: $$\begin{aligned} g^{(0)}_{\mu\nu}&=&{\rm diag}\left(-e^{\nu(r)},e^{\lambda(r)},r^2\gamma_{ab}\right)\nonumber\\ u^{(0)\,\mu}&=&(e^{-\nu/2},0,0,0)\label{defu0}\\ T^{(0)}_{\mu\nu}&=&(\rho+p)u_{\mu}u_{\nu}+pg_{\mu\nu}\nonumber\\ \lambda'&=&-\frac{2e^{\lambda}}{r^2}\left(M-4\pi\rho r^3\right)\nonumber\\ \nu'&=&\frac{2e^{\lambda}}{r^2}\left(M+4\pi p r^3\right)\nonumber\\ p^{(0)\,\prime}&=&-\frac{1}{2}(\rho+p)\,\nu_{,r}\nonumber\end{aligned}$$ where we denote with a prime $\partial/\partial r$, and we have defined $$\gamma_{ab}\equiv{\rm diag}(1,\sin^2\theta)$$ (here and in the following, greek indexes $\mu=0,\dots,3$ run on spacetime, latin indexes $i=1,\dots,3$ run on the spatial subspace, and latin indexes $a=\theta,\phi$ run on the sphere). Perturbations ------------- The equations of stellar perturbations in the perfect fluid case have been derived by many authors in different formalisms [@altri; @altri2]. In this paper, we follow the notation of Lindblom & Detweiler [@LD; @DL], (LD hereafter) and we will try to take the parallelism between their and our equations as far as possible. The perturbed stress–energy tensor has the form $$\begin{aligned} \delta T_{\mu\nu}&=&(\delta\rho+\delta p)u_{\mu}u_{\nu}+ (\rho+p)(\delta u_{\mu}u_{\nu}+u_{\mu}\delta u_{\nu})\nonumber\\ &&+\,\delta p\, g_{\mu\nu}+p\,\delta g_{\mu\nu} +(\delta q_{\mu}u_{\nu}+u_{\mu}\delta q_{\nu}).\label{pertT}\end{aligned}$$ Following the conventions in [@LD; @DL], we expand in spherical harmonics the metric perturbations with polar symmetry (we will not discuss perturbations with axial symmetry, which are not related with fluid oscillations in non–rotating stars) as g\_=-r\^le\^[ø
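For concreteness, the background TOV equations quoted above are straightforward to integrate numerically. The sketch below (in geometric units $G=c=1$, with an illustrative polytropic equation of state $p=K\rho^2$ and a step size chosen for brevity, none of which comes from this paper) uses the hydrostatic form $p' = -(\rho+p)(M+4\pi p r^3)/\big(r(r-2M)\big)$, obtained by combining $p' = -\tfrac12(\rho+p)\nu'$ with the expression for $\nu'$:

```python
import numpy as np

def tov_star(rho_c, K=100.0, dr=1e-3, r_max=50.0):
    """Forward-Euler integration of the TOV equations for p = K*rho**2."""
    r = dr
    p = K * rho_c**2
    m = (4.0/3.0)*np.pi*r**3*rho_c        # enclosed mass of the first tiny sphere
    while p > 1e-12 and r < r_max:
        rho = np.sqrt(p / K)              # invert the polytropic EOS
        dpdr = -(rho + p)*(m + 4*np.pi*p*r**3)/(r*(r - 2*m))
        dmdr = 4*np.pi*r**2*rho
        p = max(p + dpdr*dr, 0.0)
        m += dmdr*dr
        r += dr
    return r, m                            # stellar radius and gravitational mass

# rho_c = 1.28e-3 with K = 100 is a standard relativistic-polytrope test case;
# it yields roughly M ~ 1.4 and R ~ 9.6 in these units.
R, M = tov_star(1.28e-3)
```

This produces the static background on top of which the non-radial perturbation equations of the following sections are solved.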
--- abstract: 'Stationary wave functions at the transition between plateaus of the integer quantum Hall effect are known to exhibit multi-fractal statistics. Here we explore this critical behavior for the case of scattering states of the Chalker-Coddington network model with point contacts. We argue that moments formed from the wave amplitudes of critical scattering states decay as pure powers of the distance between the points of contact and observation. These moments in the continuum limit are proposed to be correlation functions of primary fields of an underlying conformal field theory. We check this proposal numerically by finite-size scaling. We also verify the CFT prediction for a 3-point function involving two primary fields.' author: - | R. Bondesan, D. Wieczorek, M.R. Zirnbauer\ *Institut für Theoretische Physik, Universität zu Köln, Zülpicher Straße 77, 50937 Köln, Germany* title: Pure scaling operators at the integer quantum Hall plateau transition --- #### Introduction. A revealing monitor of quantum critical behavior driven by disorder is multi-fractal wave-function statistics. In this vein, theory and experiment have focused on the multi-fractality at Anderson localization transitions between different topological phases of disordered electrons in two dimensions, the prime example being the transition between plateaus of the Hall conductance in the integer quantum Hall (IQH) effect [@Evers2008]. There has long been a consensus that it should be possible to describe the IQH transition by a conformal-invariant effective field theory. Yet, in spite of many efforts [@Bhaseen2000; @Ikhlef2011; @Bettelheim2012] it remains an unsolved problem to identify that conformal field theory (CFT) description. To make progress with the search for it, one needs to find the conformal fields and determine their scaling dimensions.
A step in this direction was taken in [@Janssen1999; @Klesse2001], where the moments of the point-contact conductance were introduced and studied as correlation functions. Alas, these are coherent sums of conformal field correlators and therefore do not give direct access to individual conformal fields in pure form; see [@Obuse2013] for a recent discussion. The purpose of this Letter is to put forth a large (and so far unrecognized) class of multi-fractal observables that correspond directly to correlators of CFT primary fields. Our results are motivated by a recent $\sigma$-model based classification of scaling fields at Anderson transitions [@Gruzberg2011; @Gruzberg2013]. The new feature here is that we focus on the scattering states of an *open* system, while the previous work concerned moments of the local density of states for *closed* systems. For concreteness and simplicity, we work with the Chalker-Coddington (CC) network model. The CC model is known to be related by a duality transformation to a statistical mechanical system of vertex-model type [@Gruzberg1997; @Zirnbauer1997]. The main advance of our work is to construct lattice approximations for pure scaling fields on both sides of the duality – as scattering observables of the CC model and, equivalently, as operators of the vertex model. Both representations serve a purpose. Based on the latter, we argue that our lattice operators indeed are discretizations of pure scaling fields, while the former makes it possible to compute their conformal dimensions numerically by finite-size scaling. #### CC model and scattering states. We begin with a quick review of the CC model [@Chalker1988]. This is a network model for the quantum dynamics of an electron moving in two dimensions under the influence of a strong magnetic field and a random electric potential. 
Formulated on a square lattice, the model is built from elementary plaquettes with a definite sense of circulation that alternates between neighboring plaquettes. The links of the network are directed accordingly, so that each site has two incoming and two outgoing links. The electron wave function lives on the links and evolves in discrete time as $\vert \psi(t+1) \rangle = {U}\vert \psi(t) \rangle$ by a unitary operator ${U}= {U_\text{s}}\, {U_\text{r}}$. The factor ${U_\text{r}}$ is a diagonal matrix modeling the propagation along the links; it assigns to each link a random, independent and uniformly distributed ${\mbox{U}}(1)$ phase. The factor ${U_\text{s}}$ is non-random and consists of $2 \times 2$ matrices that describe the transfer from incoming to outgoing links at each site. When the probabilities for transfer to the left/right are equal, the model is critical and falls into the universality class of the IQH transition [@Chalker1988]. While it is of some interest to study the spectral properties and stationary wave function statistics of the closed network, here we turn to an open network. One major advantage of the open setting is that it allows one to formulate and study CFT correlators right at the critical point. (In contrast, Green’s functions of the closed system are defined by introducing a regularization which places the system slightly off criticality.) The network is opened up by severing a subset of links $C = \{ {\bm{c}}_1, \dots, {\bm{c}}_n\}$, which we call point contacts. Each cut makes for one network-incoming and one network-outgoing link where electric current is injected resp. drained by connecting the network to charge reservoirs. 
The dynamics in the presence of the point contacts is [@Janssen1999] $$\begin{aligned} \left|\psi(t+1)\right> &= {U}\Big( Q \left|\psi(t) \right> + \sum\nolimits_{l=1}^n \left|{\bm{c}}_l \right> a_l \Big) ,\end{aligned}$$ where the projector $Q = 1 - \sum_{l=1}^n | {\bm{c}}_l \rangle \langle {\bm{c}}_l|$ implements the draining action at the outgoing open ends, and $a_l$ is the amplitude of the flux per time step fed into the incoming end at ${\bm{c}}_l$. We then consider stationary states of this open-network dynamics. Without loss we take the quasi-energy to be zero, as the statistical properties of the network model are independent of it. We refer to the solutions of the stationarity condition $\vert \psi(t+1) \rangle \equiv \vert \psi(t) \rangle$ as *scattering states*. For a system with $n$ point contacts, a basis of scattering states is furnished by $$\begin{aligned} \label{eq:scatt_state} \vert \psi_k \rangle \equiv {U}(1 - Q {U})^{-1} \vert {\bm{c}}_k \rangle \quad (k = 1, \ldots n) .\end{aligned}$$ Note that $|\!| Q {U}|\!| < 1$, which ensures that the inverse exists as a convergent power series $(1 - Q {U})^{-1} = \sum_{t = 0}^\infty (Q {U})^t$. #### Main results and numerics. The first result to be announced is a statement about two-point functions, allowing one to measure the scaling dimensions of primary fields. Consider a set of links $R = \{ {\bm{r}}_1,\dots, {\bm{r}}_n\}$ for the purpose of (non-invasive) observation, and define for $i, j, m = 1,\dots,n$: $$\begin{aligned} \label{eq:An} A_m &= \operatorname{Det}K^{(m)} , \quad K_{ij}= \sum_{k=1}^n \psi_k({\bm{r}}_i)\, \overline{\psi_k({\bm{r}}_j)} ,\end{aligned}$$ where $K^{(m)}$ denotes the upper-left $m\times m$ sub-matrix of $K$. These observables are the open-network counterparts of those considered in [@Gruzberg2013]. Suppose now that coarse graining of the lattice takes the contact and observation regions ($C$ and $R$) to single points, i.e. 
${\bm{r}}_i \to {\bm{r}}$ and ${\bm{c}}_i \to {\bm{c}}$ for all $i$, while ${\bm{r}}$ and ${\bm{c}}$ remain distinct. Denoting disorder averages by ${\mathbb{E}}\{ \ldots \}$ and CFT correlators as $\langle \ldots \rangle$, we then claim that [@footnote1] $$\label{eq:2pt} \begin{split} &{\mathbb{E}}\left\{ \left(A_1^{q_1-q_2} A_2^{q_2-q_3} \cdots A_n^{q_n}\right)(R,C) \right\} \\ &= a^{2\Delta_{q_1 \ldots\, q_n}} \big\langle \varphi_{q_1 \ldots\, q_n}({\bm{r}}) \, \Phi({\bm{c}}) \big\rangle \,, \end{split}$$ where $q_1, \ldots, q_n$ are complex numbers, $\varphi_{q_1\dots\, q_n}$ is a CFT primary
--- abstract: | We present a finite difference method to compute the principal eigenvalue and the corresponding eigenfunction for a large class of second order elliptic operators including notably linear operators in nondivergence form and fully nonlinear operators.\ The principal eigenvalue is computed by solving a finite-dimensional nonlinear min-max optimization problem. We prove the convergence of the method and we discuss its implementation. Some examples where the exact solution is explicitly known show the effectiveness of the method. author: - 'Isabeau Birindelli[^1]' - 'Fabio Camilli[^2]' - 'Italo Capuzzo Dolcetta[^3]' date: 'version: ' title: On the approximation of the principal eigenvalue for a class of nonlinear elliptic operators --- [**MSC 2000**]{}: : 35J60, 35P30, 65M06. [**Keywords**]{}: : Principal eigenvalue, nonlinear elliptic operators, finite difference schemes, convergence. Introduction {#intro} ============ Consider the elliptic self-adjoint operator $$\label{Lself} Lu(x)={{\partial}}_i\left(a_{ij}(x){{\partial}}_{j}u(x)\right),$$ where $a_{ij}=a_{ji}$ are smooth functions in $ \Omega$, a smooth bounded open subset of ${{\mathbb R}}^n$, satisfying $a_{ij}\xi_i \xi_j\ge \alpha|\xi|^2$ for some $\alpha>0$. It is well-known that the minimum value $\lambda_1$ in the Rayleigh-Ritz variational formula $$\lambda_1= \inf_{{\varphi}\in H^1_0(\Omega), {\varphi}\not\equiv 0} \frac{- \int_{\Omega} {\varphi}(x)\, L{\varphi}(x) \,\,dx\, }{ \|{\varphi}\|^2_{L^2(\Omega)}}=\inf_{{\varphi}\in H^1_0(\Omega), {\varphi}\not\equiv 0} \frac{\int_{\Omega} a_{ij}(x){{\partial}}_{j} {\varphi}(x){{\partial}}_i{\varphi}(x) \,\,dx\, }{ \|{\varphi}\|^2_{L^2(\Omega)}}$$ is attained at some function $w_1$ satisfying $$\left\{ \begin{array}{ll} Lw_1(x)+\lambda_1 w_1(x)=0 \qquad& x\in {\Omega}, \\ w_1(x)=0 & x\in \partial{\Omega}. 
\end{array} \right.$$ The number $\lambda_1$ is usually referred to as the principal eigenvalue of $L$ in $\Omega$ and $w_1$ is the corresponding principal eigenfunction. For operators of the form and also more general linear operators in divergence form, there is a vast literature on computational methods for the principal eigenvalue, see for example [@BO], [@B], [@H], [@W]. General non-divergence type elliptic operators, namely $$\label{Lnonself} Lu(x)=a_{ij}(x){{\partial}}_{ij}u(x)+b_i(x){{\partial}}_i u(x)+c(x)u$$ are not self-adjoint and the spectral theory is then much more involved: in particular, the Rayleigh-Ritz variational formula is not available anymore. In the seminal paper [@DV2] by M.D. Donsker and S.R.S. Varadhan, a min-max formula for the principal eigenvalue of a class of elliptic operators $L$ including (\[Lnonself\]) was proved, namely $$\label{PE2intro} {\lambda}_{1}=-\inf_{{\varphi}\in C^2({\Omega}), {\varphi}>0}\;\sup_{x\in{\Omega}}\frac{L{\varphi}(x)}{{\varphi}(x)}.$$ In that paper, other representation formulas for ${\lambda}_{1}$ were also proposed in terms of large deviations and of the average long run time behavior of the positive semigroup generated by $L$. A further crucial step in that direction is the paper [@BNV] by H. Berestycki, L. Nirenberg and S.R.S. Varadhan, where the validity of formula (\[PE2intro\]) is proved under mild smoothness assumptions ($\Omega$ a bounded open set and $a_{ij}\in C^0(\Omega)$, $b_i$, $c\in L^\infty(\Omega)$). Moreover it is proved that is equivalent to $${\lambda}_1:=\sup\{{\lambda}\in{{\mathbb R}}:\, \exists\, {\varphi}>0 \;\text{ such that}\;L{\varphi}+{\lambda}{\varphi}\le 0\quad\text{in ${\Omega}$}\}.\,$$ Following this path of ideas, notions of principal eigenvalue for fully nonlinear uniformly elliptic operators of the form $$F[u]= F(x, u(x), Du(x), D^2 u(x))$$ have been introduced and analyzed in [@A], [@BCDPR], [@BD], [@BEQ], [@IY], [@L]. 
A by now established definition of principal eigenvalue is given by $$\label{PEC} {\lambda}_1:=\sup\{{\lambda}\in{{\mathbb R}}:\, \exists\, {\varphi}>0 \;\text{ such that}\;F[{\varphi}]+{\lambda}{\varphi}\le 0\quad\text{in ${\Omega}$}\}\,$$ where the inequality in is intended in the viscosity sense. It is possible to prove under appropriate assumptions, see -, that there exists a viscosity solution $w_1$ of $$\label{PE} \left\{ \begin{array}{ll} F[w_1]+ {\lambda}_1 w_1(x)=0 \qquad& x\in {\Omega}, \\ w_1(x)=0 & x\in \partial{\Omega}. \end{array} \right.$$ Moreover the characterization still holds in this nonlinear setting. As is well known, the principal eigenvalue plays a key role in several respects, both in the existence theory and in the qualitative analysis of elliptic partial differential equations, as well as in applications to large deviations [@A], [@DV2], bifurcation issues [@L], ergodic and long run average cost problems in stochastic control [@BEN]. For linear non self-adjoint operators and, a fortiori, for nonlinear ones the principal eigenvalue can be explicitly computed only in very special cases, see e.g. [@BL; @Pu], hence the importance of devising numerical algorithms for the problem. However, apart from some specific cases (see [@BEM] for the $p$-Laplace operator), approximation schemes and computational methods are not available in the literature, at least to the best of our knowledge. The aim of this paper is to define a numerical scheme for the principal eigenvalue of nonlinear uniformly elliptic operators via a finite difference approximation of formula . 
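To see the min-max characterization at work in the simplest possible setting, take the one-dimensional model operator $F[u]=u''$ on $(0,1)$ with Dirichlet conditions, whose principal eigenvalue is $\pi^2$. The sketch below (grid size chosen arbitrarily) discretizes with the standard three-point stencil and checks that the ratio of the discretized operator to a positive function is constant exactly at the discrete principal eigenfunction, while other positive test functions only yield lower bounds:

```python
import numpy as np

M = 200                              # interior grid points (arbitrary choice)
h = 1.0 / (M + 1)
x = np.linspace(h, 1.0 - h, M)

# Three-point stencil: F_h[u](x_i) = (u_{i+1} - 2 u_i + u_{i-1}) / h^2,
# with the Dirichlet boundary values u_0 = u_{M+1} = 0 built in.
L = (np.diag(np.full(M, -2.0))
     + np.diag(np.ones(M - 1), 1)
     + np.diag(np.ones(M - 1), -1)) / h**2

# Discrete principal eigenpair: smallest eigenvalue of -L and its
# one-signed eigenvector.
evals, evecs = np.linalg.eigh(-L)
lam_h, w = evals[0], np.abs(evecs[:, 0])

# At phi = w the ratio F_h[phi]/phi is the constant -lam_h, so the
# min-max value -inf_phi sup_x F_h[phi](x)/phi(x) is attained there.
assert np.allclose((L @ w) / w, -lam_h, atol=1e-5)

# A generic positive phi only gives a lower bound (Collatz-Wielandt).
phi = np.sin(np.pi * x) + 0.3 * np.sin(2.0 * np.pi * x) + 0.5
assert -np.max((L @ phi) / phi) <= lam_h + 1e-10

# Consistency with the continuum principal eigenvalue pi^2.
assert abs(lam_h - np.pi**2) < 1e-2
```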
More precisely, denoting by ${{\mathbb Z}}^n_h=h{{\mathbb Z}}^n$ the orthogonal lattice in ${{\mathbb R}}^n$ where $h>0$ is a discretization parameter, we consider a discrete operator $F_h$ acting on functions defined on a discrete subset ${\Omega}_h\subset{{\mathbb Z}}^n_h$ of $\Omega$ and the corresponding approximated version of , namely $$\label{PE3intro} {\lambda}_{1,h}=-\inf_{{\varphi}>0}\sup_{x\in{\Omega}_h}\frac{F_h[{\varphi}](x)}{{\varphi}(x)}.$$ As for the approximating operators $F_h$, we consider a specific class of finite difference schemes introduced in [@KT0], [@KT1] since they satisfy some useful properties for the convergence analysis. We prove that if $F$ is uniformly elliptic and satisfies in addition some quite natural further conditions, then it is possible to define a finite difference scheme $F_h$ such that the discrete principal eigenvalues ${\lambda}_{1,h}$ and the associated discrete eigenfunctions $w_{1,h }$ converge uniformly in ${\Omega}$, as the mesh step $h$ is sent to $0$, respectively to the principal eigenvalue ${\lambda}_1$ and to the corresponding eigenfunction $w_1$ for the original problem (\[PE\]). It is worth pointing out that the proof of our main convergence result, Theorem \[main\], cannot rely on standard stability results for fully nonlinear partial differential equations, see [@BS], since the limit problem does not satisfy a comparison principle (see Remark \[convergence\] for details). We mention that our approach is partially inspired by the paper [@GO] where a similar approximation scheme is proposed for the computation of effective Hamiltonians occurring in the homogenization of Hamilton-Jacobi equations which can be characterized by a formula somewhat similar to . In Section \[sect2\] we introduce the main assumptions and we investigate some issues related to the Maximum Principle for discrete operators. 
In Section \[sect3\] we study the approximation method for a class of finite difference schemes and we prove the convergence of the scheme. In Section \[sect4\] we show that under some additional structural assumptions on $F_h$ the inf-sup problem can be transformed into a convex optimization problem on the nodes of the grid and we discuss its implementation. A few tests which show the efficiency of our method on some simple examples are reported in Section \[sect4\] as well. The Maximum Principle for discrete operators {#sect2} ============================================ We start by fixing some notations and the assumptions on the operator $F$. Set $\Gamma={\Omega}\times{{\mathbb R}}\times{{\
--- abstract: 'We address the relation between star formation and AGN activity in a sample of 231 nearby ($0.0002<z<0.0358$) early type galaxies by carrying out a multi-wavelength study using archival observations in the UV, IR and radio. Our results indicate that early type galaxies in the current epoch are rarely powerful AGNs, with $P<10^{22}\,WHz^{-1}$ for a majority of the galaxies. Only massive galaxies are capable of hosting powerful radio sources while less massive galaxies are hosts to lower radio power sources. Evidence of ongoing star formation is seen in approximately 7% of the sample. The SFR of these galaxies is less than 0.1 $M_{\odot}yr^{-1}$. They also tend to be radio faint ($P<10^{22}\,WHz^{-1}$). There is a nearly equal fraction of star forming galaxies in radio faint ($P<10^{22}\,WHz^{-1}$) and radio bright galaxies ($P\geq10^{22}\,WHz^{-1}$) suggesting that both star formation and radio mode feedback are constrained to be very low in our sample. We notice that our galaxy sample and the Brightest Cluster Galaxies (BCGs) follow similar trends in radio power versus SFR. This may be produced if both radio power and SFR are related to stellar mass.' author: - 'Sravani Vaddi, Christopher P. O’Dea, Stefi A. Baum, Samantha Whitmore, Rabeea Ahmed, Katherine Pierce , Sara Leary' title: 'Constraints on Feedback in the local Universe: The relation between star formation and AGN activity in early type galaxies' --- Introduction ============ It is now well known that supermassive black holes (SBH) are present in the centers of massive galaxies [@kormendy1995] and share interesting correlations with the host galaxy properties such as the velocity dispersion [@ferrarese2000; @gebhardt2000], bulge mass [@haring2004], bulge luminosity [@kormendy1995; @magorrian1998] and galaxy light concentration [@graham2001]. These empirical correlations suggest that the growth of the central SBH and the host galaxy are fundamentally interlinked. 
AGN feedback may be responsible for the correlations observed [@silkrees1998; @king2003; @fabian2012], although it has also been argued that the origin of the observed relations is entirely non-causal and is a natural consequence of merger driven galaxy growth [@peng2007; @jahnke2011; @graham2013]. The energy released from the central SBH is several orders of magnitude greater than the binding energy of massive galaxies [@fabian2012]. This energy has the potential to expel gas from the galaxy (radiative-mode feedback) or deposit energy into the surroundings and thus heat up the intergalactic medium (mechanical feedback). These two modes may operate at different redshifts and accretion rates and together regulate the growth of the black hole and the galaxy [review of @mcnamara2007; @fabian2012; @churazov2005].\ Various theoretical models that invoke AGN feedback in galaxy evolution are also able to successfully reproduce the observed galaxy luminosity function [@silkrees1998; @king2003; @granato2004; @dimatteo2005; @springel2005; @croton2006; @hopkins2008]. This theoretical picture has been supported by numerous observations. The strongest evidence comes from the brightest cluster galaxies (BCG) of cool core clusters, whose powerful radio jets have swept out cavities in the intracluster medium (ICM) [@rosner1989; @allen2001; @mcnamara2007]. In some individual galaxies, energy transported into the ISM via AGN-driven outflows is observed to remove gas from the central regions of the galaxy [@crenshaw2003; @nesvadba2007; @alexander2010; @morganti2013]. All these observations show the negative effect of AGN feedback: removing or heating up the gas, and thereby suppressing star formation and regulating galaxy growth. However, several other theoretical studies [@begelman1989; @rees1989; @silk2005; @santini2012; @silk2013] have reported an increased star formation rate in AGN hosts, especially at high redshifts, via pressure induced by jets/winds. 
All this evidence thus far has been obtained mostly from studies of large groups, clusters and galaxies at higher redshifts. At low redshifts, the majority of the luminous AGNs reside in early type galaxies [@mclure1999; @bahcall1997]. But how common AGN feedback is in the local universe is not yet well understood. To explore this, we focused our attention on a carefully selected sample of nearby early type galaxies and studied them at multiple wavelengths. We describe the sample selection in Section \[sample\]. In Section \[data\], we discuss the data, the steps carried out to retrieve the magnitudes in the UV and in the IR, the fluxes from radio images, and the extinction correction. We present our results in Section \[results\]. In Section \[summary\] we discuss the implications of the results. In the Appendix, we describe the photometry technique in more detail. The sample {#sample} ========== This study is focused on a sample of early type (ellipticals and S0) galaxies that are present at low redshift. The sample was selected from the *Two Micron All Sky Survey* ([*2MASS*]{}; [@jarrett2003]) sources that have an apparent $K_{s}$ band (2.2 $\mu$m) magnitude of 13.5 or brighter and whose positions correlate with the Chandra archive of ACIS-I and ACIS-S observations (C. Jones, private communication). A total of 231 galaxies were identified. The Chandra selection criterion was used to create a sample which would allow a study of the nature of AGN activity in early type galaxies. X-ray emission is detected for approximately 80% of the galaxies. The X-ray luminosities of the nuclei range from $10^{38}$ to $10^{41}$ erg $s^{-1}$. The Eddington ratios are measured to be small, $\sim$ $10^{-5}$ to $10^{-9}$, suggesting that these galaxies are low-luminosity AGNs [@jones2013]. In this paper, we present a parallel study on the star formation in the sample. In a second paper in this series, we will study the relation between the X-ray properties and the star formation. 
The data are homogeneous and since the sample is not selected based on specific properties in radio or UV, the data can be considered to be unbiased regarding their star formation and radio source properties. Our large dataset at low redshift allows us to study the interplay between star formation and AGN activity in typical galaxies in the current epoch.\ Figure  \[fig:redshift\_hist\] shows the redshift distribution of the sample. All the galaxies are nearby galaxies. The redshift range of the galaxies in the sample is $0.0002<z<0.0358$ with a median of $z=0.006$, and 63% are at a redshift of less than 0.01. Adopting a Hubble constant of $71 \,km\, s^{-1}Mpc^{-1}$ [@jarosik2011], $z=0.01$ corresponds to a distance of 42 Mpc and $1''$ corresponds to a scale of 210 pc.\ ![Figure shows the histogram of redshift for the sample indicating that most of the galaxies are at low redshifts. Redshift of 0.01 corresponds to a distance of 42 Mpc.[]{data-label="fig:redshift_hist"}](fig1.pdf){width="50.00000%"} The Data {#data} ======== We examine observations in multiple wavelengths for this study, namely radio, IR and UV. Infrared observations were collected from the *Wide-field Infrared Survey Explorer* [*WISE*]{}[@wright2010] and [*2MASS*]{}[@jarrett2003]. We use the [$K_s$]{}band to trace the stellar mass distribution; [*WISE*]{}and [*GALEX*]{}data to study star formation and radio data at 1.4 GHz to study AGN properties. Basic and observational properties for a subset of the sample are given in Table  \[tab:sample\]. A complete list of the sample properties in machine readable format can be obtained in the online version.\ IR data ------- The Two Micron All Sky Survey ([*2MASS*]{}) was conducted in the near-infrared $J(1.25 \mu m)$, $H(1.65 \mu m)$ and $K_{s}(2.16 \mu m)$ wavebands using two 1.3 m diameter telescopes with a resolution of $\sim$ 2–3$''$. The detectors are sensitive to point sources brighter than 1 mJy at the 10$\sigma$ level. 
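As a quick aside, the redshift-distance-scale conversion quoted in the previous section follows from the low-redshift Hubble law $D \approx cz/H_0$. This is a back-of-envelope check only, ignoring the distinction between distance measures, which is negligible at $z=0.01$:

```python
import math

c = 299792.458                       # speed of light, km/s
H0 = 71.0                            # adopted Hubble constant, km/s/Mpc
z = 0.01

D_mpc = c * z / H0                   # low-z Hubble law
assert abs(D_mpc - 42.0) < 0.5       # matches the quoted 42 Mpc

arcsec = math.pi / (180.0 * 3600.0)  # one arcsecond in radians
scale_pc = D_mpc * 1.0e6 * arcsec    # physical size subtended by 1 arcsec
assert 195.0 < scale_pc < 215.0      # consistent with the quoted ~210 pc
```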
The astrometric accuracy is on the order of 100 mas. The camera contains three NICMOS 256$\times$256 HgCdTe arrays. The *WISE* mission observed the entire sky at four infrared wavebands - $W1$ at 3.4 $\mu$m, $W2$ at 4.6 $\mu$m, $W3$ at 12 $\mu$m, and $W4$ at 22 $\mu$m with angular resolutions of 6.1$''$, 6.4$''$, 6.5$''$ and 12.0$''$, respectively. The field of view (FOV) is 47$'$. The short wavelength detectors are HgCdTe arrays whereas the long wavelength detectors are Si:As BIB arrays. The arrays are 1024$\times$1024 pixels in size. WISE has a 5$\sigma$ point source sensitivity better than 0.08, 0.11, 1 and 6 mJy at 3.4, 4.6, 12 and 22 $\mu m$ wavelengths, respectively.\ Rather than use the existing cataloged values, we chose
--- abstract: 'We report results of Raman scattering experiments on twin-free BaFe$_2$As$_2$ with the main focus placed on understanding the influence of electronic and spin degrees of freedom on the lattice dynamics. In particular, we scrutinize the [$E_{g}$]{}modes and the As [$A_{1g}$]{}mode. Each of the two [$E_{g}$]{}phonons in the tetragonal phase is observed to split into a [$B_{2g}$]{}and a [$B_{3g}$]{}mode upon entering the orthorhombic stripe-magnetic phase. The splitting amounts to approximately 10cm$^{-1}$ and less than 5cm$^{-1}$ for the low- and the high-energy [$E_{g}$]{}mode, respectively. The detailed study of the fully symmetric As mode using parallel incident and outgoing photon polarizations along either the antiferromagnetic or the ferromagnetic Fe-Fe direction reveals an anisotropic variation of the spectral weight with the energy of the exciting laser, indicating a polarization-dependent resonance effect. Along with the experiments we present results from density functional theory calculations of the phonon eigenvectors, the dielectric function, and the Raman tensor elements. The comparison of theory and experiment indicates that (i) orbital-selective electronic correlations are crucial to understand the lattice dynamics and (ii) all phonon anomalies originate predominantly from the magnetic ordering and the corresponding reconstruction of the electronic bands at all energies.' author: - 'A. Baum' - Ying Li - 'M. Tomić' - 'N. Lazarević' - 'D. Jost' - 'F. Löffler' - 'B. Muschler' - 'T. Böhm' - 'J.-H. Chu' - 'I. R. Fisher' - 'R. Valentí' - 'I.I. Mazin' - 'R. Hackl' title: | Interplay of lattice, electronic and spin degrees of freedom in detwinned BaFe2As2:\ a Raman scattering study --- Introduction ============ One of the most debated issues in Fe-based superconductors is the interplay of spin, orbital and lattice degrees of freedom at the onset of magnetism, nematicity and superconductivity. 
[@Sefat:2011; @Wang:2015; @Gallais:2016a; @YiM:2017; @Bohmer:2018] Actually, phonons may play a decisive role for probing subtle changes of the electronic and magnetic properties. For instance, soon after the discovery of Fe-based superconductors the magnetic moment was predicted to couple to the As position. [@Yildirim:2009a] Zbiri *et al.* found a modulation of the electronic density of states at the Fermi energy $E_\mathrm{F}$ by the two [$E_{g}$]{}and the [$A_{1g}$]{}modes. [@Zbiri:2009i] Various anomalies were observed experimentally using neutron, Raman and optical spectroscopy, [@Chauviere:2009; @Chauviere:2011; @Rahlenbeck:2009; @Kumar:2010; @Mittal:2009; @Gnezdilov:2013; @Gnezdilov:2011; @Akrap:2009] but are not fully understood yet. One particular effect is the observation of substantial Raman scattering intensity of the As phonon below the magneto-structural transition in crossed polarizations with the electric fields oriented along the axes of the pseudo-tetragonal 2Fe unit cell [@Chauviere:2011] \[For the definition of the axes see Fig. \[fig:orth\_phase\](a)\]. García-Martínez *et al.* argued that magnetism sufficiently modifies the low-energy electronic structure to explain this anomalous intensity.[@Garcia:2013] Recent experiments seem to support this view [@Wu:2017dec] upon comparing spectra obtained with parallel and crossed polarizations in twinned samples with the incident field oriented along the $a$ and the scattered field either along the $a$ or $b$ axis, respectively. Yet, to which extent the phonons are affected by correlations and magnetic-ordering-induced changes in the electronic structure at energies in the range of the photon energies is still unclear. In this work we address this issue both experimentally and theoretically and investigate how magnetism and the combination of moderately correlated Fe $d$ states and uncorrelated As $p$ states affect such complex spectroscopic properties as, for instance, resonant Raman scattering. 
In particular, we try to clarify whether the observed anomalous intensity of the As mode is a low- or a high-energy phenomenon and aim at identifying the driving force behind the ordering instabilities. In our study we find that very good agreement between experimental observations and density functional theory (DFT) calculations can be achieved in both the paramagnetic and the antiferromagnetic state of BaFe$_2$As$_2$ if two physically motivated modifications are made to the standard DFT electronic bands. On the one hand, we need to account for the fact that the high-temperature tetragonal phase is paramagnetically disordered, and cannot be simulated by calculations with suppressed local magnetism.[@Mazin:2008a] On the other hand, it appears necessary not only to introduce an antiferromagnetic order in the calculations, but also to account for strong correlations. The latter is achieved by separating the energy bands into two regions, a high-energy region with predominantly As states and a low-energy region with predominantly Fe states. The Fe states are then appropriately renormalized. With these two assumptions we can reproduce (i) the positions of the Raman active phonons and their splitting and evolution in the (mechanically detwinned) orthorhombic antiferromagnetic state and (ii) Raman intensities, including the $\tilde{a}-\tilde{b}$ anisotropy as well as the complex resonant evolution with the laser light frequency. This agreement gives an experimental justification to the proposed computational procedure and convincingly substantiates the physical concepts it was derived from, namely the pivotal role of local moments in the lattice dynamics of Fe-based superconductors, and the importance of band renormalizations for $d$-electrons. ![(Color online) FeAs layer of BaFe$_2$As$_2$ and detwinning clamp. (a) The As-atoms (grey) in the center and at the edges are below and, respectively, above the Fe plane (red). 
For this reason, the 2Fe unit cell with the axes $a$ and $b$ (green) is determined by the As atoms. In the orthorhombic phase the Fe-Fe distances become inequivalent with the distortion strongly exaggerated here. The magnetic unit cell is twice as large as the 2Fe unit cell and has the axes $\tilde{a}$ and $\tilde{b}$. (b) Schematic sketch and (c) photograph of the detwinning clamp. The sample (4) is glued on the copper plate (1) which is in good thermal contact with the sample holder (3). Upon tightening the screws (5) the force exerted by the copper-beryllium cantilever (2) can be adjusted. (d) Schematic representation of the geometry of our Raman scattering experiment. All incoming light polarizations which are not parallel to $y$ have finite projections on the $c$ axis (red arrow).[]{data-label="fig:orth_phase"}](./Baum_fig1.pdf){width="8.5cm"} Methods ======= Samples {#sec:samples} ------- The crystal was prepared using a self-flux technique. Details of the crystal growth and characterization are described elsewhere.[@Chu:2009] BaFe$_2$As$_2$ is a parent compound of double-layer iron-based superconductors and orders in a stripe-like spin-density-wave (SDW) state below ${\ensuremath{T_\mathrm{SDW}}\xspace}\approx \mathrm{135\,K}$. Superconductivity can be obtained by substituting any of the ions or by pressure.[@Kimber:2009i] In Ba(Fe$_{1-x}$Co$_x$)$_2$As$_2$ ($0<x\lesssim 0.06$) the SDW is preceded by a structural phase transition from a tetragonal ($I4/mmm$) to an orthorhombic ($Fmmm$) lattice at ${\ensuremath{T_\mathrm{s}}\xspace}> {\ensuremath{T_\mathrm{SDW}}\xspace}$.[@Chu:2009] It remains a matter of debate as to whether or not [$T_\mathrm{SDW}$]{}and [$T_\mathrm{s}$]{}coincide in BaFe$_2$As$_2$.[@Chu:2009; @Kim:2011b] Fig. \[fig:orth\_phase\](a) shows the relation of the various axes. The axes of the tetragonal crystal ($T > T_{\mathrm{s}}$, green lines) are denoted $a$ and $b$ with $a = b$. 
The axes of the magnetically ordered structure (4Fe per unit cell, black lines), $\tilde{a}$ and $\tilde{b}$, differ by approximately 0.7% below [$T_\mathrm{SDW}$]{}[@Rotter:2008] and the Fe-Fe distance along the $\tilde{b}$ axis becomes shorter than along the $\tilde{a}$ axis as sketched in Figure \[fig:orth\_phase\](a). As a result, the angle between $a$ and $b$ differs from 90$^{\circ}$ by approximately 0.4$^{\circ}$. Below [$T_\mathrm{SDW}$]{}the spins order ferromagnetically along $\tilde{b}$ and antiferromagnetically along $\tilde{a}$. Due to the small difference between $\tilde{a}$ and $\tilde{b}$ the crystals are twinned below [$T_\mathrm{s}$]{}, and the orthogonal $\tilde{a}$ and $\tilde{b}$ axes change roles at twin boundaries running along the directions of the tetragonal
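The two quoted numbers are geometrically consistent: if the pseudo-tetragonal axes $a$ and $b$ are taken as the diagonals of the orthorhombic cell spanned by $\tilde{a}$ and $\tilde{b}$ (an assumption made here only for this estimate), a 0.7% difference between $\tilde{a}$ and $\tilde{b}$ gives an angle deviation of about $0.4^{\circ}$:

```python
import math

delta = 0.007                        # ~0.7% orthorhombic distortion
ratio = 1.0 - delta                  # b~/a~

# Angle between the diagonals of a rectangle with sides a~ and b~.
angle_deg = 2.0 * math.degrees(math.atan(ratio))
deviation = 90.0 - angle_deg
assert abs(deviation - 0.4) < 0.05   # matches the quoted ~0.4 degrees
```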
--- abstract: 'In this study, we consider an empirical Bayes method for Boltzmann machines and propose an algorithm for it. The empirical Bayes method allows estimation of the values of the hyperparameters of the Boltzmann machine by maximizing a specific likelihood function referred to as the empirical Bayes likelihood function in this study. However, the maximization is computationally hard because the empirical Bayes likelihood function involves intractable integrations of the partition function. The proposed algorithm avoids this computational problem by using the replica method and the Plefka expansion. Our method does not require any iterative procedures and is quite simple and fast, though it introduces a bias to the estimate, which exhibits an unnatural behavior with respect to the size of the dataset. This peculiar behavior is supposed to be due to the approximate treatment by the Plefka expansion. A possible extension to overcome this behavior is also discussed.' author: - Muneki Yasuda - Tomoyuki Obuchi bibliography: - 'citation.bib' title: Empirical Bayes Method for Boltzmann Machines --- Introduction {#sec:intro} ============ *Boltzmann machine learning* (BML) [@Ackley_etal1985] has been actively studied in the field of machine learning and also in statistical mechanics. In statistical mechanics, the problem of BML is sometimes referred to as the *inverse Ising problem*, because a Boltzmann machine is the same as an Ising model, and BML can be regarded as an inverse problem for the Ising model. The framework of the *usual* BML is as follows. Given a set of observed data points (e.g., spin snapshots), we estimate appropriate values of the parameters, the external field and couplings, of our Boltzmann machine through maximum likelihood (ML) estimation (cf. Sec. \[sec:BM\]). 
Because BML involves intractable multiple summations (i.e., evaluation of the partition function), many approximations for it were proposed from the viewpoint of statistical mechanics [@Roudi2009]: for example, methods based on mean-field approximations (such as the Plefka expansion [@Plefka1982] and the cluster variation method [@CVM-review2005])  and methods based on other approximations [@MPF2011; @SMCI2015]. In this study, we focus on another type of learning problem. We consider prior distributions of parameters of the Boltzmann machine and assume that the prior distributions are governed by some hyperparameters. The introduction of the prior distributions is strongly connected with the regularized ML estimation (cf. Sec. \[sec:BM\]). As mentioned above, the aim of the *usual* BML is to optimize the values of the parameters of the Boltzmann machine by using a set of observed data points. Meanwhile, the aim of the problem investigated in this study is the estimation of appropriate values of the hyperparameters from the dataset without estimating specific values of the parameters. One way to allow us to accomplish this from the Bayesian point of view is the *empirical Bayes method* (or also called type-II ML estimation or evidence approximation) [@MacKay1992; @Bishop2006] (cf. Sec. \[sec:Framework\_EB\]). The schemes of the *usual* BML and of our problem are illustrated in Fig. \[fig:Scheme\_of\_EBM\]. ![Illustration of scheme of empirical Bayes method considered in this study.[]{data-label="fig:Scheme_of_EBM"}](framework_EBM.eps){height="3.4cm"} However, the evaluation of the likelihood function in the empirical Bayes method is again intractable, because it involves intractable multiple integrations of the partition function. 
In this study, we analyze the empirical Bayes method for fully-connected Boltzmann machines, using statistical mechanical techniques based on the replica method [@ParisiBook1987; @Nishimori2001] and the Plefka expansion to derive an algorithm for it. We consider two types of cases of the prior distribution of $\bm{J}$: the cases of Gaussian and Laplace priors. The rest of this paper is organized as follows. The formulations of the *usual* BML and the empirical Bayes method are presented in Sec. \[sec:BM&EB\]. In Sec. \[sec:StatisticalMechanicalAnalysis\], we describe our statistical mechanical analysis for the empirical Bayes method. The proposed inference algorithm obtained from our analysis is shown in Sec. \[sec:algorithm\] with its pseudocode. In Sec. \[sec:experiment\], we examine our proposed method through numerical experiments. Finally, the summary and some discussions are presented in Sec. \[sec:summary\]. Boltzmann Machine and Empirical Bayes Method {#sec:BM&EB} ============================================ Boltzmann machine and prior distributions {#sec:BM} ----------------------------------------- Consider a fully-connected Boltzmann machine with $n$ Ising variables $\bm{S}:= \{S_i \in \{-1,+1\} \mid i = 1,2,\ldots, n\}$ [@Ackley_etal1985]: $$\begin{aligned} P(\bm{S} \mid h,\bm{J}):=\frac{1}{Z(h,\bm{J})}\exp\Big(h \sum_{i=1}^n S_i + \sum_{i<j}J_{ij}S_iS_j\Big), \label{eqn:BoltzmannMachine}\end{aligned}$$ where $\sum_{i<j}$ is the sum over all the distinct pairs of variables; i.e., $\sum_{i<j} = \sum_{i=1}^n\sum_{j = i+1}^n$. $Z(h,\bm{J})$ is the partition function defined by $$\begin{aligned} Z(h,\bm{J}):= \sum_{\bm{S}}\exp\Big(h \sum_{i=1}^n S_i + \sum_{i<j}J_{ij}S_iS_j\Big),\end{aligned}$$ where $\sum_{\bm{S}}$ is the sum over all the possible configurations of $\bm{S}$; i.e., $\sum_{\bm{S}} := \prod_{i=1}^n \sum_{S_i = \pm 1}$. 
The parameters, $h \in (-\infty, +\infty)$ and $\bm{J} := \{J_{ij} \in (-\infty, +\infty) \mid i<j\}$, denote the external field and couplings, respectively. Given $N$ observed data points, $\mcal{D}:=\{\mbf{S}^{(\mu)} \in \{-1,+1\}^n \mid \mu = 1,2,\ldots, N\}$, we define the log-likelihood function: $$\begin{aligned} L_{\mrm{ML}}(h,\bm{J}):=\frac{1}{n N}\sum_{\mu = 1}^N \ln P(\mbf{S}^{(\mu)} \mid h,\bm{J}). \label{eqn:log-likelihood}\end{aligned}$$ Maximizing the log-likelihood function with respect to $h$ and $\bm{J}$ (i.e., the ML estimation) just corresponds to the BML (or the inverse Ising problem), i.e., $$\begin{aligned} \{\hat{h}_{\mrm{ML}},\hat{\bm{J}}_{\mrm{ML}}\} = \argmax_{h, \bm{J}}L_{\mrm{ML}}(h,\bm{J}). \label{eqn;InverseIsing}\end{aligned}$$ Now, we introduce prior distributions for the parameters $h$ and $\bm{J}$ as $P_{\mrm{prior}}(h\mid H)$ and $$\begin{aligned} P_{\mrm{prior}}(\bm{J} \mid \gamma)&:= \prod_{i<j} P_{\mrm{prior}}(J_{ij} \mid \gamma), \label{eqn:prior_J}\end{aligned}$$ respectively. $H$ and $\gamma$ are the hyperparameters of these prior distributions. One of the most important motivations for introducing the prior distributions is for a Bayesian interpretation of the regularized ML estimation [@Bishop2006]. Given the observed dataset $\mcal{D}$, by using the prior distributions, the posterior distribution of $h$ and $\bm{J}$ is expressed as $$\begin{aligned} &P_{\mrm{post}}(h,\bm{J} \mid \mcal{D}, H, \gamma) \nn &= \frac{P(\mcal{D} \mid h, \bm{J})P_{\mrm{prior}}(h\mid H)P_{\mrm{prior}}(\bm{J} \mid \gamma)}{P(\mcal{D} \mid H, \gamma)}, \label{eqn;posterior_H&J}\end{aligned}$$ where $$\begin{aligned} P(\mcal{D} \mid h, \bm{J}):= \prod_{\mu = 1}^N P(\mbf{S}^{(\mu)} \mid h,\bm{J}).\end{aligned}$$ The distribution in the denominator in Eq. (\[eqn;posterior\_H&J\]), $P(\mcal{D} \mid H, \gamma)$, is sometimes referred to as the evidence. 
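For small $n$ the quantities above can be evaluated by brute force, a useful reference point even though this is exactly the computation that becomes intractable at realistic sizes. The sketch below (toy parameter values assumed) enumerates all $2^n$ configurations to obtain $Z(h,\bm{J})$ and the log-likelihood of a synthetic dataset drawn from the model itself:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 8                               # small enough for exact enumeration

h = 0.1                             # toy external field
J = np.triu(rng.normal(scale=0.1, size=(n, n)), 1)   # couplings J_ij, i < j

def energy(S):
    """Exponent h * sum_i S_i + sum_{i<j} J_ij S_i S_j of the Boltzmann machine."""
    return h * S.sum() + S @ J @ S

# Partition function Z(h, J): sum over all 2^n spin configurations.
configs = np.array(list(itertools.product([-1, 1], repeat=n)))
weights = np.exp([energy(S) for S in configs])
Z = weights.sum()

# Synthetic dataset of N points sampled from P(S | h, J) itself.
probs = weights / Z
data = configs[rng.choice(len(configs), size=200, p=probs)]
N = len(data)

# Log-likelihood L_ML = (1/(nN)) sum_mu ln P(S^mu | h, J).
L_ML = sum(energy(S) - np.log(Z) for S in data) / (n * N)
assert L_ML < 0.0                   # every configuration has probability < 1
```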
By using the posterior distribution, the maximum a posteriori (MAP) estimation of the parameters is obtained as $$\begin{aligned} \{\hat{h}_{\mrm{MAP}},\hat{\bm{J}}_{\mrm{MAP}}\} = \argmax_{h, \bm{J}}P_{\mrm{post}}(h,\bm{J} \mid \mcal{D}, H, \gamma).\end{aligned}$$ 
--- abstract: 'Magnetic resonance imaging (MRI) has great potential to improve prostate cancer diagnosis. It can spare men with a normal exam from undergoing invasive biopsy while making biopsies more accurate in men with lesions suspicious for cancer. Yet, the subtle differences between cancer and confounding conditions render the interpretation of MRI challenging. The tissue collected from patients who undergo pre-surgical MRI and radical prostatectomy provides a unique opportunity to correlate histopathology images of the entire prostate with MRI in order to accurately map the extent of prostate cancer onto MRI. Such mapping will help improve existing MRI interpretation schemes, e.g. PIRADS, and will facilitate the development of quantitative image analysis methods to assess the imaging characteristics of prostate cancer on MRI. Here, we introduce the RAPSODI (Radiology Pathology Spatial Open-Source multi-Dimensional Integration) framework for the registration of radiology and pathology images. RAPSODI relies on a three-step procedure that first reconstructs in three dimensions (3D) the resected tissue using the serial whole-mount histopathology slices, then registers corresponding histopathology and MRI slices, and finally maps the cancer outlines from the histopathology slices onto MRI. We tested RAPSODI in a phantom study where we simulated various conditions, e.g., tissue specimen rotation upon mounting on glass slides, tissue shrinkage during fixation, or imperfect slice-to-slice correspondences between histopathology and MRI images. Our experiments showed that RAPSODI can reliably correct for rotations within $\pm15^{\circ}$ and shrinkage up to 10%. We also evaluated RAPSODI in 89 patients from two institutions that underwent radical prostatectomy, yielding 543 histopathology slices that were registered to corresponding T2 weighted MRI slices. 
We found a Dice similarity coefficient of 0.98$ \pm $0.01 for the prostate, prostate boundary Hausdorff distance of 1.71$ \pm $0.48 mm, a urethra deviation of 2.91$ \pm $1.25 mm, and a landmark deviation of 2.88$ \pm $0.70 mm between registered histopathology images and MRI. Our robust framework successfully mapped the extent of disease from histopathology slices onto MRI and created ground truth labels for characterizing prostate cancer on MRI. Our open-source RAPSODI platform is available as a 3D Slicer plugin or as a stand-alone program and can be downloaded from <https://github.com/pimed/Slicer-RadPathFusion>.' author: - 'Mirabela Rusu[^1]' - 'Christian A. Kunder' - 'Nikola C. Teslovich' - 'Jeffrey B. Wang' - 'Rewa R. Sood' - Wei Shao - 'Leo C. Chen' - Robert West - Richard Fan - Pejman Ghanouni - 'James D. Brooks' - 'Geoffrey A. Sonn' bibliography: - 'draft.bib' title: 'Registration of pre-surgical MRI and whole-mount histopathology images in prostate cancer patients with radical prostatectomy via RAPSODI' --- Keywords: radiology-pathology registration $|$ prostate cancer $|$ whole-mount histopathology $|$ radical prostatectomy $|$ magnetic resonance imaging Introduction ============ Despite advances in diagnosis and treatment, prostate cancer remains the second leading cause of cancer death in American men [@siegel_cancer_2019]. Overdiagnosis of low-grade cancers that do not require treatment and the underdiagnosis of aggressive cancers are still a concern [@futterer_can_2015], even after the changes in the recommendation of prostate biopsy for elevated Prostate Specific Antigen (PSA). Magnetic Resonance Imaging (MRI) can help address all of these problems [@ahmed_diagnostic_2017]. When MRI is normal, up to 50% of men can safely avoid prostate biopsy, thereby reducing overdiagnosis of low-grade cancer and infectious complications of biopsy. However, this is only true when MRI is interpreted by world-leading experts [@van_der_leest_head--head_2019]. 
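The evaluation metrics quoted above, the Dice similarity coefficient and the boundary Hausdorff distance, can be computed as in the following generic NumPy sketch. This is our illustration, not the RAPSODI implementation, and the brute-force pairwise Hausdorff computation is only practical for contour-sized point sets.

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity of two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff_distance(pts_a, pts_b):
    """Symmetric Hausdorff distance between two point sets (rows are
    coordinates, e.g. boundary pixels scaled to mm). The full pairwise
    distance matrix makes this O(|A||B|), fine for prostate contours."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

Applied to a registered histopathology prostate mask and the corresponding MRI mask, these two numbers quantify volumetric overlap and worst-case boundary disagreement, respectively.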
In practice, lack of widespread expertise and alarming levels of inter-reader variation greatly reduce the potential of MRI to revolutionize prostate cancer diagnosis [@sonn_prostate_2017]. Both false negatives and false positives, even when using the recommended PIRADS reporting system [@weinreb_pi-rads_2016], are very common and the vast majority of men who undergo MRI still undergo biopsy. Finally, MRI has yet to supplant biopsy which is still required to confirm the presence and aggressiveness of prostate cancer [@barentsz_synopsis_2016]. In men diagnosed with prostate cancer on biopsy, radical prostatectomy remains the most common treatment [@cooperberg_trends_2015]. The resected prostate provides a unique opportunity to correlate pre-surgical MRI with digitized histopathology images and map the exact extent of cancer from histopathology images onto MRI. Developing a large dataset of prostatectomy cases where cancer and Gleason grade are accurately mapped on MRI has two potentially transformative applications. The first is helping to improve existing MRI interpretation schemes that are still affected by many false positive and false negative findings. Second, it may facilitate the development of machine learning methods to identify prostate cancer on MRI by accurate labeling of cancer for model training and validation.

| Publication | Subject # | Approach | Additional Input | Dice Coef. | Landmark Error (mm) |
|---|---|---|---|---|---|
| Park 2008 [@park_registration_2008] | 2 | 3D reconstruction + affine and TPS registration | block face picture, ex vivo MRI | NA | 3-3.74 |
| Chappelow 2011 [@chappelow_elastic_2011] | 25 | Feature Based Mutual Information + BSpline | - | NA | NA |
| Ward 2012 [@ward_prostate:_2012] | 13 | 2D Affine + TPS Registration | Strand-shaped fiducials, ex vivo MRI | NA | 1.1 |
| Kalavagunta 2014 [@kalavagunta_registration_2015] | 35 | Local affine registration | Internal landmarks, 3D printed molds | 0.99 | 1.54$\pm$0.64 |
| Reynolds 2015 [@reynolds_development_2015] | 6 | 2D TPS registration + deformable registration | Control points, ex vivo MRI, sectioning box | 0.93 | 3.3 |
| Li 2017 [@li_co-registration_2017] | 19 | Multi-Scale Representation + deformable registration | - | 0.96$\pm$0.01 | 2.96$\pm$0.76 |
| Losnegard 2018 [@losnegard_intensity-based_2018] | 12 | 3D histopathology reconstruction, 3D affine and deformable registration | - | 0.94 | 5.4 |
| Wu 2019 [@wu_system_2019] | 17 | 2D Rigid, TPS Registration (automatic landmarks) | ex vivo MRI, 3D printed molds | 0.87$\pm$0.04 | 2.0$\pm$0.5 |
| Rusu 2019 [@rusu_framework_2019] | 15 | 3D histopathology reconstruction, 2D Affine + Deformable | 3D printed molds | 0.94$\pm$0.02 | 1.11$\pm$0.34 |

Although numerous approaches for the radiology-pathology registration in the prostate have been introduced (see section “Prior Work”), these approaches have not been widely adopted and have not been carefully tested by scientists outside the developer teams. Recent publications using histopathology images as reference to improve MRI and automatically detect cancer [@penzias_identifying_2018; @hurrell_optimized_2018; @sumathipala_prostate_2018; @cao_joint_2019; @reynolds_voxel-wise_2019] still use manual approaches to align the histopathology to MRI images, which are known to be labor-intensive and subjective. 
The reduced adoption of previous methods is due to the challenges associated with managing and registering the histopathology and Magnetic Resonance (MR) images, the lack of open source release of existing methods, and the time constraints associated with running these methods. Specifically, the registration of histopathology images and prostate MRI has the following challenges. Histologic processing of the resected tissue causes artifacts, e.g., deformations, shrinkage, and tissue ripping. Some of these artifacts (e.g., deformation and shrinking) can be corrected through registration, while others (e.g. tissue ripping) are challenging to correct and may result in discarding slices when such artifacts are major. Furthermore, our method and many others [@kalavagunta_registration_2015; @reynolds_development_2015; @wu_system_2019] assume slice-to-slice correspondence between histopathology and MRI images, which can be improved through the use of customized 3D printed molds based on pre-operative MRI [@turkbey_multiparametric_2011]. However, this approach requires a change in clinical protocol that is not present in the vast majority of institutions performing radical prostatectomy. Finally, the acquired data is different between the histopathology images and MRI. Histopathology images provide a discontinuous serial stack of 4$\mu m$ high-resolution colored images with a pixel size of 0.0005 mm separated by roughly 4 mm spaces, while MRI has a typical resolution of
--- abstract: | We have discovered three globular clusters beyond the Holmberg radius in Hubble Space Telescope Advanced Camera for Surveys images of the gas-rich dark matter dominated blue compact dwarf galaxy NGC2915. The clusters, all of which start to resolve into stars, have $M_{V606} = -8.9$ to –9.8 mag, significantly brighter than the peak of the luminosity function of Milky Way globular clusters. Their colors suggest a metallicity $[{\rm Fe/H}] \approx -1.9$ dex, typical of metal-poor Galactic globular clusters. The specific frequency of clusters is at a minimum normal, compared to spiral galaxies. However, since only a small portion of the system has been surveyed it is more likely that the luminosity and mass normalized cluster content is higher, like that seen in elliptical galaxies and galaxy clusters. This suggests that NGC2915 resembles a key phase in the early hierarchical assembly of galaxies - the epoch when much of the old stellar population has formed, but little of the stellar disk. Depending on the subsequent interaction history, such systems could go on to build-up larger elliptical galaxies, evolve into normal spirals, or in rare circumstances remain suspended in their development to become systems like NGC2915. author: - 'Gerhardt R. Meurer, J.P. Blakeslee, M. Sirianni, H.C. Ford, G.D. Illingworth, N. Benítez, M. Clampin, F. Menanteau, H.D. Tran, R.A. Kimble, G.F. Hartig, D.R. Ardila, F. Bartko, R.J. Bouwens, T.J. Broadhurst, R.A. Brown, C.J. Burrows, E.S. Cheng, N.J.G. Cross, P.D. Feldman, D.A. Golimowski, C. Gronwall, L. Infante, J.E. Krist, M.P. Lesser, A.R. Martel, G.K. Miley, M. Postman, P. Rosati, W.B. Sparks, Z.I. Tsvetanov, R.L. White, & W. Zheng' nocite: '[@s92; @cgcfp00]' title: 'Discovery of Globular Clusters in the Proto-Spiral NGC2915: Implications for Hierarchical Galaxy Evolution' --- Introduction {#s:intro} ============ All galaxies with massive old stellar populations are thought to contain globular clusters (GCs). 
They are particularly noticeable where the old stellar population is dominant, such as in elliptical (E), dwarf elliptical (dE), and central dominant (cD) galaxies, as well as spiral galaxies with prominent bulges. Dwarf spheroidal galaxies generally do not contain GCs, probably because they have insufficient mass to make their formation likely. The exceptions are the most massive dwarf spheroidals Fornax [@h61] and Sagittarius [@igi94] which each have at least 4 GCs. Disk galaxies dominated by population I stars contain fewer GCs per unit luminosity, presumably because of star formation in the disk after the formation of the population II component. Galaxies with a high ${\mbox{${\cal M}_{\rm HI}/L_B$}}$ ratio have yet to form much of their baryonic mass into stars. They typically are blue and not considered likely hosts for populous GC systems. NGC2915 is an extremely gas-rich galaxy having ${\mbox{${\cal M}_{\rm HI}/L_B$}}= 1.7\, {\mbox{${\cal M}_\odot$}}/L_{B,\odot}$ [@mcbf96 hereafter MCBF96]. Its regularly rotating HI disk extends to over 5 times beyond the readily detectable optical emission, providing an excellent dynamical tracer for the mass distribution; not coincidentally NGC2915 has one of the highest known mass-to-light ratios in a single galaxy (MCBF96). Furthermore, while its optical morphology is that of a blue compact dwarf [BCD; @mmc94 hereafter MMC94], its HI disk clearly shows spiral arms which are not apparent in the optical. In this Letter, we report the discovery of three luminous GCs found in Hubble Space Telescope Advanced Camera for Surveys [ACS; @ford_acs02] images of NGC2915 that were obtained in order to look for a stellar heating source for the HI disk. That issue will be discussed in a separate article (Meurer [[*et al.*]{}]{} 2003; in preparation, hereafter Meu03). Data and analysis {#s:data} ================= ACS Wide Field Camera (WFC) images were obtained of a field centered at 09$^{\rm h}$ 25$^{\rm m}$ 36$\fs$48, –76$^\circ$ 35$'$ 52$\farcs$4 (J2000). 
The images cover projected radii of $45''$ to $257''$, whereas the Holmberg radius $R_{Ho} = 114''$. We obtained 2, 2, 4 images for a total exposure of 2480$s$, 2600$s$, 5220$s$ in the filters F475W ($g_{475}$), F606W ($V_{606}$), and F814W ($I_{814}$), respectively. The basic processing of the images was done using the [*CALACS*]{} pipeline [@hack99]. We used the program [[*Apsis*]{}]{} [@bambm02] to align and combine the images, incorporating geometric correction and rejection of cosmic rays and hot pixels. Here we present photometry in the natural system of the filters, with zeropoints selected so that Vega would have a magnitude of 0.0 in all bands. In order to compare our observations with previous work, we convert the previous work to this system, as needed, using the calibrations of Sirianni et al. (2003, in preparation). The most important correction is to the $V_{606}$ photometry, since the F606W filter straddles the wavelengths of the traditional $V$ and $R$ filters. Results {#s:res} ======= Table \[t:prop\] presents adopted global properties for NGC2915. The foreground extinction, ${\mbox{$E(B-V)$}}$, is from the @sfd98 extinction maps. It is significantly larger than ${\mbox{$E(B-V)$}}= 0.15 \pm 0.05$ estimated by MMC94, but consistent with the position of the field star Red Giant Branch (RGB; Meu03). Extinction corrected photometry employing the @ccm89 extinction curve is denoted with a “0” subscript. The distance, $D$, was derived from the field star RGB tip (Meu03). It is consistent with but improves on the previous estimates $D = 5.3 \pm 1.3$ Mpc (MMC94) and $D = 3.8 \pm 0.5$ Mpc [@k03]. The remaining quantities in Table \[t:prop\] were derived from MMC94 and MCBF96 after correcting to the new ${\mbox{$E(B-V)$}}$ and $D$. As shown in Fig. \[f:finders\], the three sources are clearly GCs whose brightest stars are resolved. Table \[t:clust\] compiles the properties of the clusters. 
The photometric quantities were measured using a circular aperture having a radius of $r = 3''$, with the local sky subtracted using an annulus having radii of $5''$ and $7.5''$. The cluster size, the half-light radius $r_h$, is the circular aperture radius encompassing half the $V_{606}$ light as measured from curves of growth. Compared to Galactic GCs, these clusters are large and luminous, but not abnormally so. Only 16% of the clusters in the @h96 database[^1] have $V$-band luminosities brighter than G3; only three clusters are more luminous than G1. The clusters’ $r_h$ ranges from about 5 to 9 pc, placing them in the upper quartile of Galactic GCs which have $r_h$ ranging from 0.3 to 24.7 pc [@h96]. The clusters are noticeably elongated with ellipticity $\epsilon \equiv 1 - b/a$ similar to the canonical flattened Galactic GCs M22 and $\omega$ Cen ($\epsilon = 0.14$, 0.17, respectively; Harris 1996). The combination of high luminosity and appreciable flattening is also seen in the cluster M31-G1 [@pvdb84; @msjdbr01]. The $(g_{475}-V_{606})$ and $(V_{606}-I_{814})$ colors of the clusters are compared to Milky Way GCs [@h96] in Fig. \[f:2cd\]. Their colors are virtually identical implying similar metallicities, assuming they are old and nearly coeval. We derive their metallicity by fitting the metallicity-color relationship from the Harris database after converting the colors to ${\mbox{$(V_{606} - I_{814})_0$}}$. We employed an unweighted least squares fit with an iterative $2.5\sigma$ rejection resulting in ${\mbox{[Fe/H]}}= -5.37 + 5.36{\mbox{$(V_{606} - I_{814})_0$}}$ with a dispersion of 0.29 dex. The metallicity for the three clusters is then ${\mbox{[Fe/H]}}= -1.9 \pm 0.4$ dex, consistent with [*low*]{} metallicity Galactic GCs. Our stellar population analysis, in progress (Meu03), indicates that the stars at the outskirts of the clusters have very similar color-magnitude diagrams, dominated by a narrow and blue RGB. This is also consistent with low ${\mbox{[Fe/H]}}$, if the clusters are old. 
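The iterative $2.5\sigma$-rejection fit described above can be sketched as follows. This is a generic implementation of sigma-clipped unweighted least squares, not the authors' code, and the synthetic test data are illustrative rather than the Milky Way GC sample.

```python
import numpy as np

def clipped_linear_fit(x, y, nsigma=2.5, max_iter=10):
    """Unweighted least-squares line fit with iterative nsigma rejection:
    fit a line, discard points whose residual exceeds nsigma times the
    std of the kept residuals, and repeat until the kept set is stable."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    keep = np.ones(len(x), dtype=bool)
    slope = intercept = 0.0
    for _ in range(max_iter):
        slope, intercept = np.polyfit(x[keep], y[keep], 1)
        resid = y - (intercept + slope * x)
        sigma = resid[keep].std()
        new_keep = np.abs(resid) <= nsigma * sigma
        if not new_keep.any() or (new_keep == keep).all():
            break
        keep = new_keep
    return slope, intercept, keep
```

With the color as $x$ and [Fe/H] as $y$, the returned slope and intercept play the role of the coefficients in the calibration above, and the `keep` mask records which calibrators survived the clipping.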
Discussion {#s:disc} ========== Cluster formation efficiency ---------------------------- Because of the blue core and gas rich nature of NGC
--- author: - 'V. Charmandaris' - 'O. Laurent' - 'E. Le Floc’h' - 'I. F. Mirabel' - 'M. Sauvage' - 'S. Madden' - 'P. Gallais' - 'L. Vigroux' - 'C. J. Cesarsky' date: 'Received 1 March 2002 / Accepted 30 May 2002' title: ' Mid-infrared observations of the ultraluminous galaxies , , and [^1]' --- Introduction ============ It is currently widely accepted that the majority of the most luminous galaxies (L$_{bol}>10^{11}$L$_{\sun}$) in the local universe (z $<0.3$) are luminous in the infrared, and include the ultraluminous infrared galaxies (ULIRGs, L$_{\rm IR}>10^{12}$L$_{\sun}$) which emit the bulk of their energy at infrared wavelengths [@Houck1984; @Soifer1989; @Sanders1996 and references therein]. In those systems most of the infrared emission seems to originate from their dusty nuclear regions. Even though one of the principal heating mechanisms for the lowest luminosity ($\lesssim 10^{11}$ L$_{\sun}$) infrared galaxies is the stellar radiation field of young massive stars, it is still unclear if the star formation is also the dominant heating source for ULIRGs or whether one needs to invoke an active galactic nucleus (AGN) and its strong radiation field as the central engine responsible for the heating of the dust [see @Joseph99; @Sanders99]. The presence of large quantities of molecular gas has long been detected in the central regions of most ULIRGs [e.g. @Sanders1985; @Sanders1991] leading to high extinction of both their UV and optical radiation. As a result, since it appears that most galaxies do harbor a super-massive, though often quiescent, black hole [@Richstone1998], one would expect to find in their galactic nucleus observational evidence for a mixture of AGN [@Sanders1988] and/or strong compact starburst regions [@Condon1991] fueled by the high concentration of molecular gas [@Bryant1999]. 
Observations in the mid-infrared (MIR), which are less affected by absorption than shorter wavelengths [A$_{15\,\mu m}$ $\sim$A$_{V}$/70, @Mathis1990], thus provide a powerful probe of galactic central regions [@Soifer2000; @Soifer2001]. As we discussed in @Laurent2000, the integrated MIR emission in active galaxies is produced mainly by the interstellar dust which is heated directly by the ionization field from young stars or an AGN. This is in contrast to late type galaxies where the MIR (5–20$\mu$m) energy budget is dominated by the reprocessed emission of star forming regions in their disk and accounts for $\sim$15% of their luminosity [@Dale2001; @Helou2001; @Roussel2001]. However, the main difficulty in assessing the importance of the underlying physics in galactic nuclei, where the spatial resolution is typically poor, is in separating the contribution of star forming regions and the active nucleus from the integrated MIR emission. The development, application, and general utility of MIR diagnostics in nuclei of galaxies has already been demonstrated by @Roche1991 and more recently by @Genzel1998 [@Laurent2000], as well as by @Dudley1999 [@Imanishi2000]. This was mainly accomplished with the advent of ISOCAM and SWS on board ISO, with high spatial and spectral resolution, as well as improved sensitivity in the 3 to $\sim$40$ \mu$m wavelength range, thus allowing us to study the nature of the heating sources in ULIRGs. More specifically it has been shown by @Lutz1998 [@Laurent1999b; @Laurent2000; @Tran2001] that a nearby galaxy hosting a dominant AGN is clearly different in the MIR from a starburst or a late type spiral. The most striking difference is that the rather featureless MIR spectrum in AGN lacks the emission bands at 6.2, 7.7, 8.6, 11.3 and 12.7$\mu$m, which are seen in late type galaxies and are attributed to Polycyclic Aromatic Hydrocarbons (PAHs) – also often called Unidentified Infrared Bands (UIBs). 
One may consider that this is simply due to the fact that the elevated MIR continuum of the AGN overwhelms any UIB feature emission [@Pier1992; @Barvainis1987]. It seems inevitable that as the AGN heats its dusty torus at T$\sim$1000K and the dust grains approach sublimation temperatures, the more fragile molecules responsible for the UIB emission could be partly destroyed by a photo-thermo-dissociation mechanism [@Leger1989]. Obviously, this picture is more complicated in distant galaxies since, due to limited spatial resolution, the contribution of the star forming regions surrounding an AGN would progressively enter into the beam and dilute any AGN MIR signature [see @Laurent1999b]. When sufficient spatial resolution is available to directly view the active nucleus, as is often the case in Seyfert 1 galaxies, the non-thermal emission from the AGN will dominate the spectrum. Consequently, the spectrum can then be fitted by a power law and has a “bump” in the 4–5$\mu$m range. A 5–11$\mu$m study of a large sample of Seyfert galaxies with ISO by @Clavel2000 confirmed this picture, concluding that Seyfert 2 galaxies have a weaker MIR continuum. However, a detailed analysis of the MIR spectra and images of the prototypical Seyfert 2 galaxy NGC1068 by @LeFloch2001 showed that if sufficient spatial resolution is available and the AGN is extremely strong, even in the case of a Seyfert 2 one can isolate the emission of the central engine from the star forming regions which surround it. In that case the MIR spectrum of the Seyfert 2 would also be a power law with the addition of weak PAH emission. Despite this progress, several questions concerning the extent and spectral characteristics of the MIR emission in active nuclei, as well as the correlation between MIR and optical activity, have not been fully examined. Could broad band MIR photometry be used to probe the physical characteristics of AGNs? 
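The contrast between PAH-dominated and continuum-dominated spectra described above can be turned into a crude numerical diagnostic: estimate a local continuum on either side of a PAH band and take the ratio of the band-averaged flux to the interpolated continuum. The sketch below is a simplified illustration of this idea; the window limits are our own assumptions, not the diagnostic windows used in the papers cited here.

```python
import numpy as np

def pah_to_continuum_ratio(wl, flux, band=(7.3, 8.1),
                           cont=((5.5, 6.0), (9.9, 10.4))):
    """Crude 7.7 um PAH feature-to-continuum ratio: a straight-line
    continuum is interpolated between two nearby windows (in microns) and
    the band-averaged flux is divided by the interpolated continuum.
    A ratio near 1 indicates a featureless (AGN-like) spectrum; a ratio
    well above 1 indicates strong PAH (starburst-like) emission."""
    def window_mean(lo, hi):
        sel = (wl >= lo) & (wl <= hi)
        return wl[sel].mean(), flux[sel].mean()
    (x1, y1), (x2, y2) = window_mean(*cont[0]), window_mean(*cont[1])
    xb, yb = window_mean(*band)
    y_cont = y1 + (y2 - y1) * (xb - x1) / (x2 - x1)
    return yb / y_cont
```

Real diagnostics of this kind must additionally handle silicate absorption around 9.7$\mu$m and the rising hot-dust continuum, which this two-anchor sketch ignores.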
In the present paper we try to address some of these issues by studying the MIR spectral energy distribution (SED) of three ultraluminous IRAS galaxies. Each IRAS source, the properties of which are presented in Table \[info\], consists of a merging pair of galaxies with different levels of nuclear activity. The targets were specifically selected as MIR bright and harboring an optically classified AGN. In section 2, we describe the observations and in section 3 we present the details of our study and analysis of the data for each system. A discussion followed by concluding remarks is presented in section 4. Throughout this paper we assume a Hubble constant H$_{0}$=75 km s$^{-1}$ Mpc$^{-1}$ and q$_0$=1/2. Observations and data reduction =============================== Our MIR observations were obtained using ISOCAM, a 32$\times$32 pixel array [@CesarskyC1996] on board the ISO satellite [@Kessler1996]. Each system was observed with broad band filters ranging from 5 to 18$\mu$m in a 2$\times$2 raster with 6 pixel offsets and a lens producing a pixel field of view (PFOV) of 1.5$''$, resulting in a final image of 57$''\times$57$''$. This enabled us to obtain images with a spatial resolution of 3$''$ (at 6$\mu$m) to 4.5$''$ (at 15$\mu$m), limited by the pixel size at 6$\mu$m and by the full width at half maximum (FWHM) of the point spread function (PSF) at 15$\mu$m. We denote the ISOCAM filters by their name and central wavelength. The wavelength range in $\mu m$ covered by each filter was: LW2 (5.0 – 8.5), LW3 (12.0 – 18.0), LW4 (5.5 – 6.5), LW6 (7.0 – 8.5), LW7 (8.5 – 10.7), LW8 (10.7 – 12.0), LW9 (14.0 – 16.0). In subsequent sections of this paper we will refer to the flux densities measured with the various filters as f$_{x\,\mu m}$ where *x* is the central wavelength of each filter in microns. Spectrophotometric observations were also obtained with the circular variable filter (CVF) for IRAS 23128-5919, the brightest of our sources. 
The CVF covers a spectral range from 5 to 16.5$\mu$m with a 1.5$''$ PFOV and a spectral resolution of $\sim$50. Each integration step was composed of 12 images with a 5.04 second integration time and, during the CVF scan, the wavelength step varied between 0.05 and 1.11$\mu$m. Details on the observing parameters are summarized in Table \[param\]. The data were analyzed with the CAM Interactive Analysis software (CIA[^2]). A dark model taking into account the observing time parameters was subtracted. Cosmic ray contamination was removed by applying a wavelet transform method [@Starck1997]. Corrections of detector memory effects were done by applying the Fouks-Schubert method [@Coulais2000]. The flat field correction was performed using the library of calibration data. Finally, individual exposures were combined using shift techniques in order to correct the effect of j
--- abstract: 'The calculation of optimal structures in reaction-diffusion models is of great importance in many physicochemical systems. We propose here a simple method to monitor the number of interphases for long times by using a boundary flux condition as a control. We consider as an illustration a 1-D Allen-Cahn equation with Neumann boundary conditions. Numerical examples are given and perspectives for the application of this approach to electrochemical systems are discussed.' author: - 'J.-P. Chehab[^1], A. A. Franco[^2], Y. Mammeri[^3]' date: title: 'A simple mathematical approach to optimize the structure of reaction-diffusion physicochemical systems' --- Introduction ============ The dynamics of a large diversity of physicochemical systems can be mathematically modeled as reaction-diffusion systems, which describe how the composition of multiple chemical species distributed in space changes under the influence of competitive chemical reactions between the species (giving rise to new species) and the diffusion which causes the species to spread out in space. It is well known that, depending on the relative importance of the kinetics and the diffusion, these systems can exhibit a large diversity of behaviors, including the formation of complex structures and patterns; see [@Sachs]. 
Such a structure formation occurs for example during the solid phase formation and evolution in intercalation and conversion reactions in rechargeable lithium batteries [@Franco1; @Franco2], during the self-organisation of materials occurring during the fabrication process of composite electrodes for electrochemical device applications [@Malek1], during the microstructural evolution of composite electrodes upon their degradation [@Malek2] and in other competitive chemically reactive systems like the Belousov-Zhabotinsky reaction [@Sirimungkala].\ Designing appropriate controllers of these reaction-diffusion systems can be of great relevance within a reverse engineering approach, for example towards the optimization of discharge-charge of lithium batteries (for example, by enhancing the formation during discharge of solid phases that are more reversible upon charge) and the optimization of the structure of the fabricated electrodes as a function of the fabrication parameters (e.g. temperature dynamics, reactant flow, etc.).\ In this paper, we consider the one-dimensional Allen-Cahn equation $$\begin{aligned} {\frac{\textstyle \partial u} {\textstyle \partial t}} -{\frac{\textstyle \partial^2 u} {\textstyle \partial x^2}} +{\frac{\textstyle 1} {\textstyle \epsilon^2}}f(u)=0 & x\in]0,1[, t >0,\\ u_x(0,t)=\alpha(t), u_x(1,t)=0& \forall t >0,\\ u(x,0)=u_0(x) & x\in ]0,1[.\end{aligned}$$\ This reaction-diffusion equation describes the process of phase separation in many situations. It was originally introduced in [@AllenCahn] by Allen and Cahn to model the motion of anti-phase boundaries in crystalline solids. In equation (1), $u$ represents the concentration of one of the possible phases and $\epsilon$ represents the interfacial width, supposed to be small compared to the characteristic length of the laboratory scale. The homogeneous Neumann boundary condition (when $\alpha(t)=0$) expresses that there is no loss of mass across the boundary walls. 
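A minimal explicit finite-difference step for equation (1) can be sketched as follows, with the Neumann data imposed through ghost points. This is our own illustration of a standard scheme with $f(u)=u(u^2-1)$, not the solver used later in the paper; the stability constraints are the usual ones for forward Euler.

```python
import numpy as np

def allen_cahn_step(u, dt, h, eps, alpha):
    """One explicit Euler step of u_t = u_xx - f(u)/eps^2 with
    f(u) = u(u^2 - 1), on a uniform grid of spacing h. The Neumann data
    u_x(0) = alpha and u_x(1) = 0 are imposed through ghost values.
    Stability requires roughly dt <= h^2 / 2 and dt << eps^2."""
    ug = np.empty(u.size + 2)
    ug[1:-1] = u
    ug[0] = u[1] - 2.0 * h * alpha   # ghost point: (u[1] - ghost)/(2h) = alpha
    ug[-1] = u[-2]                   # ghost point: zero flux at x = 1
    lap = (ug[2:] - 2.0 * ug[1:-1] + ug[:-2]) / h**2
    return u + dt * (lap - u * (u**2 - 1.0) / eps**2)
```

Note that the constant states $u \equiv \pm 1$ (the stable roots of $f$) are exact fixed points of this step when $\alpha = 0$, consistent with the mass-conserving homogeneous Neumann case.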
However, the Allen-Cahn equation is invoked in a large number of complicated moving interface problems in materials science through a phase-field approach, therefore a large literature in mathematical analysis and in numerical analysis is devoted to the study of the mathematical properties of this equation and of its simulation (see [@MPierre; @JShen] and the references therein).\ In equation (1), $f(u)$ represents the potential energy and $\alpha(t)$ represents the control flux at one of the boundaries; $f(u)$ is assumed to have stable roots $\rho_i$, $i=1,\cdots, r$ such that $f(\rho_i)=0$ and $f'(\rho_i)>0$. It is observed in many cases that when $\epsilon \ll 1$ and as $t$ goes to $+\infty$, the solutions tend to steady states ${\bar u}$ which consist of (almost) piecewise constant functions whose values are equal to the stable roots of $f$, representing the different phase stripes. Hence ${\bar u}$ exhibits large gradients near $\rho_i$, as illustrated in Figure (\[fig1\]). ![Steady state for $\epsilon=0.004$ (left) and for $\epsilon=0.001$ (right).[]{data-label="fig1"}](solution1.png "fig:"){height="7.0cm" width="8.5cm"} ![Steady state for $\epsilon=0.004$ (left) and for $\epsilon=0.001$ (right).[]{data-label="fig1"}](solution2.png "fig:"){height="7.0cm" width="8.5cm"} \ \ An important issue in the conception of rechargeable lithium and post-lithium batteries is the design of active materials providing, upon battery discharge, as few interphases as possible. The morphological simplicity of such discharged materials is expected to enhance the rechargeability of this type of battery and thus to increase their efficiency [@Franco2]. In this paper we propose a first numerical strategy to calculate the boundary flux function $\alpha(t)$ on a given time interval $[0,T]$, with $T$ large enough, in such a way that the number of interphases of the steady state ${\bar u}$ is minimized. 
To this end, we consider $\alpha(t)$ as the control function, $\epsilon$ being held constant. For the sake of simplicity, we first restrict ourselves to the case $f(u)=u(u^2-1)$ which possesses 3 roots: $u=\pm 1$ which are stable and $u=0$ which is unstable.\ The article is organized as follows: in Section 2, we first present the global numerical strategy by deriving the estimation of the number of interphases, which will be the merit function to minimize. Then, we present the finite differences discretization of the system in space and we describe the numerical solver, which includes the optimization process as well as the full discretized problem to be solved at each iteration. In Section 3, we present some numerical results demonstrating the numerical controllability of the problem: we calculate optimal $\alpha$ for different values of $u_0$, $T$ and $\epsilon$. Finally, in Section 4 we conclude and indicate further perspectives of development of our work. Numerical strategy ================== Estimation of the number of interphases --------------------------------------- We consider the finite differences discretization in space of the Allen-Cahn equation which leads to a differential system. The grid points $x_i$, $i=1,\cdots, N$ are regularly spaced for simplicity, $h$ being the corresponding stepsize. We assume that $h$ is small enough so that the discrete solution captures the strong gradients near the interphases. The steady solution ${\bar u}$ is considered to be almost piecewise constant, so its approximations at grid points ${\bar u}_i$, $i=1,\cdots, N$ take the values $\pm 1$. 
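In code, counting the $\pm 1$ phase changes of such a discretized steady state reduces to half the sum of the absolute first differences, since each jump between the stable roots $-1$ and $+1$ has magnitude 2. The following one-liner is a minimal sketch of this counter:

```python
import numpy as np

def count_interphases(u):
    """Interphase count of an (almost) piecewise-constant +/-1 profile:
    half the sum of absolute first differences, so each jump between
    the stable roots -1 and +1 contributes exactly one."""
    return 0.5 * np.abs(np.diff(u)).sum()
```

On profiles that are only approximately $\pm 1$ the result is a near-integer, which is why it can serve as a smooth merit function for the optimization.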
Hence $${\bar u}_{i+1}-{\bar u}_i =\left\{ \begin{array}{c} 0 \\ 2 \\ -2 \\ \end{array} \right.$$ Therefore, the number of interphases is $$\begin{aligned} \label{formula_changes} N({\bar u})&={\frac{\textstyle 1} {\textstyle 2}}\displaystyle{\sum_{i=0}^{N}\mid {\bar u}_{i+1}-{\bar u}_i \mid}.\end{aligned}$$ This quantity can be related to the $L^1$-norm of $u'$; indeed, $$\begin{aligned} N({\bar u})={\frac{\textstyle 1} {\textstyle 2}}\displaystyle{\sum_{i=0}^{N}\mid {\frac{\textstyle {\bar u}_{i+1}-{\bar u}_i} {\textstyle h}} \mid h}\simeq {\frac{\textstyle 1} {\textstyle 2}}\displaystyle{\int_0^1 \mid {\bar u}'(x)\mid dx}.\end{aligned}$$ In Figure (\[fig1\]) (left), we count 10 changes, while the result given by formula (\[formula\_changes\]) is 9.9968; in Figure (\[fig1\]) (right), 48 changes are counted while the estimate (\[formula\_changes\]) gives 47.7475.\ We remark that an interesting numerical issue could be to plug in an adaptive grid strategy, since the steady solution needs only a few points to be represented. Selection of given phases ------------------------- Our approach applies when more than two phases are present. Indeed, consider for simplicity the case of $m$ stable phases. To obtain the number of interphases, it is sufficient to split the final signal profile into $m$ parts, each of them representing the state of one phase stripe (see figure below in the case of $m=4$). Once this is done, we can apply formula (\[formula\_changes\]) separately. This procedure allows us to use a weighted merit function $$\begin{aligned} F(u)=\displaystyle{\sum_{i=1}^m\omega_iN_i(u)},\end{aligned}$$ where $N_i(u)$ is the number of connected components for phase $i$ and $\omega_i \ge 0$ the associated weight: a large
Stanisław Saganowski^1,\*^, Piotr Bródka^1^, Michał Koziarski^2^, Przemysław Kazienko^1^\ **1** Department of Computational Intelligence, Faculty of Computer Science and Management, Wrocław University of Science and Technology, Wrocław, Poland\ **2** Department of Electronics, Faculty of Computer Science, Electronics and Telecommunications, AGH University of Science and Technology, Kraków, Poland\ \* stanislaw.saganowski@pwr.edu.pl Abstract {#abstract .unnumbered} ======== In a world in which acceptance and identification with social communities are highly desired, the ability to predict the evolution of groups over time appears to be a vital but very complex research problem. Therefore, we propose a new, adaptable, generic, and multistage method for Group Evolution Prediction (GEP) in complex networks, which facilitates reasoning about the future states of recently discovered groups. The modularity of GEP enabled us to carry out extensive and versatile empirical studies on many real-world complex/social networks to analyze the impact of numerous setups and parameters such as time window type and size, group detection method, evolution chain length, prediction models, etc. Additionally, many new predictive features reflecting the group state at a given time have been identified and tested. Some other research problems, like enriching learning evolution chains with external data, have been analyzed as well. Introduction {#introduction .unnumbered} ============ Network science is a highly interdisciplinary domain focused on understanding the relational nature of various real-world phenomena by means of diverse network models. Commonly, networks consist of smaller, more integrated structures called groups, communities, or clusters. In practice, both the groups and the whole networks evolve and change their profiles over time. Hence, their analysis demands advanced computational methods to understand and predict their future behavior.
For that reason, group evolution prediction is an essential component of computational network science. One of the domains explored by network science is biological networks[@zickenrott2017prediction; @barabasi2011network; @wu2008network; @goh2007human]. Viruses are as old as life on Earth. At the same time, they are very young, as they constantly mutate to change their lethal attributes. Influenza, unlike other viruses which are rather stable, evolves much more rapidly[@Influenza_rate1:2017; @Influenza_rate2:1986] and kills up to one million people worldwide every year[@Influenza_kills:2009]. We can try to protect ourselves using vaccines. However, the rate of mutation is too rapid to provide an effective cure. What is more, the development of a new drug requires a huge amount of money and takes from a few to a dozen or so years. Despite these difficulties, new drugs are introduced to the market every year. For example, antagonist drugs (also called blockers) are designed to bind to specific receptors to block the disease’s ability to attach to these particular receptors, thereby immunizing the body against the disease. Unfortunately, diseases react to drugs and eventually mutate, creating a variety that will bind to other receptors. Therefore, we need methods that are able to track the evolution of the disease and, based on the history of its mutations, predict the most likely future mutations. To track disease mutations, we can focus on the group of receptors that it binds to and observe how such a group evolves. Based on the history of changes in the lifetime of this group, we can try to predict what the next change will be. Predicting the direction of the mutation could significantly reduce the amount of time and money needed to study the disease. With such knowledge, we would be able to start preparing the drug in advance and bring it to the market much faster and cheaper.
Another area that widely applies network science, especially its branch called social network analysis (SNA), is marketing, in particular advertising[@husnain2017impact; @antoniadis2016social; @guo2016effects; @barhemmati2015effects]. Let us imagine that a start-up company invented a new generation of diapers – *Smart Diapers* – which are extra soft, super absorbing, and, additionally, can communicate with parents’ smartphones to notify them when change time comes. The company invested heavily in their development and therefore has a limited budget to advertise the product. The owners decided to introduce the product to discussion groups on the Facebook platform, where parents from different countries/cities create and join independent groups to talk about and comment on new products for babies, share general advice about raising children, sell used clothes, etc. Convincing the members (parents) of such relevant, targeted groups to use and buy the new diaper product would be much more effective and cheaper than advertising to the broader community using expensive TV commercials. Additionally, word-of-mouth recommendation is commonly believed to be the most powerful marketing tool[@Kozinets:2010]. However, a vital question arises here: in which Facebook groups should the company invest its limited resources, i.e., time and money? In the newly created, relatively small groups that might be very active and are expanding fast, or in the larger groups that might not be very active in the near future? Which of these groups will still be running or growing in a few weeks/months/years and which ones will disappear? That is why knowledge about the history, current state, and future evolution of groups is crucial when deciding where to allocate the resources. In 2007, Palla et al. [@palla2007quantifying] defined the problem of group evolution identification. In the following years, dozens of solutions to this problem have been proposed.
One of them was the highly cited GED method [@brodka2011tracking]. Existing surveys describe as many as 12 [@saganowski2017community] or even over 60 methods [@rossetti2018community]. All of them are focused on defining possible events in the community life and, hence, on tracking historical changes. This, in turn, has led to the emergence of a new problem – predicting future changes that will occur in the community lifetime. Some of the first methods concerning prediction of some aspects (e.g., determining lifespan) of group evolution were: (1) Goldberg et al. [@goldberg2011tracking] – they focused on predicting the lifespan of evolution for a group; (2) Qin et al. [@qin2011evolution] – they analyzed dynamic patterns to predict the future behavior of dynamic networks; and (3) Kairam et al. [@kairam2012life] – they investigated the possibility of predicting whether a community will grow and survive in the long term. Note that the methods for tracking group evolution can also be applied to other similar prediction problems, like link prediction[@group_evolution_link_prediction:2017] and churn prediction[@group_evolution_churn_prediction:2010], as well as to understanding the evolution of software (Unix operating system networks) [@evolution_linux:2017] or the dynamics of social groups forming at coffee breaks[@evolution_coffee_breakes:2014]. In 2012, we proposed a new concept, in which the historical group changes were utilized to classify the next event in the group’s lifetime[@Brodka:2012]. In this first trial, we used only the event type and the size of the group to describe its state at a given time. Over the next year, we investigated the concept and adapted it to two methods for tracking group evolution – the GED[@brodka2013ged] method and the SGCI method[@Gliwa:2012]. This resulted in the first method for group evolution prediction[@Gliwa:2013]. It was the predecessor of the GEP (Group Evolution Prediction) method described in this paper.
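The 2012 concept of classifying the next event from a group's historical event chain can be illustrated with a deliberately simple frequency model. The sketch below is a toy stand-in, not the GEP method itself (which uses rich feature vectors and trained classifiers); the event names are hypothetical and chosen only for illustration.

```python
from collections import Counter, defaultdict

def train_successor_model(chains):
    """Count, over all evolution chains, which event type follows
    which. `chains` is a list of event-type sequences, one per group.
    A toy frequency model illustrating next-event classification."""
    following = defaultdict(Counter)
    for chain in chains:
        for prev, nxt in zip(chain, chain[1:]):
            following[prev][nxt] += 1
    return following

def predict_next_event(model, last_event):
    """Most frequent observed successor of `last_event`, or None."""
    if last_event not in model:
        return None
    return model[last_event].most_common(1)[0][0]

# hypothetical event chains for two observed groups
chains = [["forming", "growing", "growing", "splitting"],
          ["forming", "growing", "shrinking", "dissolving"]]
model = train_successor_model(chains)
```

Real GEP-style pipelines replace the raw event counts with many structural and temporal features of the group state and feed them to a classifier, but the input/output contract is the same: a history in, a predicted next event out.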
Since then, a few more methods have been proposed. At the end of 2013, İlhan et al. presented their research with several new measures describing the state of the community and a new method for tracking group evolution[@Ilhan:2013]. In 2014, Takaffoli et al. applied a binary approach to classifying the next change that a group will undergo[@Takaffoli:2014]. They used 33 measures to describe the state of the community. We presented new results in 2015, where, apart from new measures, the influence of the length of the history used in the classification was examined[@Saganowski:2015]. Later the same year, Diakidis et al. adapted the GED method to conduct their research with 10 measures as predictive features[@Diakidis:2015]. In 2016, İlhan et al. presented new results and proposed a method to select the measures that should be the most useful as predictive features for a given data set[@Ilhan:2016]. More recently, Pavlopoulou et al. used 19 measures already validated in other works and studied whether employing temporal features on top of the structural ones improves prediction, as well as what the impact is of using a different number of historical community states on the prediction quality[@Pavlopoulou:2017predicting]. Unfortunately, all of the methods proposed to this day have some drawbacks (see the Comparison with other methods section) and have been designed to solve a particular problem; hence, their application area is rather narrow. Therefore, in this paper, a new generic and comprehensive method to predict the future behavior of groups, based on their historical structural changes as well as experienced events, is proposed, evaluated and discussed. Some of the contributions of this work are: decomposing the group evolution prediction problem, proposing and extensively evaluating the modular method that can be applied to any dynamic network data, proposing new predictive features, performing the features’ ranking, proposing a
--- abstract: 'We give a closed form of the discrete-time evolution of a recombination transformation in population genetics. This decomposition allows us to define a Markov chain in a natural way. We describe the geometric decay rate to the limit distribution, and the quasi-stationary behavior when conditioned on the event that the chain does not hit the limit distribution.' author: - 'Servet Mart[í]{}nez' title: '*A probabilistic analysis of a discrete-time evolution in recombination*' --- [**Keywords: $\,$**]{} Markov chain; Population genetics; Recombination; geometric decay rate; quasi-stationary distributions. [**AMS Subject Classification:**]{} 60J10; 92D10. Introduction {#sec0} ============== Here we study the evolution of the following transformation $\Xi$ acting on the set of probability measures $\mu$ on a product measurable space $\prod_{i\in I}A_i$, $$\Xi[\mu]=\sum_{J\subseteq I} \rho_J \, \mu_J \otimes \mu_{J^c}.$$ Here $\rho=(\rho_J: J\subseteq I)$ is a probability vector, $\mu_J$ and $\mu_{J^c}$ are the marginals of $\mu$ on $\prod_{i\in J}A_i$ and $\prod_{i\in J^c}A_i$ respectively, and $\otimes$ means that these marginals are combined in an independent way. The analysis of $\Xi$ should give insight into the genetic composition of populations under recombination. Genetic information is encoded in terms of sequences of symbols indexed by a finite set of sites. In the process of recombination the children's sequences are derived from two parents: a subset of sites is encoded with the maternal symbols and the complementary set is encoded with the paternal symbols. The above equation expresses that these sets $(J,J^c)$ constitute a probabilistic object distributed according to $\rho$. A relevant feature is that recombination produces decorrelation between sites, and this is expressed by the fact that the sequence distributions on these sets enter $\Xi[\mu]$ independently.
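For a small index set $I$ and finite alphabets $A_i$, the transformation $\Xi$ can be computed directly. The sketch below is an illustration only, with brute-force enumeration feasible for a handful of sites: $\mu$ is a dictionary from tuples to probabilities, and $\rho$ a dictionary from frozensets $J\subseteq I$ to probabilities.

```python
from itertools import product

def marginal(mu, sites):
    """Marginal of mu (dict: tuple -> prob) on the index set `sites`."""
    out = {}
    for x, p in mu.items():
        key = tuple(x[i] for i in sorted(sites))
        out[key] = out.get(key, 0.0) + p
    return out

def recombine(mu, rho, alphabets):
    """Xi[mu](x) = sum_J rho_J * mu_J(x_J) * mu_{J^c}(x_{J^c}).
    `alphabets` lists the finite set A_i for each site i.
    Brute force over all sequences: a sketch for small |I| only."""
    n = len(alphabets)
    full = frozenset(range(n))
    margs = {J: (marginal(mu, J), marginal(mu, full - J)) for J in rho}
    out = {}
    for x in product(*alphabets):
        p = 0.0
        for J, pJ in rho.items():
            mJ, mJc = margs[J]
            p += (pJ * mJ.get(tuple(x[i] for i in sorted(J)), 0.0)
                     * mJc.get(tuple(x[i] for i in sorted(full - J)), 0.0))
        if p > 0.0:
            out[x] = p
    return out
```

Already for a fully correlated $\mu$ on two sites, one application with $\rho$ concentrated on the single cross-over $J=\{0\}$ exhibits the decorrelation described above: the two sites become independent.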
The evolution $(\Xi^n[\mu])$ has been mainly studied in the context of single cross-overs, that is, where $I=\{1,..,K\}$ and the pairs of sets $(J,J^c)$ are of the form $J=\{i: i<j\}$, $J^c=\{i: i\ge j\}$. This evolution was introduced by H. Geiringer [@ge], and first solved in the continuous-time case by E. Baake and M. Baake [@bb], which also supplies an important corpus of ideas and techniques for studying the discrete-time evolution. In relation to the discrete-time evolution we refer to [@bvw]: [*’...the corresponding discrete-time dynamics, which is prevalent in the biological literature, is more difficult: its solution has, so far, required nontrivial transformations and recursions that have not been solved in closed form (Benett 1954; Dawson 2000, 2002; von Wangenheim et al. 2010).’*]{} These last works are cited in our list of references as [@be]; [@daw1], [@daw2]; [@vwbb]. Richer discussions on the interpretation of the above equation, in a broader perspective of recombination in population genetics, are given in the introductory sections of references [@bb], [@bvw], [@bbs] and [@uvw]. When studying single cross-over recombination, one of the main objectives in [@uvw] and [@bvw] is to express the iterate $\Xi^n[\mu]$ in a simple form. The main tools in these works are Möbius inversion formulae, as in the continuous case, and commutation relations between $\Xi$ and recombination operators. Some of the main results of these works are the one-step recursive relation stated in Theorem 1 in [@bvw]; Proposition 3.3 in [@uvw], stating that if one starts from a distribution $\mu$ then $\Xi^n[\mu]$ converges to the Bernoulli distribution having the marginals of $\mu$; and the relation to ancestry trees and Markov chains, summarized in Theorem 3 in [@bvw]. In our work we present two main results, Theorems \[theo0\] and \[theo1\].
In Theorem \[theo0\] we write $\Xi^n[\mu]$ as a weighted decomposition of $\otimes_{\ell \in \delta} \mu_\ell$, where $\mu_\ell$ is the marginal of $\mu$ on the set $\ell$ and $\{\ell \in \delta\}$ are the atoms of some partition $\delta$ of $I$, and we give the weights of this decomposition exactly. This follows from a simple backward development of $\Xi^n[\mu]$ done in Lemma \[lemmafd\]. When looking in detail at the formulae stated in Theorem \[theo0\], one realizes that they define a natural Markov chain $(Y_n)$ on the set of partitions of $I$, with the remarkable property that when it starts from the coarsest partition $\{I\}$, the probability of $\{Y_n=\delta\}$ is equal to the sum of the weights of all trees participating in the backward development of $\Xi^n[\mu]$ whose set of leaves is $\{\ell \in \delta\}$. These results are Lemmas \[lemma4\] and \[lemma5\]. In Theorem \[theo1\] we use this Markov chain to describe the geometric coefficient of convergence to the limit distribution $\otimes_{\ell \in {{\cal D}}^\rho} \mu_\ell$, where ${{\cal D}}^\rho$ is the partition generated by the sets $\{J: \rho_J>0\}$. In the single cross-over case the atoms of this partition are the singletons, so the limit probability measure is the Bernoulli distribution. A key result is formula (\[50e\]), which characterizes the geometric decay behavior. In this theorem we also study in detail the limiting conditional behavior of the chain when conditioned on not having hit the limit distribution. Besides giving the limiting conditional distribution, we state a ratio limit of the probabilities of not hitting the limit distribution. We emphasize that these last results are not a consequence of any known result in the theory of quasi-stationary distributions, because the Markov chain $(Y_n)$ is not irreducible on the class of non-absorbing states, so we are not able to use the Perron-Frobenius theory. All these results require entirely new computations.
Quasi-stationary distributions have been studied mostly in relation to population extinction; see for instance Section 2.6 in [@les], and [@pp; @cms] for a wide-ranging bibliography on the subject. In our context the absorbing state is not the void population, as happens in extinction, and a main interest of the quasi-limiting behavior lies in the process that never hits the limit distribution, which is given in Corollary \[cor1\]. In Section \[sec1\] we fix notation on partitions, atoms, and dyadic partitions. In Section \[sec2\] we supply some technical lemmas on the transformation $\Xi$. Thus, in Lemma \[lemma3\] we get the marginal $\Xi[\mu]_K$, for $K$ a union of atoms, in terms of some iterated coefficients $\rho^K_M$ derived from $\rho$, which constitute key quantities throughout our study. In Section \[sec3\] we introduce the dyadic family of trees, depending on the support of $\rho$, participating in the tree decomposition of $\Xi^n[\mu]$. Finally, in Section \[sec4\] we introduce the Markov chain on partitions and state our main results on the quasi-limiting behavior. Let us briefly discuss the relation of our results to the previous literature, mainly [@uvw] and [@bvw], which have been an important inspiration for our work. In these references a Markov chain on partitions was introduced for single cross-over recombination, by following the ancestry of the genetic material of a selected individual from a population and using some limit arguments. As a consequence of this rather complicated construction, a key relation between the Markov chain and the coefficients of the iterate $\Xi^n[\mu]$ is stated in Theorem 3 in [@bvw], which must be the same relation we state in Lemma \[lemma5\]. We note that each backward step in ancestry involves a probabilistic object, because the dyadic partition $(J,J^c)$ is randomly distributed.
But our approach differs from the one used in [@bvw] at some substantial points: we get a closed form of $\Xi^n[\mu]$ by using a simple backward decomposition, and this decomposition suggests the definition of the Markov chain $(Y_n)$ in a very natural way. Our techniques are totally different from those used in [@uvw] and [@bvw]. Also, our results apply to all kinds of dyadic partitions $(J,J^c)$, which can have a complex combinatorics, and not only to the ones arising in the single cross-over case. Finally, the quasi-stationary behavior of this chain is, to our knowledge, studied here for the first time. We point out that even if our results are stated for a product of finite spaces, they can be stated for general products of measurable spaces, as pointed out in Remark \[rem2a\]. Recently, in [@bbs], the continuous-time evolution was studied in a framework of general partitions other than dyadic partitions. The extension of our results to the analogous framework, but in discrete time, deserves a separate study. It is worth mentioning that in Section \[sec2\], and in the final comment of this work, we point out that all our results remain true when $\otimes$ is
--- abstract: 'We define moments of partitions of integers, and show that they appear in higher-order derivatives of certain combinations of functions.' author: - Shaul Zemel title: Moments of Partitions and Derivatives of Higher Order --- Introduction and Statement of the Main Result {#introduction-and-statement-of-the-main-result .unnumbered} ============================================= Changes of coordinates grew, through the history of mathematics, from a powerful computational tool into the underlying object behind the modern definition of many objects in various branches of mathematics, like differentiable manifolds or Riemann surfaces. With a change of coordinates, all the objects that depend on these coordinates change their form, and one would like to investigate their behavior. For functions of one variable, like holomorphic functions on Riemann surfaces, this is very easy, but one may ask what happens to the derivatives of functions under this operation. The answer is described by the well-known formula of Faà di Bruno for the derivative of any order of a composite function. For the history of this formula, as well as a discussion of the relevant references, see [@[J]]. To phrase Faà di Bruno’s formula, we recall that a partition $\lambda$ of some integer $n$, denoted by $\lambda \vdash n$, is defined to be a finite sequence of positive integers, say $a_{l}$ with $1 \leq l \leq L$, written in decreasing order, whose sum is $n$. The number $L$ is called the *length* of $\lambda$ and is denoted by $\ell(\lambda)$, and given a partition $\lambda$, the number $n$ for which $\lambda \vdash n$ is denoted by $|\lambda|$. Another method for representing partitions, which will be more useful for our purposes, is by the *multiplicities* $m_{i}$ with $i\geq1$, which are defined by $m_{i}=\big|\;\{1 \leq l \leq L|a_{l}=i\}\big|$, with $m_{i}\geq0$ for every $i\geq1$ and such that only finitely many multiplicities are non-zero.
In this case we have $|\lambda|=\sum_{i\geq1}im_{i}$ and $\ell(\lambda)=\sum_{i\geq1}m_{i}$. Note that the empty partition, in which all the multiplicities $m_{i}$ vanish, is allowed. It is considered to be a partition of 0, with length 0. Therefore, when some partition $\lambda$ is known from the context, the numbers $m_{i}$ will denote the associated multiplicities, and in case several partitions are involved we may write $m_{i}(\lambda)$ for clarification. Assume that $f$ is a function of $z$ and the variable $z$ is a function of another variable $t$, say $z=\varphi(t)$, and we wish to differentiate the resulting function of $t$ successively. The formula of Faà di Bruno is the answer to this question, which we can write explicitly as $$(f\circ\varphi)^{(n)}(t)=\frac{d^{n}}{dt^{n}}\big(f(\varphi(t))\big)=\sum_{\lambda \vdash n}\frac{n!}{\prod_{i=1}^{n}(i!)^{m_{i}}m_{i}!}f^{(\ell(\lambda))}\big(\varphi(t)\big)\prod_{i=1}^{n}\big(\varphi^{(i)}(t)\big)^{m_{i}}. \label{FaadiBruno}$$ We remark that gathering these formulae for all $n$ together, and noticing that $\lambda$ appears in the derivative of order $|\lambda|$, yields a structure of a Hopf algebra on the polynomial ring in infinitely many variables, graded appropriately—see, e.g., [@[FGV]]. Equation (\[FaadiBruno\]) can be viewed as describing the behavior of derivatives of functions on 1-dimensional objects (like Riemann surfaces, when the variables are locally taken from $\mathbb{C}$) under changing the coordinate. However, functions are not the only type of forms that can be defined on 1-dimensional objects, and the next forms to consider are differentials, and more generally $q$-differentials.
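Faà di Bruno's formula is easy to check numerically. The sketch below, a verification aid and not part of the paper, enumerates partitions by their multiplicity vectors and evaluates the right-hand side of the formula from lists of derivative values.

```python
from math import factorial, exp, isclose

def partitions(n):
    """Yield the partitions of n as multiplicity dicts {i: m_i}."""
    def gen(remaining, max_part):
        if remaining == 0:
            yield {}
            return
        for i in range(min(remaining, max_part), 0, -1):
            for rest in gen(remaining - i, i):
                out = dict(rest)
                out[i] = out.get(i, 0) + 1
                yield out
    yield from gen(n, n)

def faa_di_bruno(f_derivs, phi_derivs, n):
    """n-th derivative of f(phi(t)), given f_derivs[k] = f^(k)(phi(t))
    and phi_derivs[i] = phi^(i)(t) (index 0 is unused)."""
    total = 0.0
    for mult in partitions(n):
        length = sum(mult.values())       # ell(lambda)
        denom, term = 1, f_derivs[length]
        for i, m in mult.items():
            denom *= factorial(i) ** m * factorial(m)
            term *= phi_derivs[i] ** m
        total += factorial(n) // denom * term
    return total
```

For instance, with $f=\exp$ and $\varphi(t)=t^{2}$, the second derivative of $e^{t^{2}}$ at $t=1$ is $e(4t^{2}+2)|_{t=1}=6e$, and the formula reproduces it: the partition $\lambda=(2)$ contributes $e\varphi''=2e$ and $\lambda=(1,1)$ contributes $e(\varphi')^{2}=4e$.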
These are defined such that their coordinate changes also involve the $q$th power of the derivative of the coordinate change, namely if a $q$-differential is expressed in a coordinate neighborhood as $f(z)$ times the formal symbol $(dz)^{q}$, then when we change the coordinate via $z=\varphi(t)$ the description in the coordinate $t$ is $f\big(\varphi(t)\big)\varphi'(t)^{q}$ times $(dt)^{q}$ (see, e.g., Section III.4.12 of [@[FK]]). While simply differentiating such expressions may seem a bit unnatural, this operation does appear, for example, in the proof of Proposition III.5.10 of [@[FK]], which states that if $d$ is the dimension of the space of $q$-differentials on a Riemann surface $X$ then the Wronskian of this space is an $m$-differential, where $m=\frac{d}{2}(d+2q-1)$. While the proof of the latter statement takes only the “essential terms” of this derivative, where no combinatorial calculations have to be carried out, it does leave open the question of the formula for the $n$th derivative of such a transformation rule, and whether some interesting combinatorial phenomena hide in it. The dependence on $q$ as a number becomes formal, and the expression that we investigate in this manner is the $n$th derivative of an expression like $f\big(\varphi(t)\big)g\big(\varphi'(t)\big)$, or just $(f\circ\varphi)\cdot(g\circ\varphi')$ when we omit the variable $t$. In fact, $g$ need not be composed with the first derivative of $\varphi$, but can rather be composed with the derivative $\varphi^{(s)}$ of any order $s\geq0$. The question that we tackle in this paper is therefore finding an explicit formula for the $n$th derivative of the expression $(f\circ\varphi)\cdot\big(g\circ\varphi^{(s)}\big)$, in terms of the derivatives of $f$, $g$, and $\varphi$. The fact that the formula, which is given below, involves partitions is, of course, no big surprise.
But in addition to the combinatorial coefficients appearing in Faà di Bruno’s formula from Equation (\[FaadiBruno\]), the resulting coefficient involves some numbers that we call *moments* of partitions. More precisely, given an integer $k\geq1$ and a partition $\lambda$, with the summands $a_{l}$, $1 \leq l\leq\ell(\lambda)$ and the multiplicities $m_{i}$, we define its *$k$th moment* to be $$p_{k}(\lambda)=\sum_{l=1}^{\ell(\lambda)}a_{l}^{k}=\sum_{i\geq1}i^{k}m_{i}.$$ In particular, the first moment of $\lambda$ is just $|\lambda|$ by definition. The notation $p_{k}$ comes from the theory of symmetric functions, as this moment is the value attained by the $k$th power sum function on the numbers $a_{l}$, $1 \leq l\leq\ell(\lambda)$. However, there are several natural bases for the ring of symmetric functions, and in particular one can take the basis arising from the *elementary symmetric functions* $\{e_{r}\}_{r=0}^{\infty}$, which appear, e.g., in the expressions for the coefficients of a polynomial in terms of its roots. We shall therefore denote by $e_{r}(\lambda)$ the *$r$th elementary moment* of $\lambda$, which is obtained by substituting the $a_{l}$s into the $r$th elementary symmetric function $e_{r}$. Note that every symmetric function with index 0 is the constant 1, so that the 0th moment of every partition is 1 (even though the formula for $p_{k}$ above would give $\ell(\lambda)$ when $k=0$). An interesting feature of the resulting formula is that to express the coefficient associated with $\lambda$, we first have to modify $\lambda$ in two different directions, and take the elementary moments of this modification.
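The moments just defined are straightforward to compute from the list of summands of $\lambda$. The sketch below is illustrative only; it also includes the truncation $\lambda^{>s}$, one of the two modifications of $\lambda$ introduced in the next paragraph.

```python
def power_moment(parts, k):
    """p_k(lambda) = sum of a_l^k over the summands a_l."""
    return sum(a ** k for a in parts)

def elementary_moment(parts, r):
    """e_r(lambda): the coefficient of x^r in prod_l (1 + a_l * x),
    i.e. the r-th elementary symmetric function of the summands."""
    if r > len(parts):
        return 0
    coeffs = [1] + [0] * len(parts)
    for a in parts:
        for j in range(len(parts), 0, -1):
            coeffs[j] += a * coeffs[j - 1]
    return coeffs[r]

def truncation(parts, s):
    """The s-th truncation lambda^{>s}: drop every summand a_l <= s."""
    return [a for a in parts if a > s]
```

For $\lambda=(3,2,1)$ one gets $p_{1}=|\lambda|=6$, $p_{2}=14$, and $e_{2}=3\cdot2+3\cdot1+2\cdot1=11$, while $\lambda^{>1}=(3,2)$.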
More explicitly, given an integer $s\geq0$ and a partition $\lambda$, we shall denote by $\lambda^{>s}$ the *$s$th truncation* of $\lambda$, which is obtained by eliminating every number $a_{l}$ which satisfies $a_{l} \leq s$, or equivalently by setting each $m_{i}$ with $i \leq s$ to 0 and leaving the multiplicities $m_{i}$ with $i>s$ at their value $m_{i}(\lambda)$. Note that this operation may transform some non-trivial partitions into the trivial one, all of whose moments of positive index vanish by definition. In addition, for every partition $\mu$ and integer $s\geq0$ we denote by $(\mu)_{s}$ the partition obtained by replacing each number $a_{l}$ by its *Pochhammer symbol* $(a_{l})_{s}=\prod_{\upsilon=0}^{s-1}(a_{l}-\upsilon)=\frac{a_{l}!}{(a_{l}-s)!}$ (the latter equality holding also when $0 \leq a_{l}<s$, since then the numerator is finite and the denominator is infinite, but we shall use it for $\mu=\lambda^{>s}$ where no such
--- abstract: 'In this short note, we show that homogeneous Ricci solitons are algebraic. As an application, we see that the generalized Alekseevskii conjecture is equivalent to the Alekseevskii conjecture.' author: - Michael Jablonski title: Homogeneous Ricci solitons are algebraic --- [^1] Introduction ============ A Riemannian manifold $(M,g)$ is said to be a Ricci soliton if it satisfies the equation $$\label{eqn: ricci soliton} ric_g = cg + L_Xg$$ for some $c\in\mathbb R$ and some smooth vector field $X\in \mathfrak X(M)$. Such metrics are of interest as they correspond to self-similar solutions of the Ricci flow $$\frac{\partial}{\partial t} g = -2ric_g$$ That is, $g$ is the initial value of a solution to the Ricci flow of the form $g_t = c(t) \varphi_t^*g$, where $c(t)\in \mathbb R$ and $\varphi_t \in \mathfrak{Diffeo}(M)$. In this way, Ricci solitons are geometric fixed points of the flow and so are special metrics. Homogeneous Ricci solitons arise naturally as limits under the Ricci flow [@Lott:DimReductionAndLongTimeBehaviorOfRicciFlow; @Lauret:RicciFlowForSimplyConnectedNilmanifolds] and, independently, hold a distinguished place apart from other homogeneous metrics. For example, nilmanifolds cannot admit Einstein metrics, but do often admit Ricci solitons [@Jensen:TheScalarCurvatureOfLeftInvariantRiemannianMetrics; @Jablo:ModuliOfEinsteinAndNoneinstein], Ricci solitons on nilmanifolds are precisely the minima of a natural geometric functional [@LauretNilsoliton], and Ricci solitons are metrics of maximal symmetry on certain solvmanifolds [@Jablo:ConceringExistenceOfEinstein]. One natural kind of example arises as follows. Consider a homogeneous space $G/K$ where $K$ is closed and connected. For every derivation $D\in Der(\mathfrak g)$ such that $D:\mathfrak k \to \mathfrak k$, we have a well-defined map $D_{\mathfrak g/\mathfrak k} : \mathfrak g/\mathfrak k \to \mathfrak g / \mathfrak k$. 
Denote such derivations of $\mathfrak g$ by $Der(\mathfrak g/\mathfrak k)$. A homogeneous Ricci soliton $(G/K,g)$ is called *$G$-semi-algebraic* if the $(1,1)$ Ricci tensor is of the form $$\label{eqn: definition of semi-algebraic soliton} Ric = cId + \frac{1}{2}( D_{\mathfrak g/\mathfrak k} + D_{\mathfrak g/\mathfrak k} {}^t)$$ on $\mathfrak g/\mathfrak k \simeq T_eG/K$, for some $c\in \mathbb R$ and some $D\in Der(\mathfrak g/\mathfrak k)$. This definition is motivated by the idea of taking our family of diffeomorphisms $\{\varphi_t \}$ above to come from automorphisms of the group $G$ which leave $K$ invariant; see [@Jablo:HomogeneousRicciSolitons] or [@LauretLafuente:StructureOfHomogeneousRicciSolitonsAndTheAlekseevskiiConjecture] for more details. If our semi-algebraic Ricci soliton satisfies the seemingly stronger condition that $D_{\mathfrak g/\mathfrak k}$ is symmetric, then it is called a *$G$-algebraic Ricci soliton*. Up to this point, all known examples of semi-algebraic Ricci solitons were in fact algebraic and isometric to solvmanifolds. (This follows from [@Jablo:HomogeneousRicciSolitons] together with [@LauretLafuente:OnhomogeneousRiccisolitons].) Further, it was known that every homogeneous Ricci soliton must be semi-algebraic relative to its full isometry group [@Jablo:HomogeneousRicciSolitons]. We now present our main result. \[thm: main theorem\] Every $G$-semi-algebraic Ricci soliton is necessarily $G$-algebraic. Let $(M,g)$ be a homogeneous Ricci soliton. There exists a transitive group $G$ of isometries such that $M=G/K$ is a $G$-algebraic Ricci soliton. The theorem above resolves questions raised by Lafuente-Lauret [@LauretLafuente:StructureOfHomogeneousRicciSolitonsAndTheAlekseevskiiConjecture] and He-Petersen-Wylie [@HePetersenWylie:WarpProdEinsteinMetricsOnHomogAndHomogRicciSolitons]. In these works, it was shown that one can always extend a simply-connected, algebraic soliton to an Einstein metric on a larger homogeneous space.
There the goal was to relate the classical Alekseevskii conjecture on Einstein metrics to a more general version for Ricci solitons. More precisely, they showed that (among simply-connected manifolds) the Alekseevskii conjecture for Einstein metrics is equivalent to the (a priori) more general conjecture in the case of algebraic Ricci solitons. We state these conjectures for completeness. > **Alekseevskii Conjecture:** Every homogeneous Einstein metric with negative scalar curvature is isometric to a simply-connected solvmanifold. > **Generalized Alekseevskii Conjecture:** Every expanding homogeneous Ricci soliton is isometric to a simply-connected solvmanifold. Until now, it was not clear if these conjectures were equivalent. Applying [@LauretLafuente:StructureOfHomogeneousRicciSolitonsAndTheAlekseevskiiConjecture] or [@HePetersenWylie:WarpProdEinsteinMetricsOnHomogAndHomogRicciSolitons] in the simply-connected case together with [@Jablo:StronglySolvable] and the results here, we now know the following. The generalized Alekseevskii conjecture is equivalent to the Alekseevskii conjecture. It is important to note that the Alekseevskii conjecture stated above is a more modern, geometric version than that given in [@Besse:EinsteinMflds]. The version given in [@Besse:EinsteinMflds] has the weaker, topological conclusion that a non-compact, homogeneous, Einstein space is only diffeomorphic to $\mathbb R^n$. It is still an open question as to whether the classical version stated in [@Besse:EinsteinMflds] is equivalent to the stronger version we pose above. *Acknowledgments:* It is our pleasure to thank Ramiro Lafuente for providing useful comments on a draft of this manuscript. Ricci solitons by type ====================== The analysis of (homogeneous) Ricci solitons varies depending on which of the following categories the metric falls into. A Ricci soliton is called *shrinking, steady, or expanding* (respectively) if the cosmological constant $c$ appearing in Eqn. 
\[eqn: ricci soliton\] satisfies $c>0$, $c=0$, or $c<0$ (respectively). Shrinking solitons {#shrinking-solitons .unnumbered} ------------------ The simplest example of a non-Einstein, homogeneous, shrinker is obtained by considering a compact homogeneous Einstein space $M'$ (which necessarily has positive scalar curvature) and taking a product with $\mathbb R^n$, i.e. $M=M'\times \mathbb R^n$. Here the vector field $X\in\mathfrak{X}(M)$ appearing in Eqn. \[eqn: ricci soliton\] generates a family of diffeomorphisms which simply dilate the $\mathbb R^n$ factor. Examples of this type are called trivial Ricci solitons and a result of Petersen-Wylie [@Petersen-Wylie:OnGradientRicciSolitonsWithSymmetry] says that every homogeneous shrinking Ricci soliton is finitely covered by a trivial one. Observe that such spaces are algebraic Ricci solitons. Steady solitons {#steady-solitons .unnumbered} --------------- A homogeneous steady soliton is necessarily flat. This well-known fact is proved as follows. Along the Ricci flow of any homogeneous manifold, the scalar curvature $sc$ evolves by the ODE $$\frac{d}{d t}sc = 2 |Ric|^2$$ As the scalar curvature of a steady soliton does not change along the flow, we see that the homogeneous, steady solitons are Ricci flat and so flat by [@AlekseevskiiKimelfeld:StructureOfHomogRiemSpacesWithZeroRicciCurv]. Such spaces are trivially algebraic Ricci solitons. Expanding solitons {#expanding-solitons .unnumbered} ------------------ Every homogeneous, expanding Ricci soliton is necessarily non-compact, non-gradient and all known examples of such spaces are isometric to solvable Lie groups with left-invariant metrics. While there is no characterization in this case as nice as the previous two cases, new structural results have recently appeared in [@LauretLafuente:StructureOfHomogeneousRicciSolitonsAndTheAlekseevskiiConjecture]. The results obtained there are essential in our proof and we briefly recall those which
**A direct method of solution for the Fokas-Lenells derivative** **nonlinear Schrödinger equation: II. Dark soliton solutions** Yoshimasa Matsuno[^1] *Division of Applied Mathematical Science,* *Graduate School of Science and Engineering* *Yamaguchi University, Ube, Yamaguchi 755-8611, Japan* In a previous study (Matsuno Y [ *J. Phys. A: Math. Theor.*]{} [**45**]{} (2012) 23202), we have developed a systematic method for obtaining the bright soliton solutions of the Fokas-Lenells derivative nonlinear Schrödinger equation (FL equation shortly) under vanishing boundary condition. In this paper, we apply the method to the FL equation with nonvanishing boundary condition. In particular, we deal with a more sophisticated problem on the dark soliton solutions with a plane wave boundary condition. We first derive the novel system of bilinear equations which is reduced from the FL equation through a dependent variable transformation and then construct the general dark $N$-soliton solution of the system, where $N$ is an arbitrary positive integer. In the process, a trilinear equation derived from the system of bilinear equations plays an important role. As a byproduct, this equation gives the dark $N$-soliton solution of the derivative nonlinear Schrödinger equation on the background of a plane wave. We then investigate the properties of the one-soliton solutions in detail, showing that both the dark and bright solitons appear on the nonzero background which reduce to algebraic solitons in specific limits. Last, we perform the asymptotic analysis of the two- and $N$-soliton solutions for large time and clarify their structure and dynamics. 
[*PACS:*]{} 05.45.Yv; 42.81.Dp; 02.30.Jr [*Keywords:*]{} derivative nonlinear Schrödinger equation; dark soliton; direct method of solution The Fokas-Lenells derivative nonlinear Schrödinger (NLS) equation (FL equation shortly) is a completely integrable nonlinear partial differential equation (PDE) which has been derived as an integrable generalization of the NLS equation using bi-Hamiltonian methods \[1\]. In the context of nonlinear optics, the FL equation models the propagation of nonlinear light pulses in monomode optical fibers when certain higher-order nonlinear effects are taken into account \[2\]. We employ the following equation which can be derived from its original version by a simple change of variables combined with a gauge transformation \[2\]: $$u_{xt}=u-2{\rm i}|u|^2u_x. \eqno(1.1)$$ Here, $u=u(x,t)$ is a complex-valued function of $x$ and $t$, and subscripts $x$ and $t$ appended to $u$ denote partial differentiations. The complete integrability of the FL equation has been demonstrated by means of the inverse scattering transform (IST) method \[3\]. In particular, a Lax pair and a few conservation laws associated with it have been obtained explicitly using the bi-Hamiltonian structure and the multisoliton solutions have been derived by applying the dressing method \[4\]. Another remarkable feature of the FL equation is that it is the first negative flow of the integrable hierarchy of the derivative NLS equation \[2, 5\]. In a previous study \[6\] which is referred to as I hereafter, the two different expressions of the bright $N$-soliton solution of the FL equation have been obtained by a direct method which does not have recourse to the IST, and their properties have been explored in detail. Here, we construct the dark $N$-soliton solution of the FL equation on the background of a plane wave. 
Explicitly, we consider the boundary condition $$u \rightarrow \rho\,{\rm exp}\left\{{\rm i}\left(\kappa x-\omega t+\phi^{(\pm)}\right)\right\}, \quad x \rightarrow \pm\infty, \eqno(1.2)$$ where $\rho(>0)$ and $\kappa$ are real constants representing the amplitude and wavenumber, respectively, $\phi^{(\pm)}$ are real phase constants and the angular frequency $\omega=\omega(\kappa)$ obeys the dispersion relation $\omega=1/\kappa+2\rho^2.$ Note that the plane wave given in (1.2) is an exact solution of the FL equation. As will be discussed later, the possible values of $\kappa$ must be restricted to ensure the existence of the soliton solutions. A similar problem to that posed in this paper has been studied recently and an explicit formula for the dark $N$-soliton solution has been presented by an ingenious application of the Bäcklund transformation between solutions of the FL equation and the Ablowitz-Ladik hierarchy \[7\]. Nevertheless, the detailed analysis of the soliton solutions has not been undertaken as yet. The exact method of solution employed here, which is sometimes called the direct method \[8\] or the bilinear transformation method \[9\], is a powerful tool for analyzing soliton equations and differs from the method used in \[7\]. Once the equation under consideration is transformed to a system of bilinear equations, the standard technique in the bilinear formalism is applied to obtain soliton solutions. A novel feature of the bilinearization of the FL equation is that one of the bilinear equations can be replaced by a [*trilinear*]{} equation, as already demonstrated in I. The same situation happens in the current dark soliton problem. However, the resulting trilinear equation will be used essentially in the process of performing the proof of the dark $N$-soliton solution. This paper is organized as follows. In section 2, we bilinearize the FL equation under the boundary condition (1.2). 
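As a quick sanity check on the dispersion relation $\omega=1/\kappa+2\rho^2$ quoted above, one can substitute the plane wave of (1.2) (with the constant phase dropped) directly into the FL equation (1.1). The following symbolic sketch, which is not part of the original paper, carries out the substitution with sympy:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
rho, kappa = sp.symbols('rho kappa', real=True, nonzero=True)

# dispersion relation quoted in the text: omega = 1/kappa + 2*rho^2
omega = 1/kappa + 2*rho**2

# plane wave u = rho * exp(i(kappa*x - omega*t)); for it, |u|^2 = rho^2
u = rho * sp.exp(sp.I*(kappa*x - omega*t))

lhs = sp.diff(u, x, t)                       # u_{xt}
rhs = u - 2*sp.I*rho**2*sp.diff(u, x)        # u - 2i|u|^2 u_x

# the FL equation (1.1) is satisfied identically for this omega
assert sp.simplify(lhs - rhs) == 0
```

Equating both sides by hand gives $\kappa\omega = 1 + 2\kappa\rho^2$, i.e. exactly the relation stated in the text.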
We then show that one of the resulting bilinear equations can be replaced by a trilinear equation. In section 3, we present the dark $N$-soliton solution of the bilinear equations. It has a simple structure expressed in terms of certain determinants. Subsequently, we perform the proof of the dark $N$-soliton solution using an elementary theory of determinants in which Jacobi’s identity will play a central role. As already noted, the proof of the trilinear equation turns out to be the core of the analysis. In accordance with the relation between the FL equation and the derivative NLS equation at the level of the Lax representation, we also demonstrate that the dark $N$-soliton solution obtained here yields the dark $N$-soliton solution of the derivative NLS equation simply by replacing the time dependence of the solution. As in the case of the defocusing NLS equation subject to nonvanishing boundary conditions, it is necessary for the existence of dark solitons that the asymptotic state given by (1.2) be stable. Hence, we perform the linear stability analysis of the plane wave solution (1.2) and provide a criterion for the stability. In section 4, we first investigate the properties of the one-soliton solution in detail. We find that depending on the sign of $\kappa$ and that of the real part of the complex amplitude parameter, the solution can be classified into two types, i.e., the dark and bright solitons. The latter soliton may be termed “anti-dark soliton” since the background field is nonzero. However, we use the term “bright soliton” throughout the paper. We demonstrate that regardless of the sign of $\kappa$, the bright soliton has a limiting profile of algebraic type (or an algebraic bright soliton) whereas an algebraic dark soliton appears only if $\kappa<0$. We then analyze the asymptotic behavior of the two-soliton solution and derive the explicit formulas for the phase shift in terms of the amplitude parameters of solitons. 
In particular, we address the interaction between a dark soliton and a bright soliton as well as that of two dark solitons. Last, the similar asymptotic analysis to that of the two-soliton solution is performed for the general dark $N$-soliton solution. Section 5 is devoted to concluding remarks. In this section, we develop a direct method of solution for constructing dark soliton solutions of the FL equation (1.1) under the boundary condition (1.2). In particular, we show that it can be transformed to a system of bilinear equations by introducing the same type of the dependent variable transformation as that employed in I for the bilinearization of the FL equation under vanishing boundary condition. We also demonstrate that this system yields a trilinear equation which will play a crucial role in our analysis. The bilinearization of the FL equation (1.1) is established by the following proposition: [**Proposition 2.1.**]{} [*By means of the dependent variable transformation $$u=\rho\,{\rm e}^{{\rm i}(\kappa x-\omega t)}\,{g\over f}, \eqno(2.1)$$ with $\omega=1/\kappa+2\rho^2$, equation (1.1) can be decoupled into the following system of bilinear equations for the tau functions $f$ and $g$ $$D_tf\cdot f^*-{\rm i}\rho^2(gg^*-ff^*)=0, \eqno(2.2)$$ $$D_xD_tf\cdot f^*-{\rm i}\rho^2D_xg\cdot g^*+{\rm i}\rho^2D_xf\cdot f^*+2\kappa\rho^2(gg^*-ff^*)=0, \eqno(2.3)$$ $$D_xD_tg\cdot f+{\rm i}\kappa D_tg\cdot f-{\rm i}\omega D_xg\cdot f=0. \eqno(2.4)$$ Here, $f=f(x
--- abstract: '[ With the Keck Interferometer, we have studied at 2 $\mu$m the innermost regions of several nearby, young, dust-depleted “transitional” disks. Our observations target five of the six clearest cases of transitional disks in the Taurus/Auriga star-forming region (DM Tau, GM Aur, LkCa 15, UX Tau A, and RY Tau) to explore the possibility that the depletion of optically thick dust from the inner disks is caused by stellar companions rather than the more typical planet-formation hypothesis. At the 99.7% confidence level, the observed visibilities exclude binaries with flux ratios of at least 0.05 and separations ranging from 2.5 to 30 mas (0.35 - 4 AU) over $\gtrsim\,$94% of the area covered by our measurements. All targets but DM Tau show near-infrared excess in their SED higher than our companion flux ratio detection limits. While a companion has previously been detected in the candidate transitional disk system , we can exclude similar-mass companions as the typical origin for the clearing of inner dust in transitional disks and of the near-infrared excess emission. Unlike CoKu Tau/4, all our targets show some evidence of accretion. We find that all but one of the targets are clearly spatially resolved, and UX Tau A is marginally resolved. Our data are consistent with hot material on small scales (0.1 AU) inside of and separated from the cooler outer disk, consistent with the recent SED modeling. These observations support the notion that some transitional disks have radial gaps in their optically thick material, which could be an indication of planet formation in the habitable zone ($\sim$ a few AU) of a protoplanetary disk. ]{}' author: - 'Jorg-Uwe Pott, Marshall D. Perrin, Elise Furlan, Andrea M. Ghez, Tom M. 
Herbst, Stanimir Metchev' title: Ruling out Stellar Companions and Resolving the Innermost Regions of Transitional Disks with the Keck Interferometer --- Introduction ============ Circumstellar disks are a natural outcome of the star-formation process: when a molecular cloud core collapses, it gives rise to a central star surrounded by a rotating circumstellar disk, which transports material towards the star. Over time, the disk material dissipates through [ processes such as]{} accretion onto the central star, [ disk winds]{} and the formation of planets. At an age of $\sim5~{\rm Myr}$, about 90% of disks have already dispersed, and within $10~{\rm Myr}$ of their formation, almost all pre-main-sequence stars are diskless [e.g. @2006ApJ...638..897S]. While it is now believed that such disks commonly give rise to planetary systems, the details of this process remain unclear. Theory predicts that disks evolve from the inside out: dust grain growth is expected to occur faster in the inner disk than in the outer disk , higher densities favor planet formation in the inner disk [@2002Icar..156..291B], and photoevaporation by the central star will cause the inner disk to dissipate first [@2001MNRAS.328..485C]. [ Possible observational support for inside out disk evolution has been found in a small number of so-called transitional disks.]{} These systems show a strong mid-infrared excess ($\gtrsim\,8\,\mu {\rm m}$) revealing the presence of dust [ but significantly reduced or no shorter wavelength infrared excess compared to typical classical T Tauri disks, indicating a depletion of optically thick inner dust out to a radius of a few AU]{} [@1990AJ.....99.1187S; @1997ApJ...489L.173M; @2001AJ....121.1003S; @2005ApJ...621..461D; @2005ApJ...630L.185C; @2006ApJ...636..932M; @2006ApJS..165..568F; @2006AJ....131.1574L; @2006ApJ...643.1003M; @2007ApJ...664L.111E_disk; @2007ApJ...664L.107B]. 
Therefore, these disks might be in the process of dispersing and this has often been assumed to be due to the influence of newly formed planets [e.g. @2004ApJ...612L.137Q]. Proposed explanations of the transitional disk phenomenon reveal two important features which can be tested directly by high angular resolution imaging observations. \(1) The depletion of dust inside of the outer, mid-infrared disk could be caused by a close (AU-scale) binary system inside of the disk. [ Binary companions can perturb a circumstellar disk and create inner holes with diameters comparable to the binary separation [@1994ApJ...421..651A see also the discussion for DI Tau in Meyer et al. 1997].]{} To call such a circumbinary disk [*transitional*]{} would be misleading, since circumbinary disks can be dynamically stable and longer-lasting than the short ($<$ Myr) time-scales derived from the small (few percent) fractional abundance of transitional disks around pre-main-sequence stars in nearby, a few Myr young, star-forming regions [@2006ApJS..165..568F; @2008AJ....135..966F; @2009ApJ...703.1964F note that the fractional abundances of transitional disks might be as high as a few tens of percent, depending on the exact definition of transitional disks, in particular if and what type of [ residual]{} inner disk emission is [ permitted]{}]. [ Also, a close companion affects the SED interpretation of apparent transitional disk systems.]{} Unresolved infrared companions can create additional near-infrared and, if embedded, mid-infrared flux, that appears comparable to the infrared excess radiation seen in transitional disk systems, which typically is interpreted as disk emission [e.g. @2003ApJ...592..288D]. 
[ Indeed, recent diffraction limited NIR imaging with the Keck II telescope of the candidate transitional disk system indicates that its inner hole (10 AU radius) is actually caused by a newly discovered binary companion of $\sim\,8\,{\rm AU}$ separation, removing the need to invoke other processes like planet formation as the disk clearing mechanism in transitional disk systems [@2008ApJ...678L..59I].]{} While the census of very close companions [ of T Tauri stars (TTS) in star-forming regions]{} is far from complete, the companion star fraction in young, nearby star-forming regions is about 50% in the 15-1800 AU separation range and $\sim\,20\%$ at separations less than 10 AU . The companion star fraction typically decreases towards smaller separations (less than a few AU), but suggest that YSOs in Ophiuchus have a companion fraction of at least 10% at the 0.8-4 AU separation scale. [ These observational constraints suggest that there would be enough binaries to populate a large fraction of transitional disks, although the short-period binary frequency appears to vary between different sites of star formation . ]{} [ (2) While some transitional disks may be completely cleared of material in the inner region, the planet formation hypothesis suggests that disk clearing may often result in gaps between inner and outer dusty regions . Other possible disk clearing mechanisms such as photoevaporation would produce strictly inside-out clearing [@2000prpl.conf..401H; @2001MNRAS.328..485C], so evidence for gaps in disks (in contrast to totally cleared holes) tends to support the planet formation hypothesis. Many transitional disks show some infrared excess emission inside of the outer optically thick disk, which itself dominates at wavelengths longer than $\sim\,8~\mu$m. 
It has been shown for a few systems that this near-infrared excess can be explained by a small amount of emitting dust close to the star at $\sim\,0.1\,$AU-scales, leaving a gap between this innermost dust and the outer mid-infrared disk [e.g. @2008ApJ...682L.125E_LkCa]. Therefore, the near-infrared excess in transitional disk systems might originate from such small size scales, if not emitted by a so far unresolved companion (case 1).]{} To [ directly]{} assess (1) the presence of close binary companions within transitional disks and [ (2) the emission size scale of the near-infrared excess over the stellar continuum]{}, we used the Keck Interferometer (KI) in $V^2$ mode[^1] to observe 5 transitional disks in the nearby ($\sim$ 140 pc) Taurus-Auriga young star-forming [ region]{}. The nominal interferometric resolution of $\sim\,2.7$ mas and the field of view of $\sim\,50\,{\rm mas}$, offered in the $V^2$ mode, is well suited to resolve any companion stars from about 0.5 to 5 AU [ distance]{} from the target primary stars. This angular resolution is a significant improvement over the resolution available with speckle or aperture mask interferometry and adaptive optics at 8-10 m class telescopes ($\gtrsim$ 25 mas). This article is organized as follows: [ detailed target properties ]{}are reported in Sect. \[sec:2\]. Observations and data reduction are given in Sect. \[sec:3\]. The results are discussed in Sect. \[sec:5\], and the conclusions of our experiment are given in Sect. \[sec:6\]. \[sec:2\]Target selection and properties ======================================== [lccccc|c]{} & DM Tau$^a$ & GM Aur$^a$ & Lk
--- abstract: 'The optical/near-IR stellar continuum carries unique information about the stellar population in a galaxy, its mass function and star-formation history. Star-forming regions display rich emission-line spectra from which we can derive the dust and gas distribution, map velocity fields, metallicities and young massive stars and locate shocks and stellar winds. All this information is very useful in the dissection of the starburst phenomenon. We discuss a few of the advantages and limitations of observations in the optical/near-IR region and focus on some results. Special attention is given to the role of interactions and mergers and observations of the relatively dust-free starburst dwarfs. In the future we expect new and refined diagnostic tools to provide us with more detailed information about the IMF, strength and duration of the burst and its triggering mechanisms.' author: - 'Nils Bergvall, Thomas Marquart, Göran Östlin, Erik Zackrisson' --- galaxies:dwarfs, galaxies:evolution, galaxies:interactions, galaxies:starburst, infrared:galaxies Introduction ============ Optical/near-IR broadband photometry of a starburst galaxy gives a first indication of burst strength, age and distribution of the young and old populations and their basic morphological structure parameters. Model-based spectrophotometric tools are provided for more detailed analysis. A rich set of emission lines is used for analysis of kinematics, chemical abundance, shocks, stellar upper mass limit and distribution of dust and molecular gas. Absorption line indices provide estimates of age and IMF of the evolved population. Fig. \[specfig\] shows a synthetic spectrum of a mixture of a young and old population with a mass ratio of 2:1. A general review of the diagnostic tools and the limitations of the photoionization models used in the analysis is discussed by Schaerer (2001). 
![image](bergvall1.eps){width="65.00000%"} \[specfig\] Heavy dust obscuration, in particular in LIRGs and ULIRGs, has been a problem in the optical/near-IR. Here we will therefore focus on starbursts in low-extinction regions, notably starburst dwarf galaxies. First, however, we will discuss a widely debated issue where optical data originally had a strong impact, namely the importance of gravitational interactions as a starburst triggering mechanism. Starbursts and tidal interaction ================================ It is clear from the properties of ULIRGs that mergers are required to trigger major starbursts. But is it a sufficient requirement? How often do mergers and close encounters generate starbursts? To answer this question it is common to compare two galaxy samples - interacting/merging galaxies (IGs) or pairs, and non-interacting galaxies (NIGs). A problem with the comparison is that NIGs and IGs have evolved in different environments where e.g. mergers, ram pressure, harassment and gas infall have different influence. Integrated broadband photometry and H$\alpha$ emission are the most widely used tools in this context. In the classical paper by Larson & Tinsley (1978) the authors claim, based on UBV data, that interactions frequently trigger a major SF increase involving as much as 5% of the total mass. Many follow-up studies seem to confirm the result but are often influenced by strong selection effects, non-matching morphological type distribution NIGs/IGs and are focusing on the most dramatic cases. Studies based on more well constrained samples (Bergvall et al. 2002, Brosch et al. 2004) do not confirm these results but find that tidal interactions have an insignificant influence on the SF history of galaxies in the local universe. There seems to be an agreement however, of a correlation between interaction and increased SF within the central kpc (first discussed by Keel et al. 1985). 
[*Galaxy pairs*]{} with small separations show similar trends as seen in H$\alpha$ (Barton et al. 2000, Lambas et al. 2003, Nikolic et al. 2004). The mean increase is in both cases is quite moderate however, and few cases are qualified to be called ’nuclear starbursts’. Bergvall et al. (2000) and Varela et al. (2004) find that masses of perturbed galaxies are higher than NIGs of similar morphology indicating that they experience mergers more frequently. This may lead to a steady inflow of gas that can explain part of the increased SF in the centre. Varela et al. also find a [*higher frequency of bars in disturbed systems*]{}, in accordance with related studies in the past (see Knapen 2004). Bars are known to generate mass inflows. Thus it is not clear what is the main triggering mechanism of the central increase in SF. The conclusion must be that there is [*no strong support that tidal interactions generate starburst activity that significantly affects the SF history of galaxies in the local universe*]{}. Estimates give room for major starbursts among less than a few % of the IGs. Blue compact galaxies ===================== Blue compact galaxies (BCGs) is a not well defined type as the galaxies are selected either from spectroscopic or photometric critera. The general properties are high surface brightness, low chemical abundance and a high gas mass fraction. They have a wide range of morphologies (Loose & Thuan 1986). Are they bursting? Fig. \[mbhilb\] shows L$_B$/$\cal M_{\rm HI}$ vs. M$_B$ of different types of gas rich galaxies. The BCG sample is incomplete but constitutes a representative part of the nearby sample of starburst dwarfs (Mrk, UM, Tololo etc.). We see that there is a continuous distribution towards high L$_B$/$\cal M_{\rm HI}$ but that the properties of most BCGs are similar to dIrr and late type spirals of similar luminosity, i.e. they are probably not bursting. 
The high surface brightness of the burst could be due to a high column density (and a small scalelength, cf. Papaderos et al. 1996 and Salzer et al. 2002), perhaps caused by a low angular momentum. Since their gas mass often constitutes a major fraction of the total mass (Salzer et al. 2002), the diagram shows that starbursts in these galaxies are either shortlived or rare. ![image](bergvall2.eps){width="60.00000%"} \[mbhilb\] Some BCGs have a $\sim$ tenfold global increase in SFR, i.e. they are true starbursts. What are their specific properties? There is no strong indication of a correlation between SF activity and tidal interactions (Brosch 2004, Hunter and Elmegreen 2004). On the other hand, BCGs appear to be involved in mergers with intense SF more frequently than other dwarfish galaxies (e.g. Gil de Paz 2004). This could indicate that mergers are important triggers whose morphological signatures are shortlived. The gas consumption timescales are typically shorter than 100 Myr, i.e. similar to the dynamical timescale of a merger. ### Ages and masses Dynamical mass estimates of BCGs are difficult since the kinematics sometimes are quite chaotic due to the mass motions that cause the burst and because of the SN winds. To overcome the problem with the stellar winds it becomes necessary to use stellar absorption features. The only useful lines for this purpose are the Ca II triplet lines at about 8500 Å. Not until quite recently has this option become accessible (Östlin et al. 2004). The results are very promising and will soon help to solve the question regarding the coupling between gas and stars and facilitate the detailed analysis of velocity fields based on H$\alpha$ (e.g. Marquart et al. 2004). Age and SFR are often estimated from the H$\alpha$ flux, the H$\alpha$ equivalent width (EW(H$\alpha$)) and broadband photometry. From this the ’photometric mass’ is obtained assuming that the SFR is constant. 
The age is however difficult to determine, even if we assume that the SFR is constant. In such a case, EW(H$\alpha$) is a function of the IMF and age. The IMF slope in starbursts seems to be well constrained in the intermediate stellar mass range (Elmegreen 2004) but not so well for high masses. Fig. \[ewfig\] shows the predicted EW(H$\alpha$) for two values of the upper mass limit, 40 and 120 solar masses. It can be seen that the predicted ages differ by a factor of 5-10 over a large age range. There is also an observational problem in that intense starbursts may have huge Strömgren spheres from which the H$\alpha$ emission may be lost due to a limited aperture size. The uncertainty in the determination of the widely used b parameter (b = SFR/$<$SFR$>$) obviously must be quite high, in particular if we consider the poorly constrained SF history. For BCGs there seems to be a simple way to account for the SF history reasonably well. It is based on a two-component model of the galaxy consisting of a starburst superposed on a host galaxy with an exponential luminosity profile. If photometric masses are applied to this model we find that there is a fairly tight correlation between mass and central velocity dispersion (Östlin et al. 2001), indicating that this simple model is quite successful. ![image](bergvall3.eps){width="50.00000%"} \[ewfig\] A very useful method to determine the past starburst activity in a galaxy is based on its rich system of super star clusters and globular clusters (GCs). The GC IMF is Salpeter-like and their stellar content is coeval. This makes them quite reliable as standard clocks and optical/near-IR photometry and spectroscopy
--- abstract: 'We give a short proof of a recent result by Bernik, Mastnak, and Radjavi, stating that an irreducible group of complex matrices with nonnegative diagonal entries is diagonally similar to a group of nonnegative monomial matrices. We also explore the problem when an irreducible matrix semigroup in which each member is diagonally similar to a nonnegative matrix is diagonally similar to a semigroup of nonnegative matrices.' address: 'Department of Mathematics, Faculty of Mathematics and Physics, University of Ljubljana, Jadranska 19, SI-1000 Ljubljana, Slovenia' author: - Grega Cigler - Roman Drnovšek title: On semigroups of matrices with nonnegative diagonals --- matrices ,semigroups ,nonnegative matrices ,cones ,irreducibility 15B48 ,20M20 ,47D03 6.8mm Introduction ============ Multiplicative semigroups of matrices with nonnegative diagonal entries have been studied in the papers [@BMR] and [@SWG]. Their authors considered the general question under which additional assumptions such a semigroup is simultaneously similar to a semigroup of nonnegative matrices. The main result of [@BMR] is that every irreducible group of complex matrices with nonnegative diagonal entries is diagonally similar to a group of nonnegative monomial matrices. In Section 2 we give a short proof of this result. Our proof is more geometric and less group-theoretic than the proof in [@BMR]. Multiple authors of the paper [@SWG] provided several examples showing that it is impossible to extend this result from groups to semigroups. So, to obtain similarity to a semigroup of nonnegative matrices, stronger assumptions on a given semigroup must be imposed. In Section 3 we explore the problem when an irreducible matrix semigroup in which each member is diagonally similar to a nonnegative matrix is necessarily diagonally similar to a semigroup of nonnegative matrices. We now recall some definitions and basic facts. The set of all nonnegative real numbers is denoted by ${{\mathbb{R}}}_+$. 
A convex set $K \subseteq {{\mathbb{R}}}^n$ is said to be a [*cone*]{} if $r K \subseteq K$ for all $r \in {{\mathbb{R}}}_+$. A cone $K \subseteq {{\mathbb{R}}}^n$ is [*proper*]{} if it is closed, [*pointed*]{} ($K \cap (-K) = \{0\}$), and [*solid*]{} (the interior of $K$ is nonempty). The most natural example of a proper cone is the [*nonnegative orthant*]{} ${{\mathbb{R}}}_+^n$. A cone $K \subseteq {{\mathbb{R}}}^n$ is [*reproducing*]{} if $K - K = {{\mathbb{R}}}^n$. It is well-known that a closed cone is solid if and only if it is reproducing. Let $K$ be a closed cone in ${{\mathbb{R}}}^n$. A vector $x \in K$ is an [*extremal vector*]{} of $K$ if $y \in K$ and $x-y \in K$ imply that $y$ is a nonnegative multiple of $x$. By ${{\rm Ext \,}}(K)$ we denote the set of all extremal vectors of $K$. By the Krein-Milman theorem, $K$ is the convex hull of ${{\rm Ext \,}}(K)$. The angle $\phi \in [0, \pi]$ between non-zero vectors $x$, $y \in {{\mathbb{R}}}^n$ is determined by the equality  $x^T y = \|x\| \, \|y\| \, \cos \phi$. If $F$ is a subset of complex numbers, then $M_n(F)$ denotes the set of all $n \times n$ matrices with entries in $F$. If ${{\cal C}}\subseteq M_n({{\mathbb{C}}})$ is a collection of complex matrices, then $\overline{{{\cal C}}}$ denotes its closure in the Euclidean topology, and ${{\mathbb{R}}}_+ {{\cal C}}$ denotes its [*homogenization*]{}, i.e., ${{\mathbb{R}}}_+ {{\cal C}}= \{r C: r \in {{\mathbb{R}}}_+, C \in {{\cal C}}\}$. We say that a matrix has a [*nonnegative diagonal*]{} if all of its diagonal entries are nonnegative. A matrix is called [*monomial*]{} if it has the same nonzero pattern as a permutation matrix, i.e., there is exactly one nonzero entry in each row and in each column. 
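To make the notions of diagonal similarity and monomial matrices concrete, here is a small numerical sketch (the $2\times 2$ matrices and the similarity $D$ are our own illustrative choices, not taken from [@BMR]): matrices with nonnegative diagonals but negative off-diagonal entries become nonnegative monomial matrices after conjugation by a diagonal sign matrix.

```python
import numpy as np

# Hypothetical example: matrices with nonnegative (here zero) diagonals but
# negative off-diagonal entries, together with some of their products.
G1 = np.array([[0.0, -1.0], [-1.0, 0.0]])
G2 = np.array([[0.0, -0.5], [-2.0, 0.0]])

D = np.diag([1.0, -1.0])  # diagonal similarity; note D is its own inverse

for G in (G1, G2, G1 @ G2, G2 @ G1):
    assert np.all(np.diag(G) >= 0)          # hypothesis: nonnegative diagonal
    H = D @ G @ np.linalg.inv(D)
    assert np.all(H >= 0)                   # conclusion: H is nonnegative
    # monomial check: exactly one nonzero entry in each row and each column
    assert np.all((H != 0).sum(axis=0) == 1)
    assert np.all((H != 0).sum(axis=1) == 1)
```

Conjugation by $D=\operatorname{diag}(1,-1)$ removes the signs while leaving the diagonal entries unchanged, which is exactly the kind of similarity the main theorem of [@BMR] produces.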
A collection ${{\cal C}}\subseteq M_n({{\mathbb{C}}})$ (where $n \ge 2$) is [*reducible*]{} if there exists a common invariant subspace other than the trivial ones $\{0\}$ and ${{\mathbb{C}}}^n$, or equivalently, there exists an invertible matrix $S \in M_n({{\mathbb{C}}})$ such that the collection $S {{\cal C}}S^{-1}$ has a block upper-triangular form; otherwise, the collection ${{\cal C}}$ is said to be [*irreducible*]{}. If the matrix $S$ can be chosen to be a permutation matrix, then the collection ${{\cal C}}$ is said to be [*decomposable*]{}; otherwise, it is called [*indecomposable*]{} (or [*ideal-irreducible*]{}). Groups of matrices with nonnegative diagonals ============================================= The study of semigroups of matrices having nonnegative diagonals was initiated by the authors of [@BMR]. They started their discussion by the following result (see [@BMR Theorem 4.1]). \[rank-one\] Let ${{\cal S}}\subseteq M_n({{\mathbb{C}}})$ be an irreducible semigroup of matrices of rank at most one having nonnegative diagonals. If $\overline{{{\mathbb{R}}}_+ {{\cal S}}} = {{\cal S}}$, then, after a diagonal similarity, ${{\cal S}}= X Y^T$ for some subsets $X$ and $Y$ of ${{\mathbb{R}}}_+^n$ each of which spans ${{\mathbb{C}}}^n$. Using the Haar measure one can prove the following assertion (see [@BMR Proposition 4.3]). \[positive-valued\] Let ${{\cal S}}\subseteq M_n({{\mathbb{C}}})$ be an irreducible semigroup of matrices. Suppose that $\overline{{{\mathbb{R}}}_+ {{\cal S}}} = {{\cal S}}$ and that there exists a non-zero functional $\varphi: M_n({{\mathbb{C}}}) \to {{\mathbb{C}}}$ such that $\varphi(S) \in {{\mathbb{R}}}_+$ for all $S \in {{\cal S}}$. Then ${{\cal S}}$ has members of rank one. The following theorem is the main result of [@BMR Theorem 5.5]. We provide a short proof that is more geometric and less group-theoretic than the original one. 
\[positivediaggroup\] If ${{\cal G}}\subset M_n({{\mathbb{C}}})$ is an irreducible group of matrices with nonnegative diagonals, then, up to a diagonal similarity, ${{\cal G}}$ is a group in $M_n({{\mathbb{R}}}_+)$. Therefore, each member of the group ${{\cal G}}$ is a nonnegative monomial matrix. With no loss of generality we may assume that $t G \in {{\cal G}}$ for all $t > 0$ and $G \in {{\cal G}}$. Let ${{\cal S}}= \overline{{{\cal G}}}$. Applying Proposition \[positive-valued\] to the trace functional, we conclude that ${{\cal S}}$ contains elements of rank one. The semigroup ideal ${{\cal S}}_1$ of all elements of rank at most one in ${{\cal S}}$ is irreducible (see [@RR]). By Theorem \[rank-one\], we can assume that, after a diagonal similarity, ${{\cal S}}_1 = X Y^T$ for some subsets $X$ and $Y$ of ${{\mathbb{R}}}_+^n$ each of which spans ${{\mathbb{C}}}^n$. We can also assume that ${{\mathbb{R}}}_+ X = X$ and ${{\mathbb{R}}}_+ Y = Y$. The cone $\widehat{X}$ generated by $X$ is closed, and it is invariant under any $S \in {{\cal S}}$, since $ (Sx) y^T = S (x y^T) \in {{\cal S}}_1$ for every $x \in X$ and $y \in Y$. Similarly, it follows from $x (S^T y)^T = (x y^T) S \in {{\cal S}}_1$ that $Y$ is invariant under $S^T$. The dual cone $$Y^d = \{ z \in {{\mathbb{R}}}^n : z^T y \ge 0 \textrm{ for all } y \in Y \}$$ of the set $Y$ obviously contains ${{\mathbb{R}}}_{+}^n$, and it is invariant under any $S \in {{\cal S}}$, as $(Sz)^T y = z^T (S^T y) \ge 0$ for all $y \in Y$ and $z \in Y^d$. It follows that every $G \in {{\cal G}}$ is a bijective mapping on both $\widehat{X}$ and $Y^d$, implying that every $G \in {{\cal G}}$ maps ${{\rm Ext \,}}(\widehat{X})$ to itself, and the same holds for the cone $Y^d$. We want to show that the inclusions $\widehat{X} \subseteq {{\mathbb{R}}}_+^n \subseteq Y^d
--- abstract: | In the present study, a numerical method, the perturbation-iteration algorithm (PIA for short), has been employed to give approximate solutions of nonlinear fractional integro-differential equations (FIDEs). Compared with the exact solutions, the PIA produces reliable and accurate results for FIDEs. **Keywords:** Fractional integro-differential equations, Caputo fractional derivative, Initial value problems, Perturbation-Iteration Algorithm. author: - | Mehmet ŞENOL and İ. Timuçin DOLAPCİ\ Nevşehir Haci Bektaş Veli University, Department of Mathematics, Nevşehir, Turkey\ Celal Bayar University, Department of Mechanical Engineering,\ Manisa, Turkey\ e-mail:msenol@nevsehir.edu.tr, ihsan.dolapci@cbu.edu.tr title: 'On the Numerical Solution of Nonlinear Fractional Integro-Differential Equations' --- Introduction ============ Scientists have been interested in fractional-order calculus for as long as in classical integer-order analysis. However, for many years it found no practical applications in the physical sciences. Recently, fractional calculus has been used in applied mathematics, viscoelasticity [@1], control [@2], electrochemistry [@3], and electromagnetics [@4]. Developments in symbolic computation capabilities are one of the driving forces behind this rise. Many different multidisciplinary problems can be handled with fractional derivatives and integrals. The studies [@5] and [@6] describe the fundamentals of fractional calculus and give some applications. Existence and uniqueness of the solutions are also studied in [@7]. As in the physical sciences, fractional-order integro-differential equations (FIDEs) have given scientists the opportunity to describe and model many important and useful physical problems. Accordingly, a remarkable effort has been expended in recent years to propose numerical methods for solving FIDEs.
Fractional variational iteration method [@8; @9], homotopy analysis method [@10; @11], Adomian decomposition method [@12; @13] and fractional differential transform method [@14; @15; @16] are among these methods. In our study, we use the previously developed method PIA to obtain approximate solutions of some FIDEs. This method can be applied to a wide range of problems without requiring any special assumptions or restrictions. A few definitions of a fractional derivative of arbitrary order exist in the literature. The two most widely used are the Riemann-Liouville and Caputo fractional derivatives. The two definitions are quite similar but differ in the order in which differentiation and integration are applied. The Riemann-Liouville fractional integral of order $\alpha $ is defined by:$$J^{\alpha }u(x)=\frac{1}{\Gamma (\alpha )}\int_{0}^{x}(x-t)^{\alpha -1}u(t)dt,\quad \alpha >0,\quad x>0. \label{1}$$ The Riemann-Liouville and Caputo fractional derivatives of arbitrary order are defined as follows, respectively:$$D^{\alpha }u(x)=\frac{d^{m}}{dx^{m}}\left( J^{m-\alpha }u(x)\right) \label{2}$$$$D_{\ast }^{\alpha }u(x)=J^{m-\alpha }\left( \frac{d^{m}}{dx^{m}}u(x)\right) . \label{3}$$where $m-1<\alpha \leqslant m$ and $m\in \mathbb{N} .$ Because it accommodates initial conditions stated in terms of integer-order derivatives, the Caputo definition has often been preferred in recent years. The Caputo fractional derivative of a function $u(x)$ is defined as$$D_{\ast }^{\alpha }u(x)=\left\{ \begin{array}{cc} \frac{1}{\Gamma (m-\alpha )}\int_{0}^{x}(x-t)^{m-\alpha -1}u^{(m)}(t)dt, & m-1<\alpha \leqslant m \\ \frac{d^{m}u(x)}{dx^{m}} & \alpha =m\end{array}\right. \label{4}$$for $m-1<\alpha \leqslant m,$ $m\in \mathbb{N} ,$ $x>0,$ $u\in C_{-1}^{m}.$ The following lemma gives the two main properties of the Caputo fractional derivative.
If $m-1<\alpha \leqslant m,$ $u\in C_{\mu }^{m},$ $\mu \geqslant -1$ and $m\in \mathbb{N} ,$ then $$D_{\ast }^{\alpha }J^{\alpha }u(x)=u(x) \label{5}$$and $$J^{\alpha }D_{\ast }^{\alpha }u(x)=u(x)-\sum_{k=0}^{m-1}u^{(k)}(0^{+})\frac{x^{k}}{k!},\quad x>0. \label{6}$$ After this introductory section, Section 2 is reserved for a brief review of the perturbation-iteration algorithm PIA, in Section 3 some examples are presented to show the simplicity and effectiveness of the algorithm, and the paper ends with a conclusion in Section 4. Analysis of the PIA =================== Differential equations are naturally used to describe problems in engineering and other applied sciences. Searching for approximate solutions of complicated equations has always attracted attention. Many different methods and frameworks exist for this purpose, and perturbation techniques [@17; @18; @19] are among them. These techniques can be used to find approximate solutions for both ordinary and partial differential equations. A major limitation of perturbation techniques is the requirement of a small parameter in the equation, sometimes inserted artificially, which renders them valid only in a limited range. Therefore, to overcome the disadvantages that come with perturbation techniques, several methods have been proposed [@20; @21; @22; @23; @24; @25; @26; @27; @28; @29]. Parallel to these attempts, a perturbation-iteration method was proposed previously by Aksoy, Pakdemirli and their co-workers [@33; @34; @35]. An initial effort to produce root-finding algorithms for algebraic equations [@30; @31; @32] eventually led to formulae for differential equations as well [@33; @34; @35]. In the new technique, an iterative algorithm is constructed on the perturbation expansion. The present method has been tested on Bratu-type differential equations [@33] and first-order differential equations [@34] with success.
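The Caputo definition (4) can be sanity-checked numerically. For $u(x)=x^{2}$ and $0<\alpha \leqslant 1$, the known closed form is $D_{\ast }^{\alpha }x^{2}=\frac{\Gamma (3)}{\Gamma (3-\alpha )}x^{2-\alpha }$; the sketch below (ours, assuming SciPy is available) evaluates the defining integral by quadrature and compares:

```python
import math
from scipy.integrate import quad

def caputo(du_m, alpha, x, m=1):
    """Eq. (4): (1/Gamma(m-alpha)) * int_0^x (x-t)^(m-alpha-1) u^(m)(t) dt."""
    integrand = lambda t: (x - t) ** (m - alpha - 1) * du_m(t)
    value, _ = quad(integrand, 0.0, x)  # weak endpoint singularity; QAGS handles it
    return value / math.gamma(m - alpha)

alpha, x = 0.5, 1.0
numeric = caputo(lambda t: 2.0 * t, alpha, x)   # u(x) = x^2, so u'(t) = 2t
exact = math.gamma(3) / math.gamma(3 - alpha) * x ** (2 - alpha)
print(abs(numeric - exact) < 1e-6)  # -> True
```

For $\alpha = 0.5$ both routes give $\frac{8}{3\sqrt{\pi }}x^{3/2}$, confirming the definition on a concrete example.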
The algorithms were then applied to nonlinear heat equations as well [@35]. Finally, solutions of the Volterra and Fredholm type integral equations [@36] and of ordinary differential equation systems [@37] have been presented by the developed method. This new algorithm has not been used for any fractional integro-differential equations yet. To obtain the approximate solutions of FIDEs, the most basic perturbation-iteration algorithm PIA(1,1) is employed, taking one correction term in the perturbation expansion and correction terms of only first derivatives in the Taylor series expansion [@33; @34; @35]. Consider the fractional integro-differential equation $$F\left( u^{(\alpha )},u,\int_{0}^{t}{g\left( t,s,u(s)\right) ds},\varepsilon \right) =0 \label{7}$$ where $u=u(t)$ and $\varepsilon $ is a small parameter. The perturbation expansion with only one correction term is $$\begin{aligned} u_{n+1} &=&u_{n}+\varepsilon {\left( u_{c}\right) }_{n}\ \notag \\ u_{n+1}^{\prime } &=&u_{n}^{\prime }+\varepsilon {\left( u_{c}^{\prime }\right) }_{n}\ \label{8}\end{aligned}$$ Substituting Eq.$(\ref{8})$ into Eq.$(\ref{7})$ and expanding in a Taylor series keeping only first-order derivatives gives $$\begin{aligned} &&F\left( u_{n}^{\left( \alpha \right) },u_{n},\int_{0}^{t}{g\left( t,s,u_{n}(s)\right) ds},0\right) \notag \\ &&+F_{u}\left( u_{n}^{\left( \alpha \right) },u_{n},\int_{0}^{t}{g\left( t,s,u_{n}(s)\right) ds},0\right) \varepsilon {\left( u_{c}\right) }_{n} \notag \\ &&+F_{u^{\left( \alpha \right) }}\left( u_{n}^{\left( \alpha \right) },u_{n},\int_{0}^{t}{g\left( t,s,u_{n}(s)\right) ds},0\right) \varepsilon {\left( u_{c}^{(\alpha )}\right) }_{n} \notag \\ &&+F_{\int {u}}\left( u_{n}^{\left( \alpha \right) },u_{n},\int_{0}^{t}{g\left( t,s,u_{n
--- author: - | Robert M Corless$^1$, Robert HC Moir$^1$, Marc Moreno Maza$^1$, Ning Xie$^2$\ [$^1$Ontario Research Center for Computer Algebra,\ University of Western Ontario, Canada]{}\ [$^2$Huawei Technologies Corporation, Markham, ON]{} bibliography: - 'symbint.bib' title: 'Symbolic-Numeric Integration of Rational Functions' ---
--- abstract: 'We consider the Kalman-filtering problem with multiple sensors which are connected through a communication network. If all measurements are delivered to one place called the fusion center and processed together, we call the process centralized Kalman-filtering (CKF). When there is no fusion center, each sensor can also solve the problem by using local measurements and exchanging information with its neighboring sensors, which is called distributed Kalman-filtering (DKF). Noting that the CKF problem is a maximum likelihood estimation problem, which is a quadratic optimization problem, we reformulate the DKF problem as a consensus optimization problem, so that the DKF problem can be solved by many existing distributed optimization algorithms. A new DKF algorithm employing the distributed dual ascent method is provided and its performance is evaluated through numerical experiments.' author: - 'Kunhee Ryu and Juhoon Back${}^*$[^1] [^2]' bibliography: - 'mybib.bib' title: '**Distributed Kalman-filtering: Distributed optimization viewpoint**' --- Introduction ============ It goes without saying that the Kalman-filter, an optimal state estimator for dynamic systems, has had a huge impact on various fields such as engineering, science, economics, etc. [@Welch1995; @Bell1993TAC; @Humpherys2010CSM; @Thrun2005Book]. Basically, the filter predicts the expectation of the system state and its covariance based on the dynamic model and the statistical information on the model uncertainty or process noise, and then corrects them using the new measurement, the sensor model, and the information on measurement noise. When multiple sensors, possibly of different types, are available, we can simply combine the sensor models to process the measurements altogether. Thanks to the rapid development of sensor devices and communication technology, we are now able to monitor large-scale systems or environments such as traffic networks, plants, forests, seas, etc.
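The predict-correct cycle just described can be sketched in a few lines of generic, textbook-style code (the constant-velocity model and noise levels below are illustrative choices of ours, not from this paper):

```python
import numpy as np

# Model: x_{k+1} = A x_k + w, w ~ N(0, Q);  measurement: y_k = H x_k + v, v ~ N(0, R).
A = np.array([[1.0, 1.0], [0.0, 1.0]])  # toy constant-velocity model
H = np.array([[1.0, 0.0]])              # we observe position only
Q = 0.01 * np.eye(2)
R = np.array([[0.25]])

x = np.zeros(2)  # state estimate
P = np.eye(2)    # estimate covariance

def step(x, P, y):
    # predict: propagate the estimate and its covariance through the model
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # correct: fuse the new measurement y via the Kalman gain
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

x, P = step(x, P, np.array([1.0]))
```

With multiple sensors, the correction step simply stacks the individual $H_i$, $R_i$ into one combined sensor model, which is the CKF setting discussed next.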
In those systems, sensors are geographically distributed, may be of different types, and are usually not synchronized. To process the measurements, the basic idea would be to deliver all the data to one place, usually called the fusion center, and perform the correction step as in the case of multiple sensors. This is called centralized Kalman-filtering (CKF). As expected, CKF requires a powerful computing device to handle a large number of measurements and sensor models, is exposed to a single point of failure, and is difficult to scale up. In order to overcome these drawbacks, researchers developed distributed Kalman-filtering (DKF), in which each sensor in the network solves the problem by using local measurements and communicating with its neighbors. Compared with CKF, DKF is advantageous in terms of scalability, robustness to component loss, and computational cost, and thus the literature on this topic is expanding rapidly [@Olfati2007CDC; @Olfati2009CDC; @Bai2011ACC; @Carli2008SAC; @Khan2008TSP; @Kim2016CDC; @Wu2016IFAC; @WU2018Aut]. For more details on DKF, see the survey [@Mahmoud2013Survey] and references therein. Some relevant results are summarized as follows. In [@Olfati2007CDC], the author proposed scalable distributed Kalman-Bucy filtering algorithms in which each node only communicates with its neighbors. An algorithm with average consensus filters using the internal models of the signals being exchanged is proposed in [@Bai2011ACC]. It is noted that the algorithm works on a single time scale. In the work [@Wu2016IFAC], the authors proposed a continuous-time algorithm that keeps the norms of all local error covariance matrices bounded, thus overcoming a major drawback of [@Olfati2007CDC]. In [@Kim2016CDC], an algorithm with a high-gain coupling term in the error covariance matrix is introduced, and it is shown that the local error covariance matrix approximately converges to that of the steady-state centralized Kalman-filter.
An in-depth discussion of the distributed Kalman-filtering problem has been provided in [@Battistelli2015TAC; @Battistelli2016Aut], where algorithms that exchange the measurements themselves, or certain signals instead of the measurements, are proposed, respectively. Although each of the existing algorithms has its own novel ideas and advantages, to the best of the authors’ knowledge, we do not have a unified viewpoint for the DKF problem. Motivated by this, the aim of this paper is to provide a framework for the problem from the perspective of distributed optimization. We start by observing that the [*[correction]{}*]{} step of Kalman-filtering is basically an optimization problem [@Bell1993TAC; @Humpherys2010CSM; @Thrun2005Book], and then formulate the DKF problem as a consensus optimization problem, which provides a fresh look at the problem. As a result, the DKF problem can be solved by many existing distributed optimization algorithms [@Boyd+2011FTML; @Nedic+2009TAC; @Nedic+2010TAC; @Zhang2018CDC; @Dorfler2017], and various DKF algorithms can be expected to be derived. As an instance, a new DKF algorithm employing the [*[dual ascent method]{}*]{} [@Dorfler2017], one of the basic algorithms for distributed optimization problems, is provided in this paper. This paper is organized as follows. In Section \[Sec:ProblemSetting\], we recall the CKF problem from the optimization perspective, and connect the DKF problem to a distributed optimization problem. A new DKF algorithm based on the [*[dual ascent method]{}*]{} is proposed in Section \[Sec:DKF-DA\], and numerical experiments evaluating the proposed algorithm are conducted in Section \[Sec:NE\]. [**Notation**]{}: For matrices $A_1$, …, $A_n$, $\operatorname{\text{diag}}(A_1,\dots,A_n)$ denotes the block diagonal matrix composed of $A_1$ to $A_n$. For scalars $a_1$,…, $a_n$, $[a_1;\dots;a_n] := [a_1^\top,\dots,a_n^\top]^\top$, and $[A_1;\dots;A_n]$ with matrices $A_i$’s is defined similarly.
$1_n \in \mathbb{R}^n$ denotes the vector whose components are all 1, and $I_n$ is the $n \times n$ identity matrix. The maximum and minimum eigenvalues of a matrix $A$ are denoted by $\sigma_{\max}(A)$ and $\sigma_{\min}(A)$, respectively. For a random variable $x$, $x \sim \mathsf{N}(\mu, \sigma^2)$ denotes that $x$ is normally distributed with mean $\mu$ and variance $\sigma^2$, and $\mathbb{E}\{ x\}$ denotes the [*[expected value]{}*]{} of the random variable $x$, [*[i.e.,]{}*]{} $\mathbb{E}\{ x\} = \mu$. The half vectorization of a symmetric matrix $M \in \mathbb{R}^{n \times n}$ is denoted by ${\text{vec}_h({M})} \in \mathbb{R}^{n(n+1)/2}$, whose elements are filled in column-major order, $i.e., {\text{vec}_h({M})} := [M_{1,1}; \dots; M_{1,n}; M_{2,2}; \dots; M_{2,n}; \dots;$ $ M_{n-1,n-1};M_{n-1,n};M_{n,n}]$ where $M_{i,j}$ is the $(i,j)$ element of $M$, and ${\text{vec}_h^{-1}({\cdot})}$ denotes the inverse function of ${\text{vec}_h({\cdot})}$, $i.e., {\text{vec}_h^{-1}({{\text{vec}_h({M})}})} = M$. For a function $f(x, y): \mathbb{R}^{n}\times \mathbb{R}^m \rightarrow \mathbb{R}$, $\nabla_{x} f(x,y)$ denotes the gradient vector $\frac{\partial f(x,y)}{\partial x} = [\frac{\partial f(x,y)}{\partial x_1}; \dots;\frac{\partial f(x,y)}{\partial x_n}]$. [**Graph theory**]{}: For a network consisting of $N$ nodes, the communication among nodes is modeled by a graph $\mathcal{G}$. Let ${\mathcal{A}} = [a_{ij}] \in {\mathbb{R}}^{N \times N}$ be the adjacency matrix associated to ${\mathcal{G}}$, where $a_{ij}$ is the weight of the edge between nodes $i$ and $j$. If node $i$ communicates with node $j$, then $a_{ij} > 0$; otherwise $a_{ij} = 0$. We assume there are no self edges, [*i.e.*]{}, $a_{ii} = 0$. The Laplacian matrix associated to the graph $\mathcal{G}$, denoted by $L$, is the $N \times N$ matrix such that $l_{ij, i \neq j} = -a_{ij}$ and $l_{ii} = \sum_{j=1}^N a_{ij}$.
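The observation made in the introduction — that the correction step of the Kalman-filter is a quadratic optimization problem — can be verified numerically: the standard Kalman update coincides with the minimizer of the weighted least-squares cost $J(x)=(x-\bar{x})^\top P^{-1}(x-\bar{x})+(y-Hx)^\top R^{-1}(y-Hx)$. A self-contained sketch with arbitrary illustrative matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2
P = np.eye(n) + 0.1 * np.ones((n, n))  # prior covariance (symmetric positive definite)
R = 0.5 * np.eye(m)                    # measurement-noise covariance
H = rng.standard_normal((m, n))        # sensor model
x_bar = rng.standard_normal(n)         # predicted state
y = rng.standard_normal(m)             # measurement

# Kalman correction: x_hat = x_bar + K (y - H x_bar), K = P H^T (H P H^T + R)^{-1}
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
x_kalman = x_bar + K @ (y - H @ x_bar)

# Minimizer of J(x): setting the gradient to zero gives the normal equations
# (P^{-1} + H^T R^{-1} H) x = P^{-1} x_bar + H^T R^{-1} y
A_mat = np.linalg.inv(P) + H.T @ np.linalg.inv(R) @ H
b = np.linalg.inv(P) @ x_bar + H.T @ np.linalg.inv(R) @ y
x_opt = np.linalg.solve(A_mat, b)

assert np.allclose(x_kalman, x_opt)
```

The two expressions agree by the matrix inversion lemma; it is this quadratic-program form that the consensus reformulation in the next section distributes over the network.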
${\mathcal{N}}_i$ is the set of nodes communicating with node $i$, [*i.e.*]{}, ${\mathcal{N}}_i = \{j | a_{ij}>0 \}$. Distributed Kalman-filtering and Its Connection to Consensus Optimization {#Sec
--- abstract: 'We present new K- and L$''$-band imaging of a representative sample of members of the young 3$-$5Myr old $\sigma$Orionis cluster. We identified objects with $(K-L'')$ excess by analysing colour-colour diagrams and comparing the observations with empirical main-sequence colours. The derived disk frequency depends on the method used: (54$\pm$15)% if measured directly from the $JHKL''$ colour-colour diagram; or (46$\pm$14)% if excesses are computed with respect to predicted photospheric colours (according to the spectral types of the objects, 2-$\sigma$ excess detections). We compare the $(K-L'')$ excess with other indicators and show that this is a robust and reliable disk indicator. We also compare the derived disk frequency with similarly aged clusters and discuss possible implications for disk lifetimes. The computed age of the $\sigma$Ori cluster is very important: a cluster age of 3Myr would support the overall disk lifetime of 6Myr proposed in the literature, while an age $>$4Myr would point to a slower disk destruction rate.' author: - | J.M. Oliveira[^1], R.D. Jeffries and J.Th. van Loon\ School of Chemistry and Physics, Keele University, Keele, Staffordshire ST5 5BG, UK title: 'An L$''$-band survey for circumstellar disks around low-mass stars in the young $\sigma$Orionis cluster' --- \[firstpage\] circumstellar matter – infrared: stars – star: pre-main-sequence – stars: late type – open clusters and associations: individual ($\sigma$ Orionis) stars Introduction ============ Disk-like structures are believed to be ubiquitous around young protostars. These disks are dissipated very early in pre-main-sequence (PMS) evolution, perhaps by powerful stellar jets/outflows or photodissociation by the far-ultraviolet flux from nearby massive OB stars.
Despite their short lives, the timescales and mass dependence of disk dissipation have far reaching consequences in astrophysics: the efficiency of disk depletion could be the strongest factor in determining the timescales on which planets form in a particular stellar system [@haisch01], or whether they form at all [@brandner00]. Disks probably play a significant role in early angular momentum regulation and the dissipation timescale is thought to control the spread in rotation rates of young stars [@sills00]. Stars may accrete a significant fraction of their final mass from a circumstellar disk, so the timescale and mass dependence of that accretion influences PMS evolution and thus attempts to estimate ages and masses from evolutionary PMS models [@comeron03]. The mass dependence of disk frequencies can provide a stern test for low-mass stellar and brown dwarf formation theories. For instance, models involving competitive accretion and subsequent ejection of brown dwarfs from protostellar aggregates [@reipurth00; @bate03] may imply shorter disk dissipation times for the lower mass fragments. Observed disk frequencies in samples of young stars with different ages, masses and environments provide an empirical determination of disk lifetimes. Judging from L-band excesses, young clusters exhibit high disk frequencies ($\ga$80%, e.g. the Trapezium cluster: @lada00) up to ages of $\sim$1.5Myr, which then decrease rapidly with age: at $\sim$3Myr, 50% of disks have been dissipated, and the timescale for all cluster members to lose their disks may be as short as $\sim$6Myr [@haisch01]. Such timescales have been questioned by a high disk frequency in the 9Myr $\eta$Chamaeleontis cluster, a sparsely populated cluster with no massive stars [@lyo03]. $\sigma$Orionis is a Trapezium-like system with an O9.5V primary. 
The population of low-mass stars spatially clustered around this system was discovered as bright X-ray sources in ROSAT images, and follow-up optical spectroscopy confirmed most sources as PMS stars [@wolk96; @walter97]. This association is young, nearby and affected by low reddening, making it an ideal target to analyse the PMS population even down to brown dwarfs [e.g. @bejar01; @barrado03; @kenyon03] and isolated planetary mass objects [@osorio00]. Furthermore, at an age of 3$-$5Myr [e.g. @oliveira02; @osorio02; @jayawardhana03], the $\sigma$Orionis cluster is at a crucial stage in terms of disk evolution and it is therefore a key case to better constrain disk dissipation timescales. Recently, a possible proto-planetary disk, apparently in the process of being dissipated, has been discovered very close to $\sigma$Ori [@loon03]. The $K_{\rm s}$-excess disk frequency is 5$-$12% for the low-mass and brown dwarf members of the $\sigma$Orionis cluster [@oliveira02; @barrado03]. On the other hand, the presence of strong H$\alpha$ emission suggests accretion disk frequencies as high as 30% [@osorio02]. However, the most reliable method to determine the disk frequency in a low-mass population is by measuring the $(K-L)$ colours and deriving colour excesses [e.g. @wood02]. @jayawardhana03 have obtained L-band observations of 6 $\sigma$Ori cluster members, finding two with a $(K-L)$ excess. The significance of this result is obviously limited by the size of the sample. We have performed L$'$-band (3.8$\mu$m) observations of a representative sample of 28 cluster members, using the newly installed imager UIST at the United Kingdom Infrared Telescope (UKIRT). Young stars are well known for their variability across the spectrum including infrared (IR) wavelengths [@carpenteretal01; @carpenter02], therefore we have obtained nearly simultaneous K-band observations for all our targets.
In this paper, we describe the results of this survey, and discuss our derived disk frequency within the framework of disk destruction timescales by comparing with similar surveys in other young clusters. Cluster members and properties ============================== Sample of cluster members ------------------------- We have an on-going program to observe in the L-band $\sigma$Ori cluster members identified at optical wavelengths. We describe here the observations of 28 of the brightest cluster members: their positions, $I_{\rm c}$ magnitudes, 2MASS (Two Micron All Sky Survey) $J, H$ and $K_{\rm s}$ magnitudes, the new K- and L$'$-band magnitudes and identifications are listed in Table\[obs\_table\]. Some sources were first identified as ROSAT X-ray sources and photometric cluster candidates by @wolk96 [ W96] while other objects are photometric candidates identified by @bejar01 [ B01]. @osorio02 [ ZO02] have spectroscopically confirmed cluster membership for both these sets of objects. The remaining objects are spectroscopic cluster members found by @kenyon03 [ K03]. The I-band magnitudes are from either @bejar01 or @kenyon03. The brighter objects in the sample are mostly from the X-ray selected sample [@wolk96] while the fainter objects were photometrically and spectroscopically selected. In Sect.5.1 we discuss the effects of possible selection biases on our results. Searching for circumstellar disks around these 24 objects is the main goal of these observations. To this sample we have added 4 objects that are known IRAS sources in the region (no reference entry in Table\[obs\_table\]). They have been confirmed by @oliveira03b (see also @oliveira03a) as mid-infrared sources with spectral energy distributions (SEDs) consistent with them being young stars with dusty circumstellar disks; thus, based on their location, youth and infrared excesses they are also likely to be members of the $\sigma$Ori cluster. 
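The 2-$\sigma$ excess criterion mentioned in the abstract can be sketched in a few lines of code (a toy illustration of ours, not the survey's actual pipeline; the helper function, magnitudes and uncertainties below are invented):

```python
# Flag a (K - L') excess at the n-sigma level: an object counts as a disk
# candidate when its observed colour exceeds the photospheric colour predicted
# for its spectral type by more than n times the combined uncertainty.
def has_excess(K, L, colour_phot, sigma_K, sigma_L, sigma_phot, nsigma=2.0):
    excess = (K - L) - colour_phot
    sigma = (sigma_K**2 + sigma_L**2 + sigma_phot**2) ** 0.5
    return excess > nsigma * sigma

# Invented numbers: K = 9.73, L' = 8.64, predicted photospheric (K - L') = 0.15
print(has_excess(9.73, 8.64, 0.15, 0.04, 0.04, 0.05))  # -> True
```

With per-band errors of a few hundredths of a magnitude, the combined uncertainty is $\sim$0.08 mag, so any colour excess above $\sim$0.15 mag qualifies as a 2-$\sigma$ detection.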
[Table obs\_table: running number, RA, Dec, $I_{\rm c}$, 2MASS $J$, $H$, $K_{\rm s}$, new $K$ and L$'$ magnitudes with errors, identification (e.g. TXOri, 4771-899, V505Ori, Haro5-39) and reference (W96, ZO02) for each target; the table is truncated after the fifth entry in this extraction.]
--- abstract: 'In this paper we are interested in parallels to the classical notions of special subsets in ${\mathbb{R}}$ defined in the generalized Cantor and Baire spaces ($2^\kappa$ and $\kappa^\kappa$). We consider generalizations of well-known classes of special subsets, like Lusin sets, strongly null sets, concentrated sets, perfectly meagre sets, $\sigma$-sets, $\gamma$-sets, sets with the Menger, Rothberger or Hurewicz property, but also of some less-known classes like $X$-small sets, meagre additive sets, Ramsey null sets, $T''$-sets, Marczewski, Silver, Miller and Laver-null sets. We also show some relations between those classes.' author: - 'Michał Korch, Tomasz Weiss' title: 'Special subsets of the generalized Cantor space $2^\kappa$ and generalized Baire space $\kappa^\kappa$' --- Introduction and preliminaries ============================== Many classical notions of special subsets of $2^\omega$ can be generalized to the case of the generalized Cantor space $2^\kappa$. In this paper we study those classes of sets in this setting. In some cases, when it seems more appropriate, we also study such classes in the generalized Baire space $\kappa^\kappa$. It turns out that many of the properties of subsets of $2^\omega$ or of $\omega^\omega$ can be easily proved in $2^\kappa$ or $\kappa^\kappa$, although sometimes one has to use some additional set-theoretic assumptions. Next we deal with less common classes of small sets in $2^\kappa$ and $\kappa^\kappa$. Special subsets of the real line {#intro-special} -------------------------------- In the theory of special subsets of the real line we deal with sets which are very small. We recall below some notions which will be generalized later in this paper. ### Special subsets related to category Among classes of special subsets of the real line, the class of perfectly meager sets plays an important role.
A set is [[**perfectly meager**]{}]{} if it is meager relative to any perfect set, and we denote it by ${\boldsymbol{\text{P}{{\mathcal{M}}}}}$ (this concept first appeared in [@nl:pb]). A set $A$ is called [[**strongly null**]{}]{} (strongly of measure zero) if for any sequence of positive $\varepsilon_{n}>0$, there exists a sequence of open sets $\left<A_{n}\right>_{n\in\omega}$, with ${\text{diam}}A_{n}<{\varepsilon}_{n}$ for $n\in\omega$, and such that $A\subseteq\bigcup_{n\in\omega}A_{n}$. We denote the class of such sets by ${\boldsymbol{\text{S}{{\mathcal{N}}}}}$. The idea was introduced for the first time in [@eb:cemn], and [[**Borel conjectured**]{}]{} that all ${\boldsymbol{\text{S}{{\mathcal{N}}}}}$ sets are countable. This hypothesis turned out to be independent of ZFC (see [@rl:cbc]). It is easy to see that a set $A$ is strongly null if and only if for any sequence of positive $\varepsilon_{n}>0$, there exists a sequence of open sets $\left<A_{n}\right>_{n\in\omega}$, with ${\text{diam}}A_{n}<{\varepsilon}_{n}$ for $n\in\omega$, and such that $$A\subseteq\bigcap_{m\in\omega}\bigcup_{n>m}A_{n}.$$ Galvin, Mycielski and Solovay (in [@fgjmrs:smzs]) proved that a set $A\subseteq 2^{\omega}$ is in ${\boldsymbol{\text{S}{{\mathcal{N}}}}}$ if and only if for any meagre set $B$, there exists $t\in 2^{\omega}$ such that $A\cap(B+t)={\varnothing}$. We shall say that a set $L\subseteq 2^{\omega}$ is a [[**$\kappa$-Lusin set**]{}]{} (for $\omega<\kappa\leq 2^\omega$) if for any meagre set $X$, $|L\cap X|<\kappa$, but $|L|\geq \kappa$. An $\aleph_1$-Lusin set is simply called a [[**Lusin set**]{}]{}. This idea was introduced independently in [@nl:pb] and [@pm:tkm]. The existence of a Lusin set is independent of ZFC. It is easy to see that under CH such a set exists. Indeed, enumerate all closed nowhere dense sets and inductively take a point from the complement of each such set distinct from all the points chosen so far.
The same can be easily done if ${\text{cov}}({\boldsymbol{{{\mathcal{M}}}}})={\text{cof}}({\boldsymbol{{{\mathcal{M}}}}})=\aleph_1$ (see e.g. [@lb:srl]). A set $A$ is called [[**meagre-additive**]{}]{} ($A\in {\boldsymbol{{{\mathcal{M}}}}}^{*}$) if for any meagre set $X$, $A+X$ is meagre (see e.g. [@tw:manascs] and [@tbhj:stsrl]). The following [[**characterization of meagre-additive sets**]{}]{} is well-known. A set $X\in {\boldsymbol{{{\mathcal{M}}}}}^*$ ([@tbhj:stsrl]\[Theorem 2.7.17\]) if and only if for every increasing $f\in \omega^\omega$, there exist $g\in \omega^\omega$ and $y\in 2^\omega$ such that for all $x\in X$, there exists $m\in \omega$ such that for every $n>m$, there exists $k_n\in\omega$ with $g(n)\leq f(k_n)<f(k_n+1)\leq g(n+1)$ and such that $$x{\mathord{\upharpoonright}}[f(k_n), f(k_n+1))=y{\mathord{\upharpoonright}}[f(k_n), f(k_n+1)).$$ ### Trees Fix any set $A$ and an ordinal number $\xi$. Given a sequence $t\in A^\alpha$ with $\alpha<\xi$, we denote $\alpha={\text{len}}(t)$. A set $T\subseteq A^{<\xi}$ will be called a [[**tree**]{}]{} if for all $t\in T$ and $\alpha< {\text{len}}(t)$, $t{\mathord{\upharpoonright}}\alpha\in T$ as well. A branch in a tree is a maximal chain in it. For a tree $T\subseteq A^{<\xi}$, let $$[T]=\{x\in A^\xi\colon \forall_{\alpha<\xi} x{\mathord{\upharpoonright}}\alpha\in T\}.$$ A node $s\in T\subseteq A^{<\xi}$ is called a [[**branching point**]{}]{} of $T$ if $s^\frown a,s^\frown b\in T$ for some distinct $a,b\in A$. The set of all branching points of a tree $T$ is denoted by ${\text{Split}}(T)$. For $\alpha<\xi$, $t\in {\text{Split}}_\alpha(T)$ if $\langle\{s\subsetneq t\colon s\in {\text{Split}}(T)\},\subseteq\rangle$ is order isomorphic with $\alpha$. A tree $T\subseteq A^{<\xi}$ is [[**perfect**]{}]{} if for any $t\in T$, there exists $s\in T$ such that $t\subseteq s$ and $s\in{\text{Split}}(T)$. A tree $T\subseteq A^{<\xi}$ is pruned if every maximal chain in it has length $\xi$. 
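For finite trees these notions are directly computable; the following sketch (our own illustrative encoding, with nodes represented as tuples) implements the definitions of a tree and of its branching points:

```python
# Finite sketch of the tree notions above: a tree over a set A is a set of
# finite sequences (tuples) closed under initial segments; a branching point
# is a node with at least two immediate one-step extensions in the tree.

def is_tree(T):
    """Check closure under initial segments (restrictions)."""
    return all(t[:k] in T for t in T for k in range(len(t)))

def branching_points(T):
    """Nodes with at least two distinct one-step extensions in T."""
    return {s for s in T
            if len({t[len(s)] for t in T
                    if len(t) == len(s) + 1 and t[:len(s)] == s}) >= 2}
```

On the full binary tree of height 2, for instance, the branching points are exactly the nodes of length at most 1.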
Notice that a set $C\subseteq A^\omega$ is closed if and only if $C=[T]$ for a pruned tree $T\subseteq A^{<\omega}$. We denote such a tree by $T_C$. Moreover, a set $P\subseteq 2^\omega$ is perfect if and only if $T_P$ is a perfect tree. Notice also that a closed set $C\subseteq \omega^\omega$ is compact if and only if there exists a sequence $\langle n_i\rangle_{i\in\omega}$ such that if $x\in C$, then $x(i)<n_i$ for all $i\in\omega$. A perfect tree $T\subseteq A^{<\omega}$ is called a [[**Silver perfect tree**]{}]{} if $$\forall_{w,v\in T}\left( {\text{len}}(v)={\text{len}}(w)\Rightarrow\forall_{j\in A}(w^\frown j\in T\Rightarrow v^\frown j\in T)\right).$$ A perfect tree $T\subseteq \omega^{<\omega}$ is called a [[**Laver perfect tree**]{}]{} if there exists $s\in T$ such that for all $t\in T$, either
[EUROPEAN LABORATORY FOR PARTICLE PHYSICS ]{} CERN-EP/98-091\ 4 June 1998 [ **Inclusive Production of Charged Hadrons and $\ks$ Mesons in Photon-Photon Collisions** ]{} [The OPAL Collaboration ]{} [Abstract]{} The production of charged hadrons and $\ks$ mesons in the collisions of quasi-real photons has been measured using the OPAL detector at LEP. The data were taken at $\ee$ centre-of-mass energies of $161$ and $172$ GeV. The differential cross-sections as a function of the transverse momentum and the pseudorapidity of the charged hadrons and $\ks$ mesons have been compared to the leading order Monte Carlo simulations of PHOJET and PYTHIA and to perturbative next-to-leading order (NLO) QCD calculations. The distributions have been measured in the range $10<W<125$ GeV of the hadronic invariant mass $W$. By comparing the transverse momentum distribution of charged hadrons measured in $\gg$ interactions with $\gamma$-proton and meson-proton data we find evidence for hard photon interactions in addition to the purely hadronic photon interactions. 
[(submitted to European Physics Journal C) ]{} [The OPAL Collaboration ]{} [ K.Ackerstaff$^{ 8}$, G.Alexander$^{ 23}$, J.Allison$^{ 16}$, N.Altekamp$^{ 5}$, K.J.Anderson$^{ 9}$, S.Anderson$^{ 12}$, S.Arcelli$^{ 2}$, S.Asai$^{ 24}$, S.F.Ashby$^{ 1}$, D.Axen$^{ 29}$, G.Azuelos$^{ 18, a}$, A.H.Ball$^{ 17}$, E.Barberio$^{ 8}$, R.J.Barlow$^{ 16}$, R.Bartoldus$^{ 3}$, J.R.Batley$^{ 5}$, S.Baumann$^{ 3}$, J.Bechtluft$^{ 14}$, T.Behnke$^{ 8}$, K.W.Bell$^{ 20}$, G.Bella$^{ 23}$, S.Bentvelsen$^{ 8}$, S.Bethke$^{ 14}$, S.Betts$^{ 15}$, O.Biebel$^{ 14}$, A.Biguzzi$^{ 5}$, S.D.Bird$^{ 16}$, V.Blobel$^{ 27}$, I.J.Bloodworth$^{ 1}$, M.Bobinski$^{ 10}$, P.Bock$^{ 11}$, J.Böhme$^{ 14}$, M.Boutemeur$^{ 34}$, S.Braibant$^{ 8}$, P.Bright-Thomas$^{ 1}$, R.M.Brown$^{ 20}$, H.J.Burckhart$^{ 8}$, C.Burgard$^{ 8}$, R.Bürgin$^{ 10}$, P.Capiluppi$^{ 2}$, R.K.Carnegie$^{ 6}$, A.A.Carter$^{ 13}$, J.R.Carter$^{ 5}$, C.Y.Chang$^{ 17}$, D.G.Charlton$^{ 1, b}$, D.Chrisman$^{ 4}$, C.Ciocca$^{ 2}$, P.E.L.Clarke$^{ 15}$, E.Clay$^{ 15}$, I.Cohen$^{ 23}$, J.E.Conboy$^{ 15}$, O.C.Cooke$^{ 8}$, C.Couyoumtzelis$^{ 13}$, R.L.Coxe$^{ 9}$, M.Cuffiani$^{ 2}$, S.Dado$^{ 22}$, G.M.Dallavalle$^{ 2}$, R.Davis$^{ 30}$, S.De Jong$^{ 12}$, L.A.del Pozo$^{ 4}$, A.de Roeck$^{ 8}$, K.Desch$^{ 8}$, B.Dienes$^{ 33, d}$, M.S.Dixit$^{ 7}$, M.Doucet$^{ 18}$, J.Dubbert$^{ 34}$, E.Duchovni$^{ 26}$, G.Duckeck$^{ 34}$, I.P.Duerdoth$^{ 16}$, D.Eatough$^{ 16}$, P.G.Estabrooks$^{ 6}$, E.Etzion$^{ 23}$, H.G.Evans$^{ 9}$, F.Fabbri$^{ 2}$, A.Fanfani$^{ 2}$, M.Fanti$^{ 2}$, A.A.Faust$^{ 30}$, F.Fiedler$^{ 27}$, M.Fierro$^{ 2}$, H.M.Fischer$^{ 3}$, I.Fleck$^{ 8}$, R.Folman$^{ 26}$, A.Fürtjes$^{ 8}$, D.I.Futyan$^{ 16}$, P.Gagnon$^{ 7}$, J.W.Gary$^{ 4}$, J.Gascon$^{ 18}$, S.M.Gascon-Shotkin$^{ 17}$, C.Geich-Gimbel$^{ 3}$, T.Geralis$^{ 20}$, G.Giacomelli$^{ 2}$, P.Giacomelli$^{ 2}$, V.Gibson$^{ 5}$, W.R.Gibson$^{ 13}$, D.M.Gingrich$^{ 30, a}$, D.Glenzinski$^{ 9}$, J.Goldberg$^{ 22}$, W.Gorn$^{ 4}$, C.Grandi$^{ 2}$, E.Gross$^{ 26}$, 
J.Grunhaus$^{ 23}$, M.Gruwé$^{ 27}$, G.G.Hanson$^{ 12}$, M.Hansroul$^{ 8}$, M.Hapke$^{ 13}$, C.K.Hargrove$^{ 7}$, C.Hartmann$^{ 3}$, M.Hauschild$^{ 8}$, C.M.Hawkes$^{ 5}$, R.Hawkings$^{ 27}$, R.J.Hemingway$^{ 6}$, M.Herndon$^{ 17}$, G.Herten$^{ 10}$, R.D.Heuer$^{ 8}$, M.D.Hildreth$^{ 8}$, J.C.Hill$^{ 5}$, S.J.Hillier$^{ 1}$, P.R.Hobson$^{ 25}$, A.Hocker$^{ 9}$, R.J.Homer$^{ 1}$, A.K.Honma$^{ 28, a}$, D.Horváth$^{ 32, c}$, K.R.Hossain$^{ 30}$, R.Howard$^{ 29}$, P.Hüntemeyer$^{ 27}$, P.Igo-Kemenes$^{ 11}$, D.C.Imrie$^{ 25}$, K.Ishii$^{ 24}$, F.R.Jacob$^{ 20}$, A.Jawahery$^{ 17}$, H.Jeremie$^{ 18}$, M.Jimack$^{ 1}$, A.Joly$^{ 18}$, C.R.Jones$^{ 5}$, P.Jovanovic$^{ 1}$, T.R.Junk$^{ 8}$, D.Karlen$^{ 6}$, V.Kartvelishvili$^{ 16}$, K.Kawagoe$^{ 24}$, T.Kawamoto$^{ 24}$, P.I.Kayal$^{ 30}$, R.K.Keeler$^{ 28}$, R.G.Kellogg$^{ 17}$, B.W.Kennedy$^{ 20}$, A.Klier$^{ 26}$, S.Kluth$^{ 8}$, T.Kobayashi$^{ 24}$, M.Kobel$^{ 3, e}$, D.S.Koetke$^{ 6}$, T.P.Kokott$^{ 3}$, M.Kolrep$^{ 10}$, S.Komamiya$^{ 24}$, R.V.Kowalewski$^{ 28}$, T.Kress$^{ 11}$, P.Krieger$^{ 6}$, J.von Krogh$^{ 11}$, P.Kyberd$^{ 13}$, G.D.Lafferty$^{ 16}$, D.Lanske$^{ 14}$, J.Lauber$^{ 15}$, S.R.Lautenschlager$^{ 31}$, I.Lawson$^{ 28}$, J.G.Layter$^{ 4}$, D.Lazic$^{ 22}$, A.M.Lee$^{ 31}$, E.Lefebvre$^{ 18}$, D.Lellouch$^{ 26}$, J.Letts$^{ 12}$, L.Levinson$^{ 26}$, R.Liebisch$^{ 11}$, B.List$^{ 8}$, C.Littlewood$^{ 5}$, A.W.Lloyd$^{ 1}$, S.L.Lloyd$^{ 13}$, F.K.Loebinger$^{ 16}$, G.D.Long$^{ 28}$, M.J.Losty$^{ 7}$, J.Ludwig$^{ 10}$, D.Lui$^{ 12}$, A.Macchiolo$^{ 2}$, A.Macpherson$^{ 30}$, M.Mannelli$^{ 8}$, S.Marcellini$^{ 2}$, C.Markopoulos$^{ 13}$, A.J.Martin$^{ 13}$, J.P.Martin$^{ 18}$, G.Martinez$^{ 17}$, T.Mashimo$^{ 24}$, P.Mättig$^{ 26}$, W.J.McDonald$^{ 30}$, J.McKenna$^{ 29}$, E.A.Mckigney$^{ 15}$, T.J.McMahon$^{ 1}$, R.A.McPherson$^{ 28}$, F.Meijers$^{ 8}$, S.Menke$^{ 3
--- abstract: 'We exploit a slightly noncollinear second-harmonic cross-correlation scheme to map the 3D space-time intensity distribution of an unknown complex-shaped ultrashort optical pulse. We show the capability of the technique to reconstruct both the amplitude and the phase of the field through the coherence of the nonlinear interaction, down to a resolution of 10 $\mu$m in space and 200 fs in time. This implies that the concept of second-harmonic holography can be employed down to the sub-ps time scale, and we use it to discuss the features of the technique in terms of the reconstructed fields.' address: - | Department of Physics “Aldo Pontremoli”, University of Milan and\ Istituto Nazionale Fisica della Materia, Via Celoria 16, I-20133 Milano, Italy - 'Instituto di Ciencias Fotonicas c/Jordi Girona, 29 - NEXUS II E-08034 Barcelona, Spain' - 'Istituto Nazionale Fisica della Materia, Dipartimento di Scienze Chimiche, Fisiche, Matematiche, Università dell’Insubria, Via Valleggio 11, I-22100 Como, Italy' - 'Department of Quantum Electronics, Vilnius University, Sauletekio 9, building III, LT-2040 Vilnius, Lithuania' author: - 'Marco A.C. Potenza' - Stefano Minardi - Jose Trull - Gianni Blasi - Domenico Salerno - Paolo Di Trapani - Arunas Varanavičius - Algis Piskarskas title: Three-dimensional imaging of short pulses --- $^{\dagger}$ Holography; Non-linear optics; Cross-correlation of ultrashort pulses Introduction ============ The study of ultrafast phenomena has been a major scientific priority during the last decades, covering different topics such as the study of radiation-matter interactions [@hullier], the transient response of molecules and atoms [@zewail], the coherent control of chemical reactions [@Baumert], and communication and information technology [@stegeman]. The growth of this field relies upon the development of sources of femtosecond radiation and of appropriate techniques able to provide time-domain information on the femtosecond scale. 
However, during the interaction of short optical pulses with a nonlinear medium, different mechanisms can lead to their reshaping into complex [*spatio-temporal*]{} structures with non-trivial light distribution [@PDT00]. As a consequence, their complete characterization requires a method capable of acquiring a snapshot of their intensity distribution in the whole 3-dimensional (3D; $x,y,t-z/c$) comoving frame. Most of the available methods for pulse diagnostics provide information on the WP characteristics in a space of reduced dimensionality. The use of frequency-resolved autocorrelation techniques (e.g., FROG, SPIDER) allows for the recovery of the temporal intensity and phase profile of a given pulse, but assumes a uniform transverse spatial distribution [@Trebino; @Jaconis]. On the contrary, the characterization of transversally localized beams often relies upon optical imaging onto CCD cameras, so that the temporal information is lost because the integration times are unavoidably larger than the optical pulse duration. Recently, a space-time characterization method based on an extended SPIDER technique has been developed, capable of resolving the electric field characteristics in time and along one spatial coordinate [@dorrer]. A quite direct way of obtaining spatio-temporal intensity profiles of a WP is to perform measurements with a streak camera, which allows a temporal resolution down to fractions of a ps [@streak]. This technique allowed the investigation of the dynamics of the breakup along the pulse envelope of a large elliptical beam propagating in a saturable Kerr nonlinear medium [@Lantz02a; @Lantz02b]. However, also in this case, the space-time maps are intrinsically two-dimensional (one spatial plus the temporal dimension). A different approach to the problem considers the retrieval of the pulse shape through all-optical processing by means of spatially resolved detection systems combined with gating techniques. 
The principle of the method is to characterize with spatial resolution an optical field that is proportional to the product $E_O({\mathbf x},t)E_R({\mathbf x},t)$, where $E_O({\mathbf x},t)$ is the object to be measured and $E_R({\mathbf x},t)$ is a suitable reference pulse. Since the product is different from zero only on the intersection of the supports of the two fields[^1], by translating the reference with respect to the object we get the possibility of recording information from different parts of the object. Among the linear time-gating techniques, light–in–flight holographic recording was the first technique which permitted the recording of dynamically evolving light fields during propagation [@Denisyuk69; @Abramson83; @Abramson89]. Recently, this technique has been adapted to study the propagation of a 3 ps long pulse in linear media [@Kubota02]. Linear probing techniques were also exploited to obtain time-resolved imaging, like the probing of the birefringence properties of a plasma by means of delayed, spatially extended 100 fs pulses to investigate the dynamics of laser pulse focusing in air [@Fujimoto99]. Nonlinear processes have been employed for a long time to resolve in time the evolution of ultrafast phenomena. Among them, the quadratic nonlinearity has proved particularly versatile because it readily provides terms containing the product of two optical fields. Recently, a type II degenerate parametric amplification scheme was employed to obtain time-resolved 2D images of a ps pulse hitting a diffusing screen with 35 ps resolution [@Devaux95], thus yielding 3D imaging. The same technique was later used to image an object embedded in a thick diffusing sample [@Devaux99]. Although our setup is actually an improved version of that described in [@Devaux99], we point out that our conceptual approach is different from the study of the propagation of a wave front. 
In fact, in our case the propagation variable is fixed. Our goal in this article is to demonstrate the potential of the optical gating technique to acquire a high-resolution space-time map of short, focused WPs in their comoving reference frame. Furthermore, we show that the technique is capable of reconstructing both the amplitude and the phase of the WP thanks to the coherence of the nonlinear interaction. We propose a method that is based on quadratic type I interaction in a sum-frequency generation scheme, either by non-collinear second-harmonic generation or by a collinear sum-frequency scheme. The latter has been used in [@jose]. Here we discuss the first option, showing that if the interaction angle between the two interacting fields is small enough, then a reliable space-time map of the object pulse can be obtained. This can be achieved if the duration of the gate is much smaller than that of the object to be imaged. A holographic interpretation of the method makes it possible to gain insight into the process of up-conversion of the space-time slices of the object into the SF field, and to prove that the coherence of the SF process is able to reconstruct the wavefront in both amplitude and phase. Our results confirm this possibility. The theoretical discussion of the method is followed by section \[experiment\], where we present the set-up and the experimentally reconstructed space-time intensity profiles of a parametric spatial soliton excited by a 1 ps light pulse. For our setting, we estimate a mapping resolution of 200 fs in time and about 10 $\mu$m in space. The features of the technique are presented in section \[features\], pointing out the limitations that may arise and discussing possible implementations in each case. In the last section the main conclusions are presented. 
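Before turning to the formalism, the time-gating principle described above can be illustrated with a simple one-dimensional numerical sketch (all parameters are illustrative, not the experimental ones): a short reference gate samples the object intensity as the relative delay is scanned.

```python
import numpy as np

# Time axis (arbitrary units). The object pulse has temporal structure;
# the reference gate is much shorter, as the method requires.
t = np.linspace(-5.0, 5.0, 2001)
E_obj = np.exp(-t**2 / 2.0) * (1.0 + 0.5 * np.cos(4.0 * t))

def gated_signal(tau, gate_width=0.1):
    """Energy of the product field E_obj(t) * E_ref(t - tau), the quantity
    the sum-frequency signal is proportional to for a short gate."""
    E_ref = np.exp(-(t - tau)**2 / (2.0 * gate_width**2))
    return np.sum(np.abs(E_obj * E_ref)**2)

delays = np.linspace(-4.0, 4.0, 81)
trace = np.array([gated_signal(tau) for tau in delays])
# 'trace' approximates |E_obj|^2 sampled at the delays, smeared by the gate.
```

Scanning the delay thus maps the object intensity profile; the spatial coordinates are recovered in the same way by imaging the transverse profile of the up-converted field.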
Description of the technique: intensity and field reconstruction {#crosscorrelation} ================================================================ In this section we explicitly show how a non-collinear sum-frequency (SF) scheme can be exploited to get high-resolution space-time intensity maps of an unknown light wave packet with a space-time structure (object wave). The recovery of a 3D intensity map is obtained by means of a short reference pulse which provides a time gating inside a nonlinear (NL) crystal, and generates a SF signal containing the information about a set of 2D slices of the object, obtained by changing the reference delay. We first discuss the reconstruction of the object intensity profile, and then we show how the intrinsic coherence of the SF process allows in fact for a truly holographic recording of the unknown object. 3D Intensity profile mapping ---------------------------- Let us denote the object ($\bar{E}_O$) and reference ($\bar{E}_R$) wave packets as follows: $$\begin{aligned} \bar{E}_O&=& E_O(x,y,z,t)e^{i[\omega_1 t-k_z(\omega_1)z-k_x(\omega_1)x]}+c.c.\\ \bar{E}_R&=& E_R(x,y,z,t)e^{i[\omega_2 t-k_z(\omega_2)z+k_x(\omega_2)x]}+c.c.\end{aligned}$$ where the complex functions $E_O(x,y,z,t)$ and $E_R(x,y,z,t)$ are the slowly varying envelopes of two waves with frequencies $\omega_1$ and $\omega_2$. Note that in this form the equations describe two wavepackets propagating in the positive $z$ direction and colliding at an angle $\
--- abstract: 'Vacuum instability of a strong electromagnetic field has been discussed for a long time. The instability of a strong electric field due to the creation of electron pairs is one of the examples, known as the Schwinger process. What matters are the coupling of particles to the electromagnetic field and the mass of the particle to be produced. The critical electric field for electrons in the minimal coupling is $E_c \sim \frac{m^2}{e}$. Neutral spin-1/2 particles with magnetic dipole moments can interact with the electromagnetic field through the Pauli coupling. The instability of a particular vacuum under a strong magnetic field can be formulated as the emergence of imaginary parts of the effective potential. In this talk, the development of the imaginary part of the effective potential as a function of the magnetic field strength is discussed for the configurations of a uniform magnetic field and of an inhomogeneous magnetic field. Neutrinos are the lightest particles (if not the photon or gluon) in the “standard model", whose electromagnetic properties are poorly known experimentally. Recently, the observation of neutrino oscillations has shown the necessity of neutrino masses. This implies that the standard model must be modified, and a non-trivial electromagnetic structure of the neutrino should be reconsidered, even though neutrinos are assigned to be neutral. The possibility of an anomalous electromagnetic form factor is an open question, both theoretically and experimentally. In this talk, the implications of a non-vanishing magnetic dipole moment of neutrinos are also discussed: the instability of a strong magnetic field and the enhancement of neutrino production in high-energy collider experiments.' 
address: | Department of Physics, Hanyang University\ Seoul, 04763, Korea\ $^*$E-mail: hyunkyu@hanyang.ac.kr author: - 'Hyun Kyu Lee$^*$' title: Instability of strong magnetic field and neutrino magnetic dipole moment --- Introduction ============ There are various indications of ultrastrong magnetic fields, $B > B_c \sim 10^{13}$ G, from astrophysical observations and terrestrial accelerator experiments, and there have been continuous efforts to find definite evidence for them in Nature. Among the examples considered so far are the strong magnetic fields of magnetars and of gamma-ray burst (GRB) central engines, and also those created in noncentral heavy-ion collisions. Magnetars[@magnetar] are considered to be neutron stars where strong magnetic fields of typically $10^{13} \sim 10^{15}$ G are the main source of energy. Even stronger fields are expected inside magnetars. One of the viable models of the GRB central engine makes use of the idea of tapping the rotational energy of a black hole by a strong magnetic field[@LBW]. The field is estimated to be $\sim 10^{15}$ G, particularly for long and energetic bursts. In noncentral heavy-ion collisions, strong magnetic fields can also be created by the two electric currents in opposite directions generated by the two colliding nuclei. It is expected that the magnetic fields in RHIC Au + Au collisions and LHC Pb + Pb collisions can be as large as $10^{19}$ G and $10^{20}$ G, respectively[@Huang]. These field strengths are much stronger than the critical magnetic field and offer an interesting opportunity to study the effects of super-strong electromagnetic fields beyond classical electrodynamics and beyond the standard model of the electro-weak interaction. Given such strong electromagnetic fields, one of the interesting questions is the stability of the field configuration, or equivalently whether there is any instability which leads to particle production. 
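For orientation, the critical-field scale referred to above can be evaluated from the standard expressions $E_c = m_e^2c^3/(e\hbar)$ and $B_c = m_e^2c^2/(e\hbar)$ for the electron (a numerical sketch in SI units, using CODATA constants):

```python
# Critical (Schwinger) field strengths for the electron with minimal
# coupling, evaluated from physical constants; B_c ~ 4.4e13 G sets the
# scale quoted for magnetars and heavy-ion collisions.
m_e  = 9.1093837015e-31   # electron mass, kg
c    = 2.99792458e8       # speed of light, m/s
e    = 1.602176634e-19    # elementary charge, C
hbar = 1.054571817e-34    # reduced Planck constant, J s

E_c = m_e**2 * c**3 / (e * hbar)   # ~1.3e18 V/m
B_c = m_e**2 * c**2 / (e * hbar)   # ~4.4e9 T
B_c_gauss = B_c * 1.0e4            # 1 T = 1e4 G
```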
Vacuum instability of a strong electromagnetic field has been discussed for a long time. The instability of a strong electric field due to the creation of electron pairs is one of the examples, known as the Schwinger process[@schwinger; @Kim]. What matters are the coupling of particles to the electromagnetic field and the mass of the particle to be produced. The critical electric field for the electron is $E_c \sim \frac{m^2}{e}$. If it were possible to imagine a charged particle lighter than the electron, the critical field could be lowered to the field strengths produced by future high-intensity lasers and would thus be subject to test in the laboratory. The pair production of fermions in a purely magnetic field configuration is shown to be absent[@dunnehall]. Therefore, the pair production of minimally interacting particles is considered to be a purely electric effect. However, while the minimal coupling derived from local gauge invariance is of fundamental nature, non-minimal couplings also appear in the form of effective theories. Pauli introduced a non-minimal coupling of spin-1/2 particles with electromagnetic fields, which can be interpreted as an effective interaction of fermions with an anomalous magnetic moment[@Pauli]. Hence, for neutral fermions, which have no minimal coupling to electromagnetic fields, a non-vanishing magnetic moment may be the primary window through which their electromagnetic interaction can be probed, via the Pauli interaction. It is known that a spatial inhomogeneity of the magnetic field exerts a force on the magnetic dipole moment through the Pauli interaction. It plays a role analogous to that of the electric field in the creation of charged particle pairs with the minimal coupling. The possibility of pair production of neutral fermions in a purely magnetic field configuration with spatial inhomogeneity has been demonstrated in 2+1 dimensions[@lin]. 
The production rate in 3+1 dimensions has been calculated explicitly for magnetic fields with a spatial inhomogeneity[@LY1; @Gitman], and can be approximated as $$\sim m^4\, e^{-a\, m^2/|B'|},$$ analogous to the Schwinger process. The instability of a particular vacuum under a strong magnetic field can be formulated as the emergence of imaginary parts of the effective potential. For uniform magnetic fields which interact with spin-1/2 fermions through the Pauli interaction[@LY2], it is found that a non-vanishing imaginary part develops for a magnetic field stronger than the critical field $B_c$, whose strength is the ratio of the fermion mass to its magnetic moment, $B_c = \frac{m}{\mu}$: $$\mathrm{Im}(V_{\mathrm{eff}}) \propto \left(\frac{\mu B}{m}-1\right)^3 \left(\frac{\mu B}{m}+3\right).$$ In section 2, the calculations of the effective potential and vacuum decay rates for a neutral fermion with Pauli coupling to the electromagnetic field are reviewed. The implications of a non-vanishing magnetic dipole moment of neutrinos, namely the instability of a strong magnetic field and the neutrino production through the Pauli coupling in high-energy collider experiments, are discussed in section 3. Imaginary part of effective potential and pair production ========================================================= The instability of the electro-weak vacuum in a strong magnetic field was discussed a long time ago[@AHN]. The one-loop effective potential including the weak-boson ($W$) loop is found to have an imaginary part in a purely magnetic background, $$\mathrm{Im}(V_{\mathrm{eff}}) \propto B^2\left(1-\frac{m_W^2}{eB}\right)\theta\!\left(eB - m_W^2\right), \label{ImVW}$$ where $m_W$ is the mass of the $W$ boson. In the limit $m_W \rightarrow 0$, eq. (\[ImVW\]) agrees with the effective potential given by Nielsen and Olesen [@NO]. It was argued that the instability could be avoided if a condensation of $W$ and $Z$ bosons[@AHN] appears in the strong magnetic field. 
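The critical field $B_c = m/\mu$ quoted above corresponds to a level crossing of the one-particle energy, which can be checked numerically. A minimal sketch follows (illustrative units, assuming the standard dispersion relation $E = \sqrt{p_l^2 + (\sqrt{p_t^2+m^2}-\mu B)^2}$ for a neutral fermion with Pauli coupling in a uniform field):

```python
import math

def pauli_energy(p_l, p_t, m, mu, B):
    # Energy of a neutral spin-1/2 fermion with magnetic moment mu in a
    # uniform field B (the branch that reaches zero at the critical field).
    return math.sqrt(p_l**2 + (math.sqrt(p_t**2 + m**2) - mu * B)**2)

m, mu = 1.0, 0.5   # illustrative units
B_c = m / mu       # critical field: the ground state reaches E = 0
```

At $B = B_c$ the state with $p_l = p_t = 0$ has exactly zero energy, while any momentum or weaker field lifts it above zero.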
The basic reason for the emergence of an imaginary part for $eB > m_W^2$ is that the energy eigenvalue crosses zero at $eB = m_W^2$ because of the anomalous magnetic moment of the $W$ boson. One can see that there is also a level crossing of the energy eigenvalue of a neutrino with the Pauli interaction for a strong enough magnetic field. For a uniform magnetic field, the energy eigenvalues of the Hamiltonian are given by $$E = \sqrt{p_l^2 + \left(\sqrt{p_t^2 + m^2} - \mu B\right)^2}, \label{Epauli}$$ where $p_l$ and $p_t$ are respectively the longitudinal and the transversal momentum with respect to the magnetic field direction. One can see that for a magnetic field stronger than the critical field $B_c = m/\mu$, the ground state with $p_l = p_t = 0$ crosses the zero-energy state. This indicates a possible instability of the magnetic field configuration beyond the critical field strength, as in the electro-weak instability [@LY3]. For a uniform magnetic field, the imaginary part of the one-loop effective action is calculated explicitly[@LY2], $$\mathrm{Im}(V_{\mathrm{eff}}) \propto \left(\frac{\mu |B|}{m} - 1\right)^3 \left(\frac{\mu |B|}{m} + 3\right)\theta(\mu|B| - m),$$ which takes a similar form as eq. (\[ImVW\]). It is interesting to note that the development of an imaginary part associated with a level crossing has also been demonstrated in different contexts[@GF]. A state occupied in the negative energy sea becomes a particle state, and the vacant positive energy state plunges into the negative sea to make an antiparticle state. The instability can be interpreted as pair creation at the expense of the magnetic field strength. Then the particle production rate is given approximately by $$\Gamma \sim 2\, \mathrm{Im}(V_{\mathrm{eff}}) \propto \left(\frac{\mu |B|}{m} - 1\right)^3 \left(\frac{\mu |B|}{m} + 3\right)\theta(\mu|B| - m). \label{rate}$$ For a nonuniform magnetic field, the inhomogeneity of the magnetic field, coupled directly to the magnetic dipole moment, plays an interesting role analogous to that of the electric field for a charged particle: the non-zero gradient of the magnetic field can exert a force on a magnetic dipole moment. 
Then the vacuum production of neutral fermions with a non-zero magnetic moment in an inhomogeneous magnetic field is possible. As a simple example, a static magnetic field configuration with a constant gradient, $B'= dB_z/dx$, along the x-direction, $$B_z(x) = B_0 + B'x,$$ has been considered. It is not necessary to consider an infinitely extended, ever-increasing magnetic field to realize this linear field configuration. Because the particle production
--- address: | Université de Paris Sud, Laboratoire de l’Accélérateur Linéaire, Bât. 200, B.P. 34, FR-91898 ORSAY CEDEX\ E-mail: claire.bourdarios@cern.ch author: - 'C. BOURDARIOS' title: 'STUDY OF D$^{**}$ AND D$^{*''}$ PRODUCTION IN B AND C JETS, WITH THE DELPHI DETECTOR' --- Introduction ============ For mesons containing heavy and light quarks (Q$\bar q$), and in the limit where the heavy quark mass is much larger than the typical QCD scale, the spin $\overrightarrow{s_Q}$ of the heavy quark decouples from the other degrees of freedom. Thus, for strong decays, the total (spin+orbital) angular momentum $\overrightarrow{j_q} = \overrightarrow{s_q} + \overrightarrow{L}$ of the light component is conserved. This heavy quark symmetry, together with the quark potential models used for lower-mass mesons, allows the masses and decay widths of heavy mesons to be predicted [@HQET]. The present knowledge of charmed meson spectroscopy is summarized in Figure \[fig:spectro\]. The well-established D and D$^*$ mesons [@PDG] correspond to the two degenerate levels of the (L=0, $j_q$ = 1/2) state. The two (L=1, $j_q$ = 3/2) states have been clearly observed [@PDG], because they have narrow decay widths of about 20 [MeV/]{}$c^2$. The measured masses of the $D^0_1$(2420) and $D^{*0}_2$(2460) agree within 20 with the prediction of the models. Section 3 presents a measurement of their production rate in b and c jets. The (L=1, $j_q$ = 1/2) states decay through an S wave and are expected to have large decay widths. Up to now, they have not been observed directly, but their total production rate is measured using B meson semi-leptonic decays (section 4). In addition to these orbital excitations, radial excitations of heavy mesons are foreseen. The D$^{'}$ and D$^{*'}$ are expected to have masses of 2.58 and 2.64, respectively, with a 10-25 uncertainty on the mass predictions [@thdstar]. They are expected to decay, in an S wave, into $D^{(*)}\pi\pi$. 
Section 5 presents the first evidence for the D$^{*'}$ meson, observed in the decay mode ($D^* \pi \pi$). D$^{**}$ and D$^{*'}$ reconstruction ==================================== DELPHI [@delphi] is a multipurpose LEP detector, with special emphasis on precise vertex and charged tracks momentum reconstruction, and particle identification. The micro-vertex detector provides 3 R$\phi$ and 2 Z hits per track, with intrinsic resolutions of 7.6 and 9 $\mu$m. For muons of 45 momentum, a resolution of $\sigma(p)/p$ of $\pm$ 3% is obtained, and the precision of the track extrapolation to the beam collision point is 26 $\pm$ 2 $\mu$m. Kaon and pion identification is performed using a Ring Imaging CHerenkov detector, and the ionisation loss in the TPC, which is the main tracking device. A total of 3.4 million hadronic events is obtained from the 1992-1995 data, at center-of-mass energies close to the Z$^0$ mass. D$^*$ reconstruction -------------------- All the decay channels considered here involve the $D^{*+} \rightarrow D^0 \pi^+_*$ decay, followed by $D^0 \rightarrow (K^-\pi^+)$ or $D^0 \rightarrow (K^-\pi^+\pi^-\pi^+)$. [^1] To reconstruct the $D^0$ decay final state, all ($K^-\pi^+$) and ($K^-\pi^+\pi^-\pi^+$) combinations are tried to fit a secondary vertex in space. Kinematical and track selection cuts are described in detail in [@n483]. Kaon candidates are considered if they have a momentum larger than 1 and, in the $K 3 \pi$ channel, a loose kaon identification is required. The D$^0$ momentum and invariant mass are computed from the momenta of the decay products. Then, all charged particles with momentum between 0.4 and 4.5 and charge opposite to that of the kaon candidate are used as pion candidates for the $D^{*+} \rightarrow D^0 \pi^+_*$ decay. In the $K \pi$ ($K 3 \pi$) channel, events are selected if the mass difference $(M_{K \pi \pi_*} - M_{K \pi}$) (resp. $(M_{K3\pi\pi_*} - M_{K3\pi})$) is within $\pm$ 2 ($\pm$ 1 ) of the nominal value ($M_{D^*}-M_{D^0}$). 
The D$^*$ candidates must have an energy fraction $X_E(D^*) = E(D^*)/E_{beam}$ greater than 0.25. Figure \[fig:d0\] shows the distribution of the M($K\pi$) and M($K3\pi$) invariant masses for the selected events. The fitted $D^0$ masses and widths are 1868 $\pm$ 1 (1869 $\pm$ 1) and 19 $\pm$ 1 (12 $\pm$ 2). The reconstructed D$^0$ mass is required to lie within $\pm$ 40 ($\pm$ 30) of the nominal D$^0$ mass: 4661 $\pm$ 88 (2164 $\pm$ 65) D$^*$ candidates are selected in the $K \pi$ ($K 3 \pi$) channels. The selection efficiency is estimated, using the simulation, to be 21% (8%). $D^0_1$, $D^{*0}_2$ and D$^{*'}$ reconstruction ---------------------------------------------- Similar selection criteria and vertex reconstruction are used to reconstruct the narrow orbitally and radially excited states. In the case of $D^0_1$ and $D^{*0}_2$ decaying into $D^{*+}\pi^-$, a pion with a charge opposite to that of the D$^{*+}$ is added, and the $D^0 \pi^+_* \pi^-$ vertex is fitted. All combinations are tried, provided the pion candidate has a momentum larger than 1.0 (1.5) in the $K\pi$ ($K3\pi$) channel. The reconstruction efficiency is 14% (6%) in the $K \pi$ ($K 3 \pi$) channels. In the case of D$^{*'}$ decaying into D$^{*+}\pi^+\pi^-$, all pairs of oppositely charged pions are used to fit a $D^0 \pi^+ \pi^-$ vertex. The pion candidates are required to have a momentum larger than 0.6 (1.0) [GeV/c]{}, and those compatible with a kaon according to particle identification are rejected. For a signal of mass 2640 [MeV/]{}$c^2$, the reconstruction efficiency is 4% (2%) in the $K \pi$ ($K 3 \pi$) channels. In both cases, the precision on the invariant mass reconstruction is improved by correcting for a 4 shift observed in the D$^0$ mass, by using: $$\begin{array}{rcl} M(D^*\pi) = M_{(D^0\pi_*\pi)} - M_{(D^0\pi_*)} + m_{D^*} \\ M(D^*\pi\pi) = M_{(D^0\pi_*\pi\pi)} - M_{(D^0\pi_*)} + m_{D^*} \end{array} \label{eq:mass}$$ where $m_{D^*}$ is the nominal $D^{*+}$ mass. 
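The corrections of eq. (\[eq:mass\]) work because a shift common to both reconstructed invariant masses cancels in their difference. A purely kinematic toy sketch (the numerical values below are illustrative, not the measured ones):

```python
# Toy illustration: a common reconstruction shift 'delta' in the D0 mass
# propagates into both M(D0 pi* pi) and M(D0 pi*) and cancels in the
# corrected mass M(D* pi) = M(D0 pi* pi) - M(D0 pi*) + m(D*).
m_Dstar = 2010.0     # nominal D*+ mass, MeV/c^2 (rounded, illustrative)
delta   = 4.0        # assumed common shift of the reconstructed D0 mass

true_mass      = 2420.0               # e.g. a D_1(2420) candidate
raw_D0pistarpi = true_mass + delta    # shifted M(D0 pi* pi)
raw_D0pistar   = m_Dstar + delta      # shifted M(D0 pi*)

corrected = raw_D0pistarpi - raw_D0pistar + m_Dstar
```

In practice the cancellation is only approximate, since the shift need not be exactly the same in both combinations, but it motivates the use of mass differences rather than absolute masses.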
The simulation predicts a resolution of about 6 [MeV/]{}$c^2$ on the mass reconstruction, for both radial and orbital excitations.

Selection of $c \bar c$ and $b \bar b$ samples
----------------------------------

Due to the relatively long lifetimes of charm and bottom particles, heavy flavour events are characterized by the presence of secondary vertices. The probability $\mathcal{P}$ that all tracks detected in the event come from the primary vertex is small: for $b \bar b$ events, a purity of 90% is achieved, with an efficiency of 60%, by requiring $\mathcal{P} \le 10^{-2}$. Charmed mesons from $Z^0 \rightarrow b \bar b$ events are distinguished from those in $c \bar c$ events by considering both their energy and lifetime information. Bottom quarks fragment into a B hadron, which subsequently decays into a D$^{*+}$ meson, whereas in $c \bar c$ events charmed mesons are produced directly in the fragmentation process. This difference in the hadronization leads to a smaller energy fraction $X_E(D^*)$ for $b \bar b$ events. Also, due to the b quark lifetime, the apparent flight distance of the $D^0$ meson is greater than its true decay length, and its measured proper time distribution reflects the mean B meson lifetime, 1.6 ps, rather than the true $D^0$ lifetime of 0.4 ps.
--- abstract: 'Let $G$ be a finite $p$-group and let $\operatorname{Aut}(G)$ denote the full automorphism group of $G$. In the recent past, there has been interest in finding necessary and sufficient conditions on $G$ such that certain subgroups of $\operatorname{Aut}(G)$ are equal. We prove a technical lemma and, as a consequence, obtain some new results and short and alternate proofs of some known results of this type.'
author:
- |
    Deepak Gumber[^1] and Hemant Kalra\
    School of Mathematics and Computer Applications\
    Thapar University, Patiala - 147 004, India
title: '**Equality of certain automorphism groups of finite $p$-groups**'
---

[**2010 Mathematics Subject Classification:**]{} 20D45, 20D15. [**Keywords:**]{} Central automorphism, IA-automorphism, $p$-group.

Introduction
============

Let $G$ be a finite non-abelian $p$-group and let $M_1,M_2,N_1,N_2$ be normal subgroups of $G$. For normal subgroups $X$ and $Y$ of $G$, let $\operatorname{Aut}^X(G)$ and $\operatorname{Aut}_Y(G)$ denote the subgroups of $\operatorname{Aut}(G)$ centralizing $G/X$ and $Y$ respectively. We denote the intersection $\operatorname{Aut}^X(G)\cap\operatorname{Aut}_Y(G)$ by $\operatorname{Aut}_Y^X(G)$. In the recent past, many results have been proved which give necessary and sufficient conditions on $G$ such that $\operatorname{Aut}_Y^X(G)=\operatorname{Inn}(G)$ or $Z(\operatorname{Inn}(G))$, or such that $\operatorname{Aut}_{N_1}^{M_1}(G)=\operatorname{Aut}_{N_2}^{M_2}(G)$ with particular choices of $M_i$ and $N_i$ (see e.g. \[3-5, 7-13\]). Quite recently, Azhdari and Malayeri [@azhmal Theorem B, Corollary C] have found conditions on certain $M_i$ and $N_i$ so that $\operatorname{Aut}_{N_1}^{M_1}(G)=\operatorname{Aut}_{N_2}^{M_2}(G)$. We prove a short technical lemma, Lemma 2.2, and as a consequence, obtain very short and easy proofs of these main results of Azhdari and Malayeri.
Subsequently, we also obtain some new results of this type and alternate proofs of the main results of Attar [@att3 Theorem A], Jafari [@jaf Theorem] and Rai [@rai Theorem B(1)]. Notations are mostly standard. By $\operatorname{Hom}(G,A)$ we denote the group of all homomorphisms of $G$ into an abelian group $A$ and by $C_n$ we denote the cyclic group of order $n$. The rank, exponent and nilpotence class of $G$ are respectively denoted as $d(G)$, $\exp(G)$ and $cl(G)$. A non-abelian group $G$ that has no non-trivial abelian direct factor is said to be purely non-abelian. An automorphism $\alpha$ of $G$ is called a central automorphism if it centralizes $G/Z(G)$, or equivalently, $x^{-1}\alpha(x)\in Z(G)$ for all $x\in G$. By $\operatorname{Aut}_c(G)$ we denote the group of all central automorphisms of $G$, and by $C^*$ we denote the group of all those central automorphisms of $G$ which fix $Z(G)$ element-wise. An automorphism $\alpha$ of $G$ is called an IA-automorphism if it centralizes the abelianized group $G/G'$. The group of all IA-automorphisms is denoted as $\operatorname{IA}(G)$ and the group of all those IA-automorphisms which fix $Z(G)$ element-wise is denoted as $\operatorname{IA}(G)^*$.

Automorphism groups of $G$
==========================

While proving the equality of different automorphism groups of $G$, the foremost tool has been to express the group $\operatorname{Aut}^{X}_{Y}(G)$ in the form $\operatorname{Hom}(A,B)$ for suitable subgroups or quotient groups $A$ and $B$ of $G$. This trick has been well known for a long time. Our next lemma is a slight modification of the arguments of Alperin [@alp Lemma 3] and Fournelle [@fou Section 2]. \[ML\] Let $G$ be any group and $X$ be a central subgroup of $G$ contained in a normal subgroup $Y$ of $G$. Then $\operatorname{Aut}^{X}_{Y}(G)\simeq\operatorname{Hom}(G/Y,X)$.
Let $X$ and $Y$ be two finite abelian $p$-groups and let $X\simeq C_{p^{x_1}}\times C_{p^{x_2}}\times\ldots\times C_{p^{x_h}}$ and $Y\simeq C_{p^{y_1}}\times C_{p^{y_2}}\times\ldots\times C_{p^{y_k}}$ be the cyclic decompositions of $X$ and $Y$, where $x_i\ge x_{i+1}$ and $y_i\ge y_{i+1}$ are positive integers. If $X$ is either a subgroup or a quotient group of $Y$, then $h\le k$ and $x_i\le y_i$ for $1\le i\le h$. Consider the situation when $d(X)=d(Y)$ and $X$ is a proper subgroup or a proper quotient group of $Y$. In these circumstances, $h=k$ and there certainly exists an $r,\;1\le r\le h$, such that $x_r<y_r$ and $x_j=y_j$ for $r+1\le j\le h$. For this unique fixed $r$, let $var(X,Y)=p^{x_r}$. In other words, $var(X,Y)$ denotes the order of the last cyclic factor of $X$ whose order is less than that of the corresponding cyclic factor of $Y$. \[MT\] Let $A,B,C$ and $D$ be finite abelian $p$-groups with $B$ a subgroup of $C$ and $D$ a quotient group of $A$. Then 1. $\operatorname{Hom}(A,B)= \operatorname{Hom}(A,C)$ if and only if either $B=C$ or $d(B)=d(C)$ and $\exp(A)\le var(B,C)$, 2. $|\operatorname{Hom}(D,B)|=|\operatorname{Hom}(A,B)|$ if and only if either $D=A$ or $d(D)=d(A)$ and $\exp(B)\le var(D,A)$. We prove only $(i)$, as the proof of $(ii)$ is similar. Let $$\begin{array}{rcl} A &\simeq & C_{p^{\alpha_1}}\times C_{p^{\alpha_2}}\times\ldots \times C_{p^{\alpha_l}},\\ B & \simeq & C_{p^{\beta_1}}\times C_{p^{\beta_2}}\times\ldots \times C_{p^{\beta_m}}, \;\mbox{and}\\ C &\simeq & C_{p^{\gamma_1}}\times C_{p^{\gamma_2}}\times\ldots \times C_{p^{\gamma_n}} \end{array}$$ be the cyclic decompositions of $A$, $B$ and $C$, where $\alpha_i\geq\alpha_{i+1},\;\beta_i\geq\beta_{i+1},$ and $\gamma_i\geq\gamma_{i+1}$ are positive integers. First suppose that $\operatorname{Hom}(A,B)= \operatorname{Hom}(A,C)$ and $B<C$.
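The counting behind Theorem \[MT\] can be made concrete with a few lines of Python. This is an illustrative sketch (the helper names are ours, not the paper's), using the standard fact $|\operatorname{Hom}(C_{p^a},C_{p^b})|=p^{\min(a,b)}$:

```python
def hom_order(p, alphas, betas):
    """|Hom(A,B)| for A = prod C_{p^alpha_i}, B = prod C_{p^beta_j},
    given as lists of exponents, via |Hom(C_{p^a},C_{p^b})| = p^min(a,b)."""
    return p ** sum(min(a, b) for a in alphas for b in betas)

def var_exponent(xs, ys):
    """Exponent of var(X,Y): the last cyclic factor of X whose order is
    smaller than that of the corresponding factor of Y.
    Assumes d(X) = d(Y) and x_i <= y_i with at least one strict inequality."""
    return xs[max(i for i in range(len(xs)) if xs[i] < ys[i])]

# Theorem (i), illustrated: B = C_2 x C_2 < C = C_4 x C_2, A = C_2.
# Here d(B) = d(C) and exp(A) = 2 <= var(B,C) = 2, so the Hom orders agree.
p, A, B, C = 2, [1], [1, 1], [2, 1]
assert hom_order(p, A, B) == hom_order(p, A, C) == 4
assert p ** A[0] <= p ** var_exponent(B, C)
```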
Then $$\displaystyle\prod_{i=1}^l\displaystyle\prod_{j=1}^mp^{\mathrm {min}\{\alpha_i, \beta_j\}}=\displaystyle\prod_{i=1}^l\displaystyle\prod_{k=1}^np^{\mathrm {min}\{\alpha_i,\gamma_k\}}.$$ Since $m\le n$ and $\beta_j\le \gamma_j$ for each $1\le j\le m$, $\mathrm{min}\lbrace\alpha_i,\beta_j\rbrace\le \mathrm{min}\lbrace\alpha_i,\gamma_j\rbrace$. If $m<n$, then $|\operatorname{Hom}(A,B)|<|\operatorname{Hom}(A,C)|$, which is not so. Thus $m=n$ and $\mathrm{min}\lbrace\alpha_i,\beta_j\rbrace= \mathrm{min}\lbrace\alpha_i,\gamma_j\rbrace$ for all $i$ and $j$. Let $var(B,C)=p^{\beta_r}$, $1\le r\le m$. Observe that $\exp(A)\le var
--- abstract: 'We consider the latency minimization problem in a task-offloading scenario, where multiple servers are available to the user equipment for outsourcing computational tasks. To account for the temporally dynamic nature of the wireless links and the availability of the computing resources, we model the server selection as a multi-armed bandit (MAB) problem. In the considered MAB framework, rewards are characterized in terms of the end-to-end latency. We propose a novel online learning algorithm based on the principle of optimism in the face of uncertainty, which outperforms the state-of-the-art algorithms by up to $\sim$1 s. Our results highlight the significance of heavily discounting the past rewards in dynamic environments.' author: - | Aniq Ur Rahman$^{{\href{https://orcid.org/0000-0003-3685-7201}{\mbox{\scalerel*{ \begin{tikzpicture}[yscale=-1,transform shape] \pic{orcidlogo}; \end{tikzpicture} }{|}}}}}$,  Gourab Ghatak$^{{\href{https://orcid.org/0000-0002-8240-4038}{\mbox{\scalerel*{ \begin{tikzpicture}[yscale=-1,transform shape] \pic{orcidlogo}; \end{tikzpicture} }{|}}}}}$, and Antonio De Domenico$^{{\href{https://orcid.org/0000-0003-1229-4045}{\mbox{\scalerel*{ \begin{tikzpicture}[yscale=-1,transform shape] \pic{orcidlogo}; \end{tikzpicture} }{|}}}}}$ [^1] [^2] [^3] [^4] bibliography: - 'bare\_jrnl.bib' title: 'An Online Algorithm for Computation Offloading in Non-Stationary Environments' --- Mobile Edge Computing, Online Learning, Computation Offloading, Multi-armed Bandit.

Introduction
============

Future mobile networks will be characterized by ubiquitous coverage, ultra-low latency services, quasi-deterministic communications, and the need for extremely high data rates. In this context, a radical change consists of empowering mobile devices with data processing and storage capabilities, thereby reducing the end-to-end latency of the mobile services. This paradigm is called MEC [@Mao2017], also known as mobile edge computing.
In MEC networks, small cells integrate computing capabilities and local cache memories into the standard base stations. Consequently, a UE can request a small cell to run a computational assignment on its behalf, resulting in a reduced effective latency and an increased battery-life. This procedure is called *task* or *computation offloading* [@Barbarossa2014]. Additionally, the MEC-enabled small cells can implement proactive caching strategies to satisfy the ever growing demand for downloadable multimedia content in the mobile networks, thereby limiting the load on the transport network [@Bastug2014]. The MEC resources are often divided into three categories: communication, computing, and caching [@Wang2017]. In [@elbamby2019wireless] the authors have provided a detailed overview of MEC technology and its use-cases, particularly focusing on the services requiring low-latency and highly-reliable communications. Several researchers have investigated policies to determine when computation offloading is more efficient than local processing. For instance, Elbamby [*et al.*]{} [@elbamby2017proactive] have studied the task-offloading problem formulated as a matching game, subject to latency and reliability constraints. More recently, computation offloading was also extended to more realistic scenarios, where system dynamics and information uncertainty are taken into consideration. For example, Liao [*et al.*]{} [@icc_fog] have proposed a robust two-stage task offloading algorithm that integrates contract theory with computational intelligence to minimize the long-term delay of task assignment. Along the same lines, the multi-armed bandit (MAB) is an online framework that can be used to find an optimal policy when the reward distribution of the actions is not *a priori* known [@lattimore2018bandit]. In particular, we focus on the case where the system characteristics, i.e., the resource availability and the wireless channel, are [*non-stationary*]{}[^5].
It must be noted that, in non-stationary scenarios, off-the-shelf MAB algorithms may indeed be sub-optimal due to the usage of outdated information. Therefore, it becomes necessary to [*forget*]{} past rewards and rapidly update the reward distribution based on recent information. However, selecting the policy refresh rate is challenging since the *agent* is typically not aware of the temporal behaviour of the system. Earlier, researchers came up with the idea of *discounting* the past rewards, to make the system adaptive to dynamic changes, and introduced the discounted variants [@raj2017taming; @garivier2008upper] of classical algorithms. Garivier and Moulines [@garivier2008upper] considered a scenario where the distribution of the rewards remains constant over epochs and changes at unknown time instants (i.e., abrupt changes). They analyzed the theoretical upper bounds of the regret for the discounted upper confidence bound (UCB) and the sliding window UCB. Gupta [*et al.*]{} [@gupta2011thompson] extended this idea to Bayesian methods, and proposed [Dynamic Thompson Sampling (Dynamic TS)]{}. Hartland [*et al.*]{} [@hartland2006multi] considered dynamic bandits with [abrupt changes]{} in the reward generation process, and proposed an algorithm called [Adapt-EvE]{}. Slivkins and Upfal [@slivkins2008adapting] considered a dynamic bandit setting where the [reward evolves as Brownian motion]{} or a random walk, and provided regret bounds linear in the time horizon. Sana [*et al.*]{} [@sana2019multi] have solved the problem of optimizing the UE-BS association by employing Deep Reinforcement Learning. Liao [*et al.*]{} [@liao2019learning] have maximized the long-term throughput for a machine type device (MTD) subject to energy and data-size constraints in a learning-based channel selection framework. The learning algorithm they propose is a variant of UCB. However, these works do not take into account the abrupt changes at unknown times.
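As a concrete illustration of discounting, the following is a minimal discounted-UCB sketch in the spirit of Garivier and Moulines. It is our own illustrative implementation, not code from any of the cited works; the class name and the default values of the discount $\gamma$ and the exploration constant $\xi$ are our choices:

```python
import math

class DiscountedUCB:
    """UCB with geometrically discounted statistics.

    The effective memory is roughly 1/(1 - gamma) steps, so recent rewards
    dominate and the policy can track abrupt changes."""

    def __init__(self, n_arms, gamma=0.9, xi=0.5):
        self.gamma, self.xi = gamma, xi
        self.N = [0.0] * n_arms   # discounted pull counts
        self.S = [0.0] * n_arms   # discounted reward sums

    def select(self):
        if any(n == 0.0 for n in self.N):          # initialise: play each arm once
            return self.N.index(0.0)
        n_tot = sum(self.N)
        return max(range(len(self.N)),
                   key=lambda i: self.S[i] / self.N[i]
                   + math.sqrt(self.xi * math.log(n_tot) / self.N[i]))

    def update(self, arm, reward):
        for i in range(len(self.N)):               # forget the past geometrically
            self.N[i] *= self.gamma
            self.S[i] *= self.gamma
        self.N[arm] += 1.0
        self.S[arm] += reward
```

Because every arm's statistics are multiplied by $\gamma$ at each step, the discounted count of an unplayed arm decays, its confidence padding grows, and the arm is eventually re-explored; an undiscounted UCB would instead keep averaging over the entire, possibly stale, history.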
In this paper, we model the server selection problem as the exploration-exploitation dilemma of a restless MAB framework with non-stationary rewards. For this problem, we propose an online learning algorithm, *Sisyphus*, which is model-free and based on the principle of optimism in the face of uncertainty. In particular, we selectively retain the knowledge of the past rewards so as to keep up with the dynamic environment. We show that Sisyphus achieves the lowest normalized regret as compared to the other algorithms proposed for the non-stationary bandit problem, namely, Thompson sampling (TS), discounted TS, discounted optimistic TS, and discounted UCB. Consequently, Sisyphus is shown to reduce the end-to-end latency by up to 1 s under the considered test environment.

System Model
============

We focus on a UE offloading its computational task to a nearby MEC server $s_i \in \mathcal{S}$, where $\mathcal{S}$ represents the set of all servers. We assume that one task is offloaded by the UE in each time-step $t \in \{1, 2, 3, ..., T \}$ of duration $\delta$. The aim of the UE is to select the server which results in a minimum delay, while taking into account the task execution and signal propagation delays. The server $s_i$ performs the task, whose intensity $\kappa$ denotes the CPU cycles required to process a byte of the task, using its available computing resources, which evolve over time [@dandachi2019artificial]. Unlike the centralized architecture in [@liang2019multiuser], we consider a distributed system where each user selects an MEC server independently of the other users' decisions. Specifically, the link between the UE and the server is assumed to be affected by dynamic blockages, where the probability of blockage of the server $s_i$ is denoted by $p_{B,i}$. In addition, we model the servers as the arms in an MAB framework, where the resource availability $a_i(t)$ varies with time in a *doubly-stochastic* manner.
The computing resources available at time-step $t$ are expressed as $a_i(t)c_i$, where $c_i$ is the maximum computing capacity[^6] of the server and $a_i(t) \in (0,1)$ is the fraction of the computing capacity available at time $t$. We refer to the quantity $a_i(t)$ as the *resource availability*. We assume that the number of UEs associated with a server changes after a certain number of time-steps, which in turn impacts the resource availability. This set of consecutive time-steps constitutes an **epoch**. If the probability that the number of UEs $v(t)$ connected to a server $s_i$ changes in a single time-step is $p = \text{Pr}\{v(t) \neq v(t-1)\}$, then the probability that it remains unchanged for $\Delta$ consecutive time-steps is given by the geometric distribution [@vaseghi1995state]: $$\Pi_{l=1}^{\Delta} (1-p) = (1-p)^{\Delta}.$$ We set $p = \frac{1}{\Lambda_i}$ where $\Lambda_i$ is the mean value of the epoch duration. The $j
--- abstract: 'We consider an inverse $N$-body scattering problem of determining two potentials—an external potential acting on all particles and a pair interaction potential—from the scattering particles. This paper finds that the time-dependent Hartree-Fock approximation for a three-dimensional inverse $N$-body scattering in quantum mechanics enables us to recover the two potentials from the scattering states with high-velocity initial states. The main ingredient of the mathematical analysis in this paper is the asymptotic analysis of the scattering operator, defined in terms of a scattering solution to the Hartree-Fock equation, at high energies. We show that the leading part of the asymptotic expansion of the scattering operator uniquely reconstructs the Fourier transform of the pair interaction, and the second term of the expansion uniquely reconstructs the $X$-ray transform of the external potential.' author: - | Michiyuki Watanabe\ Faculty of Education\ Niigata University\ Niigata, Japan\ `mwatanab@ed.niigata-u.ac.jp`\ bibliography: - 'michiyukirefs2.bib' title: 'Inverse $N$-body scattering with the time-dependent Hartree-Fock approximation' ---

Introduction
============

Problem and result
------------------

Consider a quantum $N$-body system of identical particles interacting pairwise through a two-body potential, under an external potential acting on all particles. A typical example is $N$ electrons in an atom with proton number $Z$ at the nucleus. In that case, the external potential is the nucleus-electron attraction, and the two-body potential is the electron-electron repulsion. Inverse $N$-body scattering problems ask one to determine the interaction potential and the external potential from the scattering states of the particles.
Such inverse problems have been extensively studied for $N$-body Schrödinger equations with no external potentials (Enss and Weder [@Enss-Weder1995]; Novikov [@Novikov]; Wang [@Wang; @Wang1996]; Vasy [@Vasy]; Uhlmann and Vasy [@Uhlmann-Vasy; @Uhlmann-Vasy2003; @Uhlmann-Vasy2004]). The inverse scattering for the $N$-body Schrödinger equation in an external constant electric field was investigated by Valencia and Weder [@Valencia-Weder2012]. In a different direction, Lemm and Uhlig [@Lemm-Uhlig2000] have investigated an inverse $N$-body problem using a Bayesian approach with the Hartree-Fock approximation. They gave a computationally feasible method of reconstructing an interaction potential from data given by solutions to a stationary Hartree-Fock equation. Their work indicates that the Hartree-Fock approximation is also extremely useful as a way to investigate inverse $N$-body problems. The above mentioned works have focused only on recovering interactions. Since an $N$-body system is generally described by a non-relativistic Hamiltonian consisting of a one-body term with the kinetic energy and an external potential, and a two-body interaction term, the inverse problem of determining both the interaction potential and the external potential should also be investigated. However, little has been reported on the determination of both the interaction potential and the external potential in quantum $N$-body systems. In this paper, we find that the time-dependent Hartree-Fock approximation for the inverse $N$-body scattering in quantum mechanics enables us to recover two potentials—an external potential acting on all particles and a pair interaction potential—from the scattering states with high-velocity initial states. This paper also proposes a new reconstruction procedure for recovering the two potentials. Let us formulate our inverse problem and state our main result.
We first recall that the $n$-dimensional $N$-body Schrödinger equation has the form: $$\begin{aligned} & i\frac{{\partial}}{{\partial}t}\Psi(t) = \widetilde{H}_N \Psi (t), \\ & \widetilde{H}_N = \sum_{j=1}^N \left[ \frac{1}{2}\left( -i \nabla_{\mathbf{x}_j} \right)^2 +V_{ext}(\mathbf{x}_j)\right] + \sum_{j<k}^N V_{int}(\mathbf{x}_j - \mathbf{x}_k),\end{aligned}$$ where $i=\sqrt{-1}$, $\mathbf{x}_j\in {\mathbb{R}}^n$, $V_{ext}(\mathbf{x}_j)$ is an external potential and $V_{int}(\mathbf{x}_j)$ is an interaction potential with $V_{int}(\mathbf{x}_j)=V_{int}(-\mathbf{x}_j)$. The Hartree-Fock approximation is known as the simplest one-body approximation. Writing the $N$-body wave function $\Psi(t)=\Psi (t, \mathbf{x}_1, \cdots , \mathbf{x}_N)$ with the Slater determinant $$\Psi (t, \mathbf{x}_1, \cdots , \mathbf{x}_N) = (N!)^{-1/2} {\rm det} \left( u_j (t, \mathbf{x}_k) \right)_{1\le j, k \le N}$$ yields the one-body Schrödinger equation: $$\begin{aligned} \label{eqn:1-1} i \frac{{\partial}u_j }{{\partial}t} &= H(u_k) u_j, \\ H(u_k)u_j &= \left[ H_0 + V_{ext} + Q_H(x,{\mbox{\boldmath ${u}$}}) \right] u_j + \int_{{\mathbb{R}}^n} Q_F (x,y, {\mbox{\boldmath ${u}$}}) u_j (t, y) \, dy \qquad \text{for $1\le j \le N$}, \notag\end{aligned}$$ where $H_0= -\dfrac{1}{2} \Delta= -\dfrac{1}{2}\sum_{j=1}^n \frac{{\partial}^2}{{\partial}x_j^2}$ and ${\mbox{\boldmath ${u}$}}={\mbox{\boldmath ${u}$}}(t,x)=( u_j(t,x) )_{1\le j\le N}$ is an unknown function in $(t,x) \in {\mathbb{R}}\times {\mathbb{R}}^n$, and $$\begin{aligned} Q_H (x, {\mbox{\boldmath ${u}$}}) & = \int_{{\mathbb{R}}^n} V_{int}(x-y) \sum_{\substack{k=1 \\ k \not=j}}^N | u_{k}(t, y) |^2 \, dy, \\ &= V_{int}* \sum_{\substack{k=1 \\k\not=j}}^N | u_k (t, \cdot)|^2, \\ Q_F(x,y, {\mbox{\boldmath ${u}$}}) &= - V_{int}(x-y) \sum_{\substack{k=1 \\k\not=j}}^N \overline{u_k} (t, y) u_k (t, x). \end{aligned}$$ The non-linear Schrödinger equation we study in this paper is called the Hartree-Fock equation (HF equation). 
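The antisymmetry built into the Slater-determinant ansatz $\Psi = (N!)^{-1/2}\,{\rm det}(u_j(x_k))$ can be checked numerically. Below is a small illustrative sketch written for this text (the toy one-dimensional orbitals are our own choice, not from the paper): exchanging two particle coordinates swaps two columns of the matrix, so the determinant, and hence $\Psi$, changes sign.

```python
import math
import numpy as np

def slater(orbitals, xs):
    """Psi(x_1,...,x_N) = (N!)^(-1/2) det(u_j(x_k)) for scalar points x_k."""
    n = len(orbitals)
    mat = np.array([[u(x) for x in xs] for u in orbitals])  # row j, column k
    return np.linalg.det(mat) / math.sqrt(math.factorial(n))

# three toy one-dimensional orbitals (Gaussians times polynomials)
orbs = [lambda x: math.exp(-x * x),
        lambda x: x * math.exp(-x * x),
        lambda x: (2 * x * x - 1) * math.exp(-x * x)]

psi = slater(orbs, [0.3, -0.7, 1.1])
psi_swapped = slater(orbs, [-0.7, 0.3, 1.1])   # exchange particles 1 and 2
assert abs(psi + psi_swapped) < 1e-9           # Psi is antisymmetric
```

This fermionic antisymmetry is exactly what reduces the $N$-body problem to the coupled one-body equations (\[eqn:1-1\]), with the Hartree and Fock terms arising from the direct and exchange contractions of the determinant.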
The terms $Q_H( x,{\mbox{\boldmath ${u}$}} ) u_j (t,x)$ and $\int Q_F(x,y,{\mbox{\boldmath ${u}$}}) u_j (t,y) dy$ are called the Hartree term and the Fock term, respectively. Next, we introduce some notations and assumptions on the potentials. Let $W^{k,p}({\mathbb{R}}^n)$ be the usual Sobolev space in $L^p({\mathbb{R}}^n)$. We abbreviate $W^{k,2}({\mathbb{R}}^n)$ as $H^k({\mathbb{R}}^n)$. The weighted $L^2$-space is denoted as $$L^{2,s}({\mathbb{R}}^n) =\left\{ u(x)\, : \, (1+|x|^2)^{s/2}u(x) \in L^2({\mathbb{R}}^n) , \, s\in {\mathbb{R}}\right\}.$$ Let $C_0^{\infty}({\mathbb{R}}^n)$ be the set of compactly supported smooth functions and ${\ensuremath{\mathcal{S}}}({\mathbb{R}}^n)$ be the set of rapidly decreasing functions on ${\mathbb{R}}^n$. The Fourier transform is denoted as $$\left( {\ensuremath{\mathcal{F}}}u \right)(\xi) = \widehat{u}(\xi) = \frac{1}{(2\pi)^{n/2}}\int_{{\mathbb{R}}^n} e^{-ix\cdot \xi} u(x) \, dx.$$ We define a function space ${\ensuremath{\mathcal{S}}}_0({\mathbb{R}}^n)$ as $${\ensuremath{\mathcal{S}}}_0 ({\mathbb{R}}^n) = \left\{ f\in {\ensuremath{\mathcal{S}}}({\mathbb{R}}^n)\, ; \, \widehat{f} \in C_0^{\infty}({\mathbb{R}}^n)\right\}.$$ The multiplication operator with a fixed function $V(x)$ is denoted as $V$. The unitary group of the self-adjoint
--- abstract: 'On 14 December 1900, Max Planck reported to the German Physical Society on his physical interpretation of a harmless-looking formula, which he himself had put forward shortly before, describing the spectral behaviour of so-called heat radiation. Driven decisively by the intervention of Albert Einstein, this grew over the following quarter of a century into a fundamental crisis of physics, which then issued in a scientific revolution of the greatest magnitude: quantum theory. From the very beginning, quantum theory developed diametrically against the intentions of its creators. For Planck it meant, despite the highest external honours, the complete failure of a research programme pursued over many years; for Einstein it ultimately meant a repudiation of his fundamental scientific convictions. We describe the background of this strange development and thereby illuminate the conceptual side of physical (and, more generally, scientific) research, which is commonly much underestimated.'
author:
- |
    Domenico Giulini\
    Universität Freiburg\
    Physikalisches Institut\
    Hermann-Herder-Straße 3\
    79104 Freiburg
title: |
    **"*Es lebe die Unverfrorenheit!*" ("Long live impudence!")**\
    Albert Einstein and the Founding of Quantum Theory[^1]
---

To understand the role Albert Einstein played in the development of quantum theory, we must first recall Planck's preceding achievements, which led him to the formulation of his famous radiation formula. With it he accomplished the complete *quantitative* elucidation of the phenomenon of *heat radiation*, which earned him the Nobel Prize for 1918: "in recognition of the merit he acquired for the development of physics through his quantum theory". By that time the actual deed already lay more than 17 years in the past. More precisely, it is to be dated to 14 December 1900.
It will be discussed further below. Somewhat less well known is the fact that this scientific feat of Planck's at the same time meant the complete demolition of his long-standing, meticulously prepared and masterfully executed research programme, which was rooted in a deeply anti-atomistic conception of nature oriented towards absolute laws. In pursuing these ideals, Planck laid the foundation stone of quantum theory, which helped consistent atomism to its final breakthrough and assigned the element of chance a fundamental significance within the fabric of physical laws. The chief driving force of this development, which ran diametrically counter to Planck's conceptions, was Albert Einstein. Stubbornly, and at times impudently[^2], he insisted on the complete clarification of the conceptual foundations and consequences of Planck's theory. With his light-quantum hypothesis he not only explained the photoelectric effect but also laid bare the genuinely revolutionary core of that theory, and thereby decisively provoked a deep crisis that culminated twenty years later in the formulation of quantum mechanics. Exaggerating somewhat, but accurately in essence, one can say that Einstein was the only one who took Planck's theory truly seriously: so seriously that in the end its consequences turned even against his own fundamental convictions.

Planck's Programme
================

Planck had set himself an ambitious research programme already in his early years. He wanted to give a rigorous foundation to the so-called second law[^3] of thermodynamics with the help of the theory of electromagnetic processes.
This stood in opposition to the proponents of atomism, who wished to see in the laws of thermodynamics merely statistical regularities of an otherwise lawless motion of very many molecules, whereas Planck firmly believed in strict lawfulness without statistical exceptions. In an early paper from 1884 the 24-year-old writes self-confidently ([@Planck-GW], Volume I, Document No. 4, pp. 162-163):

> "The second law of the mechanical theory of heat, carried through consistently, is incompatible with the assumption of finite atoms. It is therefore to be foreseen that, in the course of the further development of the theory, a battle between these two theories will arise which will cost one of them its life."

Two lines further on he leaves little doubt as to which of the two theories, in his opinion and hope, will have to give up its life:

> "... meanwhile, various signs seem to me at the moment to indicate that, despite the successes achieved so far by the atomistic theory, one will in the end have to resolve to abandon it and to adopt the assumption of a continuous matter."

At this time the young Planck was a declared anti-atomist. His plan was to attempt to ground the thermodynamic laws not through a mechanics of elementary constituents (atoms, molecules), but with the help of the laws of electrodynamics, which operate with purely continuous quantities distributed in space. In his inaugural address on the occasion of his admission to the Prussian Academy of Sciences in 1894 he declared ([@Planck-GW], Volume III, Document No. 122, p. 3):

> "Recently, the endeavour has also gained ground in physical research not to seek the connection between the phenomena in mechanics at all \[..\].
Likewise it is to be hoped that we can also gain closer insight into those electrodynamic processes which are directly conditioned by temperature, as they manifest themselves above all in heat radiation, without first having to take the laborious detour through the mechanical interpretation of electricity." Planck thus believed in the possibility of understanding the laws of thermodynamics, in particular the second law, as a strict consequence of known electromagnetic laws.[^4] The latter was to be derivable from the most general principles, in accordance with his scientific disposition, which he expressed in his late, personally toned article
--- abstract: | A new two-component system with cubic nonlinearity and linear dispersion: $$\begin{aligned} \left\{\begin{array}{l} m_t=bu_{x}+\frac{1}{2}[m(uv-u_xv_x)]_x-\frac{1}{2}m(uv_x-u_xv), \\ n_t=bv_{x}+\frac{1}{2}[ n(uv-u_xv_x)]_x+\frac{1}{2} n(uv_x-u_xv), \\m=u-u_{xx},~~ n=v-v_{xx}, \end{array}\right. \end{aligned}$$ where $b$ is an arbitrary real constant, is proposed in this paper. This system is shown to be integrable, with its Lax pair, bi-Hamiltonian structure, and infinitely many conservation laws. Geometrically, this system describes a nontrivial one-parameter family of pseudo-spherical surfaces. In the case of $b=0$, the peaked soliton (peakon) and multi-peakon solutions are studied. In particular, the two-peakon dynamical system is explicitly solved and the peakon interactions are investigated in detail. In the case of $b\neq0$, the weak kink solution is discussed. In addition, a new integrable nonlinear Schrödinger type equation $$\begin{aligned} m_t=bu_{x}+\frac{1}{2}[m(|u|^2-|u_x|^2)]_x-\frac{1}{2}m(uu^\ast_x-u_xu^\ast), \quad m=u-u_{xx},\end{aligned}$$ is obtained by imposing the complex conjugate reduction $v=u^\ast$ on the two-component system. The complex-valued $N$-peakon solution and weak kink solution of this nonlinear Schrödinger type equation are also derived. [**Keywords:**]{} Integrable system, Lax pair, Peakon, Weak kink. [**PACS:**]{} 02.30.Ik, 04.20.Jb. author: - | Baoqiang Xia$^{1}$[^1],   Zhijun Qiao$^{2}$[^2]\ $^{1}$School of Mathematics and Statistics, Jiangsu Normal University,\ Xuzhou, Jiangsu 221116, P. R.
China\ $^2$Department of Mathematics, University of Texas-Pan American,\ Edinburg, Texas 78541, USA title: 'A new two-component integrable system with peakon and weak kink solutions' ---

Introduction
=============

In recent years, the Camassa-Holm (CH) equation [@CH] $$\begin{aligned} m_t-bu_x+2m u_x+m_xu=0, \quad m=u-u_{xx}, \label{CH}\end{aligned}$$ where $b$ is an arbitrary constant, derived by Camassa and Holm [@CH] as a shallow water wave model, has attracted much attention in the theory of solitons and integrable systems. The CH equation appeared as a very special case in the work of Fuchssteiner and Fokas on hereditary symmetries [@FF1]. Since the work of Camassa and Holm [@CH], this equation has been studied extensively from many perspectives [@CH2]-[@CGI]. The most interesting feature of the CH equation (\[CH\]) is that it admits peaked soliton (peakon) solutions in the case of $b=0$. A peakon is a weak solution in some Sobolev space with a corner at its crest. The stability and interaction of peakons were discussed in several references [@CS1]-[@JR]. In addition to the CH equation, other integrable models with peakon solutions have been found [@DP1]-[@NV1]. Among these models, there are two integrable peakon equations with cubic nonlinearity, which are $$\begin{aligned} m_t=bu_x+\left[ m(u^2-u^2_x)\right]_x, \quad m=u-u_{xx},\label{cCHQ}\end{aligned}$$ and $$\begin{aligned} m_t=u^2m_x+3uu_xm, \quad m=u-u_{xx}.\label{cCHN}\end{aligned}$$ Equation (\[cCHQ\]) was proposed independently by Fokas (1995) [@Fo], Fuchssteiner (1996) [@Fu], Olver and Rosenau (1996) [@OR], and Qiao (2006) [@Q1], where the Lax pair and peaked/cusped solitons are presented. Equation (\[cCHQ\]) is the first cubic nonlinear integrable system possessing peakon solutions. Recently, the peakon stability of equation (\[cCHQ\]) with $b=0$ was worked out by Gui, Liu, Olver and Qu [@GLOQ].
In 2009, Novikov [@NV1] derived another cubic equation, which is equation (\[cCHN\]), via the symmetry approach, and Hone and Wang [@HW1] gave its Lax pair, bi-Hamiltonian structure, and peakon solutions. Very recently [@QXL], we derived the Lax pair, bi-Hamiltonian structure, peakons, weak kinks, kink-peakon interactional solutions, and smooth soliton solutions for the following integrable equation with both quadratic and cubic nonlinearity: $$\begin{aligned} m_t=bu_x+\frac{1}{2}k_1\left[ m(u^2-u^2_x)\right]_x+\frac{1}{2}k_2(2 m u_x+ m_xu), \quad m=u-u_{xx},\label{gCH}\end{aligned}$$ where $b$, $k_1$, and $k_2$ are three arbitrary constants. It is interesting to study multi-component integrable generalizations of peakon equations. For example, in [@OR; @CLZ; @Fa], the authors proposed two-component generalizations of the CH equation (\[CH\]) with $b=0$, and in [@GX; @SQQ], the authors presented two-component extensions of the cubic nonlinear equation (\[cCHN\]) and equation (\[cCHQ\]) with $b=0$. In this paper, we propose the following two-component system with cubic nonlinearity and linear dispersion $$\begin{aligned} \left\{\begin{array}{l} m_t=bu_{x}+\frac{1}{2}[m(uv-u_xv_x)]_x-\frac{1}{2}m(uv_x-u_xv), \\ n_t=bv_{x}+\frac{1}{2}[ n(uv-u_xv_x)]_x+\frac{1}{2} n(uv_x-u_xv), \\m=u-u_{xx},~~ n=v-v_{xx}, \end{array}\right. \label{eq}\end{aligned}$$ where $b$ is an arbitrary real constant. This system reduces to the CH equation (\[CH\]), the cubic CH equation (\[cCHQ\]), and the generalized CH equation (\[gCH\]) under the reductions $v=-2$, $v=2u$, and $v=k_1u+k_2$, respectively. Thus it is a two-component extension of equations (\[CH\]), (\[cCHQ\]), and (\[gCH\]) with a linear dispersive term. We prove the integrability of system (\[eq\]) by providing its Lax pair, bi-Hamiltonian structure, and infinitely many conservation laws. Geometrically, system (\[eq\]) describes pseudo-spherical surfaces, and thus it is also integrable in the geometric sense.
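The reductions $v=2u$ and $v=-2$ mentioned above are purely algebraic and can be checked symbolically. A minimal sketch (the helper name `rhs_m` is ours, not from the paper) verifies that the $m$-equation of the two-component system collapses to the cubic CH equation (\[cCHQ\]) and to the CH equation (\[CH\]):

```python
import sympy as sp

x, t, b = sp.symbols('x t b')
u = sp.Function('u')(x, t)
m = u - sp.diff(u, x, 2)

def rhs_m(uu, vv):
    """Right-hand side of the m-equation of the two-component system."""
    mm = uu - sp.diff(uu, x, 2)
    return (b * sp.diff(uu, x)
            + sp.Rational(1, 2) * sp.diff(mm * (uu * vv - sp.diff(uu, x) * sp.diff(vv, x)), x)
            - sp.Rational(1, 2) * mm * (uu * sp.diff(vv, x) - sp.diff(uu, x) * vv))

# v = 2u recovers the cubic CH equation (cCHQ): m_t = b u_x + [m(u^2 - u_x^2)]_x
cchq = b * sp.diff(u, x) + sp.diff(m * (u**2 - sp.diff(u, x)**2), x)
assert sp.simplify(rhs_m(u, 2 * u) - cchq) == 0

# v = -2 recovers the CH equation: m_t - b u_x + 2 m u_x + m_x u = 0
ch = b * sp.diff(u, x) - 2 * m * sp.diff(u, x) - sp.diff(m, x) * u
assert sp.simplify(rhs_m(u, -sp.Integer(2)) - ch) == 0
```

The same pattern checks the reduction $v=k_1u+k_2$ to equation (\[gCH\]).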
In the case of $b=0$ (dispersionless case), we show that this system admits the single-peakon traveling wave solution as well as multi-peakon solutions. In particular, the two-peakon dynamical system is explicitly solved and the interactions are investigated in detail. In the case of $b\neq0$ (dispersive case), we find that the two-component system (\[eq\]) possesses the weak kink solution. Moreover, by imposing the complex conjugate reduction $v=u^\ast$ on system (\[eq\]), we obtain a new integrable nonlinear Schrödinger type equation $$\begin{aligned} m_t=bu_{x}+\frac{1}{2}[m(|u|^2-|u_x|^2)]_x-\frac{1}{2}m(uu^\ast_x-u_xu^\ast), \quad m=u-u_{xx}, \label{nlseq}\end{aligned}$$ where the symbol $^\ast$ denotes the complex conjugate of a potential. The complex-valued $N$-peakon solution and weak kink solution for this nonlinear Schrödinger type system are also proposed. The whole paper is organized as follows. In section 2, a Lax pair, bi-Hamiltonian structure, as well as infinitely many conservation laws of equation (\[eq\]) are presented. In section 3, the geometric integrability of equation (\[eq\]) is studied. In section 4, the single-peakon, multi-peakon, and two-peakon dynamics are discussed for the case of $b=0$. Section 5 shows that equation (\[eq\]) possesses the weak kink solution for the case of $b\neq0$. Section 6 derives the peakon and kink solutions of the nonlinear Schrödinger type equation (\[nlseq\]).
{ "pile_set_name": "ArXiv" }
--- abstract: | The authors in their previous papers obtained compact, arbitrarily accurate expressions for two-center one- and two-electron relativistic molecular integrals expressed over Slater-type orbitals. In the present study, the accuracy limits of the given expressions are examined for three-center nuclear attraction integrals, which are the first set of integrals that do not have analytically closed-form relations. They are expressed through new molecular auxiliary functions obtained via the Neumann expansion of the Coulomb interaction. A numerical global adaptive method is used to evaluate these integrals for arbitrary values of orbital parameters and quantum numbers. Several methods, such as the Laplace expansion of the Coulomb interaction, single-center expansion, and the Fourier transformation method, have previously been used to evaluate these integrals, considering the values of the principal quantum numbers in the set of positive integers. This is the first attempt to study the three-center integrals without any restrictions on quantum numbers and in all ranges of orbital parameters. Keywords : PACS numbers : ... . author: - 'A. Ba[ğ]{}c[i]{}' - 'P. E. Hoggan' title: 'Benchmark values for molecular three-center integrals arising in the Dirac equation' --- \[sec:intro\]Introduction ========================= The LCAO-SCF [@Roothaan1951] method is generally employed for molecules, in which molecular wave functions are taken to be linear combinations of atomic basis functions, which should satisfy the cusp condition at the nuclei [@Kato1957] and decay exponentially at large distances [@Agmon1982].
This approach leads to the use of Slater-type orbitals [@Slater1930; @Parr1957], $$\begin{aligned} \label{eq:STSOs} \chi_{nlm} \left(\zeta,\vec{r}\right)= \frac{\left(2\zeta \right)^{n+1/2}}{\sqrt{\Gamma(2n+1)}}r^{n-1}e^{-\zeta r}Y_{lm}(\theta,\phi),\end{aligned}$$ here, $Y_{lm}$ are complex or real spherical harmonics $(Y^{*}_{lm}=Y_{l-m}; Y_{lm} \equiv S_{lm})$ which differ from the Condon$-$Shortley phase convention by the sign factor $(-1)^{m}$ [@CS1935; @Steinborn1978; @Blanco1997], $\Gamma(z)$ are gamma functions [@Abramowitz1972], and $\left\lbrace n, l, m \right\rbrace$ are the principal, orbital, and magnetic quantum numbers with $n \in \mathbb{R}^{+}$, $0\leq l \leq \lfloor n \rfloor$, $-l \leq m \leq l$, where $\lfloor n \rfloor$ stands for the integer part of $n$. These orbitals enter the one$-$ and two$-$electron multi$-$center molecular integrals, which need to be calculated to spectroscopic accuracy in order to allow meaningful discussion of basis-set expansion methods, Born-Oppenheimer energies, and vibrational frequency calculations. The difficulty of finding analytically closed-form relations for molecular integrals with more than two centers, referred to as *the bottleneck of quantum chemistry* [@Mulliken1959], has been the greatest obstacle, since Slater-type orbitals have no simple addition theorem; relations for products of two Slater-type orbitals centered at different positions are not available in compact form [@Bouferguene1998]. The Slater-type orbitals are obtained by simplification of the Laguerre functions in hydrogen$-$like orbitals [@Willock2009], keeping only the term with the highest power of $r$, for integer values of the principal quantum number $n$ (ISTOs), where $n \in \mathbb{N}^{+}$ and $\Gamma(2n+1)=(2n)!$; it has been proved that they provide extra flexibility for a closer variational description of atoms and molecules when the values of $n$ are taken from a more general set of numbers, namely the positive real numbers (NSTOs), where $n \in \mathbb{R}^{+}$.
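A quick numerical illustration of the definition above (a sketch of ours, not from the paper): the radial part of eq. (\[eq:STSOs\]) is unit-normalised for integer and noninteger $n$ alike, since $\int_0^\infty r^{2n}e^{-2\zeta r}\,dr=\Gamma(2n+1)/(2\zeta)^{2n+1}$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def sto_radial(n, zeta, r):
    """Radial part of the Slater-type orbital in eq. (eq:STSOs); valid for real n > 0."""
    return (2.0 * zeta) ** (n + 0.5) / np.sqrt(gamma(2.0 * n + 1.0)) \
        * r ** (n - 1.0) * np.exp(-zeta * r)

# normalisation holds for integer (ISTO) and noninteger (NSTO) principal quantum numbers
for n in (1.0, 2.0, 2.5, 3.7):
    norm, _ = quad(lambda r: sto_radial(n, 1.3, r) ** 2 * r ** 2, 0.0, np.inf)
    assert abs(norm - 1.0) < 1e-6
```

This is exactly the extra flexibility of NSTOs: nothing in the normalisation requires $n$ to be an integer.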
Studies on the evaluation of molecular integrals are thus performed in two main groups: those that restrict the principal quantum number to integer values, which are practically used in nonrelativistic molecular electronic structure calculations [@Bouferguene1996; @Rico2001], and those that free it from any such restriction but thereby reduce the area of application to the investigation of atoms [@Koga1997-1; @Koga1997-2; @Koga1997-3; @Koga1998; @Koga2000; @Erturk2015]. The multi$-$center molecular integrals over ISTOs can be evaluated by expansion of Slater-type orbitals through complete orthonormal basis functions at a new origin [@Barnett1951; @Harris1965; @Guseinov1978; @Guseinov2001; @Bouferguene2005] (see also references therein), $$\begin{gathered} \label{eq:WAVEEXPTHEO} \chi_{nlm}(\zeta,\vec r_{A})\\ =\lim_{N_{e} \to \infty}\sum_{n'l'm'}^{N_{e}} V_{nlm,n'l'm'}^{N_{e}}(\zeta,\vec R_{AB})\chi_{n'l'm'}(\zeta,\vec r_{B}).\end{gathered}$$ or by expressing them as a finite linear combination of $B$ functions through the Fourier transform [@Filter1978-1; @Filter1978-2; @Weniger1983; @Grotendorst1985; @Steinborn1992; @Homeier1992]. However, the infinite series representation formulas arising in the expansion method require the upper limit of summation to be increased as far as possible in order to converge to exact values with sufficient decimals (the threshold adopted for the total energy in nonrelativistic variational energy calculations is of order 1E$-$03 atomic units; the constituent matrix elements should therefore be accurate to 1E$-$10 atomic units), and the presence of spherical Bessel functions brings computational difficulties in the Fourier transform method since they are oscillatory [@Safouhi1999; @Safouhi2000; @Safouhi2003-1; @Safouhi2003-2].
[Figure: geometry of the three-center problem, showing the centers A, B, C and the field point $\vec{r}$, with position vectors $\vec{R}_{A}$, $\vec{R}_{B}$, $\vec{R}_{C}$ from the origin O and vectors $\vec{r}_{A}$, $\vec{r}_{B}$, $\vec{r}_{C}$ from each center to the field point.] The evaluation of multi$-$center integrals by the use of NSTOs is even more difficult, verging on the insurmountable. Slater-type orbitals with noninteger principal quantum numbers do not have infinite series representation formulas; they cannot be expanded via complete orthonormal basis functions, since power series for a function such as $z^\rho$, $z \in \mathbb{C}$ and $\rho \in \mathbb{R}/\mathbb{N}_{0}$, are not analytic at the origin [@Weniger2008; @Weniger2012], where the symbols $\mathbb{C}$, $\mathbb{R}$, $\mathbb{N}_{0}$ are used to denote the sets of complex numbers, real numbers, and natural numbers including zero, respectively.
--- abstract: 'In order to more effectively cope with the real-world problems of vagueness, impreciseness, and subjectivity, fuzzy discrete event systems (FDESs) were proposed recently. Notably, FDESs have been applied to biomedical control for HIV/AIDS treatment planning and sensory information processing for robotic control. Qiu, Cao and Ying independently developed the supervisory control theory of FDESs. We note that the controllability of events in Qiu’s work is fuzzy but the observability of events is crisp, and the observability of events in Cao and Ying’s work is also crisp, although the controllability is not completely crisp since the controllable events can be disabled to any degree. Motivated by the necessity to consider the situation that the events may be observed or controlled with some membership degrees, in this paper we establish the supervisory control theory of FDESs with partial observations, in which both the observability and controllability of events are fuzzy instead. We formalize the notions of fuzzy controllability condition and fuzzy observability condition. A Controllability and Observability Theorem of FDESs is set up in a more generic framework. In particular, we present a detailed computing flow to verify whether the controllability and observability conditions hold. Thus, this result can decide the existence of supervisors. Also, we use this computing method to check the existence of supervisors in the Controllability and Observability Theorem of classical discrete event systems (DESs), which is a new method, different from the classical case. A number of examples are elaborated on to illustrate the presented results.' author: - 'Daowen Qiu and Fuchun Liu[^1][^2][^3][^4]' title: 'Fuzzy Discrete Event Systems under Fuzzy Observability and a Test-Algorithm' --- Discrete event systems, fuzzy logic, observability, supervisory control, fuzzy finite automata.
Introduction ============ Discrete event systems (DESs) are dynamical systems whose evolution in time is governed by the abrupt occurrence of physical events at possibly irregular time intervals. Even though DESs are quite different from traditional continuous variable dynamical systems, they clearly involve objectives of control and optimization. A fundamental issue of supervisory control for DESs is how to design a controller (or supervisor), whose task is to enable and disable the controllable events such that the resulting closed-loop system obeys some prespecified operating rules \[1\]. Up to now, the supervisory control theory of DESs has been widely applied to many technological and engineering systems such as automated manufacturing systems, interaction telecommunication networks and protocol verification in communication networks \[2-9\]. In most engineering applications, the states of a DES are crisp. However, this is not the case in many other applications in complex systems such as biomedical systems and economic systems, in which vagueness, impreciseness, and subjectivity are typical features. For example, it is vague when a man’s bodily condition is said to be “good". Moreover, it is imprecise to say at what point exactly a man has changed from state “good" to state “poor". It is well known that fuzzy set theory, first proposed by Zadeh \[10\], is a good tool to cope with these problems. Indeed, up to now, fuzzy control systems have been well developed by many authors, and we may refer to \[11\] (and the references therein) for a survey of model-based fuzzy control systems. Notably, Lin and Ying \[12, 13\] recently initiated the study of [*fuzzy discrete event systems*]{} (FDESs) by combining fuzzy set theory \[14\] with classical DESs. FDESs have already been applied to biomedical control for HIV/AIDS treatment planning \[15, 16\] and decision making \[17\]. More recently, R.
Huq [*et al*]{} \[18, 19\] have proposed an intelligent sensory information processing technique using FDESs for robotic control in the field of mobile robot navigation. Just as Lin and Ying \[13\] pointed out, a comprehensive theory of FDESs still needs to be set up, including many important concepts, methods and theorems, such as controllability, observability, and optimal control. These issues have been partially investigated in \[20-23\]. It is worth mentioning that Qiu \[20\], and Cao and Ying \[21\] independently developed the supervisory control theory of FDESs. The similarity between the two theories is that the fuzzy systems considered in both \[20\] and \[21\] are modeled by max-min automata instead of the max-product automata adopted in \[13\], and the controllability theorem was established in their respective frameworks. However, there are great differences between them. For the purpose of control, the set of events in \[21\] is partitioned into two disjoint subsets of controllable and uncontrollable events, as is usually done in classical DESs, but the controllability of events is not completely crisp since the controllable events can be disabled by supervisors to any degree. In contrast with \[21\], the controllable set and uncontrollable set of events in \[20\] are two [*fuzzy subsets*]{} of the set of events. That is, each event not only belongs to the uncontrollable set but also belongs to the controllable set; only its degrees of belonging to those sets may be different. In particular, Qiu \[20\] presented an algorithm to check the existence of fuzzy supervisors for FDESs. As a continuation of the supervisory control under full observations \[20, 21\], this paper deals with the supervisory control of FDESs with fuzzy observations (generalizing partial observations).
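For readers unfamiliar with the max-min automata mentioned above, one transition composes the fuzzy state vector with a fuzzy event matrix by taking max over min. A minimal sketch with a hypothetical two-state example (the numbers are ours, not taken from \[20\] or \[21\]):

```python
def maxmin_step(state, event_matrix):
    """One transition of a max-min fuzzy automaton:
    s'(q') = max_q min(s(q), a(q, q'))."""
    n_to = len(event_matrix[0])
    return [max(min(state[q], event_matrix[q][qp]) for q in range(len(state)))
            for qp in range(n_to)]

# hypothetical fuzzy state vector s and fuzzy event matrix a over two states
s = [0.9, 0.3]
a = [[0.2, 0.8],
     [0.6, 0.1]]
print(maxmin_step(s, a))  # [0.3, 0.8]
```

The max-min composition keeps every entry within the membership degrees that produced it, which is why it is preferred over max-product composition in \[20, 21\].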
We notice that the observability in Qiu’s work \[20\] and Cao and Ying’s work \[21-23\] is [*crisp*]{}, that is, each fuzzy event is either completely observable or completely unobservable, although the controllability is fuzzy in \[20\] and not completely crisp in \[21-23\], where the controllable events can be disabled to any degree. However, in real-life situations, each event generally has a certain degree to which it is observable or unobservable, and also a certain degree to which it is controllable or uncontrollable. In fact, this idea of fuzziness of observability and controllability was originally proposed by Lin and Ying \[13\], and Qiu \[20\], and it has subsequently been applied to robot sensory information processing by Huq [*et al*]{} \[18, 19\]. For example, in the treatment process for a patient with cancer via either surgery or drug therapy \[24\], some treatments (events) can be clearly seen by supervisors (viewed as a group of physicians), while other therapies (such as some operations) may not be completely observed by supervisors. For another example, in order to provide state-based decision making for a physical agent in mobile robot control, Huq [*et al*]{} \[18, 19\] introduced the concept of state-based observability to interpret the degree of reliability of the sensory information used in constructing fuzzy event matrices. Motivated by the necessity to consider the situation that the events may be observed or controlled with some membership degrees, in this paper we establish the supervisory control theory of FDESs with partial observations, in which both the observability and the controllability of events are [*fuzzy*]{} instead. We formalize the notions of fuzzy controllability condition and fuzzy observability condition. A Controllability and Observability Theorem of FDESs is set up in a more generic framework.
In particular, we present a computing flow to verify whether the controllability and observability conditions hold, which can decide the existence of supervisors. Also, we apply this computing method to testing the existence of supervisors in the Controllability and Observability Theorem of classical DESs \[1\], which is a method different from the classical one \[1\]. The remainder of the paper is organized as follows. In the interest of readability, in Section II, we recall related notation and notions in the supervisory control theory of FDESs. In Section III, we establish a Controllability and Observability Theorem of FDESs. Section IV deals with the realization of supervisors in the theorem; we present a computing flow for testing the existence of supervisors. Also, we elaborate on a number of related examples to illustrate the presented results. Preliminaries ============= First we give some notation. ${\cal P}(X)$ denotes the power set of set $X$. A fuzzy subset of set $X$ is defined as a mapping from $X$ to $ [0,1]$. The set of all fuzzy subsets over $X$ is denoted as ${\cal F}(X)$. For two fuzzy subsets $\widetilde{A}$ and $\widetilde{B}$, $\widetilde{A}\subseteq \widetilde{B}$ stands for $\widetilde{A}(x) \leq \widetilde{B}(x)$ for any element $x$ of the domain. A nondeterministic finite automaton \[25\] is a system described by $ G=(Q,E,\delta,q_{0},Q_{m})$, where $Q$ is the finite set of states with the initial state $q_{0}$, $E$ is the finite set of events, $\delta: Q\times E\rightarrow {\cal P}(Q)$ is the transition relation, and $Q_{m}\subseteq Q$ is called the set of marked states. Each sequence over $E$ is called a [*string*]{}. $E^{*}$ denotes the set of all finite strings over $E$. For $u\in E^{*}$, $|u|$ denotes the length of $u$; if $|u|=0$, then $u$ is the empty string, denoted by $\epsilon$. A
--- abstract: 'Approximate inference is one of the fundamental research fields in machine learning. The two dominant theoretical inference frameworks in machine learning are variational inference (VI) and Markov chain Monte Carlo (MCMC). However, because of fundamental limitations in the theory, it is very challenging to improve existing VI and MCMC methods in both computational scalability and statistical efficiency. To overcome this obstacle, we propose a new theoretical inference framework called ergodic inference, based on the fundamental property of ergodic transformations. The key contribution of this work is to establish the theoretical foundation of ergodic inference for the development of practical algorithms in future work.' bibliography: - 'database.bib' nocite: '[@langley00]' --- Introduction {#sec:intro} ============ Statistical inference is the cornerstone of probabilistic modelling in machine learning. Research on inference algorithms always attracts great attention in the research community, because it is fundamentally important in the computation of Bayesian inference and deep generative models. The majority of research is focused on algorithmic development in two theoretical frameworks: variational inference (VI) and Markov chain Monte Carlo (MCMC). These two methods are significantly different. VI is an optimisation-based approach, in particular, which fits a simple distribution to a given target. In contrast, MCMC is a simulation-based approach, which sequentially generates asymptotically unbiased samples of an arbitrary target. Unfortunately, both VI and MCMC suffer from fundamental limitations. VI methods are in general biased because the density function of the approximate distribution must be in closed form. MCMC methods are also biased in practice because the Markov property limits the sample simulation to a local region of the sample space close to previous samples. However, VI is in general more scalable in computation.
Optimising the variational distribution and simulating samples in VI are computationally efficient and can be accelerated by parallelization on GPUs. In contrast, simulating Markov chains is computationally inefficient and, more importantly, asynchronous parallel simulation of multiple Markov chains has no effect on reducing sample correlations but multiplies the computation. Ergodic measure preserving flow (EMPF), introduced by [@DBLP:journals/corr/abs-1805-10377], is a recent novel optimisation-based inference method that overcomes the limitations of both MCMC and VI. However, there is no theoretical proof of the validity of EMPF. In this work, we will generalize EMPF to a novel inference framework called ergodic inference. In particular, the purpose of this work is to establish the theoretical foundation of ergodic inference. We list the key contributions of this work as follows: - The mathematical foundation of ergodic inference. (Section \[sec:ei\_principle\] and \[sec:ergodic\_transformation\]) - A tractable loss of ergodic inference and the proof of the validity of the loss. (Section \[sec:ergodic\_loss\]) - An ergodic inference model: deep ergodic inference networks (Section \[sec:deins\]) - Clarification of differences between ergodic inference, MCMC and VI (Section \[sec:deins\]) The background {#sec:background} ============== Convergence of probability measures is the foundation of statistical inference. The distance metric between probability measures is critical in the study of convergence. We will review the basics of distance metrics between probability measures and connect these metrics to the theoretical foundations of inference methods. Distance Metric of Probability Measures --------------------------------------- Total variation distance is fundamentally important in probability theory, because it defines the strongest convergence of probability measures.
Let $(\Omega, {\mathcal{F}})$ be a measure space, where $\Omega$ denotes the sample space and ${\mathcal{F}}$ denotes the collection of measurable subsets of $\Omega$. Given two probability measures $P$ and $Q$ defined on $(\Omega, {\mathcal{F}})$, the TV distance between $Q$ and $P$ is defined as $$\begin{aligned} {D_{\text{TV}}}(Q, P) = \sup_{A \in {\mathcal{F}}} \vert Q(A) - P(A) \vert.\end{aligned}$$ Convergence in TV, that is ${D_{\text{TV}}}(Q, P)=0$, means $Q$ and $P$ cannot be distinguished on any measurable set. The Kullback-Leibler (KL) divergence is an important measure of the difference between probability measures in statistical methods. For a continuous sample space $\Omega$, the KL divergence is defined as $$\begin{aligned} {D_{\text{KL}}}(Q \vert \vert P) = \int_{\Omega} dQ\log \frac{dQ}{dP},\end{aligned}$$ where $dP$ denotes the density of the probability measure. Approximate Monte Carlo Inference --------------------------------- The Monte Carlo method is the most popular simulation-based inference technique in probabilistic modelling. For example, to fit a probabilistic model ${\pi}$ by maximum likelihood estimation, it is essential to compute the gradient of the logarithm of the partition function $Z(\theta) = \int {\pi^*}(z)dz$. Given the unnormalised log-density function $\log{\pi^*}(z)$, computing the gradient becomes a problem of expectation estimation $$\partial_{\theta}\log Z(\theta) = {\mathbf{E}}_{{\pi}(z)}[\partial_{\theta}\log{\pi^*}(z)].$$ Monte Carlo methods allow us to construct an unbiased estimator of the expectation as $${\mathbf{E}}_{{\pi}(z)}[f(z)] = \lim_{N \rightarrow \infty}\frac{1}{N} \sum_{i=1}^N f(z_i),$$ where $z_i$ denotes samples from ${\pi}$. Unfortunately, it is intractable to generate samples from complex distributions, like the posterior distributions of model parameters or latent variables. Because of this challenge, approximate Monte Carlo inference is fundamentally important.
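For discrete distributions, the TV distance and KL divergence defined above are easy to compute, and one can also check numerically the standard Pinsker inequality ${D_{\text{TV}}}(Q, P) \le \sqrt{\tfrac{1}{2}{D_{\text{KL}}}(Q \vert \vert P)}$ relating them. A small sketch of ours:

```python
import math
import random

def tv(q, p):
    # D_TV(Q, P) = sup_A |Q(A) - P(A)| = (1/2) * sum_i |q_i - p_i|
    return 0.5 * sum(abs(a - b) for a, b in zip(q, p))

def kl(q, p):
    # D_KL(Q || P) = sum_i q_i * log(q_i / p_i), natural log
    return sum(a * math.log(a / b) for a, b in zip(q, p) if a > 0.0)

rng = random.Random(0)
for _ in range(1000):
    q = [rng.random() + 1e-3 for _ in range(5)]
    p = [rng.random() + 1e-3 for _ in range(5)]
    q = [x / sum(q) for x in q]
    p = [x / sum(p) for x in p]
    assert tv(q, p) <= math.sqrt(0.5 * kl(q, p)) + 1e-12
```

The bound is tight for nearby Bernoulli pairs, which is why a small numerical slack is included in the comparison.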
We will review the theoretical foundations of two important inference methods, variational inference (VI) and Markov chain Monte Carlo (MCMC), in the next two sections. Variational Inference {#sec:vi} --------------------- The theoretical foundation of VI is Pinsker’s inequality. Pinsker’s inequality states that the KL divergence gives an upper bound on the TV distance $$\begin{aligned} {D_{\text{TV}}}(Q, P) \le \sqrt{\tfrac{1}{2}{D_{\text{KL}}}(Q \vert \vert P)}.\end{aligned}$$ Given a parametric distribution $Q$ and the target distribution ${\pi}$, minimising the KL divergence ${D_{\text{KL}}}(Q \vert \vert {\pi})$ implies a smaller TV distance ${D_{\text{TV}}}(Q, {\pi})$. The key challenge of VI is how to construct the parametric family ${\mathcal{Q}}$ so that the estimation of the KL divergence is tractable and the family ${\mathcal{Q}}$ is expressive enough to approximate complex targets. This forces most VI methods to choose $Q$ with a closed-form density function. Otherwise, the estimation of the entropy term $\text{H}(Q)=-\int Q(dz) \log q(z)$ becomes challenging. In practice, the approximation family ${\mathcal{Q}}$ in most VI methods is rather simple, like Gaussian distributions, so the approximation bias due to an oversimplified $Q$ is the key issue of VI. However, a simple approximate family gives VI methods a great computational advantage in practice. First, the main loss function in VI is known as the evidence lower bound (ELBO) $$\begin{aligned} L_{\text{ELBO}} = \int_{\Omega} dQ\log \frac{d{\pi^*}}{dQ} \le \log \int d{\pi^*}.\end{aligned}$$ With an analytic form of the entropy of $Q$, the ELBO can be efficiently computed and optimized using a standard gradient descent algorithm. Second, simulating i.i.d. samples from a simple variational family $Q$ is straightforward and very efficient. Markov Chain Monte Carlo {#sec:mcmc} ------------------------ The theoretical foundation of Markov chain Monte Carlo (MCMC) is the ergodic theorem.
The ergodic theorem states that, given an ergodic Markov chain $(Z_n)$ with a stationary distribution ${\pi}$, the average across independent well-mixed chains is equivalent to the average along a single chain, that is $${\mathbf{E}}_{{\pi}}[f] = \lim_{M \rightarrow \infty}\frac{1}{M} \sum_{m=1}^{M} f(Z_{\infty}^{m}) = \lim_{N \rightarrow \infty}\frac{1}{N} \sum_{n=1}^{N} f(Z_n),$$ where $Z_{\infty}^{m}$ denotes a sample of a well-mixed Markov chain after infinitely many transitions. The ergodic theorem implies that we can generate unbiased samples from every Markov transition without waiting forever for the chains to reach the stationary state. Therefore, we can trade computational efficiency against a bias that decreases over time. The key challenge of MCMC methods is to define ergodic Markov chains with any given stationary distribution ${\pi}$. This challenge was first solved by the Metropolis-Hastings algorithm. We will discuss it in detail in Section \[sec:mh\]. Ergodic Markov chains enjoy strong stability. Irrespective of the distribution of the initial state $\mu(z_0)$ and the parameters of the Markov kernel $K(\cdot, \cdot)$, the distribution of the state of the chain is guaranteed to converge to the stationary distribution in total variation, moving closer after every transition. Formally, this means the TV distance to stationarity decreases for all $L \ge 0$: $$\begin{aligned} {D_{\text{TV}}}\left(Q_{L+1}, {\pi}\right) < {D_{\text{TV}}}\left(Q_{L}, {\pi}\right)\end{aligned}$$ where $Q_{L}$ denotes the distribution of the state of the chain after $L$ transitions.
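A minimal random-walk Metropolis-Hastings sketch (our illustration, not the construction used later in the paper) shows the ergodic theorem at work: long-run averages along a single chain recover expectations under the stationary target.

```python
import math
import random

def metropolis_hastings(log_target, x0, n_steps, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings: simulates an ergodic Markov chain
    whose stationary density is proportional to exp(log_target)."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)
        # accept with probability min(1, pi*(prop) / pi*(x)), symmetric proposal
        if math.log(max(rng.random(), 1e-300)) < log_target(prop) - log_target(x):
            x = prop
        samples.append(x)
    return samples

# target: standard normal, with unnormalised log-density -x^2 / 2
chain = metropolis_hastings(lambda x: -0.5 * x * x, x0=5.0, n_steps=50000)
kept = chain[10000:]                      # discard burn-in
mean = sum(kept) / len(kept)
var = sum((x - mean) ** 2 for x in kept) / len(kept)
```

Despite the deliberately bad initial state $x_0=5$, the time averages approach the stationary mean $0$ and variance $1$, illustrating the stability property stated above.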
--- abstract: 'We describe <span style="font-variant:small-caps;">Artemis</span> (Annotation methodology for Rich, Tractable, Extractive, Multi-domain, Indicative Summarization), a novel hierarchical annotation process that produces indicative summaries for documents from multiple domains. Current summarization evaluation datasets are single-domain and focused on a few domains for which naturally occurring summaries can be easily found, such as news and scientific articles. These are not sufficient for training and evaluation of summarization models for use in document management and information retrieval systems, which need to deal with documents from multiple domains. Compared to other annotation methods such as Relative Utility and Pyramid, <span style="font-variant:small-caps;">Artemis</span> is more tractable because judges don’t need to look at all the sentences in a document when making an importance judgment for one of the sentences, while providing similarly rich sentence importance annotations. We describe the annotation process in detail and compare it with other similar evaluation systems. We also present analysis and experimental results over a sample set of 532 annotated documents.' author: - | \ [**Rahul Jha**]{}$^\star$, [**Keping Bi**]{}$^\dag$, [**Yang Li**]{}$^{\star}$, [**Mahdi Pakdaman**]{}$^{\star}$\ [**Asli Celikyilmaz**]{}$^\star$, [**Ivan Zhiboedov**]{}$^\ddag$, [**Kieran McDonald**]{}$^{\star}$\ $^\star$ Microsoft Corporation\ $^\dag$ Umass Amherst\ $^\ddag$ Facebook Inc bibliography: - 'refs.bib' title: '<span style="font-variant:small-caps;">Artemis</span>: A Novel Annotation Methodology for Indicative Single Document Summarization' --- Introduction {#sec:intro} ============ Annotation Methodology {#sec:method} ====================== Related Work {#sec:rel} ============ Annotated Data Analysis {#sec:analysis} ======================= Experiments {#sec:experiments} =========== Concluding Remarks {#sec:conclusion} ==================
--- abstract: 'Out-of-plane screening (OPS) is expected to occur generally at metal-semiconductor interfaces, but this aspect has been overlooked in previous studies. In this paper we study the effect of OPS in electron-hole bilayer (EHBL) systems. The validity of the dipolar interaction induced by OPS is justified with an RPA calculation. The effect of OPS in an electron-hole liquid with close-by screening layers is studied. We find that OPS affects the electronic properties in the low-density and long-wavelength regime. The corresponding zero-temperature phase diagram is obtained within a mean-field treatment. We argue that our result is in general relevant to other heterostructures. The case of strongly correlated EHBLs is also discussed.' author: - Cheung Chan - 'T. K. Ng' title: Out of plane screening and dipolar interactions in heterostructures --- introduction ============ Modern micro-electronics relies to a large degree on surface science, which concerns the material properties near a surface or interface. To enhance the performance of such devices, knowledge of the electronic states near the interfaces is required. Near a surface or interface, electronic reconstruction may alter three key factors - interaction strengths, bandwidths and electron densities [@AMillis] - which determine electronic states and their properties. In this paper, we consider another factor - the modification of the form of the interaction between electrons. For instance, in an insulator-semiconductor-insulator superstructure, if the dielectric constant of the semiconductor is sizably larger than that of the insulator (barrier layer), the image charges induced at the semiconductor-insulator interface can substantially enhance the binding energy of the excitons confined in the semiconductor layer [@Xconfine95; @Xconfine92]. In this case, the electrons and holes do not interact via the usual Coulomb potential after the effect of the image charges at the semiconductor-insulator interface is taken into account.
Recently, Huang *et al.* observed non-activated electronic conductivity of a two-dimensional (2D) low-density hole system in a heterojunction insulated-gate field-effect transistor [@Nonact-transport]. Such non-activated conductivity is unexpected as, at low charge density, the strong Coulomb interaction is expected to crystallize the system (Wigner crystal), which is then pinned by disorder, resulting in insulating behavior and activated conductivity. Huang *et al.* attribute the behavior to the screening of Coulomb interactions by the metallic gate, which leads to destruction of the Wigner crystal phase. Physically, the metallic gate, which is located at a distance away from the 2D hole gas, provides out-of-plane screening (OPS) of the hole-hole interaction, resulting in an effective dipolar interaction between holes. Microscopically, when a charge is placed near a metal surface, an image charge of opposite sign will be induced at the surface to screen out the (static) electric field from the charge. From elementary electrostatics, the system can be described equivalently as a dipole formed by the charge and its image charge, and the interaction between two charges located near the interface changes from a Coulomb potential $\sim1/r$ to a dipolar potential $\sim1/r^{3}$. This modified interaction, which is generally expected to exist in metal-semiconductor heterostructures, can change the electronic properties near the interface. Surprisingly, there has been no detailed theoretical study of this effect on electronic properties until recently [@HoLH]. The neglect of OPS might be due to dynamical screening of in-plane charges [@HoLH]. For high charge density, the screening can effectively reduce both Coulomb and dipolar interactions to short-range interactions. However, for low-charge-density electronic liquids, in-plane screening is less effective and OPS can lead to a difference, as observed by Huang *et al.* [@Nonact-transport]. 
In this paper, we study how OPS affects the electronic properties in systems with two layers of charges of opposite sign, i.e. the 2D electron-hole bilayer (EHBL) system. We shall study how OPS affects Wigner crystallization and exciton condensation in the system [@Nonact-transport; @exciton-cond] and will also comment on the effect of OPS at interfaces between metals and strongly correlated electron systems [@thin_film_on_metal; @YBCO_metal_interface1; @YBCO_metal_interface2]. OPS and effective interaction between charges ============================================= ![\[fig:EHBL-OPS\] (a) EHBL system separated by distance $b$. (b) EHBL with OPS by metallic plates in both layers. Dotted line represents metallic interface, separated from the main layer by a distance of $a/2$. (c) Similar to (b) but with only one OPS layer. (d) Charges (black dots) and screening charge response at the metal interface (grey patches). (e) Effective image charge (grey dots) and effective interactions $V^{\text{intra}}$ and $V^{x}$. (f) Charges attract when they are aligned and repel when they are not. This behavior is different from the Coulomb potential, which is always attractive for an electron-hole pair. The repulsive behavior inhibits exciton pairing.](EHBL-OPS){width="0.9\columnwidth"} In this section we provide the details of the EHBL systems we study and the corresponding OPS effective interaction. We shall assume that the only effect of the metallic screening layers is to provide an image charge for point charges sitting close to them, and the effective interaction between charges will be derived from the image charge picture. The validity of this approximation is limited by the plasma frequency $\omega_{p}^{(s)}$ of the screening layer, above which the screening layer cannot respond rapidly to the charge fluctuations. 
Thus our approximation is valid when the plasma frequency of the EHBL layer $\omega_{p}$ is much less than $\omega_{p}^{(s)}$, or when the screening layer has a density of electric charge much larger than the charge density of the EHBL layers we consider. The image charge picture can be justified by a Random Phase Approximation (RPA) calculation, which is shown in the Appendix. Starting with an EHBL system (Fig.\[fig:EHBL-OPS\](a)), two metallic screening layers can be added as shown in Fig.\[fig:EHBL-OPS\](b), or a single metallic screening layer can be added as shown in Fig.\[fig:EHBL-OPS\](c). We first consider the two-layer case (b). Fig.\[fig:EHBL-OPS\](d) depicts the charge response in the metallic layer to a nearby charge. The charge response is assumed to be an image charge, which carries a charge of opposite sign and equal magnitude and is centered at distance $a$ from the point charge. Thus the point charge and the screening charge together form a dipole. We have assumed that the distance $b$ between the two layers of charges is sufficiently larger than $a$ ($b\gg a$) that the presence of the other screening layer does not affect the simple dipole picture. In this case, the intralayer interaction between two charges located in an OPS layer (Fig.\[fig:EHBL-OPS\](e)) is, in real space, $$V^{\mathrm{intra}}(\vec{r})=\frac{e^{2}}{\epsilon_{e,h}}\left(\frac{1}{r}-\frac{1}{\sqrt{r^{2}+a^{2}}}\right)\;,$$ where $r$ is the charge-charge distance within the charge plane. It is easy to see that for $r\gg a$, $V^{\mathrm{intra}}$ scales as $1/r^{3}$, while for $r\ll a$ it follows the usual Coulomb scaling $1/r$. 
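The crossover between the two regimes can be verified numerically. The following sketch (illustrative only; units are chosen so that $e^{2}/\epsilon_{e,h}=1$ and $a=1$) confirms the $1/r$ limit for $r\ll a$ and the $a^{2}/2r^{3}$ limit that follows from expanding the square root for $r\gg a$:

```python
import numpy as np

def v_intra(r, a=1.0):
    """Intralayer potential (units e^2/eps_{e,h} = 1): 1/r - 1/sqrt(r^2 + a^2)."""
    return 1.0 / r - 1.0 / np.sqrt(r ** 2 + a ** 2)

# r << a: Coulomb regime, V ~ 1/r
print(v_intra(1e-3) * 1e-3)     # ~ 1
# r >> a: dipolar regime, expanding the square root gives V ~ a^2 / (2 r^3)
print(v_intra(1e3) * 1e3 ** 3)  # ~ a^2/2 = 0.5
```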
By using 2D Fourier transform $\frac{1}{\sqrt{r^{2}+a^{2}}}\overset{\mathrm{2D}\mathcal{F}}{\longrightarrow}\frac{2\pi}{k}e^{-ka}$, the Fourier transformed interaction is $$V^{\mathrm{intra}}(\vec{k})=\frac{2\pi e^{2}}{\epsilon_{e,h}k}\left(1-e^{-ka}\right)\;.\label{v2}$$ For an electron and a hole sitting in different layers, the interlayer interaction is $$\begin{aligned} V^{x,2}(\vec{r}) & = & -\frac{e^{2}}{\epsilon_{x}}\left(\frac{1}{\sqrt{r^{2}+b^{2}}}\right.\\ & & \left.-\frac{2}{\sqrt{r^{2}+(a+b)^{2}}}+\frac{1}{\sqrt{r^{2}+(2a+b)^{2}}}\right)\end{aligned}$$ and its Fourier counterpart is $$V^{x,2}(\vec{k})=-\frac{2\pi e^{2}}{\epsilon_{x}k}e^{-kb}(1-e^{-ka})^{2}\;.\label{v3}$$ $\epsilon_{e,h}$ and $\epsilon_{x}$ are the intra-layer and inter-layer dielectric constants, respectively. Next we consider EHBL with only one metallic screening layer (see Fig.\[fig:EHBL-OPS\](c)). In this case the two layers of charges have distance $a/2+b$ (layer $1$) and $a/2$ (layer $2$) from the screening layer, respectively. The intralayer interactions are thus $$\begin{aligned} V_{1}^{\mathrm{intra}}(\vec{r}) & = & \frac{e^{2}}{\epsilon_{1}}\left(\frac{1}{r}-\frac{1}{\sqrt{r^{
--- abstract: 'We establish Hardy–Littlewood inequalities for the Heckman–Opdam transform associated to a general root datum $({\mathfrak{a}},\Sigma,m)$ that generalizes an analogous result for the spherical Fourier transform on a Riemannian symmetric space of the non-compact type due to Eguchi and Kumahara. In particular we obtain a more precise Hausdorff–Young inequality that generalizes a recent result due to Narayanan, Pasquale, and Pusti.' address: | Mathematisches Seminar\ Chr.-Albrechts-Universität zu Kiel\ Ludewig-Meyn-Str. 4, DE-24098 Kiel\ Germany author: - Troels Roussau Johansen title: 'Hardy–Littlewood inequalities for the Heckman–Opdam transform' --- Introduction ============ The classical Hausdorff–Young inequality $\|{\hat{f}}\|_q\leq c_p\|f\|_p$, $1\leq p\leq 2$, $\frac{1}{p}+\frac{1}{q}=1$, for the Euclidean Fourier transform can be viewed as a partial extension of the Plancherel theorem to $L^p$-functions. More generally, the Fourier transform extends to a continuous mapping from $L^p({\mathbb{R}}^n)$ into the Lorentz space $L^{p',p}({\mathbb{R}}^n)$, a result that is due to Paley. A variation on this theme is provided by the Hardy–Littlewood inequality, which may be stated as follows: Let $f$ be a measurable function on ${\mathbb{R}}^n$ such that $x\mapsto f(x)\|x\|^{n(1-2/q)}$ belongs to $L^q({\mathbb{R}}^n)$, where $q\geq 2$. 
Then $f$ has a well-defined Fourier transform ${\hat{f}}$ in $L^q({\mathbb{R}}^n)$ and there exists a positive constant $A_q$ independent of $f$ such that $$\label{HL-ineq1} \Bigl(\int_{{\mathbb{R}}^n}\vert{\hat{f}}(\xi)\vert^qd\xi\Bigr)^{1/q}\leq A_q\Bigl(\int_{{\mathbb{R}}^n}\vert f(x)\vert^q\|x\|^{n(q-2)}dx\Bigr)^{1/q}.$$ By duality and general properties of the Fourier transform, one has the following equivalent formulation: For every $p\in(1,2)$ there exists a positive constant $B_p$ independent of $f$ such that $$\label{HL-ineq2} \Bigl(\int_{{\mathbb{R}}^n}|{\hat{f}}(\xi)|^p|\xi|^{n(p-2)}\,d\xi\Bigr)^{1/p}\leq B_p\Bigl(\int_{{\mathbb{R}}^n}|f(x)|^p\,dx\Bigr)^{1/p}.$$ An analogue of for the spherical transform on a Riemannian symmetric space $G/K$ was obtained by Eguchi and Kumahara in [@Eguchi-Kumahara Theorem 1, Section 5]: \[thm.1\] Let $q\geq 2$. The spherical Fourier transform can be defined for $K$-invariant functions $f$ on $G/K$ with the property that $f\cdot\sigma^{n(1-2/q)}\Omega^{1-2/q}$ belongs to $L^q(K\setminus G/K)$, and there exists a positive constant $A_q$ that is independent of $f$ such that $$\label{ineq-HL-GK} \Bigl(\frac{1}{\vert W\vert}\int_{{\mathfrak{a}}^*}\vert\widetilde{f}(\lambda)\vert^q\,\vert\mathbf{c}(\lambda)\vert^{-2}\,d\lambda\Bigr)^{1/q} \leq A_q\Bigl(\int_G\vert f(x)\vert^q\sigma(x)^{n(q-2)}\Omega(x)^{q-2}\,dx\Bigr)^{1/q}$$ for all $f\in\mathcal{S}(K\setminus G/K)$. Here $\sigma(x)=\left<X,X\right>^{1/2}$ where $\left<\cdot,\cdot\right>$ is the Cartan–Killing form and $G\ni x=k\exp X\in K\times\mathfrak{p}$, and $\Omega(\exp H)=c\prod_{\alpha\in\Sigma}\vert\sinh\alpha(H)\vert^{m(\alpha)}$, $H\in\mathfrak{a}$, the usual weight and $\mathcal{S}(K\setminus G/K)$ an $L^2$-based Schwartz space of $K$-invariant functions on $G/K$. An interpolation argument leads to an analogous statement for exponents below $2$: \[thm.2\] Let $p\in(1,2]$ and $\frac{1}{p}+\frac{1}{q}=1$. 
Let $r\in[p,q]$ and set $\mu=\frac{1}{r}+\frac{1}{q}-1=\frac{1}{r}-\frac{1}{p}$. Then there exists a positive constant $B_r$ independent of $f$ such that $$\Bigl(\frac{1}{|W|}\int_{{\mathfrak{a}}^*}\vert\widetilde{f}(\lambda)\vert^q\vert\mathbf{c}(\lambda)\vert^{-2}\,d\lambda\Bigr)^{1/q} \leq B_r\Bigl(\int_G\vert f(x)\vert^r\sigma(x)^{-n\mu r}\Omega(x)^{-\mu r}\,dx\Bigr)^{1/r}$$ for all $f$ satisfying $f\cdot\sigma^{-n\mu}\Omega^{-\mu}\in L^r(K\setminus G/K)$. It was remarked in the MathSciNet review by Michael Cowling that one could simplify the proof of Eguchi and Kumahara by means of more refined interpolation techniques. These were later incorporated in [@Mohanty-II] where the authors established an analogue of for the Helgason–Fourier transform on a noncompact Riemannian symmetric space of rank one: It holds that $$\label{eqn.MRSS} \int_{{\mathfrak{a}}^*} \|\widetilde{f}(\lambda,\cdot)\|_{L^1(K)}|\lambda|^{p-2}(1+|\lambda|)^{-(m_\gamma+m_{2\gamma})}|\mathbf{c}(\lambda)|^{-2}\,d\lambda\leq C\|f\|_p^p$$ for $1<p<2$. According to [@Mohanty-II Remark 4.6], their method also works for higher rank spaces. While we share this sentiment, it turns out to be slightly involved to fill in the necessary details. One may also object that the appearance of the average over $K$ is not natural. A different version was recently obtained in [@Ray-Sarkar-trans], to which we shall return later. A further drawback of is that for $p=2$ it does not resemble the Parseval identity, and section \[sec.HY-ineqs\] opens with the observation that the analogue of for the Heckman–Opdam transform, or even just the Jacobi transform in rank one, does not hold for arbitrary non-negative root multiplicities. We also wish to emphasize a quantitative difference between and : In the first inequality a weight is introduced on the function-side, whereas the second inequality incorporates a weight on the Fourier transform side. 
Theorem \[thm.1\] and Theorem \[thm.2\] therefore resemble , whereas resembles . It is the purpose of the present paper to obtain analogues of and for the Heckman–Opdam transform associated to a triple $({\mathfrak{a}},\Sigma,m)$, where ${\mathfrak{a}}$ is a Euclidean $n$-dimensional vector space, $\Sigma$ a root system in ${\mathfrak{a}}^*$ and $m$ a positive multiplicity function. In order to place the contributions of the present paper in perspective, the reader is reminded that some classical aspects of the $L^2$-theory for hypergeometric Fourier analysis in root systems (that is, Plancherel and Paley–Wiener theorems and an inversion formula) were already obtained in [@Opdam-acta], whereas the $L^p$-analysis is much more recent. As far as we can ascertain, the first decisive contribution was given in the recent publication [@Narayanan-Pasquale-Pusti], and the results we obtain should be seen as natural contributions to the general theme of classical harmonic analysis in a root system framework. The details pertaining to harmonic analysis in root systems will be presented in section \[sec.root1\]. There are several standard references, but we follow closely the presentation in [@Narayanan-Pasquale-Pusti] as far as the Heckman–Opdam theory is concerned. Section \[sec.root1\] also summarizes the interpolation theorems for Lorentz spaces. An immediate consequence is a generalized Hausdorff–Young inequality of Paley type. Section \[sec.HY-ineqs\] presents several versions of the Hardy–Littlewood inequality for the Heckman–Opdam transforms, corresponding to different weights. The last section briefly outlines a generalization of the Eguchi–Kumahara result for the Cartan motion groups. One can introduce a ‘flat
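The classical Euclidean inequality labelled (HL-ineq1) in the introduction can be sanity-checked on a concrete test function. The sketch below (an illustration only, not part of the paper's argument) takes $n=1$, $q=4$ and a Gaussian $f$, with the non-unitary convention ${\hat{f}}(\xi)=\int f(x)e^{-ix\xi}\,dx$; both sides are then finite, and the ratio evaluates in closed form to $2\sqrt{\pi}$ for this particular $f$, so any admissible constant $A_4$ must be at least that large (for this convention):

```python
import numpy as np
from scipy.integrate import quad

# n = 1, q = 4, Gaussian test function; non-unitary convention
# fhat(xi) = \int f(x) e^{-i xi x} dx, so fhat(xi) = sqrt(2 pi) e^{-xi^2/2}.
q = 4
f = lambda x: np.exp(-x ** 2 / 2.0)
fhat = lambda xi: np.sqrt(2.0 * np.pi) * np.exp(-xi ** 2 / 2.0)

lhs_q, _ = quad(lambda xi: abs(fhat(xi)) ** q, -np.inf, np.inf)
rhs_q, _ = quad(lambda x: abs(f(x)) ** q * abs(x) ** (q - 2), -np.inf, np.inf)

ratio = (lhs_q / rhs_q) ** (1.0 / q)  # both sides finite; equals 2*sqrt(pi) here
print(ratio)
```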
--- author: - Tianxing Ma - Fan Yang - Zhongbing Huang - 'Hai-Qing Lin' title: 'Triplet $p$-wave pairing correlation in low-doped zigzag graphene nanoribbons' --- Introduction {#introduction .unnumbered} ============ Triplet superconductivity (SC) has been a focus of modern condensed matter physics because of its possible connection to topological quantum information and computation[@Kitaev2001; @Alicea2012; @PhysRevLett.107.217001; @Mourik1003; @Deng2012; @Rokhinson2012; @PhysRevLett.109.056803; @Anindya2012; @PhysRevB.87.241401; @PhysRevA.82.053611]. It has been proposed that a gapless Majorana bound state would localize at the end of a one-dimensional spinless $p$-wave superconductor[@Kitaev2001], which could be used to practically realize topological quantum computation[@Kitaev2003; @RevModPhys.80.1083]. To realize such a Majorana bound state in real materials, the superconducting proximity effect was proposed [@PhysRevLett.105.077001; @PhysRevLett.106.127001; @PhysRevLett.105.227003], and experimental evidence of its existence was recently reported[@Perge602]. Here, we explore the possibility of intrinsic triplet SC, which is generated by electronic correlations. In this paper, we reveal a possible edge-spin triplet $p$-wave superconducting pairing correlation in slightly doped zigzag graphene nanoribbons with appropriate interactions. Graphene, a single layer of carbon, has generated immense interest ever since its experimental discovery[@Novoselov666; @RevModPhys.81.109]. Recently, experimental advances in doping methods have made it possible to change the type of carriers (electrons or holes)[@PhysRevLett.104.136803; @PhysRevLett.105.256805], opening the door to exotic phases, such as SC and magnetism induced by repulsive interactions. 
For instance, it was shown by the two-stage renormalization-group calculation that unconventional SC is induced by weak repulsive interactions in honeycomb Hubbard models that are away from half-filling[@PhysRevB.81.224505], and that a topological $d+id$ SC is induced in a heavily doped system[@PhysRevB.75.134512; @PhysRevB.78.205431; @PhysRevB.81.085431; @PhysRevB.86.020507; @PhysRevB.84.121410; @PhysRevB.85.035414; @Nandkishore2012]. At graphene edges the density of states may be peaked due to the presence of edge-localized states close to the Fermi level[@PhysRevLett.106.226401]. Especially at extended zigzag edges this leads to a phenomenon called edge magnetism, for which various theories [@PhysRevB.80.245436; @PhysRevB.91.075410; @LiJPCM2016] predict ferromagnetic (FM) intraedge and antiferromagnetic (AFM) interedge correlations. [ The proposed magnetism is similar to the flat-band ferromagnetism appearing in the orbital-active optical honeycomb lattice[@PhysRevA.82.053618], where the band flatness dramatically amplifies the interaction effect, driving the ferromagnetic transition even with a relatively weak repulsive interaction]{}. From these discoveries, a question naturally arises: is there triplet SC mediated by the FM spin correlations on each edge in doped zigzag graphene nanoribbons? ![(Color online) A piece of a honeycomb lattice displaying zigzag edges with $L_y=4$, which defines the width of the ribbon in the transverse direction, and $L_x$=12, which defines the length in the longitudinal direction. The lattice sites at the zigzag edge are drawn much larger than the sites in the bulk, indicating that the charge carriers are moving along the edge. []{data-label="Fig:Structure"}](Fig1) ![(Color online) The carrier distribution (a) as a function of the site index at $U=2.0t$ and (b) from edge $\rightarrow$ bulk $\rightarrow$ edge with different $U$. It is clear that most charge carriers are distributed along the edge. 
[]{data-label="Fig:ndistrubution"}](Fig2) ![(Color online) Band structure (a) and DOS (b) of a six-chain nanoribbon system. Note that the flat band bottom, located at approximately $-0.2t$ in (a), leads to the DOS peak in (b). The Fermi level of the half-filled system is marked by the red dashed lines in both figures. []{data-label="Fig:band"}](Fig3) In the present work, we establish the $p$-wave superconducting pairing correlation at the edges of zigzag graphene nanoribbons by using combined random-phase approximation (RPA)[@RevModPhys.84.1383; @PhysRevB.69.104504; @JPSJ; @Graser2009; @PhysRevB.75.224509; @PhysRevLett.101.087004; @PhysRevLett.111.066804; @Srep0820], the finite-temperature determinant quantum Monte Carlo (DQMC)[@PhysRevD.24.2278; @PhysRevB.31.4403; @MaAPL2010; @PhysRevLett.110.107002; @PhysRevB.94.075106] and the ground-state constrained-path quantum Monte Carlo (CPQMC)[@PhysRevLett.74.3652; @PhysRevB.55.7464; @PhysRevB.84.121410; @WuEPL2013; @MaEPL2015] methods. Our unbiased results show that both the ferromagnetic spin correlation and the effective $p$-wave superconducting pairing correlation are greatly enhanced as the interaction increases. Results {#results .unnumbered} ======= The ribbon geometry considered here is depicted in Fig. \[Fig:Structure\], in which the blue and white circles represent sublattices A and B, respectively, and the transverse integer index $1,2, . . . ,L_y$ defines the width of the ribbon while $1,2, . . . ,L_x$ at the zigzag edge defines the length. Assuming the ribbon to be infinite in the $x$ direction but finite in the $y$ direction, we produce a graphene nanoribbon with zigzag edges. In the following studies, the interaction $U$ is introduced through the standard Hubbard model. In Fig.\[Fig:ndistrubution\], the carrier distribution (a) as a function of the site index at $U=2.0t$ and (b) from edge $\rightarrow$ bulk $\rightarrow$ edge with different interactions is shown. 
It is clear that most charge carriers are distributed along the edge, and increasing the interaction pushes more charge carriers to the edges. The band structure of a six-chain nanoribbon system is shown in Fig. \[Fig:band\](a). Here, as the system is periodic in the $x$-direction, the momentum $k_x$ is a good quantum number. From Fig. \[Fig:band\](a), one finds a flat band bottom with energies located near the Fermi level ($\approx-0.2t$) of the half-filled system. Physically, such a flat band bottom is caused by the edge states, which leads to the DOS peak at approximately $-0.2t$ shown in Fig. \[Fig:band\](b). ![(Color online) (a) The largest eigenvalue $\chi(q_x)$ of the susceptibility matrix $\chi^{(0)}_{l,m}\left(q_x\right)$ as a function of $q_x$ for three different dopings, i.e., $\mu=-0.195t$ ($\delta=3.6\%$), $\mu=-0.2t$ ($\delta=3.0\%$) and $\mu=-0.205t$ ($\delta=0.8\%$) for the 6-chain system near half-filling. (b) Sketch of the pattern of the dominating spin fluctuations for $\mu=-0.2t$, as determined by the eigenvector of $\chi^{(0)}_{l,m}\left(q_x=0\right)$ corresponding to its largest eigenvalue. []{data-label="Fig:magnetic"}](Fig4) RPA study {#rpa-study .unnumbered} --------- Guided by the idea that triplet SC may be mediated by the strong FM spin fluctuations in the system, we performed an RPA-based study on the possible pairing symmetries of the system. The multi-orbital RPA approach[@RevModPhys.84.1383; @PhysRevB.69.104504; @JPSJ; @Graser2009; @PhysRevB.75.224509; @PhysRevLett.101.087004; @PhysRevLett.111.066804; @Srep0820], which is a standard and effective approach in the weak-coupling limit, is applied in our study for small $U$ ($<0.01t$). Various bare susceptibilities of this system are defined as $$\begin{aligned} \chi^{(0)l_{1},l_{2}}_{l_{3},l_{4}}\left(q_x,\tau\right)\equiv \frac{1}{N}\sum_{k_{1},k_{2}}\left<T_{\tau}c^{\dagger}_{l_{1}}(k_{1},\tau) c_{l_{2}}(k_{
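The existence of the flat edge band discussed above can be reproduced from the textbook nearest-neighbor tight-binding model of a zigzag ribbon. The sketch below is a noninteracting ($U=0$) illustration, not the authors' RPA/QMC calculation: the basis ordering, the $2t\cos(k/2)$ intrachain factor, and the unit longitudinal lattice constant are standard conventions assumed here, and in this particle-hole-symmetric toy the edge band sits at $E=0$ rather than at $-0.2t$.

```python
import numpy as np

def zigzag_h(k, n_chains=6, t=1.0):
    """Bloch Hamiltonian of a nearest-neighbor tight-binding zigzag ribbon.

    Basis ordering (A_1, B_1, ..., A_N, B_N): each intrachain A_n-B_n bond
    carries the factor 2 t cos(k/2) (longitudinal lattice constant = 1),
    while interchain B_n-A_{n+1} bonds carry t.
    """
    dim = 2 * n_chains
    h = np.zeros((dim, dim))
    for n in range(n_chains):
        h[2 * n, 2 * n + 1] = 2.0 * t * np.cos(k / 2.0)  # A_n - B_n
        if n < n_chains - 1:
            h[2 * n + 1, 2 * n + 2] = t                  # B_n - A_{n+1}
    return h + h.T

# At k = pi the intrachain factor vanishes, leaving the outermost A_1 and B_N
# sites decoupled: two exact zero-energy edge modes. At k = 0 the spectrum is gapped.
e_pi = np.sort(np.abs(np.linalg.eigvalsh(zigzag_h(np.pi))))
e_0 = np.sort(np.abs(np.linalg.eigvalsh(zigzag_h(0.0))))
print(e_pi[:2], e_0[0])
```

Sweeping $k$ between $2\pi/3$ and $\pi$ traces out the nearly flat edge band responsible for the DOS peak.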
--- abstract: | We prove some normality criteria for a family of meromorphic functions under a condition on differential polynomials generated by the members of the family. *Keywords:* Meromorphic function, Normal family, Nevanlinna theory. Mathematics Subject Classification 2010: 30D35. --- SOME NORMALITY CRITERIA OF MEROMORPHIC FUNCTIONS and Nguyen Van Thin$^c$ [$^{a}$ Université de Brest, LMBA, UMR CNRS 6205,\ 6, avenue Le Gorgeu - C.S. 93837, 29238 Brest Cedex 3, France ]{} [$ ^{b}$ Department of Mathematics, Hanoi National University of Education,\ 136 Xuan Thuy Street, Cau Giay, Hanoi, Vietnam]{} [$ ^{c}$ Department of Mathematics, Thai Nguyen University of Education,\ Luong Ngoc Quyen Street, Thai Nguyen City, Vietnam]{} Introduction ============ Let $D$ be a domain in the complex plane $\C$ and $\mathcal F$ be a family of meromorphic functions in $D.$ The family $\mathcal F$ is said to be normal in $D,$ in the sense of Montel, if for any sequence $\{f_v\}\subset \mathcal F,$ there exists a subsequence $\{f_{v_i}\}$ such that $\{f_{v_i}\}$ converges spherically locally uniformly in $D,$ to a meromorphic function or $\infty.$\ In 1989, Schwick proved: [**Theorem A** ]{}([@Sch], Theorem 3.1)[**.**]{} [*Let $k, n$ be positive integers such that $n\geq k+3.$ Let $\mathcal F$ be a family of meromorphic functions in a complex domain $D$ such that for every $f\in\mathcal F,$ $(f^n)^{(k)}(z)\ne 1$ for all $z\in D.$ Then $\mathcal F$ is normal on $D.$*]{} [**Theorem B** ]{}([@Sch], Theorem 3.2)[**.**]{} [*Let $k, n$ be positive integers such that $n\geq k+1.$ Let $\mathcal F$ be a family of entire functions in a complex domain $D$ such that for every $f\in\mathcal F,$ $(f^n)^{(k)}(z)\ne 1$ for all $z\in D.$ Then $\mathcal F$ is normal on $D.$*]{} The following normality criterion was established by Pang and Zalcman [@PZ] in 1999: [**Theorem C**]{} ([@PZ])[**.**]{} [*Let $n$ and $k$ be natural numbers and $\mathcal F$ be a family of 
holomorphic functions in a domain $D$ all of whose zeros have multiplicity at least $k.$ Assume that $f^nf^{(k)}-1$ is non-vanishing for each $f\in\mathcal F.$ Then $\mathcal F$ is normal in $D.$*]{}\ The main purpose of this paper is to establish some normality criteria for the case of more general differential polynomials. Our main results are as follows: \[Th1\] Take $q \;(q\geq1)$ distinct nonzero complex values $a_1,\dots,a_q,$ and $q$ positive integers (or $+\infty$) $\ell_1,\dots\ell_q.$ Let $n$ be a nonnegative integer, and let $n_1,\dots,n_k, t_1,\dots,t_k$ be positive integers ($k\geq 1$). Let $\mathcal F$ be a family of meromorphic functions in a complex domain $D$ such that for every $f\in\mathcal F$ and for every $m\in\{1,\dots,q\},$ all zeros of $f^n(f^{n_1})^{(t_1)}\cdots(f^{n_k})^{(t_k)}-a_m$ have multiplicity at least $\ell_m.$ Assume that $ a)\quad n_j\geq t_j \text{\;for all \;} 1\leqslant j\leqslant k,\;\text{ and\;} \ell_i\geq 2 \text{\;for all\;} 1\leqslant i\leqslant q,$\ $ b)\quad \sum_{i=1}^q\frac{1}{\ell_i}<\frac{ qn-2+\sum_{j=1}^kq(n_j-t_j)}{n+\sum_{j=1}^k(n_j+t_j)}.$\ Then $\mathcal F$ is a normal family. Taking $q=1$ and $\ell_1=+\infty,$ we get the following corollary of Theorem \[Th1\]: \[H1\] Let $a$ be a nonzero complex value, let $n$ be a nonnegative integer, and $n_1,\dots,n_k,t_1,\dots,t_k$ be positive integers. Let $\mathcal F$ be a family of meromorphic functions in a complex domain $D$ such that for every $f\in\mathcal F,$ $f^n(f^{n_1})^{(t_1)}\cdots(f^{n_k})^{(t_k)}-a$ is nowhere vanishing on $D.$ Assume that $a)$ $n_j\geq t_j \text{\;for all \;} 1\leqslant j\leqslant k,$ $b)$ $n+\sum_{j=1}^kn_j\geq 3+\sum_{j=1}^kt_j.$\ Then $\mathcal F$ is normal on $D.$ We remark that in the case where $n\geq 3,$ condition $a)$ in the above corollary implies condition $b);$ and in the case where $n=0$ and $k=1,$ Corollary \[H1\] gives Theorem A. 
For the case of entire functions, we shall prove the following result: \[Th2\] Take $q \;(q\geq1)$ distinct nonzero complex values $a_1,\dots,a_q,$ and $q$ positive integers $($or $+\infty)$ $\ell_1,\dots\ell_q.$ Let $n$ be a nonnegative integer, and let $n_1,\dots,n_k, t_1,\dots,t_k$ be positive integers $(k\geq 1).$ Let $\mathcal F$ be a family of holomorphic functions in a complex domain $D$ such that for every $f\in\mathcal F$ and for every $m\in\{1,\dots,q\},$ all zeros of $f^n(f^{n_1})^{(t_1)}\cdots(f^{n_k})^{(t_k)}-a_m$ have multiplicity at least $\ell_m.$ Assume that $ a)\quad n_j\geq t_j \text{\;for all \;} 1\leqslant j\leqslant k,\;\text{ and\;} \ell_i\geq 2 \text{\;for all\;} 1\leqslant i\leqslant q,$\ $ b)\quad \sum_{i=1}^q\frac{1}{\ell_i}<\frac{ qn-1+\sum_{j=1}^kq(n_j-t_j)}{n+\sum_{j=1}^kn_j}.$\ Then $\mathcal F$ is a normal family. Taking $q=1$ and $\ell_1=+\infty,$ Theorem \[Th2\] gives the following generalization of Theorem B, except for the case $n=k+1$. So for the latter case, we add a new proof of Theorem B in the Appendix, which is slightly simpler than the original one. \[H2\] Let $a$ be a nonzero complex value, let $n$ be a nonnegative integer, and $n_1,\dots,n_k,t_1,\dots,t_k$ be positive integers. Let $\mathcal F$ be a family of holomorphic functions in a complex domain $D$ such that for every $f\in\mathcal F,$ $f^n(f^{n_1})^{(t_1)}\cdots(f^{n_k})^{(t_k)}-a$ is nowhere vanishing on $D.$ Assume that $a)$ $n_j\geq t_j \text{\;for all \;} 1\leqslant j\leqslant k,$ $b)$ $n+\sum_{j=1}^kn_j\geq 2+\sum_{j=1}^kt_j.$\ Then $\mathcal F$ is normal on $D.$ In the case where $n\geq 2,$ condition $a)$ in the above corollary implies condition $b).$ \[Re\] Our above results remain valid if the monomial $f^n(f^{n_1})^{(t_1)}\cdots(f^{n_k})^{(t_k)}$ is replaced by the following polynomial $$\begin{aligned} f^n(f^{n_1})^{(t_1)}\cdots(f^{n_k})^{(t_k)}+\sum_{I}c_If^{n_I}(f^{n_{1I}})^{(t_{
--- abstract: 'The Beta coalescents are stochastic processes modeling the genealogy of a population. They appear as the rescaled limits of the genealogical trees of numerous stochastic population models. In this article, we are interested in the number of blocs at small times in the Beta coalescent. Berestycki, Berestycki and Schweinsberg [@BBS08] proved a law of large numbers for this quantity. Recently, Limic and Talarczyk [@LiT15] proved that a functional central limit theorem holds as well. We give here a simple proof of a unidimensional version of this result, using a coupling between Beta coalescents and continuous-state branching processes.' author: - 'Yier Lin[^1] and Bastien Mallein[^2]' title: 'Second order behavior of the block counting process of beta coalescents' --- Introduction {#sec:intro} ============ A coalescent process is a stochastic model for the genealogy of an infinite haploid population, built backward in time. In such a model, an individual is represented by an integer $n \in {\mathbb{N}}$. At each time $t$, we denote by $\Pi(t)$ the partition of ${\mathbb{N}}$ such that two individuals $i$ and $j$ belong to the same set in $\Pi(t)$ (that we call “bloc” from now on) if they share a common ancestor less than $t$ units of time in the past. In particular, we always assume that $\Pi(0) = \{ \{ 1\},\{2\}, \ldots \}$ is the partition into singletons. We construct $(\Pi(t), t \geq 0)$ as a Markov process on the set of partitions that gets coarser over time. Let $\Lambda$ be a probability measure on $[0,1]$. The $\Lambda$-coalescent is a coalescent process such that, given that there are $b$ distinct blocs in $\Pi(t)$, any particular set of $k$ blocs merges at rate $$\lambda_{b,k} = \int_0^1 x^{k-2}(1-x)^{b-k} \Lambda(dx).$$ The $\Lambda$-coalescent was introduced independently by Pitman [@Pit99] and Sagitov [@Sag99]. In this process, several blocs may merge at once, but at most one such coalescing event may occur at a given time. 
For any $t \geq 0$, we denote by $N(t)$ the number of blocs in $\Pi(t)$. We have in particular $N(0) = +\infty$. We say that the $\Lambda$-coalescent comes down from infinity if almost surely $N(t) < +\infty$ for any $t > 0$. Pitman [@Pit99] proved that if $\Lambda(\{1\})=0$, either the $\Lambda$-coalescent comes down from infinity, or $N(t)= +\infty$ for any $t >0$ a.s. In the rest of the article, we always assume that $\Lambda$ has no atom at 1. Schweinsberg [@Sch00] obtained a necessary and sufficient condition for the $\Lambda$-coalescent to come down from infinity, which Bertoin and Le Gall [@BeG06] proved equivalent to $$\label{eqn:defPsi} \int_1^{+\infty} \frac{dq}{\psi(q)} < +\infty, \quad \text{where } \psi(q) = \int_0^1 (e^{-qx} - 1 + q x) x^{-2} \Lambda(dx).$$ Berestycki, Berestycki and Limic [@BBL10] obtained the almost sure behaviour of the number of blocs $N(t)$ as $t$ goes to 0, which they called the speed of coming down from infinity. More precisely, setting $v_\psi(t) = \inf\{ s > 0 : \int_s^{+\infty} \frac{dq}{\psi(q)} \leq t \}$, they proved that for a $\Lambda$-coalescent that comes down from infinity, $$\label{eqn:speedCDI} \lim_{t \to 0} \frac{N(t)}{v_\psi(t)} = 1 \quad \text{a.s.}$$ In this article, we consider the one-parameter family of coalescent processes called Beta-coalescents. For any $\alpha \in (0,2)$, we consider the $\Lambda$-coalescent such that the measure $\Lambda$ is ${\mathrm{Beta}}(2-\alpha,\alpha)$, i.e. $$\Lambda(dx) = \frac{1}{\Gamma(\alpha)\Gamma(2-\alpha)} x^{1-\alpha} (1-x)^{\alpha - 1} dx.$$ The Beta-coalescents have a number of interesting properties (see e.g. [@BBC+05; @BBS08] and references therein). In particular, if $\alpha \in (1,2)$, it can be constructed as the genealogy of an $\alpha$-stable continuous state branching process. We observe that thanks to , $\alpha \in (1,2)$ is a necessary and sufficient condition for the Beta-coalescent to come down from infinity. 
Moreover, can be restated as $$\lim_{t \to 0} t^\frac{1}{\alpha-1} N(t) = (\alpha \Gamma(\alpha))^\frac{1}{\alpha - 1} \quad \text{a.s.}$$ The speed of coming down from infinity for the Beta coalescent can also be found in [@BBS08]. The main result of this article is a central limit theorem for the number of blocs, as $t \to 0$. \[thm:main\] Let $\alpha \in (1,2)$. We set $(\Pi(t),t \geq 0)$ the ${\mathrm{Beta}}(2-\alpha,\alpha)$-coalescent and $N(t) = \# \Pi(t)$ the number of blocs at time $t$. We have $$\lim_{t \to 0} t^{\frac{1}{\alpha(\alpha-1)}} \left(N(t) - \left(\frac{\alpha \Gamma(\alpha)}{t}\right)^{\frac{1}{\alpha -1}} \right) = - D_\alpha X \quad \text{in law,}$$ where $D_\alpha = \left(\Gamma(\alpha)\alpha \right)^{\frac{1}{\alpha(\alpha-1)}}(\alpha-1)^{-\frac{1}{\alpha}}$, $X=\int_0^1 Y(t) dt$ and $(Y(t), t\geq 0)$ is a Lévy process satisfying $\operatorname{\mathbb{E}}(e^{-\lambda Y_t}) = e^{t \lambda^\alpha}$. Note that a more precise functional central limit theorem has been obtained in [@LiT15] for any $\Lambda$-coalescent with a regularly varying density in a neighbourhood of $0$. However, our proof follows from simple coupling arguments that might be of independent interest. We observe that the random variable $X$ defined in Theorem \[thm:main\] is an $\alpha$-stable random variable that satisfies $$\operatorname{\mathbb{E}}(e^{-\lambda X}) = \exp\left( \frac{\lambda^\alpha}{\alpha + 1} \right).$$ In Section \[sec:csbp\], we use [@BBS08] to couple the Beta-coalescent with a stable continuous state branching process, and link the small times behaviour of the number of blocs with the small times behaviour of the continuous-state branching process. In Section \[sec:lamperti\], we use the so-called Lamperti transform to transfer the computations into the small times asymptotic of an $\alpha$-stable Lévy process, and use scaling properties to conclude. 
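The constant $(\alpha\Gamma(\alpha))^{1/(\alpha-1)}$ in the speed of coming down from infinity can be checked directly from the definition of $\psi$ in (eqn:defPsi): for $\Lambda={\mathrm{Beta}}(2-\alpha,\alpha)$, the small-$x$ part of the integral gives $\psi(q)\sim q^\alpha/(\alpha(\alpha-1)\Gamma(\alpha))$ for large $q$, and integrating $1/\psi$ yields $v_\psi(t)=(\alpha\Gamma(\alpha)/t)^{1/(\alpha-1)}$. The following numerical sketch (illustrative only, not part of the proof) verifies the asymptotics:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def psi(q, alpha):
    """psi(q) = int_0^1 (e^{-qx} - 1 + qx) x^{-2} Lambda(dx), Lambda = Beta(2-alpha, alpha)."""
    norm = 1.0 / (gamma(alpha) * gamma(2.0 - alpha))
    f = lambda x: (np.exp(-q * x) - 1.0 + q * x) * x ** (-1.0 - alpha) * (1.0 - x) ** (alpha - 1.0)
    # split the integral at x ~ 1/q, where the integrand changes character
    return norm * (quad(f, 0.0, 1.0 / q, limit=200)[0] + quad(f, 1.0 / q, 1.0, limit=200)[0])

alpha, q = 1.5, 1e4
approx = q ** alpha / (alpha * (alpha - 1.0) * gamma(alpha))  # large-q asymptotics
print(psi(q, alpha) / approx)  # close to 1 for large q
```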
Continuous state branching process {#sec:csbp} ================================== A continuous-state branching process (or CSBP for short) is a càdlàg (right-continuous with left limits at each point) Markov process $(Z(t), t \geq 0)$ on ${\mathbb{R}}_+$ that satisfies the so-called branching property: for any $x, y \geq 0$, if $(Z_x(t), t \geq 0)$ and $(Z_y(t), t \geq 0)$ are two independent versions of $Z$ starting from $x$ and $y$ respectively, then the process $(Z_x(t) + Z_y(t), t \geq 0)$ is also a version of $Z$ starting from $x+y$. The study of CSBP started with the seminal work of [@Jir58]. As observed in [@Lam67; @Sil67], there exists a deep connection between CSBP and Lévy processes. Indeed, for any $x,t, \lambda \geq 0$, the Laplace transform of the CSBP $Z$ satisfies $$\operatorname{\mathbb{E}}\left( \exp(-\lambda Z_x(t)) \right) = \exp(- x u_t(\lambda)),$$ where $u$ is the solution of the following differential equation $$\label{eqn:csbpLevy} \partial_t u_t(\lambda) = \phi(u_t(\lambda)), \quad \text{with } u_0(\lambda) = \lambda,$$ and $\phi$ is the Lévy-Khinchine exponent of
--- abstract: 'We have calculated a complete set of primary fission fragment mass yields, $Y(A)$, for heavy nuclei across the chart of nuclides, including those of particular relevance to the rapid neutron capture process ($r$ process) of nucleosynthesis. We assume that the nuclear shape dynamics are strongly damped, which allows for a description of the fission process via Brownian shape motion across nuclear potential-energy surfaces. The macroscopic energy of the potential was obtained with the Finite-Range Liquid-Drop Model (FRLDM), while the microscopic terms were extracted from the single-particle level spectra in the fissioning system by the Strutinsky procedure for the shell energies and the BCS treatment for the pairing energies. For each nucleus considered, the fission fragment mass yield, $Y(A)$, is obtained from 50,000 – 500,000 random walks on the appropriate potential-energy surface. The full mass and charge yield, $Y(Z,A)$, is then calculated by invoking the Wahl systematics. With this method, we have calculated a comprehensive set of fission-fragment yields from over 3,800 nuclides bounded by $80\leq Z \leq 130$ and $A\leq330$; these yields are provided as an ASCII-formatted database in the supplemental material. We compare our yields to known data and discuss general trends that emerge in low-energy fission yields across the chart of nuclides.' author: - 'M. R. Mumpower' - 'P. Jaffke' - 'M. Verriere' - 'J. Randrup' bibliography: - 'refs.bib' title: Primary fission fragment mass yields across the chart of nuclides --- This paper is dedicated to the memory of our friend and colleague Arnie J. Sierk, who contributed significantly to the development and application of macroscopic-microscopic nuclear fission theory throughout his career. Introduction ============ The description of nuclear fission has presented exceptional challenges to the theoretical modeling of heavy nuclei since its discovery in the late 1930s [@Hahn+39].
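The yield-generation strategy described in the abstract — accumulating many independent random walks on a potential-energy surface, each recorded at a scission-like endpoint — can be illustrated in a single mass-asymmetry coordinate. The double-well potential, Metropolis dynamics, walk length, and temperature below are invented for illustration only; they are not the FRLDM surface or the treatment actually used in this work:

```python
import math, random

def toy_yield(A_cn=236, n_walks=4000, beta=8.0, n_steps=400, seed=1):
    """Toy Brownian-shape-motion yield: each walk moves the heavy-fragment
    mass fraction a in [0.5, 0.95] by Metropolis steps on an illustrative
    potential, and is binned at the end of the walk ('scission')."""
    def V(a):                      # invented surface: minimum at a = 0.57
        return 40.0 * (a - 0.57) ** 2
    rng = random.Random(seed)
    counts = {}
    for _ in range(n_walks):
        a = 0.5                    # start near the symmetric shape
        for _ in range(n_steps):
            trial = min(0.95, max(0.5, a + rng.gauss(0.0, 0.01)))
            if rng.random() < math.exp(-beta * (V(trial) - V(a))):
                a = trial
        A_heavy = round(a * A_cn)
        counts[A_heavy] = counts.get(A_heavy, 0) + 1
    # Normalise so that Y(A) sums to 2 (two fragments per fission event)
    return {A: 2.0 * c / n_walks for A, c in sorted(counts.items())}

Y = toy_yield()
A_peak = max(Y, key=Y.get)         # most probable heavy-fragment mass
```

With the invented potential above, the histogram peaks near the asymmetric minimum rather than at the symmetric split, mimicking the qualitative shape of actinide yields.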
One way to view this complicated physical process is to consider the evolution of the nuclear shape as it progresses from a compact form through increasingly deformed shapes until the division into two fragments occurs at the scission configuration [@Meitner+39; @Bohr+39], as illustrated in Fig. \[fig:schema\]. This general picture naturally leads to the description of the fission process in terms of a potential-energy surface (PES) as a function of the nuclear shape. The accumulation of many fission events provides the primary fission fragment yield, whose appearance is sensitive to the structure of the nuclear system. In this description, much is still uncertain about the evolution of the nuclear shape and, consequently, about the extracted fission yields. For example, what are the most probable trajectories through the shape configuration space? How do these paths depend upon the dissipative coupling of the shape to the remainder of the system? And, which microscopic properties impact the division of the nucleus at scission? Questions like these drive the current research in fission dynamics. Our ability to calculate fission fragment yields across the chart of nuclides has wide-reaching implications for a variety of applications, from nuclear security and reactor operations to our understanding of the cosmos in astrophysical explosions [@Andreyev+13; @Eichler+15; @Talou+18; @Jaffke+18; @Horowitz+18; @Holmbeck+19; @Fotiades+19]. Many methods have been proposed for calculating fission fragment yields. Phenomenological approaches [@Kodama+75; @Wahl+80; @Brosa+86; @Wahl+88; @Brosa+90; @Brosa+99; @Benlliure+98; @Schmidt+16] typically consist of simple models with fitted parameters and varying degrees of refinement. The parameters of these models are determined by comparisons to mass or charge yields or other fission observables in the actinide region.
Simple, yet insightful descriptions of observed phenomena can arise, such as in the case with the unchanged charge distribution of Ref. [@Wahl+02]. These approaches can reproduce experimental or evaluated data when it is known, but the applicability across the chart of nuclides outside the narrow fitting region is still in question. In contrast, microscopic models for the description of fission are built upon the consideration of an effective energy density functional (EDF), minimized in a chosen trial subspace of the full many-body Fock space while subject to external constraints on the density distribution (e.g. the quadrupole moment $Q_2$ which governs the overall distortion away from a sphere or the octupole moment $Q_3$ which influences the reflection asymmetry of the system) [@Schunck+16]. The self-consistent Hartree-Fock (HF) equations arise from the minimization of the EDF by assuming a system of independent nucleons, with the trial space taken to be the set of Slater determinants of the constituent nucleons. Pairing can be included self-consistently by extending the trial space to quasi-particle Slater determinants, leading to the Hartree-Fock-Bogoliubov (HFB) model [@Goutte+05; @Giuliani+19]. These treatments make it possible to calculate the nuclear PES as a function of the constraints employed ($Q_2$, $Q_3$, ..), and they have been widely used in fission studies [@Berger+89; @Goriely+07; @Minato+09; @Regnier+16]. However, the required computational effort is considerable which imposes a practical limit on the number of constraints that can be included, currently up to just two or three [@Schunck+15; @Regnier+16; @Regnier+17]. As a consequence, the resulting energy surfaces may exhibit spurious discontinuities and, importantly, the fission barrier heights cannot be determined with confidence [@Myers+96; @Moller+09; @Dubray+12; @Schunck+14]. 
Although methods exist for remedying this inherent problem [@Dubray+12], the required computational cost is prohibitive. The microscopic approach, at the present time, is therefore best suited for studies of specific nuclei, but is not adequate for large-scale, global studies of fission yields and their trends across the chart of nuclides. A recent review covering the progress of this approach can be found in Ref. [@Schunck+16]. ![\[fig:schema\] A schematic illustration of the fission process: The lower panel shows the potential energy of the nuclear system along its most probable path, while the upper panel shows the appearance of the system at four stages along that path. The nuclear shape, which is initially located near that of the ground state, is strongly coupled to the internal microscopic degrees of freedom and, as a result, it executes a Brownian-like random walk on the multidimensional potential-energy surface. After passing over the various saddle points, generally after multiple attempts, the system eventually acquires a binary shape and reaches a necked-in scission configuration where it divides into two fission fragments. The shown potential-energy profile is representative of known actinides, and may differ qualitatively for nuclei in other regions. ](fission-schematic.pdf){width="\columnwidth"} The macroscopic-microscopic approach offers a simpler and very effective framework for calculating the nuclear PES [@Nix+65]. This method was originally developed for the calculation of nuclear masses because purely microscopic calculations tend to have difficulty obtaining accurate absolute energies due to the small but significant role played by many-body correlations, which are hard to treat. Nuclear masses exhibit smoothly varying macroscopic trends, reflecting the energetics of a charged droplet, overlaid with small-amplitude deviations reflecting the microscopic nuclear structure [@Gustafsson+71; @Brack+72; @Bolsterli+72].
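For orientation, the smooth macroscopic trend referred to here is the energetics of a charged liquid drop; schematically, for a spherical nucleus, $$E_{\mathrm{macro}}(Z,A) \approx -a_v A + a_s A^{2/3} + a_c \frac{Z^2}{A^{1/3}} + a_a \frac{(N-Z)^2}{A},$$ where the surface and Coulomb terms acquire shape dependence for deformed configurations, and the shell and pairing corrections are superimposed on this smooth background. This semi-empirical form is shown only for illustration; it is not the FRLDM parametrization used in this work.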
The nuclear potential-energy surface is therefore considered to consist of a [*macroscopic*]{} liquid-drop-like energy functional, whose parameters (volume energy, surface tension, ...) are determined by global fitting to the measured masses, and a [*microscopic*]{} contribution expressing the shell [@Strutinsky+63] and pairing corrections [@Nogami+64], which can be calculated from the neutron and proton level spectra in the deformed effective potential well. This approach makes it possible to calculate the potential energy of any nuclear system with $Z$ protons and $N$ neutrons, $(Z,N)$, as a function of its shape (as well as its angular momentum). The above approaches can be used not only to provide the static nuclear PES but also to obtain the temporal evolution of the fissioning system. The HF and HFB Hamiltonians naturally lead to the time-dependent Hartree-Fock (TD-HF) and time-dependent Hartree-Fock-Bogoliubov equations (TD-HFB) [@Negele+78; @Bulgac+16; @Scamps+18; @Bulgac+18]. However, these methods are not well suited for processes that generate qualitatively different final configurations, such as fission, because of the restriction to a single Slater determinant. A more general approach considers the time-dependent state as a superposition of many microscopic states having time-dependent weights, leading to the time-dependent generator coordinate method (TDGCM) [@Verriere+17; @Regnier+18]. A recent attempt has been made to couple TD-HF methods with TDGCM [@Berger+91; @Goutte+04; @Regnier+18]. An alternative approach is to treat the evolution of the shape degrees of freedom (
--- author: - | Nesime Tatbul [^1]\ Intel Labs and MIT\ `tatbul@csail.mit.edu`\ Tae Jun Lee $^*$\ Microsoft\ `tae_jun_lee@alumni.brown.edu`\ Stan Zdonik\ Brown University\ `sbz@cs.brown.edu`\ Mejbah Alam\ Intel Labs\ `mejbah.alam@intel.com`\ Justin Gottschlich\ Intel Labs\ `justin.gottschlich@intel.com`\ bibliography: - 'main.bib' title: Precision and Recall for Time Series --- [^1]: Lead authors.
--- abstract: 'Federated learning obtains a central model on the server by aggregating models trained locally on clients. As a result, federated learning does not require clients to upload their data to the server, thereby preserving the data privacy of the clients. One challenge in federated learning is to reduce the client-server communication since the end devices typically have very limited communication bandwidth. This paper presents an enhanced federated learning technique by proposing an asynchronous learning strategy on the clients and a temporally weighted aggregation of the local models on the server. In the asynchronous learning strategy, different layers of the deep neural networks are categorized into shallow and deep layers, and the parameters of the deep layers are updated less frequently than those of the shallow layers. Furthermore, a temporally weighted aggregation strategy is introduced on the server to make use of the previously trained local models, thereby enhancing the accuracy and convergence of the central model. The proposed algorithm is empirically evaluated on two datasets with different deep neural networks. Our results demonstrate that the proposed asynchronous federated deep learning outperforms the baseline algorithm both in terms of communication cost and model accuracy.' author: - 'Yang Chen, Xiaoyan Sun, Yaochu Jin, [^1] [^2] [^3]' title: 'Communication-Efficient Federated Deep Learning with Asynchronous Model Update and Temporally Weighted Aggregation' --- Federated learning, Deep neural network, aggregation, asynchronous learning, temporally weighted aggregation INTRODUCTION {#sec1} ============ Smart phones, wearable gadgets, and distributed wireless sensors usually generate huge volumes of privacy-sensitive data. In many cases, service providers are interested in mining information from these data to provide personalized services, for example, to make more relevant recommendations to clients.
However, the clients are usually not willing to allow the service provider to access the data for privacy reasons. Federated learning is a recently proposed privacy-preserving machine learning framework [@mcmahan2017communication]. The main idea is to train local models on the clients, send the model parameters to the server, and then aggregate the local models on the server. Since all local models are trained upon data that are locally stored in clients, the data privacy can be preserved. The whole process of typical federated learning is divided into communication rounds, in which the local models on the clients are trained on the local datasets. For the $k$-th client, where $k \in S$, and $S$ refers to the participating subset of $m$ clients, its training samples are denoted as $\mathcal{P}_k$ and the trained local model is represented by the model parameter vector $\omega^k$. In each communication round, only the clients belonging to the subset $S$ download the parameters of the central model from the server and use them as the initial values of the local models. Once the local training is completed, the participating clients send the updated parameters back to the server. Consequently, the central model can be updated by aggregating the updated local models, i.e. $\omega=Agg(\omega^k)$ [@Konecny2015; @Konecny2016; @mcmahan2017communication]. In this setting, the local model of each client can be any type of machine learning model, which can be chosen according to the task to be accomplished. In most existing work on federated learning [@mcmahan2017communication], deep neural networks (DNNs), e.g., long short-term memory (LSTM), are employed to conduct text-word/text-character prediction tasks. In recent years, DNNs have been successfully applied to many complex tasks, including text classification, image classification, and speech recognition [@lecun2015deep; @Shin2016Deep; @Greff2015LSTM].
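The round structure just described — local training on each client followed by the server-side aggregation $\omega = Agg(\omega^k) = \sum_k \frac{n_k}{n} \omega^k$ — can be sketched end-to-end with plain NumPy. The least-squares local model, client data, and hyperparameters below are illustrative stand-ins for the DNN local models used in the paper:

```python
import numpy as np

def client_update(w, X, y, lr=0.1, epochs=5, batch=16, rng=None):
    """Local SGD on one client's data (toy least-squares model)."""
    rng = rng or np.random.default_rng(0)
    w = w.copy()
    for _ in range(epochs):
        idx = rng.permutation(len(X))
        for s in range(0, len(X), batch):
            b = idx[s:s + batch]
            grad = 2.0 * X[b].T @ (X[b] @ w - y[b]) / len(b)
            w -= lr * grad
    return w

def fedavg_round(w, clients, rng):
    """One communication round: every client trains locally on its own
    data, then the server averages with weights n_k / n."""
    n = sum(len(X) for X, _ in clients)
    updates = [client_update(w, X, y, rng=rng) for X, y in clients]
    return sum((len(X) / n) * wk for (X, _), wk in zip(clients, updates))

rng = np.random.default_rng(42)
w_true = np.array([2.0, -1.0])
clients = []
for shift in (0.0, 1.0, -1.0):     # mildly non-IID input distributions
    X = rng.normal(shift, 1.0, size=(60, 2))
    clients.append((X, X @ w_true))
w = np.zeros(2)
for _ in range(20):                # communication rounds
    w = fedavg_round(w, clients, rng)
```

Because the data stay inside `client_update`, only the parameter vectors cross the client-server boundary, which is exactly the privacy-preserving property described above.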
Therefore, DNNs are widely adopted as the local model in federated learning, and stochastic gradient descent (SGD) is the most popular learning algorithm for training the local models. As aforementioned, one communication round includes parameter download (on clients), local training (on clients), trained parameter upload (on clients), and model aggregation (on the server). Such a framework appears to be similar to distributed machine learning algorithms [@ma2017distributed; @reddi2016aide; @shamir2014communication; @zhang2015disco; @chilimbi2014project; @dean2012large]. In federated learning, however, only the models’ parameters are uploaded and downloaded between the clients and server, and the data of local clients are not uploaded to the server or exchanged between the clients. Accordingly, the data privacy of each client can be preserved. Compared with other machine learning paradigms, federated learning is subject to the following challenges [@mcmahan2017communication; @konevcny2016federated]:

1. **Unbalanced data**: The data amount on different clients may be highly imbalanced because there are light and heavy users.
2. **Non-IID data**: The data on the clients may be strongly non-IID because of different preferences of different users. As a result, local datasets are not able to represent the overall data distribution, and the local distributions are different from each other, too. The IID assumption in distributed learning that training data are distributed over local clients uniformly at random [@Boyd2011a] usually does not hold in federated learning.
3. **Massively distributed data**: The number of clients is large. For example, the clients may be mobile phone users [@Konecny2015], whose number can be enormous.
4. **Unreliable participating clients**: It is common that a large portion of participating clients are often offline or on unreliable connections.
Again, in case the clients are mobile phone users, their communication state can vary a lot, and thus their participation in each round of learning cannot be ensured [@mcmahan2017communication]. Apart from the above challenges, the total communication cost is often used as an overall performance indicator of federated learning due to the limited bandwidth and battery capacity of mobile phones. Of course, like other learning algorithms, the learning accuracy, which is mainly determined by the local training and the aggregation strategy, is also of great importance. Accordingly, the motivation of our work is to reduce the communication cost and improve the accuracy of the central model, assuming that DNNs are used as the local learning models. Inspired by interesting observations in fine-tuning of DNNs [@yosinski2014transferable], an asynchronous strategy for local model updating and aggregation is proposed to improve the communication efficiency in each round. The main contributions of the present work are as follows. First, an asynchronous strategy that aggregates and updates the parameters in the shallow and deep layers of DNNs at different frequencies is proposed to reduce the number of parameters to be communicated between the server and clients. Second, a temporally weighted aggregation strategy is suggested to more efficiently integrate information of the previously trained local models in model aggregation to enhance the learning performance. The remainder of the paper is organized as follows. In Section \[sec2\], related work is briefly reviewed. The details of the proposed algorithm, especially the asynchronous strategy, the temporally weighted aggregation, and the overall framework, are described in Section \[sec3\]. Section \[sec4\] presents the experimental results and discussions. Finally, conclusions are drawn in Section \[sec5\]. Related Work {#sec2} ============ Konečný et al.
developed the first framework of federated learning and also experimentally proved that existing machine learning algorithms are not suited for this setting [@Konecny2015]. In [@Konecny2016], Konečný et al. proposed two ways to reduce the uplink communication costs, i.e., structured updates and sketched updates, using data compression/reconstruction techniques. A more recent version of federated learning, FedAVG for short, was reported in [@mcmahan2017communication], which was developed for obtaining a central prediction model of Google’s Gboard APP and can be embedded in a mobile phone to protect the user’s privacy. The pseudo code of FedAVG is provided in Algorithm 1.

    Algorithm 1: FedAVG
     1: Server executes:
     2:   initialize $w_0$
     3:   for each round $t = 1, 2, \ldots$ do
     4:     $m \gets$ max($C\cdot K, 1$)
     5:     $S_{t} \gets$ (random set of $m$ clients)
     6:     for each client $k \in S_t$ in parallel do
     7:       $w_{t+1}^k \gets$ ClientUpdate($k, w_t$)
     8:     end for
     9:     $w_{t+1} \gets \sum_{k=1}^K \frac{n_k}{n} w_{t+1}^k$
    10: ClientUpdate($k, w$):
    11:   $\mathcal{B} \gets$ (split $\mathcal{P}_k$ into batches of size $B$)
    12:   for each local epoch from $1$ to $E$ do
    13:     for each batch $b \in \mathcal{B}$ do
    14:       $w \gets w - \eta \bigtriangledown \ell(w;b)$
    15:   return $w$ to server

In the following, we briefly explain the main components of FedAVG:

1. **Server Execution** consists of the *initialization* and *communication rounds*.
    1. *Initialization:* Line 2 initializes the parameter $w_0$.
    2. *Communication Rounds:* Line 4 obtains $m$, the number of participating clients; $K$ indicates the number of local clients, and $C$ corresponds to the fraction of participating clients per round, according to which line 5 randomly selects the participating subset $S_t$. In lines 6-8, the sub-function $ClientUpdate$ is called in parallel to get $w_{t+1}^k$. Line 9 executes the aggregation to update $w_{t+1}$.
2. **Client Update** The sub-function gets $k$ and $w$. $B$ and $E$ are the
--- author: - 'G. L. Litvinov' title: Hypergroups and hypergroup algebras --- [^1] The survey contains a brief description of the ideas, constructions, results, and prospects of the theory of hypergroups and generalized translation operators. Representations of hypergroups are considered, being treated as continuous representations of topological hypergroup algebras. 1. Introduction {#introduction .unnumbered} =============== [**1.1.**]{} The important role which group-theoretic methods play in analysis and its applications, in particular in applications to theoretical physics, is well known. Such basic mathematical concepts as translation operator, convolution, periodic function, almost periodic function, positive definite function, etc. are formulated in group-theoretic terms. One can get far-reaching generalizations of the fundamental principles and results, connected with the concepts indicated, in the framework of the theory of hypergroups. Essential fragments of this theory became familiar as the theory of the Delsarte – B. M. Levitan generalized translation operators, the Yu. M. Berezanskii – S. G. Krein theory of hypercomplex systems with continuous basis, the theory of convolution algebras, etc. Roughly speaking, a hypergroup is a topological space or manifold with a supplementary structure, which permits one to construct a Banach or topological algebra of the type of a group algebra – a hypergroup algebra. Thus, in the theory of hypergroups, just as in the theory of supergroups, the object of generalization is not so much a group as a group algebra (coalgebra). The ideas and methods of the theory of group representations carry over to the case of hypergroups, while it is convenient to treat representations of hypergroups as representations of the corresponding hypergroup algebras. With the help of the theory of representations for hypergroups one can generalize the duality principle of L. S.
Pontryagin, construct an analog of the Fourier transform, get the Plancherel theorem and the inversion formula. It turns out that the converse result is also valid: the existence of a transformation of the type of the Fourier transform, for which the Plancherel theorem and inversion formula are valid, is necessarily connected with the existence of a certain hypergroup. This result explains the appearance of hypergroup structures in various problems of harmonic analysis. For some classes of hypergroups results on a connection with infinitesimal objects in the spirit of the Lie theory are found. Not only Lie algebras but also algebras generated by commutation relations of a more general kind can appear as such objects. The present survey contains a short description of the ideas, constructions, results, and prospects of the theory of hypergroups and generalized translation operators. In composing it material of the papers \[17, 57, 61, 62\] has been used in part. Separate aspects of the theory are considered in detail in the following works of monograph or survey type: \[3, 7, 37, 51, 55, 56, 66, 93, 97, 102, 103, 131, 132, 136, 160, 169, 183\], in which one can find additional information and references to the literature. An extremely large literature is devoted to hypergroups of special form – topological groups and semigroups, and also their representations (cf. in particular, \[31, 32, 40, 43, 59, 69, 72, 73, 86, 92, 156-158, 174, 177\]), the systematic analysis of which would leave the framework of the present survey. The literature cited in the present survey does not pretend to exhaustive completeness; the works included in this list contain additional bibliography.\ [**1.2.**]{} The concept of hypergroup arose originally as a generalization of the concept of abstract group. An abstract “algebraic” hypergroup is a set $H$ with a binary multiplication operation $a , b \mapsto ab$ which associates with any pair of elements of $H$ a nonempty subset of $H$. 
The multiplication is assumed to be associative in the sense that the sets $(ab)c$ and $a(bc)$ coincide; here $(ab)c$ denotes the union of the sets $dc$ for all $d \in (ab)$, and the product $a(bc)$ is defined analogously. A hypergroup $H$ has an identity $e \in H$ if $a \in ea \cap ae$ for all $a \in H$. The standard examples of hypergroups are connected with sets of cosets and conjugacy classes of elements in groups, with sets of points in certain geometries. However, it is more convenient to start with the analysis of an example, which, at first glance, is of a different kind. Let $G$ be a compact group, $\widehat{G}$ be the set of all irreducible linear (finite-dimensional) representations of the group $G$, considered up to equivalence. For any irreducible representations $\alpha$ and $\beta$ of the group $G$, their tensor product $\alpha \otimes \beta$ decomposes uniquely into a direct sum of primary representations $$\alpha \otimes \beta = \sum^n_{i=1} m_i \pi_i, \eqno(1.1)$$ where $\pi_i \in \widehat{G}$ and $m_i$ is the multiplicity with which the irreducible representation $\pi_i$ occurs in the tensor product $\alpha \otimes \beta$. If the product of the elements $\alpha$ and $\beta$ in $\widehat{G}$ is defined as the set $\{ \pi_1 , \pi_2 \ldots ,\pi_n \}$ of irreducible representations contained in $\alpha \otimes \beta$, then $\widehat{G}$ gets the structure of a hypergroup. The example given was considered by Helgason in \[130\], which is devoted to lacunary Fourier series on compact groups. In order to take into account the multiplicities with which the irreducible representations occur in the decomposition (1.1), Helgason defined the product $\alpha \beta$ as a finite measure on $\widehat{G}$. The support of this measure coincides with the set $\{ \pi_1 , \pi_2 \ldots ,\pi_n \}$ of elements of $\widehat{G}$ which occur in the decomposition (1.1), and the measure of the point $\pi_i$ is the integer $m_i$. 
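Helgason's construction can be made completely explicit for the smallest non-abelian group. Taking $G = S_3$, the multiplicities $m_i$ in (1.1) are computed from the character table, and the convolution of delta-functions is the integer-valued measure just described. The short script below (characters of $S_3$ hard-coded; all of them are real) verifies, for instance, that for the 2-dimensional representation $\mathrm{std}$ one has $\mathrm{std} \otimes \mathrm{std} = \mathrm{triv} \oplus \mathrm{sgn} \oplus \mathrm{std}$:

```python
# Character table of S3; columns are the conjugacy classes {e},
# {transpositions}, {3-cycles} of sizes 1, 3, 2.
CLASS_SIZES = [1, 3, 2]
CHARS = {"triv": [1, 1, 1], "sgn": [1, -1, 1], "std": [2, 0, -1]}
ORDER = sum(CLASS_SIZES)  # |S3| = 6

def convolve(alpha, beta):
    """delta_alpha * delta_beta on the dual of S3: the measure whose
    mass at pi is the multiplicity of pi in alpha (x) beta."""
    prod = [CHARS[alpha][j] * CHARS[beta][j] for j in range(3)]
    measure = {}
    for pi, chi in CHARS.items():
        # orthogonality of characters gives the multiplicity
        m = sum(s * p * c for s, p, c in zip(CLASS_SIZES, prod, chi)) // ORDER
        if m:
            measure[pi] = m
    return measure

print(convolve("std", "std"))   # {'triv': 1, 'sgn': 1, 'std': 1}
print(convolve("sgn", "std"))   # {'std': 1}
```

The resulting convolution of the unit measures is exactly Helgason's measure on $\widehat{G}$, with the integers $m_i$ as point masses.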
If one identifies each element $\pi \in \widehat{G}$ with the unit measure $\delta_\pi$, concentrated at the point $\pi$ (the delta-function), then one can consider the measure $\alpha \beta$ as the result of a convolution-type operation over the measures $\delta_{\alpha}$ and $\delta_{\beta}$. Since any measure on $\widehat{G}$ is a linear combination of delta-functions, one can extend this operation linearly to the space $\mathscr{M} (\widehat{G})$ of all finite complex measures with finite support and even to the space $\mathscr{M}^b (\widehat{G})$ of all bounded measures on $\widehat{G}$. As a result, $\mathscr{M} (\widehat{G})$ and $\mathscr{M}^b (\widehat{G})$ are turned into associative algebras with identities (hypergroup algebras).\ [**1.3.**]{} It is easy to modify the definition of multiplication – convolution of measures – in the example considered (replacing the measure $\delta_{\pi}$ by $\delta_{\pi} / \dim \pi$) so that the product of delta-functions appears as a probability measure, i.e., a positive measure with unit volume. Constructions of this kind were studied particularly intensively after the appearance of the papers of Dunkl \[112, 113\], Jewett \[136\], Spector \[169\], and the beautiful survey of Ross \[160\]. Following these papers, in the modern literature hypergroup usually means a locally compact space $H$, such that on the set $\mathscr{M}^b (H)$ of bounded Radon measures there is given an associative bilinear operation $\mu_1 , \mu_2 \mapsto \mu_1 \ast \mu_2$ called (generalized) convolution, where the result of the convolution of any probability measures is again a probability measure. It is required that the convolution turn $\mathscr{M}^b (H)$ into a Banach algebra with identity, whose role is played by the delta-function $\delta_e$, concentrated at some point $e \in H$. Moreover, it is required that the convolution be compatible with some involution in $H$ and that additional conditions of the type of continuity hold.
In particular, if $H$ is a locally compact group, then the operation in $\mathscr{M}^b (H)$ coincides with the usual convolution of measures, and $\mathscr{M}^b (H)$ coincides with the group algebra. In this case $\delta_a \ast \delta_b = \delta_c$, where $c$ is the product of the elements $a$ and $b$ in $H$. In general, $\delta_a \ast \delta_b$ is a probability measure which can be considered as the distribution of a random variable with values in $H$. Hence one can say that in the hypergroup $H$ there is defined an associative product of elements, but it is defined “randomly” and its result is a “random” element in $H$. In what follows, hypergroups in the sense of \[112, 136, 169\] will be called p-hypergroups, so as to distinguish them from the objects of a more general kind introduced by Delsarte.\ [**1.4.**]{} Although in Ross’ survey \[160\] it is indicated that Helgason \[130\] was the first analyst to use the term “hypergroup,” this term was used in a broader sense in the important
--- abstract: 'Let $M_R$ be a module and $\sigma$ an endomorphism of $R$. Let $m\in M$ and $a\in R$, we say that $M_R$ satisfies the condition $\mathcal{C}_1$ (respectively, $\mathcal{C}_2$), if $ma=0$ implies $m\sigma(a)=0$ (respectively, $m\sigma(a)=0$ implies $ma=0$). We show that if $M_R$ is p.q.-Baer then so is $M[x;\sigma]_{R[x;\sigma]}$ whenever $M_R$ satisfies the condition $\mathcal{C}_2$, and the converse holds when $M_R$ satisfies the condition $\mathcal{C}_1$. Also, if $M_R$ satisfies $\mathcal{C}_2$ and $\sigma$-skew Armendariz, then $M_R$ is a p.p.-module if and only if $M[x;\sigma]_{R[x;\sigma]}$ is a p.p.-module if and only if $M[x,x^{-1};\sigma]_{R[x,x^{-1};\sigma]}$ ($\sigma\in Aut(R)$) is a p.p.-module. Many generalizations are obtained and more results are found when $M_R$ is a semicommutative module.' --- **[Mohamed Louzari]{}** Department of mathematics Abdelmalek Essaadi University B.P. 2121 Tetouan, Morocco mlouzari@yahoo.com \[section\] \[Theorem\][Definition]{} \[Theorem\][Proposition]{} \[Theorem\][Corollary]{} \[Theorem\][Lemma]{} \[Theorem\][Example]{} \[Theorem\][Remark]{} This work is dedicated to my Professor El Amin Kaidi Lhachmi from University of Almería on the occasion of his 62nd birthday. [**Mathematics Subject Classification:**]{} 16S36, 16D80, 16W80\ [**Keywords:**]{} Semicommutative modules, p.q.-Baer modules, p.p.-modules. Introduction ============ In this paper, $R$ denotes an associative ring with unity and modules are unitary. We write $M_R$ to mean that $M$ is a right module. Throughout, $\sigma$ is an endomorphism of $R$ (unless specified otherwise), that is, $\sigma\colon R\rightarrow R$ is a ring homomorphism with $\sigma(1)=1$. The set of all endomorphisms (respectively, automorphisms) of $R$ is denoted by $End(R)$ (respectively, Aut(R)). In [@Kaplansky], Kaplansky introduced Baer rings as rings in which the right (left) annihilator of every nonempty subset is generated by an idempotent. 
According to Clark [@clark], a ring $R$ is said to be [*quasi-Baer*]{} if the right annihilator of each right ideal of $R$ is generated (as a right ideal) by an idempotent. These definitions are left-right symmetric. Recently, Birkenmeier et al. [@birk/pqBaer] called a ring $R$ a [*right*]{} $($respectively, [*left$)$ principally quasi-Baer*]{} (or simply [*right*]{} $($respectively, [*left$)$ p.q.-Baer*]{}) if the right (respectively, left) annihilator of a principally right (respectively, left) ideal of $R$ is generated by an idempotent. $R$ is called a [*p.q.-Baer*]{} ring if it is both right and left p.q.-Baer. A ring $R$ is a right (respectively, left) [*p.p.-ring*]{} if the right (respectively, left) annihilator of an element of $R$ is generated by an idempotent. $R$ is called a [*p.p.-ring*]{} if it is both right and left p.p.-ring. Lee-Zhou [@lee/zhou] introduced Baer, quasi-Baer and p.p.-modules as follows: $(1)$ $M_R$ is called [*Baer*]{} if, for any subset $X$ of $M$, $r_R(X)=eR$ where $e^2=e\in R$. $(2)$ $M_R$ is called [*quasi-Baer*]{} if, for any submodule $N$ of $M$, $r_R(N)=eR$ where $e^2=e\in R$. $(3)$ $M_R$ is called [*p.p.*]{} if, for any $m\in M$, $r_R(m)=eR$ where $e^2=e\in R$. In [@baser2007], a module $M_R$ is called [*principally quasi Baer*]{} (p.q.-Baer for short) if, for any $m\in M$, $r_R(mR)=eR$ where $e^2=e\in R$. It is clear that $R$ is a right p.q.-Baer ring if and only if $R_R$ is a p.q.-Baer module. If $R$ is a p.q.-Baer ring, then for any right ideal $I$ of $R$, $I_R$ is a p.q.-Baer module. Every submodule of a p.q.-Baer module is p.q.-Baer module. Moreover, every quasi-Baer module is p.q.-Baer, and every Baer module is quasi-Baer module. A ring $R$ is called [*semicommutative*]{} if for every $a\in R$, $r_R(a)$ is an ideal of $R$ (equivalently, for any $a,b\in R$, $ab=0$ implies $aRb=0$). In [@rege2002], a module $M_R$ is semicommutative, if for any $m\in M$ and $a\in R$, $ma=0$ implies $mRa=0$. 
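The twisted relation $xa = \sigma(a)x$ behind the skew polynomial ring $R[x;\sigma]$ appearing in the abstract can be made concrete over a small finite ring. Here $R = \mathbb{F}_9 = \mathbb{F}_3[i]$ with $\sigma$ the Frobenius $a \mapsto a^3$ (i.e., conjugation $a + bi \mapsto a - bi$); this particular ring and the coefficient convention are illustrative choices, not taken from the paper:

```python
# F9 = F3[i] with i^2 = -1; an element a + b*i is stored as (a, b).
def add(u, v):
    return ((u[0] + v[0]) % 3, (u[1] + v[1]) % 3)

def mul(u, v):
    a, b, c, d = u[0], u[1], v[0], v[1]
    return ((a * c - b * d) % 3, (a * d + b * c) % 3)

def sigma(u):
    """Frobenius a -> a^3 on F9, i.e. conjugation a + bi -> a - bi."""
    return (u[0], (-u[1]) % 3)

def skew_mul(f, g):
    """Product in F9[x; sigma], coefficients listed lowest degree first:
    (sum_i a_i x^i)(sum_j b_j x^j) = sum_k (sum_{i+j=k} a_i sigma^i(b_j)) x^k."""
    out = [(0, 0)] * (len(f) + len(g) - 1)
    for i, ai in enumerate(f):
        for j, bj in enumerate(g):
            t = bj
            for _ in range(i):          # apply sigma^i to b_j
                t = sigma(t)
            out[i + j] = add(out[i + j], mul(ai, t))
    return out

zero, one, i_ = (0, 0), (1, 0), (0, 1)
x = [zero, one]
print(skew_mul(x, [i_]))   # [(0, 0), (0, 2)], i.e. x*i = sigma(i)*x = (-i)*x
```

Because $\sigma$ is a ring endomorphism, this product is associative, which can be checked directly on sample polynomials.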
Let $\sigma$ be an endomorphism of $R$. A module $M_R$ is called a $\sigma$-semicommutative module [@zhang/chen] if, for any $m\in M$ and $a\in R$, $ma=0$ implies $mR\sigma(a)=0$. According to Annin [@annin], a module $M_R$ is $\sigma$-[*compatible*]{} if, for any $m\in M$ and $a\in R$, $ma=0$ if and only if $m\sigma(a)=0$. In [@lee/zhou], Lee-Zhou introduced the following notations. For a module $M_R$, we consider $M[x;\sigma]:={\left\{\sum_{i=0}^sm_ix^i:s\geq 0,m_i\in M\right\}},$ $M[[x;\sigma]]:={\left\{\sum_{i=0}^\infty m_ix^i:m_i\in M\right\}},$ $M[x,x^{-1};\sigma]:={\left\{\sum_{i=-s}^tm_ix^i:\;t\geq 0,s\geq 0,m_i\in M\right\}},$ $M[[x,x^{-1};\sigma]]:={\left\{\sum_{i=-s}^\infty m_ix^i:s\geq 0,m_i\in M\right\}}.$ Each of these is an Abelian group under an obvious addition operation. Moreover, $M[x;\sigma]$ becomes a module over $R[x;\sigma]$ under the following scalar product operation: For $m(x)=\sum_{i=0}^n m_ix^i\in M[x;\sigma]$ and $f(x)=\sum_{j=0}^m a_jx^j\in R[x;\sigma]$ $$m(x)f(x)=\sum_{k=0}^{n+m}{\left(\sum_{k=i+j}m_i\sigma^i(a_j)\right)}x^k\eqno(*)$$ Similarly, $M[[x;\sigma]]$ is a module over $R[[x;\sigma]]$. The modules $M[x;\sigma]$ and $M[[x;\sigma]]$ are called the [*skew polynomial extension*]{} and the [*skew power series extension of $M$*]{}, respectively. If $\sigma\in Aut(R)$, then with a scalar product similar to $(*)$, $M[x,x^{-1};\sigma]$ (respectively, $M[[x,x^{-1};\sigma]]$) becomes a module over $R[x,x^{-1};\sigma]$ (respectively, $R[[x,x^{-1};\sigma]]$). The modules $M[x,x^{-1};\sigma]$ and $M[[x,x^{-1};\sigma]]$ are called the [*skew Laurent polynomial extension*]{} and the [*skew Laurent power series extension*]{} of $M$, respectively. In [@zhang/chen], a module $M_R$ is called $\sigma$-[*skew Armendariz*]{} if, whenever $m(x)f(x)=0$ for $m(x)=\sum_{i=0}^sm_ix^i\in M[x;\sigma]$ and $f(x)=\sum_{j=0}^ta_jx^j\in R[x;\sigma]$, we have $m_i\sigma^i(a_j)=0$ for all $i$ and $j$.
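To make the scalar product $(*)$ concrete, here is a small computational sketch over $M = R = \mathbb{F}_4$ with $\sigma$ the Frobenius map $a\mapsto a^2$; the encoding of $\mathbb{F}_4$ as coefficient pairs over $\mathrm{GF}(2)$ and all function names are our own illustration, not the paper's notation.

```python
# F_4 elements are pairs (c0, c1) representing c0 + c1*w with w^2 = w + 1
# over GF(2); sigma is the Frobenius endomorphism u -> u^2.

def f4_mul(u, v):
    (a, b), (c, d) = u, v
    return ((a * c + b * d) % 2, (a * d + b * c + b * d) % 2)

def frobenius(u):                 # (c0 + c1*w)^2 = (c0 + c1) + c1*w
    a, b = u
    return ((a + b) % 2, b)

def sigma_pow(u, i):
    for _ in range(i):
        u = frobenius(u)
    return u

def skew_mul(m, f):
    """Coefficient list of m(x)f(x) in M[x;sigma], per the rule (*):
    c_k = sum over i+j=k of m_i * sigma^i(a_j)."""
    out = [(0, 0)] * (len(m) + len(f) - 1)
    for i, mi in enumerate(m):
        for j, aj in enumerate(f):
            p = f4_mul(mi, sigma_pow(aj, i))
            out[i + j] = ((out[i + j][0] + p[0]) % 2,
                          (out[i + j][1] + p[1]) % 2)
    return out
```

In particular the sketch reproduces the defining skew relation $x\cdot w = \sigma(w)\,x$ of $R[x;\sigma]$.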
--- author: - 'Mari Carmen Bañuls,' - 'Krzysztof Cichy,' - Karl Jansen - and Hana Saito bibliography: - 'MPSSchwinger.bib' title: Chiral condensate in the Schwinger model with Matrix Product Operators --- Introduction {#sec:intro} ============ Investigations of gauge field theories within the Hamiltonian approach have progressed substantially in recent years with the help of tensor network (TN) techniques [@verstraete08algo; @cirac09rg; @orus2014review]. Taking the example of the Schwinger model, numerical calculations have been performed to investigate ground state properties [@Byrnes:2002nv; @Cichy:2012rw; @Banuls:2013jaa; @Banuls:2013zva; @Rico:2013qya; @Buyens:2015dkc], to demonstrate real-time dynamics [@Buyens:2013yza; @Buyens:2014pga] and to address the phenomenon of string breaking [@Pichler:2015yqa; @Buyens:2015tea], which has also been explored in non-Abelian models [@Kuhn:2015zqa]. In Refs. [@Banuls:2015sta; @Saito:2014bda; @Saito:2015ryj], thermal properties of the Schwinger model were studied for massless fermions. From a more conceptual point of view, TN states have been developed that incorporate the gauge symmetry by construction and constitute ground states of gauge-invariant lattice models [@Tagliacozzo:2014bta; @Silvi:2014pta; @haegeman15gauging; @zohar2015peps]. Yet a different line of work is the study of potential quantum simulations of these models, using ultracold atoms; see Refs. [@wiese2013review; @Zohar:2015hwa; @Dalmonte:2016alw] for a review. Also in this field, TN techniques can play a decisive role in studying the feasibility of the proposals [@kuehn2014schwinger]. These latest numerical developments go beyond standard Markov Chain Monte Carlo (MC-MC) methods. At zero temperature, the Hamiltonian approach allows us to go substantially closer to the continuum limit and reach a much improved accuracy compared to MC-MC.
When temperature is switched on, a broad and very large set of non-zero temperature points can be evaluated, ranging from very high to almost zero temperature. In the string breaking calculation, a nice picture of the string breaking phenomenon and the emergence of the hadron states can be demonstrated. Finally, real-time simulations are not even possible in principle with MC-MC methods. The key to this success is the employment of tensor network states and, in the case of one spatial dimension, as for the Schwinger model, the Matrix Product States (MPS). In this approach, which is closely linked to the Density Matrix Renormalization Group (DMRG) [@white92dmrg], the problem, which has an exponentially large dimension in terms of the system size, is reduced to an –admittedly– sophisticated variational solution which can be encoded in substantially smaller $D\times D$ matrices. The ansatz can represent arbitrary states in the Hilbert space if $D$ is large enough (exponential in the system size). In numerical applications, instead, one usually finds an approximation to the desired state within the set of MPS with fixed $D$. By varying $D$, an extrapolation of results to $D\to \infty$ can be performed, thus allowing one to reach the solution of the real system under consideration. A different approach also using tensor network techniques was applied to the Schwinger model with a topological $\theta$-term in Refs. [@Shimizu:2014uva; @Shimizu:2014fsa], where the exact partition function on the lattice was expressed as a two-dimensional tensor network and approximately contracted using the Tensor Renormalization Group (TRG). The application of the MPS technique discussed in the present paper is concerned with non-zero temperature properties of the Schwinger model. In Refs. [@Banuls:2015sta; @Saito:2014bda; @Saito:2015ryj], we have for the first time investigated the thermal evolution of the chiral condensate in the Schwinger model.
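The MPS compression just described can be illustrated with a generic textbook construction (our own sketch, unrelated to the production codes behind the paper): a state of $L$ spins is split into site tensors by sequential singular value decompositions, keeping at most $D$ singular values per bond.

```python
import numpy as np

def to_mps(psi, L, D):
    """Split a 2**L state vector into L tensors of shape (Dl, 2, Dr),
    truncating every bond to at most D singular values."""
    tensors, rest, chi = [], psi.reshape(1, -1), 1
    for _ in range(L - 1):
        rest = rest.reshape(chi * 2, -1)
        u, s, vh = np.linalg.svd(rest, full_matrices=False)
        keep = min(D, s.size)
        tensors.append(u[:, :keep].reshape(chi, 2, keep))
        rest = s[:keep, None] * vh[:keep]   # push weights to the right
        chi = keep
    tensors.append(rest.reshape(chi, 2, 1))
    return tensors

def from_mps(tensors):
    """Contract the MPS back into a dense state vector."""
    out = tensors[0]
    for a in tensors[1:]:
        out = np.tensordot(out, a, axes=([-1], [0]))
    return out.reshape(-1)
```

For $L=6$, a bond dimension $D=8=2^{L/2}$ reproduces any state exactly, while $D=2$ gives a genuine truncation — the same trade-off that the $D\to\infty$ extrapolation above controls.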
In the first paper, where we only studied the massless case, we could demonstrate that the MPS technique can be successfully used to compute such a thermal evolution from very high to almost zero temperature. For massless fermions, the results from our MPS calculation could be confronted with the analytical solution of Ref. [@Sachs:1991en] and a very nice agreement was found demonstrating the correctness and the power of the MPS approach. In the present paper, we will extend our calculations of the thermal evolution of the chiral condensate to the case of non-vanishing fermion masses. Here, no exact results exist anymore, but only approximate solutions are available [@Hosotani:1998za] which can be tested against our results. For our work at zero fermion mass, we also introduced a truncation of the charge sector [@Banuls:2015sta] which was necessary to obtain precise results at high temperature. Here, we will employ this truncation method, too. It needs to be stressed that the calculations with MPS, as performed here, have a number of systematic uncertainties which are very important to control. This concerns in particular:

- an estimate of results for infinite bond dimension; [^1]
- an extrapolation to zero step size in the thermal evolution process;
- a study of the truncation in the charge sector of the model;
- an infinite volume extrapolation;
- and a careful analysis of the continuum limit employing various extrapolation functions with different orders in the lattice spacing.

Controlling these systematic effects renders the calculations with MPS demanding, but it is absolutely necessary to obtain precise and trustworthy results. We have therefore made a significant effort to perform the above extrapolations and we will provide various examples in this paper for the studies of systematic effects carried through here.
The Schwinger model and chiral symmetry breaking {#sec:schwinger} ================================================ The one-flavour Schwinger model [@schwinger62], i.e. Quantum Electrodynamics in 1+1 dimensions, is one of the simplest gauge theories and a toy model allowing for studies of new lattice techniques before applying them to real theories of interest, like Quantum Chromodynamics (QCD). Despite its apparent simplicity, it has a non-perturbatively generated mass gap and shares some features with QCD, such as confinement and chiral symmetry breaking, although the mechanism of the latter is different from that in QCD – it is not spontaneous, but results from the chiral anomaly. We start with the Hamiltonian of the Schwinger model in the staggered discretization, derived and discussed in Ref. [@Banks:1975gq]: $$\begin{aligned} \label{eq:H} H &=& x \displaystyle \sum_{n=0}^{N-2} \left[ \sigma_n^+ \sigma_{n+1}^- + \sigma_n^- \sigma_{n+1}^+ \right] +\frac{\mu}{2} \sum_{n=0}^{N-1} \Big[ 1+ (-1)^n \sigma_n^z \Big] + \sum_{n=0}^{N-2} \left[ L(n) \right] ^2\\ &\equiv& H_{hop} + H_m + H_g,\nonumber \end{aligned}$$ where $n$ is the site index, $x=1/g^2a^2$, $a$ is the lattice spacing, $g$ is the coupling, and $\mu=2m/g^2a$ with $m$ denoting the fermion mass and $N$ the number of lattice sites. We use open boundary conditions (OBC). The gauge field, $L(n)$, can be integrated out using the Gauss law: $$L(n+1) - L(n) = \frac{1}{2} \left[ (-1)^{n+1} + \sigma_{n+1}^z \right]. \label{eq:Gausslaw}$$ Thus, only $L(n)$ at one of the boundaries is an independent parameter and we take $L(0)=0$, i.e. no background electric field. We work with the following basis for our numerical computations: $\left| s_0 s_1 \cdots \right\rangle$  [@Banuls:2013jaa], where $s_n=\{\downarrow,\uparrow\}$ is the spin state at site $n$ and all the gauge degrees of freedom have been integrated out.
In this paper, we are interested in the chiral symmetry breaking ($\chi$SB) in the Schwinger model, both at zero and non-zero temperature. The order parameter of $\chi$SB is the chiral condensate $\Sigma=\left\langle {\bar \psi}\psi \right\rangle$, which can be written in terms of spin operators as $\Sigma = \frac{g\sqrt{x}}{N} \sum_n (-1)^n \frac{1+\sigma_n^z}{2}$. The ground state and thermal expectation values of the chiral condensate diverge logarithmically in the continuum limit for non-zero fermion mass [@deForcrand98; @duerr05scaling; @Christian:2005yp].
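For orientation, the spin Hamiltonian above can be diagonalized exactly for a very small chain. The following sketch is our own illustration (arbitrary couplings $x=1$, $\mu=0.5$, a tiny chain of $N=4$ sites, and $g=1$ units for the condensate), with $L(n)$ eliminated through the Gauss law and $L(0)=0$; it is not the MPS method of the paper.

```python
import numpy as np

N, x, mu = 4, 1.0, 0.5
sp = np.array([[0.0, 1.0], [0.0, 0.0]])        # sigma^+ (basis: up, down)
sz = np.diag([1.0, -1.0])
I = np.eye(2**N)

def op(single, n):
    """Embed a single-site operator at site n of the N-site chain."""
    out = np.array([[1.0]])
    for m in range(N):
        out = np.kron(out, single if m == n else np.eye(2))
    return out

H = np.zeros((2**N, 2**N))
for n in range(N - 1):                         # hopping term H_hop
    H += x * (op(sp, n) @ op(sp.T, n + 1) + op(sp.T, n) @ op(sp, n + 1))
for n in range(N):                             # mass term H_m
    H += 0.5 * mu * (I + (-1)**n * op(sz, n))
L_op = np.zeros_like(I)                        # L(0) = 0 contributes nothing
for n in range(1, N - 1):                      # H_g, with L(n) from the Gauss law
    L_op = L_op + 0.5 * ((-1)**n * I + op(sz, n))
    H += L_op @ L_op

E, V = np.linalg.eigh(H)
gs = V[:, 0]                                   # ground state
cond = sum((-1)**n * 0.5 * (I + op(sz, n)) for n in range(N))
Sigma = np.sqrt(x) / N * (gs @ cond @ gs)      # chiral condensate, g = 1
```

The construction only checks basic consistency (hermiticity, boundedness of $\Sigma$); the physics of the continuum and infinite-volume limits is exactly what requires the MPS machinery of the paper.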
--- abstract: 'We examine the quantum tunneling process in Bose condensates of two interacting species trapped in a double well configuration. We discover the condition under which particles of different species can tunnel as pairs through the potential barrier between two wells in opposite directions. This novel form of tunneling is due to the interspecies interaction that eliminates the self-trapping effect. The correlated motion of tunneling atoms leads to the generation of quantum entanglement between two macroscopically coherent systems.' address: | [Department of Physics, The Chinese University of Hong Kong,]{}\ [Shatin, NT, Hong Kong, China]{} author: - 'H. T. Ng, C. K. Law, and P. T. Leung' title: 'Entangled quantum tunneling of two-component Bose-Einstein condensates' --- Quantum tunneling of macroscopically coherent systems is an intriguing phenomenon well known in the context of Josephson junction effects in superconducting electronic systems. For superfluids consisting of neutral particles, detailed investigations of tunneling are aided by the recent realization of Bose-Einstein condensation of atomic vapor in a well controllable environment. Indeed, recent experiments have successfully demonstrated quantum tunneling for condensates confined in an array of optical potentials [@kasevich1; @burger]. One prominent feature of tunneling in Bose condensates is the nonlinear dynamics arising from the interaction between atoms. Quite remarkably, for single-component condensates trapped in double-well configurations, previous studies have indicated that a self-trapping mechanism can suppress the tunneling rate significantly by increasing the atom-atom interaction strength [@shenoy; @walls; @raghavan; @williams]. An interesting extension of the tunneling problem involves Bose condensates of two interacting species (Fig. 1).
The main issue is how the interspecies interaction affects the tunneling process, and particularly the quantum coherence as the two condensates mix together. Previous studies of the general properties of two-component Bose condensates have emphasized the important role of the interspecies interaction, which leads to novel features, such as the separation of the components [@phase1; @phase2], cancellation of the mean field energy shift [@pu], and the suppression of quantum phase diffusion [@law]. However, the investigation of the influence of interspecies interaction on tunneling dynamics has only just begun [@lobo; @pu2]. In this paper we present a novel tunneling mechanism for a two-component condensate trapped in a double-well (see Fig. 1). The atoms of the component $A(B)$ are initially prepared in the left (right) potential well. We discover the condition under which the interspecies interaction can eliminate the self-trapping effect and thus enhances the tunneling significantly. Such an enhanced tunneling originates from the correlated quantized motion of the two condensates. We also show that atoms of different species tunnel through the barrier as correlated pairs in opposite directions, i.e., a form of [*quantum entangled tunneling*]{}. Therefore tunneling serves as a mechanism to build up a strong correlation among atoms of different species, and this leads to the generation of quantum entanglement between two multi-particle systems. The configuration of our double-well system is sketched in Fig. 1. Our focus in this paper is the quantum dynamics beyond the mean field description. An exact many-body description is difficult even for single-component condensate problems. The usual method to capture the essential physics is based on the two-mode approximation in which the evolution is confined to the left and right localized mode functions associated with the respective potential wells [@shenoy; @walls; @raghavan; @williams; @juha].
Such an approximation is valid when each potential well is sufficiently deep so that higher modes of the wells essentially do not participate in the dynamics. In the two-mode approximation, the system is modeled by the Hamiltonian $(\hbar =1)$, $$\begin{aligned} H &=& \frac{\Omega}{2} ({\aL}a_{R}+{\aR}a_{L}+{\bL}b_{R}+{\bR}b_{L}) \nonumber \\ && +\frac{\kappa_{a}}{2}\left[ ({\aL}a_{L})^{2}+({\aR}a_{R})^{2}\right] \nonumber \\ && + \frac{\kappa_{b}}{2}\left[({\bL}b_{L})^{2}+({\bR}b_{R})^{2}\right] \nonumber \\ && +\kappa ({\aL}a_{L}{\bL}b_{L}+{\aR}a_{R}{\bR}b_{R}). \label{Hamiltonian}\end{aligned}$$ Here the subscripts $L$ and $R$ respectively denote the localized modes in the left and right potential wells. Since there are two modes available for each component, the model in fact consists of four mode operators. We use $a^{\dag}_j$ and $b^{\dag}_j$ $(j=L,R)$ to denote the creation operators of the component $A$ and $B$ respectively. The parameters $\Omega$, $\kappa_a(\kappa_b)$ and $\kappa$ describe the tunneling rate, self-interaction strength of the component $A(B)$ and the interspecies interaction strength respectively. To gain insight into the quantum correlation developing in the tunneling process, we first consider the exactly solvable case with only one $A$ atom in the left well and one $B$ atom in the right well. In this case the system is spanned by four basis vectors: $|1,0\rangle_{A}|1,0\rangle_{B}$, $|1,0\rangle_{A}|0,1\rangle_{B}$, $|0,1\rangle_{A}|1,0\rangle_{B}$ and $|0,1\rangle_{A}|0,1\rangle_{B}$, where $|p,q\rangle_{S}$ denotes the state with $p$ atoms of species $S$ $(S=A,B)$ in the left well and $q$ atoms of species $S$ in the right well. The eigenvalues and eigenvectors of $H$ can be found straightforwardly.
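The two-atom case can also be checked numerically. In the basis $\{|LL\rangle,|LR\rangle,|RL\rangle,|RR\rangle\}$ (positions of atoms $A$ and $B$), and dropping the constant single-atom self-energy $(\kappa_a+\kappa_b)/2$, the Hamiltonian reduces to a $4\times4$ matrix. The sketch below (our own, with arbitrary values $\Omega=1$, $\kappa=50$ in the strong-interaction regime) evolves $|1,0\rangle_A|0,1\rangle_B$ and confirms the pair-swap oscillation at the effective frequency $\Omega^2/2\kappa$.

```python
import numpy as np

Omega, kappa = 1.0, 50.0                 # regime kappa >> Omega
H = np.array([[kappa,   Omega/2, Omega/2, 0.0    ],
              [Omega/2, 0.0,     0.0,     Omega/2],
              [Omega/2, 0.0,     0.0,     Omega/2],
              [0.0,     Omega/2, Omega/2, kappa  ]])
E, V = np.linalg.eigh(H)

def evolve(psi0, t):                     # exact evolution via the spectrum
    return V @ (np.exp(-1j * E * t) * (V.conj().T @ psi0))

omega0 = Omega**2 / (2.0 * kappa)        # effective tunneling frequency
psi0 = np.array([0, 1, 0, 0], dtype=complex)     # |LR>: A left, B right
psi = evolve(psi0, 0.5 * np.pi / omega0)         # time of a complete swap
p_swap = abs(psi[2])**2                  # |RL>: the pair has tunneled
p_same_well = abs(psi[0])**2 + abs(psi[3])**2
bell = evolve(psi0, 0.25 * np.pi / omega0)       # halfway: Bell-like state
```

The numerics show that the probability of finding both atoms in the same well stays negligible while the pair swaps wells, and that at a quarter of the swap time the state is an (almost) equal superposition of $|LR\rangle$ and $|RL\rangle$.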
In the regime where the interspecies interaction is sufficiently strong such that $\kappa \gg \Omega$, the state vector evolves as $$\begin{aligned} |\Psi(t)\rangle &=& e^{-i[(\kappa_{a}+\kappa_{b})/2-\omega_0]{t}} [\cos{\omega_{0}{t}}|1,0\rangle_{A}|0,1\rangle_{B} \nonumber \\ && + i\sin{\omega_{0}{t}}|0,1\rangle_{A}|1,0\rangle_{B}] + O (\Omega / \kappa) . \label{2atom-state}\end{aligned}$$ In writing Eq. (\[2atom-state\]) we have defined $\omega_0 = \Omega^2/2 \kappa$ as an effective tunneling frequency. Because of the strong interaction between the atoms, the probability of finding both particles in the same well at any time $t$ is negligible (of order $\Omega^2/\kappa^2$). The tunneling motion of the two atoms are anti-correlated in the sense that the atom $A$ and the atom $B$ always move in opposite directions. Such an anti-correlated tunneling motion gives rise to quantum entanglement between the two atoms. At time $t=(n+1/4)\pi / \omega_0$, ($n=$ integers), the state is a form of Bell’s state that is maximally entangled in the two-particle two-mode subspace. Now we examine the multiple atoms case. In order to facilitate the discussion, we assume the number of particles is the same for the two components, i.e., $N_a=N_b=N$, and the condensates share the same interaction strength, i.e., $\kappa_a=\kappa_b=\kappa$. The latter condition is a good approximation to $^{87}$[Rb]{} condensate of atoms in hyperfine spins states $|F=2,m_{f}=1{\rangle}$ and $|F=1,m_{f}=-1{\rangle}$, which share similar scattering lengths [@phase2]. However, we emphasize that these assumptions are not crucial; we shall relax these conditions later in the paper. We shall limit our study to the $4 \kappa {\gg} N\Omega$ regime where the nonlinear interaction is dominant. As before we consider the initial condition in which all atoms in the component $A(B)$ are localized in the left (right) potential well.
The general form of the state vector at time $t$ is given by: $|{\Psi}(t){\rangle}= e^{-i{\kappa}N^{2}t} \sum_{n=0}^{N}\sum_{m=0}^{N} {c}_{n,m}(t) |n,N-n{\rangle}_A|m,N-m{\rangle}_B$. The amplitudes ${c}_{n,m} (t)$ are governed by the Schrödinger equation according to the Hamiltonian (\[Hamiltonian\]): $$\begin{aligned} \label{qpamp} i\dot{{c}}_{n,m}&=&\frac{\Omega}{2} \left[ \sqrt{(n+1)(N-n)}{{c}}_{n+1,m} \right. \nonumber \\ && \left. \ \ \ \ \ \ + \sqrt{n(N-n+1)}{{c}}_{n-1,m}\right] \nonumber\\ && + \frac{\Omega}{2}\left[\sqrt{(m+1)(N-m)}{{c}}_{n,m+1
**Semi-Finite Forms of Bilateral Basic Hypergeometric Series** [William Y. C. Chen]{}$^{1}$ and [Amy M. Fu]{}$^{2}$ Center for Combinatorics, LPMC\ Nankai University, Tianjin 300071, P.R. China Email: $^1$chen@nankai.edu.cn, $^2$fu@nankai.edu.cn [**Abstract.**]{} We show that several classical bilateral summation and transformation formulas have semi-finite forms. We obtain these semi-finite forms from unilateral summation and transformation formulas. Our method can be applied to derive Ramanujan’s $_1\psi_1$ summation, Bailey’s $_2\psi_2$ transformations, and Bailey’s $_6\psi_6$ summation. [**Corresponding Author:**]{} William Y. C. Chen, Email: chen@nankai.edu.cn [**AMS Classification:**]{} 33D15 [**Keywords:**]{} Bilateral hypergeometric summation, semi-finite forms, Ramanujan’s ${}_1\psi_1$ summation, Bailey’s ${}_2\psi_2$ transformations, Bailey’s ${}_6\psi_6$ summation. Introduction ============ We follow the terminology for basic hypergeometric series in [@GR]. Assuming $|q|<1$, let $$(a;q)_\infty = (1-a) (1-aq) (1-aq^2) \cdots .$$ For any integer $n$, the $q$-shifted factorial $(a;q)_n$ is given by $$(a;q)_n = { (a;q)_\infty \over (aq^n;q)_\infty}.$$ For $n\geq 0$, we have the following relation which is crucial for this paper: $$\label{Defi} (a;q)_{-n}= \frac{1}{(aq^{-n};q)_n}={ (-q/a)^{n} q^{\binom{n}{2} }\over (q/a;q)_{n}} .$$ For convenience, we employ the following usual notation: $$(a_1, a_2, \ldots, a_m;q)_n=(a_1;q)_n(a_2;q)_n\ldots(a_m;q)_n.$$ The (unilateral) basic hypergeometric series $_{r+1}\phi_r$ is defined by $$\begin{aligned} \label{Hype} _{r+1}\phi_r\left[ \begin{array}{c} a_1, a_2, \cdots, a_{r+1}\\ b_1, b_2, \cdots, b_{r} \end{array};q, z \right]=\sum_{k=0}^{\infty}A(k),\end{aligned}$$ where $$A(k)=\frac{(a_1, a_2, \cdots, a_{r+1};q)_k}{(b_1, b_2, \cdots, b_r,q;q)_k}z^k.$$ The bilateral basic hypergeometric series $_s\psi_s$ is defined as follows, $$\begin{aligned} \label{Bila} _s\psi_s\left[ \begin{array}{c} a_1, a_2, \cdots, a_{s}\\ b_1, b_2,
\cdots, b_{s} \end{array};q, z \right]=\sum_{k=-\infty}^{\infty} B(k),\end{aligned}$$ where $$B(k)=\frac{(a_1, a_2, \cdots, a_{s};q)_k}{(b_1, b_2, \cdots, b_s;q)_k}z^k.$$ In this paper, we propose the following method of deriving bilateral summation and transformation formulas using [*semi-finite forms*]{}. For a bilateral series $_s\psi_s$ as given in (\[Bila\]), we construct a summand $G(k,m)$ which implies a unilateral series $_{r+s+1}\phi_{r+s}$, where $r$ is a nonnegative integer, such that $$\lim _{m \rightarrow \infty }G(k,m)=B(k)$$ for all $k$, and the summation $$\label{gn0} \sum_{k=-m}^\infty G(k,m)$$ can be easily accomplished as a Laurent extension of the summation $$\label{laurants} \sum_{k=0}^\infty G(k-m, m)= G(-m,m)\sum_{k=0}^\infty A(k),$$ where $G(k,m)$ can be written as $$G(k-m, m) = G(-m,m) A(k)$$ for some $A(k)$. The bilateral series (\[Bila\]) is then obtained from (\[gn0\]) as $m\to\infty$, subject to suitable convergence conditions. We apply this procedure to derive bilateral series identities from suitable unilateral ones. The above summation (\[gn0\]) is called the [ *semi-finite form*]{} of the bilateral summation (\[Bila\]). A method similar to ours was recently used by Schlosser [@SCHL], and Jouhet and Schlosser [@SCHL04], who derived summations for bilateral series from [*finite forms*]{}. We also note that another method, which uses a similar factorization as above, for deriving bilateral series identities from unilateral ones was used by Ismail [@Ismail], and Askey and Ismail [@AsIs]. Rather than taking limits, they apply analytic continuation as the main ingredient. In this paper, we present semi-finite forms of several classical bilateral summation and transformation formulas such as Ramanujan’s $_{1}\psi_1$ formula, Bailey’s $_2\psi_2$ transformations, and Bailey’s $_6\psi_6$ summation. 
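Both the relation (\[Defi\]) and Ramanujan's $_1\psi_1$ summation can be sanity-checked numerically with truncated products. The following is an illustrative floating-point computation of ours (not a proof), with parameters chosen inside the convergence region $|b/a|<|z|<1$ of the $_1\psi_1$ sum.

```python
def poch(a, q, n):
    """(a;q)_n for n >= 0, and (a;q)_{-m} = 1/(aq^{-m};q)_m for n < 0."""
    if n < 0:
        return 1.0 / poch(a * q**n, q, -n)
    out = 1.0
    for i in range(n):
        out *= 1.0 - a * q**i
    return out

def poch_inf(a, q, terms=200):           # truncated infinite product
    return poch(a, q, terms)

def psi11(a, b, q, z, K=60):
    """Bilateral sum of (a;q)_k/(b;q)_k z^k for -K < k < K, via term ratios."""
    total, term = 1.0, 1.0
    for k in range(1, K):                # k > 0
        term *= (1.0 - a * q**(k - 1)) / (1.0 - b * q**(k - 1)) * z
        total += term
    term = 1.0
    for m in range(1, K):                # k = -m < 0, ratios from (Defi)
        term *= (1.0 - b * q**(-m)) / (1.0 - a * q**(-m)) / z
        total += term
    return total

def defi_rhs(a, q, m):                   # (-q/a)^m q^binom(m,2) / (q/a;q)_m
    return (-q / a)**m * q**(m * (m - 1) // 2) / poch(q / a, q, m)

a, b, q, z = 0.8, 0.1, 0.5, 0.5          # |b/a| = 0.125 < |z| = 0.5 < 1
lhs = psi11(a, b, q, z)
rhs = (poch_inf(q, q) * poch_inf(b/a, q) * poch_inf(a*z, q) * poch_inf(q/(a*z), q)) \
    / (poch_inf(b, q) * poch_inf(q/a, q) * poch_inf(z, q) * poch_inf(b/(a*z), q))
```

Building each term from the ratio of consecutive terms avoids the floating-point overflow that the raw products $(aq^{-m};q)_m$ would produce for large $m$.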
From $_2\phi_1$ to $_1\psi_1$ ============================= Using the well known Gauss summation formula $$\label{Gauss} _2\phi_1\left[ \begin{array}{c} a,b\\ c \end{array};q, c/ab \right]=\frac{(c/a,c/b;q)_{\infty}}{(c,c/ab;q)_{\infty}},$$ where $|c/ab|<1$, we get a semi-finite form of Ramanujan’s summation of the general $_1\psi_1$, $$\label{Ran} _{1}\psi_1 \left[ \begin{array}{l} a\\ b \end{array};q,z\right]=\sum_{k=-\infty}^{\infty}\frac{(a;q)_k}{(b;q)_k}z^k= \frac{(q;q)_{\infty}(b/a;q)_{\infty}(az;q)_{\infty}(q/az;q)_{\infty}} {(b;q)_{\infty}(q/a;q)_{\infty}(z;q)_{\infty}(b/az;q)_{\infty}},$$ where $|b/a|<|z|<1$. For $|z|<1$, the following identity holds: \[theo\] $$\label{r-f} \sum_{k=-m}^{\infty}\frac{(a;q)_k(bq^{m}/az;q)_k}{(q^{1+m};q)_k(b;q)_k}z^k =\frac{(q;q)_m(q/az;q)_m}{(q/a;q)_m(b/az;q)_m} \frac{(b/a;q)_{\infty}(az;q)_{\infty}}{(b;q)_{\infty}(z;q)_{\infty}}.$$ [*Proof.*]{} The left hand side of (\[r-f\]) can be rewritten as $$\begin{aligned} \lefteqn{\sum_{k=0}^{\infty}\frac{(a;q)_{k-m}(bq^{m}/az;q)_{k-m}} {(q^{1+m};q)_{k-m}(b;q)_{k-m}}z^{k-m}}\\[6pt] &=&z^{-m}\frac{(a;q)_{-m}(bq^{m}/az;q)_{-m}}{(q^{1+m};q)_{-m}(b;q)_{-m}} \sum_{k=0}^{\infty}\frac{(aq^{-m};q)_k(b/az;q)_k}{(q;q)_k(bq^{-m};q)_k}z^k\\[6pt] &\overset{(\ref{Gauss})}{=}&z^{-m}\frac{(a;q)_{-m}(bq^{m}/az;q)_{-m}}{(q^{1+m};q)_{-m}(b;q)_{-m}} \frac{(b/a;q)_{\infty}(azq^{-m};q)_{\infty}}{(bq^{-m};q)_{\infty}(z;q)_{\infty}} \\[6pt] &\overset{(\ref{Defi})}{=}&z^{-m}\frac{(q;q)_{m}(azq^{-m};q)_m}{(aq^{-m};q)_m (b/az;q)_m}\frac{(az;q)_{\infty}(b/a;q)_{\infty}}
--- abstract: | Let $q = p^s$ be a power of a prime number $p$ and let ${\mathbb{F}_q}$ be the finite field with $q$ elements. In this paper we obtain the explicit factorization of the cyclotomic polynomial $\Phi_{2^nr}$ over ${\mathbb{F}_q}$ where both $r \geq 3$ and $q$ are odd, $\gcd(q,r) = 1,$ and $n\in \mathbb{N}.$ Previously, only the special cases when $r = 1,\ 3,\ 5,$ had been treated. For this we make the assumption that the explicit factorization of $\Phi_r$ over ${\mathbb{F}_q}$ is given to us as known. Let $n = p_1^{e_1}p_2^{e_2} \cdots p_s^{e_s}$ be the factorization of $n \in \mathbb{N}$ into powers of distinct primes $p_i,\ 1\leq i \leq s.$ In the case that the orders of $q$ modulo all these prime powers $p_i^{e_i}$ are pairwise coprime we show how to obtain the explicit factors of $\Phi_{n}$ from the factors of each $\Phi_{p_i^{e_i}}.$ We also demonstrate how to obtain the factorization of $\Phi_{mn}$ from the factorization of $\Phi_n$ when $q$ is a primitive root modulo $m$ and $\gcd(m,n) = \gcd(\phi(m),{\operatorname{ord}}_n(q)) = 1.$ Here $\phi$ is Euler’s totient function, and ${\operatorname{ord}}_n(q)$ denotes the multiplicative order of $q$ modulo $n.$ Moreover, we present the construction of a new class of irreducible polynomials over ${\mathbb{F}_q}$ and generalize a result due to Varshamov (1984) [@Varshamov]. address: - 'School of Mathematics and Statistics, Carleton University, 1125 Colonel By Drive, Ottawa, Ontario, K1S 5B6, Canada.' - 'School of Mathematics and Statistics, Carleton University, 1125 Colonel By Drive, Ottawa, Ontario, K1S 5B6, Canada.' author: - Aleksandr Tuxanidy - Qiang Wang title: Composed Products and Explicit Factors of Cyclotomic Polynomials over Finite Fields --- [^1] Introduction ============ Composed Products and Applications ---------------------------------- Let $q = p^s$ be a power of a prime $p,$ and ${\mathbb{F}_q}$ be a finite field with $q$ elements.
The multiplicative version of composed products of two polynomials $f,\ g \in {\mathbb{F}_q}[x]$ (or composed multiplication for short) defined by $$(f \odot g)(x) = \prod_{\alpha}\prod_{\beta} (x - \alpha \beta)$$ where the product $\prod_\alpha \prod_{\beta}$ runs over all roots $\alpha,\ \beta$ of $f,\ g$ respectively, was first introduced by Selmer (1966) [@Selmer] for the purposes of studying linear recurrence sequences (LRS). Informally, LRS’s are sequences whose terms depend linearly on a finite number of its predecessors; thus a famous example of a LRS is the Fibonacci sequence whose terms are the sum of the previous two terms. Let $k$ be a positive integer and let $a,a_0,\dots,a_{k-1}$ be given elements in ${\mathbb{F}_q}.$ Then a sequence $S = \{s_0,s_1,\dots\}$ of elements $s_i \in {\mathbb{F}_q}$ satisfying the relation $$s_{n+k} = a_{k-1}s_{n+k-1} + a_{k-2}s_{n+k -2} + \dots + a_0s_n + a,{\hspace*{2em}}n=0,1,\dots$$ is a LRS. If $a = 0,$ then $S$ is called a *homogeneous* LRS. If we let $k = 2,\ a = 0,\ a_0 = a_1 = 1$ and $s_0 = 0,\ s_1 = 1$ then $S$ becomes the (homogeneous) Fibonacci sequence. LRS’s have applications in coding theory, cryptography, and other areas of electrical engineering where electric switching circuits such as linear feedback shift registers (LFSR) are used to generate them. See Chapter 8 in [@Lidl] for this and a general introduction. In particular, the matter of the linear complexity of a LRS, and more generally, the linear complexity of the component wise multiplication of LRS’s, is of great importance in stream cipher theory, a branch in cryptography; here a higher complexity is preferred. See [@Gao] for instance and the references contained therein. Since the linear complexity of a LRS is given by the degree of the minimal polynomial of the LRS, minimal polynomials with higher degrees are therefore preferred. 
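A linear recurrence sequence is easy to generate directly from the defining relation above; the following is a toy sketch of ours (plain modular arithmetic, names illustrative), not an LFSR implementation.

```python
def lrs(coeffs, a, init, q, length):
    """Generate s_0, s_1, ... over Z/qZ from the relation
    s_{n+k} = a_{k-1} s_{n+k-1} + ... + a_0 s_n + a,
    where coeffs = [a_0, ..., a_{k-1}] and init = [s_0, ..., s_{k-1}]."""
    k = len(coeffs)
    s = list(init)
    while len(s) < length:
        s.append((sum(c * s[-k + i] for i, c in enumerate(coeffs)) + a) % q)
    return s
```

With $k=2$, $a=0$, $a_0=a_1=1$ and $s_0=0$, $s_1=1$ this reproduces the (homogeneous) Fibonacci sequence reduced modulo $q$.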
The polynomial $$f(x) = x^k -a_{k-1}x^{k-1} - a_{k-2}x^{k-2} - \dots - a \in {\mathbb{F}_q}[x]$$ is called the *characteristic polynomial of S* (see [@Lidl]). In 1973, Zierler and Mills [@Zierler] showed that the characteristic polynomial of a component wise multiplication of homogeneous LRS’s is the composed multiplication of the characteristic polynomials of the respective LRS’s. That is, if $S_1,S_2,\dots,S_r$ are homogeneous LRS’s with respective characteristic polynomials $f_1,f_2,\dots, f_r,$ then the characteristic polynomial of $S_1S_2 \cdots S_r,$ with component wise multiplication, is given by $f_1 \odot f_2 \odot \dots \odot f_r.$ We refer the reader to page 433-435 in [@Lidl] as well. Note that since the required minimal polynomials are factors of the characteristic polynomials $f_1 \odot f_2 \odot \dots \odot f_r$ of LRS’s, the study of factorizations of composed products has an important significance. Thus composed products have applications in stream cipher theory, LFSR, and LRS in general. Similarly, the *composed sum* of $f, g \in {\mathbb{F}_q}[x]$ is defined by $$(f \oplus g)(x) = \prod_\alpha \prod_\beta (x - (\alpha + \beta))$$ where the product runs over all the roots $\alpha$ of $f$ and $\beta$ of $g,$ including multiplicities. In 1987, Brawley and Carlitz [@Brawley; @and; @Carlitz] generalized composed multiplications and composed sums in the following. [**[@Brawley; @and; @Carlitz] (Composed Product)**]{} Let $G$ be a non-empty subset of the algebraic closure $\Gamma_q$ of ${\mathbb{F}_q}$ with the property that $G$ is invariant under the Frobenius automorphism $\alpha \mapsto \sigma(\alpha) = \alpha^q$ (i.e., if $\alpha \in G,$ then $\sigma(\alpha) \in G$). 
Suppose a binary operation $\diamond$ is defined on $G$ satisfying $\sigma(\alpha \diamond \beta) = \sigma(\alpha)\diamond \sigma(\beta)$ for all $\alpha,\beta \in G.$ Then the *composed product* of $f$ and $g,$ denoted by $f \diamond g,$ is the polynomial defined by $$(f \diamond g)(x) = \prod_\alpha \prod_\beta (x - (\alpha \diamond \beta)),$$ where the $\diamond$-products run over all roots $\alpha$ of $f$ and $\beta$ of $g.$ Observe that $\deg (f \diamond g) = (\deg f)(\deg g)$ clearly. Moreover, in [@Brawley; @and; @Carlitz] it is noted that when $G = {\Gamma_q}-\{0\}$ (respectively, ${\Gamma_q}$) and $\diamond$ is the usual multiplication (respectively, addition) then $f \diamond g$ becomes $f \odot g$ (respectively, $f \oplus g$). Other less common examples are \(i) $G = {\Gamma_q},\ \alpha \diamond \beta = \alpha + \beta - c$ where $c \in {\mathbb{F}_q}$ is fixed. \(ii) $G = {\Gamma_q}- \{1\},\ \alpha \diamond \beta = \alpha + \beta - \alpha\beta$ (sometimes called the circle product), and \(iii) $G =$ any $\sigma$-invariant subset of ${\Gamma_q}, \alpha \diamond \beta = f(\alpha,\beta)$ where $f(x,y)$ is any fixed polynomial in ${\mathbb{F}_q}[x,y]$ such that $f(\alpha,\beta) \in G$ for all $\alpha, \beta \in G.$ Let $M_G[q,x]$ be the set of all monic polynomials in ${\mathbb{F}_q}[x]$ of degree at least one whose roots all lie in $G.$
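The composed multiplication can be illustrated numerically straight from its definition. This is a floating-point sketch of ours over the complex numbers, suitable only for tiny degrees; over ${\mathbb{F}_q}$ one would instead compute $f\odot g$ symbolically, e.g. via resultants.

```python
import numpy as np

def composed_mul(f, g):
    """(f ⊙ g)(x) = product over all root pairs of (x - alpha*beta).
    f, g are coefficient lists, highest degree first (numpy convention)."""
    pairs = [al * be for al in np.roots(f) for be in np.roots(g)]
    return np.real_if_close(np.poly(pairs))
```

For $f=(x-2)(x-3)$ and $g=x-5$ the root products are $10$ and $15$, so $f\odot g=x^2-25x+150$; the degree of the result is $(\deg f)(\deg g)$, as noted above.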
--- abstract: 'Turbulent Rayleigh-Bénard convection displays a large-scale order in the form of rolls and cells on lengths larger than the layer height once the fluctuations of temperature and velocity are removed. These turbulent superstructures are reminiscent of the patterns close to the onset of convection. They are analyzed by numerical simulations of turbulent convection in fluids at different Prandtl number ranging from 0.005 to 70 and for Rayleigh numbers up to $10^7$. For each case, we identify characteristic scales and times that separate the fast, small-scale turbulent fluctuations from the gradually changing large-scale superstructures. The characteristic scales of the large-scale patterns, which change with Prandtl and Rayleigh number, are also found to be correlated with the boundary layer dynamics, and in particular the clustering of thermal plumes at the top and bottom plates. Our analysis suggests a scale separation and thus the existence of a simplified description of the turbulent superstructures in geo- and astrophysical settings.' author: - Ambrish Pandey - 'Janet D. Scheel' - Jörg Schumacher title: 'Turbulent superstructures in Rayleigh-Bénard convection' --- Large temperature differences across a horizontally extended fluid layer induce a turbulent convective fluid motion which is relevant in numerous geo- and astrophysical systems [@Kadanoff2001]. These flows are typically highly turbulent with very large Rayleigh numbers $Ra$, the parameter that quantifies the intensity of the thermal driving in convection. From the classical perspective of turbulence one would expect a chaotic, irregular motion of differently sized vortices and thermal plumes. Rather than such a featureless stochastic fluid motion, some turbulent flows in nature display an organization into prominent and regular flow patterns that persist for times long compared to an eddy turnover time and extend over lengths which are larger than the height scale. 
Examples are cloud streets in the atmosphere [@Markson1975] or granulation networks at the solar surface [@Nordlund2009] and other stars [@Michel2008]. This large-scale order will be termed a turbulent superstructure. It is observed in turbulent convection flows with very different molecular dissipation properties. The Prandtl number $Pr=\nu/\kappa$, another dimensionless parameter which relates kinematic viscosity $\nu$ to temperature diffusivity $\kappa$, is for example very small for stellar convection, $Pr\lesssim 10^{-3}$ [@Spiegel1962; @Thual1992; @Hanasoge2016]. It is 0.7 for atmospheric flows and 7.0 for heat transport in the oceans. Rayleigh-Bénard convection (RBC) is the simplest turbulent convection flow evolving in a planar fluid layer of height $H$ that is uniformly heated with a temperature $T=T_b$ from below and cooled from above with $T=T_t$ such that $T_b-T_t=\Delta T>0$. The Rayleigh number is given by $Ra=g\alpha \Delta T H^3/(\nu\kappa)$ with $g$ being the acceleration due to gravity and $\alpha$ the thermal expansion coefficient. RBC can be considered as a paradigm for many applications [@Ahlers2009; @Chilla2012] that usually contain further physical processes, such as radiation [@Christensen1996] and phase changes [@Stevens2005; @Pauluis2011], and additional fields such as magnetic fields [@Aurnou2010]. Numerical simulations of convection [@Hartlep2003; @Hartlep2005; @Rincon2005; @Hardenberg2008; @Bailon2010; @Emran2015] have enabled researchers to access the large-scale structure formation in turbulent convection flows. Long-term investigations at very small Prandtl numbers $Pr\ll 0.1$ require simulations on massively parallel supercomputers in order to resolve the highly inertial turbulence properly. Such simulations have not been done before and this is a central motivation for the present study. 
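For orientation, the two control parameters just defined are straightforward to evaluate numerically. The snippet below uses rough textbook property values for water near room temperature; these numbers are illustrative assumptions, not values taken from the paper:

```python
# Rayleigh and Prandtl numbers: Ra = g*alpha*dT*H^3/(nu*kappa), Pr = nu/kappa.
# Property values are rough textbook numbers for water (~20 C), assumed here.
g = 9.81        # gravitational acceleration, m/s^2
alpha = 2.1e-4  # thermal expansion coefficient, 1/K
nu = 1.0e-6     # kinematic viscosity, m^2/s
kappa = 1.4e-7  # thermal diffusivity, m^2/s
dT = 1.0        # temperature difference across the layer, K
H = 0.1         # layer height, m

Ra = g * alpha * dT * H**3 / (nu * kappa)
Pr = nu / kappa
# Ra is of the order of the largest Rayleigh numbers in the study; Pr ~ 7 (water).
print(f"Ra = {Ra:.2e}, Pr = {Pr:.1f}")
```

Even a modest 10 cm water layer with a 1 K temperature difference already reaches $Ra\sim 10^7$, which is why laboratory convection is so easily turbulent.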
At the onset of convection, $Ra_c=1708$, straight convection rolls have a unique and Prandtl-number-independent wavelength, $\lambda_c\approx 2H$ [@Jeffreys1928; @Chandrasekhar1961]. For $Ra\gtrsim Ra_c$, these rolls become susceptible to secondary linear instabilities causing modulations, such as Eckhaus, zig-zag or oscillatory patterns [@Busse1978; @Cross1993; @Bodenschatz2000]. These secondary instabilities depend strongly on the Prandtl number of the working fluid and the wavenumber range of the plane-wave perturbation to the straight convection rolls in the layer [@Busse1978]. Dependencies on Rayleigh and Prandtl numbers of the pattern wavelength for $Ra>Ra_c$ have been studied systematically in RBC experiments in air, water and silicone oil by Willis et al. [@Willis1972]. Average roll widths tend to increase with $Ra$, which the authors attributed to increasingly unsteady three-dimensional motions. The trend with growing $Pr$ is less systematic [@Hartlep2003] and accompanied by hystereses at $Pr\gg 1$ [@Willis1972]. Roll and cell patterns of the velocity field in a [*turbulent*]{} RBC for $Ra\gtrsim 10^5$ that are reminiscent of the flow structures in the weakly nonlinear regime at $Ra \lesssim 5\times 10^3$ have been observed in recent DNS at $Pr\gtrsim 1$ [@Bailon2010; @Emran2015]. Their detection requires an averaging over a time interval that should be long enough to remove the turbulent fluctuations in the fields effectively and yet short enough to not wash away the large-scale structures [@Emran2015]. A sliding time average with an appropriate time window width should thus be able to separate the fast, small-scale turbulent fluctuations of velocity and temperature from the gradual variation of the large-scale superstructure patterns. Physically, this time window should be connected with the turnover time of fluid parcels in the superstructure rolls and cells. 
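The sliding-average separation described above can be sketched on synthetic data: a slowly varying "superstructure" signal plus fast fluctuations, split by a moving time average. The window length and signal shapes below are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 100.0, 4001)
slow = np.sin(2 * np.pi * t / 50.0)         # slow "superstructure" signal
fast = 0.5 * rng.standard_normal(t.size)    # fast, small-scale fluctuations
u = slow + fast

win = 201                                   # window ~ several turnover times
kernel = np.ones(win) / win
U = np.convolve(u, kernel, mode="same")     # sliding time average U(t)
u_prime = u - U                             # residual fast fluctuations

# Away from the edges, the sliding average recovers the slow signal closely.
err = np.max(np.abs(U[win:-win] - slow[win:-win]))
print(f"max recovery error of the slow signal: {err:.3f}")
```

The key trade-off mirrors the text: a longer window suppresses the fluctuations more effectively, but a window comparable to the slow signal's period starts to wash the pattern away.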
The determination of this averaging time scale as a function of $Ra$ and $Pr$ is a second motivation for the present study. In the present work, we report an analysis of the characteristic spatial and temporal scales of turbulent superstructures in RBC by means of three-dimensional direct numerical simulations (DNS) spanning more than four orders of magnitude in $Pr$ and more than three orders in $Ra$. All simulations reported here solve the Boussinesq equations of motion and are performed in an extended closed square cell with an aspect ratio of 25:25:1. We identify the characteristic averaging time scales, $\tau(Ra, Pr)$, which will be connected with a characteristic spatial scale (or wavelength) that can be determined by a spectral analysis of the turbulent superstructures. Our study of large-aspect-ratio turbulent RBC extends to very small Prandtl numbers with values significantly below 0.1, which have not been obtained before. The gradual evolution of the patterns at all Prandtl numbers is confirmed by radially averaged, azimuthal power spectra that reveal a gradual switching of the orientation of the superstructures which is reminiscent of cross-roll or skewed varicose instabilities that are well-known from the weakly nonlinear regime of RBC. Furthermore, we compare the characteristic pattern scale in the bulk of the RBC flow to the scales of plumes and plume clusters that are present in the boundary layers in the vicinity of the top and bottom walls. The temperature patterns in the bulk are found to be correlated with the most prominent ridges in the vertical temperature field derivative at the bottom and top plates which in turn are correlated with the wall stresses of the advecting velocity. 
Our analysis provides characteristic separation time and length scales for turbulent convection flows in extended domains and thus opens the possibility to describe the superstructure patterns in turbulent convection by effective and reduced models that separate the fast, small scales from the slow, large scales. These reduced models can advance our understanding of a variety of turbulent systems that exhibit large-scale pattern formation, including mesoscale convection and solar granulation. Results {#results .unnumbered} ======= [**Superstructures for different Rayleigh and Prandtl numbers.**]{} Figure \[fig0\] shows the velocity field lines (top row) and the corresponding temperature contours in the midplane (bottom row) for a simulation at one of the lowest Prandtl numbers in our study. While the instantaneous pictures display the expected irregularity of a turbulent flow as visible for example by the streamline tangle in panel (a), the averaged data reveal a much more ordered pattern. We also see that the superstructure patterns are more easily discerned in temperature field snapshots than in those of the velocity field. Figure \[fig2\] confirms this observation. Here, we plot the root mean square (rms) values of the vertical velocity component $u_z$ and the temperature $T$. In agreement with Fig. \[fig0\], we split both fields into contributions coming from the time average over the time interval $\tau$ and the fluctuations, $$\begin{aligned} u_z({\bm x},t)&=U({\bm x})+u_z^{\prime}({\bm x},t)\,,\\ T({\bm x},t)&=\Theta({\bm x})+T^{\prime}({\bm x},t)\,.\end{aligned}$$ The averaging volume $\tilde{V}$ is a slab around the midplane. See Eqns. (\[uvf\]) and (\[tvf\]) later in the text for definitions of $U$ and $\Theta$. It can be seen that the rms values of the total and time-averaged temperature are always close together when Prandtl and Rayleigh numbers are varied. This is in contrast to the vertical velocity component. 
Fluctuations dominate here when the Prandtl numbers are low and the Rayleigh numbers are sufficiently high. An averaging with respect to time is thus
--- abstract: 'We comment on zero- and low-temperature structural phase transitions, expecting that these comments might be relevant not only for this structural case. We first consider a textbook model whose classical version is the only model for which the Landau theory of phase transitions and the concept of “soft mode” introduced by Ginzburg are exact. Within this model, we reveal the effects of quantum fluctuations and thermal ones at low temperatures. To do so, knowledge of the dynamics of the model is needed. However, as was already emphasized by Ginzburg [*et al.*]{} in the eighties, a realistic theory for such a dynamics at high temperatures is lacking, which also seems to be the case in the low-temperature regime. Consequently, some theoretical conclusions turn out to depend on the assumptions about this dynamics. We illustrate this point with the low-temperature phase diagram, and discuss some unexpected shortcomings of the continuous-medium approaches.' author: - 'A. Cano' - 'A. P. Levanyuk' title: 'On low-temperature structural phase transitions' --- Introduction ============ Zero- and low-temperature ($T$) phase transitions are nowadays a subject of great interest (see, e.g., Refs. [@Sondhi97; @Kvyatkovskii01; @Vojta03; @Belitz05; @Sachdev00] for recent reviews). The special case of structural phase transitions deserves, in our opinion, special attention. First, it is very convenient when introducing the topic of low-$T$ phase transitions although, to the best of our knowledge, this pedagogical facet of the structural case has not been developed in the literature. One of the purposes of the present paper is precisely to develop this facet. Second, the discussion of structural transitions allows us to reveal some unsolved problems which might be of fairly broad interest. It is worth mentioning that our study will be restricted to the region of small fluctuations (not very close to the phase-transition point). 
This region is normally not the region of main interest in the aforementioned papers, but the main specific features of the phase-transition anomalies are clearly seen already in this region, not to mention that for the interpretation of experimental data this region is quite often the most relevant one. A considerable part of the theory of low-$T$ structural phase transitions is very simple. Its formulation uses elementary formulas of quantum and statistical mechanics, and its development involves fairly simple mathematics. Nevertheless, this elementary theory suffices to discuss some points of general interest such as the validity of the Landau theory, the soft-mode concept, the role of quantum fluctuations in defining the phase-transition point, the specific features of the low-$T$ phase diagram, etc. This constitutes the first part of the paper where, for pedagogical reasons, we use a very simple model. Nevertheless, even within this elementary treatment, there arise some questions as well as not completely justified assumptions which will be discussed in the second part of the paper. These questions and assumptions refer to the character of the dynamics of the order parameter near the zero- and low-$T$ phase transitions. This character has not been successfully explained for high-$T$ phase transitions: the origin of the so-called central peak in the soft-mode spectrum is understood only qualitatively [@Ginzburg80]. For zero- and low-$T$ structural transitions this question has not been studied at all, although the dynamics of the order parameter is much more important here. Indeed, according to classical statistical mechanics the static properties of the system do not depend on its dynamics. This is because (Gaussian) integration over momenta simply gives a factor in the corresponding partition function. But the situation is different when quantum effects play a role. 
In this case, the partition function does not factorize because momenta and coordinates, now operators, do not commute with each other [@note1]. Therefore, a lack of exact knowledge of the dynamics impedes obtaining definite results for, e.g., such a “static” property as the dependence of the phase-transition temperature on a control parameter (e.g., strain or pressure) in the low-$T$ region. Given this situation, we will discuss several possibilities without proposing a definite conclusion about which of them corresponds to reality. For this discussion we need no model at all, and the system is considered in this second part as a continuous medium. The single-ion model ==================== The so-called single-ion model (see, e.g., Ref. [@Strukov_Levanyuk]) is very convenient for illustrating a zero-$T$ structural phase transition. Within this model one assumes, first of all, that the crystal is composed of two types of atoms, say $A$ and $B$. Our aim is to describe “active” $A$-atoms in the simplest way, so we further assume that i) the sublattice of $B$-atoms can only be deformed homogeneously and ii) the interaction between $A$-atoms is a nearest-neighbor interaction mediated by springs. Additionally, there is an interaction between $A$ and $B$ atoms which is responsible for the relative position of the corresponding sublattices. Restricting ourselves to the orthorhombic case, let us choose the unit cell with $B$-atoms placed at the apices of the corresponding cell (see Fig. \[fig:1\]). Thus, the potential acting on $A$-atoms due to the $B$ ones has to be symmetric with respect to the center of this cell. This is so if this potential has i) a minimum in the center of the unit cell or ii) two symmetric out-of-center minima. In the following we shall assume that i) is the case when the crystal is strongly compressed and then, along the $z$-axis, it turns into case ii) as the compression diminishes (see Fig. \[fig:1\]). 
This makes possible a change in the mean position of $A$-atoms, i.e. a phase transition, in a fairly simple way. The potential energy of the system then can be written as $$\begin{aligned} U= U_0 +\sum_{\bm R}\left({a\over 2}u_{\bm R}^2 + {b\over 4}u_{\bm R}^4\right) +\sideset{}{'}\sum_{\bm R,\bm R'}{c\over 2} (u_{\bm R}-u_{\bm R'})^2, \label{potential}\end{aligned}$$ where $u_{\bm R}$ represents the displacement of the $A$-atom along the $z$-axis in the $\bm R$th unit cell. The first sum in this expression represents the effective potential acting on $A$-atoms due to $B$ ones. Let us characterize the compression of the system by the magnitude $w= (V_0 - V)/V_0$, where $V $ is the volume of the system and $V_0$ is this volume at zero pressure for the (nonequilibrium) configuration in which all $A$-atoms are maintained in the center of the corresponding unit cells (i.e., $u_{\bm R}=0$). Thus, taking $a=\alpha(w-w_0)$, with $\alpha>0$ and $b$ a positive constant, $w_0$ gives the strain at which the form of this potential changes from one-well to two-well. \[The usually small difference between $V$ and $V_0$ ($|w|,|w_0|\ll 1$) turns out to be relevant for the change in the sign of the coefficient $a$ only, so we shall not distinguish between $V$ and $V_0$ anywhere but here.\] The second sum in Eq. is the interaction potential between $A$-atoms, where $c$ is the stiffness coefficient of the springs linking pairs of $A$-atoms (see Fig. \[fig:1\]) and the summation is carried out over nearest neighbors only. Static properties: A classical zero-$T$ transition -------------------------------------------------- Let us suppose at this point that the mass of the $A$-atoms is infinite, so that they can be treated as classical particles. Consequently, the configuration of the system will be the one which simply minimizes the potential energy. The static properties of the system will be in accordance with this configuration, so let us proceed to determine it. 
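A minimal numerical sketch of this classical minimization, with illustrative parameter values not taken from the text: since the springs are unstretched in the uniform ground state, each atom sits in a minimum of the on-site potential $V(u)=\frac{a}{2}u^2+\frac{b}{4}u^4$, which gives $u_0=0$ for $a\geq 0$ and $u_0=\pm\sqrt{-a/b}$ for $a<0$ (a standard result for the quartic double well):

```python
import numpy as np

# On-site double-well potential V(u) = (a/2) u^2 + (b/4) u^4, a = alpha*(w - w0).
# alpha, w0, b are illustrative model constants, not values from the text.
alpha, w0, b = 1.0, 0.2, 1.0

def order_parameter(w):
    """Classical equilibrium displacement u0 of the A-atoms at compression w."""
    a = alpha * (w - w0)
    return 0.0 if a >= 0 else np.sqrt(-a / b)   # u0 = +/- sqrt(-a/b) below the transition

# Strong compression (w > w0): single well, symmetric phase, u0 = 0.
# Reduced compression (w < w0): double well, broken symmetry, u0 != 0.
print(order_parameter(0.5), order_parameter(0.0))
```

The square-root growth of $u_0$ as $w$ decreases through $w_0$ is exactly the Landau mean-field behavior that this model reproduces.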
![The model: unit cell, effective potential acting on $A$-atoms due to $B$ ones, and an illustration of the interaction between $A$-atoms.[]{data-label="fig:1"}](SingleIonModel_a){width=".1\textwidth"} ![The model: unit cell, effective potential acting on $A$-atoms due to $B$ ones, and an illustration of the interaction between $A$-atoms.[]{data-label="fig:1"}](SingleIonModel_b "fig:"){width=".175\textwidth"} ![The model: unit cell, effective potential acting on $A$-atoms due to $B$ ones, and an illustration of the interaction between $A$-atoms.[]{data-label="fig:1"}](SingleIonModel_c "fig:"){width=".275\textwidth"} It is clear that the minimum of the potential energy corresponds to the configuration in which the springs linking $A$-atoms do not experience any deformation. So all the atoms will be located in the same minimum of the effective potential created by $B$-atoms: $u_{\bm R}=u_0$. Eq. then reduces to $$\begin{aligned} U= U_0 + N \Big( {a\over 2}u_{0}^2 + {b\over 4
--- abstract: 'We establish a Galois-theoretic interpretation of cohomology in semi-abelian categories: cohomology with trivial coefficients classifies central extensions, also in arbitrarily high degrees. This allows us to obtain a duality, in a certain sense, between “internal” homology and “external” cohomology in semi-abelian categories. These results depend on a geometric viewpoint of the concept of a higher central extension, as well as the algebraic one in terms of commutators.' address: - 'Departamento de Matemática, Faculdade de Ciências e Tecnologia, Universidade do Algarve, Campus de Gambelas, 8005–139 Faro, Portugal' - 'Centro de Matemática, Universidade de Coimbra, 3001–454 Coimbra, Portugal' - 'Institut de Recherche en Mathématique et Physique, Université catholique de Louvain, chemin du cyclotron 2 bte L7.01.02, 1348 Louvain-la-Neuve, Belgium' author: - Diana Rodelo - Tim Van der Linden title: | Higher central extensions\ and cohomology --- [^1] [^2] Introduction {#introduction .unnumbered} ============ This article exposes a hidden duality between “internal” homology and “external” cohomology for certain group-like structures: we prove that cohomology with trivial coefficients classifies (higher) central extensions. Together with the work in low dimensions and with several closely related results in homology theory, this reveals a deep connection between Galois theory and cohomology, and a close link with homology which has been invisible so far. The context in which we work is sufficiently general to cover cohomology of, say, groups, crossed modules, Lie algebras and non-unitary rings, as well as the Yoneda $\operatorname{Ext}$ groups in the abelian case, and many new examples can easily be added to the list. In fact, almost any semi-abelian category would do, as long as it satisfies a certain commutator condition which occurs naturally in this setting—see below. 
This interpretation of cohomology is part of a bigger programme which intends to understand homological algebra in a non-abelian environment from the viewpoint of (categorical) Galois theory. Related results include, for instance, higher Hopf formulae for homology in semi-abelian categories [@EGVdL], higher-dimensional torsion theories [@Everaert-Gran-TT], a theory of satellites for homology without projectives [@GVdL2], and higher-dimensional commutator theory based on a notion of higher centrality [@EverVdL4; @EverVdLRCT]. Higher centrality {#higher-centrality .unnumbered} ----------------- The key novelty in the present approach to (co)homology of non-abelian algebraic objects is the concept of *higher centrality*. It allows us to express in an abstract but simple way the commutator conditions which we have to deal with. Following the ideas of Janelidze [@Janelidze:Double; @Janelidze:Hopf-talk], the formal theory of (not necessarily central) *higher (cubic) extensions* was first developed in [@EGVdL] in order to provide a general setting for the Brown–Ellis–Hopf formulae [@Brown-Ellis; @Donadze-Inassaridze-Porter]. The notion of *centrality* in the sense of categorical Galois theory [@Borceux-Janelidze; @Janelidze:Pure; @Janelidze-Kelly] depends on a Galois structure, and accordingly, centrality of higher extensions is defined using a tower of Galois structures. Let us make this explicit with a concrete example. Consider the category ${\ensuremath{\mathsf{Gp}}}$ of all groups and its (reflective) subcategory ${\ensuremath{\mathsf{Nil}}}_{2}$ determined by all groups of nilpotency class at most $2$. The induced reflector ${\ensuremath{\mathsf{nil}}}_{2}\colon{{\ensuremath{\mathsf{Gp}}}\to {\ensuremath{\mathsf{Nil}}}_{2}}$, left adjoint to the inclusion functor, takes a group $G$ and sends it to its $2$-nilpotent quotient $G/[[G,G],G]$. 
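As a purely illustrative sanity check of this reflector, not taken from the paper, the order of the 2-nilpotent quotient $G/[[G,G],G]$ can be computed for a small permutation group with sympy's group-theory tools:

```python
from sympy.combinatorics.named_groups import SymmetricGroup

# nil_2 sends G to G/[[G,G],G]; here we only compute the order of that quotient.
G = SymmetricGroup(4)
g2 = G.commutator(G, G)    # [G, G], the derived subgroup (= A_4 for S_4)
g3 = G.commutator(g2, G)   # [[G, G], G], the third term of the lower central series
print(G.order() // g3.order())  # order of the 2-nilpotent quotient of S_4
```

For $S_4$ the lower central series stabilizes at $A_4$, so the 2-nilpotent quotient is $S_4/A_4\cong \mathbb{Z}_2$, of order 2.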
This situation—${\ensuremath{\mathsf{Gp}}}$ being a variety of algebras over ${\ensuremath{\mathsf{Set}}}$, and ${\ensuremath{\mathsf{Nil}}}_{2}$ a subvariety of it—admits a canonical homology theory: Barr–Beck comonadic homology [@Barr-Beck] with coefficients in the reflector ${\ensuremath{\mathsf{nil}}}_{2}$. Now for any group $Z$, the induced third homology group ${\mathrm{H}}_{3}(Z,{\ensuremath{\mathsf{nil}}}_{2})$ of $Z$ may be expressed by a Hopf formula, namely the quotient [@EGVdL Theorem 9.3] $$\frac{K_{0}\cap K_{1}\cap [[X,X],X]}{[[K_{0}\cap K_{1},X],X][[K_{0}, K_{1}],X][[K_{0},X],K_{1}][[X,K_{0}],K_{1}][[X,X],K_{0}\cap K_{1}]}.$$ Here the objects $K_{0}={\operatorname{Ker}(c)}$ and $K_{1}={\operatorname{Ker}(d)}$ are the kernels of $c$ and $d$, for any *two-cubic presentation* $$\label{Double-Extension-Intro} \vcenter{\xymatrix{X \ar@{ >>}[r]^-{c} \ar@{ >>}[d]_-{d} & C \ar@{ >>}[d] \\ D \ar@{ >>}[r] & Z}}$$ of $Z$. This means that the objects $C$, $D$ and $X$ are projective (= free) groups, and furthermore this commutative square is a *two-cubic extension* of $Z$: all its arrows, as well as the induced arrow to the pullback ${\lgroup}d,c{\rgroup}\colon{X\to D\times_{Z}C}$, are surjections. The denominator in the formula is a generalised commutator: a two-cubic extension of groups such as  is central (with respect to ${\ensuremath{\mathsf{Nil}}}_{2}$) precisely when this denominator is zero. 
The concept of *centrality* of two-cubic extensions is given by the Galois structure $\Gamma_{1}$ in the “tower” consisting of $$\Gamma_{0}=({\ensuremath{\mathsf{Gp}}},{\ensuremath{\mathsf{Nil}}}_{2},{\ensuremath{\mathcal{E}}},{\ensuremath{\mathcal{F}}},{\ensuremath{\mathsf{nil}}}_{2},\subseteq)$$ and $$\Gamma_{1}=({\ensuremath{\mathsf{Ext}}}({\ensuremath{\mathsf{Gp}}}),{\ensuremath{\mathsf{CExt}}}_{{\ensuremath{\mathsf{Nil}}}_{2}}({\ensuremath{\mathsf{Gp}}}),{\ensuremath{\mathcal{E}}}^{1},{\ensuremath{\mathcal{F}}}^{1},({\ensuremath{\mathsf{nil}}}_{2})_{1},\subseteq),$$ where ${\ensuremath{\mathcal{E}}}$, ${\ensuremath{\mathcal{F}}}$ are the classes of surjections and ${\ensuremath{\mathcal{E}}}^{1}$, ${\ensuremath{\mathcal{F}}}^{1}$ are the classes of two-cubic extensions in ${\ensuremath{\mathsf{Gp}}}$ and in ${\ensuremath{\mathsf{Nil}}}_{2}$, respectively. Here $\Gamma_{1}$ is induced by $\Gamma_{0}$ through its one-cubic central extensions, which are the objects of the full reflective subcategory ${\ensuremath{\mathsf{CExt}}}_{{\ensuremath{\mathsf{Nil}}}_{2}}({\ensuremath{\mathsf{Gp}}})$ with reflector $({\ensuremath{\mathsf{nil}}}_{2})_{1}$ of the category ${\ensuremath{\mathsf{Ext}}}({\ensuremath{\mathsf{Gp}}})$ of one-cubic extensions in ${\ensuremath{\mathsf{Gp}}}$. It is not hard to construct a two-cubic presentation of an object, certainly not in the varietal case, since a truncation of any simplicial projective resolution will do. As is apparent from the formula, the main difficulty in making it explicit lies in characterising the (two-cubic) central extensions corresponding to the functor which is being derived (in this case, ${\ensuremath{\mathsf{nil}}}_{2}$). Higher cubic central extensions are defined by induction; let us explain how this is done for lowest degrees (more details can be found in the following sections and in the articles [@EverHopf; @EGoeVdL; @EGVdL], amongst others). 
A [semi-abelian]{} category [@Janelidze-Marki-Tholen; @Borceux-Bourn] is pointed, Barr-exact [@Barr] and Bourn-protomodular [@Bourn1991] with binary sums. Let ${\ensuremath{\mathcal{X}}}$ be a semi-abelian category and ${\ensuremath{\mathcal{B}}}$ a [Birkhoff subcategory]{} [@Janelidze-Kelly] of ${\ensuremath{\mathcal{X}}}$—full, reflective and closed under subobjects and regular quotients, so that a Birkhoff subcategory of a variety is nothing but a subvariety. Let $$\label{Adjunction-1} \vcenter{\xymatrix{{{\ensuremath{\mathcal{X}}}} \ar@<1ex>[
--- author: - Simone Secchi subtitle: 'Ph.D. Thesis' title: Nonlinear differential equations on noncompact domains ---
--- abstract: 'The motion of colloids in the flow field of a viscous liquid is investigated. The small size of the colloids compared to the macroscopic scale of the flow allows us to calculate their velocity relative to that of the liquid. If the density of the colloid is larger than the density of the liquid, the flow field has domains where the colloid velocity is close to the liquid velocity. But in domains where the liquid velocity is strongly braked, the colloids are accelerated relative to the liquid. This effect is used for a qualitative explanation of the drag reduction in the flow around macroscopic bodies and in pipes.' author: - | S.V.Iordanski\ Landau Institute for Theoretical Physics RAS\ 142432 Russia, Chernogolovka title: The flow around a macroscopical body by a colloid solution and the drag crisis --- More than 60 years ago [@1] it was discovered that a small concentration of polymers in a liquid solution essentially decreases the drag in pipes. This effect is used in oil transportation. There are many theoretical and experimental publications on this subject. However, there is no accepted qualitative interpretation of the physical origin of the observed drag reduction. The rather detailed paper [@2] uses a complicated theory of the polymer deformation and its dependence on the inner strain but does not state its connection with the liquid flow. The recent work [@3] shows poor agreement of the performed experiment with existing theories. A large number of publications is devoted to the rheological properties of concentrated polymer solutions; see e.g. the review [@4] or the book [@5]. We shall not discuss this complicated subject, assuming that in dilute polymer solutions the main problem is connected with the interaction of a separate polymer and the flow field of the liquid. The more subtle effects of the polymer deformation may be important for a more exact quantitative description. 
In this note we first consider the better-investigated problem of the flow around an immobile macroscopic body by a Newtonian viscous liquid and its modification due to a dilute solution of comparatively large polymer molecules. The description of a large polymer having thousands of connected links is well developed (see e.g. [@7] or [@8]). The equilibrium state is represented by a coil having on average a spherical form with radius $l=\sqrt{\frac{Na^2}{6}}$, where $N$ is the number of links and $a$ is their length. The molecular weight of such a coil is much larger than the molecular weight of the solvent. Therefore it is possible to neglect the Brownian motion, because the thermal velocity of the polymer is small compared to the flow velocities under consideration. To simplify the problem further, we shall treat the polymer as a large, weakly compressible spherical colloid. The largest scale of the motion is given by the size of the macroscopic body $L$, which is much larger than the average distance between the nearest colloids $c^{-1/3}$, where $c$ is the small volume concentration of the colloids. This distance is large compared to the size of the colloid $$L\gg c^{-1/3}\gg l$$ ![the vertical direction coincides with that of the relative velocity](fig1.eps){width="5cm"} The motion of the colloid relative to the liquid ================================================ At low colloid concentration it is possible to use the linear approximation (see e.g. [@9]). Let us consider one colloid in the flow field. At distances large compared to the colloid size $l$, the liquid flow can be considered as uniform. The equations of motion have the form of the standard equations for an incompressible viscous liquid. We assume that the colloid is incompressible as well. At the colloid boundary the liquid velocity and that of the colloid surface are equal, and the tensor of the momentum transfer is continuous. 
The macroscopic motion connected with the large scale $L$ acts as a kind of external force on the separate colloid, and we can use the well-known result (see e.g. [@6]) for the calculation of the force acting on a body (colloid) immersed in the liquid. If the body moves with the liquid, this force is equal to $\rho^lV_0\frac{dv_i^l}{dt}$, where $\rho^l$ is the density of the liquid and $V_0=\frac{4\pi l^3}{3}$ is the volume of the colloid. But in reality one must take into account the relative motion and add $-m_{ik}\frac{d}{dt}(w_i^p-v_i^l)$, where $m_{i,k}$ is the tensor of the associated masses. The relative motion also gives the Stokes friction force $-6\pi\eta l(w_i^p-v_i^l)$. Summing the various contributions, one gets the force acting on the colloid $$\label{2} \rho^pV_0\frac{dw_{i}^p}{dt}=\rho^lV_0\frac{dv_i^l}{dt}-m_{ik}\frac{d}{dt}(w_k^p-v_k^l)- 6\pi\eta l(w_i^p-v_i^l)$$ The use of Stokes' law assumes that the time of “viscous” relaxation $\frac{l^2\rho^l}{\eta}$ is small compared to the “hydrodynamical” time $L/U$, where $U$ is the velocity of the macroscopic body relative to the liquid. The Stokes force is proportional to the first power of the colloid size, and therefore the terms with the accelerations are comparatively small. The zeroth approximation thus gives $w_i^p=v_i^l$, where $v_i^l$ is the local fluid velocity at a small distance $l$ from the colloid. The next approximation can be obtained by substituting the zeroth approximation back into this equation, which gives $$\label{3} w_i^p-v_i^l=-(\rho^p-\rho^l)\frac{2l^2}{9\eta}\frac{dv_i^l}{dt}$$ We shall use this approximation for the relative velocity of the colloid. This local “microscopic” motion contributes to the averaged equations of motion on distances of the order $c^{-1/3}$. If, on the contrary, this contribution is neglected, the colloids have no effect on the averaged equations of motion. 
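A rough numerical illustration of the slip-velocity relation just derived; all parameter values below are assumptions made for the sake of example, roughly corresponding to a heavy colloid in water, not numbers from the text:

```python
# Slip velocity: w - v = -(rho_p - rho_l) * (2 l^2 / (9 eta)) * dv/dt.
# All parameter values are illustrative assumptions (water + a heavy colloid).
rho_l = 1000.0   # liquid density, kg/m^3
rho_p = 1200.0   # colloid density, kg/m^3
l = 1.0e-7       # colloid (coil) radius, m
eta = 1.0e-3     # dynamic viscosity, Pa*s
dvdt = 10.0      # local liquid acceleration along the streamline, m/s^2

slip = -(rho_p - rho_l) * 2.0 * l**2 / (9.0 * eta) * dvdt
print(f"w - v = {slip:.3e} m/s")
# Negative for dv/dt > 0: a heavy colloid lags an accelerating liquid and,
# conversely, runs ahead where the liquid is strongly braked (dv/dt < 0).
```

The tiny magnitude reflects the $l^2$ prefactor: for coil-sized colloids the slip is small, and its sign reversal in braking regions is what the qualitative drag-reduction argument relies on.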
In the Stokes approximation the force acting on a unit area of the spherical colloid's surface is constant, $F_i=-\frac{3\eta}{l}(w_i^p-v_i^l)$ (see e.g. [@6]). Therefore the colloid deformations are absent. In order to find the deformations one needs to consider Oseen's corrections connected with the nonlinear inertial terms. The procedure to find the corrections in the Reynolds number $$Re=\frac{|\vec{w}^p-\vec{v}^l|\,l\,\rho^l}{\eta}$$ developed in [@10; @11] can be found in [@6]. It is necessary to investigate the solution of the equation $$(\vec{u}\vec{\nabla})\vec{v}^l=-\frac{1}{\rho^l}\vec{\nabla}p+\nu\Delta\vec{v}^l$$ where $\nu=\frac{\eta}{\rho^l}$ is the kinematic viscosity and $\vec{u}$ is the relative velocity, equal to its constant value far from the coil. In the external flow field there are two domains: the near one at $r\ll l/Re$ and the far one at $r\gg l$, overlapping at $l/{Re}\gg r\gg l$. In the near domain the starting approximation coincides with the Stokes solution; in the far domain the starting approximation is the Oseen approximation $\vec{u}=const$. Matching the appropriate solutions in the overlapping region gives the corrections to the Stokes solution $\vec{v}^{(1)}$ $$\begin{aligned} v_r^{(2)}=\frac{3Re}{8}v_r^{(1)}+\frac{3Re}{32}\left(1-\frac{1}{r'}\right)\left(2+\frac{1}{r'}+\frac{1}{(r')^2}\right)(1-3cos^2\vartheta)\\ v_\vartheta^{(2)}=\frac{3Re}{8}v_{\vartheta}^{(1)}+\frac{3Re}{32}\left(1-\frac{1}{r'}\right) \left(4+\frac{1}{r'}+\frac{1}{(r')^2}+\frac{2}{(r')^2}\right)sin\vartheta cos\vartheta\end{aligned}$$ Here spherical coordinates with the polar axis along the relative velocity are used, and dimensionless quantities are introduced: $r'$ in units of the colloid radius $l$ and the velocities in units of the relative velocity $u$. 
The calculations are simplified at small velocities and give at the colloid surface the pressure $$p^{(2)}=-(1-3cos^2\vartheta)\frac{3}{8l}\eta Re|\vec{u}^l-\vec{w}^p|$$ and the tangential strain $$\label{9} \sigma_{r\theta}=\frac{3\eta Re}{8l}|\vec{u}^l-\vec{w}^p|sin\vartheta cos\vartheta$$ Figure (1) shows a section of the colloid and the points on its surface with the maximal stresses, together with their directions. There is compression along the direction of the relative velocity and elongation along the meridians, with zeros at the poles and
--- abstract: 'We explore the long-time dynamics of the Rabi model in a driven-dissipative setting and show that, as the atom-cavity coupling strength becomes larger than the cavity frequency, a new time scale emerges. This time scale, much larger than the natural relaxation time of the atom and the cavity, leads to long-lived metastable states amenable to experimental observation. By applying a Floquet-Liouville approach to the time-dependent master equation, we systematically investigate the set of possible metastable states. We find that the properties of the metastable states can differ drastically from those of the steady state and relate these properties to the energy spectrum of the Rabi Hamiltonian.' author: - 'Alexandre Le Boité, Myung-Joong Hwang, and Martin B. Plenio' title: 'Metastability in the driven-dissipative Rabi model' --- Introduction ============ In the context of cavity quantum electrodynamics (QED), a common way to probe the quantum nature of the interaction between light and matter is to drive the system with a classical light field and record the statistics of the photons emitted from the cavity. For example, sub-Poissonian statistics of the output photons is important evidence of effective photon-photon interactions induced by the atom-cavity coupling [@Walls:2007]. Such genuine quantum effects have been observed in a variety of systems, in the so-called strong-coupling regime of cavity QED, when the atom-cavity coupling strength is larger than any dissipation rate [@Rempe:1987; @Reithmaier:2004; @Wallraff:2004; @Peter:2005]. Recently, experimental progress in tailoring the light-matter interaction has made it possible to achieve a coupling strength that is comparable to or even larger than the cavity frequency $\omega_c$ [@Devoret:2007; @Bourassa:2009; @Todorov:2010; @Niemczyk:2010; @Forn-Diaz:2010; @Nataf:2011; @Forn-Diaz:2016; @Yoshihara:2016; @Forn-Diaz:2016b]. 
From a theoretical perspective, the possibility of exploring this so-called ultrastrong coupling regime has stimulated numerous studies on the quantum Rabi model that takes into account the counter-rotating terms in the atom-cavity interaction [@Irish:2007; @Ashhab:2010; @Hwang:2010; @Casanova:2010; @Braak:2011; @Hwang:2015; @Wang:2016]. Since dissipation also plays a crucial role in most quantum optical setups, a meaningful description in this context involves a driven-dissipative scenario [@Ciuti:2006; @DeLiberato:2009; @Beaudoin:2011; @Ridolfo:2012; @Henriet:2014], in which the interplay between cavity losses and the external field drives the system into a steady state. In such a driven-dissipative setting of the Rabi model, it has been shown recently in Ref. [@LeBoite:2016] that as the coupling strength increases from $0.1\omega_c$ to $3\omega_c$, a series of transitions occurs in the output photon statistics, leading to a breakdown and revival of the so-called photon blockade effect and to a reversion to non-interacting photons. It demonstrates that the intricate interplay among the ultrastrong light-matter coupling, the external coherent driving and the dissipation stabilizes the system into a steady state exhibiting a rich quantum optical phenomenology. In this paper, going beyond the study of steady-state properties, we investigate the transient dynamics of the driven-dissipative Rabi model and show that it exhibits metastability in the ultrastrong coupling regime. Namely, we find that the convergence to the steady state is governed by a time scale significantly larger than the decay times of the atom and the cavity, giving rise to long-lived metastable states. When the atom-cavity coupling is much smaller than the cavity frequency, the time dependency of the Liouvillian can be eliminated by a change of reference frame [@Walls:2007]. 
All the information on the dynamics and metastable states is then encoded in the eigenvalues and eigenfunctions of the time-independent Liouvillian [@Risken:1987; @Vogel:1988; @Risken:1988; @Vogel:1989; @Casteels:2016; @Macieszczak:2016]. The breakdown of the rotating-wave approximation in the ultrastrong coupling regime does not allow for such a simple transformation and the master equation remains time-dependent [@Ridolfo:2012; @LeBoite:2016]. To circumvent this issue we employ a Floquet-Liouville approach [@Ho:1986; @Grifoni:1998]: By applying Floquet theory to the Lindblad master equation we reduce the time-dependent master equation to a time-independent eigenvalue problem in an enlarged Hilbert space. Within this theoretical framework, we compute the long-time dynamics in the weak-excitation regime, for a driving field resonant with the second available transition. We find that the corresponding Liouvillian gap becomes significantly smaller than the natural decay rates as one increases the atom-cavity coupling strength and relate this feature to the dressed-state properties of the Rabi Hamiltonian. More specifically, a central role is played by a parity shift occurring in the spectrum, resulting in the existence of two distinct decay channels. Metastability stems from the interplay between the two different time scales involved in these two channels. The Floquet-Liouville formalism also allows us to derive analytical expressions for the set of all possible metastable states in terms of eigenvectors of the Floquet-Liouvillian and to set bounds on the deviations from the steady state. Finally, we discuss practical implications of our analysis for future experiments probing the steady-state properties of the driven-dissipative Rabi model. The paper is organized as follows: The model is introduced in Sec. \[sec:model\]. The first numerical evidence of a separation of time scales in the dynamics and the emergence of metastable states are presented in Sec. \[sec:longtime\]. 
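The reduction at the heart of the Floquet-Liouville approach can be illustrated on a scalar toy problem. The sketch below (plain Python; the toy equation and truncation order are illustrative choices of ours, not taken from the paper) builds the truncated Floquet matrix for $\dot x=(a+b\cos\omega t)\,x$: expanding $x(t)=\sum_k x_k e^{ik\omega t}$ turns the time-periodic problem into a time-independent eigenvalue problem whose diagonal entries are shifted by $-ik\omega$ and whose off-diagonal entries carry the harmonics of the generator. For a matrix-valued Liouvillian, each scalar entry below becomes a superoperator block, which is the enlarged-space structure used in the text.

```python
def floquet_matrix(a, b, omega, k_max):
    """Truncated Floquet matrix for the scalar toy equation
    dx/dt = (a + b*cos(omega*t)) * x, keeping harmonics k = -k_max..k_max.
    The cos drive contributes b/2 to the couplings between adjacent harmonics."""
    ks = list(range(-k_max, k_max + 1))
    dim = len(ks)
    L = [[0j] * dim for _ in range(dim)]
    for i, k in enumerate(ks):
        L[i][i] = a - 1j * k * omega   # static part, shifted by -i*k*omega
        if i > 0:
            L[i][i - 1] = b / 2        # e^{+i omega t} harmonic of the drive
        if i < dim - 1:
            L[i][i + 1] = b / 2        # e^{-i omega t} harmonic of the drive
    return L
```

Diagonalizing this (finite) matrix then plays the role of diagonalizing the time-independent Liouvillian in the rotating-frame case.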
Section \[sec:floquet\] is devoted to the Floquet-Liouville formalism which is applied in Sec. \[sec:meta\] to a more thorough and systematic analysis of metastability. In Sec. \[sec:noise\] we evaluate the robustness of our findings when pure dephasing noise is included in the model and we conclude in Sec. \[sec:conclu\]. More details on Floquet theory are presented in Appendix \[app:floquet\] and the proofs of some spectral properties of the Floquet-Liouville operator are provided in Appendix \[app:meta\]. The model {#sec:model} ========= We consider a single cavity mode coupled to a two-level atom described by the Rabi Hamiltonian, $$\label{Hamilto_r} H_r = \omega_c a^{\dagger}a + \omega_a\sigma_+\sigma_- -g(a+a^{\dagger})\sigma_x,$$ where we have introduced the photon annihilation operator $a$, and the Pauli matrices $\sigma_x$, $\sigma_y$ (with $\sigma_{\pm} = \frac{1}{2}(\sigma_x \pm i\sigma_y)$). Here, $\omega_c$ is the cavity frequency, $\omega_a$ the atomic transition frequency, and $g$ the atom-cavity coupling strength. In the following we will focus on a resonant case, i.e., $\omega_c = \omega_a$. Note that there is no general explicit expression for the eigenstates and eigenvalues of the Rabi model. In the following, it will be convenient to label them by using an important symmetry property of the Hamiltonian, namely that the parity of the total number of excitations, $\Pi=\exp[i\pi(a^\dagger a +\sigma_+\sigma_-)]$, is a conserved quantity. We will denote by $|\Psi_j ^{\pm}\rangle$ the $j^{th}$ eigenstate ($j=0,1,..$) of the $\pm$ parity subspace and by $E_{j}^{\pm}$ the corresponding energy. With these notations, the ground state of $H_r$ is the state $|\Psi_0^{+}\rangle$, which is the lowest energy state of the $+$ parity subspace; while the first excited state of $H_r$, which corresponds to the lowest energy state of the $-$ parity subspace, is $|\Psi_0^{-}\rangle$. 
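The parity structure just described is straightforward to verify numerically. The sketch below (plain Python, with a hypothetical photon-number truncation `n_max` of our choosing) builds the Rabi Hamiltonian of Eq. \[Hamilto\_r\] in a truncated Fock-atom basis and checks that it commutes with $\Pi$, i.e. that the coupling $-g(a+a^\dagger)\sigma_x$ only connects states of equal total-excitation parity, whereas a cavity drive term $F(a+a^\dagger)$ would mix the two parity sectors.

```python
import math

def rabi_hamiltonian(n_max, wc, wa, g):
    """Rabi Hamiltonian in the basis |n, s>, n = 0..n_max photons,
    s = 0 (atomic ground) or 1 (excited); index i = 2n + s."""
    dim = 2 * (n_max + 1)
    idx = lambda n, s: 2 * n + s
    H = [[0.0] * dim for _ in range(dim)]
    for n in range(n_max + 1):
        for s in (0, 1):
            i = idx(n, s)
            H[i][i] = wc * n + wa * s  # omega_c a^dag a + omega_a sigma_+ sigma_-
            # -g (a + a^dag) sigma_x flips the atom and shifts n by +-1
            if n > 0:
                H[i][idx(n - 1, 1 - s)] += -g * math.sqrt(n)      # a term
            if n < n_max:
                H[i][idx(n + 1, 1 - s)] += -g * math.sqrt(n + 1)  # a^dag term
    return H

def parity_diag(n_max):
    # Pi = exp[i pi (a^dag a + sigma_+ sigma_-)] is diagonal: (-1)^(n+s)
    return [(-1) ** (n + s) for n in range(n_max + 1) for s in (0, 1)]

def commutes_with_parity(H, P):
    # [H, Pi] = 0 iff H_ij = 0 whenever the parities of i and j differ
    dim = len(H)
    return all(abs(H[i][j] * (P[j] - P[i])) < 1e-12
               for i in range(dim) for j in range(dim))
```

Sorting the eigenvectors of this matrix by their parity eigenvalue reproduces the $|\Psi_j^{\pm}\rangle$ labeling used above (up to truncation error in the highest levels).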
We focus in this paper on a driven-dissipative scenario where the cavity is driven by a monochromatic coherent field and both the cavity and the atom are coupled to their environments, leading to dissipation. The total time-dependent Hamiltonian of the system is $$\label{Hamilto} H(t) = H_r + F\cos(\omega_dt)(a+a^{\dagger}),$$ where $F$ is the intensity of the driving field and $\omega_d$ its frequency. The time evolution of the density matrix $\rho(t)$ is governed by a master equation of the form, $$\label{ME} \partial_t \rho = i[\rho,H(t)] +\mathcal{L}_a\rho+\mathcal{L}_{\sigma}\rho,$$ where the term $\mathcal{L}_a\rho+\mathcal{L}_{\sigma}\rho$ describes the dissipation of the system excitations into the environment. In the ultrastrong coupling regime, it is
--- author: - Leigh Martin bibliography: - 'LeighBibliography.bib' title: Quantum feedback for measurement and control --- To my parents, my step parents, and my amazing sister Willow. The fact that research is not an isolated endeavor is one of the most important lessons I learned from my time at Berkeley. The quality of one’s research and the joy of doing it are a direct result of one’s collaborators, colleagues and friends. I feel incredibly fortunate to have worked with such a talented, curious and kind group of people. I first wish to thank my advisors, Irfan Siddiqi and Birgitta Whaley, for taking a risk on me by supporting my joint work in theory and experiment. The chance to work with both research groups has been an amazing and irreplaceable opportunity. I wish to thank both of them for providing the perfect balance of encouragement and critique, and guidance and freedom. In my experimental work, I am deeply grateful for the guidance of and collaboration with Shay Hacohen-Gourgy and Emmanuel Flurin, who showed me the ropes of experimental work, helped me discover my flaws and strengths, and never turned down a chance to discuss a crazy idea (no matter how sure they were that it was wrong!). I am also indebted to Mollie Schwartz, who helped give me the opportunity to work in Irfan’s group and introduced the field to me. Her warmth and encouragement made an enormous difference in embarking on a new path in research. In my theory work, I am indebted to the guidance and sharp intuition of Felix Motzoi, whose initial suggestion for a project carried me through a PhD’s worth of theory research, as well as Mohan Sarovar, whose mentorship gave me confidence and stability. I also wish to thank Mahrud Sayrafi, Sissi Wang, Yitian Chen, Song Zhang and Yuxiao Jiang for the meetings all over Berkeley while we carried out our joint projects. 
I looked forward to and enjoyed each and every one of these discussions, which consistently took us in exciting and unexpected directions. In my experimental work, I especially wish to thank Vinay Ramasesh, who taught me the importance of cordial competition and open communication, and William Livingston, who always helped me see the light in the darkness of challenge or my own stubbornness. Many of the general insights that I attempt to convey in this thesis are of their making. I greatly appreciate Machiel Blok’s support, and all of the late afternoons spent tossing around ideas (also my apologies to Esther Blok for the countless times that I made Machiel late). I also wish to thank Sydney Schreppler, Kevin O’Brien, John Mark Kreikebaum, Andrew Eddins, David Toyli and Norman Yao for their support, collaborations and friendship. Finally, an enormous thank you to my parents, step parents, sister and friends in Berkeley, whose support was everything.
--- abstract: 'Using the numerical data of MHD simulation for AGN jets based on our “sweeping magnetic twist model”, we calculated the Faraday rotation measure (FRM) and the Stokes parameters to compare with observations. We propose that the FRM distribution can be used to discuss the 3-dimensional structure of magnetic field around jets, together with the projected magnetic field derived from the Stokes parameters. In the present paper, we consider the basic straight part of the AGN jet, and use the data of an axisymmetric simulation. The FRM distribution we derived has a general tendency to have a gradient across the jet axis, which is due to the toroidal component of the helical magnetic field generated by the rotation of the accretion disk. This kind of gradient in the FRM distribution is actually observed in some AGN jets (e.g. Asada et al. 2002), which suggests a helical magnetic field around the jets and thus supports our MHD model. Following this success, we are now extending our numerical observation to the wiggled part of the jets using the data of a 3-dimensional simulation based on our model in the following paper.' author: - 'Yutaka Uchida, Hiromitsu Kigure, Shigenobu Hirose, Masanori Nakamura, and Robert Cameron' title: | Distribution of Faraday Rotation Measure\ in Jets from Active Galactic Nuclei\ I. Prediction from our Sweeping Magnetic Twist Model --- Introduction ============ To explain the formation of active galactic nucleus (AGN) jets and other astrophysical jets, various models have been proposed. Among them, the magnetohydrodynamic (MHD) model is one of the most promising, since it can explain both the acceleration and the collimation of the jets. Lovelace (1976) and Blandford (1976) first proposed the magnetically driven jet from accretion disks, and Blandford & Payne (1982) discussed magneto-centrifugally driven outflow from a Keplerian disk in a steady, axisymmetric and self-similar situation. 
Uchida & Shibata (1985) performed a time-dependent, two-dimensional axisymmetric simulation in the case of star-forming outflows. They pointed out that large amplitude torsional Alfvén waves (TAW’s) generated by the interaction between the accretion disk and a large scale magnetic field play an important role (details are described in section \[sec:review-model\]). In this paper, we refer to this model as the “sweeping magnetic twist model”. Uchida & Shibata (1986) extended the treatment to the case of AGN jets. After this work, many authors have performed time-dependent, two-dimensional axisymmetric simulations (e.g. Stone & Norman 1994, Ustyugova et al. 1995, Matsumoto et al. 1996, Ouyed & Pudritz 1997, Kudoh, Matsumoto, & Shibata 1998). Acceleration mechanisms in the MHD model were studied in detail using 1.5-dimensional MHD equations (Kudoh & Shibata 1997a, 1997b). Using the numerical data of the MHD model, observational quantities such as the Faraday rotation measure (FRM) or the Stokes parameters have been derived to compare with observations of AGN jets: Laing (1981) computed the total intensity, the linear polarization, and the projected magnetic field distributions, assuming some simple magnetic field configurations and high energy particle distributions in the cylindrical jet. Clarke, Norman, & Burns (1989) performed two-dimensional MHD simulations in which a supersonic jet with a dynamically passive helical magnetic field was computed, and derived distributions of the total intensity, the projected electric field, and the linear polarization. Hardee & Rosen (1999) calculated the total intensity and the projected magnetic field distributions, using 3-dimensional MHD simulations of strongly magnetized conical jets. Hardee & Rosen (2002) calculated the FRM distribution and argued that the radio source 3C465 in Abell cluster A2634 (Eilek & Owen 2002) suggests helical twisting of the flow. 
The FRM is given by the integral of $n_e B_\parallel$ along the line of sight between the emitter and the observer (where $B_\parallel$ is the line-of-sight component of the magnetic field, and $n_e$ is the electron density there). It is, in principle, not possible to specify which part of the line of sight the contribution comes from. However, in recent high-resolution radio observations (e.g. Eilek & Owen 2002, Asada et al. 2002), the FRM distribution seems to have a good correlation with the configuration of the jet; this suggests that the FRM variation is due to the magnetized thermal plasma surrounding the emitting part of the jet. In fact, the sharp FRM gradients seen in 3C273 cannot be produced by a foreground Faraday screen (Taylor 1998, Asada et al. 2002). If this is the case, we can obtain new information, namely the line-of-sight component of the magnetic field, and can thus predict the 3-dimensional configuration of the magnetic field around the jet, together with the projected magnetic field. In this paper, we calculate the FRM, projected magnetic field, and total intensity from the numerical data of the MHD simulation based on our “sweeping magnetic twist model”, and discuss these model counterparts by comparing with some observations. Here we consider the straight part of the jet, and thus use the data of the axisymmetric simulation. In section 2, we review the physics of our “sweeping magnetic twist model”. We introduce the method to calculate model counterparts of observational quantities in section 3, and show the results in section 4. Comparisons of model counterparts with some observations are discussed in section 5. Brief Review of Our “Sweeping Magnetic Twist Model” {#sec:review-model} =================================================== In this section, we briefly review the results of 2.5-dimensional MHD simulations based on our “sweeping magnetic twist model” to discuss the magnetic field around the straight part of jets. 
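The origin of the transverse FRM gradient can be seen in a minimal line-of-sight integration. The sketch below (plain Python; the uniform density, purely toroidal field, unit jet radius, and cgs-style unit convention are all simplifying assumptions of ours, not the simulation data) integrates $0.81\int n_e B_\parallel\,dl$ through a cylindrical jet threaded by a toroidal field: sight lines on opposite sides of the axis pick up line-of-sight field components of opposite sign, producing exactly the antisymmetric FRM profile across the jet discussed in the text.

```python
import math

def frm_through_jet(y_offset, R=1.0, n_e=1.0, B_phi=1.0, n_steps=1000):
    """FRM for a sight line (along z) crossing a cylindrical jet (axis along x)
    filled with uniform n_e and a purely toroidal field of magnitude B_phi.
    With n_e in cm^-3, B in microgauss and lengths in pc, FRM is in rad m^-2."""
    if abs(y_offset) >= R:
        return 0.0
    z_max = math.sqrt(R * R - y_offset * y_offset)
    dz = 2.0 * z_max / n_steps
    rm = 0.0
    for k in range(n_steps):
        z = -z_max + (k + 0.5) * dz       # midpoint rule along the sight line
        r = math.hypot(y_offset, z)
        # toroidal direction in the (y, z) plane is (-z, y)/r, so the
        # line-of-sight (z) component of the field is B_phi * y / r
        B_par = B_phi * y_offset / r if r > 0 else 0.0
        rm += 0.81 * n_e * B_par * dz
    return rm
```

The resulting profile changes sign across the jet axis, which is the signature of the helical (toroidal-component-dominated) field geometry that the full calculation extracts from the simulation data.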
In the following paper, we will extend our treatment to the wiggled part of jets, for which we have given an interpretation using a 3-dimensional MHD simulation based on our model (Nakamura, Uchida, & Hirose 2001). In the original MHD model (Uchida & Shibata 1985) for bipolar outflows in star-forming regions, they considered a gravitational contraction of magnetized gas to form a star (plus an accretion disk). They attributed the large scale magnetic field to the weak field in the Galactic arms. It is strengthened in the process of gravitational contraction of the interstellar gas to the star-forming core, and plays a critical role. The toroidal field is continuously produced from the poloidal field by the rotation of the accretion disk. This causes magnetic braking of the disk material, and the material which loses angular momentum falls gradually toward the central gravitator and releases gravitational energy. A part of the released gravitational energy is supplied to the jets along the magnetic field. The produced toroidal magnetic field propagates in two directions along the bunched large scale magnetic field as large amplitude TAW’s. These TAW’s serve to collimate the large scale poloidal field into the shape of a slender jet by dynamically pinching it during propagation (“sweeping pinch effect”). This process, verified in the simulation, was proposed by Uchida & Shibata (1985) as a generic magnetic effect operating in the formation of astrophysical jets utilizing gravitational energy. The mechanism was applied to the case of AGN jets (Uchida & Shibata 1986) by supposing that a large scale intergalactic magnetic field plays the corresponding role in the formation of a protogalaxy and a giant black hole at its core. They argued that the same process as in the star formation case is applicable to the AGN jet cases with a more or less similar setup (having an accretion disk around the central gravitator, etc.), due to the similarity of the basic equation system. 
One of the possible differences between AGN jets and star formation jets may be the relativistic effects. The effect of general relativity will be appreciable very close to the central giant black hole, at distances comparable to the Schwarzschild radius (Koide, Shibata, & Kudoh 1998). There are regions in which special relativity should be taken into account, when the Alfvén velocity estimated in the classical definition is close to or exceeds the velocity of light. Here in this paper, we concentrate on the essential physical process in the production and collimation of the jet in the non-relativistic range. The problem was treated with the non-linear system of MHD equations in a time-dependent way for the first time when they proposed this model in 1985. The numerical approach was the so-called axial 2.5-dimensional approximation, where the quantities are axisymmetric, but the azimuthal components of vectors are included to allow them to play very essential roles such as the centrifugal effect or the pinch effect. Thus the authors were able to deal with the physical driving and collimating mechanism they proposed to be in operation for astrophysical jets. Figure \[FIG01\] shows the time development in the 2.5-dimensional MHD simulation based on our model. The rotating gas pulls the magnetic field gradually inward, which twists up the magnetic field because the rotational velocity is faster closer to the center (Figure \[FIG02\]). This continuously supplies large amplitude TAW’s (Poynting flux) along the external magnetic field, which pinch the poloidal magnetic field into the shape of a slender jet as discussed above. The gas in the surface of the torus is swirled out in two directions along the axis, both by the magnetic pressure gradient and the