---
abstract: 'In October 2017, numerous women accused producer Harvey Weinstein of sexual harassment. Their stories encouraged other women to voice allegations of sexual harassment against many high profile men, including politicians, actors, and producers. These events are broadly referred to as the \#MeToo movement, named for the use of the hashtag “\#metoo” on social media platforms like Twitter and Facebook. The movement has widely been referred to as “empowering” because it has amplified the voices of previously unheard women over those of traditionally powerful men. In this work, we investigate dynamics of sentiment, power and agency in online media coverage of these events. Using a corpus of online media articles about the \#MeToo movement, we present a *contextual affective analysis*—an entity-centric approach that uses contextualized lexicons to examine how people are portrayed in media articles. We show that while these articles are sympathetic towards women who have experienced sexual harassment, they consistently present men as most powerful, even after sexual assault allegations. While we focus on media coverage of the \#MeToo movement, our method for contextual affective analysis readily generalizes to other domains.[^1]'
author:
- |
Anjalie Field, Gayatri Bhat, Yulia Tsvetkov\
Language Technologies Institute\
Carnegie Mellon University\
{anjalief, ytsvetko}@cs.cmu.edu
bibliography:
- 'references.bib'
title: |
Contextual Affective Analysis:\
A Case Study of People Portrayals in Online \#MeToo Stories
---
Introduction
============
In 2006, Tarana Burke founded the \#MeToo movement, aiming to promote hope and solidarity among women who have experienced sexual assault [@metoo_origins]. In October 2017, following waves of sexual harassment accusations against producer Harvey Weinstein, actress Alyssa Milano posted a tweet with the hashtag \#MeToo and encouraged others to do the same. Her message initiated a widespread movement, calling attention to the prevalence of sexual harassment and encouraging women to share their stories.
Tarana Burke has described her primary goal in founding the movement as “empowerment through empathy.”[^2] However, mainstream media outlets vary in their coverage of these recent events, to the extent that some outlets accuse others of misappropriating the movement. For instance, in January 2018, [Babe.net](Babe.net) published an article written by Katie Way, describing the interaction between anonymous ‘Grace’ and famous comedian Aziz Ansari [@babe]. The article sparked not only instant support for Grace, but also instant backlash criticizing Grace’s lack of agency: “The single most distressing thing to me about this story is that the only person with any agency in the story seems to be Aziz Ansari” [@nyt_grace]. One widely circulated article, written by Caitlin Flanagan and published in The Atlantic, strongly criticized Way’s article and questioned whether modern conventions prepare women to fight back against potential abusers [@atlantic].
The manner in which accounts of sexual harassment portray the people involved affects both the audience’s reaction to the story and the way people involved in these incidents interpret or cope with their experiences [@spry1995absence]. In this work, we use natural language processing (NLP) techniques to analyze online media coverage of the \#MeToo movement. In a *people-centric* approach, we analyze narratives that include individuals directly or indirectly involved in the movement: victims, perpetrators, influential commenters, reporters, etc. Unlike prior work focused on social media [@ribeiro2018media; @rho2018fostering], our work examines the prominent role that more traditional outlets and journalists continue to have in the modern-era online media landscape.
In order to structure our approach, we draw from social psychology research, which has identified three primary affect dimensions: *Potency* (strength vs. weakness), *Valence* (goodness vs. badness), and *Activity* (liveliness vs. torpidity) [@osgood1957measurement; @russell1980circumplex; @russell2003core]. Exact terminology has varied across studies. For consistency with prior work in NLP, we refer to them as **power**, **sentiment**, and **agency**, respectively [@SapConnotationFilms; @RashkinConnotationInvestigation]. In the context of the \#MeToo movement, these dimensions tie closely to the concept of “empowerment through empathy.”
The crux of our method is in developing contextualized, entity-centric connotation frames, where polarity scores are generated for words in context. We generate token-level sentiment, power, and agency lexicons by combining contextual ELMo embeddings [@peters2018deep] with (uncontextualized) connotation frames [@RashkinConnotationInvestigation; @SapConnotationFilms] and use supervised learning to propagate annotations to unlabeled data in our \#MeToo corpus. Following prior work, we first evaluate these models over held-out subsets of the connotation frame annotations. We then evaluate the specifics of our method, namely contextualization and entity scoring, through manual annotations.
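As a schematic illustration of this propagation step (not the authors’ released implementation), the sketch below trains a regressor from contextual token embeddings to seed polarity scores and applies it to an unseen verb occurrence. The encoder here is a deterministic stand-in for ELMo, and the seed verbs and polarity values are invented for illustration only.

```python
# A minimal sketch of lexicon propagation; encoder, verbs, and scores are
# placeholders, not the paper's ELMo pipeline or released lexicons.
import hashlib
import numpy as np
from sklearn.linear_model import Ridge

DIM = 128

def contextual_embedding(sentence, index):
    """Toy contextual encoder: sums deterministic pseudo-random vectors for
    the target token and its neighbors, so the same verb receives different
    vectors in different contexts. A real pipeline would call ELMo here."""
    window = sentence[max(0, index - 2): index + 3]
    vec = np.zeros(DIM)
    for offset, tok in enumerate(window):
        seed = int(hashlib.md5(f"{tok}:{offset}".encode()).hexdigest()[:8], 16)
        vec += np.random.default_rng(seed).normal(size=DIM)
    return vec

# Seed supervision: verb -> power(agent) polarity in [-1, 1] (invented values).
seed_power = {"accused": 1.0, "praised": 0.5, "obeyed": -1.0, "feared": -0.5}

train = [(["she", "accused", "him"], 1), (["they", "praised", "her"], 1),
         (["he", "obeyed", "them"], 1), (["she", "feared", "him"], 1)]
X = np.array([contextual_embedding(sent, i) for sent, i in train])
y = np.array([seed_power[sent[i]] for sent, i in train])

model = Ridge(alpha=1.0).fit(X, y)

# Propagate a score to an unannotated verb occurrence in the corpus.
sent, i = ["reporters", "questioned", "the", "producer"], 1
print("power(agent):", model.predict([contextual_embedding(sent, i)])[0])
```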
\[Figure \[fig:push\]: visualization of connotation frames (image not preserved in this version).\]
We ultimately use these contextualized connotation frames to generate sentiment, power, and agency scores for entities in news articles related to the \#MeToo movement. We find that while the media generally portrays women revealing stories of harassment positively, these women are often not portrayed as having high power or agency, which threatens to undermine the goals of the movement. To the best of our knowledge, this is the first work to introduce *contextual affective analysis*, a method that enables nuanced, fine-grained, and directed analyses of affective social meanings in narratives.
Background
==========
We motivate the development of contextual affective analysis as a people-centric approach to analyzing narratives. Entity-centric models, which focus on people or characters rather than plot or events, have become increasingly common in NLP [@bamman2015people]. However, most approaches rely on unsupervised models [@iyyer2016feuding; @chambers2009unsupervised; @bamman2013learning; @card2016analyzing], which can capture high-level patterns but are difficult to interpret and do not target specific dimensions.
In contrast, we propose an interpretable approach that focuses on power, sentiment, and agency. These dimensions are considered both distinct and exhaustive in capturing affective meaning, in that all three dimensions are necessary and no additional ones are needed; other affective concepts, such as anger or joy, are thought to decompose into these three dimensions [@russell1980circumplex; @russell2003core]. Furthermore, these dimensions form the basis of *affect control theory*, a social psychological model which broadly addresses how people respond emotionally to events and how they attribute qualities to themselves and others [@heise1979understanding; @heise2007expressive; @robinson2006affect]. Affect control theory has served as a model for stereotype detection in NLP [@joseph2017girls].
Furthermore, while automated sentiment analysis has spanned many areas [@pang2008opinion; @liu2012sentiment],[^3] analysis of power has been almost entirely limited to a dialog setting (how does person A talk to a higher-powered person B?) [@gilbert2012phrases; @Prabhakaran2017DialogPower; @danescu2012echoes]. Here, we focus on a *narrative* setting: does the journalist portray person A or person B as more powerful?
In order to develop an interpretable analysis that focuses on sentiment, power, and agency in narrative, we draw from existing literature on *connotation frames*: sets of verbs annotated according to what they imply about semantically dependent entities. Connotation frames, first introduced by @RashkinConnotationInvestigation, provide a framework for analyzing nuanced dimensions in text by combining polarity annotations with frame semantics [@fillmore1982frame]. We visualize connotation frames in Figure \[fig:push\] on the left. More specifically, verbs are annotated across various dimensions and perspectives, so that a verb might elicit a positive sentiment for its subject (i.e., sympathy) but imply a negative effect for its object. We target power, agency, and sentiment of entities through pre-collected sets of verbs that have been annotated for these traits:
- Perspective($writer \rightarrow agent$) – Does the writer portray the agent (or subject) of the verb as positive or negative?
- Perspective($writer \rightarrow theme$) – Does the writer portray the theme (or object) of the verb as positive or negative?
- Power – Does the verb imply that the theme or the agent has power?
- Agency – Does the verb imply that the subject has positive agency or negative agency?
For clarity, we refer to Perspective($writer \rightarrow agent$) as Sentiment($agent$) and Perspective($writer \rightarrow theme$) as Sentiment($theme$) throughout this paper.
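To fix ideas, the following sketch shows one way a connotation frame lexicon of this kind can be represented and applied to subject–verb–object triples to accumulate entity-level scores; the polarity values are invented, and this is not the authors’ implementation.

```python
# A minimal sketch with invented polarity values; not the authors' code.
from collections import defaultdict

# verb -> polarities; for "power", +1 means the agent holds power over the
# theme and -1 the reverse (as with "amuses" in the example below).
FRAMES = {
    "accused": {"sent_agent": 0.0, "sent_theme": -0.7, "power": +1.0, "agency": +1.0},
    "amuses":  {"sent_agent": +0.3, "sent_theme": +0.3, "power": -1.0, "agency": +1.0},
}

def score_entities(triples):
    """Aggregate per-entity sentiment/power/agency from (subj, verb, obj) triples."""
    scores = defaultdict(lambda: defaultdict(float))
    for subj, verb, obj in triples:
        frame = FRAMES.get(verb)
        if frame is None:
            continue  # verb not covered by the lexicon
        scores[subj]["sentiment"] += frame["sent_agent"]
        scores[obj]["sentiment"] += frame["sent_theme"]
        scores[subj]["power"] += frame["power"]
        scores[obj]["power"] -= frame["power"]
        scores[subj]["agency"] += frame["agency"]
    return scores

for entity, dims in score_entities([("she", "amuses", "him"),
                                    ("reporter", "accused", "producer")]).items():
    print(entity, dict(dims))
```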
These dimensions often differ for the same verb. For example, in the sentence: “She amuses him,” the verb “amuses” connotes that *she* has high agency, but *he* has higher power than *she* does. @RashkinConnotationInvestigation present a set of verbs annotated for sentiment, while @SapConnotationFilms present a set of verbs annotated for power and agency. However, these lexicons are not extensive enough to facilitate corpus analysis without further refinements. First, they contain only a limited set of verbs, so a given corpus may contain many verbs that are not
---
abstract: 'We prove a Chevalley restriction theorem and its double analogue for the cyclic quiver.'
---
The aim of this paper is to prove a Chevalley restriction theorem and its double analogue for the cyclic quiver. When the quiver is of type $\widehat{A}_0$, we recover the results for ${{\mathfrak{gl}}}_n$. The proof of our Chevalley restriction theorem is similar to the proof for ${{\mathfrak{gl}}}_n$; however, the proof of the double analogue uses a theorem of Crawley-Boevey on decomposition of quiver varieties. The double analogue is the limiting case of an isomorphism between a Calogero-Moser space and the center of a symplectic reflection algebra proved by Etingof and Ginzburg. It is also the associated graded version of a conjectural Harish-Chandra isomorphism for the cyclic quiver.
We now introduce our notation. Let $Q$ be the cyclic quiver with $m$ vertices. Let $\delta = (1, \ldots, 1)$ be the minimal positive imaginary root. Let ${{\mathcal{R}}}_{n} = \mathrm{Rep}(Q, n\delta)$ be the space of representations of $Q$ with dimension vector $n\delta$. Thus, $${{\mathcal{R}}}_{n} = \underbrace{{{\mathfrak{gl}}}_{n} \times \cdots \times {{\mathfrak{gl}}}_{n}}_{m}.$$ Next, let ${{\mathfrak{h}}}$ be the subspace of diagonal matrices in ${{\mathfrak{gl}}}_{n}$, and let $${{\mathcal{L}}}_n = \{ (z, \ldots, z)\in {{\mathcal{R}}}_{n}\ |\ z \in {{\mathfrak{h}}}\}.$$ Note that ${{\mathcal{L}}}_n$ is an $n$-dimensional subspace of ${{\mathcal{R}}}_{n}$. Let $$G_n = \underbrace{GL_{n} \times \cdots \times GL_{n}}_{m}.$$ An element $(g_{1}, \ldots, g_{m}) \in G_n$ acts on an element $(x_{1}, \ldots, x_{m}) \in {{\mathcal{R}}}_{n}$, giving $$(g_{2}^{-1}x_{1}g_{1},\ g_{3}^{-1}x_{2}g_{2},\ \ldots,
\ g_{1}^{-1}x_{m}g_{m}).$$ Let $\mathfrak{S}_n$ be the symmetric group on $n$ letters, which we will also regard as the subgroup of permutation matrices in $GL_n$. Finally, let $$W_n = \mathfrak{S}_{n} \ltimes (\mathbb{Z}/m\mathbb{Z})^{n}.$$ We have $W_n \hookrightarrow G_n$ via $$\begin{aligned}
(\sigma , \zeta_{1}, & \ldots, \zeta_{n}) \mapsto \\
& ( \sigma\cdot {{\mathrm{diag}}}(1, \ldots, 1), \sigma\cdot {{\mathrm{diag}}}(\zeta_{1},
\ldots, \zeta_{n}), \cdots, \sigma\cdot {{\mathrm{diag}}}(\zeta_{1}^{m-1},
\ldots, \zeta_{n}^{m-1})),\end{aligned}$$ where ${{\mathrm{diag}}}(\ldots)$ denotes the diagonal matrix with the indicated entries. Hence, $W_n$ acts on ${{\mathcal{R}}}_n$. Observe that ${{\mathcal{L}}}_n$ is stable under the action of $W_n$. We remark that $W_n$ is the complex reflection group of type $G(m,1,n)$ and ${{\mathcal{L}}}_n$ is its reflection representation.
\[cr\] Restriction of functions from ${{\mathcal{R}}}_{n}$ to ${{\mathcal{L}}}_n$ gives an isomorphism $$\rho: \mathbb{C}[{{\mathcal{R}}}_{n}]^{G_n}
\stackrel{\sim}{\longrightarrow} \mathbb{C}[{{\mathcal{L}}}_n]^{W_n}.$$
[*Surjectivity of $\rho$*]{}: Write an element in ${{\mathcal{R}}}_{n}$ as $(x_{1}, \ldots, x_{m})$ and an element in ${{\mathcal{L}}}_n$ as $$({{\mathrm{diag}}}(z_{1}, \ldots, z_{n}), \ldots).$$ Note that $\mathbb{C}[{{\mathcal{L}}}_n]^{W_n}$ is a polynomial algebra generated by the elementary symmetric polynomials in $z_{1}^{m}, \ldots, z_{n}^{m}$. The homomorphism $\rho$ takes the coefficients of the characteristic polynomial of $x_{m}x_{m-1}\cdots x_{1}$ to the elementary symmetric polynomials in $z_{1}^{m}, \ldots, z_{n}^{m}$. This proves that $\rho$ is surjective.
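Concretely, on ${{\mathcal{L}}}_n$ the cyclic product collapses to a power of a single diagonal matrix, $$x_{m}x_{m-1}\cdots x_{1}\big|_{{{\mathcal{L}}}_n} = \big({{\mathrm{diag}}}(z_{1}, \ldots, z_{n})\big)^{m} = {{\mathrm{diag}}}(z_{1}^{m}, \ldots, z_{n}^{m}),$$ so its characteristic polynomial is $\prod_{i=1}^{n}(t - z_{i}^{m})$, whose coefficients are, up to sign, exactly the elementary symmetric polynomials in $z_{1}^{m}, \ldots, z_{n}^{m}$.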
[*Injectivity in the $n=1$ case*]{}: Call an element $(x_{1}, \ldots, x_{m}) \in
{{\mathcal{R}}}_{1}$ generic if $x_{1}\cdots x_{m} \neq 0$. Observe that the set of generic elements is Zariski open and dense in ${{\mathcal{R}}}_{1}$. Moreover, it is easy to see that in this case, two generic elements $(x_{1}, \ldots, x_{m})$ and $(x'_{1}, \ldots, x'_{m})$ are in the same $G_1$-orbit iff $x_{1}\cdots x_{m} = x'_{1}\cdots x'_{m}$. In particular, ${{\mathcal{L}}}_1$ intersects every generic orbit. Hence, $\rho$ is injective.
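To spell out the invariance used here: for $n = 1$ each $g_{i} \in GL_{1} = \mathbb{C}^{\times}$ is a scalar, so the product of the components of a $G_1$-translate telescopes, $$\prod_{i=1}^{m} g_{i+1}^{-1}x_{i}g_{i} = \Big(\prod_{i=1}^{m}x_{i}\Big)\Big(\prod_{i=1}^{m}g_{i}^{-1}\Big)\Big(\prod_{i=1}^{m}g_{i}\Big) = x_{1}\cdots x_{m}$$ (indices read modulo $m$). Conversely, if the products of two generic elements agree, setting $g_{1} = 1$ and solving $g_{i+1} = x_{i}g_{i}/x'_{i}$ recursively produces a group element carrying one to the other, the equality of products guaranteeing $g_{m+1} = g_{1}$.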
[*Injectivity in the general case*]{}: Call an element $(x_{1}, \ldots, x_{m})$ in ${{\mathcal{R}}}_n$ generic if $x_{m}x_{m-1}\cdots x_{1}$ has pairwise distinct nonzero eigenvalues. Denote the subset of generic elements in ${{\mathcal{R}}}_n$ by ${{\mathcal{R}}}'_n$, and let ${{\mathcal{L}}}'_n={{\mathcal{L}}}_n\cap{{\mathcal{R}}}'_n$. Observe that ${{\mathcal{R}}}'_n$ and ${{\mathcal{L}}}'_n$ are, respectively, Zariski open dense in ${{\mathcal{R}}}_n$ and ${{\mathcal{L}}}_n$. Moreover, ${{\mathcal{R}}}'_n$ is $G_n$-stable and ${{\mathcal{L}}}'_n$ is $W_n$-stable. The injectivity of $\rho$ follows from the $n=1$ case and the following claim.
*Claim*: If $(x_{1}, \ldots, x_{m})\in {{\mathcal{R}}}'_n$, then it can be diagonalized, i.e. $G_n$-conjugated to an element in $$\underbrace{{{\mathcal{R}}}_{1} \times \cdots
\times {{\mathcal{R}}}_{1}}_{n} = \underbrace{{{\mathfrak{h}}}\times \cdots \times {{\mathfrak{h}}}}_{m}.$$
[*Proof of Claim*]{}: By our assumption, $x_1, \ldots, x_m$ are invertible matrices. Moreover, there exists an invertible matrix $g$ such that $g^{-1}x_{m}x_{m-1}\cdots x_{1}g$ is diagonal. Then, using $$(g, x_1g, x_2x_1g, \ldots, x_{m-1}\cdots x_1g)\in G_n,$$ we can conjugate $(x_1, \ldots, x_m)$ to $$(1, \ldots, 1, g^{-1}x_{m}x_{m-1}\cdots x_{1}g).$$ This proves the claim, and hence the theorem.
The Jacobian of the morphism ${{\mathcal{L}}}_n \rightarrow {{\mathcal{L}}}_n/W_n$ at a point $$(z, \ldots)=({{\mathrm{diag}}}(z_{1}, \ldots, z_{n}), \ldots) \in {{\mathcal{L}}}_n$$ is, up to a nonzero constant, equal to $$(z_1\cdots z_n)^{m-1}\prod_{i<j}(z_i^m-z_j^m).$$ Thus, ${{\mathcal{L}}}'_n$ is the set of points where the Jacobian is nonzero.
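For completeness, this expression follows from the chain rule: writing $u_{i} = z_{i}^{m}$, the quotient map is given by the generators $e_{j}(u_{1}, \ldots, u_{n})$, and $$\det\Big(\frac{\partial e_{j}(z_{1}^{m}, \ldots, z_{n}^{m})}{\partial z_{i}}\Big) = \Big(\prod_{i=1}^{n} m z_{i}^{m-1}\Big)\det\Big(\frac{\partial e_{j}}{\partial u_{i}}\Big) = \pm m^{n}\,(z_{1}\cdots z_{n})^{m-1}\prod_{i<j}(z_{i}^{m} - z_{j}^{m}),$$ since the Jacobian of the elementary symmetric polynomials is, up to sign, the Vandermonde determinant $\prod_{i<j}(u_{i} - u_{j})$.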
We now proceed to the double analogue of Theorem \[cr\]. Let ${{\mathcal{Z}}}_n$ be the zero set of the moment map of the $G_n$-action on $T^{*}{{\mathcal{R}}}_{n} = {\mathrm{Rep}}(\overline{Q}, n\delta)$, where $\overline{Q}$ is the double quiver of $Q$. Write an element in ${\mathrm{Rep}}(\overline{Q}, n\delta)$ as $$(x_{1}, \ldots, x_{m}, y_{1}, \ldots, y_{m})
\in \underbrace{{{\mathfrak{gl}}}_n\times \cdots \times {{\mathfrak{gl}}}_n}_{2m}.$$ Here, the arrow for $y_{i}$ is opposite to the arrow for $x_{i}$. In explicit terms, ${{\mathcal{Z}}}_n$ is defined by the moment map equations $$y_{1}x_{1} - x_{m}y_{m} = 0,\ y_{2}x_{2} - x_{1}y_{1} = 0,\ \ldots.$$ The action of an element $(g_{1}, \ldots, g_{m})\in G_n$ on ${\mathrm{Rep}}(\overline{Q}, n\delta)$ is given by the formula $$(g_{2}^{-1}x_{1}g_{1},\ \ldots,\ g_{1}^{-1}x_{m}g_{m},\ g_{1}^{-1}y_{1}g_{2},\ \ldots,\ g_{m}^{-1}y_{m}g_{1}).$$
---
abstract: 'In this paper I describe the history of the surreal trajectories problem and argue that in fact it is not a problem for Bohm’s theory. More specifically, I argue that one can take the particle trajectories predicted by Bohm’s theory to be the actual trajectories that particles follow and that there is no reason to suppose that good particle detectors are somehow fooled in the context of the surreal trajectories experiments. Rather than showing that Bohm’s theory predicts the wrong particle trajectories or that it somehow prevents one from making reliable measurements, such experiments ultimately reveal the special role played by position and the fundamental incompatibility between Bohm’s theory and relativity.[^1]'
---
The Persistence of Memory: Surreal Trajectories in Bohm’s Theory

Jeffrey A. Barrett
Bohm’s theory[^2] has become increasingly popular as a nonrelativistic solution to the quantum measurement problem. It makes the same empirical predictions for the statistical distribution of particle configurations as the standard von Neumann-Dirac collapse formulation of quantum mechanics whenever the latter makes unambiguous predictions. Bohm’s theory also treats measuring devices exactly the same way it treats other physical systems. The quantum-mechanical state of a system always evolves in the usual linear, deterministic way, so one does not encounter the problems that arise in collapse formulations of quantum mechanics when one tries to stipulate the conditions under which a collapse occurs. And Bohm’s theory does not require one to postulate branching worlds or disembodied minds or any of the other extravagant assumptions that often accompany no-collapse formulations of quantum mechanics.
While Bohm’s theory avoids many of the problems associated with other formulations of quantum mechanics, it does have its own problems. One problem, it has been argued, is that the particle trajectories it predicts are not the real particle trajectories. This is the surreal trajectories problem. If Bohm’s theory does in fact make false predictions concerning particle trajectories, then this is presumably a serious problem. I will argue, however, that there is no reason to suppose that Bohm’s theory makes false predictions concerning the trajectories of particles. Indeed, I will argue that a good position measuring device need never be mistaken concerning the actual position of a particle [*at the moment that the particle’s position is in fact recorded*]{}.
While surreal trajectories are not a problem for Bohm’s theory, the way that it accounts for the results of the surreal trajectories experiments reveals the sense in which it is fundamentally incompatible with relativity, and this is a problem.
On Bohm’s theory the quantum-mechanical state $\psi$ evolves in the usual linear, deterministic way, but one supposes that every particle always has a determinate position and follows a continuous, deterministic trajectory. The motion of a particular particle typically depends on the evolution of $\psi$ and the positions of other (perhaps distant) particles. The particle motion is described by an auxiliary dynamics, a dynamics that supplements the usual linear quantum dynamics. In its simplest form, what one might call the [*minimal version*]{} (the version of the theory described by Bell 1987, 127), Bohm’s theory is characterized by the following basic principles:
1\. State Description: The complete physical state at a time is given by the wave function $\psi$ and the determinate particle configuration $Q$.
2\. Wave Dynamics: The time evolution of the wave function is given by the usual linear dynamics. In the simplest case, this is just Schrödinger’s equation $$i \hbar \frac{\partial \psi}{\partial t} = \hat{H} \psi.$$ More generally, one uses the form of the linear dynamics appropriate to one’s application (as in the spin examples discussed below).
3\. Particle Dynamics: The particles move according to $$\frac{d Q_k}{dt} = \frac{\hbar}{m_k} \frac{\mbox{Im}(\psi^* \nabla_k \psi)}{\psi^* \psi}
\mbox{ evaluated at }Q$$ where $m_k$ is the mass of particle $k$ and $Q$ is the current particle configuration.
4\. Distribution Postulate: There is a time $t_0$ when the epistemic probability density for the configuration $Q$ is given by $\rho(Q, t_0)= |\psi(Q, t_0)|^2$.
If there are $N$ particles, then $\psi$ is a function in $3N$-dimensional configuration space (three dimensions for the position of each particle), and the current particle configuration $Q$ is represented by a single point in configuration space (in configuration space a single point gives the position of every particle). Again, each particle moves in a way that depends on its position, the evolution of the wave function, and the positions of the other particles.
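As a concrete illustration of principles 2 and 3 (a numerical sketch, not taken from this paper), one can evolve a free one-dimensional Gaussian packet exactly in momentum space and Euler-integrate the guidance equation for a few initial positions; the grid size, units ($\hbar = m = 1$), and starting points below are arbitrary choices.

```python
# Minimal Bohmian trajectories for a free 1-D Gaussian packet (a sketch).
import numpy as np

hbar = m = 1.0                        # units chosen so hbar = m = 1
N, L = 2048, 40.0                     # grid points and box size (arbitrary)
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

sigma0 = 1.0                          # initial packet width
psi0 = np.exp(-x**2 / (4 * sigma0**2)).astype(complex)
psi0 /= np.sqrt(np.trapz(np.abs(psi0)**2, x))
psi0_k = np.fft.fft(psi0)

def psi_at(t):
    """Exact free evolution: kinetic phase applied in momentum space."""
    return np.fft.ifft(psi0_k * np.exp(-1j * hbar * k**2 * t / (2 * m)))

def velocity(psi):
    """Guidance field v = (hbar/m) Im(psi* dpsi/dx) / |psi|^2."""
    dpsi = np.gradient(psi, x)
    return (hbar / m) * np.imag(np.conj(psi) * dpsi) / (np.abs(psi)**2 + 1e-30)

dt, steps = 0.01, 500                 # Euler integration of dQ/dt = v(Q, t)
Q = np.array([-1.5, -0.5, 0.5, 1.5])  # initial positions in the packet's bulk
for step in range(steps):
    Q = Q + np.interp(Q, x, velocity(psi_at(step * dt))) * dt

print("positions at t = 5:", Q)       # trajectories fan out with the spreading
                                      # packet and never cross (cf. below)
```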
Concerning how one should think of the role of the wave function in Bohm’s theory, John Bell once said that “[*no one can understand this theory until he is willing to think of $\psi$ as a real objective field rather than just a ‘probability amplitude.’ Even though it propagates not in $3$-space but in $3N$-space*]{}” (1987, 128). While the ontology suggested by Bell here is at best puzzling, the practical idea behind it is a good one: The best way to picture what the particle dynamics does is to picture the point representing the $N$-particle configuration being carried along by the probability currents generated by the linear evolution of the wave function $\psi$ in configuration space. Once one has this picture firmly in mind one will understand how Bohm’s theory accounts for quantum-mechanical correlations in the context of the surreal-trajectory experiments and the sense in which the theory is fundamentally incompatible with relativity.
Since the total particle configuration can be thought of as being pushed around by the probability current in configuration space, the probability of the particle configuration being found in a particular region of configuration space changes as the integral of $|\psi|^2$ over that region changes. More specifically, the continuity equation $$\frac{\partial \rho}{\partial t} + \mbox{div}(\rho v^\psi) = 0$$ is satisfied by the probability density $\rho=|\psi|^2$. And this means that if the epistemic probability density for the particle configuration is ever $|\psi|^2$, then it will always be $|\psi|^2$, unless one makes an observation. That is, if one starts with an epistemic probability density of $\rho(t_0)=|\psi(t_0)|^2$, then, given the dynamics, one should update this probability density at time $t$ so that $\rho(t)=|\psi(t)|^2$. And if one makes an observation, then the epistemic probability density will be given by the system’s [*effective*]{} wave function, the component (in the configuration space representation) of the total wave function that is in fact responsible for the post-measurement time evolution of the system’s configuration. The upshot is that if the distribution postulate is ever satisfied, then the most that one can learn from a measurement is the wave packet that the current particle configuration is associated with and the epistemic probability distribution for the actual configuration over this packet.[^3] This is why Bohm’s theory makes the same statistical predictions for particle configurations as the standard collapse formulation of quantum mechanics.
While it makes the same statistical predictions as the standard formulation of quantum mechanics, Bohm’s theory is deterministic. More specifically, given the energy properties of a simple closed system, the complete physical state at any time (the wave function and the particle configuration) fully determines the physical state at all other times.[^4] It follows that, given a particular evolution of the wave function, possible trajectories for the configuration of a system can never cross in configuration space at any given time. And this feature of Bohm’s theory will prove important later.
Another feature of Bohm’s theory that will prove important later is the special role played by position in accounting for our determinate measurement results. In order for Bohm’s theory to explain why we get the determinate measurement records that we do (which is presumably a precondition for it counting as a solution to the measurement problem), one must suppose, as a basic interpretational principle, that, given the usual quantum mechanical state, making particle positions determinate provides determinate measurement records. Since particle positions are always determinate on Bohm’s theory, this would guarantee determinate measurement records. And, at least on the minimal version of Bohm’s theory, position is the only determinate, noncontextual property that could serve to provide determinate measurement records.[^5]
The distinction between noncontextual and contextual properties deserves some explanation. Whether a system is found to have a particular contextual property or not typically depends on how one measures the property: one might get the result “Yes” if the contextual property is measured one way and “No” if it is measured another. Consequently, contextual properties are not intrinsic properties of the system to which they are typically ascribed. One might say that contextual properties serve to prop up our talk of those properties that we are used to talking about but which arguably should not count as properties at all in Bohm’s theory. While the language of contextual properties provides a convenient (but often misleading!) way of comparing the predictions of Bohm’s theory with the predictions of other physical theories, the predictions of Bohm’s theory are always ultimately just predictions about the evolution of the wave function and the positions of the particles relative to the wave function.
The upshot of all this is just that position relative to the wave function, or more precisely configuration relative to the wave function, is ultimately the only property that one can appeal to in the minimal version
---
abstract: |
Let $(A,{\mathfrak{m} })$ be a Noetherian local ring with infinite residue field and let $I$ be an ideal in $A$ and let $F(I) =
\oplus_{n \geq 0}I^n/{\mathfrak{m} }I^n$ be the fiber cone of $I$. We prove certain relations among the Hilbert coefficients $f_0(I),f_1(I), f_2(I)$ of $F(I)$ when the $a$-invariant of the associated graded ring $G(I)$ is negative.
address:
- 'Chennai Mathematical Institute, Plot H1, SIPCOT IT Park Padur PO, Siruseri 603103, India'
- 'Department of Mathematics, IIT Bombay, Powai, Mumbai 400 076, India'
author:
- 'Clare D’Cruz'
- 'Tony J. Puthenpurakal'
title: |
The Hilbert coefficients of the fiber cone and\
the $a$-invariant of the associated graded ring
---
[^1]
Introduction
============
Let $(A,{\mathfrak{m} })$ be a Noetherian local ring with *infinite residue field* $k = A/{\mathfrak{m} }$. Let $I$ be an ideal in $A$. The *fiber cone* of $I$ is the standard graded $k$-algebra $F(I) = \bigoplus_{n\geq 0}I^n/{\mathfrak{m} }I^{n}.$ Set $l(I) = \dim F(I)$, the *analytic spread* of $I$. The Hilbert polynomial of $F(I)$ is denoted by $f_I(z)$. Write $f_I(z) = \sum_{i = 0}^{l-1}(-1)^i f_i(I)\binom{z+l-1-i}{l-1-i}$ where $l = l(I)$. We call $f_i(I)$ the $i^{th}$ *fiber coefficient* of $I$.
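A standard example, not taken from this paper, fixes the normalization: if $A$ is a regular local ring of dimension $l$ and $I = {\mathfrak{m} }$, then $F({\mathfrak{m} }) = \bigoplus_{n \geq 0}{\mathfrak{m} }^n/{\mathfrak{m} }^{n+1} \cong k[X_1, \ldots, X_l]$, so $$f_{{\mathfrak{m} }}(z) = \binom{z+l-1}{l-1}, \qquad \text{i.e.} \quad f_0({\mathfrak{m} }) = 1 \ \text{ and } \ f_i({\mathfrak{m} }) = 0 \ \text{ for } i \geq 1.$$ In particular, for $l = 2$ the inequality of Theorem \[result1\] below holds with equality, $f_1 = 0 = f_0 - 1$; note that here ${\mathbf{a} }({\mathfrak{m} }) = -l < 0$ and ${\operatorname{grade}}({\mathfrak{m} }) = l({\mathfrak{m} })$, consistent with its hypotheses.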
Most recent results in the study of the fiber cone involve the depth of $G(I) = \bigoplus_{n \geq 0}I^n/ I^{n+1} $, the associated graded ring of $I$. When $I$ is ${\mathfrak{m} }$-primary there has been some research relating $f_0(I)$ (the *multiplicity* of $F(I)$) with various other invariants of $I$ (see [@JayV05 4.1], [@CVP032 4.3] and [@Cor05 3.4]). In the case of $G(I)$ the relations among the Hilbert coefficients $e_0(I),e_1(I),e_2(I)$ are well known (see [@VaSix]). However, there is no result relating $f_0(I), f_1(I)$ and $f_2(I)$. The reason for this is not difficult to find: any standard $k$-algebra can be thought of as a fiber cone of its graded maximal ideal. So any result involving the relation between $f_i(I)$ *would only hold in a restricted class of ideals*. Our paper explores the relation between ${\mathbf{a} }(I)$, *the $a$-invariant of $G(I)$*, and the Hilbert coefficients of $F(I)$. This is a new idea.
We first analyze when $l(I) = 2,3$ as it throws light on the general result.
\[result1\] Let $(A,{\mathfrak{m} })$ be a Noetherian local ring with infinite residue field $k = A/{\mathfrak{m} }$. Let $I$ be an ideal with $l(I) =2$. If ${\mathbf{a} }(I) < 0$ then
$f_1(I) \leq f_0(I) -1. $
Furthermore, equality holds if and only if $F(I^n)$ is [Cohen-Macaulay]{} for all $n \gg 0$. If ${\operatorname{grade}}(I) = 2$ then equality holds.
This result should be compared with a result due to Northcott [@Nor-e1], which in our context states that $f_1({\mathfrak{m} }) \geq f_0({\mathfrak{m} }) -1$ whenever $A$ is [Cohen-Macaulay]{}. In \[2ex1\], we give an example of a two dimensional Noetherian local ring $(A,{\mathfrak{m} })$ with ${\operatorname{depth}}A = 1$ but $f_1(I) < f_0(I) -1$.
To analyze the case when equality holds in Theorem \[result1\] we resolve $F(I^n)$ as a $F(J^{[n]})= k[X^{n}_{1}, X^{n}_{2}]$-module and write it as: $$0 {\longrightarrow}K_n {\longrightarrow}\bigoplus_{i = 1}^{\beta_{1}^{[n]}}F(J^{[n]})( - 1 - \alpha_{i}^{[n]} ) {\longrightarrow}F(J^{[n]})^{\beta_{0}^{[n]}} {\longrightarrow}F(I^n) {\longrightarrow}0$$ Here $\alpha_{i}^{[n]} \geq 0$ for all $i$. As $ {\operatorname{depth}}F(I^n) \geq 1$ for all $n \gg 0$ we get $K_n = 0$ for all $n \gg 0$. We show in Theorem \[sub\] that if ${\mathbf{a} }(I) < 0$ then *for all* $n \gg 0$, $$\begin{aligned}
f_1(I) - f_0(I) + 1 &= - \sum_{i = 1}^{\beta_{1}^{[n]}}\alpha_{i}^{[n]} \quad \text{and} \\
\beta_{1}^{[n]} &= 0 \ \ \text{if and only if } \alpha_{i}^{[n]} = 0 \ \text{ for all $i$ }.\end{aligned}$$
Our second result, Theorem \[sectheorem\], has a noteworthy consequence when $G(I)$ is [Cohen-Macaulay]{}.
\[seccor\] Let $(A,{\mathfrak{m} })$ be a [Cohen-Macaulay]{} local ring of dimension $d = 3$. Let $I$ be an ${\mathfrak{m} }$-primary ideal with $G(I)$ [Cohen-Macaulay]{} and ${\operatorname{red}}(I) = 2$. Then
$f_2(I) \geq f_1(I) - f_0(I) + 1$.
We extend our results to higher analytic spread using Rees-superficial sequences (see Section $6$ for details), under some mild assumptions on ${\operatorname{grade}}(I)$. We state some of our noteworthy results. The first one (see \[mth1\]) states that if $l(I) \geq 2$, ${\operatorname{grade}}(I) \geq l(I) -2$ and the reduction number of $I$ is at most $1$, then $f_1(I) \leq f_0(I) - 1$, with equality if ${\operatorname{grade}}(I) = l(I)$. An immediate consequence (see \[NarC\]) is that if $(A,{\mathfrak{m} })$ is [Cohen-Macaulay]{} with $\dim A \geq 2$, $I$ is an ${\mathfrak{m} }$-primary ideal and the second Hilbert-Samuel coefficient $e_2(I) = 0$, then $f_1(I) = f_0(I) - 1$ (see 2.7 for the definition of $e_2(I)$).
Finally, we show that if $A$ is a [Cohen-Macaulay]{} ring of dimension at least three and $I$ is an ${\mathfrak{m} }$-primary ideal of reduction number two whose associated graded ring is [Cohen-Macaulay]{}, then $f_2(I) \geq f_1(I)-f_0(I) +1$ (see \[mth2\]).
Here is an overview of the contents of the paper. In Section $1$ we introduce some notation and preliminary facts. In Section $2$ we introduce two complexes which will be used in the subsequent sections. In Section $3$ we prove the main result for $l=2$ (Theorem \[result1\]). In Section $4$ we prove our second theorem and as a consequence obtain Theorem \[seccor\]. In Section $5$ we obtain results on the coefficients of the fiber cone for any analytic spread. In the appendix (Section $6$) we recall some basic facts regarding minimal reductions and filter-regular elements and prove an elementary result which is useful in Section $3$.
*Acknowledgments* The authors thank the referee for many pertinent comments. The authors also thank Fahed Zulfeqarr and A. V. Jayanthan for help with the examples.
Preliminaries
=============
From now on $(A,{\mathfrak{m} })$ is a Noetherian local ring of dimension $d$, with infinite residue field. All modules are assumed to be finitely generated. For a finitely generated
---
abstract: 'When a large collection of objects (e.g., robots, sensors, etc.) has to be deployed in a given environment, it is often required to plan a coordinated motion of the objects from their initial position to a final configuration enjoying some global property. In such a scenario, the problem of minimizing some function of the distance travelled, and therefore energy consumption, is of vital importance. In this paper we study several motion planning problems that arise when the objects must be moved on a graph, in order to reach certain goals which are of interest for several network applications. Among the others, these goals include broadcasting messages and forming connected or interference-free networks. We study these problems with the aim of minimizing a number of natural measures such as the average/overall distance travelled, the maximum distance travelled, or the number of objects that need to be moved. To this respect, we provide several approximability and inapproximability results, most of which are tight.'
author:
- Davide Bilò
- Luciano Gualà
- Stefano Leucci
- Guido Proietti
date: 'Received: date / Accepted: date'
title: 'Exact and approximate algorithms for movement problems on (special classes of) graphs [^1] [^2] '
---
Introduction
============
In many practical applications a number of centrally controlled objects need to be moved in a given environment in order to complete some task. Problems of this kind often occur in robot motion planning where we seek to move a set of robots from their starting position to a set of ending positions such that a certain property is satisfied. For example, if the robots are equipped with a short range communication device we might want to move them so that a message originating from one of the robots can be routed to all the others. If the robots’ goal is to monitor a certain area we might want to move them so that they are not too close to each other. Other interesting problems include gathering (placing robots next to each other), monitoring of traffic between two locations, building interference-free networks, and so on. To make things harder, objects to be moved are often equipped with a limited supply of energy. Preserving energy is a critical problem in ad-hoc networking, and movements are expensive. To prolong the lifetime of the objects we seek to minimize the energy consumed during movements and thus the distance travelled. Sometimes, instead, movements are cheap but before and/or after an object moves it needs to perform expensive operations. In this scenario we might be interested in moving the minimum number of objects needed to reach the goal.
In this paper, we assume the underlying environment is actually a *network*, which can be modelled as an undirected graph $G$, and the moving objects are centrally controlled *pebbles* that are initially placed on vertices of $G$, and that can be moved to other vertices by traversing the graph edges. To this respect, we study several movement planning problems that arise by various combinations of final positioning goals and movement optimization measures. In particular, we focus our study on the scenarios where we want the pebbles to be moved to a *connected subgraph* ([$\textsc{Con}$]{}), an *independent set* ([$\textsc{Ind}$]{}), or a *clique* ([$\textsc{Clique}$]{}) of $G$, while minimizing either the *overall movement* ([$\textsc{Sum}$]{}), the *maximum movement* ([$\textsc{Max}$]{}), or the *number of moved pebbles* ([$\textsc{Num}$]{}). We also give some preliminary results on the problem of moving the pebbles to an *s-t-cut*, i.e., a set of vertices whose removal makes two given vertices $s,t$ disconnected ([$s\mbox{-}t\mbox{-}\textsc{Cut}$]{}) while minimizing the above measures.
We will denote each of the above problems with $\psi$-$c$, where $\psi$ represents the goal to be achieved and $c$ the measure to be minimized. For a more rigorous definition of the problems we refer the reader to Section \[sec:formal\_definition\].
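By way of illustration (this is not one of the algorithms studied in this paper), once a candidate set of final positions is fixed, its cost under the above measures can be evaluated by matching pebbles to target vertices along shortest paths. A minimal sketch on a toy instance, using a min-cost assignment for the [$\textsc{Sum}$]{} measure:

```python
# Evaluate a fixed candidate target configuration; a sketch on toy data.
import networkx as nx
import numpy as np
from scipy.optimize import linear_sum_assignment

G = nx.path_graph(6)                 # toy environment: a path on 6 vertices
pebbles = [0, 1, 5]                  # initial pebble positions
targets = [2, 3, 4]                  # candidate final configuration (connected)

dist = dict(nx.all_pairs_shortest_path_length(G))
C = np.array([[dist[p][t] for t in targets] for p in pebbles])

rows, cols = linear_sum_assignment(C)        # min-cost perfect matching
moves = C[rows, cols]
print("Sum =", moves.sum())                  # overall movement
print("Max =", moves.max())                  # movement of this assignment
print("Num =", int((moves > 0).sum()))       # pebbles moved by this assignment
```

Note that the min-cost assignment is optimal only for [$\textsc{Sum}$]{}; to optimize [$\textsc{Max}$]{} over assignments one would instead use a bottleneck matching, and for [$\textsc{Num}$]{} a maximum matching restricted to zero-distance pebble-target pairs.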
#### Related work.
Although movement problems were deeply investigated in a distributed setting (see [@PS06] for a survey), quite surprisingly the centralized counterpart has received attention from the scientific community only in the last few years.
The first paper which defines and studies these problems in this latter setting is [@demaine2007minimizing]. In their work, the authors study the problem of moving the pebbles on a graph $G$ of $n$ vertices so that their final positions form a *connected component*, a *path* (directed or undirected) *between two specified nodes*, an *independent set*, or a *matching* (two pebbles are matched together if their distance is exactly $1$).
Regarding connectivity problems, in [@demaine2007minimizing] the authors show that all the variants are hard and that the approximation ratio of ${\ensuremath{\textsc{Con}}}$-${\ensuremath{\textsc{Max}}}$ is between $2$ and ${\ensuremath{O}}(1+\sqrt{k/c^*})$, where $k$ is the number of pebbles and $c^*$ denotes the measure of an optimal solution. This result has been improved in [@berman2011], where the authors show that ${\ensuremath{\textsc{Con}}}$-${\ensuremath{\textsc{Max}}}$ can be approximated within a constant factor. In [@demaine2007minimizing] it is also shown that ${\ensuremath{\textsc{Con}}}$-${\ensuremath{\textsc{Sum}}}$ and ${\ensuremath{\textsc{Con}}}$-${\ensuremath{\textsc{Num}}}$ are not approximable within ${\ensuremath{O}}(n^{1-\epsilon})$ (for any positive $\epsilon$) and $o(\log n)$, respectively, while they admit approximation algorithms with ratios of ${\ensuremath{O}}(\min\{n \log n, k\})$ and ${\ensuremath{O}}(k^\epsilon)$, respectively. Moreover, the authors also provide an exact polynomial-time algorithm for ${\ensuremath{\textsc{Con}}}$-${\ensuremath{\textsc{Max}}}$ on trees.
Concerning independency problems, in [@demaine2007minimizing] the authors remark that it is [$\textsf{\upshape NP}$]{}-hard even to find any feasible solution on general graphs since it would require finding an independent set of size at least $k$. This clearly holds for all three objective functions. For this reason, they study an Euclidean variant of these problems where pebbles have to be moved on a plane so that their pairwise distances are strictly greater than $1$. In this case, the authors provide an approximation algorithm that guarantees an additive error of at most $1+1/\sqrt{3}$ for ${\ensuremath{\textsc{Ind}}}$-${\ensuremath{\textsc{Max}}}$, and a polynomial time approximation scheme for ${\ensuremath{\textsc{Ind}}}$-${\ensuremath{\textsc{Num}}}$.
More recently, in [@friggstad2011minimizing], a variant of the classical facility location problem has been studied. This variant, called *mobile facility location*, can be modelled as a movement problem and is approximable within $(3+\epsilon)$ (for any constant $\epsilon>0$) if we seek to minimize the total movement [@ahmadian2013local], while the variant where the maximum movement has to be minimized admits a tight $2$-approximation [@demaine2007minimizing; @friggstad2011minimizing]. Moreover, as it is common in practice to have a small number of pebbles compared to the size of the environment (i.e., the vertices of the graph), the authors of [@demaine2009FPT] turn to study fixed-parameter tractability. They show a relation between the complexity of the problems and their *minimal configurations* (sets of final positions of the pebbles that correspond to feasible solutions, such that any removal of an edge makes them unacceptable). Finally, we mention that [@BDGMPW13] considered a set of vertex-to-vertex motion planning problems in a simple polygon, with the aim of forming final configurations enjoying some sort of *visual connectivity* among the pebbles.
#### Our results.
We start by studying connectivity motion problems in the case where pebbles move on a tree, and we devise two polynomial-time dynamic programming algorithms for ${\ensuremath{\textsc{Con}}}$-${\ensuremath{\textsc{Sum}}}$ and ${\ensuremath{\textsc{Con}}}$-${\ensuremath{\textsc{Num}}}$. These algorithms complement the already known polynomial-time algorithm for ${\ensuremath{\textsc{Con}}}$-${\ensuremath{\textsc{Max}}}$ on trees shown in [@demaine2007minimizing].
Then, we study independency motion problems on graphs where a *maximum independent set* (and thus a feasible solution for the corresponding motion problem) can be computed in polynomial time. This class of graphs includes, for example, perfect and claw-free graphs. More precisely, we show that ${\ensuremath{\textsc{Ind}}}$-${\ensuremath{\textsc{Max}}}$ and ${\ensuremath{\textsc{Ind}}}$-${\ensuremath{\textsc{Sum}}}$ are [$\textsf{\upshape NP}$]{}-hard even on bipartite graphs (which are known to be perfect graphs [@bollobas1998modern]). Moreover, we devise three exact polynomial-time algorithms: one for solving ${\ensuremath{\textsc{Ind}}}$-${\ensuremath{\textsc{Max}}}$ on paths, and the other two for solving ${\ensuremath{\textsc{Ind}}}$-${\ensuremath{\textsc{Sum}}}$ and ${\ensuremath{\textsc{Ind}}}$-${\ensuremath{\textsc{Num}}}$ on trees, respectively. We also devise a polynomial-time approximation algorithm for ${\ensuremath{\textsc{Ind}}}$-${\ensuremath{\textsc{Max}}}$ which is optimal unless an additive term of $1
---
abstract: |
We study the formation of the Intra-Cluster Light (ICL) using a semi-analytic model of galaxy formation, coupled to merger trees extracted from N-body simulations of groups and clusters. We assume that the ICL forms by (1) stellar stripping of satellite galaxies and (2) relaxation processes that take place during galaxy mergers. The fraction of ICL in groups and clusters predicted by our models ranges between 10 and 40 per cent, with a large halo-to-halo scatter and no halo mass dependence. We note, however, that our predicted ICL fractions depend on the resolution: for a set of simulations with particle mass one order of magnitude larger than that adopted in the high resolution runs used in our study, we find that the predicted ICL fractions are 30-40 per cent larger than those found in the high resolution runs. On cluster scales, a large part of the scatter is due to a range of dynamical histories, while on smaller scales it is driven by individual accretion events and stripping of very massive satellites, $M_{*} \gtrsim
10^{10.5} M_{\odot}$, that we find to be the major contributors to the ICL. The ICL in our models forms very late (below $z\sim 1$), and a fraction varying between 5 and 25 per cent of it has been accreted during the hierarchical growth of haloes. In agreement with recent observational measurements, we find the ICL to be made of stars covering a relatively large range of metallicity, with the bulk of them being sub-solar.
bibliography:
- 'biblio.bib'
title: 'On the formation and physical properties of the Intra-Cluster Light in hierarchical galaxy formation models'
---
\[firstpage\]
clusters: general - galaxies: evolution - galaxy: formation.
Introduction {#sec:intro}
============
The presence of a diffuse population of intergalactic stars in galaxy clusters was first proposed by @zwicky37, and later confirmed by the same author using observations of the Coma cluster with a 48-inch Schmidt telescope [@zwicky52]. More recent observational studies have confirmed that a substantial fraction of stars in clusters are not bound to galaxies. This diffuse component is generally referred to as Intra-Cluster Light (hereafter ICL).
Both from the observational and the theoretical point of view, it is not trivial to define the ICL component. A fraction of central cluster galaxies are characterized by a faint and extended stellar halo. These galaxies are classified as *cD galaxies*, where ‘c’ stands for supergiant (these galaxies being very large) and ‘D’ for diffuse [@matthews64], highlighting the presence of a diffuse stellar envelope made of stars that are not bound to the galaxy itself. Separating these two components is not an easy task. On the observational side, some authors use an isophotal limit to cut off the light from satellite galaxies, while the distinction between the brightest cluster galaxy (hereafter BCG) and ICL is based on profile decomposition [e.g. @zibetti]. Others [e.g. @gonzalez05] rely on two-dimensional profile fittings to model the surface brightness profile of brightest cluster galaxies. In the framework of numerical simulations, additional information is available, and the ICL component has been defined using either a binding energy definition [i.e. all stars that are not bound to identified galaxies, e.g. @giuseppe], or variations of this technique that take advantage of the dynamical information provided by the simulations [e.g. @Dolag_etal_2010]. In a recent work, @rudick11 discuss different methods that have been employed both for observational and for theoretical data, and apply them to a suite of N-body simulations of galaxy clusters. They find that different methods can change the measured fraction[^1] of ICL by up to a factor of about four (from $\sim 9$ to $\sim 36$ per cent). In contrast, @puchwein apply four different methods to identify the ICL in hydrodynamical SPH simulations of cluster galaxies, and consistently find a significant ICL stellar fraction ($\sim$ 45 per cent).
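As an illustration of the binding-energy definition, the sketch below (on synthetic data, and under the deliberately crude assumption that each galaxy can be treated as a point mass; actual analyses use the full simulation potential) flags as ICL those star particles that are unbound from their nearest galaxy.

```python
# A crude binding-energy ICL classifier on synthetic inputs; a sketch only.
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

rng = np.random.default_rng(1)
star_pos = rng.normal(0, 300, size=(10000, 3))   # star particle positions, kpc
star_vel = rng.normal(0, 600, size=(10000, 3))   # velocities, km/s
gal_pos = rng.normal(0, 200, size=(20, 3))       # galaxy positions, kpc
gal_vel = rng.normal(0, 400, size=(20, 3))       # galaxy velocities, km/s
gal_mass = rng.uniform(1e10, 1e12, size=20)      # galaxy masses, Msun

# Assign each star to its nearest galaxy.
d = np.linalg.norm(star_pos[:, None, :] - gal_pos[None, :, :], axis=2)
nearest = d.argmin(axis=1)
r = d[np.arange(len(star_pos)), nearest]

# Specific orbital energy relative to the nearest galaxy (point-mass
# potential, softened at 1 kpc); E > 0 means unbound, i.e. ICL.
dv = star_vel - gal_vel[nearest]
energy = 0.5 * (dv**2).sum(axis=1) - G * gal_mass[nearest] / np.maximum(r, 1.0)

icl = energy > 0.0
print("ICL mass fraction (equal-mass star particles): %.2f" % icl.mean())
```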
There is no general agreement in the literature about how the ICL fraction varies as a function of cluster mass. @zibetti find that richer clusters (the richness being determined by the number of red-sequence galaxies), and those with a more luminous BCG have brighter ICL than their counterparts. However, they find roughly constant ICL fractions as a function of halo mass, within the uncertainties and sample variance. In contrast, @lin empirically infer an increasing fraction of ICL with increasing cluster mass. To estimate the amount of ICL, they use the observed correlation between the cluster luminosity and mass and a simple merger tree model for cluster formation. Results are inconclusive also on the theoretical side, with claims of increasing ICL fractions for more massive haloes (e.g. @giuseppe04 [@purcell07; @giuseppe; @purcell08]), as well as findings of no significant increase of the ICL fraction with cluster mass (e.g. @pigi [@henriques; @puchwein]), at least for systems more massive than $10^{13} \, M_{\odot}/h$.
Different physical mechanisms may be at play in the formation of the ICL, and their relative importance can vary during the dynamical history of the cluster. Stars can be stripped away from satellite galaxies orbiting within the cluster, by tidal forces exerted either during interactions with other cluster galaxies, or by the cluster potential. This is supported by observations of arclets and similar tidal features that have been identified in the Coma, Centaurus and Hydra I clusters [@gregg; @trentham; @calcaneo; @arnaboldi12]. As pointed out by several authors, in a scenario where galaxy stripping and disruption are the main mechanisms for the production of the ICL, the major contribution comes from galaxies falling onto the cluster along almost radial orbits, since tidal interactions by the cluster potential are strongest for these galaxies. Numerical simulations have also shown that large amounts of ICL can come from ‘pre-processing’ in galaxy groups that are later accreted onto massive clusters [@mihos; @willman; @rudick06; @sommer]. In addition, @giuseppe found that the formation of the ICL is tightly linked to the build-up of the BCG and of the other massive cluster galaxies, a scenario supported by other theoretical studies (e.g. @diemand [@abadi; @font; @read]). It is important, however, to consider that results from numerical simulations might be affected by numerical problems. @giuseppe find an increasing fraction of ICL when increasing the numerical resolution of their simulations. In addition, @puchwein show that a significant fraction ($\sim 30$ per cent) of the ICL identified in their simulations forms in gas clouds that were stripped from the dark matter haloes of galaxies infalling onto the cluster. Fluid instabilities, that are not well treated within the SPH framework, might be able to disrupt these clouds suppressing this mode of ICL formation.
In this paper, we use the semi-analytic model presented in @dlb [hereafter DLB07], which we extend by including three different prescriptions for the formation of the ICL. We couple this model to a suite of high-resolution N-body simulations of galaxy clusters to study the formation and evolution of the ICL component, as well as its physical properties, and the influence of the updated prescriptions on basic model predictions (in particular, the galaxy stellar mass function and the mass of the BCGs). There are some advantages in using semi-analytic models to describe ICL formation over hydrodynamical simulations: they do not suffer from numerical effects related to the fragility of poorly resolved galaxies, and allow the relative influence of different channels of ICL generation to be clearly quantified. However, the size and abundance of satellite galaxies (which influence the amount of predicted ICL) might be estimated incorrectly in these models. We will comment on these issues in the following.
The layout of the paper is as follows. In Section \[sec:sim\] we introduce the simulations used in our study, and in Section \[sec:model\] we describe the prescriptions we develop to model the formation of the ICL component. In Section \[sec:massfunction\] we discuss how our prescriptions affect the predicted galaxy stellar mass function, and in Section \[sec:haloprop\] we discuss how the predicted fraction of ICL varies as a function of halo properties. In Section \[sec:formation\], we analyse when the bulk of the ICL is formed, and which galaxies provide the largest contribution. We then study the correlation between the ICL and the properties of the corresponding BCGs in Section \[sec:bcgprop\], and analyse the metal content of the ICL in Section \[sec:metallicity\]. Finally, we discuss our results and give our conclusions in Section \[sec:discussion\].
N-body simulations {#sec:sim}
==================
In this study we use collisionless simulations of galaxy clusters, generated using the ‘zoom’ technique [@Tormen_etal_1997 see also @Katz_and_White_1993]: a target cluster is selected from a cosmological simulation and all its particles, as well as those in its immediate surroundings, are traced back to their Lagrangian region and replaced with a larger number of lower mass particles. Outside this high-resolution region, particles of increasing mass are displaced on a spherical grid. All particles are then perturbed using the same fluctuation field used in the parent cosmological simulations, but now extended to smaller scales. The method
---
abstract: 'Word vectors and Language Models (LMs) pretrained on a large amount of unlabelled data can dramatically improve various Natural Language Processing (NLP) tasks. However, the measure and impact of similarity between pretraining data and target task data are left to intuition. We propose three cost-effective measures to quantify different aspects of similarity between source pretraining and target task data. We demonstrate that these measures are good predictors of the usefulness of pretrained models for Named Entity Recognition (NER) over 30 data pairs. Results also suggest that pretrained LMs are more effective and more predictable than pretrained word vectors, but pretrained word vectors are better when pretraining data is dissimilar.'
bibliography:
- 'naaclhlt2019.bib'
title: Using Similarity Measures to Select Pretraining Data for NER
---
Acknowledgments {#acknowledgments .unnumbered}
===============
We would like to thank Massimo Piccardi and Mark Dras for their constructive feedback. The authors also thank the members of CSIRO Data61’s Language and Social Computing (LASC) team for helpful discussions, as well as anonymous reviewers for their insightful comments.
---
abstract: 'Developments in the educational landscape have spurred greater interest in the problem of automatically scoring short answer questions. A recent shared task on this topic revealed a fundamental divide in the modeling approaches that have been applied to this problem, with the best-performing systems split between those that employ a knowledge engineering approach and those that almost solely leverage lexical information (as opposed to higher-level syntactic information) in assigning a score to a given response. This paper aims to introduce the NLP community to the largest corpus currently available for short-answer scoring, provide an overview of methods used in the shared task using this data, and explore the extent to which more syntactically-informed features can contribute to the short answer scoring task in a way that avoids the question-specific manual effort of the knowledge engineering approach.'
author:
- |
Derrick Higgins^\*^, Chris Brew^†^, Michael Heilman^\*^, Ramon Ziai^‡^, Lei Chen^\*^, Aoife Cahill^\*^, Michael Flor^\*^,\
**Nitin Madnani^\*^**, **Joel Tetreault^§^**, **Daniel Blanchard^\*^**, **Diane Napolitano^\*^**, **Chong Min Lee^\*^**, **and** **John Blackmore^\*^**\
$\ast$Educational Testing Service\
†Nuance Communications\
‡Tuebingen University\
§Yahoo! Labs
bibliography:
- 'henry.bib'
nocite: '[@carlson1988]'
title: 'Is getting the right answer just about choosing the right words? The role of syntactically-informed features in short answer scoring.'
---
Introduction
============
This paper aims to demonstrate that “higher-level” linguistic features that encode information such as syntactic relations, topics referenced, and response structure can make a contribution to the accuracy and validity of automated methods of short answer scoring. Although the results of a recent shared task on short answer scoring seem to indicate that lexical features alone cannot be improved upon, a more thorough examination of the performance of models using different sorts of features tells a different story. In support of this goal, we also provide an overview of the ASAP short-answer scoring competition, which has gone largely unnoticed in the community of NLP researchers working on educational applications.
Research on using computers to score open-ended student responses has a long history, dating back to Ellis Page’s work on automated scoring of essays [@page66; @page68]. Since the very beginning of this research field, there has been an awareness that agreement with human raters is a limited evaluation measure. Page’s work demonstrated that the length of an essay correlates strongly with human ratings. Such superficial measures can sometimes do surprisingly well as predictive mechanisms, despite the fact that they are only marginally related to the skills and attributes we aim to measure with a writing task (the test *construct*).
Given a sample of scored test-taker responses, it is possible to identify many potentially measurable linguistic features that correlate well with score. Some of these features rely on advanced natural language processing, but many do not. Given the redundancy of information encoded in many of these features, and the difficulty of reliably measuring features that depend on advanced NLP, it is tempting to focus attention on superficial features that are easy to extract, and to hope that the redundancy will allow good prediction. However, a system that relies on superficial features as proxies for important underlying attributes will fail when it begins to see answers in which the measurable surface features are no longer correlated with the underlying attributes. Unfortunately, such answers are exactly what is to be expected when a sophisticated test-taking community begins to analyse the test in search of simple ways to get good scores. Therefore, it is important to understand the potential of deeper features even when their predictive contribution to scoring in a research setting is limited.
The field of automated essay scoring has made advances in the intervening years, allowing the development of features related to various aspects of the writing construct, including lexical sophistication, discourse structure, syntactic variety, and grammatical accuracy. The addition of these features has not only improved the conceptual basis of scoring but also improved the accuracy of these systems according to traditional evaluation metrics.
Other automated scoring tasks have not yet progressed to the same level of maturity. In particular, not as much work has been focused to date on automated scoring of short-answer questions. Such questions are distinguished from essays by their brevity (eliciting responses of only a few words or a few sentences), and by the fact that they are scored according to response content, rather than quality of written expression. The scoring rubrics for short-answer questions often require specific information (e.g., scientific principles, trends in a graph, or details from a reading passage) to be included in a response for it to receive credit.
The task of short-answer scoring has received more attention recently, however, because short-answer questions are expected to figure prominently in new, computerized state tests currently under development with Race To the Top funding from the US Department of Education. As proved to be the case for the automated essay scoring task, results on the recent ASAP short answer scoring task (described later in Section \[sec:asap-description\]) have demonstrated that superficial features (in this case, features related to the use of particular words in a response) are strongly predictive. We aim to re-examine the contribution of different sorts of predictive features on the same dataset of short-answer tasks on which these results were achieved, and demonstrate that attention to linguistic structure is empirically valuable in automated scoring of short answers.
Previous Work
=============
Like the field of automated essay scoring, research on methods for automated scoring of short answer questions has a history that spans multiple decades. As early as 1988, Carlson & Ward examined the potential use of natural language processing for the “formulating hypotheses” task, a new item type under consideration for the GRE test that would ask students to list all of the possible explanations they could think of that would account for some observed phenomenon (for example, a steady reduction in the mortality rate for a particular population). While this is a somewhat unusual item type, it is quite similar in its fundamental scoring characteristics (the fact that it is scored according to the correctness or semantic appropriateness of a short, textual unit) to many other “short answer” tasks that have been considered more recently.
Research on automated scoring of short-answer tasks continued at the Educational Testing Service during the early 1990s [@kaplan1991; @kaplan1994; @burstein1995], and received broader attention in the early 2000s, when a number of short answer scoring systems were developed, including ETS’ *c-rater* [@leacock03], AutoMark [@mitchell02], the Intelligent Essay Assessor [@landauer03], the Oxford-UCLES system [@sukkarieh05], and applications developed at the University of Portsmouth [@callear01] and the University of Manchester [@sargeant04]. Some approaches to the task have relied heavily on knowledge engineering, involving manual creation of patterns to encapsulate correct answer types for particular questions [@callear01; @sukkarieh05]. Other approaches have aimed to use more generic text similarity features to determine the distance between students’ responses and some “gold standard” answer or answers [@landauer03; @perez05; @mohler11; @meurers11; @hahn12]. Hybrid systems have also been developed, in which some human involvement is required for task-specific pattern creation or annotation, but other components of the system use automatically-constructed features and statistical calibration [@mitchell02; @leacock03; @nielsen08]. There has been a shift over time towards more fully-automated and statistically-based systems, and away from those relying on manual knowledge engineering, but the selection of methodology also depends on the exact type of short answer questions targeted by each system. For instance, the tasks addressed by Mitchell et al. required answers to include specific well-defined concepts (see Figure \[mitchell-item-fig\]), and were therefore more amenable to a knowledge engineering approach, whereas those addressed by Foltz et al. elicited longer, less-constrained responses (see Figure \[foltz-item-fig\]), and were scored according to the evidence students gave of their “depth of knowledge”, rather than for specific, correct concepts.
Some of these systems have recently seen operational use for scoring consequential tests. Foltz reported that Pearson’s Intelligent Essay Assessor was being used to score science questions on the Maryland State Assessment. Leacock and Chodorow also cite the use of ETS’ *c-rater* in a state assessment context. More opportunities for the use of such systems in consequential testing systems are likely to emerge in coming years, as well, as more state tests move from paper-and-pencil administration to online formats, and as new multi-state tests are developed. Two state consortia (known as PARCC[^1] and Smarter Balanced[^2]) have received funding from the US Department of Education to develop next-generation tests that can be used in multiple states, and incorporate innovative technology to address a new set of standards for what children at different grades should know and be able to do (the Common Core State Standards[^3]). These tests are slated to be launched in the 2014-2015 school year, and have explicitly included the automated scoring of open-ended tasks as one of their design desiderata.
Partly as a result of this increased commercial interest in the automated scoring of short-answer questions, recent efforts have arisen to empirically assess the state of the art in this field, and to compare the performance of available systems. One of
---
abstract: 'Let $R$ be a Noetherian ring, $N$ a finitely generated $R$-module and $I$ an ideal of $R$. It is shown that the sequences ${\operatorname{Ass}}_R R/(I^n)_a^{(N)}$, ${\operatorname{Ass}}_R (I^n)_a^{(N)}/ (I^{n+1})^{(N)}_a$ and ${\operatorname{Ass}}_R (I^n)_a^{(N)}/ (I^n)_a, n= 1,2, \dots,$ of associated prime ideals, are increasing and ultimately constant for large $n$. Moreover, it is shown that, if $S$ is a multiplicatively closed subset of $R$, then the topologies defined by $(I^n)_a^{(N)}$ and $S((I^n)_a^{(N)}), \,{n\geq1}$, are equivalent if and only if $S$ is disjoint from the quintasymptotic primes of $I$. By using this, we also show that, if $(R, \mathfrak{m})$ is local and $N$ is quasi-unmixed, then the local cohomology module $H^{\dim N}_I(N)$ vanishes if and only if there exists a multiplicatively closed subset $S$ of $R$ such that $\mathfrak{m} \cap S \neq \emptyset$ and that the topologies induced by $(I^n)_a^{(N)}$ and $S((I^n)_a^{(N)}), \, {n\geq1},$ are equivalent.'
address:
- 'Department of Mathematics, University of Tabriz, Tabriz, Iran; and School of Mathematics, Institute for Research in Fundamental Sciences (IPM), P.O. Box: 19395-5746, Tehran, Iran.'
- 'Martin-Luther-Universität Halle-Wittenberg, Fachbereich Mathematik und Informatik, D-06099 Halle (Saale), Germany'
author:
- 'Reza Naghipour$^*$ and Peter Schenzel'
title: 'Asymptotic behaviour of integral closures, quintasymptotic primes and ideal topologies'
---
Introduction
============
The important concept of integral closure of an ideal of a commutative Noetherian ring (with identity), developed by D. G. Northcott and D. Rees in [@NR], is fundamental to a considerable body of recent and current research both in commutative algebra and algebraic geometry. Let $R$ be a commutative ring (with identity), $I$ an ideal of $R$. In the case when $R$ is Noetherian, we denote by $(I)_a$ the integral closure of $I$, i.e., $(I)_a$ is the ideal of $R$ consisting of all elements $x\in R$ which satisfy an equation $x^n+ r_1x^{n-1}+ \cdots + r_n= 0$, where $r_i\in I^i, i=1, \ldots, n$.
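A standard worked example (recorded here for illustration; it is not taken from this paper) shows that the integral closure can be strictly larger than the ideal: in the polynomial ring $k[x,y]$ with $I=(x^2,y^2)$, the element $xy$ satisfies $$(xy)^2 - x^2y^2 = 0, \qquad -x^2y^2 \in I^2,$$ so $xy\in (I)_a$, although $xy\notin I$.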
In [@R] L.J. Ratliff, Jr., has shown that (when $R$ is Noetherian), the sequence of associated prime ideals $${\operatorname{Ass}}_R R/(I^n)_a, n= 1,2, \ldots ,$$ is increasing and ultimately constant; we use the notation $A^*_a(I)$ to denote ${\operatorname{Ass}}_R R/(I^n)_a$ for large $n$.
The notion of integral closures of ideals of $R$ relative to a Noetherian $R$-module $N$, was initiated by R.Y. Sharp et al., in [@STY]. An element $x\in R$ is said to be [*integrally dependent on $I$ relative to*]{} $N$ if there exists a positive integer $n$ such that $x^{n}N \subseteq \sum_{i=1}^n x^{n-i}I^iN.$ Then the set
$I^{(N)}_a=\{x\in R\,|\, x$ is integrally dependent on $I$ relative to $N\}$
is an ideal of $R$, called the [*integral closure of $I$ relative to*]{} $N$; in the case $N=R$, $I^{(N)}_a$ is the classical integral closure $I_a$ of $I$. It is clear that $I\subseteq I^{(N)}_a.$ We say that $I$ is [*integrally closed*]{} relative to $N$ if $I= I^{(N)}_a.$
In the second section (among other things) we show that, when $R$ is a Noetherian ring and $N$ is a finitely generated $R$-module, the sequences $${\operatorname{Ass}}_R R/(I^n)^{(N)}_a, \,\,\, {\operatorname{Ass}}_R (I^n)_a^{(N)}/ (I^{n+1})^{(N)}_a \text{ and } {\operatorname{Ass}}_R (I^n)_a^{(N)}/ ((I+{\operatorname{Ann}}_R N)^n)_a, \,\, n= 1,2, \ldots,$$ of associated primes, are ultimately constant; we let $A^*_a(I, N):={\operatorname{Ass}}_R R/(I^n)^{(N)}_a$ and $C^*_a(I, N):={\operatorname{Ass}}_R (I^n)_a^{(N)}/ ((I+{\operatorname{Ann}}_R N)^n)_a$, for large $n$. Pursuing this point of view further, we shall show that $A^*_a(I+ {\operatorname{Ann}}_R N)\setminus C^*_a(I, N) \subseteq A^*_a(I, N).$
In [@Mc2], McAdam studied the following interesting set of prime ideals of $R$ associated with $I$, $$\bar{Q}^*(I)= \{\mathfrak{p} \in {\operatorname{Spec}}R : \text{ there exists a } \mathfrak{q} \in {\operatorname{mAss}}\hat{R}_{\mathfrak{p}} \text{ such that } {\operatorname{Rad}}(I\hat{R}_{\mathfrak{p}}+ \mathfrak{q})= \mathfrak{p}\hat{R}_{\mathfrak{p}}\},$$ and he called $\bar{Q}^*(I)$ the set of [*quintasymptotic prime ideals*]{} of $I$.
On the other hand, Ahn in [@Ah] extended the notion of the quintasymptotic prime ideals to a finitely generated module over $R.$ More precisely, if $N$ is a finitely generated $R$-module then a prime ideal $\mathfrak{p}$ of $R$ is said to be a [*quintasymptotic prime ideal*]{} of $I$ with respect to $N$ whenever there exists a $\mathfrak{q}\in {\operatorname{mAss}}_{\hat{R}_\mathfrak{p}}\hat{N}_\mathfrak{p}$ such that ${\operatorname{Rad}}(I\hat{R}_{\mathfrak{p}}+ \mathfrak{q})= \mathfrak{p}\hat{R}_{\mathfrak{p}}.$ The set of all [*quintasymptotic prime ideals*]{} of $I$ with respect to $N$ is denoted by $\bar{Q}^*(I,N).$
In the third section, for a multiplicatively closed subset $S$ of $R$, we examine the equivalence between the topologies defined by the filtrations $\{(I^n)_a^{(N)}\}_{n\geq1}$, $\{S((I^n)_a^{(N)})\}_{n\geq1}$, $\{S(((I+ {\operatorname{Ann}}_RN)^n)_a)\}_{n\geq 1}$ and $\{S((I+ {\operatorname{Ann}}_RN)^n)\}_{n\geq 1}$ by using the quintasymptotic prime ideals of $I$ with respect to $N$. Some of these results have been established by Schenzel [@Sc; @Sc1], McAdam [@Mc2] and Mehrvarz et al. [@MNS], in certain cases when $N=R$.
A typical result in this direction is the following:
Let $N$ be a finitely generated module over a Noetherian ring $R$ and let $I$ be an ideal of $R$. Let $S$ be a multiplicatively closed subset of $R$. Then the topologies defined by $(I^n)_a^{(N)}$, $S((I^n)_a^{(N)})$, $S(((I+ {\operatorname{Ann}}_RN)^n)_a)$ and $S((I+ {\operatorname{Ann}}_RN)^n), \, {n\geq 1},$ are equivalent if and only if $S$ is disjoint from each of the quintasymptotic prime ideals of $I$ with respect to $N$.
The proof of Theorem 1.1 is given in Theorem 3.11. One of our tools for proving Theorem 1.1 is the following, which is a characterization of the quintasymptotic prime ideals of $I$ with respect to $N$. In the following, we use $I_a^{\langle N \rangle}$ to denote the union of the ideals $(I_a^{(N)}:_Rs)$, where $s$ varies over $R\backslash \bigcup \{\frak p\,|\,\frak p\in {\operatorname{mAss}}_RN/IN\}$; in particular, for every integer $k\geq1$ and every prime ideal
---
abstract: 'On $\mathbb R^N$ equipped with a normalized root system $R$ and a multiplicity function $k\geq 0$ let us consider a (non-radial) kernel $K(\mathbf x)$ which has properties similar to those from the classical theory. We prove that a singular integral Dunkl convolution operator associated with the kernel $K$ is bounded on $L^p$ for $1<p<\infty$ and of weak-type (1,1). Further we study a maximal function related to the Dunkl convolutions with truncation of $K$.'
address: 'J. Dziubański and A. Hejna, Uniwersytet Wrocławski, Instytut Matematyczny, Pl. Grunwaldzki 2/4, 50-384 Wrocław, Poland'
author:
- Jacek Dziubański and Agnieszka Hejna
title: Singular integrals in the rational Dunkl setting
---
Introduction
============
The aim of this note is to study singular integral convolution operators in the Dunkl setting. We fix a normalized root system $R$ in $\mathbb R^N$ and a multiplicity function $k\geq 0$. Let $dw(\mathbf x)$ denote the associated measure and $\mathbf N$ the homogeneous dimension (see Section \[sec:preliminaries\]). For a positive integer $s$ consider a kernel $K\in C^{s} (\mathbb R^N\setminus \{0\})$ such that $$\label{eq:uni_on_annulus}\tag{A} \sup_{0<a<b<\infty} \Big| \int_{a<\|\mathbf x\|<b} K(\mathbf x)\, dw(\mathbf x)\Big|<\infty,$$ $$\label{eq:assumption1}\tag{D}
\Big|\frac{\partial^\beta}{\partial \mathbf x^\beta} K(\mathbf x)\Big|\leq C\|\mathbf x\|^{-\mathbf N-|\beta|} \quad \text{for all} \ |\beta |\leq s.$$ Set $$K^{\{t\}}(\mathbf x)=K(\mathbf x)(1-\phi(t^{-1} \mathbf x)),$$ where $\phi$ is a fixed radial $C^\infty$-function supported by the unit ball $B(0,1)$ such that $\phi (\mathbf x)=1$ for $\|\mathbf x\|<1/2$. We prove that if $s$ is sufficiently large, then there are constants $C_p>0$ independent of $t>0$ such that $$\begin{aligned}
\| f*K^{\{t\}} \|_{L^p(dw)}\leq C_p\| f\|_{L^p(dw)} \quad \text{for} \ 1<p<\infty
\end{aligned}$$ and $$\begin{aligned}
w(\{\mathbf x\in\mathbb R^N:| f*K^{\{t\}}(\mathbf x)|>\lambda\} )\leq C_1\lambda^{-1}\| f\|_{L^1(dw)}\end{aligned}$$ (Theorems \[theorem:weak\_type\_truncated1\] and \[theorem:strong\_type\_truncated1\]), where the symbol $*$ denotes the Dunkl convolution. We also prove that under the additional assumption $$\label{eq:limitA}\tag{L}
\lim_{\varepsilon \to 0} \int_{\varepsilon <|\mathbf x|<1} K(\mathbf x)\, dw(\mathbf x)=L,$$ where $L\in \mathbb C$, the limit $\lim_{t\to 0} f*K^{\{t\}} (\mathbf x)$ exists and defines a bounded operator on $L^p(dw)$ for $1<p<\infty$, which is of weak-type (1,1) as well (Theorem \[theorem:weak\_type\_K\], see also Theorem \[theorem:main\_L2\]). Moreover, in this case, the maximal operator $$K^*f(\mathbf x)=\sup_{t>0} |f*K^{\{t\}}(\mathbf x)|$$ is bounded on $L^p(dw)$ for $1<p<\infty$ and of weak-type (1,1) (Theorem \[theorem:maximal\]).
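For orientation, it may help to recall the classical model case $k\equiv 0$ (so that $\mathbf N = N$): the Riesz kernels $$K_j(\mathbf x)=\frac{x_j}{\|\mathbf x\|^{N+1}},\qquad j=1,\ldots,N,$$ are smooth away from the origin, homogeneous of degree $-N$, and odd, so the annulus integrals in (A) vanish, (L) holds with $L=0$, and homogeneity gives $|\partial^\beta K_j(\mathbf x)|\leq C_\beta\|\mathbf x\|^{-N-|\beta|}$, which is (D) for every $s$.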
If $k\equiv 0$, then $dw$ is the Lebesgue measure in $\mathbb R^N$ and the Dunkl convolution reduces to the classical one. So the above results are well known and $s=1$ suffices in this case (see e.g. [@Duo Chapter 5], [@St1], [@St2]). However, in the general case of $R$ and $k$, the main difficulty which one faces when trying to study singular integral operators in the Dunkl setting lies in the lack of knowledge about boundedness of the so-called Dunkl translations $\tau_{\mathbf x}$ on $L^p(dw)$-spaces for $p\ne 2$. Consequently, it is not known if for a fixed non-radial $L^1$-function $f$ the Dunkl convolution operator $g\mapsto f*g$ is bounded on $L^p(dw)$. The recent observations made in [@DzH] allow us to obtain some knowledge about the functions $\tau_{\mathbf y}f(\mathbf x)$ provided $f$ satisfies certain smoothness and decay conditions. In the present paper we explore and extend these ideas of [@DzH] to prove boundedness of singular integral convolution operators provided $s=s_{0}$ in (D), where $s_0$ is the smallest even integer bigger than $\mathbf{N}/2$.
Preliminaries and notation {#sec:preliminaries}
==========================
The Dunkl theory is a generalization of the Euclidean Fourier analysis. It started with the seminal article [@Dunkl] and developed extensively afterwards (see e.g. [@RoeslerDeJeu], [@Dunkl0], [@Dunkl3], [@Dunkl2], [@GR], [@Roesler2], [@Roesle99], [@Roesler2003], [@ThangaveluXu], [@Trimeche2002]). In this section we present basic facts concerning the theory of the Dunkl operators. For details we refer the reader to [@Dunkl], [@Roesler3], and [@Roesler-Voit].
We consider the Euclidean space $\mathbb R^N$ with the scalar product $\langle\mathbf x,\mathbf y\rangle=\sum_{j=1}^N x_jy_j$, $\mathbf x=(x_1,\ldots,x_N)$, $\mathbf y=(y_1,\ldots,y_N)$, and the norm $\| \mathbf x\|^2=\langle \mathbf x,\mathbf x\rangle$. For a nonzero vector $\alpha\in\mathbb R^N$, the reflection $\sigma_\alpha$ with respect to the hyperplane $\alpha^\perp$ orthogonal to $\alpha$ is given by $$\begin{aligned}
\sigma_\alpha (\mathbf x)=\mathbf x-2\frac{\langle \mathbf x,\alpha\rangle}{\| \alpha\| ^2}\alpha.\end{aligned}$$ In this paper we fix a normalized root system in $\mathbb R^N$, that is, a finite set $R\subset \mathbb R^N\setminus\{0\}$ such that $\sigma_\alpha (R)=R$ and $\|\alpha\|=\sqrt{2}$ for every $\alpha\in R$. The finite group $G$ generated by the reflections $\sigma_\alpha \in R$ is called the [*Weyl group*]{} ([*reflection group*]{}) of the root system. A [*multiplicity function*]{} is a $G$-invariant function $k:R\to\mathbb C$ which will be fixed and $\geq 0$ throughout this paper. Let $$\begin{aligned}
dw(\mathbf x)=\prod_{\alpha\in R}|\langle \mathbf x,\alpha\rangle|^{k(\alpha)}\, d\mathbf x\end{aligned}$$ be the associated measure in $\mathbb R^N$, where, here and subsequently, $d\mathbf x$ stands for the Lebesgue measure in $\mathbb R^N$. We denote by $\mathbf N=N+\sum_{\alpha \in R} k(\alpha)$ the homogeneous dimension of the system. Clearly, $$\begin{aligned}
w(B(t\mathbf x, tr))=t^{\mathbf N}w(B(\mathbf x,r)) \ \ \text{\rm for all } \mathbf x\in\mathbb R^N, \ t,r>0 \end{aligned}$$ and $$\begin{aligned}
\int_{\mathbb R^N} f(\mathbf x)\, dw(\mathbf x)=\int_{\mathbb R^N} t^{-\mathbf N} f(\mathbf x\slash t)\, dw(\mathbf x)\ \ \text{for} \ f\in L^1(dw) \ \text{\rm and} \ t>0.\end{aligned}$$ Observe that ([^2]) $$\begin{aligned}
w(B(\mathbf x,r))\sim r^{N}\prod_{\alpha \in R} (|\langle \mathbf x,\alpha\rangle |+r)^{k(\alpha)},\end{aligned}$$ so $dw(\mathbf x)$ is doubling, that is, there is a constant $C>0$ such that $$\label{eq:doubling}
---
abstract: 'Recent work highlights that tens of Galactic double neutron stars are likely to be detectable in the millihertz band of the space-based gravitational-wave observatory, LISA. Kyutoku and Nishino point out that some of these binaries might be detectable as radio pulsars using the Square Kilometer Array (SKA). We point out that the joint LISA+SKA detection of a $f_\text{gw}\gtrsim\unit[1]{mHz}$ binary, corresponding to a binary period of $\lesssim\unit[400]{s}$, would enable precision measurements of ultra-relativistic phenomena. We show that, given plausible assumptions, multi-messenger observations of ultra-relativistic binaries can be used to constrain the neutron star equation of state with remarkable fidelity. It may be possible to measure the mass-radius relation with a precision of [$\approx$0.2%]{} after $\unit[10]{yr}$ of observations with the SKA. Such a measurement would be roughly an order of magnitude more precise than possible with other proposed observations. We summarize some of the other remarkable science made possible with multi-messenger observations of millihertz binaries, and discuss the prospects for the detection of such objects.'
author:
- Eric Thrane
- Stefan Osłowski
- Paul Lasky
bibliography:
- 'bibliography.bib'
title: 'Ultra-relativistic astrophysics using multi-messenger observations of double neutron stars with LISA and the SKA'
---
Detecting ultra-relativistic Galactic binaries with LISA
========================================================
Kyutoku and Nishino [@Kyutoku] recently pointed out that the Laser Interferometer Space Antenna (LISA) [@LISA] is likely to detect ultra-relativistic, Galactic double neutron stars, some of which could be subsequently detected in radio with follow-up from the Square Kilometer Array (SKA) [@SKA]. These millihertz binaries have the potential to probe a regime of relativistic astrophysics not accessible with currently known binary systems. For example, the Double Pulsar (PSR J0737–3039) has an orbital period of $\unit[2.5]{hr}$ and semi-major axis $a=\unit[6\times10^{-3}]{AU}$ [@DoublePulsar]. In contrast, the double neutron stars observed by LISA with gravitational-wave frequencies $\gtrsim\unit[1]{mHz}$ will have binary periods of $P_B\lesssim\unit[2000]{s}$ and semi-major axes $a\lesssim\unit[1\times10^{-3}]{AU}$ [^1].
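The quoted periods follow from the standard quadrupole relation for a circular binary, whose dominant gravitational-wave frequency is twice the orbital frequency: $$P_B=\frac{2}{f_\text{gw}},\qquad f_\text{gw}=\unit[1]{mHz}\;\Rightarrow\;P_B=\unit[2000]{s},\qquad f_\text{gw}=\unit[5]{mHz}\;\Rightarrow\;P_B=\unit[400]{s}.$$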
The number of double neutron stars emitting gravitational waves above $\unit[1]{mHz}$ can be estimated using the double neutron star merger rate inferred from LIGO/Virgo following the detection of GW170817 [@GW170817]. Following [@Kyutoku; @Kyutoku2], we estimate $$\begin{aligned}
N_\text{LISA} = & (47-690)
\left(\frac{{\cal M}}{\unit[1.2]{M_\odot}}\right)^{-5/3}
\left(\frac{f_\text{gw}}{\unit[1]{mHz}}\right)^{-8/3} .\end{aligned}$$ Here, ${\cal M}$ is chirp mass, which we take throughout to be $1.2 M_\odot$ (corresponding to an equal mass binary with $1.38 M_\odot$ components). The range of values (90% credible interval) comes from uncertainty in the merger rate. While not all of these binaries will be detectable by LISA, many of them will be.
We adopt the convention that a double neutron star is detectable if it produces a matched-filter signal-to-noise ratio $\rho>7$; see, for example, [@Robson]. We calculate typical signal-to-noise ratios using [@Seto] $$\begin{aligned}
\widehat\rho \equiv & \langle\rho^2\rangle^{1/2} \\
= & \frac{8G^{5/3}T^{1/2}{\cal M}^{5/3}\pi^{2/3}}{5^{1/2}c^4d}
\left(\frac{f_\text{gw}^{2/3}}{S_n^{1/2}(f_\text{gw})}\right) ,\end{aligned}$$ which is the square root of the signal-to-noise ratio squared, averaged over binary orientation and sky location [@Robson]. Here, $T$ is the observation time, $G$ is the gravitational constant, $c$ is the speed of light, $d$ is the distance, and $S_n(f_\text{gw})$ is the noise power-spectral density. We model the LISA noise curve (shown below in Fig. \[fig:asd\]) using the $T=\unit[4]{yr}$ prescription from [@Robson], which includes the effect of foreground from white-dwarf binaries. Using this expression, one finds that a $\unit[1]{mHz}$ binary can be detected to distances of $d\approx\unit[9]{kpc}$ (beyond the distance to the Galactic Center), while a $\unit[5]{mHz}$ binary can be detected to distances of $d\approx\unit[590]{kpc}$, 75% the distance to Andromeda.
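As a rough illustration of how the expression above is used, the following is a minimal sketch that evaluates $\widehat\rho$ for a monochromatic binary. The function `Snoise` is a crude placeholder of our own, not the Robson et al. noise curve, so the numbers it produces are indicative only.

```python
# Hedged sketch: evaluate the orientation-averaged SNR formula above.
# Snoise() is a crude placeholder, NOT the Robson et al. LISA noise model.
import numpy as np

G, c = 6.674e-11, 2.998e8                      # SI units
Msun, kpc, yr = 1.989e30, 3.086e19, 3.156e7

def Snoise(f):
    """Placeholder one-sided noise PSD [1/Hz]; swap in a real model."""
    return 1e-40 * (1.0 + (1e-3 / f)**4)

def snr_hat(fgw, Mc=1.2 * Msun, T=4 * yr, d=9 * kpc):
    """sqrt(<rho^2>), averaged over binary orientation and sky location."""
    num = 8 * G**(5 / 3) * np.sqrt(T) * Mc**(5 / 3) * np.pi**(2 / 3)
    return num / (np.sqrt(5) * c**4 * d) * fgw**(2 / 3) / np.sqrt(Snoise(fgw))

print(snr_hat(1e-3))   # compare against the rho > 7 detection threshold
```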
This result is roughly consistent with other estimates of the number of double neutron stars detectable with LISA. Kremer et al. [@NU] examined the population of LISA-band binaries in the globular clusters of the Milky Way. They estimate $22$ globular-cluster double neutron stars will radiate in the LISA band ($\approx\unit[10^{-2}-100]{mHz}$). Of these, two double neutron stars are likely to be detected above the LISA noise floor. Since many binaries in globular clusters form dynamically, many of these systems have significant eccentricity. The number of millihertz binaries in globular clusters is likely to be small compared to the number of millihertz binaries in the field since the prevalence of millihertz binaries is directly related to the double neutron star merger rate, and $N$-body studies predict that globular clusters are relatively inefficient at merging double neutron stars; see, e.g., [@Belczynski]. See also work by Seto [@Seto] and Lau et al. [@Lau], who considered the population of LISA-band double neutron stars in the Local Group. Our best guess for the Galactic rate of double neutron star mergers is $\unit[1.5\times10^{-4}]{MWEG^{-1} yr^{-1}}$ [@GW170817], which implies a typical time between mergers in the Milky Way of $\sim\unit[6700]{yr}$. (Here, MWEG stands for “Milky Way Equivalent Galaxy.”) Therefore, our best guess for the shortest time to merge for a Galactic double neutron star binary is half that: $\unit[3300]{yr}$. Below, we focus our attention on binaries that are, in principle, observable as pulsars. Here, we assume that $\approx10\%$ of the recycled neutron stars in double neutron star systems are detectable as pulsars due to beaming effects. Thus, our best guess for the shortest time to merge for a Galactic double neutron star binary [*with a potentially observable radio pulsar*]{} is [[$\unit[33]{kyr}$]{}]{}.
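The chain of order-of-magnitude estimates in the preceding paragraph can be reproduced directly:

```python
# Reproducing the back-of-the-envelope chain of estimates in the text.
rate = 1.5e-4                 # Galactic DNS merger rate [1/yr]
t_between = 1.0 / rate        # ~6700 yr between Milky Way mergers
t_shortest = t_between / 2    # ~3300 yr: expected shortest time to merge
t_pulsar = t_shortest / 0.10  # ~33 kyr once ~10% radio beaming is included
print(t_between, t_shortest, t_pulsar)  # ~6666.7, ~3333.3, ~33333.3 yr
```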
Let us imagine that such a double neutron star binary exists in the Milky Way and give it a fictitious name: PSR J1234–5678 or “J1234” for short. We proceed to investigate the properties of J1234, and then see how our results would change assuming different times to merge. For reasons that will become clear momentarily, we are especially interested in millihertz binaries with non-negligible eccentricity. Therefore, let us further suppose that J1234 was born recently through unstable “case BB” mass transfer. Such systems have been hypothesized as possible progenitors for binary neutron star mergers [@Ivanova; @Belczynski2; @VignaGomez] as well as sources for $r$-process enrichment in ultra-faint dwarf galaxies [@Safarzadeh]. In this scenario, a neutron star - helium star binary undergoes unstable mass transfer, leading to a common envelope event, and eventually—in some cases—a neutron star - helium core binary with an orbital period of ${\cal O}(\unit[1000]{s})$. Since the binary is so tight at this stage of its evolution, it is likely to survive when the helium core undergoes a supernova, leading to a double neutron star binary likely to merge in $\lesssim\unit[10]{Myr}$. A binary with such a short life time can retain significant eccentricity as it passes through the LISA band. For illustrative purposes, we assume that J1234 was born with an eccentricity of [[$e_0=0.75$]{}]{} (typical of population synthesis studies) and a period of [[$P_b^0=\unit[12]{ks}$]{}]{} giving it a lifetime of $\unit[10]{Myr}$ [@Peters64].
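A minimal sketch of the orbit-averaged Peters [@Peters64] evolution used to evolve such a binary is given below. The component masses and the initial values $e_0=0.75$, $P_b^0=\unit[12]{ks}$ follow the text; the tolerances and the stopping radius are illustrative assumptions.

```python
# Hedged sketch of the orbit-averaged Peters (1964) equations used to
# evolve the fictitious binary J1234. Initial values follow the text.
import numpy as np
from scipy.integrate import solve_ivp

G, c, Msun, yr = 6.674e-11, 2.998e8, 1.989e30, 3.156e7
m1 = m2 = 1.38 * Msun
M = m1 + m2

def peters(t, y):
    """Peters (1964) orbit-averaged da/dt and de/dt."""
    a, e = y
    pre = G**3 * m1 * m2 * M / c**5
    dadt = -(64 / 5) * pre / (a**3 * (1 - e**2)**3.5) \
        * (1 + (73 / 24) * e**2 + (37 / 96) * e**4)
    dedt = -(304 / 15) * pre * e / (a**4 * (1 - e**2)**2.5) \
        * (1 + (121 / 304) * e**2)
    return [dadt, dedt]

def merged(t, y):          # stop the integration shortly before merger
    return y[0] - 1e5      # semi-major axis down to 100 km
merged.terminal = True

a0 = (G * M * (12e3 / (2 * np.pi))**2)**(1 / 3)   # Kepler: P_b^0 = 12 ks
sol = solve_ivp(peters, [0, 1e7 * yr], [a0, 0.75], events=merged, rtol=1e-9)
print(sol.t[-1] / yr / 1e6, "Myr elapsed; final e =", sol.y[1, -1])
```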
We evolve J1234 forward in time using the standard prescription from [@Peters64], so that it is [[$\unit[33]{kyr}$]{}]{} from merger. At this point in its evolution, the orbital period of J1234 is [[$P_b = \
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'Topological measures and deficient topological measures are defined on open and closed subsets of a topological space, generalize regular Borel measures, and correspond to (non-linear in general) functionals that are linear on singly generated subalgebras or singly generated cones of functions. They lack subadditivity, and many standard techniques of measure theory and functional analysis do not apply to them. Nevertheless, we show that many classical results of probability theory hold for topological and deficient topological measures. In particular, we prove a version of Aleksandrov’s Theorem for equivalent definitions of weak convergence of deficient topological measures. We also prove a version of Prokhorov’s Theorem which relates the existence of a weakly convergent subsequence in any sequence in a family of topological measures to the characteristics of being a uniformly bounded in variation and uniformly tight family. We define Prokhorov and Kantorovich-Rubinstein metrics and show that convergence in either of them implies weak convergence of (deficient) topological measures on metric spaces. We also generalize many known results about various dense and nowhere dense subsets of deficient topological measures. The present paper constitutes a first step to further research in probability theory and its applications in the context of topological measures and corresponding non-linear functionals.'
author:
- 'S. V. Butler, University of California, Santa Barbara'
date: 'May 20, 2020'
title: Weak convergence of topological measures
---
Introduction
============
The origins of the theory of quasi-linear functionals and topological measures lie in mathematical axiomatization and interpretations of quantum physics ([@vonN], [@MackeyPaper], [@MackeyBook], [@Kadison]). In J. von Neumann’s axiomatization of quantum mechanics, physical observables can be represented by the space $\mathcal{L}$ of Hermitian operators on a complex Hilbert space. The state of a physical system is represented by a positive normalized linear functional on $\mathcal{L}$. Some physicists, however, argued that the linearity of the functional, $\rho(A + B) = \rho(A) + \rho(B), \, A, B \in \mathcal{L} $, makes sense if observables $A$ and $B$ are simultaneously measurable, which means that $A,B$ are polynomials of the same $C \in \mathcal{L} $, so $A,B$ belong to the subalgebra of $\mathcal{L} $ generated by $C$. Mathematical interpretations of quantum physics by G. W. Mackey and R. V. Kadison led to very interesting mathematical problems, including the extension problem for probability measures in von Neumann algebras. This extension problem may be regarded as a special case of the linearity problem for physical states, which is closely related to the existence of quasi-linear functionals. J. F. Aarnes [@Aarnes:TheFirstPaper] introduced quasi-linear functionals (that are not linear) on $C(X) $ for a compact Hausdorff space $X$ and corresponding set functions, generalizing measures (initially called quasi-measures, now topological measures). He connected the two by establishing a representation theorem. Aarnes’s quasi-linear functionals are functionals that are linear on singly generated subalgebras, but (in general) not linear. For more information about physical interpretation of quasi-linear functionals see [@EntovPolterovich], [@EnPoltZap], [@Entov], [@PoltRosenBook], [@Aarnes:PhysStates69], [@Aarnes:QuasiStates70], [@Aarnes:TheFirstPaper].
M. Entov and L. Polterovich first linked the theory of quasi-linear functionals to symplectic topology. They introduced symplectic quasi-states and partial symplectic quasi-states ([@EntovPolterovich]), which are subclasses of quasi-linear functionals. (On a symplectic manifold that is a closed oriented surface every normalized quasi-linear functional is a symplectic quasi-state, see [@PoltRosenBook Chapter 5]). Article [@EntovPolterovich] was followed by numerous papers and a monograph [@PoltRosenBook], and many authors have investigated and used various aspects of symplectic quasi-states and topological measures: their properties, their connection to spectral numbers and homogeneous quasi-morphisms, ways of constructing and approximating symplectic quasi-states, etc. Symplectic quasi-states can be used as a measurement of Poisson commutativity, and topological measures can be used to distinguish Lagrangian knots that have identical classical invariants ([@EntovPolterovich Chapters 4,6]). Symplectic quasi-states and topological measures play an important role in function theory on symplectic manifolds.
Deficient topological measures are generalizations of topological measures. They were first defined and used by A. Rustad and O. Johansen ([@OrjanAlf:CostrPropQlf]) and later independently reintroduced and further developed by M. Svistula ([@Svistula:Signed], [@Svistula:DTM]). Deficient topological measures are not only interesting by themselves, but also provide an essential framework for studying topological measures and quasi-linear functionals. Topological measures and deficient topological measures generalize regular Borel measures and correspond to functionals that are linear on singly generated subalgebras or singly generated cones of functions. These non-linear functionals can be described in several ways, including symmetric and asymmetric Choquet integrals, see [@DD pp. 62, 87] and [@Butler:ReprDTM Corollary 8.5, Theorem 8.7, Remark 8.11]. Deficient topological measures are not supermodular, and their domains are not closed under intersection and union; for these and other reasons, results of Choquet theory do not automatically translate for functionals representing deficient topological measures. It is interesting that, with different proof methods, one may obtain results that are typical for, stronger than, or strikingly different from Choquet theory results.
Topological measures and deficient topological measures are defined on open and closed subsets of a topological space, which means that there is no algebraic structure on the domain. They lack subadditivity and other properties typical for measures, and many standard techniques of measure theory and functional analysis do not apply to them. Nevertheless, we show that many classical results of probability theory hold for topological and deficient topological measures. In particular, we prove versions of Aleksandrov’s Theorem for equivalent definitions of weak convergence of topological and deficient topological measures. We also prove a version of Prokhorov’s Theorem which relates the existence of a weakly convergent subsequence in any sequence in a family of topological measures to the characteristics of being a uniformly bounded in variation and uniformly tight family. We define Prokhorov and Kantorovich-Rubinstein metrics and show that convergence in either of them implies weak convergence of deficient topological measures. We also generalize many known results about various dense and nowhere dense subsets of deficient topological measures.
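For context, we recall the classical notion that these metrics generalize (a standard fact, not a result of this paper): for Borel probability measures $\mu,\nu$ on a metric space $(X,d)$, the Prokhorov metric is $$\pi(\mu,\nu)=\inf\{\varepsilon>0:\ \mu(A)\leq\nu(A^\varepsilon)+\varepsilon\ \text{and}\ \nu(A)\leq\mu(A^\varepsilon)+\varepsilon\ \text{for all Borel }A\},$$ where $A^\varepsilon$ is the open $\varepsilon$-neighborhood of $A$; on separable metric spaces, convergence in $\pi$ is equivalent to weak convergence. The metrics defined in this paper play the analogous role for (deficient) topological measures, though, as stated above, only the implication from metric convergence to weak convergence is claimed.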
The present paper constitutes a first step to further research in probability theory and its applications in the context of topological measures and corresponding non-linear functionals.
In this paper $X$ is a locally compact Hausdorff space. By $C(X)$ we denote the set of all real-valued continuous functions on $X$ with the uniform norm, by $C_0(X)$ the set of continuous functions on $X$ vanishing at infinity, by $C_c(X)$ the set of continuous functions with compact support, and by $C_0^+(X)$ the collection of all nonnegative functions from $C_0(X)$. When we consider maps into extended real numbers we assume that any such map is not identically $\infty$.
We denote by $\overline E$ the closure of a set $E$, and by $ \bigsqcup$ a union of disjoint sets. A set $A \subseteq X$ is called bounded if $\overline A$ is compact. We denote by $id$ the identity function $id(x) = x$, and by $1_K$ the characteristic function of a set $K$. By $ supp \, f $ we mean $ \overline{ \{x: f(x) \neq 0 \} }$. We say that $Y$ is dense in $Z$ if $Z \subseteq \overline Y$.
Several collections of sets are used often: $\mathscr{O}(X)$, $\mathscr{C}(X)$, and $\mathscr{K}(X)$ denote the collection of open subsets of $X$, the collection of closed subsets of $X$, and the collection of compact subsets of $X$, respectively.
\[MDe2\] Let $X$ be a topological space and $\nu$ be a set function on a family $\mathcal{E}$ of subsets of $X$ that contains $\mathscr{O}(X) \cup \mathscr{C}(X)$ with values in $[0, \infty]$. We say that
- $\nu$ is compact-finite if $ \nu(K) < \infty$ for any $ K \in \mathscr{K}(X)$;
- $\nu$ is simple if it only assumes values $0$ and $1$;
- $ \nu$ is finite if $ \nu(X) < \infty$;
- $\nu$ is inner regular (or inner compact regular) if $\nu(A) = \sup \{ \nu(C) : C \subseteq A, C \in \mathscr{K}(X)\}$ for $A \in \mathcal{E}$;
- $\nu$ is inner closed regular if $\nu(A) = \sup \{ \nu(C) : C \subseteq A, C \in \mathscr{C}(X) \}$ for $A \in \mathcal{E}$;
---
abstract: 'We prove that on the typical translation surface the flows in almost every pair of directions are not isomorphic to each other and are in fact disjoint. It was not known if there were any translation surfaces other than torus covers with this property. We provide an application to the convergence of ‘circle averages’ for the flow (away from a sequence of radii of density 0) for such surfaces. Even the density of a sequence of ‘circles’ was only known in a few special examples. MSC classes: 37A10, 37A25, 37A34, 37E35'
address:
- ' University of Utah Department of Mathematics, 203 155 S 1400 E, Room 233 Salt Lake City, UT, 84112-0090 USA'
- 'I2M, Centre de Mathématiques et Informatique (CMI), Université Aix-Marseille, 39 rue Joliot Curie, 13453 Marseille Cedex 13, France.'
author:
- 'Jon Chaika, Pascal Hubert'
title: Circle averages and disjointness in typical translation surfaces on every Teichmüller disc
---
The illumination problem is a classical one in billiard theory (see for instance [@LMW] and references therein). A light source is located at some point in the billiard table: we wonder which part of the table is eventually illuminated. This question has recently been solved in full generality for rational polygons and translation surfaces by Lelièvre, Monteil and Weiss [@LMW] using deep results of Eskin and Mirzakhani on moduli spaces of translation surfaces [@Eskin-Mirzakhani]. Here we tackle a related question: we want to understand how large light “circles” distribute themselves in translation surfaces. This question was posed orally to the second named author by Boshernitzan more than 10 years ago. It was also asked in [@Mon Section 0.1.5 Page 13].
Formally, let $(X,\omega)$ denote a compact translation surface (with distinguished vertical direction). Let $F_{2\pi\theta,\omega}^t$ denote the linear flow in direction $2\pi\theta$ at time $t$ on $(X,\omega)$ and $\lambda_\omega$ denote the (2-dimensional) area on $(X,\omega)$ normalized to have area 1. [^1]
A translation surface is *illuminated by circles* if $$\underset{t \to \infty}\lim \, \int_0^1 h(F_{2\pi\theta}^tp) d\theta=\int_X h d\lambda_\omega$$ for all points $p$ in $X$ and all continuous functions $h$.
Is the typical surface illuminated by circles? Is every surface illuminated by circles?
It is easy to see that a flat torus is illuminated by circles since a piece of a large circle has small curvature and can be approximated by a segment. For a translation surface, this is an open problem. The main difference is that a big “circle" on a translation surface of higher genus is a union of disjoint small arcs. The size of each arc decreases when the radius of the circle grows.
We prove a partial result that requires definitions:
Let $A \subset \mathbb{R}$. The *density of $A$* is $\underset{N \to \infty}{\liminf}\, \frac{\lambda(A \cap [-N,N])}{2N}$.
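For illustration, two standard examples: the set $$A=\bigcup_{n\in\mathbb Z}\left[n,\,n+\tfrac12\right]$$ has density $\tfrac12$, while any subset of $\mathbb{R}$ of finite Lebesgue measure has density $0$.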
A surface is *weakly illuminated by circles* if for each $p$ there exists a set of density 1, $G_p\subset \mathbb{R}$ so that $$\label{eq:illuminate}
\underset{t \in G_p}{\lim} \int_0^1 h(F^t_{2\pi \theta}(p))d\theta=\int hd\lambda_\omega$$ for all continuous functions $h$.
\[thm:weakly seen\] Almost every surface is weakly illuminated by circles. [^2] In fact, if $\omega$ is a translation surface then for almost every $A \in SL_2(\mathbb{R})$, $A\omega$ is weakly illuminated by circles.
The weaker question of whether circles became dense was also open (and is resolved by the previous theorem). That is:
Almost every surface [^3] has the property that for any $\epsilon>0$ there is a $T$ so that $\cup_{\theta \in [0,2\pi)} F^T_{\theta}(p)$ is $\epsilon$-dense (in the usual flat metric on the surface).
This answers a question in [@Mon].
We derive Theorem \[thm:weakly seen\] from
\[thm:typ disjoint\] For almost every surface [^4] for any $k \in \mathbb{N}$ we have $$\lambda^k(\{(\theta_1,...,\theta_k):F_{\theta_1}\times ...\times F_{\theta_k} \text{ is uniquely ergodic} \})=1$$ where $\lambda$ is the (normalized) Lebesgue measure on the circle. Moreover, for every $\omega$ and almost every $A\in SL_2(\mathbb{R})$ we have that $$\lambda^k(\{(\theta_1,...,\theta_k):F_{\theta_1, A\omega}\times ...\times F_{\theta_k, A\omega} \text{ is uniquely ergodic} \})=1.$$
For almost every surface the flow in almost every direction is not isomorphic to the vertical flow.
Before this result it was not known whether for every surface, other than torus covers, there was a single isomorphism class (depending on the surface) so that the flow in almost every direction was in this isomorphism class. This is a strengthening of a result by Gadre and the first named author [@Disjoint; @flow] (which ruled out that there was one isomorphism class for almost every translation surface).
Organization of the paper
-------------------------
The condition that a surface is weakly illuminated by circles is approachable from general ergodic theory. In section \[sect:reduction\], we prove that Theorem \[thm:typ disjoint\] (for $k=2$) implies Theorem \[thm:weakly seen\]. In section \[sect:disjointness\], we provide an abstract disjointness criterion which is a refinement of the main result in [@Disjoint]. We apply this criterion to translation flows in section \[sect:application\] using a matrix decomposition. Given two directional flows $F_{\theta_1}$ and $F_{\theta_2}$, the $SL_2(\mathbb{R})$ deformation allows us to match two sets of real numbers together: these two sets are defined in section \[sect:disjointness\] (Definitions \[def:part rig\] and \[def:spread\]). One is defined in terms of $F_{\theta_1}$ and the other one in terms of $F_{\theta_2}$.\
**Acknowledgments:** We thank Sebastien Gouëzel for a helpful conversation (he found an important simplification of section \[sect:reduction\]: the proof of Proposition \[prop:seb\]). We thank the anonymous referee for many corrections and helpful suggestions that improved the paper. In particular, the current proof of Proposition 2 is due to the referee. We thank Oberwolfach, where the project began and CIRM where it was completed. J. Chaika was supported in part by NSF grants DMS-1300550 and DMS-1452762, the Sloan foundation and a Warnock chair. J. Chaika thanks Giovanni Forni for bringing this question to his attention. P. Hubert is partially supported by Projet ANR blanc GeoDyM.
Background
==========
We will freely use the language of translation surfaces and ergodic theory. Concerning the background on translation surfaces, see for instance the following surveys [@Forni-Matheus], [@MT], [@viana; @survey], [@Zo]. To learn more about ergodic theory, especially about joinings, see [@Ru] and [@glasner].
Translation surfaces
--------------------
A translation surface $X$ is a compact surface of genus $g$ endowed with a flat metric with trivial rotational holonomy and conical singularities whose angles are multiples of $2\pi$. Alternatively, a translation surface $X$ is a datum $(S,\omega)$, where $S$ is a compact Riemann surface of genus $g$ and $\omega$ is a holomorphic 1-form on $S$ with zeros of orders $k_1, \dots ,k_r$ at points $p_1, \dots ,p_r$. The linear flow $F_\theta$ is well defined for every direction $\theta$. Kerckhoff, Masur, Smillie showed that $F_\theta$ is uniquely ergodic for almost every $\theta$ ([@KMS]). A maximal subset of $X$ filled by parallel closed geodesics is called an (open) cylinder.\
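For instance, in genus $g=2$ the relation $k_1+\cdots+k_r=2g-2$ allows either a single zero of order $2$ or two simple zeros; the corresponding strata ${{\mathcal H}}(2)$ and ${{\mathcal H}}(1,1)$ have complex dimensions $2g+r-1=4$ and $5$, respectively.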
For a translation surface $X$, the genus and the orders of zeroes satisfy the relation $k_1 + \cdots +k_r = 2g-2$. For fixed integers $k_1, \dots ,k_r$ satisfying the last relation, let ${{\mathcal H}}(k_1, \dots, k_r)$ denote the corresponding stratum of the moduli space of translation surfaces, that is the set of translation surfaces whose associated 1-form $\omega$ has $r$ zeroes with orders $k_1, \dots ,k_r$. It is a complex orbifold with complex dimension $2g + r -1$. Consider a translation surface $
---
abstract: 'It is inferred from bulk-sensitive muon Knight shift measurement for a Bi$_{1.76}$Pb$_{0.35}$Sr$_{1.89}$CuO$_{6+\delta}$ single-layer cuprate that metal-insulator (MI) transition (in the low temperature limit, $T\rightarrow0$) occurs at the critical hole concentration $p=p_{\rm MI}=0.09(1)$, where the electronic density of states (DOS) at the Fermi level is reduced to zero by the pseudogap irrespective of the Néel order or spin glass magnetism. Superconductivity also appears for $p>p_{\rm MI}$, suggesting that this feature is controlled by the MI transition. More interestingly, the magnitude of the DOS reduction induced by the pseudogap remains unchanged over a wide doping range ($0.1\le p\le0.2$), indicating that the pseudogap remains as a hallmark of the MI transition for $p>p_{\rm MI}$.'
author:
- 'M. Miyazaki'
- 'R. Kadono'
- 'M. Hiraishi'
- 'A. Koda'
- 'K. M. Kojima'
- 'Y. Fukunaga'
- 'Y. Tanabe'
- 'T. Adachi'
- 'Y. Koike'
title: 'Metal-Insulator Transition and Pseudogap in Bi$_{1.76}$Pb$_{0.35}$Sr$_{1.89}$CuO$_{6+\delta}$ High-$T_c$ Cuprates'
---
The microscopic origin of the pseudogap, or the reduction in the electronic density of states (DOS) observed below a certain onset temperature ($T^*$) in hole-doped high-$T_c$ cuprates, remains elusive despite decades of extensive research. From angle-resolved photoemission spectroscopy (ARPES) analysis, it has been inferred that the pseudogap and superconductivity compete with each other and coexist largely by segregating on the Fermi surface, where the carriers residing primarily around the nodes (comprising the Fermi “arc") facilitate superconductivity while the pseudogap develops in the antinodal region [@ARPES1; @ARPES2; @2011Kondo]. Although there is growing evidence that the pseudogap accompanies certain broken electronic symmetries [@Ghiringhelli:12; @Chang:12; @Comin:14; @Neto:14; @Hashimoto:15], it is not clear whether or not these broken symmetries are the origin of the pseudogap. It is worth remembering that the Néel order in lightly doped cuprates is controlled by the inter-layer coupling between the CuO$_2$ layers, which is a material-dependent parameter that is not necessarily relevant to the underlying energy scale of the intrinsic intra-layer electronic correlation. In contrast, the metal-insulator (MI) transition (or crossover at finite temperatures) in underdoped cuprates is the direct manifestation of the intra-layer correlation central to the Mott physics, and its relevance to the pseudogap is of crucial importance.
Here, we report on the hole concentration ($p$) dependence of the normal state DOS in Bi$_{1.76}$Pb$_{0.35}$Sr$_{1.89}$CuO$_{6+\delta}$ \[(Bi,Pb)2201\] which is derived from muon Knight shift ($K_\mu$) measurements under a high transverse field. The physical quantities characterizing the pseudogap, i.e., $T^*$ and the gap energy ($\Delta_1$, assuming $d$-wave symmetry) are also determined from the temperature ($T$) dependence of the shift \[$K_\mu(T)$\]. We find that the residual shift $K_0$ $[\equiv K_\mu(0)]$ is zero at $p\simeq0.10\equiv p_{\rm MI}$ (the critical concentration), and that $K_0$ develops linearly with $p$. More interestingly, while $T^*$ and $\Delta_1$ exhibit a strong $p$ dependence consistent with earlier reports, the magnitude of the reduction, $K_{\rm pg}\equiv K_\mu(T^*)-K_0$, shows little dependence on $p$, indicating that the DOS depleted by the pseudogap is determined by the Fermi surface at $p=p_{\rm MI}$. Considering the absence of magnetism at $p_{\rm MI}$, these observations suggest that the pseudogap is primarily linked to the momentum-dependent charge localization driven by the intra-layer electronic correlation.
The (Bi,Pb)2201 compound is a variant of [Bi$_2$Sr$_2$CuO$_{6+\delta}$]{}, in which carrier doping can be attained over a wide range of hole concentrations, i.e., from lightly doped ($p\le0.1$) to overdoped ($p\ge 0.2$) by controlling the oxygen content $\delta$. In contrast to [La$_{2-x}$Sr$_x$CuO$_4$]{} (LSCO), the strong intra-layer antiferromagnetic (AF) correlation does not lead to instability of the spin glass or the Néel order in the lightly doped region; this behavior most likely results from the large distance between the CuO$_2$ layers ($\simeq1.22$ nm, almost twice as large as that of LSCO) [@Russo; @Bi2201ZFmSR; @Enoki:2013]. This feature provides a considerable advantage for muon spin rotation ([$\mu$SR]{}), because the majority of high-$T_{\rm c}$ cuprates exhibit the Néel order in the relevant doping range which precludes high precision frequency shift measurements for investigating the DOS [@LSCOIshida]. Moreover, the (Bi,Pb)2201 compound exhibits relatively low $T_c$ ($\le20$ K) and a correspondingly low irreversibility field ($B_{\rm irr}<6$ T); thus, it is feasible to study the normal state DOS below $T_c$ by suppressing the superconducting gap under a modest external field [@Hc2]. The samples examined in this study and their bulk properties are summarized in the Supplemental Material [@Suppl] (see also Fig. \[kmu\_T\] inset for a quick reference to their label and $T_c$ vs $p$).
A conventional [$\mu$SR]{} experiment was conducted on the TRIUMF M15 beamline using the HiTime spectrometer. An external field of 6 T ($B_0,\:\parallel \hat{z}$ axis) was applied parallel to the $c$-axes of the (Bi,Pb)2201 crystals for all Knight shift measurements, where the field was sufficiently high to suppress the superconductivity of all the investigated samples. The complex decay positron asymmetry \[$A(t)=A_x(t)+iA_y(t)$\] was monitored by two pairs of scintillation counters placed along the $\hat{x}$ and $\hat{y}$ directions. The sample temperature was controlled over a range of 2 to 300 K using a helium gas flow cryostat.
Typical examples of fast Fourier-transformed [$\mu$SR]{} time spectra observed at 250 K for the LD and OPT samples are shown in Figs. \[spectra\](a) and (c), where two lines with comparable amplitudes are clearly discernible. This feature is apparent for all examined samples, and their relative amplitude is primarily sample independent; this result strongly suggests that the signal splitting is due to muons occupying two inequivalent sites with different hyperfine parameters in the unit cell (see below). This conclusion is also in line with the result of a previous ZF-[$\mu$SR]{} study on the AF phase of La-Bi2201, which suggests the presence of two different sites for muons probing different internal magnetic fields [@Labi2201_zfmsr].
![(Color online) Examples of fast Fourier-transformed [$\mu$SR]{} spectra observed at $\sim$250 K in (a) LD and (c) OPT, with the temperature dependence of the muon Knight shift $K_i$ ($i=1,2$, corresponding to the signal with amplitude $A_i$) in the respective samples being shown in (b) and (d). The small downturn below $\sim$50 K in (d) is commonly observed for $K_1$ and $K_2$ and is due to the diamagnetism of the impurities (see text). []{data-label="spectra"}](bi2201-fig1.eps){width="45.00000%"}
The [$\mu$SR]{} spectra were analyzed in the time domain using conventional least-squares fitting with appropriate modeling of the two signal components, with $$\label{eq:analysis}
A(t)\simeq A_1e^{-\lambda_1t} e^{i(2\pi f_1t+\phi)}+A_2e^{-\sigma_2^2t^2}e^{i(2\pi f_2t+\phi)},$$ where $A_i$ indicate the initial amplitudes ($i=1,2$), $\lambda_1$ and $\sigma_2$ are the depolarization rates, $f_i$ are the precession frequencies, and $\phi$ is the initial phase of precession. The line shape for depolarization (either exponential or Gaussian) was chosen so as to minimize the chi-square values for the respective components after several trials. Because of the small depolarization rates
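As an illustration of how a model of the form of Eq. (\[eq:analysis\]) can be fit in the time domain, the following is a minimal sketch on synthetic data. Stacking the real and imaginary parts is one common way to least-squares fit a complex signal; the parameter values are illustrative, not the measured ones, and this is not necessarily the authors' exact procedure.

```python
# Hedged sketch: fitting the two-component model above on synthetic data,
# stacking Re/Im parts so a real-valued least-squares routine can be used.
import numpy as np
from scipy.optimize import curve_fit

def model(t, A1, lam1, f1, A2, sig2, f2, phi):
    z = (A1 * np.exp(-lam1 * t) * np.exp(1j * (2 * np.pi * f1 * t + phi))
         + A2 * np.exp(-(sig2 * t)**2) * np.exp(1j * (2 * np.pi * f2 * t + phi)))
    return np.concatenate([z.real, z.imag])

t = np.linspace(0, 2e-6, 20000)                        # seconds
true = (0.1, 2e5, 813.2e6, 0.1, 3e5, 813.5e6, 0.3)     # ~6 T muon frequencies
data = model(t, *true) + 0.002 * np.random.randn(2 * t.size)

popt, _ = curve_fit(model, t, data, p0=true)
print(popt)
```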
---
abstract: 'We study elliptic gradient systems with fractional laplacian operators on the whole space $$(- \Delta)^\mathbf s \mathbf u =\nabla H (\mathbf u) \ \ \text{in}\ \ \mathbf{R}^n,$$ where $\mathbf u:\mathbf{R}^n\to \mathbf{R}^m$, $H\in C^{2,\gamma}(\mathbf{R}^m)$ for $\gamma > \max(0,1-2\min \left \{s_i \right \})$, $\mathbf s=(s_1,\cdots,s_m)$ for $0<s_i<1$ and $\nabla H (\mathbf u)=(H_{u_i}(u_1, u_2,\cdots,u_m))_{i}$. We prove De Giorgi type results for this system for certain values of $\mathbf s$ and in lower dimensions, i.e. $n=2,3$. Just like the local case, the concepts of orientable systems and $H-$monotone solutions, established in [@FG], play the key role in proving symmetry results. In addition, we provide optimal energy estimates, a monotonicity formula, a Hamiltonian identity and various Liouville theorems.'
address:
- 'Department of Mathematical and Statistical Sciences, University of Alberta, Edmonton, Alberta, Canada T6G 2G1'
- 'Université Aix-Marseille and LATP, 9 rue F. Joliot Curie, 13453 Marseille Cedex 13, France.'
author:
- Mostafa Fazly
- Yannick Sire
bibliography:
- 'biblio.bib'
title: Symmetry results for fractional elliptic systems and related problems
---
Introduction and main results
=============================
This paper is devoted to several symmetry results, qualitative properties and Liouville-type theorems for solutions of non local elliptic systems. Non local equations have led to several active research areas in recent years, in both applied and purely theoretical directions. The prototypical operator involved is the so-called fractional laplacian $(-\Delta)^s$ for $s \in (0,1)$. It is a Fourier multiplier of symbol $|\xi|^{2s}$, see [@landkof]. Besides its interest in harmonic analysis (see the book [@landkof] for the first systematic study of the potential theory of this operator), it is also of great importance in probability theory. Indeed, the fractional laplacian is the basic example of an infinitesimal generator of Lévy processes; see the book of Bertoin [@B] for an extensive study of such stochastic processes. Lévy processes are processes whose generators are given by the following formula (up to a normalizing constant) $$\mathcal I u(\mathbf x)=\int_{\mathbf R^n} (u(\mathbf x+\mathbf y)+u(\mathbf x-\mathbf y)-2u(\mathbf x))\mu(d\mathbf y)$$ for sufficiently smooth (say) functions $u$ and where $\mu(d\mathbf y)$ is a Lévy measure, i.e. a positive measure on $\mathbf R^n$ such that $\mu( \left \{ 0 \right \})=0$ and $$\int_{\mathbf R^n} \min (|\mathbf x|^2,1)\mu(d\mathbf x) < \infty.$$ In the case of the fractional laplacian, one has $$(-\Delta)^s u(\mathbf x)=\text{P.V.}\int_{\mathbf R^n} \frac{u(\mathbf x)-u(\mathbf y)}{|\mathbf x-\mathbf y|^{n+2s}}\,d\mathbf y,$$ where $\text{P.V.}$ stands for the principal value in the Cauchy sense.
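One can check directly (a routine verification, recorded here for the reader's convenience) that the fractional laplacian fits this framework with $\mu(d\mathbf y)=\|\mathbf y\|^{-n-2s}\,d\mathbf y$: in polar coordinates, $$\int_{\|\mathbf y\|\leq 1}\frac{\|\mathbf y\|^{2}}{\|\mathbf y\|^{n+2s}}\,d\mathbf y \asymp \int_0^1 r^{1-2s}\,dr<\infty \quad\text{and}\quad \int_{\|\mathbf y\|> 1}\frac{d\mathbf y}{\|\mathbf y\|^{n+2s}} \asymp \int_1^\infty r^{-1-2s}\,dr<\infty,$$ both precisely because $0<s<1$.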
We consider the following system $$\begin{aligned}
\label{main}
(- \Delta)^\mathbf s \mathbf u =\nabla H (\mathbf u) \ \ \text{in}\ \ \mathbf{R}^n,
\end{aligned}$$ where $\mathbf u:\mathbf{R}^n\to \mathbf{R}^m$, $H\in C^{2,\gamma}(\mathbf{R}^m)$ for $\gamma > \max(0,1-2\min \left \{s_i \right \})$, $\mathbf s=(s_1,\cdots,s_m)$ where $0<s_i<1$ and $\nabla H (\mathbf u)=(H_{u_i}(u_1, u_2,\ldots,u_m))_{i}$. The notation $H_{u_i}$ stands for the partial derivative $\frac{\partial H}{\partial u_i}$. Therefore each component satisfies the equation $$(- \Delta)^{s_i} u_i = \partial_{u_i}H(\mathbf u) \ \ \text{in}\ \ \mathbf{R}^n.$$ For the local problem, that is, when $\mathbf s=\mathbf 1$, the above system has been studied for various purposes because of its interesting structure. We refer interested readers to [@FG; @ali] for symmetry results and to [@CF] for the regularity of extremal solutions of eigenvalue problems.
It is now a well known fact, and extensively used for these equations, that the fractional laplacian can be realized as the boundary operator (more precisely the Dirichlet-to-Neumann operator) of a suitable extension in the half-space (see [@cafS]). In view of this result, we will be considering the following extended system of equations $$\begin{aligned}
\label{emain}
\left\{ \begin{array}{lcl}
\hfill {\mathop{\mathrm{div}}\nolimits}(y^{a_i} \nabla v_i)&=& 0 \ \ \text{in}\ \ \mathbf{R}_+^{n+1}=\left \{x \in \mathbf R^n, y>0 \right \},\\
\hfill -\lim_{y\to0}y^{a_i} \partial_{y} v_i&=& d_{s_i} \partial_{v_i}H(\mathbf v) \ \ \text{in}\ \ \partial\mathbf{R}_+^{n+1},
\end{array}\right.
\end{aligned}$$ where $a_i=1-2s_i$ and $d_{s_i} = \frac{\Gamma(1-s_i)}{2^{2s_i-1}\Gamma(s_i)}$. Here $v_i$ is the extension of the function $u_i$.
The main results of the present paper deal with the solutions of . The direct consequences of these results, in the light of [@cafS], leads us to similar results for the original system .
Our results are inspired by a famous conjecture of De Giorgi announced in [@DeG]. This conjecture concerns the flatness of level-sets of bounded monotone solutions of the scalar Allen-Cahn equation. The De Giorgi’s conjecture is known to be true in $n=2$ by Ghoussoub-Gui [@GG1], in $n=3$ by Ambrosio-Cabre [@AC], in $4 \leq n \leq 8$ by Savin [@savin] (with an additional natural hypothesis). A counterexample is provided in $n\ge 9$ by Del Pino-Kowalczyk-Wei [@PKW]. In addition, Fazly-Ghoussoub in [@FG] established De Giorgi type results for elliptic systems of the form $\Delta \mathbf u=\nabla H(\mathbf u)$ where $\mathbf u:\mathbf R^n\to\mathbf R^m$ in dimensions $n=2,3$. Corresponding symmetry results for non local equations are provided by Cabré-Sire in [@CS2] and by Sire-Valdinoci in [@SV] when $n=2$ and by Cabré-Cinti in [@cabre] for $n=3$. Moreover, Dipierro-Pinamonti in [@serena] provided symmetry results for the system (\[main\]) when $n=m=2$.
Before stating our main results, we would like to define the following concepts.
We say that a solution ${\bf u}=(u_i)_{i=1}^m$ of (\[main\]) is $H$-monotone if the following hold,
1. For every $i\in \{1,\cdots, m\}$, $u_i$ is strictly monotone in the $x_n$-variable (i.e., $\partial_n u_i\neq 0$).
2. For $i<j$, we have $$\hbox{$\partial_{u_iu_j}H({\mathbf u}) \partial_n u_i(\mathbf x) \partial_n u_j (\mathbf x)\ge 0$ for all $\mathbf x\in\mathbf{R}^n$.}$$
\[weak\] We shall say that the system (\[main\]) is orientable, if there exist nonzero functions $\theta_k\in C^1(\mathbf{R}^{n+1}_+)$, $k=1,\cdots,m$, which do not change sign, such that for all $i,j$ with $1\leq i<j\leq m$, we have $$\label{oriantableu}
\hbox{$ \partial_{u_iu_j}H({\mathbf u}) \theta_i(\mathbf x)\theta_j(\mathbf x)\ge 0$ \, for all
---
abstract: 'From generation of backscatter-free transmission lines, to optical isolators, to chiral Hamiltonian dynamics, breaking time-reversal symmetry is a key tool for development of next-generation photonic devices and materials. Of particular importance is the development of time-reversal-broken devices in the low-loss regime, where they can be harnessed for quantum materials and information processors. In this work, we experimentally demonstrate the isolation of a single, time-reversal broken running-wave mode of a moderate-finesse optical resonator. Non-planarity of the optical path produces a round-trip geometrical (Pancharatnam) polarization rotation, breaking the inversion symmetry of the photonic modes. The residual time-reversal symmetry between forward-$\sigma^+$/ backwards-$\sigma^-$ modes is broken through an atomic Faraday rotation induced by an optically pumped ensemble of $^{87}$Rb atoms residing in the resonator. We observe a splitting of 6.3 linewidths between time-reversal partners and a corresponding optical isolation of $\sim$ 20.1(4) dB, with 83(1)% relative forward cavity transmission. Finally, we explore the impact of twisted resonators on T-breaking of intra-cavity Rydberg polaritons, a crucial ingredient of photonic materials and specifically topological optical matter. As a highly coherent approach to time-reversal breaking, this work will find immediate application in creation of photonic materials and also in switchable narrow-band optical isolators.'
author:
- Jia Ningyuan
- Nathan Schine
- Alexandros Georgakopoulos
- Albert Ryou
- Ariel Sommer
- Jonathan Simon
bibliography:
- 'polaritons.bib'
title: 'Photons and polaritons in a time-reversal-broken non-planar resonator'
---
[^1]
[^2]
Within the condensed matter community there is a growing interest in creating synthetic material analogs made of light to explore idealized models which are difficult to realize within the solid state. In such “photonic materials,” photons in either the optical or microwave domain may be made to behave as massive particles that are trapped and allowed to interact with one another. Using arrays of micro-fabricated waveguides [@rech2013phot] and resonators [@hafe2013imag; @ning2015time], or exotic Fabry-Pérot cavities [@schine2016synthetic; @klae2010bose; @sommer2016engineering], it has even become possible to engineer the single-particle photonic dispersion to create gauge fields for these massive photons. To mediate interactions between photons, they must be coupled to matter: to Josephson junctions in the microwave domain [@wallraff2004strong; @houck2012chip], and either to Rydberg-dressed atoms [@peyr2012quan; @dudi2012stro; @pari2012obse; @firs2013attr; @Jia2016CavityRydPol; @jia2017strongly] or other nonlinear emitters [@sun2016quantum] in the optical domain. A crucial missing ingredient is the ability to explicitly break time reversal symmetry without spoiling the exquisite longevity of the photonic particles. In the ring resonators or waveguides described above, such time-reversal symmetry breaking would energetically preclude backscattering, which would otherwise correspond to reversal of synthetic gauge fields, and more broadly to physics beyond the material dynamics under consideration. In interacting systems, enforcing such a T-broken single-particle sector is even more crucial, as the interactions themselves will otherwise violate the symmetry which protects the topological character of the system [@fialko2014fragility; @lodahl2016chiral].
In the optical domain, time-reversal breaking has long been employed in isolators, where the Faraday effect provides a non-reciprocal polarization rotation. However, this approach is typically overlooked for breaking time reversal symmetry in photonic quantum materials due to significant single-pass loss. Nonetheless, in a particular frequency band of interest, the fundamental limit on Faraday rotation compared to optical loss is favorable: for a typical alkali-metal atom like rubidium (see appendix \[App:LimitTBreak\]), the ratio of intrinsic atomic linewidth to D-line fine structure is $\sim10^{-5}$, providing $\sim 10^5$ cycles of time-reversal-broken dynamics (for example, cyclotron orbits) within a photon lifetime (see appendix \[App:IsoTheory\]). Towards this end, early work realized small magneto-optic rotations in free-space atomic vapors [@franke2001magneto].
Multiple passes through the atomic ensemble may be employed to enhance the non-reciprocal polarization rotation [@RomalisFaraday2011]; this suggests that, in an optical cavity, the resonator geometry can be employed to control photon mass and trapping [@sommer2016engineering], with a Faraday rotation to break time-reversal symmetry. The challenge is that the optical Faraday effect cancels in a two-mirror cavity, where the forward and backward paths comprise the same mode, while in a three-mirror (running-wave) cavity the birefringence and polarization-dependent transmission of the mirrors enforce spectrally split linearly-polarized eigen-modes with vastly different finesses [@nagorny2003collective; @klinner2006normal]. A cavity-enhanced non-reciprocity was recently demonstrated in a whispering gallery mode optical resonator [@sayrin2015nanophotonic], where the cavity-birefringence was circumvented by coupling the atoms to the longitudinal component of the resonator near-field. In the present work, we extend these ideas, employing a four-mirror running-wave resonator that we twist slightly out of the plane, as in a non-planar ring oscillator [@NPRO1986], to break inversion symmetry. An atomic ensemble provides a resonator-enhanced atomic Faraday effect that breaks time-reversal symmetry. Together, these broken symmetries result in a frequency shift between forward and backward propagating modes that we employ to demonstrate optical isolation. This is particularly exciting in light of the recent observation of photonic Landau levels in twisted optical resonators [@schine2016synthetic; @sommer2016engineering]; the technique demonstrated in this work would prevent interaction-induced backscattering between forward and backward propagating lowest Landau levels, paving the way to studies of Laughlin physics [@umuc2014prob; @somm2015quan; @grusdt2013fractional] when a Rydberg admixture [@pari2012obse; @Jia2016CavityRydPol; @jia2017strongly] induces interactions between the resonator photons.

To isolate a single running-wave mode in an optical resonator, we begin by noting that even a single transverse mode of a running-wave optical resonator exhibits a four-fold degeneracy arising from the polarization-helicity degree of freedom, and the direction of propagation along the resonator axis (see Fig. \[fig:setup\](b)). It will thus be necessary to break *two* symmetries to isolate precisely one of these modes: inversion symmetry and time-reversal symmetry.
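At the back-of-envelope level, the splitting and isolation quoted in the abstract are mutually consistent. Assuming a Lorentzian cavity line and reading “linewidth” as the full width $\kappa$ (our assumptions, not statements from the text), a forward/backward splitting $\Delta=6.3\,\kappa$ suppresses backward transmission on the forward resonance by $1/[1+(2\Delta/\kappa)^2]$:

```python
import numpy as np

# Rough isolation estimate for two Lorentzian modes split by Delta = 6.3 kappa.
splitting_in_linewidths = 6.3
suppression = 1.0 / (1.0 + (2.0 * splitting_in_linewidths) ** 2)
print(f"{-10.0 * np.log10(suppression):.1f} dB")   # ~22 dB
```

This is of the same order as the observed 20.1(4) dB; lineshape details and residual mode coupling plausibly account for the difference.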
![(Color online) **T-Breaking in Twisted Resonators Coupled to Atoms**. In a birefringence-free planar resonator (**a**, left) each transverse mode exhibits a four-fold degeneracy that may be parametrized as forward (red right arrow) and backward (blue left arrow) propagation for each of positive and negative helicity (**b**, left): $\{\rightarrow,\leftarrow\}\otimes\{H^+,H^-\}$. Twisting the resonator breaks this four-fold degeneracy into two sub-manifolds of definite helicity (**a**,**b** middle). We couple the optical modes to spin-polarized atoms (a, right) to break the forward-backward symmetry (**b**, right): polarized atoms are sensitive not to the light’s helicity (defined relative to the direction of propagation) but to its absolute polarization (defined relative to a fixed axis); the difference in oscillator strengths for $\sigma^+$ and $\sigma^-$, for $^{87}$Rb atoms on the $|F_g=2,m_F=2\rangle\rightarrow |F_e=3'\rangle$ transition of the D2 line, is a factor of 15 [@steck2001rubidium]. The Zeeman splitting of the magnetic sublevels does not directly contribute to T-breaking, except insofar as it is employed to optically pump the atoms. **(c)** A schematic of the particular symmetries broken in the various aspects of the experiment, using the analogy of moving a rod into/out-of a plate. **Left**: A smooth rod can move into- or out-of- the page. **Center**: a [*threaded*]{} rod must twist clock-wise to move into the page, and counter-clock-wise to move out of the page. **Right**: a [*ratcheted threaded*]{} rod may only rotate clockwise, and thus may only move into the page. **(d)** The experimental apparatus consists of a twisted resonator coupled to an ensemble of laser-cooled $^{87}$Rb atoms (green spheres), and probed from both directions using laser fields injected through optical pickoffs (gray circles). The transmitted fields in both directions are detected through single photon counting modules (SPCMs) fiber-coupled to the light transmitted through the pickoffs. []{data-label="fig:setup"}](Fig1D6.pdf){width="8cm"}
To break inversion symmetry we twist the resonator slightly (6$^\circ$; see appendix \[App:NPC\]), resulting in a Pancharatnam
---
abstract: 'We show that every finite subgroup of $\textrm{GL}_{2}(\mathbb{R})$ can be realized as the Veech group of some translation surface.'
author:
- Asaf Hadari
bibliography:
- 'finiteveech.bib'
title: Translation Surfaces With Finite Veech Groups
---
Introduction
============
Veech groups play a pivotal role in the study of translation surfaces, quadratic differentials, and geodesics in the moduli space of Riemann surfaces. It is well known that Veech groups are always discrete subgroups of $\textrm{GL}_{2}(\mathbb{R})$, though it is not known which discrete subgroups of $\textrm{GL}_{2}(\mathbb{R})$ are Veech groups. A generic translation surface has a Veech group which is trivial or cyclic of order $2$. On the other end of the spectrum, some translation surfaces have Veech groups which are lattices. These are known as *Veech surfaces*, and have been the object of much study.
The goal of this paper is to explore the smallest non-trivial possibilities: translation surfaces with finite Veech groups. The only finite subgroups of $GL_{2}(\mathbb{R})$ are cyclic and dihedral groups. We prove the following theorem.
Every finite subgroup of $GL_{2}(\mathbb{R})$ can be realized as the Veech group of some translation surface.
Note that we allow both orientation-preserving and orientation-reversing elements in the Veech groups in this paper, that is, the Veech groups we consider are subgroups of $\textrm{GL}_{2}(\mathbb{R})$, and not just $\textrm{SL}_{2}(\mathbb{R})$.
Our method is constructive: we provide a translation surface for each such group. We make no effort at efficiency in terms of genus: for some finite groups there are examples of translation surfaces of lower genus that have the required Veech group.
#### Acknowledgements
The author wishes to thank Benson Farb, Howard Masur, Alex Eskin, and Matthew Bainbridge for useful discussions on the ideas in the paper. He also wishes to thank his wife Nurit Kirshenbaum for creating the figures.
Preliminaries
=============
#### Translation Surfaces.
A translation surface $T$ is a $2$-dimensional manifold containing a discrete subset $\Sigma \subset T$ such that $T \backslash \Sigma$ is equipped with a maximal atlas with the property that the transition functions are translations. The set $\Sigma$ is called the set of *cone points* of $T$. Note that the atlas above imbues $T$ with a flat metric away from the set of cone points.
One way to construct translation surfaces is the following: start with a polygon in $\mathbb{R}^{2}$ with the property that each edge of the polygon is parallel and congruent to exactly one other edge. By gluing these edges in pairs, one obtains a translation surface. For example, by gluing parallel congruent edges of a rectangle in the plane one obtains a flat torus with no cone points. By gluing parallel congruent edges of a $4g$-gon ($g \geq 2$) one obtains a genus $g$ surface with one cone point.
Note that there are several equivalent ways in the literature of defining translation surfaces. We choose the point of view which is simplest for our needs.
#### The Developing Map and Holonomy.
Let $T$ be a translation surface with cone points $\Sigma$. Let $\tilde{T}$ be the universal cover of $T \backslash \Sigma$. The manifold $\tilde{T}$ has a flat metric, given by pulling back the metric from $T \backslash \Sigma$. Given any choice of basepoint $p \in \tilde{T}$, there is a unique locally isometric embedding $\tilde{T} \to \mathbb{R}^{2}$ sending $p$ to the origin in $\mathbb{R}^{2}$. This map is called the *developing map*. Given a path $\gamma \subset T \backslash \Sigma$ and a lift $\tilde{\gamma}$ of $\gamma$ to $\tilde{T}$, the difference of the images of the endpoints of $\tilde{\gamma}$ is independent of the lift and of the choice of basepoint $p$. We denote this difference $\textrm{hol}(\gamma)$. If $T$ itself is simply connected and a basepoint $p$ is chosen, then the holonomy map is a well-defined map from $T$ to $\mathbb{R}^{2}$. We denote this map $\textrm{hol}_{p}$.
#### Saddle Connections.
A *saddle connection* is a geodesic segment whose endpoints are cone points, and whose interior does not contain any cone points. Notice that since the set of cone points is discrete, there are only finitely many saddle connections on a given translation surface whose lengths are less than or equal to a given number.
One important fact that we use about saddle connections is the following: let $\textrm{Hol}(T)$ be the $\mathbb{Z}$-module generated by the holonomy of all saddle connections. Then $\textrm{Hol}(T)$ is finitely generated as a $\mathbb{Z}$-module ([@KeSm]).
#### Affine Groups and Veech Groups.
Given a translation surface $T$ with set of cone points $\Sigma$, the flat structure gives a trivialization of the tangent bundle of $T \backslash \Sigma$. Thus, to each diffeomorphism $f: T \backslash \Sigma \to T \backslash \Sigma$ one can attach a map $T \backslash \Sigma \to \textrm{GL}_{2}(\mathbb{R})$ which assigns to a point $p$ the derivative of $f$ at $p$. The group consisting of all maps $f: T \to T$ that fix $\Sigma$, and that have constant derivatives away from $\Sigma$ is called the *affine group of T*, and we denote it $\textrm{Aff}(T)$. The group $\textrm{Aff}(T)$ has a natural projection $\textrm{Aff}(T) \to \textrm{GL}_{2}(\mathbb{R})$ given by taking derivatives. We denote its image by $\textrm{GL}_{2}(T)$. This group is called the *Veech group of T*.
Note that the group $\textrm{GL}_{2}(T)$ is actually a subgroup of $\textrm{SL}_{2}^{\pm}(\mathbb{R})$, where $\textrm{SL}_{2}^{\pm}(\mathbb{R})$ is the group of all elements whose determinant is $\pm 1$. To see this, consider the module $\textrm{Hol}(T)$ defined above. Since $\textrm{Aff}(T)(\Sigma) = \Sigma$, and elements of $\textrm{Aff}(T)$ send segments to segments, one has that $\textrm{GL}_{2}(T) \textrm{Hol}(T) = \textrm{Hol}(T)$. It’s a standard fact that the stabilizer of any finitely generated $\mathbb{Z}$-submodule of $\mathbb{R}^{2}$ must be a subgroup of $\textrm{SL}_{2}^{\pm}(\mathbb{R})$.
Recall that an element of $SL_{2}^{\pm}(\mathbb{R})$ is *elliptic* if its trace has absolute value $< 2$, *parabolic* if its trace has absolute value $2$, and *hyperbolic* if its trace has absolute value $> 2$. We can thus classify all elements of the Veech group of a translation surface as elliptic, parabolic, or hyperbolic. In this paper we are predominantly interested in elliptic elements of Veech groups. These always have finite order, and preserve the flat metric on the translation surface.
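The trace classification is easily automated; the following sketch (our illustration, using a numerical tolerance for the borderline parabolic case) classifies elements of $\textrm{SL}_{2}^{\pm}(\mathbb{R})$:

```python
import numpy as np

def classify(A, tol=1e-9):
    """Classify A in SL2^{+-}(R) by |trace|: elliptic (<2), parabolic (=2),
    hyperbolic (>2)."""
    assert abs(abs(np.linalg.det(A)) - 1.0) < tol, "determinant must be +-1"
    t = abs(np.trace(A))
    if t < 2.0 - tol:
        return "elliptic"
    if t > 2.0 + tol:
        return "hyperbolic"
    return "parabolic"

theta = np.pi / 5                                     # rotation of order 10
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(classify(R))                                    # elliptic
print(classify(np.array([[1.0, 1.0], [0.0, 1.0]])))   # parabolic
print(classify(np.array([[2.0, 0.0], [0.0, 0.5]])))   # hyperbolic
```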
Proof of Theorem $1$.
=====================
Any finite subgroup $G < GL_{2}(\mathbb{R})$ preserves a Euclidean metric on $\mathbb{R}^{2}$, and can thus be conjugated into $O_{2}(\mathbb{R})$. The finite subgroups of $O_{2}(\mathbb{R})$ are exactly the finite cyclic groups and the finite dihedral groups. We denote the cyclic group of order $N$ by $C_{N}$. We denote the dihedral group of order $2N$ by $D_{N}$. In order to prove Theorem $1$, we prove the following two propositions.
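Concretely, $C_{N}$ is generated by the rotation through $2\pi/N$, and $D_{N}$ by that rotation together with a reflection; a quick numerical check of the defining relations (our illustration):

```python
import numpy as np

# r generates C_N; adding the reflection f across the x-axis gives D_N,
# with the dihedral relation f r f = r^(-1).
def rot(N):
    c, s = np.cos(2 * np.pi / N), np.sin(2 * np.pi / N)
    return np.array([[c, -s], [s, c]])

N = 5
r, f = rot(N), np.array([[1.0, 0.0], [0.0, -1.0]])
print(np.allclose(np.linalg.matrix_power(r, N), np.eye(2)))  # r^N = identity
print(np.allclose(f @ r @ f, np.linalg.inv(r)))              # f r f = r^(-1)
```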
For every $N \geq 3$, there exists a translation surface $T = T(N)$ such that $\textrm{GL}_{2}(T) \cong D_{N}$.
For every $N \geq 3$, there exists a translation surface $T = T(N)$ such that $\textrm{GL}_{2}(T) \cong C_{N}$.
The proofs of the two propositions are very similar. The following lemmas are used in the proof of both propositions.
Suppose $T$ is a translation surface such that $\textrm{Hol}(T)$ contains two $\mathbb{R}$-linearly independent and algebraically independent vectors. Then $\textrm{GL}_{2}(T)$ contains no hyperbolic elements.
#### Proof.
Let $e_{1}$ and $e_{2}$ be two vectors as in the conditions of the lemma. Suppose that there were a hyperbolic element $T \in \textrm{GL}_{2}(T)$ with positive trace. Note that the square of any hyperbolic element is always a hyperbolic element with positive trace. Let $K = \mathbb{Q}[\textrm{Trace}(T)]$. The field $K$ is a number field, and $\textrm{Hol}(T) \otimes \mathbb{Q}$ is a $2$-dimensional
---
abstract: 'The $\overline{B}_{q}^{\ast}$ ${\to}$ $DP$, $DV$ weak decays are studied with the perturbative QCD approach, where $q$ $=$ $u$, $d$ and $s$; $P$ and $V$ denote the ground $SU(3)$ pseudoscalar and vector meson nonet. It is found that the branching ratios for the color-allowed $\overline{B}_{q}^{\ast}$ ${\to}$ $D_{q}{\rho}^{-}$ decays can reach $10^{-9}$ or more, and should be measurable at the running LHC and forthcoming SuperKEKB experiments in the near future.'
author:
- Junfeng Sun
- Jie Gao
- Yueling Yang
- Qin Chang
- Na Wang
- Gongru Lu
- Jinshu Huang
title: 'Study of the $\overline{B}_{q}^{\ast}$ ${\to}$ $DM$ decays with perturbative QCD approach'
---
Introduction {#sec01}
============
In accordance with the conventional quark model assignments, the ground spin-singlet pseudoscalar $B_{q}$ mesons and spin-triplet vector $B^{\ast}_{q}$ mesons have the same flavor components, and consist of one valence heavy antiquark $\bar{b}$ and one light quark $q$, i.e., $\bar{b}q$, with $q$ $=$ $u$, $d$, $s$ [@pdg]. The two $e^{+}e^{-}$ $B$-factory experiments, BaBar and Belle, have collected a combined data sample of over $1\,ab^{-1}$ at the ${\Upsilon}(4S)$ resonance. The $B_{u,d}$ meson weak decay modes with branching ratios of over $10^{-6}$ have been well measured [@epjc74]. The $B_{s}$ meson, which can be produced in hadron collisions or at or above the resonance ${\Upsilon}(5S)$ in $e^{+}e^{-}$ collisions, is being carefully scrutinized. However, the study of the $B_{q}^{\ast}$ mesons has not attracted much attention yet, owing to the relatively limited statistics. Because the mass of the $B_{q}^{\ast}$ mesons is a bit larger than that of the $B_{q}$ mesons, the $B_{q}^{\ast}$ mesons must be produced at higher energies rather than at the resonance ${\Upsilon}(4S)$ in $e^{+}e^{-}$ collisions. With the high luminosities and large production cross sections at the running LHC, the forthcoming SuperKEKB, and the future [*Super proton proton Collider*]{} (SppC, which is still in the preliminary discussion and research stage up to now), more and more $B_{q}^{\ast}$ mesons will be accumulated in the future, making the $B_{q}^{\ast}$ mesons another research laboratory for testing the Cabibbo-Kobayashi-Maskawa (CKM) picture of $CP$-violating phenomena and for examining our understanding of the underlying dynamical mechanism of the weak decays of heavy flavor hadrons.
Having the same valence quark components and approximately an equal mass, both the $B^{\ast}_{q}$ and $B_{q}$ mesons can decay via weak interactions into the same final states. On the one hand, the $B^{\ast}_{q}$ and $B_{q}$ meson weak decays would provide each other with a spurious background; on the other hand, the interplay between the $B_{q}^{\ast}$ and $B_{q}$ weak decays could offer some potential useful information to constrain parameters within the standard model, and might shed some fresh light on various intriguing puzzles in the $B_{q}$ meson decays. The $B_{q}$ meson decays are well described by the bottom quark decay with the light spectator quark $q$ in the spectator model. At the quark level, most of the hadronic $B_{q}$ meson decays involve the $b$ ${\to}$ $c$ transition due to the hierarchy relation among the CKM matrix elements. As is well known, there is a more than $3\,{\sigma}$ discrepancy between the value of ${\vert}V_{cb}{\vert}$ obtained from inclusive determinations, ${\vert}V_{cb}{\vert}$ $=$ $(42.2{\pm}0.8){\times}10^{-3}$, and from exclusive ones, ${\vert}V_{cb}{\vert}$ $=$ $(39.2{\pm}0.7){\times}10^{-3}$ [@pdg]. Besides the semileptonic $\overline{B}_{q}^{(\ast)}$ ${\to}$ $D^{(\ast)}{\ell}\bar{\nu}$ decays, the nonleptonic $\overline{B}_{q}^{(\ast)}$ ${\to}$ $DM$ decays, with $M$ representing the ground $SU(3)$ pseudoscalar $P$ and the vector $V$ meson nonet, are also induced by the $b$ ${\to}$ $c$ transition, and hence could be used to extract/constrain the CKM matrix element ${\vert}V_{cb}{\vert}$.
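For orientation, a naive combination of the quoted numbers, treating the two uncertainties as uncorrelated (the significance quoted in [@pdg] rests on a more careful error treatment), already exhibits a tension at roughly the $3\,{\sigma}$ level:

```python
# Naive quadrature estimate of the inclusive vs. exclusive |Vcb| tension.
incl, sig_incl = 42.2e-3, 0.8e-3
excl, sig_excl = 39.2e-3, 0.7e-3
n_sigma = (incl - excl) / (sig_incl ** 2 + sig_excl ** 2) ** 0.5
print(f"{n_sigma:.1f} sigma")   # ~2.8 sigma with uncorrelated errors
```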
From the dynamical point of view, the phenomenological models used for the $\overline{B}_{q}$ ${\to}$ $DM$ decays might, in principle, be extended and applied to the $\overline{B}_{q}^{\ast}$ ${\to}$ $DM$ decays. The practical applicability and reliability of these models could be reevaluated with the $\overline{B}_{q}^{\ast}$ ${\to}$ $DM$ decays. Recently, some attractive QCD-inspired methods, such as the perturbative QCD (pQCD) approach [@prd52.3958; @prd55.5577; @prd56.1615; @plb504.6; @prd63.054008; @prd63.074006; @prd63.074009; @epjc23.275], the QCD factorization (QCDF) approach [@prl83.1914; @npb591.313; @npb606.245; @plb488.46; @plb509.263; @prd64.014036; @npb774.64; @prd77.074013], soft and collinear effective theory [@prd63.014006; @prd63.114020; @plb516.134; @prd65.054022; @prd66.014017; @npb643.431; @plb553.267; @npb685.249] and so on, have been developed vigorously and employed widely to explain measurements on the $B_{q}$ meson decays. The $\overline{B}_{q}$ ${\to}$ $DM$ decays have been studied with the QCDF [@npb591.313; @plb476.339] and pQCD [@prd69.094018; @prd78.014018] approaches, but there are few research works on the $B_{q}^{\ast}$ meson weak decays. Recently, the $\overline{B}_{q}^{\ast}$ ${\to}$ $D_{q}V$ decays have been investigated with the QCDF approach [@epjc76.523], and it is shown that the $\overline{B}_{q}^{{\ast}0}$ ${\to}$ $D_{q}^{+}{\rho}^{-}$ decays with branching ratios of ${\cal O}(10^{-8})$ might be accessible to the existing and future heavy flavor experiments. In this paper, we will give a comprehensive investigation into the two-body nonleptonic $\overline{B}_{q}^{\ast}$ ${\to}$ $DM$ decays with the pQCD approach in order to provide the future experimental research with an available reference.
As is well known, the $B^{\ast}_{q}$ meson decays are dominated by the electromagnetic interactions rather than the weak interactions, which differs significantly from the $B_{q}$ meson decays. One can easily expect that the branching ratios for the $\overline{B}_{q}^{\ast}$ ${\to}$ $DM$ weak decays should be very small due to the short electromagnetic lifetimes of the $B_{q}^{\ast}$ mesons [@epja52.90], although these processes are favored by the CKM matrix element ${\vert}V_{cb}{\vert}$. Of course, an abnormally large branching ratio might be a possible hint of new physics beyond the standard model. There is still no experimental report on the $\overline{B}_{q}^{\ast}$ ${\to}$ $DM$ weak decays so far. Furthermore, the $\overline{B}_{q}^{\ast}$ ${\to}$ $DM$ weak decays offer the unique opportunity of observing the weak decay of a vector meson, where polarization effects could be explored.
This paper is organized as follows. In section \[sec02\], we present the theoretical framework, the conventions and notations, together with amplitudes for the $\overline{B}_{q}^{\ast}$ ${\to}$ $DM$ decays. Section \[sec03\] is devoted to the numerical results and discussion. The final section is a summary.
theoretical framework {#sec02}
=====================
The effective Hamiltonian {#sec0201}
-------------------------
As is well known, the weak decays of the $B_{q}^{(\ast)}$ mesons inevitably involve multiple energy scales, including the mass $m_{W}$ of the virtual gauge boson $W$, the mass $m_{b}$ of the decaying bottom quark, and the infrared confinement scale ${\Lambda}_{\rm QCD}$ of the strong interactions, with $m_{W}$ ${\gg}$ $m_{b}$ ${\gg}$ ${\Lambda}_{\rm QCD}$. So, one usually has to resort to the
---
author:
- 'David Harvey, Brendan Hassett, and Yuri Tschinkel'
bibliography:
- 'hodgehilb.bib'
title: Characterizing projective spaces on deformations of Hilbert schemes of K3 surfaces
---
Introduction {#sect:intro}
============
Let $X$ be an irreducible holomorphic symplectic manifold, i.e., a compact Kähler simply-connected manifold admitting a unique nondegenerate holomorphic two-form. Let $\left(,\right)$ denote the Beauville–Bogomolov form on the cohomology group ${\mathrm{H}}^2(X,{{\mathbb Z}})$, normalized so that it is integral and primitive. When $X$ is a K3 surface this coincides with the intersection form. In higher dimensions, the form induces an inclusion $$\label{eqn:incl}
{\mathrm{H}}^2(X,{{\mathbb Z}}) \subset {\mathrm{H}}_2(X,{{\mathbb Z}}),$$ which allows us to extend $\left(,\right)$ to a ${{\mathbb Q}}$-valued quadratic form.
Lagrangian projective spaces play a fundamental rôle in the birational geometry of these classes of manifolds. If $X$ contains a holomorphically embedded projective space ${{\mathbb P}}^{\dim(X)/2}$ we can consider the [*Mukai flop*]{} of $X$, obtained by blowing up the projective space and blowing down the exceptional divisor $$E\simeq {{\mathbb P}}(\Omega^1_{{{\mathbb P}}^{\dim(X)/2}})$$ along the opposite ruling. Our goal is to characterize possible homology classes of such submanifolds, modulo the monodromy representation on the cohomology of $X$.
Assuming $X$ contains a Lagrangian projective space ${{\mathbb P}}^{\dim(X)/2}$, let $\ell\in {\mathrm{H}}_2(X,{{\mathbb Z}})$ denote the class of a line in ${{\mathbb P}}^{\dim(X)/2}$, and $\lambda=N\ell\in {\mathrm{H}}^2(X,{{\mathbb Z}})$ a positive integer multiple. We can take $N$ to be the index of ${\mathrm{H}}^2(X,{{\mathbb Z}}) \subset {\mathrm{H}}_2(X,{{\mathbb Z}})$. Hodge theory [@Ran; @Voisin] shows that the deformations of $X$ containing a deformation of the Lagrangian space coincide with the deformations of $X$ for which $\lambda \in {\mathrm{H}}^2(X,{{\mathbb Z}})$ remains of type $(1,1)$. Infinitesimal Torelli implies this is a divisor in the deformation space, i.e., $$\lambda^{\perp} \subset {\mathrm{H}}^1(X,\Omega^1_X) \simeq {\mathrm{H}}^1(X,{{\mathcal T}}_X).$$
We seek to establish intersection theoretic properties of $\ell$ for various deformation-equivalence classes of holomorphic symplectic manifolds. Previous results in this direction include
1. If $X$ is a K3 surface then $\left(\ell,\ell\right)=-2$.
2. If $X$ is deformation equivalent to the Hilbert scheme of length-two subschemes of a K3 surface then $\left(\ell,\ell\right)=-5/2$. [@HTGAFA08]
3. If $X$ is deformation equivalent to a generalized Kummer fourfold then $\left(\ell,\ell\right)=-3/2$. [@HT10]
Here we prove
\[theo:main\] Let $X$ be a six-dimensional Kähler manifold, deformation equivalent to the Hilbert scheme of length-three subschemes of a K3 surface. Let ${{\mathbb P}}^3 \subset X$ be a smooth subvariety and $\ell \subset {{\mathbb P}}^3$ a line. Then $\left(\ell,\ell\right)=-3$ and $\rho=2\ell \in {\mathrm{H}}^2(X,{{\mathbb Z}})$. Furthermore, we have $$\left[ {{\mathbb P}}^3 \right]=\frac{1}{48}\left( \rho^3 + \rho^2c_2(X)\right).$$
This uniquely characterizes the class of the Lagrangian plane, modulo the monodromy action, which acts transitively on the $\rho \in {\mathrm{H}}^2(X,{{\mathbb Z}})$ with $\left(\rho,\rho\right)=-12$ and $\left(\rho,{\mathrm{H}}^2(X,{{\mathbb Z}})\right)=2{{\mathbb Z}}$ [@GHS §3].
In general, we conjectured in [@HT09] that if $X$ is of dimension $2n$ and deformation equivalent to a Hilbert scheme of a K3 surface, then $\left(\ell,\ell\right)= -(n+3)/2$. Our main motivation for making these conjectures is to achieve a classification of extremal rational curves on irreducible holomorphic symplectic varieties (i.e., generators of extremal rays of birational contractions) in terms of intersection properties under the Beauville-Bogomolov form.
The structure of this paper is as follows: Section \[sect:cohomology\] reviews the cohomology groups of Hilbert schemes of K3 surfaces; Section \[sect:ring\] focuses on the ring structure. We employ representation theory to get results on the Hodge classes in Section \[sect:representation\]. The Hilbert scheme of length-three subschemes is studied in detail in Section \[sect:lengththree\]. We extract the distinguished absolute Hodge class in the middle cohomology in Section \[sect:indecomp\]; here ‘absolute Hodge classes’ are those that remain Hodge under arbitrary deformations of complex structure. The computation of the class of the Lagrangian three planes is worked out in Section \[sect:LTP\], modulo a number theoretic result. This is proved in Section \[sect:DA\].
[**Acknowledgments:**]{} We are grateful to Noam Elkies, Lothar Göttsche, Manfred Lehn, Eyal Markman, and Christoph Sorger for useful conversations. The second author was supported by National Science Foundation Grants 0554491 and 0901645; the third author was supported by National Science Foundation Grants 0554280 and 0602333. We appreciate the hospitality of the American Institute of Mathematics, where some of this work was done.
Cohomology of Hilbert schemes {#sect:cohomology}
=============================
Let $X$ be deformation equivalent to the punctual Hilbert scheme $S^{[n]}$, where $S$ is a K3 surface. For $n>1$ the Beauville-Bogomolov form can be written [@beauville §8] $${\mathrm{H}}^2(X,{{\mathbb Z}}) \simeq {\mathrm{H}}^2(S,{{\mathbb Z}})_{\left(,\right)} \oplus_{\perp} {{\mathbb Z}}\delta, \quad
\left(\delta,\delta\right)=-2(n-1)$$ where $2\delta$ is the class of the ‘diagonal’ divisor $\Delta^{[n]} \subset S^{[n]}$ parameterizing nonreduced subschemes. For each class $f\in {\mathrm{H}}^2(S,{{\mathbb Z}})$, let $f \in {\mathrm{H}}^2(X,{{\mathbb Z}})$ denote the class parameterizing subschemes with some support along $f$. This is compatible with the lattice embedding above. Duality gives a ${{\mathbb Q}}$-valued form on homology $${\mathrm{H}}_2(X,{{\mathbb Z}}) \simeq {\mathrm{H}}_2(S,{{\mathbb Z}})_{\left(,\right)} \oplus_{\perp} {{\mathbb Z}}\delta^{\vee}, \quad
\left(\delta^{\vee},\delta^{\vee}\right)=-\frac{1}{2(n-1)},$$ where $\delta^{\vee}$ is characterized as the homology class orthogonal to ${\mathrm{H}}^2(S,{{\mathbb Z}})$ and satisfying $\delta^{\vee}\cdot \delta =1$.
[@Gott90] Let $S$ be a K3 surface and $S^{[n]}$ its Hilbert scheme. Consider the Poincaré polynomial $$p(S^{[n]},z)=\sum_{j=0}^{4n} \beta_j(S^{[n]})z^j.$$ Then $$\sum_{n=0}^{\infty} p(S^{[n]},z)t^n= \prod_{m=1}^{\infty}
(1-z^{2m-2}t^m)^{-1}(1-z^{2m}t^m)^{-22}(1-z^{2m+2}t^m)^{-1}.$$
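The Betti numbers quoted below can be reproduced directly from this generating function; the following sketch (our illustration using sympy; truncating the product at $m\le 3$ suffices through order $t^3$) expands it and prints the coefficients of $p(S^{[n]},z)$:

```python
from sympy import symbols, expand

z, t = symbols('z t')
N = 3                                   # expand through S^[3]
f = 1
for m in range(1, N + 1):               # factors with m > N do not affect t^N
    f *= ((1 - z**(2*m - 2) * t**m)**(-1) * (1 - z**(2*m) * t**m)**(-22)
          * (1 - z**(2*m + 2) * t**m)**(-1))
f = f.series(t, 0, N + 1).removeO()
for n in range(1, N + 1):
    p = expand(f.coeff(t, n))           # Poincare polynomial of S^[n]
    print(n, [p.coeff(z, j) for j in range(2 * n + 1)])
# 1 [1, 0, 22, 0, 1]
# 2 [1, 0, 23, 0, 276, 0, 23, 0, 1]
# 3 [1, 0, 23, 0, 299, 0, 2554, 0, 299, 0, 23, 0, 1]
```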
To save space, we write $$q(S^{[n]},z)=\sum_{j=0}^{n} \beta_{2j} z^j,$$ which determines the Poincaré polynomial by Poincaré duality. We have $$\begin{array}{rcl}
q(S,z)&=& 1+ 22z \\
q(S^{[2]},z) &=& 1 + 23 z + 276 z^2 \\
q(S^{[3]},z) &=& 1 + 23 z + 299 z^2 + 2554 z^3.
\end{array}$$
A theorem of Verbitsky [@Verb Theorem 1.5] asserts that the homomorphism arising from the cup product $$\mu_{k,n}:\mathrm{Sym}^k {\mathrm{H}}^2(S^{[n]},{{\mathbb Q}}) {\rightarrow}{\mathrm{H}}^{2k}(S^{[n]},{{\mathbb Q}})$$ is injective for $k \le n$. Thus its
---
abstract: 'Let $s,n \ge 2$ be integers. We give a qualitative structural description of every matroid $M$ that is spanned by a frame matroid of a complete graph and has no $U_{s,2s}$-minor and no rank-$n$ projective geometry minor, showing that every such matroid is ‘close’ to a frame matroid. We also give a similar description of every matroid $M$ with a spanning projective geometry over a field $\GF(q)$ as a restriction and with no $U_{s,2s}$-minor and no $\PG(n,q'')$-minor for any $q'' > q$, showing that such an $M$ is ‘close’ to a $\GF(q)$-representable matroid.'
address: 'Department of Combinatorics and Optimization, University of Waterloo, Canada'
author:
- Jim Geelen
- Peter Nelson
title: The Structure of Matroids with a Spanning Clique or Projective Geometry
---
[^1]
Introduction
============
In \[\[highlyconnected\]\], Geelen, Gerards, and Whittle describe the structure of highly-connected matroids in minor-closed classes of matroids represented over a fixed finite field. In the same paper they conjecture extensions of their results to minor-closed classes of matroids omitting a fixed uniform minor. The main results in this paper are motivated by those conjectures, which we shall restate at the end of this introduction. Here we are primarily concerned with the structure of matroids having either the cycle-matroid of a complete graph or a projective geometry as a spanning restriction.
An *elementary projection* of a matroid $M$ is a matroid obtained from an extension of $M$ by contracting the new element, and an *elementary lift* of $M$ is one obtained from a coextension by deleting the new element. Given two matroids $M$ and $N$ on the same ground set, we say that $N$ is a [*distance-$k$ perturbation*]{} of $M$ if $N$ can be obtained from $M$ by a sequence of $k$ elementary lifts and elementary projections. Perturbations play a natural role in considering minor-closed classes of matroids omitting a uniform matroid. In particular, if $\cM$ is a minor-closed class of matroids that omits a uniform matroid, then the set of matroids that are distance-$k$ perturbations of matroids in $\cM$ is also minor-closed and omits a uniform matroid; see Theorem \[perturbthm\]. Note that the uniform matroid $U_{r,n}$ is contained as a minor of $U_{s,2s}$ where $s=\max(r,n-r)$, so it suffices to consider classes omitting ‘balanced’ uniform matroids $U_{s,2s}$.
We start with the easier of our two main results which concerns matroids with a spanning projective geometry restriction.
\[main2\] For all integers $s,n \ge 2$, there exists an integer $k$ such that, for every prime power $q$ and every rank-$r$ matroid $M$ with a $\PG(r-1,q)$-restriction, either $M$ has a $U_{s,2s}$-minor, $M$ has a $\PG(n-1,q')$-minor for some $q' > q$, or there is a distance-$k$ perturbation of $M$ that is $\GF(q)$-representable.
A matroid $M$ is *framed by $B$* if $B$ is a basis of $M$ and each element of $M$ is spanned by a subset of $B$ with at most two elements. A *$B$-clique* is a matroid framed by $B$ so that each pair of distinct elements in $B$ is contained in a triangle. The second of our main results concerns matroids with a spanning $B$-clique restriction.
\[main1\] For all integers $s,n \ge 2$, there exists an integer $k$ such that, if $M$ is a matroid with a spanning $B$-clique restriction, then either $M$ has a $U_{s,2s}$-minor, $M$ has a rank-$n$ projective geometry minor, or there is a distance-$k$ perturbation of $M$ that is framed by $B$.
Theorem \[main1\] has an interesting special case where the spanning clique is ‘bicircular’; in this case we can avoid the outcome giving a large projective geometry as a minor. Given a graph $G = (V,E)$, we write $B^+(G)$ for the *framed bicircular matroid* of $G$; this is the matroid with ground set $E \cup V$, in which a set $X$ is independent if and only if $|X \cap (E(H) \cup V(H))| \le |V(H)|$ for each subgraph $H$ of $G$. Equivalently, $B^+(G)$ is constructed from the free matroid on $V$ by adding each $e = v_1v_2 \in E$ freely to the line between the basis elements $v_1$ and $v_2$. Note that $B^+(K_n)$ is a $V(K_n)$-clique. The *bicircular matroid* of $G$, in the more usual sense, is just the matroid $B(G) = B^+(G) \del V$. As a corollary of Theorems \[main1\] and \[main2\], we get the following strengthening of Theorem \[main1\].
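For small graphs the independence condition in $B^+(G)$ can be checked by brute force: for a fixed vertex set $W$, the left-hand side of the defining inequality is maximized by the subgraph carrying all chosen edges inside $W$, so it suffices to test each $W$. A sketch (our illustration, not from the paper):

```python
from itertools import combinations

def is_independent(X_edges, X_verts, V):
    """X (edges plus frame elements) is independent in B+(G) iff, for every
    vertex set W, (#edges of X inside W) + |X_verts within W| <= |W|."""
    for r in range(len(V) + 1):
        for W in combinations(V, r):
            Wset = set(W)
            inside = sum(1 for (u, v) in X_edges if u in Wset and v in Wset)
            if inside + len(Wset & set(X_verts)) > len(Wset):
                return False
    return True

tri = [(1, 2), (2, 3), (1, 3)]                 # triangle
print(is_independent(tri, [], [1, 2, 3]))      # True: 3 edges on 3 vertices
print(is_independent(tri, [1], [1, 2, 3]))     # False: 4 elements on 3 vertices
```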
\[bicirc\] For every integer $s \ge 2$ there is an integer $k$ such that, if $M$ is a rank-$r$ matroid with no $U_{s,2s}$-minor and with a $B^+(K_r)$-restriction framed by $B$, then there is a distance-$k$ perturbation of $M$ that is framed by $B$.
In Section \[selfdualsection\], we prove a result of independent interest, Theorem \[selfdual\], that finds the unavoidable minors for arbitrarily large matroids that have two disjoint bases. A corollary is the following, which finds one of two specific minors in any matroid that is not close to being ‘trivial’.
\[unavoidable\] Let $s \ge 0$ be an integer and $k = 4^{4^{2s^2}}$. Then, for each matroid $M$, either
- $M$ has a $U_{s,2s}$-minor,
- $M$ has a minor isomorphic to the direct sum of $s$ copies of $U_{1,2}$, or
- there is a distance-$k$ perturbation of $M$ whose elements are all loops or coloops.
Structure Theory {#structure-theory .unnumbered}
----------------
Theorems \[main1\] and \[main2\] fit into a larger, mostly conjectural, regime of structure theory in minor-closed classes omitting a uniform matroid. The first of these conjectures predicts the unavoidable minors for very highly connected matroids. A matroid is *vertically $k$-connected* if, for every $A \subseteq E(M)$ with $\lambda_M(A) < k-1$, either $A$ or $E(M)-A$ is spanning in $M$. The following conjecture was posed in \[\[highlyconnected\]\].
\[highconn\] For all $n \ge 2$ there is an integer $k$ such that, if $M$ is a vertically $k$-connected matroid with $|M|\ge 2k$, then $M$ or $M^*$ has a minor isomorphic to one of $M(K_n),B(K_n),$ or $U_{n,2n}$.
While $M(K_n)^*$ and $B(K_n)^*$ are not even vertically $4$-connected themselves, they do contain minors with high vertical connectivity; indeed, for each $k$ there is a graph $G$ so that $M(G)^*$ and $B(G)^*$ are both vertically $k$-connected. To obtain such a graph one can take a $k$-regular Cayley graph with girth at least $k$ (see Margulis \[\[margulis\]\] for the construction); by \[\[gr\], Theorem 3.4.2\], these graphs are $k$-connected.
In any case, the dual outcomes in Conjecture \[highconn\] are perhaps not needed if $M$ has large co-rank.
For all $n \ge 2$ there is an integer $k$ so that, if $M$ is a vertically $k$-connected matroid with $|M|\ge 2k$ and $r(M^*) \ge r(M)$, then $M$ has a minor isomorphic to one of $M(K_n)$, $B(K_n)$ or $U_{n,2n}$.
The following conjecture, which is essentially posed in \[\[highlyconnected\]\], states that any highly vertically connected matroid omitting a given uniform minor is close to having one of three specific structures that preclude such a minor.
---
abstract: 'The simple physics of microlensing provides a well-understood tool with which to probe the atmospheres of distant stars in the Galaxy and Local Group with high magnification and resolution. Recent results in measuring stellar surface structure through broad band photometry and spectroscopy of high amplification microlensing events are reviewed, with emphasis on the dramatic expectations for future contributions of microlensing to the field of stellar atmospheres.'
author:
- 'Penny D. Sackett'
title: Microlensing and the Physics of Stellar Atmospheres
---
Introduction
============
The physics of microlensing is simple. For most current applications, the principles of geometric optics combined with one relation (for the deflection angle) from General Relativity are all that is required. For observed Galactic microlensing events, the distances between source, lens and observer are large compared to intralens distances, so that small angle approximations are valid. Although it is possible that most lenses may be multiple, $\sim$90% of observed Galactic microlensing light curves can be modeled as being due to a single point lens. Usually, though not always (cf. Albrow et al. 2000), binary lenses can be considered static throughout the duration of the event.
The magnification gradient near caustics is large, producing a sharply peaked lensing “beam” that sweeps across the source due to the relative motion between the lens and the sight line to the source (Fig. 1). Furthermore, the combined magnification of the multiple microimages (which are too close to be resolved with current techniques) is a known function of source position that is always greater than unity, so that more flux is received from the source during the lensing event. The net result is a well-understood astrophysical tool that can simultaneously deliver high resolution and high magnification of tiny background sources. In Galactic microlensing, these sources are stars at distances of a few to a few tens of kiloparsecs.
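For a single point lens this combined magnification takes the standard closed form $A(u)=(u^2+2)/\big(u\sqrt{u^2+4}\big)$, where $u$ is the source-lens angular separation in units of $\theta_{\rm E}$; a brief numerical illustration (our sketch):

```python
import numpy as np

def magnification(u):
    """Point-source point-lens magnification vs. separation u (in Einstein
    radii); always > 1, diverging as 1/u as the lens transits the source."""
    return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

print(magnification(np.array([1.0, 0.1, 0.01])))   # ~1.34, ~10, ~100
```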
The great potential of microlensing for the study of stellar polarization (Simmons, Willis, & Newsam 1995; Simmons, Newsam & Willis 1995; Newsam et al. 1998; Gray 2000), stellar spots (Heyrovský & Sasselov 2000; Bryce & Hendry 2000), and motion in circumstellar envelopes (Ignace & Hendry 1999) will not be treated here. Instead, the focus will be on how the composition of spherically-symmetric stellar atmospheres can be probed by microlensing.
Caustic Transits
================
The angular radius $\theta_{\rm E}$ of a typical Einstein ring is about two orders of magnitude larger than the size $\theta_*$ of a typical Galactic source star (few $\mu$as), but the gradients in magnification that generate source resolution effects are appreciable only in regions near caustics. For a single point lens, the caustic is a single point coincident with the position of the lens on the sky that must directly transit the background source in order to create a sizable finite source effect. The probability of such a point transit is of order $\rho \equiv \theta_*/\theta_{\rm E} \approx \, $2%. The amount of resolving power will depend on the dimensionless impact parameter $\beta$, the distance of the source center from the point caustic in units of $\theta_{\rm E}$. The first clear point caustic transit was observed in event MACHO 95-BLG-30 (Alcock et al. 1997).
Lensing stellar binaries with mass ratios $0.1 \la q \equiv m_2/m_1 \la 1$ and separations $0.6 \la d \equiv \theta_{\rm sep}/\theta_{\rm E} \la 1.6$ generate extended caustic structures that cover a sizable fraction of the Einstein ring (see, e.g., Gould 2000). Since events generally are not alerted unless the source lies inside the Einstein ring, any alerted binary event with $q$ and $d$ in these ranges is highly likely to result in a caustic crossing. If the source crosses the caustic at a position at which the derivative of the caustic curve is discontinuous, it is said to have been transited by a cusp. For a given lensing binary, the probability of a cusp transit is of order $\rho \, N_{\rm cusps} \approx \, $10%. Since $\sim$10% of all events are observed to be lensing stellar binaries, the total cusp-transit probability is $\sim$1%. To date, two cusp-crossing events have been observed, MACHO 97-BLG-28 (Albrow et al. 1999a) and MACHO 97-BLG-41 (Albrow et al. 2000). The remaining caustic crossings are transits of simple fold (line) caustics, which are observed in $\la$10% of all events. Caustics thus present a non-negligible cross section to background stellar sources, with fold caustic transits being most likely by a factor of $\sim$5.
The largest effect of a caustic crossing over an extended source is a broadening and diminishment of the light curve peak at transit that depends on the finite size ($\rho \neq 0$) of the source. If the angular size $\theta_*$ of the source star can be estimated independently (e.g., from color-surface brightness relations), then the time required for the source to travel its own radius, and thus its proper motion $\mu$ relative to the lens, can be determined from the light curve shape. Conversely, unless an independent method is available (see Han 2000) to measure $\mu$ or $\theta_{\rm E}$, photometric microlensing cannot translate knowledge of the dimensionless parameter $\rho$ into a measurement of source radius. What photometric or spectroscopic data alone [*can*]{} yield is a characterization of how the source profile differs from that of a uniform disk (Fig. 2). Microlensing has already yielded such information for stars as distant as the Galactic Bulge and Small Magellanic Cloud.
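The conversion from a measured radius self-crossing time to a relative proper motion is elementary, $\mu = \theta_*/t_*$; with illustrative numbers (ours, not from the text) for a Bulge giant:

```python
theta_star_uas = 6.0                  # assumed source angular radius [micro-arcsec]
t_star_days = 0.5                     # assumed radius self-crossing time [days]
mu_mas_per_yr = (theta_star_uas * 1e-3) / (t_star_days / 365.25)
print(f"mu = {mu_mas_per_yr:.1f} mas/yr")   # ~4.4 mas/yr, typical of Bulge events
```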
Recent Contributions of Microlensing to Stellar Physics
=======================================================
The potential to recover profiles of stellar atmospheres from microlensing has been recognized for several years (Bogdanov & Cherepashchuk 1995; Loeb & Sasselov 1995; Valls-Gabaud 1995), but has been realized only recently, thanks to the improved photometry and especially the temporal sampling now obtained for a large number of events by worldwide monitoring networks. For only a few stars, most of which are supergiants or very nearby, has limb darkening been observationally determined by any technique. Microlensing has the advantage that: (1) many types of stars can be studied, including those quite distant; (2) the probe is decoupled from the source; (3) the signal is amplified (not eclipsed); and (4) intensive observations need only occur over one night.
Limb Darkening
--------------
The first cusp crossing was observed in MACHO 97-BLG-28, and led to the first limb-darkening measurement of a Galactic Bulge star (Albrow et al. 1999a). As the source crossed the caustic cusp, a characteristic anomalous bump was generated in the otherwise smooth light curve. First the leading limb, then the center, and finally the trailing limb of the stellar disk were differentially magnified (Fig. 1). Analysis of the light curve shape during the limb crossing allowed departures from a uniformly-bright stellar disk to be quantified (Fig. 2) and translated into a surface brightness profile in the $V$ and $I$ passbands. A two-parameter limb-darkened model provided a marginally better fit than a linear model. Spectra provided an independent typing of the source as a KIII giant. The stellar profile reconstructed from the microlensing light curve alone is in good agreement with those from stellar atmosphere models (van Hamme 1993; Claret, Diaz-Cordoves, & Gimenez 1995; Diaz-Cordoves, Claret, & Gimenez) for K giants fitted to the same two-parameter (square-root) law (Fig. 3).
This first microlensing measurement of limb darkening was encouraging, but constructing realistic error bars for the results proved awkward. In traditional parameterizations of limb darkening, the coefficients $c_{\lambda}$ and $d_{\lambda}$ defined by $$I_{\lambda}(\theta) = I_{\lambda}(0) \, \left[ 1 - c_{\lambda} (1 - \cos \theta)
- d_{\lambda} (1 - \cos^n \theta) \right]~~~~~{\rm where~} n = 0, 1/2, 2$$ are correlated not only with one another, but also with other parameters in the microlensing fit because they carry information about the total flux $F$ of the source. (Here $\theta$ is the angle between the normal to the stellar surface and the line of sight.) A different parameterization was therefore constructed for the analysis of fold caustic crossings (Albrow et al. 1999b), $$I_{\lambda}(\theta) = \left< I_{\lambda} \right> \, \left[ 1 - \Gamma_{\lambda}
(1 - {\frac{3}{2}} \, \cos \theta) \right]~~~~~{\rm where~} \left<
---
abstract: 'Schrödinger cat states are crucial for exploration of fundamental issues of quantum mechanics and have important applications in quantum information processing. Here, we propose and experimentally demonstrate a method for manipulating cat states in a cavity with the Aharonov-Anandan phase acquired by a superconducting qubit, which is dispersively coupled to the cavity. Based on this dispersive coupling, the qubit can be forced to trace out a circuit in the projective Hilbert space conditional on one coherent state. By preparing the cavity in a superposition of two coherent states, the geometric phase associated with this transport is encoded to the relative probability amplitude of these two coherent states. We demonstrate the photon-number parity of a cat state in a cavity can be controlled by adjusting this geometric phase, which offers the possibility for protecting its quantum coherence from single-photon loss. Based on this geometric effect, we realize phase gates for one and two photonic qubits whose logical basis states are encoded in two quasi-orthogonal coherent states. We further demonstrate two-cavity gates with symmetric and asymmetric Fock state encoding schemes. Our method can be directly extended to implementation of controlled-phase gates between error-correctable logical qubits.'
author:
- 'Y. Xu'
- 'W. Cai'
- 'Y. Ma'
- 'X. Mu'
- 'W. Dai'
- 'W. Wang'
- 'L. Hu'
- 'X. Li'
- 'J. Han'
- 'H. Wang'
- 'Y. P. Song'
- 'Zhen-Biao Yang'
- 'Shi-Biao Zheng'
- 'L. Sun'
title: Geometrically manipulating photonic Schrödinger cat states and realizing cavity phase gates
---
When a quantum system is parallel-transported along a circuit in its quantum state space, it collects information about the geometry of this path, acquiring a “memory" of its motion in the form of a phase. This phase is referred to as the geometric phase and has close relations to many physical phenomena [@Anandan1992The; @Wilczek1989Geometric]. This effect was first discovered by Berry in the context of adiabatic passage [@berry1984quantal]. One remarkable feature of Berry phase is that it is robust against fast parameter fluctuations whose effect on the enclosed parameter-space area averages out [@Chiara2003Berry]. As such, Berry phase has been considered as a choice for fault-tolerant quantum computation [@duan2001geometric; @jones2000geometric]. So far, observation of this phase and demonstration of its noise-resilient feature have been reported in various physical systems [@jones2000geometric; @Tycko1987Adiabatic; @Leek1889Observation; @Filipp2009Experimental; @Pechal2012Geometric; @gasparinetti2016measurement]. Berry’s discovery has triggered considerable interest in quantum-mechanical geometric effects, leading to important generalizations in various directions [@aharonov1987phase; @Samuel1988General]. In particular, Aharonov and Anandan defined geometric phase in the projective Hilbert space, instead of in parameter space [@aharonov1987phase], removing the adiabatic condition. The geometric nature of Aharonov-Anandan (AA) phase lies in the fact that it is related to the area enclosed by the circuit traversed by the state vector.
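In the simplest nonadiabatic example relevant below (Fig. \[fig:fig1\](a)), two successive $\pi$ rotations about equatorial axes separated by an angle $\varphi$ return a basis state to itself with overall phase $\pi+\varphi$; for resonant rotations the dynamical contribution vanishes, so this phase is purely geometric. A quick unitary-algebra check (our sketch; which pole acquires $+\varphi$ versus $-\varphi$ is a matter of convention):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def pi_rotation(phi_axis):
    """pi rotation about the equatorial axis at azimuth phi_axis:
    exp(-i (pi/2) n.sigma) = -i n.sigma."""
    return -1j * (np.cos(phi_axis) * sx + np.sin(phi_axis) * sy)

ket = np.array([0, 1], dtype=complex)          # |1>
for phi in (0.3, 1.0, 2.0):
    U = pi_rotation(phi) @ pi_rotation(0.0)
    gamma = np.angle(ket.conj() @ U @ ket)     # acquired total phase
    print(phi, (gamma - np.pi) % (2 * np.pi))  # prints phi (mod 2*pi)
```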
When two or more quantum systems are coupled, the geometric phase acquired by one system can be employed to manipulate the quantum state of the others [@Zheng2004Unconventional; @Pechal2012Geometric]. The geometric phase of a harmonic vibrational mode of trapped ions has been utilized for implementing high-fidelity entangling gates for the ionic qubits [@Leibfried2003Experimental]. In a recent experiment [@Song2017Geometric], the geometric phase of a continuous-variable field mode was observed through Ramsey interference and used for realizing controlled phase gates with up to four qubits in a superconducting circuit. On the other hand, it has been shown that the geometric phase of a superconducting qubit can be used for realizing Selective Number-dependent Arbitrary Phase (SNAP) gates on a cavity [@Krastanov2015]. Gates of this kind have been experimentally demonstrated and used to produce a single-photon state [@Heeres2015]. Recently, a quantum controlled-NOT (CNOT) gate between two cavity systems has been demonstrated by use of both the dynamical and AA phases produced by controllably coupling these cavities to a qubit [@Rosenblum2018]. This gate requires the logical states of the control qubit to be encoded in the vacuum state and a nonzero photon-number state, respectively, which renders it incompatible with quantum error correction schemes; on the occurrence of single-photon loss the control qubit will collapse to a Fock state, leading to complete loss of the stored information.
![Geometric manipulation of a photonic cat state. (a) Schematic of the nonadiabatic AA phase of a qubit. Two successive $\pi$ rotations of the qubit produce a geometric phase $\gamma = \pi + \varphi$, where $\varphi$ is the angle between the two rotation axes. (b) Experimental sequence to manipulate the cat state. A cavity is dispersively coupled to the qubit and initialized in a cat state $\left({\ensuremath{\left|0\right\rangle}}+{\ensuremath{\left|2\sqrt{2}\right\rangle}}\right)/\sqrt{2}$ with the help of an ancillary qubit $Q_2$. The AA phase produced by the rotations of $Q_1$ conditional on the cavity’s vacuum state is encoded in the probability amplitude of ${\ensuremath{\left|0\right\rangle}}$, resulting in a phase gate. (c) Measured Wigner function of the cavity state before the phase gate, corresponding to fidelity of 0.980 to the ideal cat state. (d) Wigner function of the cavity state after the gate with $\varphi=0$. The slight rotation and deformation of the Wigner function is due to the self-Kerr effect of the cavity. (e) Measured parity of the cavity state as a function of $\varphi$ after a displacement $D(-\sqrt{2} e^{i\delta} )$ for different values of $\delta$. Symbols are experimental data, in excellent agreement with numerical simulations (solid lines).[]{data-label="fig:fig1"}](Figure1_final.pdf)
We here propose and experimentally demonstrate a scheme for manipulating the parity of a cat state in a cavity with the AA phase of a qubit dispersively coupled to the cavity in a superconducting circuit. Cat states are of fundamental interest [@Deleglise] and can be used to encode error-correctable logical qubits [@LeghtasPRL2013; @Mirrahimi2014; @Ofek2016; @heeres2017implementing]. Thus, manipulating these states and protecting them from decoherence is a subject of great importance. In our experiment, the qubit is parallel-transported along a closed loop on the Bloch sphere, picking up a geometric phase, conditional on one of the two quasiclassical components forming the cat state. We demonstrate the photon-number parity of the cat state can be manipulated by this geometric operation. This manipulation technique, in combination with the parity jump tracking method [@SunNature], allows for the protection of the quantum coherence of cat states from single-photon loss. We then employ this phase to realize logic gates for a cat-encoded qubit, and generalize our method to implementation of two-cavity controlled-phase gates with different encoding schemes and two-cavity SNAP gates for entangling two cavities. Our procedure can be directly generalized to implement gates between logic qubits with inherent error correction function.
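The parity control is transparent in the Fock basis: for a symmetric cat $\big(|\beta\rangle+e^{i\gamma}|-\beta\rangle\big)/\mathcal N$, the encoded phase $\gamma$ sweeps the photon-number parity continuously between $+1$ and $-1$. A numerical sketch (ours; the experimental state $(|0\rangle+|2\sqrt{2}\rangle)/\sqrt{2}$ maps onto this symmetric form, up to phase conventions, under the displacement $D(-\sqrt{2})$ applied before the parity measurement):

```python
import numpy as np
from math import factorial

def coherent(beta, dim):
    """Coherent-state amplitudes <n|beta> in a truncated Fock basis."""
    n = np.arange(dim)
    fact = np.array([float(factorial(k)) for k in n])
    return np.exp(-abs(beta) ** 2 / 2) * beta ** n / np.sqrt(fact)

def cat_parity(beta, gamma, dim=40):
    """Photon-number parity <(-1)^N> of (|beta> + e^{i gamma}|-beta>)/norm."""
    psi = coherent(beta, dim) + np.exp(1j * gamma) * coherent(-beta, dim)
    psi = psi / np.linalg.norm(psi)
    return float(np.sum((-1.0) ** np.arange(dim) * np.abs(psi) ** 2))

for gamma in (0.0, np.pi / 2, np.pi):
    print(round(cat_parity(np.sqrt(2), gamma), 3))   # 1.0, ~0.02, -1.0
```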
![Quantum process tomography of single-cavity geometric phase gates. (a) Experimental sequence. (b) The Pauli transfer process $R$ matrix fidelity as a function of $m$, the number of the Z gate on the cavity state. The inserts show the measured $R$ matrices after one and nine Z gates, respectively. A linear fit of the process fidelity decay gives the Z gate fidelity $F_\mathrm{Z} = 0.987\pm0.001$. (c) The measured and ideal Pauli transfer $R$ matrices of the S gate and T gate with fidelities $F_{S} = 0.968$ and $F_{T} = 0.964$.[]{data-label="fig:fig2"}](Figure2_final.pdf)
![Two-cavity geometric phase gate. (a) A 3D view of Device B. A superconducting transmon qubit $Q_3$ at the center couples to two coaxial cavities $S_1$ and $S_2$, which couple to two other individual ancillary transmon qubits $Q_1$ and $Q_2$, respectively. Each of these transmon qubits independently couples to a stripline readout resonator used to perform simultaneous single-shot readout. (b) Schematic of the experimental sequence. (c) Ideal (left) and measured Pauli transfer $R$ matrices of two-cavity CZ gates with coherent state encoding {${\ensuremath{\left|0\right\rangle}}$, ${\ensuremath{\left|2\sqrt{2}\right\rangle}}$} (middle) and Fock state encoding {${\ensuremath{\left|0\right\rangle}}$, ${\ensuremath{\left|1\right\rangle}}$} (right) for both cavities. The process fidelities, $F_\mathrm{CZ\_ED}$ ($F_\mathrm{ED}$), for these two encodings are 0.727
---
abstract: 'We study the production of radioisotopes for nuclear medicine in $(\gamma,x{\rm n}+y{\rm p})$ photonuclear reactions or ($\gamma,\gamma''$) photoexcitation reactions with high flux \[($10^{13}-10^{15}$)$\gamma$/s\], small diameter $\sim (100 \, \mu$m$)^2$ and small bandwidth ($\Delta E/E \approx 10^{-3}-10^{-4}$) $\gamma$ beams produced by Compton back-scattering of laser light from relativistic brilliant electron beams. We compare them to (ion,$x$n$ + y$p) reactions with (ion=p,d,$\alpha$) from particle accelerators like cyclotrons and (n,$\gamma$) or (n,f) reactions from nuclear reactors. For photonuclear reactions with a narrow $\gamma$ beam the energy deposition in the target can be managed by using a stack of thin target foils or wires, hence avoiding direct stopping of the Compton and pair electrons (positrons). However, for ions, which suffer strong atomic stopping, only a fraction of less than $10^{-2}$ leads to nuclear reactions; the resulting target heating is at least $10^{5}$ times larger per produced radioactive ion and often limits the achievable activity. In photonuclear reactions the well defined initial excitation energy of the compound nucleus leads to a small number of reaction channels and enables new combinations of target isotope and final radioisotope. The narrow bandwidth $\gamma$ excitation may make use of the fine structure of the Pygmy Dipole Resonance (PDR) or of fluctuations in the $\gamma$-width, leading to increased cross sections. Within a rather short period compared to the isotopic half-life, a target area of the order of $(100 \,\mu$m$)^2$ can be highly transmuted, resulting in a very high specific activity. $(\gamma,\gamma'')$ isomer production via specially selected $\gamma$ cascades allows producing high specific activity in multiple excitations, where no back-pumping of the isomer to the ground state occurs. We discuss in detail many specific radioisotopes for diagnostics and therapy applications. Photonuclear reactions with $\gamma$ beams allow the production of certain radioisotopes, e.g. $^{47}$Sc, $^{44}$Ti, $^{67}$Cu, $^{103}$Pd, $^{117m}$Sn, $^{169}$Er, $^{195m}$Pt or $^{225}$Ac, with higher specific activity and/or more economically than with classical methods. This will open the way for completely new clinical applications of radioisotopes. For example, $^{195m}$Pt could be used to verify the patient’s response to chemotherapy with platinum compounds before a complete treatment is performed. Also innovative isotopes like $^{47}$Sc, $^{67}$Cu and $^{225}$Ac could be produced for the first time in sufficient quantities for large-scale application in targeted radionuclide therapy.'
author:
- 'D. Habs, and U. Köster'
date: 'Received: date / Revised version: date'
title: 'Production of Medical Radioisotopes with High Specific Activity in Photonuclear Reactions with $\gamma$ Beams of High Intensity and Large Brilliance'
---
Introduction
============
In nuclear medicine radioisotopes are used for diagnostic and therapeutic purposes [@schiepers06; @cook06]. Many diagnostic applications are based on molecular imaging methods, i.e. either on positron emitters for 3D imaging with PET (positron emission tomography) or gamma ray emitters for 2D imaging with planar gamma cameras or 3D imaging with SPECT (single photon emission computed tomography)[^1]. The main advantage of nuclear medicine methods is the high sensitivity of the detection systems that allows using tracers at extremely low concentrations (some pmol in total, injected in typical concentrations of nmol/l). This extremely low amount of radiotracers ensures that they do not show any (bio-)chemical effect on the organism. Thus, the diagnostic procedure does not interfere with the normal body functions and provides direct information on the normal body function, not perturbed by the detection method. Moreover, even elements that would be chemically toxic in much higher concentrations can be safely used as radiotracers (e.g. thallium, arsenic, etc.). To maintain these intrinsic advantages of nuclear medicine diagnostics one has to ensure that radiotracers of relatively high specific activity are used, i.e. that the injected radiotracer is not accompanied by too many stable isotopes of the same (or a chemically similar) element.
Radioisotopes are also used for therapeutic applications, in particular for endo-radiotherapy. Targeted systemic therapies allow fighting diseases that are non-localized, e.g. leukemia and other cancer types in an advanced state, when multiple metastases have already formed. Usually a bioconjugate [@schiepers06] is used that shows a high affinity and selectivity to bind to peptide receptors or antigens that are overexpressed on certain cancer cells with respect to normal cells. Combining such a bioconjugate with a suitable radioisotope, such as a (low-energy) electron or alpha emitter, allows selectively irradiating and destroying the cancer cells. Depending on the nature of the bioconjugate, these therapies are called Peptide Receptor Radio Therapy (PRRT) [@cook06; @Reu06] when peptides are used as bioconjugates or radioimmunotherapy (RIT) [@cook06; @Jac10] when antibodies are used as bioconjugates. Bioconjugates could also be antibody-fragments, nanoparticles, microparticles, etc. For cancer cells having only a limited number of selective binding sites, an increase of the concentration of the bioconjugates may lead to blocking of these sites and, hence, to a reduction in selectivity. Therefore the radioisotopes for labeling of the bioconjugates should have a high specific activity, to minimize injection of bioconjugates labeled with stable isotopes that show no radiotherapeutic efficacy. Thus high specific activities are often required for radioisotopes used in such therapies.
The tumor uptake of bioconjugates varies considerably from one patient to another. This leads to a considerable variation in the dose delivered to the tumor if the same activity (or activity per body mass or per body surface) were administered. Ideally, personalized dosimetry should be performed by first injecting a small quantity of the bioconjugate in question, marked by an imaging isotope (preferably a $\beta^+$ emitter for PET). Thus the tumor uptake can be quantitatively determined and the injected activity of the therapy isotope can be adapted accordingly. To ensure a representative in-vivo behaviour of the imaging agent, the PET tracer should ideally be an isotope of the same element as the therapy isotope, or at least of a chemically very similar element, such as neighboring lanthanides. Thus so-called “matched pairs” of diagnostic and therapy isotopes are of particular interest: $^{44}$Sc/$^{47}$Sc, $^{61}$Cu or $^{64}$Cu/$^{67}$Cu, $^{86}$Y/$^{90}$Y, $^{123}$I or $^{124}$I/$^{131}$I, and $^{152}$Tb/$^{149}$Tb or $^{161}$Tb. Often the production of one of these isotopes is less straightforward with classical methods. Therefore “matched pairs” are not yet established as standard in clinical practice. The “matched pairs” of scandium and copper can be produced much better with $\gamma$ beams. Valence-III elements do not necessarily show an identical in-vivo behaviour [@Bey00; @Reu00] but in many cases they are sufficiently similar. For example, the 68 min PET tracer $^{68}$Ga is conveniently eluted from $^{68}$Ge generators and used as an imaging analog for the therapy isotopes $^{90}$Y, $^{177}$Lu or $^{213}$Bi [@Mae05].
The radioisotopes for diagnostic or therapeutic nuclear medicine applications are usually produced by nuclear reactions. The required projectiles are typically either neutrons (from dedicated irradiation reactors) or charged particles (from small or medium-sized cyclotrons or other accelerators). In section 2 we briefly discuss these presently used techniques and then introduce in section 3 the new $\gamma$ beams with high $\gamma$ energies, high intensities and small bandwidth. Such a $\gamma$ facility will typically consist of an electron linac, delivering a relativistic electron beam with high brilliance and high intensity, from which intense laser beams are Compton back-scattered. These $\gamma$ facilities allow the production of many radioisotopes in new photonuclear reactions with significantly higher specific activity. In section 4 we compare, for certain radioisotopes of interest, the specific activities achievable with presently used production reactions and with $\gamma$ beams, respectively. In section 5 the energy deposition of $\gamma$ beams is compared to that of ion beams, showing that targets can endure higher $\gamma$-beam than ion-beam intensities, allowing for higher $\gamma$ flux densities. We discuss interesting cases of specific radioisotopes in section 6. Besides attaching radioisotopes to biomolecules in therapeutic applications, we discuss in section 7 new ways of brachytherapy. Finally, in section 8 the advantages of producing radioisotopes with $\gamma$ beams are outlined.
Presently used Nuclear Reactions to Produce Medical Radioisotopes
=================================================================
Today the most frequently employed nuclear reactions for the production of medical radioisotopes are:
- [*Neutron capture*]{}\
Neutron capture (n,$\gamma$) reactions transmute a stable isotope into a radioactive isotope of the same element. High specific activities are obtained when the (n,$\gamma$) cross section is high and the target is irradiated in a high neutron flux; a back-of-the-envelope activation estimate is sketched below.
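The induced activity of a thin target follows the standard saturation formula $A(t)=N_t\,\sigma\,\Phi\,(1-e^{-\lambda t})$; the sketch below uses assumed placeholder values, not numbers from this article:

``` python
# Back-of-the-envelope (n,gamma) activation: A(t) = N_t * sigma * Phi * (1 - e^{-lambda t}).
# All input numbers are assumed placeholders, not values from this article.
import math

sigma = 1e-24           # capture cross section in cm^2 (1 barn, assumed)
phi = 1e14              # neutron flux in n cm^-2 s^-1 (assumed)
half_life = 6.6 * 3600  # product half-life in s (assumed)
n_target = 1e20         # number of target atoms (assumed)

lam = math.log(2) / half_life
for t_irr in (3600, 6 * 3600, 24 * 3600):   # irradiation times in s
    activity = n_target * sigma * phi * (1 - math.exp(-lam * t_irr))
    print(f"t = {t_irr / 3600:5.1f} h  ->  A = {activity:.3e} Bq")
```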
---
abstract: 'How to improve the quality of conversations in online communities has attracted considerable attention recently. Having engaged, urbane, and reactive online conversations has a critical effect on the social life of Internet users. In this study, we are particularly interested in identifying a post in a multi-party conversation that is unlikely to be further replied to, and which therefore kills that thread of the conversation. For this purpose, we propose a deep learning model called ConverNet. ConverNet is attractive due to its capability of modeling the internal structure of a long conversation and of encoding the contextual information of the conversation, through an effective integration of attention mechanisms. Empirical experiments on real-world datasets demonstrate the effectiveness of the proposed model. On this widely discussed topic, our analysis also offers implications for improving the quality and user experience of online conversations.'
author:
- Yunhao Jiao
- Cheng Li
- Fei Wu
- Qiaozhu Mei
title: 'Find The Conversation Killers: A Predictive Study of Thread-ending Posts'
---
---
abstract: 'An operator $T$ from a vector lattice $E$ into a vector topology $(F,\tau)$ is said to be order-to-topology continuous whenever $x_\alpha\xrightarrow{o}0$ implies $Tx_\alpha\xrightarrow{\tau}0$ for each $(x_\alpha)_\alpha\subset E$. The collection of all order-to-topology continuous operators will be denoted by $L_{o\tau}(E,F)$. In this paper, we study some properties of this new class of operators. We investigate the relationships between order-to-topology continuous operators and other classes of operators, such as order continuous, order weakly compact and $b$-weakly compact operators.'
author:
- Kazem Haghnejad Azar
date: 'Received: date / Accepted: date'
title: 'Order-to-topology continuous operators'
---
Introduction
============
In a locally solid vector lattice, topologies for which order convergence implies topological convergence are very useful; they are known as order continuous topologies. A linear topology $\tau$ on a vector lattice is said to be order continuous whenever $x_\alpha\xrightarrow{o}0$ implies $x_\alpha\xrightarrow{\tau}0$. In a normed vector lattice, it is likewise desirable that order convergence imply norm convergence. A normed lattice $E$ has order continuous norm if $\| x_\alpha\|\rightarrow 0$ for every decreasing net $(x_\alpha)_\alpha$ with $\inf_\alpha x_\alpha=0$. Let $E$ be a vector lattice and $(F,\tau)$ be a vector topology. In this manuscript, we investigate operators $T:E\rightarrow F$ which carry every order convergent net to a topologically convergent net. To state our results, we need to fix some notation and recall some definitions. A net $(x_{\alpha})_{\alpha \in A}$ in a vector lattice $E$ is said to be strongly order convergent to $x\in E$ if there is a net $(z_{\beta})_{\beta \in B}$ in $E$ such that $z_{\beta} \downarrow 0$ and for every $\beta \in B$, there exists $\alpha_{0} \in A$ such that $| x_{\alpha} - x |\leq z_{\beta}$ whenever $\alpha \geq \alpha_{0}$. For short, we will denote this convergence by $x_{\alpha} \xrightarrow{so} x$ and write that $x_{\alpha}$ is $so$-convergent to $x$. Obviously every order convergent net in a vector lattice is strongly order convergent, but the converse does not hold; for a Dedekind complete vector lattice the two definitions coincide, see [@1b] for details. A net $(x_{\alpha})_{\alpha}$ in a vector lattice $E$ is unbounded order convergent to $x \in E$ if $| x_{\alpha} - x | \wedge u \xrightarrow{so} 0$ for all $u \in E^{+}$. We denote this convergence by $x_{\alpha} \xrightarrow{uo}x$ and write that $x_{\alpha}$ is $uo$-convergent to $x$. It is clear that for order bounded nets, $uo$-convergence is equivalent to $so$-convergence. In [@7], Wickstead characterized the spaces in which $w$-convergence of nets implies $uo$-convergence and vice versa, and in [@5g1] he characterized the spaces $E$ such that in the dual space $E^{\prime}$, $uo$-convergence implies $w^{*}$-convergence and vice versa. A Banach lattice $E$ is said to be an $AM$-space if for each $x,y\in E$ such that $|x|\wedge |y|=0$, we have $\|x+y\|= \max \{\|x\|, \|y\|\}$. A Banach lattice $E$ is said to be a $KB$-space whenever each increasing norm bounded sequence of $E^+$ is norm convergent. An operator $T: E\rightarrow F$ between two vector lattices is positive if $T(x)\geq 0$ in $F$ whenever $x\geq 0$ in $E$. Note that each positive linear mapping on a Banach lattice is continuous. In this manuscript, $L_b(E,F)$ denotes the space of all order bounded operators, and the collection of all order continuous operators in $L_b(E,F)$ will be denoted by $L_n(E,F)$; the subscript $n$ is justified by the fact that the order continuous operators are also known as normal operators. That is, $$L_n(E,F) :=\{T \in L_b(E,F): T~ \text{is~ order~ continuous}\}.$$ Similarly, $L_c(E,F)$ will denote the collection of all order bounded operators from $E$ to $F$ that are $\sigma$-order continuous. An operator $T$ from a Banach space $X$ into a Banach space $Y$ is compact (resp. weakly compact) if $\overline{{T(B _ X)}}$ is compact (resp. weakly compact), where $B _ X$ is the closed unit ball of $X$. A continuous operator from a Banach lattice $E$ into a Banach space $X$ is called $M$-weakly compact if $\lim \Vert Tx_n\Vert=0$ holds for every norm bounded disjoint sequence $(x_n)_n$ of $E$.
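The following standard example, recalled here for illustration (it is not part of this manuscript), shows that $uo$-convergence is strictly weaker than order convergence. Let $E=\ell^p$ with $1\leq p<\infty$ and let $(e_n)_n$ denote the standard unit vectors. For every $u\in E^{+}$ we have $e_n\wedge u\xrightarrow{so}0$: indeed, the elements $z_n$ defined by $z_n(k)=\min\{1,u(k)\}$ for $k\geq n$ and $z_n(k)=0$ otherwise belong to $\ell^p$, satisfy $z_n\downarrow 0$, and dominate the tails of $(e_m\wedge u)_m$; hence $e_n\xrightarrow{uo}0$. However, $(e_n)_n$ is not order convergent to $0$, since any element dominating a tail of the sequence would have to dominate $\sup_{k\geq n}e_k$, whose coordinates $(0,\dots,0,1,1,\dots)$ do not define an element of $\ell^p$.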
A subset $A$ of a vector lattice $E$ is called $b$-order bounded in $E$ if it is order bounded in $E^{\sim\sim}$. An operator $T:E\rightarrow X$ mapping each $b$-order bounded subset of $E$ into a relatively weakly compact subset of $X$ is called a $b$-weakly compact operator; see [@3]. An operator $T:E\rightarrow X$ from a vector lattice into a normed space is called interval-bounded if the image of every order interval is norm bounded. For every interval-bounded linear operator $T:E\rightarrow X$, let $$q_T(x)=\sup\{\Vert Ty\Vert:~\vert y\vert\leq \vert x\vert\}, \qquad x\in E,$$ denote the absolute monotone seminorm induced by $T$. For terminology concerning Banach lattice theory and positive operators, we refer the reader to the excellent book [@1].\
Main results
============
Let $E$ be a vector lattice and let $(F,\tau)$ be a vector topology. An operator $T$ from $E$ into $F$ is said to be order-to-topology continuous whenever $x_\alpha\xrightarrow{o}0$ implies $Tx_\alpha\xrightarrow{\tau}0$ for each $(x_\alpha)_\alpha\subset E$. If, for each sequence $(x_n)\subset E$, $x_n\xrightarrow{o}0$ implies $Tx_n\xrightarrow{\tau}0$, then $T$ is called a $\sigma$-order-to-topology continuous operator. The collection of all order-to-topology continuous operators will be denoted by $L_{o\tau}(E,F)$; the subscript $o\tau$ refers to order-to-topology continuity. That is, $$L_{o\tau}(E,F)=\{T\in L(E,F):~T~\text{is order-to-topology continuous }\}.$$ Similarly, $L^\sigma_{o\tau}(E,F)$ will denote the collection of all $\sigma$-order-to-topology continuous operators, that is, $$L^\sigma_{o\tau}(E,F)=\{T\in L(E,F):~T~\text{is} ~\sigma-\text{order-to-topology continuous }\}.$$
For a normed space $F$, we write $L_{on}(E,F)$ and $L_{ow}(E,F)$ for the collections of order-to-norm topology continuous operators and order-to-weak topology continuous operators, respectively. $L^\sigma_{on}(E,F)$ and $L^\sigma_{ow}(E,F)$ have similar definitions. Clearly $L^\sigma_{on}(E,F)$ is a subspace of $L^\sigma_{ow}(E,F)$, and if $F$ has the Schur property, then $L^\sigma_{on}(E,F)=L^\sigma_{ow}(E,F)$. Let $T$ be an order-to-norm topology continuous operator from a vector lattice $E$ into a normed vector lattice $F$ and let $0\leq S\leq T$ where $S\in L(E,F)$. Then observe that $S$ is also an order-to-norm topology continuous operator. It is clear that for a locally solid vector lattice $E$ with an order continuous topology, every continuous operator from $E$ into a vector topology $F$ is order-to-topology continuous. If an operator $T:E\rightarrow F$ from a Banach lattice into a normed vector lattice is positive (and hence continuous), it is not, in general, order-to-norm continuous, as shown in the following examples.
1. Consider the operator $T:\ell^\
---
abstract: 'We study diffuse gamma-ray emission at intermediate Galactic latitudes measured by the Fermi Large Area Telescope with the aim of searching for a signal from dark matter annihilation or decay. In the absence of a robust dark matter signal, constraints are presented. We set both conservative dark matter limits, which require only that the dark matter signal not exceed the observed diffuse gamma-ray emission, and limits derived by modeling the foreground astrophysical diffuse emission. Uncertainties in several parameters which characterize conventional astrophysical emission are taken into account using a profile likelihood formalism. The resulting limits impact the range of particle masses over which dark matter thermal production in the early Universe is possible, and challenge the interpretation of the PAMELA/Fermi-LAT cosmic ray anomalies as annihilation of dark matter.'
author:
- 'G. Zaharijas'
- 'J. Conrad, A. Cuoco, Z. Yang'
title: 'Constraints on the Galactic Dark Matter signal from the Fermi-LAT measurement of the diffuse gamma-ray emission'
---
Introduction
============
Most of the mass in our Universe is in the form of yet-unidentified particles (i.e., Dark Matter (DM)), which have been detected only through their gravitational interactions thus far. In one of the most attractive frameworks to explain the DM problem (the ‘WIMP paradigm’) those particles are expected to self-annihilate into stable standard model particles, producing gamma rays, electrons and protons. Due to our proximity to the center of the Milky Way DM halo, such gamma ray emission originating in our Galaxy would appear as a diffuse signal.
At the same time, the majority of the Galactic diffuse emission is produced through radiative losses of cosmic-ray (CR) electrons and nucleons in the interstellar medium. Modeling of this emission presents one of the major challenges when looking for subdominant signals from dark matter. In this analysis we test the diffuse LAT data for a contribution from the DM signal by performing a fit of the spectral and spatial distributions of the expected photons at intermediate Galactic latitudes. In doing so, we take into account the most up-to-date modeling of the established astrophysical signal [@paper2; @us]. Our aim is to constrain the DM properties and treat the parameters of the astrophysical diffuse gamma-ray background as nuisance parameters. Those parameters are typically correlated with the assumed DM content, and it is thus important to scan over them together with the DM parameter space, since they directly affect the DM fit. Besides this approach, we will also quote conservative upper limits using the data only (i.e. without performing any modeling of the astrophysical background).
Modeling of the high-energy Galactic diffuse emission {#diffusemodeling}
=====================================================
We follow [@paper2] in using the `GALPROP` code v54 [@galprop], to calculate the propagation and distribution of CRs in the Galaxy and the whole sky diffuse emission, as well as the signal from DM. Several parameters enter the CR propagation modeling, see [@paper2] for more detail: the distribution of CR sources, the half-height of the diffusive halo $z_h$, the radial extent of the halo $R_h$, the nucleon and electron injection spectrum, the normalization of the diffusion coefficient $D_0$, the rigidity dependence of the diffusion coefficient $\delta$, ($D(\rho)=D_0 (\rho/\rho_0)^{-\delta}$ with $\rho_0$ being the reference rigidity) and the Alfv[é]{}n speed $v_A$, (parametrizing the strength of re-acceleration of CRs in the ISM via Alfv[é]{}n waves) and the velocity of the Galactic winds perpendicular to the Galactic Plane $V_c$. Interactions of the CRs with the interstellar medium (ISM) and interstellar radiation field (ISRF) produce three distinct components of the gamma-ray emission: photons from the [*decay of neutral pions*]{} produced in the interaction of the CR nucleons with the interstellar gas, [*bremsstrahlung*]{} of the CR electron population on the interstellar gas and their [*inverse Compton*]{} scattering off the interstellar radiation field.
In [@paper2] various standard parameters of the CR propagation were studied in a fit to CR data and it was shown that they represent well the gamma-ray sky, although various residuals ([at a $\sim 30\%$ level [@paper2]]{}), both at small and large scales, remain. These residuals can be ascribed to various limitations of the models: imperfections in the modeling of gas and ISRF components, simplified assumptions in the propagation set-up, unresolved point sources, and large scale structures like Loop I [@Casandjian:2009wq] or the Galactic Bubbles [@Su:2010qj]. Since residuals do not seem obviously related to DM, we focus in the following on setting limits on the possible DM signal, rather than *searching* for a DM signal.
In our work, we use the results of the fits to the CR data from [@paper2] but we allow for more freedom in certain parameters governing the CR distribution and known astrophysical diffuse emission and constrain these parameters by fitting the models to [*the LAT gamma-ray data*]{}.
DM maps
=======
Numerical simulations of Milky Way size halos reveal a smooth halo which contains a large number of subhalos [@Diemand:2007qr; @Springel:2008cc]. The properties of the smooth halo seem to be well understood, at least on the scales resolved by simulations, while the properties of the subhalo population are more model dependent. In the inner $\lesssim 20^\circ$ region of the Galaxy, the smooth component is expected to dominate [@Diemand:2006ik; @Springel:2008by; @Pieri:2009je], and we conservatively consider only the smooth component in this work.
We parametrize the smooth DM density $\rho$ with a NFW spatial profile [@Navarro:1995iw] $$\rho(r)=\frac{\rho_0\,R_s}{r \, \left(1+r/R_s \right)^{2}}$$ and a cored (isothermal-sphere) profile [@Begeman:1991iy]: $$\rho(r) = \frac{\rho_0 \left( {R_\odot^2+R_c^2}\right)}{\left({r^2+R_c^2}\right)}.$$ For the local density of DM we take the value of $\rho_0=0.43$ GeV cm$^{-3}$ [@Salucci:2010qr], and the scale radius of $R_s=$ 20 kpc (for NFW) and $R_c=$ 2.8 kpc (isothermal profile). We also set the distance of the solar system from the center of the Galaxy to the value $R_\odot=$ 8.5 kpc. For the annihilation/decay spectra we consider three channels with distinctly different signatures: annihilation/decay into the $b{\bar b}$ channel, into $\mu ^+ \mu^-$, and into $\tau^+\tau^-$. In the first case gamma rays are produced through hadronization of annihilation products and subsequent pion decay. The resulting spectra are similar for all channels in which DM produces heavy quarks and gauge bosons, and this channel is therefore representative of a large set of particle physics models. The choice of leptonic channels provided by the second and third scenarios is motivated by the dark matter interpretation [@Grasso:2009ma] of the PAMELA positron fraction [@Adriani:2008zr] and the [*Fermi*]{} LAT electrons plus positrons [@Abdo:2009zk] measurements. In this case, gamma rays are dominantly produced through radiative processes of electrons, as well as through the Final State Radiation (FSR). We produce the DM maps with a version of `GALPROP` slightly modified to implement custom DM profiles and injection spectra (which are calculated by using [the `PPPC4DMID` tool]{} described in [@Cirelli:2010xx] and include a contribution from electro-weak bremsstrahlung).
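For concreteness, the two profiles can be evaluated directly with the quoted parameter values; the following minimal sketch (not part of the analysis pipeline) implements the formulas exactly as written above:

``` python
# Evaluate the NFW and cored isothermal profiles with the quoted parameters.
rho0 = 0.43   # local DM density in GeV cm^-3
Rs = 20.0     # NFW scale radius in kpc
Rc = 2.8      # isothermal core radius in kpc
Rsun = 8.5    # Sun to Galactic-center distance in kpc

def rho_nfw(r):
    """NFW profile as written in the text."""
    return rho0 * Rs / (r * (1 + r / Rs) ** 2)

def rho_iso(r):
    """Cored isothermal profile; equals rho0 at r = Rsun by construction."""
    return rho0 * (Rsun**2 + Rc**2) / (r**2 + Rc**2)

for r in (1.0, Rsun, 20.0):
    print(f"r = {r:5.1f} kpc:  NFW = {rho_nfw(r):.3f}  iso = {rho_iso(r):.3f}  GeV cm^-3")
```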
Approach to set DM limits {#outline}
=========================
We use 24 months of LAT data in the energy range between 1 and 100 GeV (but we use energies up to 400 GeV when deriving DM limits with no assumption on the astrophysical background). We use only events classified as gamma rays in the P7CLEAN event selection and the corresponding `P7CLEAN_V6` instrument response functions (IRFs)[^1]. Structures like Loop I and the Galactic Bubbles appear mainly at high Galactic latitudes, and to limit their effects on the fitting we consider a ROI in Galactic latitude, $b$, of $5^{\circ} \leq |b|\leq 15^{\circ}$, and Galactic longitude, $l$, of $|l|\leq 80^{\circ}$. We mask the region $|b| \lesssim 5^{\circ}$ along the Galactic Plane, in order to reduce the uncertainty due to the modeling of the astrophysical and DM emission profiles.
DM limits with no assumption on the astrophysical background {#nobkg}
------------------------------------------------------------
To set this type of limit we first convolve a given DM model with the Fermi LAT instrument response functions (IRFs) to obtain the counts expected from DM annihilation. The expected counts are then compared with the observed counts in our ROI, and the upper limit is set to the *minimum* DM normalization which gives counts in excess of the observed ones in at least one bin, i.e.
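Schematically, this prescription amounts to taking, over all bins, the minimum ratio of observed counts to the DM counts predicted at unit normalization; a minimal sketch with hypothetical numbers (ignoring Poisson fluctuations) reads:

``` python
# Background-free limit: smallest DM normalization whose predicted counts
# exceed the observed counts in at least one bin. Numbers are hypothetical.
import numpy as np

observed = np.array([120.0, 80.0, 33.0, 9.0, 2.0])   # counts per bin (made up)
dm_unit = np.array([4.0, 2.5, 1.2, 0.5, 0.2])        # DM counts at unit norm (made up)

norm_limit = np.min(observed / dm_unit)
print(f"upper limit on the DM normalization: {norm_limit:.1f}")
```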
---
author:
- 'Maria-Florina Balcan[^1]'
- 'Yingyu Liang[^2]'
- 'Pramod Gupta[^3]'
bibliography:
- 'jmlr-ref.bib'
title: 'Robust Hierarchical Clustering [^4] '
---
[^1]: `ninamf@cs.cmu.edu`. School of Computer Science, Carnegie Mellon University.
[^2]: `yliang39@gatech.edu`. College of Computing, Georgia Institute of Technology.
[^3]: `pramodg@google.com`. Google, Inc..
[^4]: A preliminary version of this article appeared under the title *Robust Hierarchical Clustering* in the Proceedings of the Twenty-Third Conference on Learning Theory, 2010.
---
abstract: 'We use the Sloan Digital Sky Survey to investigate the properties of massive elliptical galaxies in the local Universe ($z\leq 0.08$) that have unusually blue optical colors. Through careful inspection, we distinguish elliptical from non-elliptical morphologies among a large sample of similarly blue galaxies with high central light concentrations ($c_r\geq 2.6$). These blue ellipticals comprise 3.7 per cent of all $c_r\geq 2.6$ galaxies with stellar masses between $10^{10}$ and $10^{11}~h^{-2}~{\rm M}_{\sun}$. Using published fiber spectra diagnostics, we identify a unique subset of 172 [*non-star-forming*]{} ellipticals with distinctly blue $urz$ colors and young ($<3$Gyr) light-weighted stellar ages. These [*recently quenched ellipticals*]{} (RQEs) have a number density of $2.7-4.7\times 10^{-5}\,h^3\,{\rm Mpc}^{-3}$ and sufficient numbers above $2.5\times10^{10}~h^{-2}~{\rm M}_{\sun}$ to account for more than half of the expected quiescent growth at late cosmic time assuming this phase lasts 0.5Gyr. RQEs have properties that are consistent with a recent merger origin (i.e., they are strong ‘first-generation’ elliptical candidates), yet few involved a starburst strong enough to produce an E$+$A signature. The preferred environment of RQEs (90 per cent reside at the centers of $<3\times 10^{12}\,h^{-1}{\rm M}_{\sun}$ groups) agrees well with the ‘small group scale’ predicted for maximally efficient spiral merging on to their halo center and rules out satellite-specific quenching processes. The high incidence of Seyfert and LINER activity in RQEs and their plausible descendants may heat the atmospheres of small host halos sufficiently to maintain quenching.'
date: '[Draft: ]{} '
title: A new population of recently quenched elliptical galaxies in the SDSS
---
\[firstpage\]
galaxies: elliptical and lenticular, cD — galaxies: evolution — galaxies: formation — galaxies: star formation
Introduction
============
Documenting the assembly history of massive elliptical galaxies in detail remains an elusive problem. A central aspect of galaxy evolution over a significant portion of cosmic time has been the build up of quiescent (i.e., non-star-forming and red) galaxies [@bell04b; @brown07; @faber07; @brammer11]. The growth of the red galaxy population has occurred largely above the characteristic mass limit of ${\rm M}_{{\rm gal,}{\star}}\geq 3\times 10^{10}~{\rm M}_{\sun}$ that broadly divides galaxies into the blue-cloud of late-type (disk-dominated) systems and the red-sequence of early-type (elliptical, S0, and bulge-dominated spiral) galaxies, which have been well-documented at $z\sim0$ [@strateva01; @blanton03d; @kauffmann03b; @baldry04]. With the advent of better and larger surveys of distant galaxies, the bimodality in color [@willmer06; @brammer09; @whitaker11; @muzzin13b] and structure [i.e., the high early-type fraction on the red sequence – @bell04a; @bell12; @wuyts11b; @cheung12] is found at all redshifts out to $z\sim3$. A general consensus has emerged in the literature to explain galaxy bimodality and its evolution, whereby the low to moderate-mass red sequence is fed by migrating blue-cloud galaxies that experience star formation (SF) quenching, and the assembly of the most massive galaxies occurs by dissipationless (so-called ‘dry’) merging of pre-existing red systems [see Fig.10 in @faber07 for an illustrative schematic diagram of the blue-to-red migration scenario]. The evidence for the role of merging in the formation of $> 10^{11}~{\rm M}_{\sun}$ galaxies is convincing [@bell06a; @white07; @mcintosh08; @skelton09; @vanderwel09b; @robaina10]. What remains difficult to constrain in this model are the variety of physical processes at play that are needed to [*both*]{} govern SF [*and*]{} alter structure to maintain the high fraction of red early-type galaxies and the bimodality in galaxy properties at masses below $10^{11}~{\rm M}_{\sun}$. Cosmological simulations [e.g., @oser10] make it clear that galaxies experience multiple processes over their lifetimes. To gain further insights into galaxy evolution, the work to be done is to disentangle the complex interplay of processes and identify which dominate under different physical conditions, and as a function of cosmic time.
A host of physical processes are predicted to quench SF by either removing, heating, or cutting off the cold gas supply necessary to fuel new star production. Energy released from accretion on to the central massive black hole can produce dynamic-mode AGN feedback in the form of powerful gas outflows [@granato04] typically associated with gas-rich mergers [@dimatteo05; @springel05a; @hopkins06b], or ’radio-mode’ heating of the interstellar medium in galaxies [@sazonov05; @hopkins10a] or of the intracluster medium (ICM) in groups and clusters [@cattaneo06; @croton06a; @sijacki06]. During starbursts, energetic feedback from supernovae and stellar mass loss can also provide thermal heating, strong winds and galactic outflows [@springel03a; @martin05; @cox06b; @tremonti07; @diamondstanic12b]. Stellar or supernova (SN) feedback is argued to dominate in low-mass galaxies [e.g., $<10^{10}~{\rm M}_{\sun}$ @kaviraj07d], while AGN feedback is predicted to dominate at higher masses [@kaviraj07d; @dimatteo08b]. Gas exhaustion and shock heating from major gas-rich mergers [*without*]{} AGN or SN feedback are predicted to at least temporarily quench SF [@hopkins08b], while other dynamical mechanisms may reduce the efficiency of SF including secular bar-driven quenching [e.g., @masters11a], and morphological quenching in which a large spheroidal bulge can stabilize the gas disk against fragmentation [@crocker12; @martig13]. Additionally, the atmosphere of a galaxy’s dark matter halo can impede SF in a number of ways. Virial shock heating of the halo gas creates conditions for efficient shutdown of the hot halo gas by feedback mechanisms [e.g., @keres05; @dekel06a]. Cosmological and hydrodynamical simulations show that modest-sized halos ($\geq 10^{11}-10^{12}\,{\rm M}_{\sun}$) can become dominated by hot gas [@birnboim03; @keres05]. Once hot, radio-mode AGN heating [or gravitational heating for larger halos, @dekel08] can maintain halo quenching of both central and satellite galaxies [@gabor12]. New cold-gas accretion on to the centers of small group or galaxy size hot halos would require additional energetic feedback to quench subsequent central SF [@keres12]. Besides preventing gas cooling, the parent halo atmosphere can quench orbiting satellites by either tidally stripping their hot-halo gas resulting in a so-called ‘strangulation’ of future SF after the existing cold fuel is consumed [@larson80; @balogh00a; @bekki02c; @kawata08], or rapid ($\sim 10^7$years) ram-pressure stripping of the cold gas reservoir producing a fast truncation of SF [@spitzer51; @gunn72; @abadi99; @fujita99; @quilis00]. Observational results support strangulation as the dominant quenching mechanism for the bulk of low-redshift satellite galaxies [@vandenbosch08] including those in the outskirts of galaxy clusters [@lewis02].
As with quenching, many physical processes are cited to transform late-type disks into the variety of observed early-type galaxy (ETG) morphologies. Foremost, the hierarchical assembly of dark matter halos [@white78a] drives galaxy merging and the formation of the spheroidal components of galaxies [@kauffmann93; @baugh96; @cole00]. Numerical simulations have long shown that the violent merging of comparable mass disk galaxies (major merging) disrupts the rotational stellar orbits and produces remnants with spheroidal light profiles thereby giving rise to the “merger hypothesis” for the formation of elliptical galaxies [@toomre72; @toomre77]. Soon thereafter, modellers realized the need for progenitor bulge components [@hernquist93d] and the dissipative effects of gas [@barnes92a; @hernquist93h] to produce remnants that were reasonable matches to ellipticals; i.e., pure spheroid galaxies. As merger simulations have become more sophisticated, the specific details of the progenitor mass ratios [@naab99; @naab03; @cox08a], and gas fractions [@cox06a; @naab06d] are now understood to critically shape the kinematic and photometric structure of merger remnants, providing realistic merger scenarios for the formation of both elliptical galaxy families found in nature [@kormendy96; @emsellem07]: low-mass disky fast rotators
---
address: |
Department of Physics, The Ohio State University,\
174 W. 18th Ave., Columbus, OH 43210, USA\
E-mail: raby@pacific.mps.ohio-state.edu
author:
- 'Stuart Raby[^1]'
title: 'Gauge Coupling Unification and Neutrino Masses in 5D SUSY SO(10)'
---
Gauge coupling unification in 5D
================================
We consider gauge coupling unification in $SO(10)$ in five dimensions [@Kim:2002im]. In particular we discuss hybrid gauge symmetry breaking with both orbifold and Higgs vevs on the brane. We calculate the GUT scale threshold corrections to gauge coupling unification. We then show that the compactification scale $M_c \approx 10^{14} \; {\rm GeV}$ and the cutoff scale $M_* \approx 10^{17} \; {\rm GeV}$ are fixed by the low energy data. Finally we consider neutrino masses and determine the see-saw scale that sets the light neutrino masses [@Kim:2003vr]. Let us first define some notation.
Charge quantization & Family structure {#subsec:chquant}
--------------------------------------
The Pati–Salam gauge symmetry $SU(4)_c \times SU(2)_L \times
SU(2)_R$ unifies quarks and leptons of one family into two irreducible representations given by $$\bf \psi \;\; = \;\; (4, 2, 1) \;\; = \;\; \{ Q = \left(\begin{array}{c}
u \\ d \end{array} \right), \;\;\; L = \left(\begin{array}{c}
\nu \\ e
\end{array}\right) \}$$ and $$\bf \psi^c \;\; = \;\; (\bar
4, 1, \bar 2) \;\; = \;\; \{ Q^c = \left(\begin{array}{c} u^c
\\ d^c
\end{array} \right), \;\;\;
L^c = \left(\begin{array}{c} \nu^c \\ e^c \end{array}\right) \} .$$ The two Higgs doublets of the minimal supersymmetric standard model are contained in one irreducible representation $$\hspace{.5in} {\cal H} \;\; = \;\; (1, \bar 2, 2) \;\; =\;\; \{ H_u ,\;\; H_d \} .$$ Hence Pati-Salam naturally describes the family structure of the standard model. Moreover since there are no U(1) symmetries, charge quantization is enforced. There are however three independent gauge couplings \[two if one also demands parity\] and thus no prediction for gauge coupling unification.
The gauge group SO(10) then unifies quarks and leptons into one irreducible spinor representation $$\hspace{.5in} \bf \psi \;\; + \;\; \bf \psi^c \;\;\; \subset \;\; {\bf 16}.$$ With the addition of Higgs triplets, the Higgs doublets are contained in the defining representation $$\hspace{.5in} {\bf {\cal H}} \;\; + \;\; (6, 1,
1) \ ( = \{ T, \;\; \bar T \} )
\;\; \subset \;\; {\bf 10_H} .$$ Of course, SO(10) also predicts gauge coupling unification.
Finally both symmetry groups, Pati-Salam and SO(10), lead naturally to Yukawa unification for the third generation with $$\hspace{.5in} \lambda \ 16_3 \ 10_H \ 16_3 \ \supset \ \lambda \ \bf
\psi_3 \; {\cal H} \; \psi_3^c$$ and a single Yukawa coupling $\lambda$. Given the above brief review, let us consider the virtues and problems of four dimensional SUSY GUTs.
Virtues and problems of 4D SUSY GUTs
------------------------------------
Four dimensional SUSY GUTs have the following virtues.
- Charge quantization — [*No U(1) factors*]{}
- Family structure — [*Quarks and leptons are in the smallest chiral (i.e. non vector-like) representations*]{}
- Neutrino Mass \[ $\nu \ m \ \nu^c \ + \ \frac{1}{2} \ \nu^c
\ M \ \nu^c $\] with $m = \lambda \langle H_u \rangle$ and $M$ \[ = See–Saw scale\] $\sim M_{GUT}$, we obtain $\; m_\nu \; = \;
m^2/M$ — [*A right-handed neutrino $\nu_R = (\nu^c)^*$ is required in either PS or SO(10)*]{}
- Gauge coupling unification — [*Fits the low energy data*]{}
- Yukawa coupling unification — [*This is a prediction of minimal PS and SO(10)*]{}
- Dark matter candidate — [*With a conserved R-parity, the LSP (typically the lightest neutralino ($\tilde \chi^0_1$)) is stable*]{}
4D SUSY GUTs have the following problems.
- Gauge symmetry breaking requires a complicated symmetry breaking sector.
- Higgs doublet – triplet splitting can be accommodated but is not required by the theory.
- A supersymmetric $\mu$ term, with a dimensionful parameter of order the electroweak scale, must be generated.
- Proton decay, due to dimension 5 operators, must be suppressed to satisfy the Super-Kamiokande bound - 1/$\Gamma(p
\rightarrow K^+ \ \bar \nu) > 1.9 \times 10^{32} \; {\rm yrs}$.
- In order to obtain Majorana neutrino masses consistent with atmospheric neutrino oscillations with $\Delta m^2_{atm} \sim 3 \times 10^{-3} \; {\rm eV}^2$, one needs a see-saw scale $M \sim 10^{-2} \ M_{GUT} \ll M_{GUT}$ for the tau neutrino (assuming the light neutrino spectrum is hierarchical); see the numerical check after this list.
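The required see-saw scale in this last point follows from $m_\nu = m^2/M$; a quick numerical check (assuming a top-like Dirac mass $m \simeq 174$ GeV and $M_{GUT} \simeq 3\times 10^{16}$ GeV, both assumptions of this sketch) reproduces $M \sim 10^{-2}\, M_{GUT}$:

``` python
# See-saw check: m_nu = m^2 / M, with m_nu ~ sqrt(Delta m^2_atm) for a
# hierarchical spectrum. Input values are assumptions of this sketch.
import math

m_dirac = 174e9              # Dirac mass in eV (top-like, assumed)
dm2_atm = 3e-3               # eV^2, from the text
m_nu = math.sqrt(dm2_atm)    # ~0.055 eV
M = m_dirac**2 / m_nu        # see-saw scale in eV
M_GUT = 3e16 * 1e9           # eV (assumed GUT scale)

print(f"M = {M / 1e9:.1e} GeV, M / M_GUT = {M / M_GUT:.1e}")  # ~2e-2
```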
We now consider SUSY SO(10) on an orbifold in 5D and show how some of these problems can be resolved, while at the same time retaining the virtues of 4D SUSY GUTs.
SUSY SO(10) on ${\cal M}_4 \ \times \ S_1/(Z_2 \times
Z_2^\prime)$
-----------------------------------------------------
The 5D orbifold is a line segment \[0, $\pi R/2$\] in the fifth dimension $y$ defined in terms of a $Z_2 \times Z_2^\prime$ orbifolding of the circle with radius $R$. The first $Z_2$ breaks the effective 4D N=2 SUSY to N=1 SUSY, while the second breaks SO(10) to Pati-Salam \[PS\].[^2] Hence the brane at $y = 0$ has the full SO(10) symmetry, while the brane at $y = \pi R/2$ has only the PS symmetry. The 5D bulk fields (given in terms of 4D superfields) include the gauge sector \[$V, \Phi$\] in the adjoint representation and the Higgs hypermultiplet \[$10_H, 10_H^c$\] in the defining representation. In a standard notation we then have the $Z_2 \times Z_2^\prime$ eigenstates $V_{+ +}, \ \Phi_{- -} \;\; \subset$ PS; $\;\; V_{+
-}, \ \Phi_{- +} \;\; \subset$ SO(10)$/$PS; $\;\; {\cal H}_{+ +},
\ {\cal H}^c_{- -} \;\; \subset \;\; (1,\bar 2, 2) \;$ and $ \;
T_{+ -}, \ \bar T_{+ -}, \ T^c_{- +}, \ \bar T^c_{- +} \;\;
\subset \;\; (6, 1, 1)$. [*Thus only $V_{+ +}$ (the PS gauge sector) and ${\cal H}_{+ +}$ (the Higgs doublets) contain zero modes.*]{} We then assume that the fields $\langle \chi^c \rangle =
\langle \bar \chi^c \rangle$, located on the PS brane, develop a vev of order $ \sim M_*$ \[ = cutoff scale\]; spontaneously breaking PS to the standard model gauge group. The three families of quarks and leptons live either on the PS or SO(10) brane or in the bulk. They come in complete families either under SO(10) or PS. With this construction our 5D theory has the properties described in the following theoretical score card. \[$\surd$, means it is a property of the construction, while [**?**]{}, will be discussed further in this talk.\]
5D SO(10) — Theoretical Score Card
----------------------------------
- Charge quantization & Family structure — $\surd$
- Gauge coupling unification — [**?**]{}
- Yukawa coupling unification for the third generation — $\surd$
- R parity $\Longrightarrow$ dark matter candidate — $\surd$
---
abstract: 'Dipolar Bose and Fermi gases, which are currently being studied extensively experimentally and theoretically, interact through anisotropic, long-range potentials. Here, we replace the long-range potential by a zero-range pseudo-potential that simplifies the theoretical treatment of two dipolar particles in a harmonic trap. Our zero-range pseudo-potential description reproduces the energy spectrum of two dipoles interacting through a shape-dependent potential under external confinement very well, provided that sufficiently many partial waves are included, and readily leads to a classification scheme of the energy spectrum in terms of approximate angular momentum quantum numbers. The results may be directly relevant to the physics of dipolar gases loaded into optical lattices.'
author:
- 'K. Kanjilal'
- 'John L. Bohn'
- 'D. Blume'
title: 'Pseudo-potential treatment of two aligned dipoles under external harmonic confinement'
---
Introduction
============
Many-body systems with dipolar interactions have attracted a lot of attention recently. Unlike the properties of ultracold atomic alkali vapors, which can be described to a very good approximation by a single scattering quantity (the $s$-wave scattering length), those of dipolar gases additionally depend on the dipole moment. This dipole moment can be magnetic, as in the case of atomic Cr [@grie05; @stuh05], or electric, as in the case of heteronuclear molecules such as OH [@meer05; @boch04], KRb [@wang04a] or RbCs [@kerm04]. Furthermore, dipolar interactions are long-ranged and anisotropic, giving rise to a host of novel many-body effects in confined dipolar gases such as roton-like features [@dell03; @sant03; @rone06a] and rich stability diagrams [@sant00; @yi00; @gora00; @mart01; @yi01; @gora02; @rone06; @bort06]. The physics of dipolar gases loaded into optical lattices promises to be particularly rich. For example, this setup constitutes the starting point for a range of quantum computing schemes [@bren99; @jaks00; @demi02; @bren02]. Additionally, a variety of novel quantum phases have already been predicted to arise [@gora02a; @dams03; @barn06; @mich06]. Currently, a number of experimental groups are working towards loading dipolar gases into optical lattices.
This paper investigates the physics of doubly-occupied optical lattice sites in the regime where the tunneling between neighboring sites and the interactions with dipoles located in other lattice sites can be neglected. In this case, the problem reduces to treating the interactions between two dipoles in a single lattice site. Assuming that the lattice potential can be approximated by a harmonic potential, the center of mass motion separates and the problem reduces to solving the Schrödinger equation for the relative distance vector $\vec{r}$ between the two dipoles. The interaction between the two aligned dipoles is angle-dependent and falls off as $1/r^3$ at large interparticle distances. In this work, we replace the shape-dependent interaction potential by an angle-dependent zero-range pseudo-potential, which is designed to reproduce the scattering properties of the full shape-dependent interaction potential, and derive an implicit eigenequation for two interacting identical bosonic dipoles and two interacting identical fermionic dipoles analytically.
Replacing the full interaction potential or a shape-dependent pseudo-potential by a zero-range pseudo-potential [@ferm34; @huan57; @busc98; @blum02; @bold02; @kanj04; @stoc04] often allows for an analytical description of ultracold two-body systems in terms of a few key physical quantities. Here we show that the eigenequation for appropriately chosen zero-range pseudo-potentials reproduces the energy spectrum of two dipoles under harmonic confinement interacting through a shape-dependent model potential; that the applied zero-range treatment readily leads to an approximate classification scheme of the energy spectrum in terms of angular momentum quantum numbers; and that the proposed pseudo-potential treatment breaks down when the characteristic length of the dipolar interaction becomes comparable to the characteristic length of the external confinement. The detailed understanding of two interacting dipoles obtained in this paper will guide optical lattice experiments and the search for novel many-body effects.
Section \[sec\_pp\] introduces the Hamiltonian under study and discusses the anisotropic zero-range pseudo-potential that is used to describe the scattering between two interacting dipoles. In Sec. \[sec\_ho\], we derive an implicit eigen equation for two dipoles under external spherical harmonic confinement interacting through the zero-range pseudo-potential and show that the resulting eigenenergies agree well with those obtained for a shape-dependent model potential. Finally, Sec. \[sec\_conclusion\] concludes.
System under study and anisotropic pseudo-potential {#sec_pp}
===================================================
Within the mean-field Gross-Pitaevskii formalism, the interaction between two identical bosonic dipoles, aligned along the space-fixed $\hat{z}$-axis by an external field, has been successfully modeled by the pseudo-potential $V_{pp}(\vec{r})$ [@yi00], $$\begin{aligned}
\label{eq_dipole}
V_{pp}(\vec{r})=
\frac{2 \pi \hbar^2}{\mu} a_{00} \delta(\vec{r})+
d^2 \frac{1-3 \cos^2 \theta}{r^3}.\end{aligned}$$ Here, $\mu$ denotes the reduced mass of the two-dipole system, $d$ the dipole moment, and $\theta$ the angle between $\hat{z}$ and the relative distance vector $\vec{r}$. The $s$-wave scattering length $a_{00}$ depends on both the short- and long-range parts of the true interaction potential. The second term on the right hand side of Eq. (\[eq\_dipole\]) couples angular momentum states with $l=l'$ ($l >0$) and $|l - l'| = 2$ (any $l,l'$). For identical fermions, $s$-wave scattering is absent and the interaction is described, assuming the long-range dipole-dipole interaction is dominant, by the second term on the right hand side of Eq. (\[eq\_dipole\]).
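The selection rules quoted above can be made explicit symbolically: since $3\cos^2\theta - 1 = \sqrt{16\pi/5}\, Y_2^0$, the matrix elements $\langle l' m|1-3\cos^2\theta|l m\rangle$ reduce to Gaunt integrals over three spherical harmonics. A small sketch (assuming SymPy's `gaunt` routine) tabulates them:

``` python
# <l' m | 1 - 3 cos^2(theta) | l m> via Gaunt integrals; nonzero only for
# l' = l (with l > 0) or |l - l'| = 2, as stated in the text.
from sympy import pi, simplify, sqrt
from sympy.physics.wigner import gaunt

def coupling(lp, l, m):
    # Y*_{l'm} = (-1)^m Y_{l',-m}, and 1 - 3 cos^2(theta) = -sqrt(16 pi/5) Y_2^0
    return simplify(-sqrt(16 * pi / 5) * (-1) ** m * gaunt(lp, 2, l, -m, 0, m))

for lp in range(4):
    print(lp, [coupling(lp, l, 0) for l in range(4)])
```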
Our goal in this paper is to determine the eigenequation of two identical bosonic dipoles and two identical fermionic dipoles under external spherically harmonic confinement with angular trapping frequency $\omega$ analytically. The Schrödinger equation for the relative position vector $\vec{r}$ reads $$\begin{aligned}
\label{eq_se}
[H_0 + V_{int}(\vec{r}) ] \psi(\vec{r}) = E \psi (\vec{r}),\end{aligned}$$ where the Hamiltonian $H_0$ of the non-interacting harmonic oscillator is given by $$\begin{aligned}
\label{eq_ham}
H_0 = -\frac{\hbar^2}{2 \mu} \nabla^2 _{\vec{r}}
+\frac{1}{2} \mu \omega^2 r^2.\end{aligned}$$ In Eq. (\[eq\_se\]), $V_{int}(\vec{r})$ denotes the interaction potential. The pseudo-potential $V_{pp}(\vec{r})$ cannot be used directly in Eq. (\[eq\_se\]) since both parts of the pseudo-potential lead to divergencies. The divergence of the $\delta$-function potential arises from the singular $1/r$ behavior at small $r$ of the spherical Neumann function $n_0(r)$, and can be cured by introducing the regularization operator $\frac{\partial}{\partial r} r$ [@huan57]. Curing the divergence of the long-ranged $1/r^3$ term of $V_{pp}$ is more involved, since it couples an infinite number of angular momentum states, each of which gives rise to a singularity in the $r \rightarrow 0$ limit. The nature of each of these singularities depends on the quantum numbers $l$ and $l'$ coupled by the pseudo-potential, and hence has to be cured separately for each $l$ and $l'$ combination.
In this work, we follow Derevianko [@dere03; @dere05] and cure the divergencies by replacing $V_{pp}(\vec{r})$ with a regularized zero-range potential $V_{pp,reg}(\vec{r})$, which contains [*[infinitely]{}*]{} many terms, $$\begin{aligned}
\label{eq_ppreg}
V_{pp,reg}(\vec{r}) =
\sum_{ll'} V_{ll'}(\vec{r}).\end{aligned}$$ The sum in Eq. (\[eq\_ppreg\]) runs over $l$ and $l'$ even for identical bosons, and over $l$ and $l'$ odd for identical fermions. For $l \ne l'$, $V_{ll'}$ and $V_{l'l}$ are different and both terms have to be included in the sum. In Sec. \[sec\_ho\], we apply the pseudo-potential to systems under spherically symmetric external confinement. For these systems, the projection quantum number $m$ is a good quantum number, i.e., the energy spectrum for two interacting dipoles under spherically symmetric confinement can be solved separately for each allowed $m$ value.
Absorbing phase transitions (APT) are a category of critical nonequilibrium phase transitions, widespread in condensed matter physics and population and epidemics modeling [@reviews]. Directed percolation (DP) [@reviews; @kinzel83] has been recognized as the paradigmatic example of a system exhibiting a transition from an active to a unique absorbing phase. DP defines a precise universality class (theoretically described by the Reggeon field theory [@grassberger79; @rft]) which has proven to be very robust with respect to the introduction of microscopic modifications. The Reggeon field theory is at the heart of a strong claim of universality, summarized in the following conjecture [@grassberger82]: [*Continuous absorbing phase transitions to a unique absorbing state fall generically in the universality class of directed percolation*]{}. This conjecture is expected to hold for models with short range interactions that, most importantly, do not possess additional symmetries.
Many examples of APT subject to extra symmetries, and thus out of the DP class, have been identified in recent years. Among them we find systems with symmetric absorbing states [@cardytauber96], models of epidemics with perfect immunization (the so-called dynamic percolation class) [@dynperc], and systems with an infinite number of absorbing states [@many]. Very recently, it has been pointed out that the critical point of self-organized critical (SOC) [@jenssen98; @mannamodel] sandpile models can also be interpreted as a continuous phase transition with many absorbing states [@fes; @bigfes]. What distinguishes sandpile models from other models with absorbing states is that the control parameter, represented by the global density of particles, is a conserved quantity.
Given the large class of systems whose dynamics involves conserved fields, it becomes particularly interesting to explore in general the effect of conservation rules in APT. With this purpose in mind, in this Letter we report the critical behavior of several models showing absorbing transitions that strictly conserve the number of particles or energy. In particular, we introduce a conserved lattice gas (CLG) [@noteje] with short range stochastic microscopic dynamical rules, that undergoes a continuous phase transition to an absorbing state at a critical value of the particle density. We present extensive numerical simulations in $d=2$ of the stationary and spreading properties of the model, and determine the full set of critical exponents. In order to prove definitively the existence of a well-defined universality class we have also performed simulations of a conserved threshold transfer process (CTTP) [@mendes94], and several fixed energy sandpile models with stochastic rules [@mannamodel; @fes; @bigfes]. All models provide critical exponents compatible with a single and broad universality class that embraces all APT in stochastic models with a conserved field. This evidence leads us to conjecture that, in the absence of additional symmetries, [ *absorbing phase transitions in stochastic models with infinite absorbing states and activity coupled to a static conserved field define a unique and per se universality class*]{} [@notewij]. This result is relevant to the understanding of several reaction-diffusion systems, sandpile models and activated processes that could share the same theoretical description.
The CLG model is defined on a $d$-dimensional square lattice. Each site $i$ is assigned a binary variable $n_i$ that assumes the values $n_i=1$ if the site is occupied by a particle or $n_i=0$ if the site is empty. Double occupancy is strictly forbidden. Nearest-neighbor particles repel each other via repulsive short range interactions. As a result of this interaction, at each time step particles with nearest neighbors jump into one of their empty nearest neighbor sites, selected at random. The only dynamics in the model is due to these [*active*]{} particles; isolated particles do not move. The dynamics can be implemented with either sequential or parallel updating. In the latter case, an exclusion principle is applied so that two particles never attempt to move into the same site. We impose periodic boundary conditions, and since the dynamics admits neither input nor loss, the total number of particles $N=\sum_i n_i(t)$ is a conserved quantity. It is clear that the model allows an infinite number (in the thermodynamic limit) of absorbing configurations, in which there are no nearest-neighbor particles. A minimal simulation sketch of these rules is given below.
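A minimal Monte Carlo sketch of these rules (random sequential updates within a sweep, a small lattice, and an arbitrary seed; a simplification of the updating schemes described above):

``` python
# Conserved lattice gas in d = 2: occupied sites with an occupied nearest
# neighbor are active and hop to a random empty nearest neighbor site.
import numpy as np

rng = np.random.default_rng(0)
L, n = 64, 0.25                         # lattice size and particle density
lat = np.zeros((L, L), dtype=np.int8)
lat.ravel()[rng.choice(L * L, size=int(n * L * L), replace=False)] = 1

def active_sites(lat):
    """Occupied sites with at least one occupied nearest neighbor."""
    nbrs = (np.roll(lat, 1, 0) + np.roll(lat, -1, 0)
            + np.roll(lat, 1, 1) + np.roll(lat, -1, 1))
    return np.argwhere((lat == 1) & (nbrs > 0))

rho_a = 0.0
for sweep in range(200):
    act = active_sites(lat)
    rho_a = len(act) / (L * L)          # order parameter: density of active sites
    if rho_a == 0.0:                    # absorbing configuration reached
        break
    for x, y in act[rng.permutation(len(act))]:
        nbrs = [((x + dx) % L, (y + dy) % L)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        if not any(lat[i, j] for i, j in nbrs):
            continue                    # activity lost earlier in this sweep
        empty = [(i, j) for i, j in nbrs if lat[i, j] == 0]
        if empty:                       # hop to a random empty neighbor
            i, j = empty[rng.integers(len(empty))]
            lat[x, y], lat[i, j] = 0, 1
print(sweep, rho_a)
```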
In the CLG model, the constant particle density $n=N/L^d$ acts as a tuning parameter. Initial conditions are generated by placing $nL^d$ particles at random in the lattice, generating a homogeneous and uncorrelated distribution. For small densities, the system will very likely fall into an absorbing configuration with only isolated particles. For large densities, the system reaches a stationary active state with everlasting activity (this is trivially the case for $n>1/2$). We shall see in the following that as we vary $n$, the CLG model exhibits a continuous transition separating an absorbing phase from an active phase. The phase transition occurs for a nontrivial density $n_c$ ($<1/2$). APT are characterized by the order parameter $\rho_a$ measuring the density of dynamical entities, in our case the density of nearest neighbor particles. The order parameter is null for $n<n_c$, and follows a power law $\rho_a\sim (n-n_c)^\beta$ for $n>n_c$. The system correlation length $\xi$ and time $\tau$ both diverge as $n\to n_c^+$. In the critical region the system is characterized by power law behavior, namely $\xi\sim
(n-n_c)^{-\nu_\perp}$ and $\tau\sim (n-n_c)^{-\nu_\parallel}$. The dynamical critical exponent is defined as $\tau\sim\xi^z$, with $z=\nu_\parallel/\nu_\perp$. These exponents fully define the critical behavior of the stationary state of the model.
In order to study the critical point of the CLG model, we performed numerical simulations in $d=2$ for systems with size ranging from $L=64$ to $L=512$, averaging over $10^4-10^5$ independent initial configurations. Very close to the critical point we have $\xi\gg L$, so that the actual characteristic length of the system is the lattice size $L$. Because of its finite size, the system will sometimes enter an absorbing configuration even for values of $n$ in the supercritical region. It is then convenient to average over a set of independent trials and calculate the quasi-stationary properties in the active phase from a restricted average over surviving trials, i.e., those with nonzero final activity.
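The restricted average can be coded directly; a brief sketch (the data layout is a hypothetical choice of ours):

```python
import numpy as np

def surviving_average(activity):
    """Quasi-stationary average over surviving trials.

    `activity` is a hypothetical (n_trials, n_steps) array of activity
    densities; a trial that has fallen into an absorbing configuration
    has activity 0 there and is excluded from the average at that time.
    """
    activity = np.asarray(activity, dtype=float)
    surviving = activity > 0
    n_surv = surviving.sum(axis=0)
    return np.where(n_surv > 0,
                    activity.sum(axis=0) / np.maximum(n_surv, 1), 0.0)
```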
As shown in Fig. \[fig:steady1\], after a transient which depends on the system size $L$ and $\Delta n\equiv n-n_c$, the average over surviving samples of the density of active sites reaches a stationary value $\rho_a(L,\Delta n)$. Close to the critical point, the finite-size scaling ansatz tells us that all quantities depend on the system size through the ratio $L/\xi$, and the order parameter follows the finite-size scaling form [@jensen93b] $$\rho_a (\Delta n,L) = L^{-\beta/\nu_{\perp}} {\cal G} (L^{1/\nu_{\perp}} \Delta n) \;,
\label{fss}$$ where ${\cal G}$ is a scaling function with ${\cal G}(x) \sim x^{\beta}$ for large $x$. For $\Delta n=0$ the stationary density follows the pure power law behavior $\rho_a\sim L^{-\beta/\nu_{\perp}}$. On the other hand, for values of $n$ in the supercritical regime $\rho_a$ should be independent of $L$ for $L\gg\xi$, while in the subcritical regime $\rho_a$ should decay faster than a power law. This allows us to locate the critical value $n_c$ of the particle density as the only value of $n$ at which we recover a nontrivial power law scaling for the density of active sites. In Fig. \[fig:steady1\] we observe power law scaling for $n=0.23875$, but clearly not for $0.2387$ or $0.2388$, indicating that $n_c=0.23875(5)$ (figures in parentheses indicate the statistical uncertainty in the last digit). From the power law decay we find the exponent ratio $\beta/\nu_\perp=0.81(3)$. An independent estimate of the exponent $\beta$ can be obtained by looking at the scaling of the active-site density with respect to $\Delta n$ for the size $L=320$. The resulting power law behavior yields $\beta=0.63(1)$, where the error is mainly due to the uncertainty in the critical point $n_c$. A consistency test can be performed by considering the active-site density away from the critical point. In Fig. \[fig:steady2\] we plot $\rho_a(\Delta n, L)L^{\beta/\nu_{\perp}}$ versus $\Delta n L^{1/\nu_{\perp}}$ for $\nu_{\perp}=0.78$, $\beta/\nu_{\perp}=0.81$ and $n_c=0.23875$. As one would expect, all the data collapse onto a single curve, following the scaling form Eq. (\[fss\]). A further check is provided by direct fitting of the large-$x$ behavior of the scaling function ${\cal G}(x)$, which gives $\beta=0.63$, recovering the independent measurement at $L=320$.
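A data-collapse check along these lines is a few lines of code; a minimal sketch using the exponents quoted above (input arrays hypothetical):

```python
import numpy as np

NU_PERP, BETA_OVER_NU, N_C = 0.78, 0.81, 0.23875   # values from the text

def collapse(L, n, rho_a):
    """Rescale stationary data according to Eq. (fss): for the correct
    exponents, points for all sizes L fall onto a single curve G(x)."""
    x = (np.asarray(n) - N_C) * L ** (1.0 / NU_PERP)
    y = np.asarray(rho_a) * L ** BETA_OVER_NU
    return x, y
```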
To determine the dynamical exponents we turn our attention to the scaling properties of time dependent quantities. In particular, we
---
abstract: 'Consider the exponential Carmichael function $\lae$ such that $\lae$ is multiplicative and $\lae(p^a) = \lambda(a)$, where $\lambda$ is the usual Carmichael function. We discuss the value of $\sum \lae(n)$, where $n$ runs over certain subsets of $[1,x]$, and provide bounds on the error term, using analytic methods and especially estimates of $\int_1^T \bigl| \zeta(\sigma+it) \bigr|^m dt$.'
address: 'I. I. Mechnikov Odessa National University'
author:
- 'Andrew V. Lelechenko'
bibliography:
- 'taue.bib'
title: Exponential Carmichael function
---
Introduction
============
Consider an operator $E$ over arithmetic functions such that for every $f$ the function $Ef$ is multiplicative and $$(Ef)(p^a) = f(a), \qquad p \text{~is prime}.$$
For various functions $f$ (such as the divisor function, the sum-of-divisor function, Möbius function, the totient function and so on) the behaviour of $Ef$ was studied by many authors, starting from Subbarao [@subbarao1972]. The bibliography can be found in [@bibliography].
The notation for $Ef$, established by previous authors, is $f^{(e)}$.
Carmichael function $\lambda$ is an arithmetic function such that $$\lambda(p^a) = \begin{cases}
\phi(p^a), & p>2 \text{~or~} a=1,2, \\
\phi(p^a)/2, & p=2 \text{~and~} a>2,
\end{cases}$$ and if $n=p_1^{a_1} \cdots p_m^{a_m}$ is a canonical representation, then $$\lambda(n) = \lcm\bigl( \lambda(p_1^{a_1}), \ldots, \lambda(p_m^{a_m}) \bigr).$$
This function was introduced at the beginning of the twentieth century in [@carmichael1909], but intense studies started only in the 1990s, e.g., [@erdos1991]. The Carmichael function finds applications in cryptography, e.g., [@friedlander1999].
Consider also the family of multiplicative functions $$\delta_r(p^a) = \begin{cases}
0, & a<r, \\
1, & a\ge r,
\end{cases}
\qquad r \text{~is integer.}$$
The function $\delta_2$ is the characteristic function of the set of square-full numbers, $\delta_3$ of the set of cube-full numbers, and so on. Of course, $\delta_1 \equiv 1$.
Denote by $\lae_r$ the product of $\delta_r$ and $\lae$: $$\lae_r(n) = \delta_r(n) \lae(n).$$
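For concreteness, these definitions can be evaluated directly from the factorization of $n$; a brief sketch (using `sympy.factorint` for factorization; the function names are ours):

```python
from math import lcm
from sympy import factorint

def carmichael(n):
    """Usual Carmichael function lambda(n), straight from the definition."""
    vals = []
    for p, a in factorint(n).items():
        phi = (p - 1) * p ** (a - 1)          # phi(p^a)
        if p == 2 and a > 2:
            phi //= 2                         # lambda(2^a) = phi(2^a)/2
        vals.append(phi)
    return lcm(*vals) if vals else 1

def lambda_e(n, r=1):
    """lambda^(e)_r(n) = delta_r(n) * lambda^(e)(n), with lambda^(e)
    multiplicative and lambda^(e)(p^a) = lambda(a)."""
    result = 1
    for _p, a in factorint(n).items():
        if a < r:                             # delta_r(n) = 0: n not r-full
            return 0
        result *= carmichael(a)
    return result
```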
The aim of our paper is to study asymptotic properties of $\lae \equiv \lae_1$, $\lae_2$, $\lae_3$ and $\lae_4$.
Note that all proofs below remain valid for $\phie_r(n) = \delta_r(n) \phie(n)$ in place of $\lae_r(n)$ for $r=1,2,3,4$.
Notations
=========
The letter $p$, with or without indices, denotes a prime number.
We write $f\star g$ for Dirichlet convolution $$(f \star g)(n) = \sum_{d|n} f(d) g(n/d).$$
Denote $$\tau(a_1,\ldots,a_k; n) := \sum_{d_1^{a_1}\cdots d_k^{a_k} = n} 1.$$
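Both objects admit brute-force implementations that are handy for sanity checks (a sketch; names are ours):

```python
def dirichlet(f, g, n):
    """Dirichlet convolution (f * g)(n) = sum_{d | n} f(d) g(n / d)."""
    return sum(f(d) * g(n // d) for d in range(1, n + 1) if n % d == 0)

def tau_multi(exponents, n):
    """tau(a_1, ..., a_k; n): number of representations
    n = d_1^{a_1} * ... * d_k^{a_k} in positive integers d_i."""
    if not exponents:
        return 1 if n == 1 else 0
    a, rest = exponents[0], exponents[1:]
    count, d = 0, 1
    while d ** a <= n:
        if n % (d ** a) == 0:
            count += tau_multi(rest, n // d ** a)
        d += 1
    return count
```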
In asymptotic relations we use $\sim$, $\asymp$, the Landau symbols $O$ and $o$, and the Vinogradov symbols $\ll$ and $\gg$ in their usual meanings. All asymptotic relations are given as the argument (usually $x$) tends to infinity.
Everywhere $\eps>0$ is an arbitrarily small number (not always the same even in one equation).
As usual, $\zeta(s)$ is the Riemann zeta-function. The real and imaginary parts of the complex variable $s$ are denoted by $\sigma:=\Re s$ and $t:=\Im s$, so $s=\sigma+it$.
For a fixed $\sigma\in[1/2,1]$ define $$m(\sigma) := \sup\biggl\{
m \biggm|
\int_1^T \bigl| \zeta(\sigma+it) \bigr|^m dt \ll T^{1+\eps}
\biggr\}$$ and $$\mu(\sigma) := \limsup_{t\to\infty} {\log \bigl|\zeta(\sigma+it)\bigr| \over \log t}.$$
Below $H_{2005}=(32/205+\eps, 269/410+\eps)$ stands for Huxley’s exponent pair from [@huxley2005].
Preliminary lemmas
==================
\[l:rational-maximal-order\] Let $F\colon \Z\to\CC$ be a multiplicative function such that $F(p^a) = f(a)$, where $f(n) \ll n^\beta$ for some $\beta>0$. Then $$%\label{eq:rational-maximal-order}
\limsup_{n\to\infty} {\log F(n)\,\log\log n \over\log n} = \sup_{n\ge1} {\log f(n)\over n}.$$
See [@suryanarayana1975].
\[l:log-sum\] Let $f(t)\ge 0$. If $$\int_1^T f(t) \, dt \ll g(T),$$ where $g(T) = T^\alpha \log^\beta T$, $\alpha\ge 1$, then $$%\label{eq:log-summing}
I(T):= \int_1^T {f(t)\over t} dt \ll
\left\{ \begin{matrix}
\log^{\beta+1} T & \text{if } \alpha=1, \\
T^{\alpha-1} \log^{\beta} T & \text{if } \alpha>1.
\end{matrix} \right.$$
Let us divide the interval of integration into parts: $$I(T)
\le
\sum_{k=0}^{\log_2 T}
\int_{T/2^{k+1}}^{T/2^k} {f(t)\over t} dt
<
\sum_{k=0}^{\log_2 T} {1\over T/2^{k+1}}
\int_1^{T/2^k} f(t) dt
\ll
\sum_{k=0}^{\log_2 T} {g(T/2^{k})\over T/2^{k+1}}.$$ Now the lemma’s statement follows from elementary estimates.
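For completeness, those elementary estimates can be spelled out (our elaboration, not in the original): with $g(T)=T^{\alpha}\log^{\beta}T$ the last sum becomes $$\sum_{k=0}^{\log_2 T} {g(T/2^{k})\over T/2^{k+1}} = 2 \sum_{k=0}^{\log_2 T} \left({T\over 2^{k}}\right)^{\alpha-1} \log^{\beta}{T\over 2^{k}}.$$ For $\alpha>1$ the terms decay geometrically in $k$, so the $k=0$ term dominates and the sum is $\ll T^{\alpha-1}\log^{\beta}T$; for $\alpha=1$ it equals $2\sum_{k}(\log T-k\log 2)^{\beta}\ll \log^{\beta+1}T$.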
\[l:order-in-the-critical-strip\] For $\sigma\ge1/2$ and for any exponent pair $(k,l)$ such that $l-k \ge \sigma$ we have $$\mu(\sigma) \le {k+l-\sigma\over2} + \eps.$$
See [@ivic2003 (7.57)].
A well-known application of Lemma \[l:order-in-the-critical-strip\] is $$\label{eq:mu-1/2}
\mu(1/2) \le 32/205,$$ following from the choice $(k,l) = H_{2005}$. Another (maybe new) application is $$\label{eq:mu-3/5}
\mu(3/5) \le 1409 / 12170,$$ following from $$(k,l) = \left( {269\over 2434}, {1755\over2434} \right) = ABAH_{2005},$$ where $A$ and $B$ stands for usual $A$- and $B$-processes [@kratzel1988 Ch. 2].
\[l:phragmen\] Let $\eta>0$ be arbitrarily small. Then for growing $|t|\ge3$ $$\label{eq:convexity}
\zeta(s) \ll \begin{cases}
|t|^{1/2 - (1-2\mu(1/2))\sigma}, & \sigma\in[0, 1/2],
\\
|t|^{2\mu(1/2)(1-\sigma)} , & \sigma\in[1/2, 1-\eta], \\
|t|^{2\mu(1/2)(1-\sigma)} \log^{2/3} |t| , & \sigma\in[1-\eta, 1], \\
\log^{2/3} |t|, & \sigma\ge1.
\end{cases}$$ More exact estimates for $\sigma\in[1/2, 1-\eta]$ are also available,
---
abstract: 'We study the chaos decomposition of self-intersection local times and their regularization, with a particular view towards Varadhan’s renormalization for the planar Edwards model.'
author:
- |
**Jinky Bornales**\
[Physics Department, MSU-IIT, Iligan City, The Philippines]{}\
[jinky.bornales@g.msuiit.edu.ph]{}
- |
**Maria João Oliveira**\
[Universidade Aberta, P 1269-001 Lisbon, Portugal]{}\
[CMAF, University of Lisbon, P 1649-003 Lisbon, Portugal]{}\
[oliveira@cii.fc.ul.pt, mjoliveira@ciencias.ulisboa.pt (New)]{}
- |
**Ludwig Streit**\
[Forschungszentrum BiBoS, Bielefeld University, D 33501 Bielefeld, Germany]{}\
[CCM, University of Madeira, P 9000-390 Funchal, Portugal]{}\
[streit@physik.uni-bielefeld.de]{}
title: 'Chaos Decomposition and Gap Renormalization of Brownian Self-Intersection Local Times'
---
**Keywords:** Edwards model, self-intersection local time, Varadhan renormalization, white noise analysis
**Mathematics Subject Classifications (2010):** 28C20, 41A25, 60H40, 60J55, 60J65, 82D60
Introduction
============
The self-intersection local time of $d$-dimensional Brownian motion, informally, is given as $$L=\int_{0}^{T}dt_2\int_{0}^{t_2}dt_1\,\delta \left(\mathbf{B}(t_2)-\mathbf{B}(t_1)\right) . \label{L}$$We shall see that, while “reasonably well defined” for $d=1$, these local times become more and more singular as the dimension $d$ increases. Intersections have thus been the object of extensive study by authors such as Dvoretzky, Erdös, Kakutani [@Dv], [@Dv1], [@Dv2], Varadhan [@v], Westwater [@Westwater], [@Westwater2], [@Westwater3], Le Gall [@LeGall], [@LeGall2], Rosen [@R1], [@R2], [@R3], Dynkin [@Dy1], [@Dy2], [@Dynkin], Watanabe [@Wat], Yor [@Yor1], [@Yor2], Imkeller et al. [@Imkeller], Albeverio et al. [@AOS], [@AlbHu]. For fractional Brownian motion there are papers e.g. by Rosen [@R4], Hu & Nualart [@Hu1], Grothaus et al. [@GOSS].
Apart from its intrinsic mathematical interest, the self-intersection local time has played a role in constructive quantum field theory, and is a standard model in polymer physics for the self-repulsion (“excluded volume effect”) of chain polymers in solvents [@Schaefer].
Replacement of the Dirac delta function in (\[L\]) by a Gaussian $$\delta _{\varepsilon }(x):=\frac{1}{(2\pi \varepsilon )^{d/2}}e^{-\frac{|x|^{2}}{2\varepsilon }},\quad \varepsilon >0,$$ leads to regularized local times $$L_{\varepsilon }:=\int_{0}^{T}dt_2\ \int_{0}^{t_2}dt_1\,\delta _{\varepsilon
}(\mathbf{B}(t_2)-\mathbf{B}(t_1)) $$ and for $d=1$ one can show $L^{2}$ convergence w.r.t. white noise or Wiener measure space. But already for $d=2$ this fails since the expectation of $L_{\varepsilon }$ will diverge in the limit, asymptotically $$\mathbb{E}(L_{\varepsilon })\approx -\frac{T}{2\pi }\ln \varepsilon.$$ In this case it is sufficient to subtract the expectation, i.e. the centered regularized local time does have a well-defined $L^2$ limit: $$L_{\varepsilon ,c}:=L_{\varepsilon }-\mathbb{E}(L_{\varepsilon })\rightarrow
L_{c}.$$ Apart from the Gaussian regularization above, others have been considered to remove the singularity at $t_1=t_2$ in the integral (\[L\]). The “staircase regularization” avoids the line $t_1=t_2$, see e.g. Bolthausen [@Bolthausen] (Fig. 1).

Fig. 1: Domain of integration for the staircase-regularized local time.
The widely used “gap regularization” does the same by omitting the strip $t_2-t_1<\Lambda$ in the integral. In the modelling of chain polymers the gap size $\Lambda$ will be a “microscopic” quantity, i.e. of the order of the inter-monomer distance, more precisely the “Kuhn” or “persistence” length. It plays an important role in renormalization group calculations [@Schaefer]: critical parameters are obtained from the postulate that macroscopic quantities do not depend on microscopic length scales.
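A crude path-wise discretization makes the divergence and these regularizations easy to experiment with numerically (a sketch with hypothetical parameter values; the gap regularization simply masks the strip $t_2-t_1<\Lambda$):

```python
import numpy as np

rng = np.random.default_rng(0)

def regularized_silt(d=2, T=1.0, steps=1000, eps=0.01, gap=0.0):
    """Riemann-sum approximation of L_eps, the Gaussian-regularized
    self-intersection local time, with an optional gap Lambda = `gap`."""
    dt = T / steps
    B = np.cumsum(rng.normal(scale=np.sqrt(dt), size=(steps, d)), axis=0)
    sq = ((B[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    delta = np.exp(-sq / (2 * eps)) / (2 * np.pi * eps) ** (d / 2)
    t2, t1 = np.meshgrid(np.arange(steps), np.arange(steps), indexing="ij")
    mask = (t1 < t2) & ((t2 - t1) * dt >= gap)   # 0 < t1 < t2 < T, gap removed
    return (delta * mask).sum() * dt * dt

# centered version L_{eps,c} for d = 2: subtract the empirical mean
samples = np.array([regularized_silt() for _ in range(50)])
centered = samples - samples.mean()
```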
Tools from White Noise Analysis [@hkps]
=======================================
Based on a $d$-tuple of independent Gaussian white noises $\bm{\omega}=(\omega _{1},...,\omega _{d})$ one defines a $d$-dimensional Brownian motion $\mathbf{B}$ through $$\mathbf{B}(t)\equiv \langle\bm{\omega},\I_{[0,t]}\rangle=\int_{0}^{t}ds\,
\bm{\omega }(s).$$
We shall use a multi-index notation $$\mathbf{n}=(n_{1},\ldots ,n_{d}),\;\;n=\sum_{i=1}^{d}n_{i},\;\;\mathbf{n}!=\prod_{i=1}^{d}n_{i}!$$and for $d$-tuples of Schwartz test functions $\mathbf{f}=(f_{1},\ldots,f_{d})\in
S(\mathbb{R},\mathbb{R}^{d})$, $$\langle\mathbf{f},\mathbf{f}\rangle=\sum_{i=1}^{d}\int dt\, f_{i}^{2}(t)$$ $$\langle F_{\mathbf{n}},\mathbf{f}^{\otimes \mathbf{n}}\rangle=\int d^nt\,
F_{\mathbf{n}}(t_1,\ldots,t_n) \underset{i=1}{\overset{d}{\otimes }}
f_i^{\otimes n_i}(t_1,\ldots,t_n)$$ and similarly for $\langle:\bm{\omega}^{\otimes \mathbf{n}}:,F_{\mathbf{n}}
\rangle$ where for $d$-tuples of white noise the Wick product $: \cdot :$ [@hkps] generalizes to $$:\bm{\omega }^{\otimes \mathbf{n}}:
=\underset{i=1}{\overset{d}{\otimes }}:\omega _{i}^{\otimes n_{i}}:$$ The vector-valued white noise $\bm{\omega}$ has the characteristic function $$C(\mathbf{f}):=\mathbb{E}(e^{i\langle\bm{\omega },\mathbf{f}\rangle})=\int_{S^{\ast }(\mathbb{R},\mathbb{R}^{d})}d\mu(\bm{\omega }) e^{i\langle\bm{\omega },\mathbf{f}\rangle}=e^{-\frac{1}{2}\langle\mathbf{f},\mathbf{f}\rangle},$$
=\sum_{i=1}^d\langle\omega_{i}, f_{i}\rangle$ and $f_{i}\in S(\R, \R)$.
Writing $$(L^{2}):= L^{2}(S^{\ast }(\R,\R^{d}), d\mu)$$ there is the Itô-Segal-Wiener isomorphism with the Fock space of symmetric square integrable functions: $$(L^{2})\simeq \left(\underset{k=0}{\overset{\infty }{\oplus }}\mathrm{Sym}\,
L^{2}(\mathbb{R}^{k},k!d^{k}t)\right)^{\otimes d}. $$ This implies the chaos expansion $$\varphi (\bm{\omega })=\sum_{\mathbf{n}\in\N_0^d}
\langle:\bm{\omega }^{\otimes \mathbf{n}}:,F_{\mathbf{n}}\rangle\textrm{ for }
\varphi \in (L^{2}) $$ with kernel functions $F_{\mathbf{n}}$ in Fock space.
Generalized functionals are constructed via a Gel’fand triple $$(S)\subset (L^{2})\subset (S)^{\ast }.$$
The generalized functionals in ${(S)}^{\ast }$ are conveniently characterized by their action on exponentials. In particular we use the Wick exponentials $$:\exp (\langle\bm{\omega},\mathbf{f}\rangle):\,= C(\mathbf{f})
\exp(\langle\bm{\omega},\mathbf{f}\rangle)\in (S) $$ to make the
The transformation defined for all test functions $\mathbf{f}\in S(\R
---
abstract: 'One of the most celebrated discoveries of the twentieth century is the existence of a limiting mass of white dwarfs, which are among the compact objects formed once nuclear burning stops inside a star. On approaching this limiting mass $\sim1.4M_\odot$, called the Chandrasekhar mass-limit, a white dwarf is believed to spark off with an explosion called a type Ia supernova, which is considered to be a standard candle. However, observations of several over-luminous, peculiar type Ia supernovae indicate the Chandrasekhar mass-limit to be significantly larger. By considering noncommutativity of the components of the position and momentum variables, and hence uncertainty in their measurements at the quantum scales, we show that the mass of white dwarfs could be significantly super-Chandrasekhar, and thereby arrive at a new mass-limit $\sim 2.6M_\odot$, explaining a possible origin of over-luminous peculiar type Ia supernovae. The idea of noncommutativity, apart from Heisenberg’s uncertainty principle, has been around for quite some time, though without any observational proof. Our finding offers a plausible astrophysical evidence of noncommutativity, arguing for a possible second standard candle, which has many far-reaching implications.'
author:
- Surajit Kalita
- Banibrata Mukhopadhyay
- 'T. R. Govindarajan'
bibliography:
- 'mypaper4.bib'
title: 'Violation of Chandrasekhar mass-limit in noncommutative geometry: A strong possible explanation for the super-Chandrasekhar limiting mass white dwarfs'
---
Introduction {#Introduction}
============
Einstein’s theory of general relativity (GR) and quantum mechanics are considered to be among the greatest discoveries of the twentieth century. GR is undoubtedly the most comprehensive theory of gravity. It can easily explain a large number of phenomena where Newtonian gravity falls short. It also helps to understand the stability of the Chandrasekhar mass-limit for a white dwarf with finite radius. A white dwarf is the end state of a star with mass $\lesssim 8 M_\odot$, in which the inward gravitational force is balanced by the outward electron degeneracy pressure arising from Fermi statistics. Moreover, if the white dwarf has a binary partner, it starts pulling matter out of the partner due to its high gravity, resulting in an increase in the mass of the white dwarf. When it gains a sufficient amount of matter, beyond a certain mass known as the Chandrasekhar mass-limit (currently accepted value $\sim 1.4M_\odot$ [@1931ApJ....74...81C] for a carbon-oxygen non-magnetized and non-rotating white dwarf), this pressure/force balance is no longer sustained and it sparks off to produce a type Ia supernova (SNIa) [@choudhuri_2010]. The luminosity of SNIa is very important as it is used as one of the standard candles to measure the luminosity distance of various objects in cosmology.
However, recent observations have questioned the complete validity of GR near compact objects. For example, in the past decade, a number of peculiar over-luminous SNeIa, viz. SN 2003fg, SN 2006gz, SN 2007if, SN 2009dc [@2006Natur.443..308H; @2010ApJ...713.1073S], have been observed, which were inferred to originate from white dwarfs of super-Chandrasekhar mass as high as $2.8M_\odot$. In this scenario, the Chandrasekhar mass-limit is clearly violated. Different theories and models have been proposed to explain this class of white dwarfs. Our group started exploring the significant violation of the Chandrasekhar mass-limit based on the effect of magnetic fields [@2012MPLA...2750084K; @2012PhRvD..86d2001D]. Subsequently, there has been enormous interest in re-exploring the Chandrasekhar mass-limit by introducing various new physical effects in white dwarfs. Some such effects are (1) strong magnetic fields leading to a significantly super-Chandrasekhar mass: quantum, through Landau orbital effects above the Schwinger limit $4.414\times 10^{13}$ G, which affect the equation of state (EoS) [@2013PhRvL.110g1102D], and classical, through the field pressure affecting the macroscopic structural properties [@2014JCAP...06..050D; @2015JCAP...05..016D; @2015MNRAS.454..752S; @2019MNRAS.490.2692K]; (2) modified gravity effects, leading to significantly sub- and super-Chandrasekhar mass-limits [@2015JCAP...05..045D; @2017EPJC...77..871C; @2018JCAP...09..007K]; (3) the ungravity effect [@2016PhRvD..93j4046B]; (4) consequences of total lepton number violation in magnetized white dwarfs [@2015NuPhA.937...17B]; (5) charged white dwarfs leading to super-Chandrasekhar mass [@2014PhRvD..89j4043L]; (6) the generalized Heisenberg uncertainty principle [@2018JCAP...09..015O]; (7) effects of momentum-momentum noncommutativity in the white dwarf matter and hence the equation of state, leading to a super-Chandrasekhar mass-limit [@2019PhLB..79734859P]; and many more.
In the present work, we plan to analyze the possible noncommutativity effects. Many researchers have earlier used the idea of noncommutativity to explain the physics of various systems [@1992CQGra...9...69M; @1995JMP....36.6194C; @1999JHEP...09..032S; @2000IJMPA..15.4301A; @2001PhLB..510..255A; @2002PhRvL..88s0403M; @2002PhLB..549..253A; @2002Natur.418...34A; @2003JHEP...08..057L; @2006PhLB..632..547N; @2012PhRvL.109r1602S; @2014JPhA...47R5203C; @2015JPhA...48C5401A; @2015PhRvD..92l5013S; @2017EPJC...77..577K; @2018EPJP..133..421F]. However, unfortunately, there is no direct way to confirm natural evidence of such noncommutativity, and hence it still remains a hypothesis. Nevertheless, our observable universe abides by position-position and momentum-momentum commutative rules, which implies that two position coordinates and two momentum coordinates can be measured simultaneously. However, at a very small length scale (and/or at a very high energy regime), the position and corresponding conjugate momentum follow Heisenberg’s uncertainty principle. Nevertheless, there are proposals that at a very high energy regime, e.g. at the Planck scale, position-position noncommutativity arises [@2001PhLB..510..255A; @2002PhRvL..88s0403M; @2006PhLB..632..547N; @2017EPJC...77..577K; @2018EPJP..133..421F]. On the other hand, the density and the corresponding energy scale of white dwarfs are significantly lower than those at the Planck scale and, hence, any implementation of position-position noncommutativity in white dwarf matter is still at the level of a strong hypothesis. Moreover, the Chandrasekhar mass-limit arises from the interplay of pressures due to fermionic statistics and gravitational attraction. One of the important outcomes of noncommutative (NC) geometry is that the statistics of particles gets modified due to the star product [@2007PhRvD..75d5009B; @2006JPhA...39.9557C]. Effectively a fermion behaves less like a fermion and, hence, the pressure is reduced, allowing the collapse to continue to smaller radii with more mass accumulated. Therefore, although the scale of NC geometry in quantum spacetime, namely the Planck length, is very small, the coherent effect of the large density of a white dwarf can enhance the effective NC scale to a larger value. This will be argued with realistic densities of white dwarfs taken into account.
One way of interpreting this noncommutativity is the existence of a spacetime magnetic field, almost equivalent to Landau quantization. This states that in the presence of an external magnetic field, position coordinates perpendicular to the direction of the magnetic field become NC, and hence the corresponding generalized momentum components also become NC. It is a single parameter, the field strength, which controls the noncommutativity of position and momentum coordinates. Now, the hypothesis in NC geometry is that in place of an external field, there is an effective inherent magnetic field in the spacetime itself at the microscopic level. If so, the length scale at which such a field, equivalent to an external field producing Landau orbitals, becomes significant is a big open question. However, if noncommutativity is present, with the analogy of the external field effect, a single parameter should control both the position and momentum noncommutativities apart from Heisenberg’s uncertainty principle. Note that the position-position noncommutativity is more fundamental in describing the NC universe, and the momentum-momentum noncommutativity may arise as a consequence of it. Indeed it is a matter of fact that curvature in position space leads to noncommutativity in the conjugate momentum variables. Interestingly, in the energy dispersion relation, only the momentum-momentum noncommutativity parameter appears explicitly, hence mathematically speaking, whether position coordinates
---
author:
- 'Junya [Hashida]{} and Yuichiro [Kiyo]{}'
title: More on Large $Q^2$ Events with Polarized Beams
---
In 1997, an event excess in the neutral current process $e^{+}p\rightarrow e^{\prime +} X$ in the region of high momentum transfer $Q^2 \geq 15,000$ GeV$^2$ was reported by H1 and ZEUS at HERA [@H1ZEUS]. The observed cross section was $0.71^{+0.14}_{-0.12}$ pb, whereas the standard model (SM) predicts $0.49$ pb. The new data [@TRAPE] analyzed in 1998 are in agreement with the SM up to $Q^2\simeq 10,000$ GeV$^2$. The excess at $Q^2 \geq 20,000$ GeV$^2$ is not confirmed by the new data but is still present. The present situation is rather unclear [@TRAPE; @ALTA1; @VALE], and it is still an open question whether this is really an anomalous effect or whether it merely results from a statistical fluctuation. If the excess is not just a statistical fluctuation, it must be an indication of new interactions beyond the SM, because it appears to be very difficult to explain the data in the framework of the SM.
There have appeared many proposals and analyses of this problem. New contact interactions (CI) stemming from high-energy-scale physics have been analyzed [@ALTA; @BARG; @DESH], and supersymmetric (SUSY) models with R-parity violating ($R_{p}\hspace{-11pt}/~~$) interactions have also been discussed [@ALTA]. The two-stop scenario [@KON], in which the left stop $\tilde{t}_{L}$ is a mixture of the almost degenerate mass eigenstates $\tilde{t}_{1}$ and $\tilde{t}_{2}$, with $R_{p}\hspace{-11pt}/~~$ interactions, was proposed as one of the candidates to explain the broad mass distribution in the data.
HERA will begin a polarized experiment [@EXP], i.e., polarized proton $p(\uparrow/\downarrow)$ and lepton (positron in our discussion) scattering, in the near future. The polarized experiment is important because the polarization of the proton and lepton beams makes it possible to test the chiral structure of the interactions [@VIRE]. Thus it is interesting to ask what HERA will teach us about these models in the future polarized program.
In this paper, we examine two scenarios, the CI and the two-stop scenarios in the context of the large $Q^{2}$ events at the polarized HERA. Our interest is in determining how we can examine these scenarios and what the characteristics of the models are. Thus we discuss these scenarios with regard to the future polarized experiment $e^{+} p(\uparrow/\downarrow) \rightarrow e^{+ \prime}X$. After giving the model Lagrangians, we calculate the parton level cross sections which will be convoluted with parton distributions to form the physical cross section.
The Lagrangian for the CI [@ALTA; @BARG; @DESH] assumes the form $$\begin{aligned}
L_{CI}
&=&
\frac{4 \pi}{\Lambda^2}
\sum_{\stackrel{q=u,d}{a,b=L,R}}
\eta^q_{ab}
\left(
\bar{e}_{a}\gamma^\mu e_{a}
\right)
\left(
\bar{q}_{b}\gamma_\mu q_{b}
\right),\end{aligned}$$ which is the effective interaction of a certain underlying high energy physics describing low energy phenomena in the neutral current process. The subscript $R (L)$ denotes the chirality of the fields, $\eta^q_{ab}=\pm 1, 0$, and $\Lambda$ is the mass scale of a heavy particle which might be exchanged among quarks and leptons. Thus these interactions are suppressed by the mass scale of the new physics, and some constraints [@TRAPE; @GCHO] have been obtained for $\Lambda$ in many experiments. The superpotential, for the stop scenario with $R_{p}\hspace{-11pt}/~~$ interaction [@ALTA; @KON], is given by $$W_{R\hspace{-6pt}/~}
=
\lambda^{\prime}_{131}
L_{1} Q_{3} D^{c}_{1},$$ where $L_{1}$ and $Q_{3}$ are the superfields of the $SU(2)_L$ lepton and quark doublet, respectively, and $D^{c}_{1}$ is the singlet down type quark. Here the subscripts 1, 2 and 3 are the generation indices. The interaction Lagrangian can be obtained from the superpotential $$L_{\lambda^\prime}
=
\lambda^{\prime}_{131}
\left(
\tilde{t}_{L} \bar{d}P_{L}e +
\tilde{e}_{L} \bar{d}P_{L}t +
\bar{\tilde{d}}_{R}\bar{e}^{c}P_{L}t
-
\tilde{b}_{L}\bar{d}P_{L}\nu_{e}-
\tilde{\nu}_{L}\bar{d}P_{L}b -
\bar{\tilde{d}}_{R}\bar{\nu}^{c}_{e} P_{L}b
\right)
+ h.c.$$ For the scalar fields, $R (L)$ denotes the chirality of their superpartners. We discuss the proton-positron scattering, so only the first term $\tilde{t}_{L} \bar{d}P_{L}e + h.c.$ is relevant. In the two-stop scenario, the left stop $\tilde{t}_L$ is the superposition of the two mass eigenstates $\tilde{t}_1$ and $\tilde{t}_2$ with the mixing angle $\theta_t$; namely $\tilde{t}_L= \tilde{t}_1 \cos\theta_t - \tilde{t}_2 \sin\theta_t$. The stop $\tilde{t}_{L}$ can couple only to the left handed lepton field $e_{L}$ and the right handed down quark $d_{R}$. This is an important point in our discussion, because the polarized experiment can distinguish the chiral structure of the interactions in the parton-lepton scattering.
The partonic cross sections $\hat{\sigma}$ for the models are given by $$\begin{aligned}
\frac{ d \hat{\sigma}(e^{+}_{I} f_{J}) }{dx_B dQ^{2}}
&=&
\delta(x_B - x)\frac{(4\pi \alpha_{e})^{2}}{8 \pi \left( \hat{s} ~ Q^{2} \right)^{2} }
\left[
(1+ I \cdot J )\hat{s}^{2} + (1- I\cdot J) \hat{u}^{2}
\right]
\nonumber \\
&\times&
\left|
Q_{\gamma}(e)Q_{\gamma}(f)
+ \frac{Q_{Z}^{-I}(e)Q_{Z}^{J}(f)
}{\sin^2\theta_W}
\frac{Q^{2}}{Q^{2}+M_{Z}^{2}}
+\Delta
\right|^{2},\end{aligned}$$ where $I(J)=\pm$ correspond to the helicities $\pm 1/2$ of the positron (quark), $x_B$ is the Bjorken variable and $x$ is the momentum fraction of the parton, $\alpha_{e}=e^{2}/(4\pi)$, $\theta_W$ is the electro-weak angle, and $\hat{s}$ and $\hat{u}$ are the Mandelstam variables with respect to the parton-positron system, which are defined by $\hat{s}=x s$ and $\hat{u}=x u$. $\Delta$ is the contribution from the CI or $R_{p}\hspace{-11pt}/~~$ interaction. We neglect the masses of the quarks and positron in this paper. The coupling constants of the electron and up and down quarks to the photon and Z boson are given by $$\begin{aligned}
Q_{\gamma}(e)&=&-1,~~~
Q_{Z}^{+}(e)=
\frac{\sin^2\theta_W}{\cos\theta_W},~~~
Q_{Z}^{-}(e)=
\frac{2 \sin^2\theta_W-1}{2 \cos\theta_W},~~~
\\
%
Q_{\gamma}(u)&=&\frac{2}{3},~~~
Q_{Z}^{+}(u)=
\frac{-2 \sin^2\theta_W}{3 \cos\theta_W},~~~
Q_{Z}^{-}(u)=
\frac{3- 4 \sin^2\theta_W}{6 \cos\theta_W},~~~
\\
%
Q_{\gamma}(d)&=&\frac{-1}{3},~~~
Q_{Z}^{+}(d)=
\frac{\sin^2\theta_W}{3 \cos\theta_W},~~~
Q_{Z}^{-}(d)=
\frac{-3+ 2 \sin^2\theta_W}{6 \cos\theta_W}.\end{aligned}$$
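As an illustration, the parton-level kernel of Eq. (4) can be coded directly from these couplings (a sketch; the numerical values of $\sin^2\theta_W$, $\alpha_e$ and $M_Z$ are assumed, and the helper names are ours):

```python
import math

SW2 = 0.2312                      # assumed sin^2(theta_W)
CW = math.sqrt(1.0 - SW2)
ALPHA_E = 1.0 / 137.036           # assumed alpha_e
MZ2 = 91.19 ** 2                  # assumed M_Z^2 (GeV^2)

Q_GAMMA = {"e": -1.0, "u": 2.0 / 3.0, "d": -1.0 / 3.0}
T3 = {"e": -0.5, "u": 0.5, "d": -0.5}

def q_z(f, sgn):
    """Q_Z^{+/-}(f) = ((T3 if '-' else 0) - Q_f sin^2(theta_W)) / cos(theta_W),
    reproducing the six couplings listed above."""
    return ((T3[f] if sgn == "-" else 0.0) - Q_GAMMA[f] * SW2) / CW

def dsigma_kernel(I, J, f, s_hat, u_hat, Q2, delta=0.0):
    """Helicity-dependent partonic kernel for e+ f -> e+ f; `delta` is
    the CI or R-parity-violating addition to the amplitude."""
    hel = (1 + I * J) * s_hat ** 2 + (1 - I * J) * u_hat ** 2
    amp = (Q_GAMMA["e"] * Q_GAMMA[f]
           + q_z("e", "-" if I > 0 else "+") * q_z(f, "+" if J > 0 else "-")
             / SW2 * Q2 / (Q2 + MZ2)
           + delta)
    return (4 * math.pi * ALPHA_E) ** 2 / (8 * math.pi * (s_hat * Q2) ** 2) \
           * hel * abs(amp) ** 2
```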
---
abstract: 'We present a general model-independent and rephase-invariant formalism that cleanly relates CP and CPT noninvariant observables to the fundamental parameters. Different types of CP and CPT violations in the $K^{0}$-, $B^{0}$-, $B_{s}^{0}$- and $D^0$-systems are explicitly defined. Their importance for interpreting experimental measurements of CP and CPT violations is emphasized. In particular, we show that the time-dependent measurements allow one to extract a clean signature of CPT violation.'
author:
- |
K.C. Chou$^{a}$, W.F. Palmer$^{b}$, E.A. Paschos$^{c}$ and Y. L. Wu$^{a,c}$\
\
$a:$ Institute of Theoretical Physics, Chinese Academy of Sciences\
Beijing 100080, China\
\
$b:$ Department of Physics, Ohio-State University\
Columbus, OH 43210, USA\
\
$c:$ Institut für Physik, Universität Dortmund\
D-44221 Dortmund, Germany
date: |
palmer@mps.ohio-state.edu\
paschos@hal1.physik.uni-dortmund.de\
ylwu@itp.ac.cn
title: |
\
\
Searching for Rephase-Invariant CP- and CPT-violating Observables in Meson Decays
---
[**PACS numbers: 11.30.Er, 13.25.+m**]{}
Introduction
============
For the discrete symmetries of nature, violations have been observed for C, P and the combined CP symmetries[@P; @P1; @P2; @P3; @CP]. In fact two types of CP violation have now been established in the $K$-meson system. It remains an active problem of research to observe CP asymmetries in heavier mesons. In addition there is new interest in investigations of properties of the CPT symmetry[@CPTT]. Up to now, there are only bounds on CPT-violating parameters[@CPT], which are sensitive to the magnitude of amplitudes, but tests of the relative phases have not yet been carried out.
In this article we present tests of CPT and CP, separately, and discuss which measurements distinguish between the various symmetry breaking terms. In addition, we derive formulae which are manifestly invariant under rephasing of the original mesonic states. The hope is to call attention to several measurements which will be accessible to experiments in the future.
Our paper is organized as follows: In section 2, we present a complete set of parameters characterizing CP, T and CPT nonconservation arising from the mass matrix, i.e., the so-called indirect CP-, T- and CPT-violation. A set of direct CP-, T- and CPT-violating parameters originating from the decay amplitudes is defined in section 3. In section 4, we define all possible independent observables and relate them directly to fundamental parameters which are manifestly rephasing invariant and can be applied to all meson decays. The various types of CP and CPT violation are classified, indicating how one can extract purely CPT or CP violating effects. In section 5, we investigate in detail the time evolution of mesonic decays and introduce several time-dependent CP- and CPT-asymmetries which allow one to measure separately the indirect CPT- and CP-violating observables as well as direct CPT- and CP-violating observables. In particular, we show how one can extract a clean signature of CPT violation from asymmetries in neutral meson decays. In section 6, we apply the general formalism to the semileptonic and nonleptonic K-meson decays and show how many rephasing invariant CP and CPT observables can be extracted separately. Our conclusions are presented in the last section.
CP- and CPT-violating Parameters in Mass Matrix
===============================================
Let $M^0$ be the neutral meson (which can be $K^0$ or $D^0$ or $B^0$ or $B^{0}_{s}$) and $\bar{M}^0$ its antiparticle. The evolution of $M^0$ and $\bar{M^0}$ states is dictated by $$\frac{d}{dt} \left( \begin{array}{c}
M^{0} \\ \bar{M}^{0} \end{array} \right)
= -i \left( \begin{array}{cc}
H_{11} & H_{12} \\
H_{21} & H_{22} \\
\end{array} \right)
\left( \begin{array}{c}
M^{0} \\ \bar{M}^{0} \end{array}\right)$$ with $H_{ij} = M_{ij} -i \Gamma_{ij}/2$ the matrix elements, and $M_{ij}$, $\Gamma_{ij}$ being the dispersive and absorptive parts, respectively.
The eigenvalues of the Hamiltonian are $$\begin{aligned}
H_{1} & = & H_{11} - \sqrt{H_{12}H_{21}}\ \frac{1-\Delta_{M}}{1+\Delta_{M}}\ , \nonumber \\
H_{2} & = & H_{22} + \sqrt{H_{12}H_{21}}\ \frac{1-\Delta_{M}}{1+\Delta_{M}}\ , \end{aligned}$$ with $$\frac{1-\Delta_{M}}{1+\Delta_{M}} = \left[ 1 + \frac{\delta_{M}^{2}}{2} -
\delta_{M} \sqrt{1 + \frac{\delta_{M}^{2}}{4}} \right]^{1/2} \ , \qquad \text{and}
\qquad \delta_{M} = \frac{H_{22} - H_{11}}{\sqrt{H_{12}H_{21}}}$$ We note already that $\delta_M$ is invariant under rephasing of the states $M^0$ and $\bar{M^0}$. The eigenfunctions of the Hamiltonian define the physical states. Following Bell and Steinberger[@BS], $M^0$ and $\bar{M^0}$ mix with each other and form two physical mass eigenstates $$M_1 = p_{S}| M^0 > + q_{S} | \bar{M^0} >, \qquad
M_2 = p_{L}| M^0 > - q_{L} | \bar{M^0} >$$ with normalization $|p_{S}|^{2} + |q_{S}|^{2} = |p_{L}|^{2} + |q_{L}|^{2} = 1 $. The coefficients are given by $$\begin{aligned}
\frac{q_{S}}{p_{S}} & = & \frac{q}{p}\ \frac{1+\Delta_{M}}{1-\Delta_{M}}
\equiv \frac{1-\epsilon_{S}}{1+ \epsilon_{S}} \ , \qquad
\frac{q_{L}}{p_{L}} = \frac{q}{p}\ \frac{1-\Delta_{M}}{1+\Delta_{M}}
\equiv \frac{1-\epsilon_{L}}{1+ \epsilon_{L}} \nonumber \\
\frac{q}{p} & = & \sqrt{\frac{H_{21}}{H_{12}}} \equiv
\frac{1-\epsilon_{M}}{1+ \epsilon_{M}} \end{aligned}$$ We have also introduced the paramters $\epsilon_{S,L,M}$ following ref.[@CRONIN]. In the CPT conserving case they reduce to the known parameter $\epsilon_{M}$. Thus we have a complete description of the physical states in terms of the mass matrix, and the time evolution is determined by the eigenvalues: $$H_{1} = M_{1} - i\Gamma_{1}/2; \qquad H_{2} = M_{2} - i\Gamma_{2}/2$$ and is given simply by $$M_{1} \rightarrow e^{-iH_{1}t} M_{1}; \qquad M_{2} \rightarrow e^{-iH_{2}t} M_{2}$$
Next we discuss several properties related to the symmetries of the system. The parameters $\delta_M$ and $|q/p|$ are rephasing invariant, and so are other parameters defined in terms of them. CPT invariance requires $M_{11} = M_{22}$ and $\Gamma_{11} = \Gamma_{22}$, and implies that $\delta_M = 0$. Thus the difference between $q_{S}/p_{S}$ and $q_{L}/p_{L}$ represents a signal of CPT violation. In other words, a nonzero $\Delta_{M}$ indicates CPT violation.
CP invariance requires the dispersive and absorptive parts of $H_{12}$ and $H_{21}$ to be, respectively, equal and implies $ q/p = 1$. Also if T invariance holds, then independently of CPT symmetry, the dispersive and absorptive parts of $H_{12}$ and $H_{21}$ must be equal up to a total relative common phase, implying $ |q/p| = 1$. Therefore a $Re\epsilon_M$
---
author:
- 'Andrzej Dąbrowski, Nursena Günhan and Gökhan Soydan'
title: 'On a class of Lebesgue-Ljunggren-Nagell type equations'
---
[*Abstract*]{}. Given odd, coprime integers $a$, $b$ ($a>0$), we consider the Diophantine equation $ax^2+b^{2l}=4y^n$, $x, y\in\Bbb Z$, $l \in \Bbb N$, $n$ odd prime, $\gcd(x,y)=1$. We completely solve the above Diophantine equation for $a\in\{7,11,19,43,67,163\}$, and $b$ a power of an odd prime, under the conditions $2^{n-1}b^l\not\equiv \pm 1(\mod a)$ and $\gcd(n,b)=1$. For other square-free integers $a>3$ and $b$ a power of an odd prime, we prove that the above Diophantine equation has no solutions for all integers $x$, $y$ with ($\gcd(x,y)=1$), $l\in\mathbb{N}$ and all odd primes $n>3$, satisfying $2^{n-1}b^l\not\equiv \pm 1(\mod a)$, $\gcd(n,b)=1$, and $\gcd(n,h(-a))=1$, where $h(-a)$ denotes the class number of the imaginary quadratic field $\mathbb Q(\sqrt{-a})$.
Key words: Diophantine equation, Lehmer number, Fibonacci number, class number, modular form, elliptic curve
2010 Mathematics Subject Classification: 11D61, 11B39
Introduction
============
The Diophantine equation $x^2+C=y^n$ ($x\geq 1$, $y\geq 1$, $n\geq 3$) has a rich history. Lebesgue proved that this equation has no solution when $C=1$, and Cohn solved the equation for several values of $1\leq C\leq 100$. The remaining values of $C$ in the above range were covered by Mignotte and de Weger, and finally by Bugeaud, Mignotte and Siksek. Barros in his PhD thesis considered the range $-100\leq C\leq -1$. Also, several authors (Abu Muriefah, Arif, Dąbrowski, Le, Luca, Pink, Soydan, Togbé, Ulas,...) became interested in the case where only the prime factors of $C$ are specified. Surveys of these and many other topics can be found in [@AB] and [@BP]. Several authors have studied the more general equation $ax^2+C=2^iy^n$, with $a>0$ and $i\leq 2$.
Given odd, coprime integers $a$, $b$ ($a>0$), we consider the Diophantine equation $$\label{maineq}
ax^2+b^{2l}=4y^n, \quad x, y\in \Bbb Z,\,\, l, n \in \Bbb N, \, n \, odd \, \, prime, \gcd(x,y)=1.$$ If $a \equiv 1 \, \mod 4$, then reducing modulo $4$ we trivially obtain that the equation has no solution.
It is known (due to Ljunggren [@Lj]) that the Diophantine equation $ax^2+1=4y^n$, $n\geq 3$, has no positive solution with $y>1$ provided that $a \equiv 3 (\mod 4)$ and the class number of the quadratic field $\mathbb Q(\sqrt{-a})$ is not divisible by $n$. When $a=3$, the equation $3x^2+1=4y^n$ has the only positive solution $(x,y)=(1,1)$.
As our first result, we completely solve the equation for $a\in\{7,11,19,$ $43,67,163\}$, under the conditions $2^{n-1}b^l\not\equiv \pm 1(\mod a)$ and $\gcd(n,b)=1$.
\[thm.1\] Fix $p\in\{7,11,19,43,67,163\}$ and $b= \pm q^r$, with $q$ an odd prime different from $p$ and $r\geq 1$.
$(i)$ The Diophantine equation $$\label{sec.eq}
px^2+b^{2l}=4y^n,\, l\in\mathbb{N},\, \gcd(x,y)=1$$ has no solutions $(p,x,y,b,l,n)$ with integers $x$, $y$ and primes $n>3$, satisfying the conditions $2^{n-1}b^l\not\equiv \pm 1(\mod p)$ and $\gcd(n,b)=1$.
$(ii)$ If $n=3$ and $p\not= 7$, then the equation has no solutions $(p,x,y,b,l,3)$ satisfying the conditions $4b^l\not\equiv \pm 1(\mod p)$ and $\gcd(3,b)=1$.
$(iii)$ If $n=3$ and $p=7$, then the equation leads to $6$ infinite families of solutions, corresponding to solutions of Pell-type equations , , , , and satisfying the conditions $4b^l\not\equiv \pm 1(\mod 7)$ and $\gcd(3,b)=1$.
[**Remarks.**]{} (i) The Diophantine equation has many solutions (infinitely many?) satisfying the conditions $2^{n-1}b^l\equiv \pm 1(\mod p)$ and $\gcd(n,b)=1$. Examples include
$(p,x,y,b,l,n) \in\{ (7,\pm 1,$ $2,\pm 11,1,5),(11,\pm 1,3,\pm 31,1,5)$, $(7,\pm 7,2,\pm 13,1,7),$ $(19,\pm 1,5,\pm 559,1,7)$, $(11, \pm 253, 3, \pm 67, 1,11), (19, \pm 2531, 5, \pm 8579, 1, 11)$,\
$(7,\pm 1,2,\pm 181,1,13), (11, \pm 1801, 3, \pm 21929, 1,17 ),
(7,\pm 457, 2, \pm 797, 1, 19), \\
(7, \pm 967, 2, \pm 5197, 1, 23)\}$.
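These tuples are immediate to verify; a quick sketch checking the $b>0$ representatives of a few of them, together with the congruence side condition:

```python
def is_solution(p, x, y, b, l, n):
    """Check p*x^2 + b^(2l) == 4*y^n."""
    return p * x ** 2 + b ** (2 * l) == 4 * y ** n

examples = [(7, 1, 2, 11, 1, 5), (11, 1, 3, 31, 1, 5), (7, 7, 2, 13, 1, 7),
            (19, 1, 5, 559, 1, 7), (7, 1, 2, 181, 1, 13)]
assert all(is_solution(*e) for e in examples)
for p, x, y, b, l, n in examples:      # 2^(n-1) * b^l = +/-1 (mod p)
    assert pow(2, n - 1, p) * pow(b, l, p) % p in (1, p - 1)
```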
\(ii) If $b$ is divisible by at least two different odd primes, then the Diophantine equation may have solutions satisfying the conditions $2^{n-1}b^l\not\equiv \pm 1(\mod p)$. Examples include
$(p,x,y,b,l,n) \in
\{ (7,103820535541,4,10341108537,1,37)$,\
$(7,4865,46,1320267,1,7)$, $(19,315003,49,909715,1,7)$,\
$(19,581072253,49,3037108805,1,11) \}$.
\(iii) Write the equation as $px^2+b^{2l}=4y(y^{(n-1)/2})^2$ (compare [@Lj p.116]). Now using $4y=u^2+pv^2$, taking $u=\pm 1$, and multiplying the equation by $p$, we arrive at the equation
$$\label{Ljunggren}
X^2 - p(1+pv^2)Y^2 = -pb^{2l}.$$
If $b=\pm 1$, we obtain the equation (7’) in [@Lj]. Ljunggren used an old result by Mahler to deduce that, if $p>3$, then has no solution with $Y>1$ such that any prime divisor of $Y$ divides $p(1+pv^2)$ as well.
\(iv) Question: can we extend Ljunggren’s idea to prove the non-existence of solutions of our equation for some $b^l$?
Let $h(-a)$ denote the class number of the imaginary quadratic field $\mathbb Q(\sqrt{-a})$. For a family of positive square-free integers $a$ with $h(-a)>1$ we can prove the following result (a variant of the results by Bugeaud [@Bu] and by Arif and Al-Ali [@AA] for the equation $ax^2+b^{2l+1}=4y^n$).
\[thm.2\] Fix a positive square-free integer $a$, different from $3$, $7$, $11$, $19$, $43$, $67$, $163$, and $b= \pm q^r$, with $q$ an odd prime not dividing $a$ and $r\geq 1$. Then the Diophantine equation has no solutions $(a,x,y
---
author:
- 'Justin C. Smith, Francisca Sagredo, and Kieron Burke'
bibliography:
- 'main.bib'
- 'renew.bib'
- 'exp.bib'
- 'Master.bib'
title: Warming Up Density Functional Theory
---
Introduction {#intro}
============
[**Warm dense matter:**]{} The study of warm dense matter (WDM) is a rapidly growing multidisciplinary field that spans many branches of physics, including for example astrophysics, geophysics, and attosecond physics[@MD06; @DOE09; @LHR09; @KDBL15; @KD09; @KDP15; @HRD08; @KRDM08; @RMCH10; @SEJD14; @GDRT14]. Classical (or semiclassical) plasma physics is accurate for sufficiently high temperatures and sufficiently diffuse matter[@I04]. The name WDM implies matter that is too cool and too dense for such methods to be accurate, and this regime has often been referred to as the malfunction junction because of its difficulty[@DOE09]. Many excellent schemes have been developed over the decades within plasma physics for dealing with the variety of equilibrium and non-equilibrium phenomena accessed by both people and nature under the relevant conditions[@BL04]. These include DFT at the Thomas-Fermi level (for very high temperatures) and use of the local density approximation (LDA) within Kohn-Sham (KS) DFT at cold to moderate temperatures (at very high temperatures, sums over unoccupied orbitals fail to converge). The LDA can include thermal XC corrections based on those of the uniform gas, for which simple parametrizations have long existed[@SD13b; @KSDT14].
[**Electronic structure theory:**]{} On the other hand, condensed matter physicists, quantum chemists, and computational materials scientists have an enormously well-developed suite of methods for performing electronic structure calculations at temperatures at which the electrons are essentially in their ground-state (GS), say, 10,000 K or less[@B12]. The starting point of many (but not all) such calculations is the KS method of DFT for treating the electrons[@KS65]. Almost all such calculations are within the Born-Oppenheimer approximation, and ab initio molecular dynamics (AIMD) is a standard technique, in which KS-DFT is used for the electronic structure, while Newton’s equations are solved for the ions[@CP85].
[**DFT in WDM:**]{} In the last decade or so, standard methods from the electronic structure of materials have had an enormous impact in warm dense matter, where AIMD is often called QMD, quantum molecular dynamics[@GDRT14]. Typically a standard code such as VASP is run to perform MD[@KRDM08]. In WDM, the temperatures are a noticeable fraction of the Fermi energy, and thus the generalization of DFT to thermal systems must be used. Such simulations are computationally demanding, but they have the crucial feature of including realistic chemical structure, which is difficult to achieve with any other method while remaining computationally feasible. Moreover, by virtue of Mermin’s theorem establishing thermal DFT (thDFT)[@M65; @KS65], they are in principle exact, if the exact temperature-dependent exchange-correlation free energy were used. In practice, some standard ground-state approximation is usually used. (There are also quantum Monte Carlo calculations, which are typically even more computationally expensive[@MD00; @FBEF01; @M09b; @SBFH11; @DM12; @SGVB15; @DGSM16]. The beauty of the QMD approach is that it can provide chemically realistic simulations at costs that make useful applications accessible[@MMPC12].) There have been many successes, such as the simulation of Hugoniot curves measured by the $Z$ machine[@RMCH10] or a new phase diagram for high-density water, which resulted in improved predictions for the structure of Neptune[@MD06]. Because of these successes, QMD has rapidly become a standard technique in this field.
[**Missing temperature dependence:**]{} However, the reliability and domain of applicability of QMD calculations are even less well understood than in GS simulations. At the equilibrium level of calculation, vital for equations of state under WDM conditions and the calculation of free-energy curves, a standard generalized gradient approximation (GGA) calculation using, e.g., PBE[@PBE96], is often (but not always) deemed sufficient, just as it is for many GS materials properties. Such a calculation ignores thermal exchange-correlation (XC) corrections, i.e., the changes in XC as the temperature increases, which are related to entropic effects. We believe we know these well for a uniform gas (although see the recent string of QMC papers[@SGVB15; @DGSM16] and parametrizations[@KSDT14]), but such corrections will be unbalanced if applied to a GGA such as PBE. So how big a problem is the neglect of such corrections?
[**(A little) beyond equilibrium:**]{} On the other hand, many experimental probes of WDM extract response functions such as electrical or thermal conductivity[@MD06]. These are always calculated from the equilibrium KS orbitals, albeit at finite temperature. Work on molecular electronics shows that such evaluations suffer from inaccuracies in the positions of KS orbitals due to deficiencies in XC approximations, and that they require further XC corrections even if the [*exact*]{} equilibrium XC functional were used[@TFSB05; @QVCL07; @KCBC08].
| Acronym | Meaning                      | Acronym | Meaning                    |
|---------|------------------------------|---------|----------------------------|
| GGA     | Generalized Gradient Approx. | RPA     | Random Phase Approximation |
| GS      | ground-state                 | TDDFT   | Time-dependent DFT         |
| HXC     | Hartree XC                   | thDFT   | thermal DFT                |
| KS      | Kohn-Sham                    | unif    | uniform gas                |
| LDA     | Local Density Approx.        | XC      | exchange-correlation       |
| PBE     | Perdew-Burke-Ernzerhof       | ZTA     | Zero-Temperature Approx.   |
| QMC     | quantum Monte Carlo          |         |                            |

: Acronyms frequently used in this chapter.

\[acrodef\]
Background
==========
[**Generalities:**]{} Everything described within uses atomic units, is non-relativistic and does not include external magnetic fields. Unless otherwise noted, all results are for the electronic contributions within the Born-Oppenheimer approximation. While all results are stated for density functionals, in practice, they are always generalized to spin-density functionals in the usual way.
Ground-state DFT
----------------
[**Hohenberg-Kohn functional:**]{} Just over 50 years ago, in 1964, Hohenberg and Kohn wrote down the foundations of modern DFT[@HK64]. They start with the many-body Hamiltonian $$\hat{H} = \hat{T} + \hat{V}_{\rm ee} + \hat{V},$$ where $\hat{T}$, $\hat{V}_{\rm ee}$, and $\hat{V}$ are the kinetic, electron-electron, and potential energy operators, respectively. Assuming a non-degenerate ground-state, they proved by *reductio ad absurdum* that the external potential, $v(\mathbf{r})$, is a unique functional of the density $n(\mathbf{r})$, and therefore all observables are also density functionals. More directly, Levy defines the functional $$F[n] = \min_{\Psi\to n} \langle\Psi|\, \hat{T} + \hat{V}_{\rm ee}\, |\Psi\rangle, \label{Ffun}$$ where $\Psi$ is normalized and antisymmetric, and uses it to define the energy functional $$E_v[n] = F[n] + \int d^3r\; v(\mathbf{r})\, n(\mathbf{r}),$$ whose minimization over normalized non-negative densities with finite kinetic energy yields the ground-state energy and density[@L81].
[**Kohn-Sham scheme:**]{} In 1965, Mermin generalized the Hohenberg-Kohn theorems for electrons in the grand canonical potential with fixed non-zero temperature $\tau$ and chemical potential $\mu$[@M65]. Later in 1965, Kohn and Sham created an exact method to construct the universal functional (see Eq. (\[FfunKS\])). The Kohn-Sham scheme imagines a system of $N$ non-interacting electrons that yield the electronic density of the original interacting $N$-electron system. These fictitious electrons sit in a new external potential called the KS potential. The KS scheme is written as a set of equations that must be solved self-consistently: $$\left\{ -\frac{1}{2}\nabla^2 + v_{\rm s}(\mathbf{r}) \right\}\phi_i(\mathbf{r}) = \epsilon_i\, \phi_i(\mathbf{r}), \qquad n(\mathbf{r}) = \sum_i^N |\phi_i(\mathbf{r})|^2, \label{KSeq}$$ $$v_{\rm s}(\mathbf{r}) = v(\mathbf{r}) + v_{\rm H}(\mathbf{r}) + v_{\rm xc}(\mathbf{r}), \qquad v_{\rm xc}(\mathbf{r}) = \frac{\delta E_{\rm xc}}{\delta n(\mathbf{r})}, \label{XCpot}$$ where $\phi_i(\mathbf{r})$ and $\epsilon_i$ are the KS orbitals and energies, $v_{\rm H}(\mathbf{r})$ is the classical Hartree potential, and $v_{\rm xc}(\mathbf{r})$ is the exchange-correlation potential defined by the unknown XC energy, $E_{\rm xc}$, in Eq. (\[XCpot\]). These must be solved self-consistently since the Hartree potential and $E_{\rm xc}$ depend explicitly on the density. Lastly, the total energy can be found via $$F[n] = T_{\rm s} + U_{\rm H} + E_{\rm xc}, \label{FfunKS}$$ where $T_{\rm s}$ is the kinetic energy of the KS electrons and $U_{\rm H}$ is the Hartree energy.
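The self-consistency cycle in Eqs. (\[KSeq\])-(\[XCpot\]) is, schematically, a fixed-point iteration; a minimal sketch (all callables, and the linear mixing, are hypothetical placeholders, not from the text):

```python
import numpy as np

def ks_scf(v_ext, n_init, hartree, v_xc, solve_ks, N,
           mix=0.5, tol=1e-8, max_iter=500):
    """Schematic Kohn-Sham loop: build v_s = v_ext + v_H[n] + v_xc[n],
    solve the single-particle equations, rebuild the density from the
    N lowest orbitals, and mix until self-consistency."""
    n = np.asarray(n_init, dtype=float)
    for _ in range(max_iter):
        v_s = v_ext + hartree(n) + v_xc(n)
        phi, eps = solve_ks(v_s)              # orbitals sorted by energy
        n_new = np.sum(np.abs(phi[:N]) ** 2, axis=0)
        if np.max(np.abs(n_new - n)) < tol:
            return n_new, phi, eps
        n = (1.0 - mix) * n + mix * n_new     # simple linear mixing
    raise RuntimeError("SCF did not converge")
```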
In practice, an approximation to $E_{\rm xc}$ must be supplied. There exists a wealth of approximations for $E_{\rm xc}$[@MOB12]. The simplest, LDA, uses the XC energy per electron of the homogeneous electron gas[@PW92]: $$E_{\rm xc}^{\rm LDA} = \int d^3r\; n(\mathbf{r})\, e_{\rm xc}^{\rm unif}(n(\mathbf{r})) \label{LDA}$$
---
abstract: 'We describe an optical scheme for optimal Gaussian $n \rightarrow m$ cloning of coherent states. The scheme, which generalizes a recently demonstrated scheme for $1 \rightarrow 2$ cloning, involves only linear optical components and homodyne detection.'
author:
- Stefano Olivares
- 'Matteo G. A. Paris'
- 'Ulrik L. Andersen'
title: Optimal cloning of coherent states by linear optics
---
Introduction {#s:intro}
============
The generation of perfect copies of a given, unknown, quantum state is impossible [@wooters82.nat; @dieks82.pla; @cl3; @cl4]. Analogously, starting from $n$ copies of a given, unknown, quantum state, no device exists that provides $m > n$ perfect copies of those states. On the other hand, one can make approximate copies of quantum states by means of a quantum cloning machine [@buzek96.pra], whose performance may be assessed by the [*single clone fidelity*]{}, namely, a measure of the similarity between each of the clones and the input state. A cloner is said to be [*universal*]{} if the fidelity is independent of the input state, whereas the cloning process is said to be [*optimal*]{} if the fidelity saturates an upper bound $F^{\rm(opt)}$, which depends on the class of states under investigation, as well as on the class of operations involved. For coherent states and Gaussian cloning (i.e., cloning by Gaussian operations) $F^{\rm(opt)}=2/3$, whereas, using non-Gaussian operations, it is possible to achieve $F \approx 0.6826 > 2/3$ [@cerf05.prl]. Therefore, though non-Gaussian operations are of some interest [@opatr; @weng:PRL:04; @sanchez; @ips], the realization of optimal Gaussian cloning would provide performance not too far from the ultimate bound imposed by quantum mechanics.
Optimal Gaussian cloning of coherent states may be implemented using an appropriate combination of beam splitters and a single phase insensitive parametric amplifier [@braunstein01.prl; @fiurasek01.prl]. However, the implementation of an efficient phase insensitive amplifier operating at the fundamental limit is still a challenging task. This problem was solved by Andersen et al. [@andersen05.prl], who proposed and experimentally realized an optimal cloning machine for coherent states, which relies only on linear optical components and a feed-forward loop [@josse06.prl]. As a consequence of the simplicity and the high quality of the optical devices used in this experiment, performance close to optimal was attained. The thorough theoretical description of this cloning machine, as well as its average fidelity for different ensembles of input states, was given in [@OPA:PRA:06], and a generalization to asymmetric cloning was presented in [@zhai].
In this paper we describe in detail a generalization of the cloning machine considered in [@andersen05.prl] to realize $n \rightarrow m$ universal cloning of coherent states. The scheme involves only linear optical components and homodyne detection and yields the optimal cloning fidelity [@UlBook]. Analogous schemes have been proposed for broadcasting a complex amplitude by Gaussian states [@sbdc].
The paper is structured as follows: in section \[s:1tom\] we describe the linear cloning machine for $1
\to m$ cloning of coherent states and we give the conditions to achieve universal and optimal cloning as in the case of $1 \to 2$. In section \[s:ntom\] we deal with a scheme to realize $n \to m$ optimal universal cloning. Finally, in section \[s:remarks\] we draw some concluding remarks.
The $\boldsymbol{1 \to m}$ cloning machine {#s:1tom}
==========================================
![\[f:cl:scheme\] Gaussian cloning of coherent states by linear optics: the input state ${\left| \left. \alpha \right\rangle \right.}$ is mixed with the vacuum ${\left| \left. 0 \right\rangle \right.}$ at a beam splitter (BS) of transmissivity $\tau$. The reflected beam is measured by double-homodyne detection and the outcome of the measurement $x + i y$ is forwarded to a modulator, which imposes a displacement $g (x + iy)$ on the transmitted beam, $g$ being a suitable amplification factor. Finally, the displaced state is impinged onto a multi-splitter (MS), where it is mixed with $m-1$ vacuum modes. The states $\varrho_{k}$, $k=1,m$, are the $m$ clones.](Fig1_Scheme.eps){width=".6\textwidth"}
The scheme of the $1\to m$ Gaussian cloning machine is sketched in Fig. \[f:cl:scheme\]. The coherent input state ${\left| \left. \alpha \right\rangle \right.}$ is mixed with the vacuum at a beam splitter (BS) with transmissivity $\tau$. On the reflected part, double-homodyne detection is performed using two detectors with equal quantum efficiencies $\eta$: this measurement is executed by splitting the state at a balanced beam splitter and then measuring the two conjugate quadratures $\hat x =
\frac{1}{\sqrt{2}}(\hat{a}+\hat{a}^{\dag})$ and $\hat y =
\frac{1}{i\sqrt{2}}(\hat{a} - \hat{a}^{\dag})$, with $\hat{a}$ and $\hat{a}^\dagger$ being the field annihilation and creation operator. The outcome of the double-homodyne detector gives the complex number $z = x + i
y$. According to these outcomes, the transmitted part of the input state undergoes a displacement by an amount $g z$, where $g$ is a suitable electronic amplification factor. Finally, the $m$ output states, denoted by the density operators $\varrho_k$, $k=1,\ldots,m$, are obtained by dividing the displaced state using a multi-splitter (MS). When $m=2$ the present scheme reduces to a $1\to 2$ Gaussian cloning machine recently experimentally realized [@andersen05.prl] and studied in detail [@OPA:PRA:06].
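For orientation, the optimal Gaussian fidelities that the generalized machine is designed to reach follow the known formula $F^{\rm(opt)}_{n\to m}=mn/(mn+m-n)$, which reduces to $2/3$ for $1\to 2$ (a sketch; the formula is quoted here for reference, not derived):

```python
from fractions import Fraction

def optimal_fidelity(n, m):
    """Optimal Gaussian n -> m cloning fidelity for coherent states."""
    return Fraction(m * n, m * n + m - n)

assert optimal_fidelity(1, 2) == Fraction(2, 3)
print([optimal_fidelity(1, m) for m in (2, 3, 5)])   # 2/3, 3/5, 5/9
```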
If we denote with $U_{\tau}$ the evolution operator of the first BS with transmissivity $\tau$, after the BS we have: $$\label{BS:evol}
U_{\tau} {\left| \left. \alpha \right\rangle \right.}\otimes{\left| \left. 0 \right\rangle \right.} = {\left| \left. \alpha
\sqrt{\tau} \right\rangle \right.}\otimes{\left| \left. \alpha\sqrt{1-\tau} \right\rangle \right.}\,;$$ the reflected beam, i.e., ${\left| \left. \alpha\sqrt{1-\tau} \right\rangle \right.}$ undergoes a double-homodyne detection described by the positive operator-valued measure (POVM) [@FOP:napoli:05] $$\Pi_\eta(z) = \int_{\mathbbm{C}}d^2\zeta\,
\frac{1}{\pi\sigma_\eta^2}\exp\left\{-\frac{|\zeta-z|^2}{\sigma_\eta^2}\right\}
\frac{{\left| \left. \zeta \right\rangle \right.}{\left\langle \left. \zeta \right| \right.}}{\pi}\,,$$ with $\sigma_\eta^2 = (1-\eta)/\eta$, $\eta$ being the detection quantum efficiency, and, in turn, the probability of getting $z$ as outcome is given by: $$\begin{aligned}
p_\eta(z) &= {\rm Tr}[\Pi_\eta(z)\,
{\left| \left. \alpha\sqrt{1-\tau} \right\rangle \right.}{\left\langle \left. \alpha\sqrt{1-\tau} \right| \right.}]\\
&= \frac{\eta}{\pi}\exp\left\{-\eta |z - \alpha\sqrt{1-\tau}|^2
\right\}\,.\end{aligned}$$ After the measurement, the transmitted part of the input state, i.e., ${\left| \left. \alpha\sqrt{\tau} \right\rangle \right.}$, is displaced by an amount $g z$, and, averaging over all the possible outcomes $z$, we obtain the following state: $$\begin{aligned}
\label{ave:before}
\varrho &= \int_{\mathbbm{C}} d^2z\, p_\eta(z)\,
D(g z) {\left| \left. \alpha\sqrt{\tau} \right\rangle \right.}{\left\langle \left. \alpha\sqrt{\tau} \right| \right.} D^{\dag}(gz)\\
&= \int_{\mathbbm{C}} d^2z\, \frac{\eta}{\pi}\exp\left\{-\eta |z - \alpha\sqrt{1-\tau}|^2
\right\}\, {\left| \left. \alpha\sqrt{\tau} + g z \right\rangle \right.}{\left\langle \left. \alpha\sqrt{\tau} + g z \right| \right.}\,,\end{aligned}$$ which is then mixed in the MS with $m-1$ vacuum modes (Fig.
---
abstract: 'Oxides $R$NiO$_3$ ($R =$ rare-earth, $R \neq$ La) exhibit a metal-insulator (MI) transition at a temperature $T_{\rm MI}$ and an antiferromagnetic (AF) transition at $T_{\rm N}$. Specific heat ($C_{\rm P}$) and anelastic spectroscopy measurements were performed in samples of Nd$_{1-x}$Eu$_x$NiO$_3$, $0 \leq x \leq 0.35$. For $x = 0$, a peak in $C_{\rm P}$ is observed upon cooling and warming at essentially the same temperature $T_{\rm MI}= T_{\rm N} \sim 195$ K, although the cooling peak is much smaller. For $x \geq 0.25$, differences between cooling and warming curves are negligible, and two well-defined peaks are clearly observed: one at lower temperatures, which defines $T_{\rm N}$, and the other at $T_{\rm MI}$. An external magnetic field of 9 T had no significant effect on these results. The elastic compliance ($s$) and the reciprocal of the mechanical quality factor ($Q^{-1}$) of NdNiO$_3$, measured upon warming, showed a very sharp peak at essentially the same temperature as obtained from $C_{\rm P}$, while no peak is observed upon cooling. The elastic modulus hardens below $T_{\rm MI}$ much more sharply upon warming, while the cooling and warming curves are reproducible above $T_{\rm MI}$. On the other hand, for the sample with $x = 0.35$, $s$ and $Q^{-1}$ curves are very similar upon warming and cooling. The results presented here give credence to the proposition that the MI phase transition changes from first to second order with increasing Eu doping.'
author:
- 'V. B. Barbeta'
- 'R. F. Jardim'
- 'M. S. Torikachvili'
- 'M. T. Escote'
- 'F. Cordero'
- 'F. M. Pontes'
- 'F. Trequattrini'
title: 'Metal-insulator transition in Nd$_{1-x}$Eu$_x$NiO$_3$ probed by specific heat and anelastic measurements'
---
A number of $R$NiO$_3$ compounds ($R =$ rare-earth, $R \neq$ La) are metallic at high temperatures and display a metal-insulator (MI) transition at a temperature $T_{\rm MI}$, which depends on the ionic radius of the rare-earth ion $R$. They also exhibit an antiferromagnetic (AF) transition at $T_{\rm N}$, due to the spin ordering of the Ni sublattice. For $R =$ Nd and Pr, $T_{\rm MI} \approx T_{\rm N}$, while for the other rare-earths $T_{\rm MI}$ is higher than $T_{\rm N}$, with $T_{\rm N}$ increasing slightly and $T_{\rm MI}$ decreasing as a function of the $R$ ionic radius.[@MED-A]
The magnetic order of NdNiO$_3$ was studied by powder neutron diffraction (PND), revealing the presence of a wave propagation vector $k = (1/2,0,1/2)$, and an unusual up-up-down-down stacking of ferromagnetically (FM) ordered planes along the simple cubic (111) direction was proposed.[@GAR-A] On the other hand, soft x-ray resonant scattering experiments showed that the (1/2,0,1/2) reflection is of magnetic origin, without orbital ordering. Besides, the results were not consistent with the spin arrangement proposed by PND, and indicated a non-collinear antiferromagnetic ordering scheme.[@SCA-A]
Recently, high resolution PND experiments in NdNiO$_{3}$ unambiguously established the occurrence of two different NiO$_{6}$ octahedra at low temperatures, as well as the corresponding change from orthorhombic ($Pbnm$) to monoclinic ($P\rm 2_{\rm 1}/n$) symmetry.[@GAR-C] This being the case, a charge ordered state is observed at low temperatures and the twofold e$_{g}$ orbital degeneracy is lifted, opening an energy gap. Therefore, the low temperature phase may not be classified as a charge transfer insulator, as originally suggested, but could be better described as a band insulator.[@GAR-C]
Although much work has addressed the general physical properties of these systems, many open questions remain regarding the role played by the correlation between magnetic and electronic properties. Within this context, here we present and discuss measurements of specific heat ($C_{\rm P}$) and anelastic spectroscopy near the MI phase transformation.
Polycrystalline samples of Nd$_{1-x}$Eu$_x$NiO$_3$, $0 \leq x \leq 0.35$, were prepared from sol-gel precursors, sintered at temperatures $\sim 1000 ^{\rm o}$C, under oxygen pressures up to 80 bar. Details of the sintering process for preparing these samples are described elsewhere [@ESC-A]. All samples were characterized by X-ray powder diffraction in a Bruker D8 Advance diffractometer. The X-ray diffraction patterns showed no extra reflections due to impurity phases, and indicated that all samples have a high degree of crystallinity.
Specific heat ($C_{\rm P}$) measurements in the temperature range from 2 to 310 K, upon cooling and warming, were performed in a Physical Property Measurement System (PPMS) from Quantum Design equipped with a superconducting 9 T magnet.
Complex Young’s modulus measurements $E(\omega,T)=E^{\prime} + iE^{\prime\prime}$ were performed as a function of temperature, by electrostatically exciting the fundamental flexural modes of the samples, and detecting the vibration amplitude. The energy dissipation or reciprocal of the mechanical quality factor, $Q^{-1}(\omega,T) = E^{\prime\prime}/E^{\prime}$, was determined from the decay of the free oscillations or from the width of the resonance peak. In light of the porosity of the sintered materials, the values of elastic compliance $s = E^{-1}$ were not absolute, and therefore were normalized to the $s_{0}$ value, obtained at the fundamental frequency $f_{0}=f(T=0)$.
The $C_{\rm P}$ results for NdNiO$_{3}$ are displayed in Figure 1 for both the warming and cooling cycles. The sharp peak observed upon warming at $T_{\rm MI} = T_{\rm N} \sim 195$ K defines the MI and AF transitions. The peak in $C_{\rm P}(T)$ at $T_{\rm MI}$ upon cooling is much reduced. The difference in $C_{\rm P}(T)$ near $T_{\rm MI}$ upon cooling and warming suggests a complex interaction between the crystalline and magnetic structures, and perhaps that the phase transition at $T_{\rm MI}$ has a first-order character. Difficulties in extracting accurate values of $C_{\rm P}(T)$ near first-order transitions using relaxation calorimetry are known [@LAS-A]. However, the large difference between the cooling and warming cycles in this case is compelling enough to suggest intrinsic behavior.
\[htp\] ![\[fig:epsart1\] Temperature dependence of $C_{p}$ for NdNiO$_3$ upon cooling and warming. The upper inset displays the transition region with $H = 0$ and $H = 9$ T. The lower inset shows the $C_{p}(T)$ data for the Nd$_{0.65}$Eu$_{0.35}$NiO$_3$ sample.](fig1.eps "fig:"){width="39.00000%"}
When Nd is partially replaced by Eu, both $T_{\rm MI}$ and $T_{\rm N}$ are shifted to higher temperatures. For the $x = 0.35$ sample (lower inset of Figure 1), electronic and magnetic transitions are separated in temperature, and two peaks in $C_{\rm P}(T)$ are clearly identified. In this case, there is no significant difference between the cooling and warming curves. The application of an external magnetic field, as high as 9 T, resulted in no appreciable change in $C_{\rm P}(T)$ data, as displayed in the upper inset of Figure 1 for the NdNiO$_3$ sample. Similar field independent behavior was also observed in the Nd$_{0.65}$Eu$_{0.35}$NiO$_3$ sample (not shown).
\[htp\] ![\[fig:epsart2\] Temperature dependence of the specific heat ($C_{R}$) of Nd$_{1-x}$Eu$_{x}$NiO$_3$, for three selected samples, obtained after subtracting the background contribution (see text). Lines are just a guide to the eye.](fig2.eps "fig:"){width="39.00000%"}
A background contribution to $C_{\rm P}(T)$ was subtracted from the curves and the resulting specific heat ($C_{\rm R}(T)$) is displayed in Figure 2. Such a subtraction was performed by excluding the region close to the phase transition in the warming cycle and fitting the remaining data to a smooth baseline. The resulting curve for the $x = 0$ sample displays a very sharp peak at $T_{\rm MI} = T_{\rm N} \sim 195$ K. However, the partial substitution of Nd with Eu results in a separation of the two transitions. This is clearly seen in the $x = 0.25$ sample
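A minimal numerical sketch of this baseline-subtraction step is given below; the exclusion window, polynomial degree, and synthetic $C_{\rm P}(T)$ data are illustrative choices, not values taken from the measurements.

```python
import numpy as np

# Sketch of the background subtraction described above: exclude the region
# near the transition, fit a smooth baseline (here a low-order polynomial)
# to the remaining C_P(T) points, and subtract it to obtain C_R(T).
def subtract_baseline(T, Cp, T_lo=170.0, T_hi=220.0, deg=5):
    mask = (T < T_lo) | (T > T_hi)          # keep points away from the peak
    coeff = np.polyfit(T[mask], Cp[mask], deg)
    return Cp - np.polyval(coeff, T)        # C_R(T)

# Synthetic example: smooth background plus a sharp peak near 195 K.
T = np.linspace(100.0, 300.0, 400)
Cp = 0.002 * T**2 + 50.0 + 80.0 * np.exp(-((T - 195.0) / 2.0) ** 2)
C_R = subtract_baseline(T, Cp)
print(C_R.max())                            # ~80, the recovered peak height
```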
---
abstract: 'A key problem in network coding (NC) lies in the complexity and energy consumption associated with the packet decoding processes, which hinder its application in mobile environments. Controlling and hence limiting such factors has always been an important but elusive research goal, since the packet degree distribution, which is the main factor driving the complexity, is altered in a non-deterministic way by the random recombinations at the network nodes. In this paper we tackle this problem with a new approach and propose Band Codes (BC), a novel class of network codes specifically designed to preserve the packet degree distribution during packet encoding, recombination and decoding. BC are random codes over GF(2) that exhibit low decoding complexity, feature a limited and controlled degree distribution by construction, and hence allow NC to be applied effectively even in energy-constrained scenarios. In particular, in this paper we motivate and describe our new design and provide a thorough analysis of its performance. We provide numerical simulations of the BC performance in order to validate the analysis and assess the overhead of BC with respect to a conventional random NC scheme. Moreover, experiments in a real-world application, namely peer-to-peer mobile media streaming using a random-push protocol, show that BC reduce the decoding complexity by a factor of two with a negligible increase of the encoding overhead, paving the way for the application of NC to power-constrained devices.'
author:
- 'Attilio Fiandrotti, Valerio Bioglio, Marco Grangetto, Rossano Gaeta, and Enrico Magli, [^1]'
bibliography:
- 'main.bib'
title: 'Band Codes for Energy-Efficient Network Coding with Application to P2P Mobile Streaming'
---
Network Coding, Rateless codes, P2P, Mobile Streaming, Energy-Efficiency.
Conclusions and Future Work {#sec:conclusions}
===========================
In this paper we presented Band Codes (BC), a family of codes that preserve the packet degree distribution, enabling controlled-complexity NC independently of the network topology. Our experiments show that BC reduce the decoding complexity by a factor of two with almost no loss in encoding efficiency with respect to random NC. Furthermore, a reduction of up to four times is achieved while maintaining the encoding overhead below 5%. Experiments with a mobile phone showed that the reduced computational complexity reduces its energy consumption, extending the operational lifetime. Streaming experiments show that our P2P protocol designed around BC is capable of delivering high-quality video on a global-scale testbed, showing the benefits of BC in a realistic setting. Although in this work we have focused on P2P video streaming, BC are well suited to any scenario where energy consumption is a critical issue, such as sensor networks. Finally, while we have considered NC over $GF(2)$ thanks to its low complexity, the main concepts behind BC can be extended to Galois fields of larger size.
\[sec:appendix\] We denote by $\Omega^j$ the packet degree distribution in the network after $j$ recombinations, i.e. the probability that a randomly selected packet in the network has degree $i$ after $j$ recombinations is $\Omega^j_i$. The source node encodes packets of degree $d$ with probability $\Omega_d^0$. Let $P^1$ and $P^2$ be two packets in the network with degrees $d^1$ and $d^2$. We define as $s_N(d^1, d^2, d^r)$ the probability that the recombination of $P^1$ and $P^2$ produces a packet $P^r$ with degree $d^r = d^1 + d^2 - 2\chi$, where $\chi$ is the random variable that counts the number of times that $g_i^1 = g_i^2 = 1$ for $i \in [0, N-1]$.
$$\begin{array}{r c l}
s_N (d^1, d^2, d^r)& = & s_N (d^1, d^2, d^1 + d^2 - 2\chi)\\
& = & \mathbb{P}(2\chi = d^1 + d^2 - d^r)\\
& = & \mathbb{P}\left(\chi = \frac{ d^1 + d^2 - d^r }{2}\right).
\end{array}$$ As $\chi$ follows the Hypergeometric Distribution $\mathcal{H}(N, d^1, d^2)$, we rewrite the above equation as
$$\begin{array}{r c l}
s_N (d^1, d^2, d^r)& = & \frac{\binom{d^1}{\frac{ d^1 + d^2 - d^r }{2}}\binom{N-d^1}{d^2-\frac{ d^1 + d^2 - d^r }{2}}}{\binom{N}{d^2}}.
\end{array}$$ By the law of total probability, we have\
$$\label{eqn:omega_recursive}
\Omega^j_i = \sum_{d^1=0}^{N} \sum_{d^2=0}^{N} s_N(d^1,d^2,i) \Omega_{d^1}^{j-1} \Omega_{d^2}^{j-1}.$$ We denote by $\Omega^\infty$ the distribution of the degree of the packets in the network after a number of recombinations that tends to infinity. If $\Omega^0$ is not degenerate, we have from Equation \[eqn:omega\_recursive\] that $$\Omega^{ \infty }_i =\frac{\binom{N}{i}}{2^{N}}.$$ Therefore, the distribution of the degree of the packets in the network follows the Binomial Distribution $\mathcal{B}(N, \frac{1}{2})$ and the average degree of the packets in the network tends to $\frac{N}{2}$.
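The recursion in Equation \[eqn:omega\_recursive\] and its binomial fixed point can be checked numerically; the following is a minimal sketch with an illustrative small $N$ and a uniform (non-degenerate) $\Omega^0$.

```python
import numpy as np
from scipy.stats import hypergeom, binom

N = 16  # packet length (symbols), kept small for speed

# s_N(d1, d2, :) as a table: recombining degrees d1 and d2 yields degree
# d1 + d2 - 2*chi, where chi ~ Hypergeom(N, d1, d2).
S = np.zeros((N + 1, N + 1, N + 1))
for d1 in range(N + 1):
    for d2 in range(N + 1):
        for k in range(min(d1, d2) + 1):
            S[d1, d2, d1 + d2 - 2 * k] += hypergeom.pmf(k, N, d1, d2)

# Non-degenerate initial degree distribution (uniform over 1..N).
omega = np.zeros(N + 1)
omega[1:] = 1.0 / N

# Iterate Omega^j_i = sum_{d1,d2} s_N(d1,d2,i) Omega^{j-1}_{d1} Omega^{j-1}_{d2}.
for _ in range(20):
    omega = np.einsum('a,b,abi->i', omega, omega, S)

# The fixed point should approach Binomial(N, 1/2).
target = binom.pmf(np.arange(N + 1), N, 0.5)
print(np.abs(omega - target).max())  # small after a few iterations
```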
[^1]: Copyright (c) 2013 IEEE. Personal use of this material is permitted. However, permission to use this material for any other purposes must be obtained from the IEEE by sending a request to pubs-permissions@ieee.org.
This publication is based partly on work performed within Project COAST-ICT-248036 which is funded by the European Union, partly on work performed within project AMALFI which is funded by Università di Torino and Compagnia di San Paolo, partly on work performed within project ARACNE, a PRIN funded by the Italian Ministry of Education and Research.
A.Fiandrotti, V.Bioglio, and E. Magli are with the Department of Electronics and Telecommunications, Politecnico di Torino, 10129, Torino, Italy (e-mail: attilio.fiandrotti@polito.it; valerio.bioglio@polito.it; enrico.magli@polito.it).
M.Grangetto and R.Gaeta are with the Department of Computer Science, Università di Torino, 10149 Torino, Italy (e-mail: marco.grangetto@di.unito.it; rossano.gaeta@di.unito.it).
---
author:
- 'A. Gusdorf'
- 'S. Anderl'
- 'R. Güsten'
- 'J. Stutzki'
- 'H.-W. Hübers'
- 'P. Hartogh'
- 'S. Heyminck'
- 'Y. Okada'
bibliography:
- 'biblio.bib'
date: 'Received September 15, 1996; accepted March 16, 1997'
title: 'Probing MHD Shocks with high-$J$ CO observations: W28F'
---
[Observing supernova remnants (SNRs) and modelling the shocks they are associated with is the best way to quantify the energy SNRs re-distribute back into the Interstellar Medium (ISM).]{} [We present comparisons of shock models with CO observations in the F knot of the W28 supernova remnant. These comparisons constitute a valuable tool to constrain both the shock characteristics and pre-shock conditions.]{} [New CO observations from the shocked regions with the APEX and SOFIA telescopes are presented and combined. The integrated intensities are compared to the outputs of a grid of models, which were combined from an MHD shock code that calculates the dynamical and chemical structure of these regions, and a radiative transfer module based on the ‘large velocity gradient’ (LVG) approximation.]{} [We base our modelling method on the higher *J* CO transitions, which unambiguously trace the passage of a shock wave. We provide fits for the blue- and red-lobe components of the observed shocks. We find that only stationary, C-type shock models can reproduce the observed levels of CO emission. Our best models are found for a pre-shock density of 10$^4$ cm$^{-3}$, with the magnetic field strength varying between 45 and 100 $\mu$G, and a higher shock velocity for the so-called blue shock ($\sim$25 km s$^{-1}$) than for the red one ($\sim$20 km s$^{-1}$). Our models also satisfactorily account for the pure rotational H$_2$ emission that is observed with *Spitzer*.]{}
Introduction
============
The interstellar medium (ISM) is in constant evolution, ruled by the energetic feedback from the cosmic cycle of star formation and stellar death. At the younger stages of star formation (bipolar outflows), and after the death of massive stars (SNRs), shock waves originating from the star interact with the ambient medium. They constitute an important mechanical energy input, and lead to the dispersion of molecular clouds and to the compression of cores, possibly triggering further star formation. Studying the signature of these interactions in the far-infrared and sub-mm range is paramount for understanding the physical and chemical conditions of the shocked regions and the large-scale roles of these feedback mechanisms.
Supernovae send shock waves through the ISM, where they successively carve out large hot and ionised cavities. They subsequently emit strong line radiation (optical/UV), and eventually interact with molecular clouds, driving lower-velocity shocks. Similar to their bipolar outflow equivalents, these shocks heat, compress, and accelerate the ambient medium before cooling down through molecular emission (@Vandishoeck93, @Yuan11, hereafter Y11).
Valuable information has been provided by ISO [@Cesarsky99; @Snell05] and *Spitzer* (@Neufeld07, hereafter N07), but neither of those instruments provided sufficient spectral resolution to allow for a detailed study of the shock mechanisms. High-$J$ CO emission is one of the most interesting diagnostics of SNRs. CO is indeed a stable and abundant molecule, and an important contributor to the cooling of these regions, whose high-frequency emission is expected to be a ‘pure’ shock tracer. Observations of the latter must be carried out from above the Earth’s atmosphere. As part of a multi-wavelength study of MHD shocks that also includes *Herschel* data, we present here the first velocity-resolved CO (11–10) observations towards a prominent SNR-driven shock with the GREAT spectrometer onboard SOFIA, and combine them with new lower-$J$ ones in a shock-model analysis.
The supernova remnant W28
=========================
W28 is an old ($>$10$^{4.5}$ yr, @Claussen99) SNR in its radiative phase of evolution, with a non-thermal radio shell centrally filled with thermal X-ray emission. Lying in a complex region of the Galactic disk at a distance of 1.9$\pm$0.3 kpc [@Velazquez02], its structure in the 327 MHz radio continuum represents a bubble-like shape of about 40$\times$30 pc [@Frail93]. Early on, molecular line emission peaks, not associated with star formation activity, but revealing broad lines, were suggested as evidence for interaction of the remnant with surrounding molecular clouds [@Wootten81]. Later studies spatially resolved the shocked CO gas layers from the ambient gas [@Frail98; @Arikawa99]. OH maser spots line up with the post-shock gas layers [@Frail94; @Claussen97; @Hoffman05]; for the strongest masers, VLBA polarisation studies yield line-of-sight magnetic field strengths of up to 2 mG [@Claussen99]. Pure rotational transitions of H$_2$ have been detected with ISO [@Reach00] and were more recently observed with *Spitzer*, better resolved spatially and spectrally, by N07 and Y11.
Recently, very high energy (TeV) $\gamma$-ray emission has been detected by HESS [@Aharonian08], Fermi [@Abdo10], and AGILE [@Giuliani10], spatially slightly extended and coincident with the bright interaction zones, W28-E and -F. If interpreted as the result of hadronic cosmic ray interactions in the dense gas ($\pi^0$ decay), a cosmic ray density enhancement by an order of magnitude is required (which is supplied/accelerated by the SNR).
Sub-mm CO observations of W28F {#sub:opcooow}
------------------------------
APEX[^1] [@Guesten06] observations towards W28F were conducted in 2009 and will be the subject of a forthcoming publication (Gusdorf et al., in prep.). For the present study, we used 100$'' \times$100$''$ maps in the $^{13}$CO (3–2), CO (3–2), (4–3), (6–5), and (7–6) transitions, described in Appendix \[sec:tao\].
![Overlay of the velocity-integrated CO (6–5) (colour background) with the CO (3–2) (white contours) emission observed with the APEX telescope. For both lines, the intensity was integrated between -30 and 40 km s$^{-1}$. The wedge unit is K km s$^{-1}$ in antenna temperature. The CO (3–2) contours are from 30 to 160 $\sigma$, in steps of 10$\sigma$ = 16 K km s$^{-1}$. The half-maximum contours of the CO (3–2) and (6–5) maps are indicated in red and black, respectively. The dark blue circle indicates the position and beam size of the SOFIA/GREAT observations. The APEX beam sizes of our CO (3–2), (4–3), (6–5), and (7–6) observations are also provided (upper right corner light green circles, see also Table \[tablea1\]). The maps are centred at (R.A.$_{[\rm{J}2000]}$=$18^h01^m52\fs3$, Dec$_{[\rm{J}2000]}$=$-23^\circ19'$25$''$). The black and light blue hexagons mark the position of the OH masers observed by @Claussen97 and @Hoffman05.[]{data-label="figure1"}](figure1.eps){width="9cm"}
In Fig. \[figure1\] the velocity-integrated CO (6–5) broad-line emission of W28F is shown overlaid with the CO (3–2) emission (white contours): a north-south elongated structure of about 100$''$ height and 30$''$ width traces the same warm accelerated post-shocked gas. In our high-resolution CO (6–5) data the structure is resolved, though probably still sub-structured, similar to what is seen in H$_2$ (e.g., Y11). Comparison with the distributions of excited H$_2$ and OH masers (whose locations also mark the leading edge of the non-thermal radio shell) suggests a textbook morphology of an SNR-molecular cloud interaction: the shock propagates E-NE into the ambient cloud that extends east for several arcmins. Hot H$_2$ and OH masers mark the first signposts of the shock-compressed gas. Farther downstream, the gas cooling is seen prominently in warm CO. The shock impact appears edge-on, but the fact that high (projected) streaming velocities are indeed observed (-30 km s$^{-1}$ with respect to the ambient cloud) requires a significant inclination angle.
![CO transitions observed in the position (+7$''$,-26$''$) indicated in Fig. \[figure1\]: APEX (3–2), black (corresponding $^{13}$CO, green); (4–3), pink; (6–5), dark blue; (7–6), light blue; and SOFIA (11–10), red. The $^{13}$CO (3–2)
---
abstract: 'This paper presents new analytical results for a class of nonlinear parabolic systems of partial differential equations with small cross-diffusion, which describe the macroscopic dynamics of a variety of large systems of interacting particles. Under suitable assumptions, we prove existence of classical solutions and we show exponential convergence in time to the stationary state. Furthermore, we consider the special case of one mobile and one immobile species, for which the system reduces to a nonlinear equation of Fokker-Planck type. In this framework, we improve the convergence result obtained for the general system and we derive sharper $L^{\infty}$-bounds for the solutions in two spatial dimensions. We conclude by illustrating the behaviour of solutions with numerical experiments in one and two spatial dimensions.'
address:
- 'Gran Sasso Science Institute, Viale Francesco Crispi 7, L’Aquila, 67100, Italy'
- 'University of Vienna, Oskar-Morgenstern-Platz 1, 1090 Vienna, Austria'
- 'Laboratoire Jacques-Louis Lions, Sorbonne-Université, 4 place Jussieu, 75252 Paris'
- 'Mathematics Institute, University of Warwick, Gibbet Hill Road, CV47AL Coventry, UK'
- ' Radon Institute for Computational and Applied Mathematics, Altenbergerstr. 69, 4040 Linz, Austria.'
author:
- Luca Alasio$^1$
- Helene Ranetbauer$^2$
- Markus Schmidtchen$^3$
- 'Marie-Therese Wolfram$^4$'
title: 'Trend to equilibrium for systems with small cross-diffusion'
---
Introduction
============
Background and motivation
-------------------------
In this paper we focus on a class of nonlinear cross-diffusion systems with sufficiently small off-diagonal diffusion terms. The smallness assumption ensures that the system we consider is [“close”]{}, in a suitable sense, to a linear, decoupled system. This allows us to adapt techniques developed in the theory of linear, parabolic systems of partial differential equations (PDEs) in order to study long-time behaviour of solutions of the nonlinear system. More specifically, we consider a parabolic system of PDEs of the form
$$\label{main0}
{\dfrac{\partial u}{\partial t}}
-\nabla \cdot \left\{
D(x) [ (I + \delta \Phi(u)) \nabla u
+ (\diag(\nabla V) + \delta \Psi(u) ) u ]
\right\}
=
0,$$
which is a compact notation for the system $${\dfrac{\partial u_{i}}{\partial t}} - \sum_{\alpha, \beta} {\dfrac{\partial}{\partial x_\alpha}}
\left\{ D_{i}^{\alpha\beta}(x)
\left[ \left(
{\dfrac{\partial u_{i}}{\partial x_\beta}} + \delta \sum_j \Phi_{ij}(u){\dfrac{\partial u_{j}}{\partial x_\beta}}
\right)
+
\left({\dfrac{\partial V_{i}}{\partial x_\beta}} u_i
+ \delta \sum_j \Psi_{ij}(u)u_{j} \right)
\right] \right\} = 0,$$ for $1\leq i, j \leq m,$ and $1\leq \alpha, \beta \leq d$. Here $m\geq 1$ represents the number of components (or species) and $d\in\{1,2,3\}$ the number of space dimensions. We denote by $u_i$ the $i$-th species. We will specify the assumptions on the diffusion tensor $D$, nonlinear mobilities $\Phi$ and $\Psi$ and the potential $V$ in Section \[sec:model\].
Cross-diffusion systems arise in multiple contexts in Physics, Life Sciences and Social Sciences; in particular, they have been derived as formal macroscopic limits of several microscopic models describing multi-species systems in the presence of finite volume effects, size exclusion or joint population pressures (see, for example, [@Bruna:2012cg; @Burger:2010gb; @perthame2015parabolic; @simpson2009multi]). Models with finite volume effects, which ensure physical bounds on the density, have received significant attention in the past years. Recently, Gavish et al. rigorously derived a mean-field model for a one-dimensional hard-rod system, see [@gavish2019large]. However, these techniques can be used in 1D only; in higher space dimensions only approximate models have been developed so far. These mean field models share common features, such as degenerate diffusion and small cross-diffusion terms (see, e.g., Examples 1.1-1.3 in [@alasio2018stability]). They can be derived from a lattice-based microscopic description (see, for instance, [@bodnar2005]) and a subsequent formal Taylor expansion of the associated master equation, as considered in [@burger2012nonlinear; @burger2016]. Another derivation was performed in [@Bruna:2012cg; @Bruna:2012wu], where the authors derived a cross-diffusion system from an underlying stochastic microscopic representation using the method of matched asymptotic expansions. We highlight that the two approaches yield different continuum models; however, they share the property of having small cross-diffusion terms. The smallness assumption is justified since the cross-diffusion terms are of the same order of magnitude as the microscopic particle size.
Gradient flow techniques
------------------------
In recent years, gradient flow methods have been successfully employed to study certain families of cross-diffusion systems, see, among others, [@jungel2016entropy; @difrancesco2018nonlinear; @desvillettes2015entropic]. Techniques such as the boundedness-by-entropy principle introduced in [@jungel2015boundedness] provide a mathematical framework to ensure existence and uniqueness of solutions to general nonlinear cross-diffusion systems that exhibit a gradient flow structure. In general, entropy methods have proven to be a very useful tool to analyse the long-time behaviour of evolution equations. In particular, the Bakry-Emery strategy (see [@bakry1985]) provides the necessary convex Sobolev inequalities to quantify the trend to equilibrium. Entropy-entropy dissipation estimates then allow one to deduce exponential decay rates for general classes of linear and nonlinear scalar evolution equations, see for example [@arnold2001convex]. The cross-diffusion systems mentioned above lack a full gradient flow structure (in the Wasserstein sense) in certain parameter ranges, even though the underlying microscopic system possesses a natural one. This lack is caused by approximations made due to the finite volume effects. A first connection between the large deviations of stochastic particle systems and macroscopic Wasserstein gradient flows was established in [@adams2011large] without finite volume constraints. Questions related to structural features of cross-diffusion systems to be interpreted as macroscopic gradient flows (as in [@zamponi2017analysis]) or to have a strong solution (see [@Berendsen2019]) were investigated rather recently.\
Unfortunately, the results and techniques mentioned above cannot be applied at this stage to the systems of PDEs we want to study, nevertheless in this work we present an alternative strategy that relies on the smallness of the off-diagonal cross-diffusion terms.
Numerical methods
-----------------
The development of computational methods for non-linear cross-diffusion systems, especially structure preserving schemes, advanced significantly with the recent analytic progress. Structure preserving methods are designed in such a fashion that they preserve important physical and structural features such as positivity, conservation of mass or the dissipation of the associated entropy. Owing to the fact that Wasserstein gradient flows are posed in the set of probability measures, conservation of mass (or probability) of solutions is an important physical feature and finite volume discretisations are a natural framework to guarantee this. In addition, in recent years, several advances have been made in designing flux approximations that are in agreement with the energy dissipation. Bessemoulin-Chatard and Filbet, see [@bessemoulin2012finite], were among the first to present a finite volume method for nonlinear degenerate parabolic equations, which resolved the long-time behaviour correctly. Based on their scheme, different finite volume schemes have been proposed for systems, see for example [@CHS18; @carrillo2018fvconvergence]. Other numerical approximations are based on the underlying variational Wasserstein gradient flow structure. These so-called variational schemes are often restricted to one space dimension, as the computational complexity of computing the Wasserstein distance in higher space dimension is significant, see, for instance, [@carrillo2016diffeo]. Also, convergence results are, to the best of our knowledge, restricted to one spatial dimension; cf. [@matthes2014].
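To make this concrete, the following is a minimal one-dimensional finite-volume sketch for the simplest instance of system \[main0\] (a single species with $\delta = 0$ and constant scalar $D$, with no-flux boundaries); the potential, grid, and time step are illustrative choices, not those of our later experiments. It conserves mass and relaxes towards the equilibrium $u_\infty \propto e^{-V}$.

```python
import numpy as np

# du/dt = d/dx ( D ( du/dx + V'(x) u ) ) on [0, 1] with no-flux boundaries.
J, dt, T = 64, 1e-4, 1.0            # cells, time step, final time
h = 1.0 / J
x = (np.arange(J) + 0.5) / J        # cell centres
D = 1.0
V = 8.0 * (x - 0.5) ** 2            # confining potential (example choice)
Vp = np.gradient(V, h)              # V' at cell centres

u = np.ones(J)                      # uniform initial density, total mass 1

for _ in range(int(T / dt)):
    # flux F = -D (du/dx + V' u) at interior faces (centred approximation)
    du = (u[1:] - u[:-1]) / h
    Vf = 0.5 * (Vp[1:] + Vp[:-1])
    uf = 0.5 * (u[1:] + u[:-1])
    F = -D * (du + Vf * uf)
    # no-flux boundaries: pad with zero flux, then take the divergence
    divF = np.diff(np.concatenate(([0.0], F, [0.0]))) / h
    u = u - dt * divF

u_inf = np.exp(-V)
u_inf /= u_inf.sum() * h
print("mass:", u.sum() * h, " L1 distance to equilibrium:",
      np.abs(u - u_inf).sum() * h)
```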
Summary of the main results
---------------------------
We consider a family of systems of PDEs with small cross-diffusion terms (namely system ) for which existence and uniqueness of solutions were investigated in [@alasio2018stability], as recalled in Proposition \[prop:ABC\]. We extend the analysis presented in [@alasio2018stability] by showing in Theorem \[thm:main1\] that (under suitable assumptions) the solutions are global-in-time and classical. Furthermore, we provide insights into the equilibration behaviour of such cross-diffusion systems in Theorem \[lem:energysys\]. A special instance of system , to which we shall return in Section \[sec:agf\], was analysed in [@bruna2017cross] using gradient flow techniques. This reduced PDE (namely problem ) can be interpreted as the
---
abstract: 'We study an insulator-metal transition in a ternary chalcogenide glass (GeSe$_3$)$_{1-x}$Ag$_x$ for $x$=0.15 and 0.25. The conducting phase of the glass is obtained by using “Gap Sculpting” (Prasai et al, Sci. Rep. 5:15522 (2015)) and it is observed that the metallic and insulating phases have nearly identical DFT energies but have a conductivity contrast of $\sim 10^8$. The transition from insulator to metal involves growth of an Ag-rich phase accompanied by a depletion of tetrahedrally bonded GeSe$_2$ in the host network. The relative fraction of the amorphous Ag$_2$Se phase and GeSe$_2$ phase is shown to be a critical determinant of DC conductivity.'
author:
- Kiran Prasai
- Gang Chen
- David Drabold
title: 'Amorphous to amorphous insulator-metal transition in GeSe$_3$:Ag glasses'
---
Metal-Insulator transitions (MIT) and their associated science are among the cornerstones of condensed matter physics [@mott2012]. In this Letter, we describe the atomistics of a technically important but poorly understood MIT in GeSe:Ag glasses, a prime workhorse of conducting bridge memory (CBRAM) devices [@patent1; @valov2011]. By [*design*]{}, we construct a stable conducting model from a slightly favored insulating phase. Predictions are made for structural, electronic and transport properties. We demonstrate the utility of our “Gap sculpting" method [@prasai2015] as a tool of Materials Design.
We report metallic phases of amorphous (GeSe$_3$)$_{1-x}$Ag$_{x}$ at $x=0.15$ and $0.25$. These are canonical examples of Ag-doped chalcogenide glasses, which are studied in relation to their photo-response and diverse opto-electronic applications [@kolobov2006; @inbook2]. Ag is remarkably mobile, making the material a solid electrolyte, and is known to act as a “network-modifier” in these glasses, altering the connectivity of the network. Experiments have shown Se rich ternaries ((Ge$_y$Se$_{1-y}$)$_{1-x}$Ag$_x$ with y $< 1/3$) to be phase-separated into an Ag-rich Ag$_2$Se phase and a residual Ge$_t$Se$_{1-t}$ phase [@mitkova1999].
Using first-principles calculations, we show that stable amorphous phases with at least $\sim 10^8$ times higher electronic conductivity exist with only a small ($\approx 0.04$ eV/atom) difference in total energy. These conducting states present the same basic structural order in the glass, but have a higher relative fraction of an [*a-*]{}Ag$_2$Se phase compared to the insulating states. It is known that amorphous materials are characterized by large numbers of degenerate conformations that are mutually accessible to each other at small energy cost, but those usually have identical macroscopic properties. The remarkable utility of these materials accrues from states with distinct properties, nevertheless readily accessible to each other.
We discover the conducting phase of GeSe$_3$Ag glass by [*designing*]{} atomistic models with a large density of states (DOS) near the Fermi energy [@prasai2015]. This is achieved by utilizing Hellmann-Feynman forces from the band edge states. These forces are used to bias the true forces in [*ab initio*]{} molecular dynamics (AIMD) simulations to form structures with a large DOS at the Fermi level. The biased force on atom $\alpha$, $F^{bias}_{\alpha}$, is obtained by suitably summing Hellmann Feynman forces for the band edge states (second term in Eq. \[eq\_a\]) with the total force from AIMD calculations, $F^{AIMD}_{\alpha}$. $$\label{eq_a}
{F}^{bias}_{\alpha} = {F}^{AIMD}_{\alpha}+\sum \limits_{i} \gamma_{i} \langle \psi_{i}| \frac{\partial H}{\partial R_{\alpha}}|\psi_{i} \rangle$$ Here, $\gamma$’s set the sign and magnitude of the HF forces from individual states [*i*]{}. To maximize the density of states near $\epsilon_F$, gap states closer to the valence edge will have $\gamma > 0$, whereas the states in the conduction edge will have $\gamma < 0$. The magnitude of $\gamma$ determines the size of the biasing force (with $\gamma=0$ representing true AIMD forces). We have employed biased forces as an electronic constraint to model semiconductors and insulators in our recently published work [@prasai2016], where the biasing is done in just the opposite sense: to force states out of the band-gap region.
We start with conventional 240 atom models of (GeSe$_3$)$_{1-x}$Ag$_x$, $x$=0.15 and 0.25, at their experimental densities 5.03 and 5.31 gm/cm$^3$ [@piarristeguy2000] respectively. These were prepared using melt-quench MD simulations, followed by conjugate-gradient relaxation to a local energy minimum. The MD simulations are performed using the Vienna [*Ab initio*]{} Simulation Package (VASP) [@kresse1; @*kresse2]. Plane waves of up to 350 eV are used as the basis and the DFT exchange-correlation functional of Perdew-Burke-Ernzerhof [@perdew1996] was used. The Brillouin zone (BZ) is represented by the $\Gamma$-point for the bulk of the calculations. For static calculations, the BZ is sampled over 4 k-points. These models fit the experimental structure factor reasonably well (Figure \[fig1\]).
![The structure factor of (GeSe$_3$)$_{1-x}$Ag$_x$ models (solid red line) compared with experiment (black squares)[@piarristeguy2000][]{data-label="fig1"}](sq.eps){width="0.8\linewidth"}
We obtain conducting conformations by annealing the starting configurations using biased forces at 700 K for 18 ps. The electronic states in the energy range \[$\epsilon_{F}$–0.4 eV, $\epsilon_{F}$+0.4 eV\] are included in the computation of the bias force and $\gamma = 3.0$ is used. The bias potential ($\Phi_{b}(R_{1},..,R_{3N})= \sum -\gamma_{i} \langle \psi_{i}|H(R_{1},..,R_{3N})|\psi_{i} \rangle$) shepherds the electronic states in the band edges into the band-gap region. Since we want any proposed metallic conformation to be a true minimum of the unbiased DFT energy functional, we relax instantaneous snapshots of the biased dynamics (taken at intervals of 0.2 ps, leaving out the first 4 ps) to their nearest minima using the conjugate-gradient algorithm with true DFT-GGA forces. We study all relaxed snapshots by i) gauging the density and localization of states around the Fermi energy and, ii) testing the stability of the configurations by annealing them at 300 K ([*n.b.*]{} glass transition temperatures ($T_g$) are 488 K and 496 K for compositions $x$=0.15 and 0.25 respectively [@arcondo2007]). At each composition, we selected five models that display a large density of extended states around the Fermi energy and are stable against extended annealing at 300 K as the ‘metallized’ models. These metallized systems are, on average, 0.040$\pm$0.009 eV/atom above their insulating counterparts.
![The electronic density of states (DOS) of the insulating model (black curve) and the metallized model (red curve). Energy axis is shifted to have Fermi level at 0 eV (the broken vertical line)[]{data-label="fig2"}](f51_DOSall_Fermi_0.eps){width="\linewidth"}
![The (black curve) electronic density of states (DOS) and (orange drop lines) Inverse Participation Ratio (IPR) of the insulating model (a) and the metallized model (b). Energy axis for all datasets is shifted to have Fermi level at 0 eV (highlighted by the broken vertical line)[]{data-label="fig3"}](DOSnIPR25.eps){width="\linewidth"}
The metallized models, by construction, show a large density of states around the Fermi energy (Fig. \[fig2\]) whereas the insulating models display small but well defined PBE gaps of 0.41 eV and 0.54 eV for $x$=0.15 and 0.25 respectively. For disordered materials, a high DOS at $\epsilon_F$ [*alone*]{} may not produce conducting behaviour since these states can be localized (example: amorphous graphene, [@pablo]). We gauge the localization of these states by computing the inverse participation ratio (IPR) [@ziman] (plotted for the $x$=0.25 system in Figure \[fig3\]) and show that these states [*are*]{} indeed extended. We compute the electronic conductivity \[$\sigma(\omega)$\] using the Kubo-Greenwood formula (KGF) in the following form: $$\label{eq_KGF}
\begin{aligned}
{\sigma}_{k}(\omega) = \frac{ 2 \pi e^{2} \hslash^{2}}{3 m^{2} \omega \Omega} \sum \limits_{j=1}^{N} \sum \limits_{i=1}^{N} \sum
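A minimal sketch of the standard IPR computation used above, for a normalised eigenvector, is given below; the 240-site example size simply matches the models used here, and the vectors themselves are illustrative.

```python
import numpy as np

def ipr(psi):
    """Inverse participation ratio of an eigenvector.

    IPR ~ 1/N for a fully extended state and O(1) for a localized one.
    """
    p = np.abs(psi) ** 2
    p /= p.sum()                      # normalise |psi_i|^2
    return np.sum(p ** 2)

# Example: a fully extended state vs. a fully localized one on 240 sites.
n = 240
print(ipr(np.ones(n) / np.sqrt(n)))   # ~ 1/240, extended
print(ipr(np.eye(n)[0]))              # = 1.0, localized
```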
---
abstract: 'I discuss Haldane’s concept of generalised exclusion statistics (Phys. Rev. Lett. [**67**]{}, 937, 1991) and I show that it leads to inconsistencies in the calculation of the particle distribution that maximizes the partition function. These inconsistencies appear when mutual exclusion statistics is manifested between different subspecies of particles in the system. In order to eliminate these inconsistencies, I introduce new mutual exclusion statistics parameters, which are proportional to the dimension of the Hilbert sub-space on which they act. These new definitions lead to properly defined particle distributions and thermodynamic properties. In another paper (arXiv:0710.0728) I show that the fractional exclusion statistics manifested in general systems with interaction has these physically consistent statistics parameters.'
address: 'Department of Theoretical Physics, National Institute for Physics and Nuclear Engineering–”Horia Hulubei”, Str. Atomistilor no.407, P.O.BOX MG-6, Bucharest - Magurele, Romania'
author:
- 'Dragoş-Victor Anghel'
title: The thermodynamic limit for fractional exclusion statistics
---
Introduction
============
Haldane’s concept of fractional exclusion statistics (FES) [@PhysRevLett.67.937.1991.Haldane] has been applied to the study of many types of physical systems, revealing interesting properties. For example it has been applied to strongly interacting systems, such as the Tomonaga-Luttinger model [@ProgrTheorPhys.5.544.1950.Tomonaga; @JMathPhys.4.1154.1963.Luttinger; @JMathPhys.6.304.1965.Mattis; @PhysRevLett.81.489.1998.Carmelo], the Colagero-Sutherland model [@JMathPhys.10.2191.1969.Colagero; @JMathPhys.12.247.1971.Sutherland; @PhysRevA.4.2019.1971.Sutherland; @PhysRevA.5.1372.1972.Sutherland; @PhysRevB.60.6517.1999.Murthy], the fractional quantum Hall effect [@PhysRevLett.72.600.1994.Veigy; @NuclPhysB470.291.1996.Hansson; @IntJModPhysA12.1895.1997.Isakov], or to interacting particles in one or two-dimensional systems, described in the mean field approximation [@PhysRevLett.73.3331.1994.Murthy; @PhysRevLett.74.3912.1995.Sen; @JPhysB33.3895.2000.Bhaduri; @PhysRevLett.86.2930.2001.Hansson]. The statistical properties of FES systems have been calculated mainly by Isakov [@PhysRevLett.73.2150.1994.Isakov] and Wu [@PhysRevLett.73.922.1994.Wu], while Iguchi extended the Fermi liquid model to the model of a FES liquid [@PhysRevLett.80.1698.1998.Iguchi; @PhysRevB.61.12757.2000.Iguchi]; the microscopic reason for the manifestation of FES has also been discussed by several authors [@PhysRevLett.73.3331.1994.Murthy; @PhysRevLett.74.3912.1995.Sen; @PhysRevB.60.6517.1999.Murthy; @NuclPhysB470.291.1996.Hansson; @IntJModPhysA12.1895.1997.Isakov; @PhysRevLett.86.2930.2001.Hansson; @PhysRevLett.85.2781.2000.Iguchi].
Although the concept has received much attention and has been applied to many types of systems, I will show here that when mutual exclusion statistics is manifested between different subspecies of particles in the system, FES leads to thermodynamic inconsistencies. I will also show that these inconsistencies can be corrected by a redefinition of the exclusion statistics parameters.
In a related paper I showed that fractional exclusion statistics appears in general systems of interacting particles and the statistics parameters indeed obey the rules conjectured here [@submitted.FESinteraction].
Thermodynamic inconsistencies in FES {#inconsistent}
====================================
In this section I will prove, using two model systems, that in FES systems the equilibrium particle populations are ambiguously defined if *mutual* statistics parameters are not zero. For this, I will recalculate the partition function and the most probable particle distribution in a FES system, following the procedure used by Wu in Ref. [@PhysRevLett.73.922.1994.Wu].
Haldane defined the fractional exclusion statistics as acting on Hilbert spaces of finite dimensions [@PhysRevLett.67.937.1991.Haldane]. If we have only one such space, in which we put $N$ ideal bosons or fermions, then the number of microscopic configurations in the system is $W_b=(G+N-1)!/[N!(G-1)!]$ (for bosons) or $W_f=G!/[N!(G-N)!]$ (for fermions). Fractional exclusion statistics of parameter $\alpha$ is an interpolation between these two cases and the number of configurations is $W=[G+(N-1)(1-\alpha)]!/\{N![G-\alpha N-(1-\alpha)]!\}$–we say that the addition of $\delta N$ particles in the system reduces the number of available states in the system by $\alpha\delta N$ [@PhysRevLett.67.937.1991.Haldane; @PhysRevLett.73.922.1994.Wu].
Now let us generalize the problem to the case when we have more than one Hilbert space. The spaces are ${{\mathcal H}}_0$, ${{\mathcal H}}_1$, …, of dimensions $G_0$, $G_1$, …, and which contain $N_0$, $N_1$, …, particles. In this case we have the FES parameters $\alpha_{ij}$, with $i,j=0,1,\ldots$. Mutual exclusion statistics is manifested between the spaces ${{\mathcal H}}_i$ and ${{\mathcal H}}_j$ ($i\ne j$) if $\alpha_{ij}\ne 0$–we say that the addition of $\delta N_j$ particles in the space ${{\mathcal H}}_j$ changes the number of available states in the space ${{\mathcal H}}_i$ by $-\alpha_{ij}\delta N_j$. With these notations, the total number of configurations is [@PhysRevLett.73.922.1994.Wu] $$\label{conf_number1}
W = \prod_i\frac{\left[G_i+N_i-1-\sum_j\alpha_{ij}(N_j-\delta_{ij})
\right]!}
{N_i!\left[G_i-1-\sum_j\alpha_{ij}(N_j-\delta_{ij})\right]!} .$$ Having the number of microscopic configurations (\[conf\_number1\]), if we assign the energy $\epsilon_i$ and the chemical potential $\mu_i$ to the states in the space $i$, we can calculate the grandcanonical partition function, ${{\mathcal Z}}$ [@PhysRevLett.73.922.1994.Wu], $${{\mathcal Z}}= \sum_{\{N_i\}} W(\{N_i\})\exp\left[
\sum_i \beta N_i(\mu_i-\epsilon_i) \right]\,, \label{cZ_gen}$$ and the total energy of the system in the given configuration, $E=\sum_i N_i\epsilon_i$–we use the notation $\beta=1/k_B T$, where $T$ is the temperature of the system.
The most probable configuration, $\{N_i\}$, is obtained by maximizing ${{\mathcal Z}}$ with respect to the set $\{N_i\}$. If we introduce the notations $n_i\equiv N_i/G_i$ and $\beta_{ij}\equiv\alpha_{ij}G_j/G_i$, and assume that for each $i$ both $G_i$ and $N_i$ are sufficiently large, so that we can use the Stirling approximation \[$\ln G_i! \approx G_i\ln(G_i/e)$ and $\ln N_i! \approx N_i\ln(N_i/e)$\], the maximization procedure gives us the system of equations, $$n_i e^{\beta(\epsilon_i-\mu_i)} =
\left[1+\sum_k(\delta_{ik}-\beta_{ik})n_k\right]
\prod_j\left[\frac{1-\sum_k\beta_{jk}n_k}
{1+\sum_k(\delta_{jk}-\beta_{jk})n_k}\right]^{\alpha_{ji}} \label{system_Wu}$$ The system (\[system\_Wu\]) is solved more easily if we denote $w_i\equiv n_i^{-1}-\sum_k\beta_{ik}n_k/n_i$. Using this notation, (\[system\_Wu\]) becomes $$(1+w_i)\prod_j\left(\frac{w_j}{1+w_j}\right)^{\alpha_{ji}} =
e^{\beta(\epsilon_i-\mu_i)} \label{EqforwWu}$$ and $n_i$s can be calculated from the new system, $$\sum_j(\delta_{ij}w_j+\
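In the single-species case ($\alpha_{ji} = \alpha\,\delta_{ji}$, so that $\beta_{ii} = \alpha$ and $w = n^{-1} - \alpha$), Eq. (\[EqforwWu\]) reduces to $w^{\alpha}(1+w)^{1-\alpha} = e^{\beta(\epsilon-\mu)}$ with occupation $n = 1/(w+\alpha)$. A minimal numerical sketch solving this, which recovers the Bose ($\alpha=0$) and Fermi ($\alpha=1$) limits, is:

```python
import numpy as np
from scipy.optimize import brentq

def occupation(x, alpha):
    """Mean occupation n(x) for single-species FES, with x = beta*(eps - mu).

    Solves w^alpha (1+w)^(1-alpha) = exp(x), then returns n = 1/(w + alpha).
    """
    if alpha == 0.0:                      # Bose limit (requires x > 0)
        return 1.0 / np.expm1(x)
    f = lambda w: alpha * np.log(w) + (1 - alpha) * np.log1p(w) - x
    w = brentq(f, 1e-300, 1e300)          # f is increasing in w
    return 1.0 / (w + alpha)

xs = np.linspace(0.1, 5.0, 5)
for a in (0.0, 0.5, 1.0):
    print(a, [round(occupation(x, a), 4) for x in xs])
# alpha = 1 reproduces the Fermi-Dirac distribution 1/(exp(x) + 1).
```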
---
abstract: 'Collisional–radiative (CR) models based on *ab initio* atomic structure calculations have been utilized for over 20 years to analyze many-electron atomic and ionic spectra. Although the population distribution of the excited states in plasmas and their emission spectra are computed using the CR models, a systematic and analytical understanding of the population kinetics is still lacking. In this work, we present a continuous CR model (CCRM), in which we approximate the dense energy structure of complex many-electron atoms by a continuum. Using this model, we predict asymptotic population distributions of many-electron atoms in plasmas and their electron-density and temperature dependence. In particular, the CCRM shows that the population distribution of highly excited states of many-electron atoms in plasmas resembles a Boltzmann distribution but with an effective excitation temperature. We also show the existence of three typical electron-density regions and two electron-temperature regions where the parameter dependence of the excitation temperature is different. Analytical representations of the effective excitation temperature and the boundaries of these phases are also presented.'
author:
- Akira Nishio
- 'Julian C. Berengut'
- Masahiro Hasuo
- Keisuke Fujii
bibliography:
- 'refs.bib'
title: ' Population kinetics of many-electron atoms in ionizing plasmas studied using a continuous collisional radiative model '
---
Introduction
============
The spectra of many-electron atomic ions can be seen in various optically thin plasmas. In the stellar atmosphere, neutral and singly charged iron (Fe) are the dominant components in the absorption spectra in terms of the number of lines [@Tousey1988]. Many Fe absorption lines have been identified to study the stellar atmosphere [@Nave1994; @Castelli2010; @Peterson2017]. Highly charged tin and actinide ions play an important role in realizing ultraviolet light sources [@Osullivan2015; @Suzuki2012; @Torretti2020], in which quasi-continuum emission in laser-produced plasmas is used. Since the radiative power should be concentrated into a particular energy region for commercial light-source realization, many works have been carried out to understand the population dynamics in plasmas [@Osullivan2015; @Torretti2020]. In fusion tokamak plasmas, highly charged tungsten ions convert electron kinetic energy to strong radiation and therefore need to be controlled [@Putterich2008; @Murakami2015]. The thermalization process of the nuclei in kilonovae, which has recently been probed from the emission of neutral transition metals, is yet to be understood [@Pian2017; @Tanaka2018].
The collisional–radiative (CR) model is the key tool to study the population kinetics of many-electron atoms in plasmas and their emission and absorption spectra. This model solves the steady-state equation for the excited-state populations of ions by taking into account the rates of elementary processes in plasmas. In order to perform accurate predictions, accurate atomic data are required, i.e., energy levels and transition rates of many elementary processes, including electron-impact excitations and spontaneous transitions. Therefore, many works have been dedicated to developing and improving *ab initio* calculations of these atomic data [@Gu2008; @amusia1997computation; @Bar-Shalom2001]. Although this first-principles approach has been successful in many cases [@TheVenin1999; @Dodin2014; @Torretti2020; @Murakami2015], accurate calculations are still difficult and computationally demanding, particularly for many-valence-electron atoms and ions. This difficulty comes from their strong wavefunction mixing, which requires an unacceptably large Hilbert basis space to represent their wavefunctions. Due to the complexity of the first-principles computation of the atomic structure, it is difficult to understand and validate CR-model results.
A probabilistic model may provide a complementary approach to the first-principles calculation. For a system with sufficiently strong mixing of basis states, i.e., systems exhibiting many-body quantum chaos, it is known that some of the properties of their atomic structure can be represented using a statistical theory [@Flambaum1994]. Although its applications to plasma diagnostics are very limited, we have recently shown that the intensity statistics of many-electron atoms can be understood from this structure, and can be used to measure electron temperature in plasmas [@Fujii2020]. Since the use of the probabilistic nature of many-electron atoms requires only a small amount of atomic data, this approach is not only robust against possible numerical errors, but also gives systematic insight into the population kinetics in plasmas.
In this work, we develop a continuous CR model (CCRM), in which we approximate the dense energy levels of many-electron atoms by a continuum based on the statistical theory. For our simplified theory, only two atomic parameters are used to represent the spectrum: the energy scale of the level density growth; and another energy scale that describes the decay of transition strengths. Based on this model, we will show that the population distribution of highly excited states of many-electron atoms is similar to Boltzmann’s distribution but with an effective excitation temperature $T_\mathrm{ex}$. The dependence of this excitation temperature on electron density ($n_\mathrm{e}$) and temperature ($T_\mathrm{e}$) is then studied using the CCRM, revealing the existence of three typical $n_\mathrm{e}$ regions and two $T_\mathrm{e}$ regions. In particular, it is shown that in low $T_\mathrm{e}$ regions, the excitation temperature becomes almost $T_\mathrm{e}$ even in low $n_\mathrm{e}$ plasmas. This property indicates much wider applicability of the Boltzmann method, which is a well-known method to estimate $T_\mathrm{e}$ values from the emission spectra in high $n_\mathrm{e}$ plasmas. It also indicates the wider applicability of the new temperature diagnostics based on the line intensity statistics, in which the Boltzmann population distribution is assumed [@Fujii2020]. The population kinetics is also compared with that of hydrogen (H)-like ions, which has been extensively studied as shown in Fig. \[fig:diagram\] (a) (see also Appendix \[sec:hydrogen\] for details) [@Fujimoto].
In section \[sec:crmodel\], we briefly describe the principle of the CR model and show some simulation results obtained using an *ab initio* calculation code for several many-electron atomic ions. In section \[sec:model\], we present our CCRM to study the population kinetics of many-electron atoms and compare it with the *ab initio* simulation result. In section \[sec:discussions\], we discuss its parameter dependence.
Summary
=======
In this work, we studied the population kinetics of many-electron atoms in plasmas. From the statistical theory of the many-electron-atom structure, we constructed a continuous CR model that has only two atom-specific energy scales as parameters, $\epsilon_0$ and $\sigma$. From this model, the population distribution in highly excited states was found to be Boltzmann-like, but with an excitation temperature sometimes smaller than the electron temperature. We also clarified that there are different phases depending on the values of $n_\mathrm{e}$ and $T_\mathrm{e}$ and derived analytical representations of the boundaries.
Some of our findings can be directly used for plasma diagnostics. For example, the Boltzmann method has been frequently used to estimate $T_\mathrm{e}$, based on the slope of the population distribution and the assumption of the saturation phase, and therefore the applicability of this method has been limited to high-density plasmas. However, as can be seen in Fig. \[fig:Tex\], if $T_\mathrm{ex} < \epsilon_0 / 2k$, then $T_\mathrm{ex} \approx T_\mathrm{e}$ can be inferred. This clearly shows much wider applicability of the Boltzmann method for low-temperature plasmas. This property also enables us to use a new temperature diagnostic using line intensity statistics, which was proposed in Ref. [@Fujii2020]. By contrast, if $T_\mathrm{ex} \gtrsim \epsilon_0 / 2k$, then the inference of $T_\mathrm{e}$ may be difficult without knowing $n_\mathrm{e}$.
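As a reminder of what the Boltzmann method computes, the following minimal sketch infers $T_\mathrm{ex}$ from the slope of $\ln(n_i/g_i)$ versus excitation energy; the level data are synthetic and purely illustrative (experimentally, $n_i/g_i$ is obtained from line intensities).

```python
import numpy as np

k_B = 8.617e-5                         # Boltzmann constant, eV/K
T_true = 11000.0                       # K, assumed excitation temperature
E = np.linspace(2.0, 6.0, 20)          # eV, upper-level energies
g = np.random.default_rng(2).integers(1, 10, size=E.size)  # degeneracies
n = g * np.exp(-E / (k_B * T_true))    # Boltzmann-distributed populations

# Boltzmann plot: ln(n_i/g_i) = -E_i/(k_B T_ex) + const, so a linear fit
# of ln(n/g) against E gives slope = -1/(k_B T_ex).
slope, _ = np.polyfit(E, np.log(n / g), 1)
print("T_ex =", -1.0 / (k_B * slope), "K")   # recovers ~11000 K
```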
For highly charged ions in low-density and high-temperature plasmas, such as heavy ions in tokamak core plasmas or in electron-beam ion traps, the population is mostly concentrated in the low excited states, to which our model is not applicable. However, our finding [Eq. (\[eq:Tex\_highT\_lowN\])]{} may still be useful to estimate the cascade contribution from very highly excited states, which is difficult to treat from first principles owing to the enormous computational resources required.
In this work, we only compared our model with another simulation model, FAC. A comparison with experimental observations is desirable; however, because of the difficulty of level identification and of accurately computing transition rates for highly excited states, such a comparison is not feasible at the current stage. We leave it for future studies.
In principle, our CCRM could be further developed to include additional atomic structure data, such as more sophisticated line-strength functions, based on individual orbitals within the statistical theory of many-body quantum chaos [@Flambaum1998]. While this would not add significant computational overhead, the simplicity of our current formulation, [Eq. (\[eq:strength-function\])]{}, allows for analytical exploration of the effective excitation temperature through phase space.
This work was partly supported by JSPS K
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'We consider the linear programming approach for constrained and unconstrained Markov decision processes (MDPs) under the long-run average cost criterion, where the class of MDPs in our study have Borel state spaces and discrete countable action spaces. Under a strict unboundedness condition on the one-stage costs and a recently introduced majorization condition on the state transition stochastic kernel, we study infinite-dimensional linear programs for the average-cost MDPs and prove the absence of duality gap and other optimality results. Our results do not require a lower-semicontinuous MDP model and as such, they can be applied to countable action space MDPs where the dynamics and one-stage costs are discontinuous in the state variable. The proofs of these results make use of the continuity property of Borel measurable functions asserted by Lusin’s theorem.'
author:
- 'Huizhen Yu[^1]'
bibliography:
- 'minpair\_lp\_bib.bib'
title: 'On Linear Programming for Constrained and Unconstrained Average-Cost Markov Decision Processes with Countable Action Spaces and Strictly Unbounded Costs'
---
[**Keywords:**]{}\
Markov decision processes; Borel state space; countable action space; average cost; constraints\
minimum pair; majorization condition; infinite-dimensional linear programs; duality
Introduction
============
We consider discrete-time Markov decision processes (MDPs) with the long-run average cost criterion. Our focus will be on the linear programming (LP) approach, for a class of unconstrained and constrained MDPs that have Borel state spaces, discrete countable action spaces, and strictly unbounded one-stage costs.
LP methods for average-cost MDPs have a long history and an extensive literature (see e.g., [@DeF68; @HoK79; @HoK84; @Kar83] for some early work on finite-space MDPs, [@Bor88; @Bor94; @HoL94; @HuK97; @Las94] on countable state space and countable or compact action space MDPs, and [@HGL03; @HL94; @HL02; @KNH00; @Yam75] on Borel space MDPs; see also the books [@Alt99; @FS02; @HL96; @HL99; @Puterman94] and their references). For the special case of strictly unbounded costs we consider, where the one-stage costs are nonnegative and grow unboundedly outside certain increasing compact sets of the state-action spaces (cf. Assumption \[cond-pc-3\](SU)), there is a line of research that connects the LP approach with the minimum pair approach for average-cost MDPs (see e.g., [@HL99 Chaps. 11-12]), which is also related to the convex analytic approach [@Bor88]. The idea of the minimum pair approach is to consider average costs of all policies for all initial distributions, with the goal of finding a stationary policy and an associated initial distribution that together attain the minimum average cost. The associated initial distribution here is not an arbitrary one but an invariant probability measure induced by that stationary policy; we shall call such a pair of policy and initial distribution a “stationary minimum pair.” When the one-stage costs are strictly unbounded, under various conditions on the MDP model (to be explained below), a stationary minimum pair can be proved to exist. Finding such a pair can be formulated as a linear program in the space of stationary policies and their induced invariant probability measures. This provides a method to solve the average-cost MDP or to gain further insights about its dynamic programming-related properties, such as optimality equations, through the duality relationships in LP. This method can be applied to general multichain MDPs, which is an advantage since the chain structure of an MDP can be complicated and hard to analyze especially when the state space is uncountably infinite.
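As a concrete illustration of this formulation, the following sketch solves the finite state- and action-space analogue of the occupation-measure linear program (our simplification for illustration only; the programs studied in this paper are infinite-dimensional and posed over Borel spaces): minimize $\sum_{s,a} c(s,a)\mu(s,a)$ over nonnegative measures $\mu$ satisfying stationarity and normalization constraints.

```python
import numpy as np
from scipy.optimize import linprog

def average_cost_lp(P, c):
    """Occupation-measure LP for a finite average-cost MDP.

    P : array (S, A, S), P[s, a, s'] = transition probability
    c : array (S, A),    one-stage costs
    Minimizes sum_{s,a} c(s,a) mu(s,a) subject to
      sum_a mu(s', a) = sum_{s,a} P[s, a, s'] mu(s, a)  for all s',
      sum_{s,a} mu(s, a) = 1,  mu >= 0.
    """
    S, A = c.shape
    n = S * A
    A_eq = np.zeros((S + 1, n))
    for sp in range(S):
        for s in range(S):
            for a in range(A):
                A_eq[sp, s * A + a] -= P[s, a, sp]
        for a in range(A):
            A_eq[sp, sp * A + a] += 1.0
    A_eq[S, :] = 1.0                      # normalization row
    b_eq = np.zeros(S + 1)
    b_eq[S] = 1.0
    res = linprog(c.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
    return res.fun, res.x.reshape(S, A)   # optimal average cost, mu*
```

A stationary policy can then be read off as the conditional distribution $\mu^*(a \mid s)$ wherever $\sum_a \mu^*(s,a) > 0$, which is exactly the stationary-minimum-pair construction in the finite case.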
To our knowledge, Denardo [@Den70] was the first to propose this LP method for solving finite-space multichain MDPs (although he focused more on algorithms than on the minimum pair idea as a general approach). For infinite Borel space MDPs, the minimum pair approach was introduced by Kurano [@Kur89], motivated by the ideas of occupancy measures from Borkar [@Bor83; @Bor84], and it was further developed by Hernández-Lerma [@HLe93], Lasserre [@Las99], and Vega-Amaya [@VAm99] (see also [@HL99 Chap. 11]). (Kurano considered compact state and action spaces; Borkar’s work was for countable state spaces. Hernández-Lerma, Lasserre, and Vega-Amaya considered non-compact spaces with strictly unbounded costs, and obtained further results, including strong and pathwise average-cost optimality results, besides the existence of a stationary minimum pair.) Hernández-Lerma and Lasserre [@HL94] (see also the book chapters [@HL99 Chap. 12] and [@HL02]) formulated an LP framework for Borel space average-cost MDPs by using the theory of infinite-dimensional LP (Anderson and Nash [@AnN87]). They characterized the relation between the values of the primal/dual linear programs and the minimum average cost of an MDP, and proved the absence of duality gap under tightness conditions closely related to the minimum pair method. Before [@HL94], a much earlier duality result was proved by Yamada [@Yam75] for compact Euclidean state and action spaces and bounded costs, under geometric ergodicity conditions. Additional results and generalizations of some of the results of [@HL94] were given by Hernández-Lerma and González-Hernández [@HG98]. Extensions of the LP framework to constrained MDPs were subsequently studied by Kurano et al. [@KNH00] for compact spaces and by Hernández-Lerma et al. [@HGL03] for non-compact spaces.
Our work builds upon the earlier research on Borel space constrained and unconstrained MDPs just mentioned. The action space in those prior results is more general than the countable action space we deal with in this paper. However, except for [@Yam75], they all require a lower-semicontinuous MDP model assumption—namely, they require the one-stage cost functions to be lower semicontinuous and the state transition stochastic kernels to be (weakly) continuous ([@Yam75] involves different continuity conditions; see Remark \[rmk-Yamada\] for details). This is a restriction.
Recently, to deal with Borel space MDPs without such continuity properties, we introduced in [@Yu19-minp] a majorization condition on the state transition stochastic kernel instead, for the case of countable action spaces (with the discrete topology). We obtained the existence of a stationary minimum pair and other average-cost optimality results analogous to those for lower-semicontinuous MDPs given by [@HLe93; @Kur89; @Las99; @VAm99]. The purpose of the majorization condition was to make use of Lusin’s theorem on the continuity of Borel measurable functions [@Dud02 Thm. 7.5.2]. Roughly speaking, we require the existence of finite Borel measures on the state space that can majorize certain sub-stochastic kernels created from the state transition stochastic kernel, at all admissible state-action pairs (see Assumption \[cond-pc-3\](M)). We then use those majorizing finite measures in combination with Lusin’s theorem, so as to extract arbitrarily large sets (large as measured by a given finite measure) on which certain Borel measurable functions involved in our analysis have desired continuity properties. With this technique, although its application range is currently limited to the case of countable action spaces, we are able to avoid the lower-semicontinuous model assumption and obtain results in [@Yu19-minp] that can be applied to MDPs with discontinuous dynamics and one-stage costs.
The purpose of the present paper is to study further the implications of the majorization condition and Lusin’s theorem in the LP context, for both unconstrained and constrained MDPs. Our main contributions can be summarized as follows:
1. For unconstrained average-cost MDPs, under the strictly unbounded cost condition and the majorization condition, we prove there is no duality gap between the primal and dual linear programs in an LP formulation (see Theorem \[thm-1\]).
2. For constrained average-cost MDPs, under conditions similar to those in (i), we first prove the existence of a stationary optimal pair and a stationary lexicographically optimal pair (which are analogous to stationary minimum pairs for unconstrained MDPs), and we then prove the absence of a duality gap for an LP formulation (see Theorem \[thm-4.1\] and Theorem \[thm-4.2\], respectively).
In addition, we also discuss the maximizing sequences of dual linear programs and their relation with certain versions of average cost optimality equations (ACOE) (see Prop. \[prp-2\] for unconstrained MDPs and Props. \[prp-4.3\], \[prp-4.4\] for constrained MDPs). Our results for unconstrained (resp., constrained) MDPs given in this paper can be compared with some of the prior results in [@HL99 Chap. 12] and [@HL02] (resp., [@HGL03] and [@KNH00]) for lower-semicontinuous models.
While this paper focuses on the average cost criterion, the analysis we give, with
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'The unique ghost-free mass and nonlinear potential terms for general relativity are presented in a diffeomorphism and local Lorentz invariant vierbein formalism. This construction requires an additional two-index Stückelberg field, beyond the four scalar fields used in the metric formulation, and unveils a new local SL(4) symmetry group of the mass and potential terms, not shared by the Einstein-Hilbert term. The new field is auxiliary but transforms as a vector under two different Lorentz groups, one of them the group of local Lorentz transformations, the other an additional global group. This formulation enables a geometric interpretation of the mass and potential terms for gravity in terms of certain volume forms. Furthermore, we find that the decoupling limit is much simpler to extract in this approach; in particular, we are able to derive expressions for the interactions of the vector modes. We also note that it is possible to extend the theory by promoting the two-index auxiliary field into a Nambu-Goldstone boson nonlinearly realizing a certain space-time symmetry, and show how it is “eaten up" by the antisymmetric part of the vierbein.'
author:
- Gregory Gabadadze
- Kurt Hinterbichler
- David Pirtskhalava
- Yanwen Shang
title: On the Potential for General Relativity and its Geometry
---
1. Introduction and Summary
===========================
Einstein’s gravity is the theory that describes the two degrees of freedom of the massless helicity-2 representation of the Poincaré group, and their two-derivative self-interactions. One may ask whether it is possible to alter the interactions of the graviton beyond those dictated by the Einstein-Hilbert (EH) action. At the lowest, zero-derivative level, such a deformation would correspond to adding a potential for the metric perturbation. An obvious example is the potential described by the cosmological constant (CC) term, ${\cal L}_0\sim \sqrt{-g}\Lambda$. This changes neither the number of propagating degrees of freedom of general relativity (GR), nor the consistency of the theory, but necessarily alters the background spacetime.
The CC is the only such term – other potentials inevitably change the number of degrees of freedom. The Fierz-Pauli term [@Fierz:1939ix] is the unique consistent quadratic potential that gives rise to 5 degrees of freedom, as required by the massive spin-2 representation of the Poincaré group. Adding a generic potential to the EH action however leads to the loss of all four Hamiltonian constraints of GR, and thus a total of six propagating degrees of freedom, one of which is necessarily a ghost [@Boulware:1973my].
Nevertheless, there exists a special class of mass and potential terms (the often-called dRGT terms [@deRham:2010ik; @deRham:2010kj], see [@Hinterbichler:2011tt] for a review) that make the graviton massive, while retaining one of the four Hamiltonian constraints. This remaining constraint projects out the ghostly sixth degree of freedom [@Hassan:2011hr; @Hassan:2011ea], see also [@Mirbabayi:2011aa; @Hinterbichler:2012cn; @Deffayet:2012nr].
In addition to the CC term, the dRGT construction allows for $3$ free parameters. One combination is the graviton mass, $m$, and the other two independent combinations, $\alpha_3$ and $\alpha_4$, set the strength of the nonlinear potential. The theory can be formulated by using four spurious diffeomorphism scalars, $\phi^{\bar a}$ – first introduced in an earlier proposal for massive gravity [@Siegel:1993sk] – to allow for a manifestly diffeomorphism-invariant description. Adopting these four scalars, and following [@deRham:2010kj], one can define a matrix with components $\mathcal{K}^\mu_{~\nu}=\delta^\mu_\nu-\sqrt{g^{\mu\alpha}\p_\alpha\phi^{\bar a}\p_\nu\phi^{\bar b}\eta_{\bar a\bar b}}$, which can be used to build invariants supplementing the EH action by the graviton mass as well as zero-derivative interactions that guarantee 5 degrees of freedom on an arbitrary background. One such term is given by [@deRham:2010kj] $$\label{u2}
{\cal L}_2 \sim \delta_{\mu_1\mu_2}^{\nu_1\nu_2}\,\mathcal{K}^{\mu_1}_{~\nu_1}\mathcal{K}^{\mu_2}_{~\nu_2}\,.$$ The remaining two possible terms ${\cal L}_{3,4}$, cubic and quartic in $\mathcal{K}$ respectively, can be obtained as the higher-order generalizations of (\[u2\])[^1], $$\begin{aligned}
\label{u34}
{\cal L}_3 &\sim& \alpha_3\, M_{Pl}^2 m^2\, \delta_{\mu_1\mu_2\mu_3}^{\nu_1\nu_2\nu_3}\,\mathcal{K}^{\mu_1}_{~\nu_1}\mathcal{K}^{\mu_2}_{~\nu_2}\mathcal{K}^{\mu_3}_{~\nu_3}\,,\\
{\cal L}_4 &\sim& \alpha_4\, M_{Pl}^2 m^2\, \delta_{\mu_1\mu_2\mu_3\mu_4}^{\nu_1\nu_2\nu_3\nu_4}\,\mathcal{K}^{\mu_1}_{~\nu_1}\mathcal{K}^{\mu_2}_{~\nu_2}\mathcal{K}^{\mu_3}_{~\nu_3}\mathcal{K}^{\mu_4}_{~\nu_4}\,,\end{aligned}$$ where $\delta^{\nu_1\cdots\nu_n}_{\mu_1\cdots\mu_n}$ denotes the generalized Kronecker delta.
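The antisymmetrized index structure of these terms means that ${\cal L}_2$, ${\cal L}_3$ and ${\cal L}_4$ are, up to normalization, the second, third and fourth elementary symmetric polynomials of the eigenvalues of $\mathcal{K}$. The following short sketch (our illustration, not from the original text) evaluates them from traces via Newton's identities, which is how such contractions are typically computed in practice:

```python
import numpy as np

def sym_polys(K):
    """Elementary symmetric polynomials e_1..e_4 of a 4x4 matrix K,
    from the traces p_k = tr(K^k) via Newton's identities.  The
    antisymmetric contractions in L_2, L_3, L_4 are proportional to
    e_2(K), e_3(K), e_4(K)."""
    p = [np.trace(np.linalg.matrix_power(K, k)) for k in range(1, 5)]
    e1 = p[0]
    e2 = (p[0]**2 - p[1]) / 2
    e3 = (p[0]**3 - 3*p[0]*p[1] + 2*p[2]) / 6
    e4 = (p[0]**4 - 6*p[0]**2*p[1] + 3*p[1]**2 + 8*p[0]*p[2] - 6*p[3]) / 24
    return e1, e2, e3, e4
```

For a $4\times4$ matrix, `e4` equals $\det\mathcal{K}$, which provides a quick numerical consistency check of the identities.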
In addition to being invariant under the global Poincaré subgroup, $ISO(3,1)_{\text{GCT}}$, of the group of general coordinate transformations (GCT), the theory is invariant under an additional, global internal Poincaré group, $ISO(3,1)_{\text{INT}}$, realized on the “flavor" indices of the scalars, as first pointed out by Siegel in an earlier context [@Siegel:1993sk] $$\phi^{\bar a}\to L^{\bar a}_{~\bar b}\,\phi^{\bar b}+c^{\bar a}. \label{Siegel}$$ Generation of the graviton mass occurs in the phase defined by the vacuum expectation value (VEV) of the order parameter $\langle \p_\mu\phi^{\bar a}\rangle=\delta^{\bar a}_{\mu}$. This results in the spontaneous symmetry breaking pattern of the global symmetry group $$ISO(3,1)_{\text{GCT}}\times ISO(3,1)_{\text{INT}}\to ISO(3,1)_{\text{ST}}\,.$$ The unbroken $ISO(3,1)_{\text{ST}}$ group guarantees that the resulting theory is invariant under the ordinary spacetime (ST) Poincaré transformations. Three of the four auxiliary scalars $\phi^{\bar a}$ are “eaten" by the graviton to form a massive spin-2 representation of the latter group, while the fourth, potentially ghostly scalar is made non-dynamical by the single remaining Hamiltonian constraint of massive GR, originating from the specific structure of the dRGT terms ${\cal L}_{2,3,4}$.
The dRGT theory gets rid of the sixth ghostly mode, and also guarantees that the remaining 5 are unitary degrees of freedom at low energies and on nearly-Minkowski backgrounds (i.e., backgrounds with typical curvature smaller than the graviton mass squared). However, the theory does not guarantee that for more general backgrounds the 5 physical modes are healthy. In fact, some of their kinetic terms may change sign around certain cosmological backgrounds. Moreover, for a large region of the $\alpha_3,\alpha_4$ parameter space, the potential is known to violate the null energy condition and one often gets kinetic and gradient terms that give rise to superluminal group and phase velocities. Most of the above issues stem from one and the same source: the dRGT theory is strongly coupled at the energy/momentum scale $\Lambda_3 \equiv (M_{Pl} m^2)^{1/3}$ [@deRham:2010ik; @deRham:2010kj]. As a result, a typical curvature of order $m^2$ produces order 1 corrections to the kinetic terms for fluctuations, often giving rise to vanishing or negative kinetic terms, or superluminal group and phase velocities (for brief comments on the current state of affairs on all these issues, see Section 6).
As for any strongly coupled theory, an extension above the scale $\Lambda_3$ is desirable[^2]. However, it is hard to think of such an extension since the Lagrangian contains square roots of the longitudinal modes (represented by the $\phi^{\bar a}$’s). This inconvenience might be mitigated by using the vierbeins, which are square roots of the metric. The goal of the present work is to rewrite the theory in terms of the vierbeins in a GCT and local Lorentz transformation (LLT) invariant form. The hope is that this form of the theory might make it easier to find a weakly coupled completion. Also, irrespective of that, the vierbein formulation itself merits separate consideration.
A vierbein reformulation of the theory was given by one of us and R. A. Rosen[^3] [@Hinterbichler:2012cn]. That work focused on a unitary gauge description, which for a single massive graviton is not GCT or LLT invariant. In the present work, we give a GCT and LLT invariant action for a massive graviton.
We find that such a formulation requires a new two-index Stückelberg field, $\lambda^a_{~\bar a}$, in addition to the four scalar fields $\phi^{\bar a}$ used in the metric description. The new field is auxiliary and enters the action algebraically. To recover dRGT, this field should transform as a vector under two different Lorentz groups, $ \lambda^a_{~\bar a} \to Q^a_{~b}(x) \lambda^b_{~\bar a} $, and
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'Collisionless plasmas, mostly present in astrophysical and space environments, often require a kinetic treatment as given by the Vlasov equation. Unfortunately, the six-dimensional Vlasov equation can only be solved on very small parts of the considered spatial domain. However, in some cases, e.g. magnetic reconnection, it is sufficient to solve the Vlasov equation in a localized domain and treat the remaining domain with appropriate fluid models. In this paper, we describe a hierarchical treatment of collisionless plasmas in the following way. On the finest level of description, the Vlasov equation is solved both for ions and electrons. The next coarser description treats electrons with a 10-moment fluid model incorporating a simplified treatment of Landau damping. At the boundary between the electron kinetic and fluid regions, the central question is how the fluid moments influence the electron distribution function. On the next coarser level of description the ions are treated by a 10-moment fluid model as well. It may turn out that in some spatial regions far away from the reconnection zone the temperature tensor in the 10-moment description is nearly isotropic. In this case it is even possible to switch to a 5-moment description. This change can be done separately for ions and electrons. To test this multiphysics approach, we apply these physics-adaptive simulations to the Geospace Environmental Modeling (GEM) challenge of magnetic reconnection.'
author:
-
bibliography:
- 'lit.bib'
title: Multiphysics simulations of collisionless plasmas
---
Introduction
============
One of the most important challenges in astrophysical, space and fusion plasmas is the treatment of different spatial and temporal scales and the correct physical description on each of these different scales.
In order to give a rough estimate for different plasma systems, let us first consider the warm ionized phase (diffuse ionized hydrogen) in the interstellar medium. Here, the smallest relevant kinetic scales are on the order of kilometres, while the global scale of the system is about $10^{13}$ km. In the heliosphere the scales are altogether smaller (kinetic scales about $2$ km, system scale about $10^8$ km), but the ratio of global to kinetic scales is still astronomical in the truest sense. The situation is similar in fusion plasmas: the electron skin depth is about $5\cdot 10^{-4}$ m and the vessel measures about $10$ m. In all these cases, it is not possible to carry out simulations which represent all scales with the finest level (kinetic equations) of the physical description. Most of these plasmas can be considered as collisionless, since collision times are orders of magnitude larger than the time scales relevant for the dynamical evolution of the plasma. Such plasmas can be modelled with the kinetic Vlasov equation. Nevertheless, kinetic models are inherently computationally expensive, so that large–scale simulations of typical phenomena, as for example magnetic reconnection or collisionless shocks, are hardly feasible and only possible in localized regions of interest. As an alternative, much cheaper fluid models can be considered, but they lack the expressiveness and some physics of full kinetic models, even though some of the effects may be included. Simple treatments and modelling of Landau damping in the same context were proposed and analyzed in [@wang-et-al:2015; @ng-hakim-etal:2017; @allmann-rahn-trost-grauer:2018]. These studies were based on the closure introduced by @hammett-perkins:1990 and successive work in this direction [@hammett-dorland-perkins:1992; @passot-sulem:2003]. An extension providing heat fluxes in the parallel and perpendicular directions (with respect to the magnetic field) was presented in [@sharma-hammett-etal:2003]. An excellent overview is given in @chust-belmont:2006.
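As an illustration of the simplest of these closures, the following sketch (our own, with illustrative discretization choices) evaluates the three-moment Hammett-Perkins heat flux spectrally in one dimension, $\tilde q_k = -n_0\,\chi_1\sqrt{2}\,v_t\,(\mathrm{i}k/|k|)\,\tilde T_k$ with $\chi_1 = 2/\sqrt{\pi}$, which mimics linear Landau damping at the fluid level:

```python
import numpy as np

def hammett_perkins_heat_flux(T, dx, n0, vt):
    """1D Hammett-Perkins heat flux, q_k = -n0*chi1*sqrt(2)*vt*(i k/|k|)*T_k,
    with chi1 = 2/sqrt(pi); T is the temperature fluctuation on a
    periodic grid with spacing dx."""
    chi1 = 2.0 / np.sqrt(np.pi)
    Tk = np.fft.rfft(T)
    k = 2.0 * np.pi * np.fft.rfftfreq(T.size, d=dx)
    khat = np.zeros_like(k)
    khat[1:] = 1.0                     # k/|k| (rfft keeps only k >= 0)
    qk = -n0 * chi1 * np.sqrt(2.0) * vt * 1j * khat * Tk
    return np.fft.irfft(qk, n=T.size)
```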
Fortunately, many relevant problems like magnetic reconnection or collisionless shocks exhibit a rather clear separation of scales and regimes such that an adaptive approach is promising and might combine the best of the two worlds: cheap models where they are sufficient and detailed models where they are necessary and interesting. The idea of coupling different physical models is not new and has been applied in different physical contexts. Schulze et al. [@Schulze2003] couple kinetic Monte-Carlo and continuum models in the context of epitaxial growth. Considerable efforts have been made to couple kinetic Boltzmann descriptions with fluid models (see e.g. [@deg2010; @del2003; @gou2013; @tiwari-klar:1998; @Tal1997]). In the context of plasma physics Sugiyama and Kusano [@Sug2007], Markidis et al. [@Mar2014] and Daldorf et al. [@daldorff-et-al:2014] show ways to combine PIC and MHD fluid models, and Kolobov and Arslanbekov [@Kol2012] describe the transition from neutral gas models to models of weakly ionized plasmas.
We take a slightly different route in solving the Vlasov equation on the finest relevant scales and then adaptively use less and less detailed fluid models outside the kinetic region. In this way we have some control where to use which kind of physical model at the expense of dealing with a substantially more complicated computational infrastructure.
Our group has developed and is continuously developing and improving methods and codes that are capable of combining kinetic and fluid models during runtime [@rieke-et-al:2015], making it possible to consider problems of the type mentioned above at much lower expenses than before.
A sketch of this hierarchy is depicted in figure \[fig:sketch\]. In the inner zone, both ions and electrons are treated kinetically and solved with the Vlasov equation. Adjacent to this zone, ions are still modelled with the Vlasov equation but electrons are described with a 10-moment fluid model. On the next coarser level of description, the ions are also described by a 10-moment fluid model. To ease the transition from the kinetic to the 10-moment fluid description we apply the Landau closure developed in [@wang-et-al:2015] in the fluid description.
![Oversimplified sketch of a multiphysics approach for tail reconnection[]{data-label="fig:sketch"}](earthfieldlines){width="90.00000%"}
It may turn out that in some spatial regions outside the reconnection zone the temperature tensor in the 10-moment description is nearly isotropic. In this case it is even possible to switch to a 5-moment description; a simple switching criterion is sketched below. This change can be done separately for ions and electrons. In future studies we will also try to include the coupling of the 5-moment model to magnetohydrodynamic (MHD) models (with generalised Ohm's law), which would represent the last step in this hierarchy.
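A minimal sketch of such a criterion follows (our illustration; the tolerance value and the Frobenius-norm measure are assumptions, not necessarily the criterion used in our implementation):

```python
import numpy as np

def anisotropy(P):
    """Relative deviation of a 3x3 pressure tensor from isotropy:
    ||P - p*I||_F / ||P||_F with scalar pressure p = tr(P)/3."""
    p = np.trace(P) / 3.0
    return np.linalg.norm(P - p * np.eye(3)) / np.linalg.norm(P)

def use_five_moment(P, tol=0.05):
    """Switch a cell to the 5-moment model when its temperature
    tensor is nearly isotropic (tolerance is a placeholder)."""
    return anisotropy(P) < tol
```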
With this multiphysics strategy, these codes can be applied to problem sizes that are otherwise impossible to reach with kinetic simulations and the understanding of the impact of small scale phenomena on the dynamics on global scales is in reach.
The outline of the paper is the following: first we briefly describe all the plasma models and the necessary numerical schemes (Vlasov equation, 10- and 5-moment fluid equations, Maxwell’s equations, the coupling procedure, the Landau fluid closure). We will then study the Geospace Environmental Modeling (GEM) reconnection setup [@birn2001] and perform comparisons to pure kinetic and pure fluid simulations.
Plasma Models
=============
The plasma models that we have to consider are: i) the Vlasov equation, ii) Maxwell’s equations and iii) the 10- and 5-moment fluid equations. We will briefly summarise these sets of equations.
Vlasov equation
---------------
Collisionless plasmas on the finest level of description are governed by the Vlasov equation $$\label{eq:vlasov-eq}
\partial_t f_s({\textbf{x}},{\textbf{v}},t) + {\textbf{v}}\cdot \nabla_{{\textbf{x}}}f_s({\textbf{x}},{\textbf{v}},t)
+ \frac{q_s}{m_s}\big({\textbf{E}} + {\textbf{v}} \times {\textbf{B}} \big)\cdot \nabla_{{\textbf{v}}}f_s({\textbf{x}},{\textbf{v}},t) = 0\;,$$ where $f_s({\textbf{x}},{\textbf{v}},t)$ denotes the phase-space density, $q_s$ and $m_s$ the particle charge and mass for species $s \in \{e,i\}$ (electrons and ions). The electric and magnetic fields ${\textbf{E}}$ and ${\textbf{B}}$ are given by Maxwell’s equations:
\[eq:maxwell-eq\] $$\begin{aligned}
\nabla \cdot {\textbf{E}} &= \frac{\rho}{\varepsilon_0} \\
\nabla \cdot {\textbf{B}} &= 0 \\
\partial_t {\textbf{B}} &= - \nabla \times {\textbf{E}} \label{eq:faradays-law} \\
\partial_t {\textbf{E}} &= c^2\left(\nabla\times{\textbf{B}} - \mu_0 {\textbf{j}}\right) \label{eq:amperes-law}
\end{aligned}$$
with speed of light $c$ and electric constant $\varepsilon_0$. Maxwell’s equations depend on charge and current densities $\rho$ and ${\textbf{j}}$, which are obtained from the phase
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'The thermal and magnetic properties of spin-$1$ magnetic chain compounds with large single-ion and in-plane anisotropies are investigated via the integrable $su(3)$ model in terms of the quantum transfer matrix method and the recently developed high temperature expansion method for exactly solved models. It is shown that large single-ion anisotropy may result in a singlet gapped phase in the spin-$1$ chain which is significantly different from the standard Haldane phase. A large in-plane anisotropy may destroy the gapped phase. On the other hand, in the vicinity of the critical point a weak in-plane anisotropy leads to a different phase transition than the Pokrovsky-Talapov transition. The magnetic susceptibility, specific heat and magnetization evaluated from the free energy are in excellent agreement with the experimental data for the compounds Ni(C$_2$H$_8$N$_2$)$_2$Ni(CN)$_4$ and Ni(C$_{10}$H$_8$N$_2$)$_2$Ni(CN)$_4$$\cdot$H$_2$O.'
author:
- 'M.T. Batchelor, Xi-Wen Guan and Norman Oelkers'
title: 'Thermal and magnetic properties of spin-$1$ magnetic chain compounds with large single-ion and in-plane anisotropies'
---
Introduction
============
Haldane’s [@Hald] conjecture that spin-$S$ chains exhibit an energy gap in the lowest magnon excitation for $2S$ even, with no significant gap for $2S$ odd, inspired a great deal of experimental and theoretical investigation. Rich and novel quantum magnetic effects, including valence-bond-solid Haldane phases and dimerized phases [@AFF1; @LADD1], fractional magnetization plateaux [@FPL] and spin-Peierls transitions [@SPT] have since been found in low-dimensional spin systems. In this light, the spin-$1$ Heisenberg magnets have been extensively studied in Haldane gapped materials [@SP1C1; @SP1C2]. The valence-bond-solid ground state and the dimerized state form the Haldane phase with an energy gap [@AFF1]. The Haldane gap in integer spin chains may close in the presence of additional biquadratic terms or in-plane anisotropies. In particular, a large single-ion anisotropy may result in a singlet ground state [@AFF3; @Tsvelik] which is significantly different from the standard Haldane phase.
The difference between the two gapped phases appears to arise from the ground state and excitations. In the Haldane nondegenerate ground state, a single valence bond connects each neighbouring pair to form a singlet. An expected excitation comes from breaking down the valence-bond-solid state, where a nonmagnetic state $S_i=0$ at site $i$ is substituted for a state $S_i=1$. In this way a total spin $S=1$ excitation causes an energy gap, referred to as the Haldane gap. By contrast, the large-anisotropy-induced gapped phase in the spin-$1$ chain is caused by trivalent orbital splitting. For a large single-ion anisotropy, the singlet can occupy all states such that the ground state lies in the nondegenerate gapped phase. The lowest excitation arises as the lower component of the doublet is involved in the ground state. This excitation results in the energy gap.
A number of spin-1 magnetic chain compounds have been identified as planar Heisenberg magnetic chains with large anisotropy. These include Ni(C$_2$H$_8$N$_2$)$_2$Ni(CN)$_4$ (abbreviated NENC), Ni(C$_{11}$H$_{10}$N$_2$O)$_2$Ni(CN)$_4$ (abbreviated NDPK) [@NENC; @sus] and Ni(C$_{10}$H$_8$N$_2$)$_2$Ni(CN)$_4$$\cdot$H$_2$O (abbreviated NBYC) [@NBYC]. This kind of system exhibits a nondegenerate ground state which is separated from the lowest excitation by a gap. This gapped phase also occurs in some nickel salts with a large zero-field splitting, such as NiSnCl$_6\cdot 6$H$_2$O [@PRB3488], \[Ni(C$_5$H$_5$NO)$_6$\](ClO$_4$)$_2$ [@PRB3523] and Ni(NO$_3$)$_2\cdot 6$H$_2$O [@PRB4009]. The theoretical study of these compounds has relied on a molecular field approximation for the Van Vleck equation [@Carlin]. To first order in the Van Vleck approximation, the exchange interaction is neglected. To obtain a good fit to the experimental data an effective crystalline field has to be incorporated. This approximation causes uncertainties and discrepancies in fitting the experimental data. Here we take a new approach via the theory of integrable models.
It recently has been demonstrated [@HTE1] that integrable models can be used to study real ladder compounds via the thermodynamic Bethe Ansatz (TBA) [@TBA] and the exact high temperature expansion (HTE) method [@HTE2; @ZT]. In this paper we present an integrable spin-$1$ chain with additional terms to account for planar single-ion anisotropy and in-plane anisotropy. The ground state properties and the thermodynamics of the chains are studied via the TBA and HTE. We show that a large planar single-ion anisotropy results in a nondegenerate singlet ground state which is significantly different from the Haldane phases found in Haldane gapped materials [@SP1C1; @SP1C2]. We examine the thermal and magnetic properties of the compounds NENC [@NENC; @sus] and NBYC [@NBYC]. Excellent agreement between our theoretical results and the experimental data for the magnetic susceptibility, specific heat and magnetization confirms that the strong single-ion anisotropy, which is induced by an orbital splitting, can dominate the low temperature behaviour of this class of compounds. Our exact results for the integrable spin-$1$ model may provide widespread application in the study of thermal and magnetic properties of other real compounds, such as NDPK [@NENC; @sus] and certain nickel salts [@PRB3488; @PRB3523; @PRB4009; @Carlin].
The integrable spin-$1$ model
=============================
In contrast to the standard Heisenberg spin-$1$ materials, experimental measurements on the new spin-$1$ compound LiVGe$_2$O$_6$ [@SP1] and the compounds NENC and NBYC [@NENC; @NBYC] exhibit unexpected behaviour, possibly due to the presence of biquadratic interaction and a strong single-ion anisotropy, making it very amenable to our approach. The axial distortion of the crystalline field in the compounds NENC and NBYC results from the triplet $^3A_{2g}$ splitting. Specifically, the triplet orbit splits into a low-lying doublet ($d_{xy}, d_{yz}$) and a singlet orbital ($d_{xz}$) at an energy $\Delta_{CF}$ above the doublet. Inspired by the high temperature magnetic properties of this kind of material, we consider an integrable spin-$1$ chain with Hamiltonian $$\begin{aligned}
{\cal H}&=&J\,{\cal H}_0+D\sum_{j=1}^N(S_j^z)^2+E\sum_{j=1}^N((S_j^x)^2-(S_j^y)^2) \nonumber\\
& &
-\mu_Bg H\sum_{j=1}^N S^z_j, \label{Ham1}\\
{\cal H}_0&=& \sum_{j=1}^{N}\left\{\vec{S}_j\cdot \vec{S}_{j+1}+(\vec{S}_j\cdot
\vec{S}_{j+1})^2\right\}. \nonumber\end{aligned}$$ ${\cal H}_0$ is the standard $su(3)$ integrable spin chain, which is well understood [@U; @BA; @Fujii; @sun]. Here $\vec{S}_i$ denotes the spin-$1$ operator at site $i$, $N$ is the number of sites and periodic boundary conditions apply. The constants $J$, $D$ and $E$ denote the exchange spin-spin coupling, single-ion anisotropy and in-plane anisotropy, respectively. The Bohr magneton is denoted by $\mu_B$ and $g$ is the Landé factor. We consider only antiferromagnetic coupling, i.e. $J>0$ and $D>0$.
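As a sanity check on Hamiltonian (\[Ham1\]), the following sketch diagonalizes it exactly for a short periodic chain (our illustration only; the results below rely on the Bethe Ansatz, TBA and HTE rather than on such brute-force diagonalization; units with $k_B=1$ and the Zeeman factor folded into $h=g\mu_B H$ are assumptions):

```python
import numpy as np
from functools import reduce

# spin-1 operators in the S^z basis
Sz = np.diag([1.0, 0.0, -1.0])
Sp = np.sqrt(2.0) * np.diag([1.0, 1.0], k=1)   # raising operator S^+
Sx = (Sp + Sp.T) / 2
Sy = (Sp - Sp.T) / 2j

def site_op(op, i, N):
    """Embed a single-site operator at site i of an N-site chain."""
    ops = [np.eye(3)] * N
    ops[i] = op
    return reduce(np.kron, ops)

def hamiltonian(N, J, D, E, h):
    """H of Eq. (1) with periodic boundaries and h = g*mu_B*H."""
    H = np.zeros((3**N, 3**N), dtype=complex)
    for i in range(N):
        j = (i + 1) % N
        SS = sum(site_op(S, i, N) @ site_op(S, j, N) for S in (Sx, Sy, Sz))
        H += J * (SS + SS @ SS)                     # bilinear + biquadratic
        H += D * site_op(Sz @ Sz, i, N)             # single-ion anisotropy
        H += E * site_op(Sx @ Sx - Sy @ Sy, i, N)   # in-plane anisotropy
        H -= h * site_op(Sz, i, N)                  # Zeeman term
    return H

def magnetization(N, J, D, E, h, T):
    """Thermal per-site magnetization from the full spectrum."""
    Evals, V = np.linalg.eigh(hamiltonian(N, J, D, E, h))
    Mz = sum(site_op(Sz, i, N) for i in range(N))
    m = np.real(np.diag(V.conj().T @ Mz @ V))
    w = np.exp(-(Evals - Evals.min()) / T)
    return (m * w).sum() / w.sum() / N
```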
The ground state at zero temperature {#sec:TBA}
------------------------------------
For the sake of simplicity in analyzing the ground state properties at zero temperature, we first take $E=0$, i.e., no in-plane anisotropy. In this case Hamiltonian (\[Ham1\]), which can be derived from the $su(3)$ row-to-row quantum transfer matrix with appropriate chemical potentials in the fundamental basis, is integrable by the Bethe Ansatz. The energy is given by $${\cal E}=-J\sum_{j=1}^{M_1}\frac{1}{(v_j^{(1)})^2+\frac{1}{4}}-DN_0-\mu_BgH(N_+-N_-),$$ where the parameters $v_j^{(1)}$ satisfy the Bethe equations [@U; @BA] $$\begin{aligned
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'The application of Machine Learning (ML) techniques to complex engineering problems has proved to be an attractive and efficient solution. ML has been successfully applied to several practical tasks like image recognition, automating industrial operations, etc. The promise of ML techniques in solving non-linear problems motivated this work, which aims to apply known ML techniques and develop new ones for wireless spectrum sharing between Wi-Fi and LTE in the unlicensed spectrum. In this work, we focus on the LTE-Unlicensed (LTE-U) specification developed by the LTE-U Forum, which uses the duty-cycle approach for fair coexistence. The specification suggests reducing the duty cycle at the LTE-U base-station (BS) when the number of co-channel basic service sets (BSSs) increases from one to two or more. However, without decoding the packets, detecting the number of BSSs operating on the channel in real time is a challenging problem. In this work, we demonstrate a novel ML-based approach which solves this problem by using energy values observed during the OFF duration. It is relatively straightforward to observe only the energy values during the BS OFF time compared to decoding the entire packet, which would require a full receiver at the LTE-U base-station. We implement and validate the proposed ML-based approach with real-time experiments and demonstrate that the energy distributions exhibit distinct patterns for one versus many AP transmissions. The proposed ML-based approach results in a higher accuracy (close to 99% in all cases) as compared to the existing auto-correlation (AC) and energy detection (ED) approaches.'
author:
- 'Adam Dziedzic$^\dag\text{*}$, Vanlin Sathya$^\dag\text{*}$, Muhammad Iqbal Rochman$^\dag$, Monisha Ghosh$^\dag$, and Sanjay Krishnan$^\dag$'
bibliography:
- 'ref.bib'
title: 'Machine Learning enabled Spectrum Sharing in Dense LTE-U/Wi-Fi Coexistence Scenarios'
---
[^1]
LTE, Unlicensed Spectrum, Wi-Fi, Machine Learning.
Introduction {#sec:introduction}
============
The growing penetration of high-end consumer devices like smartphones and tablets running bandwidth hungry applications (e.g. mobile multimedia streaming) has led to a commensurate surge in demand for mobile data (pegged to soar up to 77 exabytes by 2022 [@cisco2018cisco]). An anticipated second wave will result from the emerging Augmented/Virtual Reality (AR/VR) industry [@al2017energy] and more broadly, the Internet-of-Things that will connect an unprecedented number of intelligent devices to next-generation (5G and beyond) mobile networks as shown in Fig. \[mle\]. Existing wireless networks, both cellular and Wi-Fi, must therefore greatly expand their aggregate [*network*]{} capacity to meet this challenge. This is being achieved by a combination of approaches including use of multi-input, multi-output (MIMO) techniques [@gampala2018massive], network densification (i.e. deploying small cells [@sathya2014placement]) and more efficient traffic management and radio resource allocation.
Since licensed spectrum is a limited and expensive resource, its optimal utilization may require spectrum sharing between multiple network operators/providers of different types. Increasingly, licensed-unlicensed sharing is being contemplated to enhance network spectral efficiency, beyond the more traditional unlicensed-unlicensed sharing. As the most common unlicensed incumbent, Wi-Fi is now broadly deployed in the unlicensed $5$ GHz band in North America where approximately $500$ MHz of bandwidth is available. However, these $5$ GHz unlicensed bands are also seeing increasing deployment of cellular services such as Long Term Evolution (LTE) Licensed Assisted Access (LTE-LAA). Recently, the Federal Communications Commission (FCC) sought to open up 1.2 GHz of additional spectrum for unlicensed operation in the 6 GHz band through a Notice of Proposed Rule Making (NPRM) [@FCC1]. This allocation of spectrum for unlicensed operation will thus only accelerate the need for further coexistence solutions among heterogeneous systems.
![Future Applications on Unlicensed Spectrum Band.[]{data-label="mle"}](ML.pdf){height="5.3cm" width="9cm"}
However, the benefits of spectrum sharing are not devoid of challenges, the foremost being the search for effective coexistence solutions between cellular (LTE and 5G) and Wi-Fi networks whose medium access control (MAC) protocols are very different. While cellular systems employ a Time Division Multiple Access (TDMA)/Frequency Division Multiple Access (FDMA) scheduling mechanism, Wi-Fi depends on the Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) mechanism. The 5 GHz band being unlicensed and offering 500 MHz of available bandwidth has prompted several key players in the cellular industry to develop the LTE-LAA specification within the Third Generation Partnership Project (3GPP). Specification differences between LTE and the incumbent Wi-Fi will lead to many issues due to the incompatibility between the two standards. Therefore, to ensure fair coexistence, certain medium access protocols have been developed as an addition to the licensed LTE standard. In addition to LTE-LAA, there also exists LTE-U which was developed by an industry consortium called the LTE-U Forum and will be the main focus of this paper.
LTE-LAA was proposed by 3GPP [@3gpp; @TCCN] and its working mechanism is similar to the Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) protocol used by Wi-Fi. In LTE-LAA, an LAA base station (BS) acts essentially similar to a Wi-Fi access point (AP) in terms of channel access, *i.e.*, a BS needs to ensure that the channel is free before transmitting any data, otherwise it will perform an exponential back-off procedure similar to CSMA/CA in Wi-Fi. Therefore, there is no need to precisely determine the number of coexisting Wi-Fi APs, due to the channel sensing and back-off mechanism which is adaptable to varying channel occupancy. However, LTE-U which was developed by the LTE-U forum [@forum], uses a simple duty-cycling technique where the LTE-U BS will periodically switch between ON and OFF states in an interval set according to the number of Wi-Fi APs present in the channel. In the ON state, the BS transmits data as a normal LTE transmission while in the OFF state, the BS does not transmit any data but passively senses the channel for the presence of Wi-Fi. The number of sensed Wi-Fi APs is then used to properly adjust the duty cycle interval, and this process is known as Carrier Sense Adaptive Transmission (CSAT). Therefore, accurately determining the number of coexisting Wi-Fi APs is important for optimum operation of the CSAT procedure.
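As a toy illustration of the CSAT idea (our own sketch; the equal-share rule and the 80 ms period are illustrative assumptions, not values taken from the LTE-U specification):

```python
def csat_on_off(num_bss, period_ms=80.0):
    """Hypothetical CSAT rule: share the period equally between the
    LTE-U BS and the detected co-channel Wi-Fi BSSs, so the ON
    fraction shrinks as more BSSs appear."""
    on_fraction = 1.0 / (1.0 + max(num_bss, 0))
    return on_fraction * period_ms, (1.0 - on_fraction) * period_ms

# e.g. one BSS -> 40 ms ON / 40 ms OFF; two BSSs -> ~26.7 ms ON
```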
Existing literature addresses the LTE-U and Wi-Fi coexistence in terms of optimizing the ON and OFF duty cycle [@singh2018wi], power control [@chaves2013lte], hidden node problem [@atif2019complete], etc. On the other hand, the LTE-U specification does not specify, and there has been relatively less work on, how a LTE-U operator should detect the number of Wi-Fi APs on the channel to adjust the duty cycle appropriately. There are a number of candidate techniques to determine the number of Wi-Fi APs as follows:
- **Header-Based CSAT (HD):** Wi-Fi APs transmit beacon packets every 102.4 ms, containing important information about the AP, such as the Basic Service Set Identification (BSSID) which is unique to each AP. This is a straightforward way to identify the Wi-Fi AP, but it adds additional complexity since the LTE-U BS would require a full Wi-Fi decoder to obtain this information from the packet.
- **Energy-Based CSAT (ED):** Rather than a full decoding process, it is hypothesized that sensing the energy level of the channel is enough to detect the number of Wi-Fi APs on the channel. However, it is still a challenging problem since the energy level may not correctly correlate to the number of APs under varying conditions (*e.g.*, different categories of traffic, large numbers of Wi-Fi APs, variations in transmission power, multipath, etc.).
- **Autocorrelation-Based CSAT (AC):** To detect the Wi-Fi signal at the LTE-U BS, one can develop an auto-correlation (AC) based detector where the LTE-U BS performs auto-correlation on the Wi-Fi preamble, without fully decoding the preamble. This is possible since all Wi-Fi preambles [^2] contain the legacy short training field (L-STF) and legacy long training field (L-LTF) symbols which contain multiple repeats of a known sequence. However, the AC function can only determine whether a signal is a Wi-Fi signal and cannot derive any distinct information pertaining to each APs.
Table \[table:csat\] lists the different types of CSAT approaches with their own pros and cons. We studied energy detection (ED) and AC based detection of APs in our previous work [@sathya2018energy][@sathya2019auto] [^3], and proved that our algorithms performed reasonably well under various scenarios.
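To illustrate the kind of ML-based detector studied here, the following sketch (our illustration; the feature set, model choice and label encoding are assumptions, not the exact pipeline evaluated in this paper) trains a classifier on summary statistics of OFF-period energy windows to separate one from two-or-more BSSs:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def features(window):
    """Summary statistics of one OFF-period window of energy samples (dBm)."""
    return [window.mean(), window.std(), np.percentile(window, 90),
            np.percentile(window, 10), (window > window.mean()).mean()]

def train_detector(windows, labels):
    """windows: list of 1D energy-sample arrays; labels: number of BSSs."""
    X = np.array([features(w) for w in windows])
    y = (np.array(labels) >= 2).astype(int)   # 0: one BSS, 1: two or more
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
    return clf, clf.score(Xte, yte)           # classifier, held-out accuracy
```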
Of late, Machine Learning (ML) approaches are beginning to be used in wireless networks to solve problems such as ag
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'Calculations of the potential energy surface for tracer Ga and In adatoms above three GaAs (111)A surface reconstructions are presented in order to understand the growth conditions required to form axial heterostructures in GaAs/InGaAs nano-pillars. In all calculations the Ga adatom has a stronger bond energy to the surface than the In adatom. The diffusion barriers for Ga adatoms are 140 meV larger than for In adatoms on the Ga vacancy surface, but they are comparable on the As trimer surface. Also, the binding energy for an In adatom is closer to that of a Ga adatom on the As trimer surface. We conclude that the As trimer surface is preferable for adsorption of In and thus for selective formation of hetero-interfaces on (111) facets. This work helps explain the recent successful formation of axial GaAs/InGaAs hetero-interfaces in catalyst-free nano-pillars.'
author:
- 'J. N. Shapiro'
- 'D. L. Huffaker'
- 'C. Ratsch'
bibliography:
- 'GaAs111.bib'
title: 'Ab-Initio calculations of binding energy of In and Ga adatoms on three GaAs(111)A surface reconstructions'
---
Introduction {#introduction .unnumbered}
============
Semiconductor nanowires (NWs) and nanopillars (NPs) are exciting materials for probing mesoscopic physics and as building blocks for future high performance opto-electronic devices on Si [@Fuhrer2007Few; @Xiang2006; @Lu:2005lr]. NP synthesis by catalyst-free selective area metal-organic chemical vapor deposition (SA-MOCVD) is a growth technique for forming large arrays of uniform NPs in lithographically defined locations with the inclusion of optical alignment marks for device integration [@Akabori:2003].
The absence of a metal particle to catalyze growth means that atoms adsorb directly onto the crystal surfaces from the vapor, and the resulting crystal shape is controlled in part by minimization of the total surface energy[@ikejiri:2008]. GaAs nanopillars grow in the \[111\] direction, and have hexagonal symmetry with side facets composed of the $\{01\bar{1}\}$ family of planes. Atoms from the vapor adsorb on all facets of the NP and then diffuse to the (111) surface at the tip where they incorporate. The polar (111) surface has a higher surface energy than the stoichiometric $\{01\bar{1}\}$ planes, making the observed crystal shape energetically favorable.
Heterostructure formation is a necessary capability to master in catalyst-free NP synthesis in order to create efficient optical devices [@Agarwal:2008lr]. Core-shell hetero-structures have been studied in a variety of material systems, but axial hetero-structure formation has been elusive in this growth mode. When a new atomic species is introduced, the surface energetics must promote incorporation of the new species on the top (111) surface while simultaneously suppressing incorporation on the side walls. Despite this challenge, axial InGaAs segments of varying composition and thickness were recently demonstrated in catalyst-free GaAs NPs grown by SA-MOCVD [@shapiro:2010a]. High V/III ratios ($V/III\sim50$) were required to promote incorporation of In in the axial direction with negligible shell growth. At the lower V/III ratios ($V/III\sim10$) typically used for GaAs NP homoepitaxy, indium is not selective to the (111) surface, and instead nucleates on the side-walls, deforming the crystal facets. Fig. \[fig:pillar\_image\](a) shows a scanning electron micrograph (SEM) of NPs formed by SA-MOCVD with axial InGaAs inserts at high V/III ratio; the vertical side-walls and hexagonal symmetry are evident. Fig. \[fig:pillar\_image\](b) shows a dark-field scanning transmission electron micrograph (STEM) of the same pillars, revealing the axial InGaAs segment. In contrast, Fig. \[fig:pillar\_image\](c) shows pillars terminated with InGaAs sections at low V/III ratios, with deformed crystal facets due to indium nucleation on the side-walls. This tendency for indium to bond to all available crystal surfaces has also been reported in Ref \[\].
To investigate possible reasons for the observed differences in behavior between In and Ga during nanopillar epitaxy, we present a theoretical investigation of the potential energy surface (PES) for Ga and In tracer adatoms situated above three common surface reconstructions of GaAs(111)A. The technique of calculating a PES has been applied by numerous researchers as a tool for studying diffusion, adsorption and desorption and for understanding epitaxy on crystal surfaces[@Taguchi2000Firstprinciples; @Taguchi1999Stable; @Penev2004Anisotropic]. A similar study of In and Ga tracer diffusion on GaAs $\{01\bar{1}\}$ is necessary for a more complete understanding of NP epitaxy, and will be presented in a future publication. Computational methods are discussed first, followed by a description of the calculations and their results. We conclude with a discussion and interpretation of the results.
![(a) SEM of GaAs nanopillars containing axial InGaAs inserts grown at high V/III ratio. (b) Dark field STEM of single InGaAs insert. (c) SEM of GaAs nanopillars terminated with InGaAs at low V/III ratio.[]{data-label="fig:pillar_image"}](Figure1-SEM)
Computational Methods {#computational-methods .unnumbered}
=====================
To calculate the potential energy surface (PES) of a Ga or In adatom above a GaAs(111)A surface reconstruction, we begin by computing the equilibrium surface geometry of the three reconstructions depicted in Fig. \[fig:surface\_reconstructions\]. From left to right, the surfaces are the Ga vacancy surface, the As trimer surface and the As adatom surface. All three surfaces possess a 2x2 unit cell indicated by a shaded parallelogram. Slabs 9 mono-layers thick are iteratively relaxed, keeping the bottom three mono-layers fixed, until residual atomic forces are $<0.02$ eV/Å.
The total energy of the surface with an additional Ga or In adatom is then computed using a 4x4 super cell. The entire system, slab and adatom, is allowed to relax, but the adatom coordinates are fixed perpendicular to the \[111\] direction (the adatom is fixed in the x-y plane and allowed to relax in z). All three surfaces possess 3-fold rotational symmetry, and each rotationally symmetric slice possesses a mirror symmetry, such that only 6-8 points are sampled in a triangle above the 2x2 unit cell. The calculated energies are then reflected, rotated twice through $120^\circ$ and mapped to a rectilinear grid using a cubic interpolation to generate a PES for the adatom of interest. The energy zero-point is chosen to be the total energy of the relaxed reconstructed surface plus the total energy of an isolated atom of In or Ga.
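The symmetrization-and-interpolation step can be sketched as follows (our illustration; the mirror axis, rotation center and grid choices are assumptions made for the example):

```python
import numpy as np
from scipy.interpolate import griddata

def expand_c3v(points, energies, center=(0.0, 0.0)):
    """Replicate sampled PES points under the surface symmetry:
    one mirror reflection plus rotations by 120 and 240 degrees."""
    pts = np.asarray(points, float) - center
    out_p, out_e = [], []
    for refl in (False, True):
        q = pts.copy()
        if refl:
            q[:, 1] *= -1.0                       # mirror across the x-axis
        for ang in (0.0, 2*np.pi/3, 4*np.pi/3):   # three-fold rotations
            c, s = np.cos(ang), np.sin(ang)
            R = np.array([[c, -s], [s, c]])
            out_p.append(q @ R.T)
            out_e.append(np.asarray(energies, float))
    return np.vstack(out_p) + center, np.concatenate(out_e)

def pes_on_grid(points, energies, nx=60, ny=60):
    """Map the symmetrized samples to a rectilinear grid (cubic)."""
    p, e = expand_c3v(points, energies)
    xs = np.linspace(p[:, 0].min(), p[:, 0].max(), nx)
    ys = np.linspace(p[:, 1].min(), p[:, 1].max(), ny)
    X, Y = np.meshgrid(xs, ys)
    return X, Y, griddata(p, e, (X, Y), method='cubic')
```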

Calculations were performed within the framework of density-functional theory (DFT) as implemented in the software package FHI-AIMS[@Blum20092175], which uses numeric atom centered orbitals for its basis set. The Perdew-Burke-Ernzerhof (PBE) parameterization of the generalized gradient approximation is used for the exchange correlation functional[@PBEGGA]. Approximately 16 layers of vacuum and 64 equivalent k-points in the 1x1 unit cell are specified. Convergence of the energy difference between the maximum and minimum on the PES is confirmed for the k-points, slab thickness, vacuum layers and super-cell size for the Ga vacancy reconstruction. In addition, total energy differences were tested for the FHI-AIMS built in settings “light” and “tight” [@Blum20092175]. The “light” setting, having fewer basis functions and a smaller numerical integration mesh, is found to increase the speed of the calculation while generating results that differ from “tight” by only a few meV. Calculations are therefore carried out using the “light” setting.
Results {#results .unnumbered}
=======
The potential energy surfaces for indium and gallium adatoms above each surface reconstruction are presented in this section. The binding energies at adsorption sites, $A_i$, and transition points, $T$ and $T^\prime$, for In and Ga above each surface are collected in Table \[tab:summary\]. The dominant diffusion energy barriers, calculated as the difference $E_D = T - A_1$, are also tabulated. The Ga vacancy and the As trimer surfaces are the primary surfaces of interest because they are energetically favorable in a vapor consisting of mixed As and Ga atoms[@PhysRevB.54.8844]. The Ga vacancy surface has the lower surface energy at low As chemical potentials and the As trimer surface as the lower surface energy at high As chemical potentials. The As adatom reconstruction always has the highest relative surface energy, and is presented here for completeness, even though this surface does not exist with high probability.
Surface Adatom $E_D$ **$A_1$** **$A_2$** **$T$** **$T^\prime$**
--------- -------- ---------- ----------- ----------- --------- ----------------
Ga **1.06** -2.87 -2.21 -1.81 -
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'We compare the vibrational properties of model SiO$_2$ glasses generated by molecular-dynamics simulations using the effective force field of van Beest [*et al.*]{} (BKS) with those obtained when the BKS structure is relaxed using an [*ab initio*]{} calculation in the framework of the density functional theory. We find that this relaxation significantly improves the agreement of the density of states with the experimental result. For frequencies between 14 and 26 THz the nature of the vibrational modes as determined from the BKS model is very different from the one from the [*ab initio*]{} calculation, showing that the interpretation of the vibrational spectra in terms of calculations using effective potentials can be very misleading.'
author:
- Magali Benoit and Walter Kob
title: 'The vibrational dynamics of vitreous silica: Classical force fields vs. first-principles'
---
Motivation
==========
Understanding the microscopic properties of vibrational excitations in disordered systems is a long-standing challenge in basic physics as well as in materials science, since the lack of positional order makes both the experimental study and the theoretical interpretation of the results very difficult. For instance, the mechanism leading to the existence of the so-called boson peak present in many glasses is the subject of a long-standing debate, and the reason for the presence of the D$_1$ and D$_2$ lines in the Raman spectra of amorphous SiO$_2$ has remained unclear for a long time [@winterling75; @galeener76; @buchenau86; @foret96; @benassi96; @wischnewski98; @hehlen00; @pilla00].
In principle molecular dynamics (MD) computer simulations overcome these difficulties since one has direct access to all the necessary microscopic information. Therefore, in recent years many studies of this kind have been carried out with the aim of shedding some light on the nature of these vibrational excitations, in particular for the case of silica, the paradigm of network forming glasses [@pasquarello_prl95; @wilson96; @guillot97; @pasquarello_science97; @taraskin97; @elliott_197; @elliott_297; @pasquarello_Sqw98; @pasquarello_raman98; @uchino_raman00; @horbach01]. Due to the large computational costs of such simulations the vast majority of them were done with [*effective*]{} classical force fields, i.e. potentials which were optimized to reproduce certain (somewhat arbitrarily chosen) experimental features of SiO$_2$. It is clear that the reliability of the results of these investigations depends crucially on whether or not the interactions used are sufficiently accurate to allow a faithful description of the real material. Hence a considerable effort has been made to check that the models used do reproduce the salient structural and dynamical features of real silica. In these studies it has been shown that the classical force fields employed are indeed able to give a good description of quantities like the structure factor, the diffusion constant or viscosity, etc. [@garofalini82; @vashishta93; @dellavalle94; @vollmayr96; @guillot97; @horbach99], so their use in investigations of the vibrational properties also appears to be a reasonable undertaking. Nevertheless, it was found that certain features of the vibrational density of states (DOS) are not well reproduced by these models and more sophisticated calculations, such as numerical studies based on first principles, seem to be required [@zotov99], in agreement with conclusions drawn for the case of liquids [@silvestrelli97]. Pasquarello [*et al.*]{} showed that using an [*ab initio*]{} scheme it is possible to reproduce many structural, electronic and vibrational properties of real silica [@pasquarello_prl95; @pasquarello_science97; @pasquarello_Sqw98; @pasquarello_raman98]. However, this approach suffers from its heavy computational cost, which restricts this type of calculation to the study of very small systems with a rather low statistical accuracy.
One possibility to overcome this limitation, at least partially, is to use a combined approach which consists in generating a glass using an effective potential and subsequently refining the structure obtained by means of first-principles [@EPJB00]. In previous work we showed that the [*structure*]{} of vitreous SiO$_2$ generated using the effective force field by van Beest, Kramer and van Santen (BKS) [@BKS90] is only modified weakly by a first-principles calculation, thus validating the structural model generated with this potential.
In contrast to this we show in this letter that the DOS of a SiO$_2$ glass generated by classical MD simulations using the BKS potential is strongly modified by using an [*ab initio*]{} treatment of the forces, and that this treatment leads to a much better agreement with experimental results. Moreover, in a large frequency range, the nature of the excitations as determined from the effective potential differs significantly from the one determined from the [*ab initio*]{} forces thus raising doubts as to the detailed analysis of the nature of the vibrational excitations determined from the BKS force field.
Simulation details
==================
Molecular-dynamics simulations were done using the BKS potential on systems containing 26 SiO$_2$ units at the experimental density (2.2 g/cm$^{3}$). For this we used the velocity form of the Verlet algorithm with a time step of 1.63 fs. Three different samples were generated by quenching liquids well-equilibrated at 3500 K to 300 K, using three different cooling rates: $5\cdot 10^{12}$ K/s, $3\cdot 10^{11}$ K/s, and $7\cdot 10^{10}$ K/s. The glasses obtained this way (which are non-equilibrium structures) were annealed for 70 ps at 300 K, and subsequently quenched to 0 K, at which point their dynamical matrices were evaluated and diagonalized in order to obtain the vibrational frequencies and the corresponding (normalized) eigenmodes. In parallel, the final atomic coordinates and velocities after the annealing at 300 K were used as initial coordinates and velocities for short ($\approx$ 0.12 ps) [*ab initio*]{} molecular-dynamics simulations of the Car-Parrinello type [@CP85], using the CPMD code [@CPMD95]. The technical details of these simulations were identical to the ones described in Ref. [@EPJB00]. At the end of these simulations the structures of the three glasses were relaxed to 0 K and the dynamical matrices were computed by evaluating the second derivatives of the total energy with respect to atomic displacements, taking finite differences of the atomic forces. Subsequently the vibrational frequencies and the corresponding eigenmodes were obtained from these matrices. Hence we obtained $g(\omega)$, the [*true*]{} DOS for this system. Note that although the cooling rates are high and the system size is small, the DOS depends only weakly on these parameters [@vollmayr96].
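To make the final step above concrete, obtaining $g(\omega)$ from a relaxed configuration amounts to diagonalizing the mass-weighted dynamical matrix and histogramming the resulting frequencies. The following is a minimal numpy sketch of that step only, assuming the $(3N\times 3N)$ Hessian has already been assembled from finite differences of the forces; the function name and bin count are illustrative:

```python
import numpy as np

def vibrational_dos(hessian, masses, bins=100):
    """g(omega) from a Hessian of shape (3N, 3N) (second derivatives of the
    total energy w.r.t. Cartesian displacements, atom-major ordering) and the
    N atomic masses."""
    m = np.repeat(np.asarray(masses), 3)          # one mass per Cartesian component
    dyn = hessian / np.sqrt(np.outer(m, m))       # mass-weighted dynamical matrix
    evals, evecs = np.linalg.eigh(dyn)            # columns of evecs: normalized eigenmodes
    nu = np.sqrt(np.clip(evals, 0.0, None)) / (2.0 * np.pi)  # drop tiny negative zero-modes
    g, edges = np.histogram(nu, bins=bins, density=True)
    return 0.5 * (edges[1:] + edges[:-1]), g
```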
In the following, the quantities computed by means of classical molecular-dynamics simulations using the BKS potential and the CPMD code will be labeled “BKS” and “CP”, respectively.
Results
=======
Since we found that the DOS from the three different cooling rates are identical to within the statistical error (which is relatively large due to the small system size), we decided to treat the three glasses as independent statistical samples and analyzed the three sets of vibrational frequencies/modes together. The resulting vibrational DOS was used to compute an [*effective*]{} neutron scattering cross section $G(\omega)=C(\omega)g(\omega)$. This was done by using the incoherent approximation and by calculating $C(\omega)$ as in Ref. [@elliott_197]. We note that the correction functions $C(\omega)$ for the BKS and CP cases are very similar, and hence differences in the respective $G(\omega)$ are mainly due to differences in the respective $g(\omega)$. In Fig. \[fig1\] we compare the $G(\omega)$ obtained, and we see that at intermediate frequencies the two curves are very different. In particular we see that the CP curve has a pronounced peak at around 12 THz and a smaller one at around 24 THz. Finally there is a small peak at 18 THz, the so-called D$_2$ line, which is due to a ring of size three. Overall the CP curve is in very good agreement with previous investigations [@pasquarello_raman98]. All these features are missing in the BKS curve, despite the good agreement between CPMD and BKS with regard to structural properties. Also included is the result of a neutron scattering experiment by Carpenter and Price [@carpenter85]; from the reasonable agreement between this curve and the one from the CP calculation we conclude that the latter is reliable. \[Note that (i) there is no fit parameter whatsoever, and (ii) the lack of a small peak at around 4 THz in the experimental data is related to the insufficient experimental resolution [@wischnewski98].\] Hence we conclude from this figure that the DOS as calculated from the BKS model is not very reliable at intermediate frequencies, in agreement with Ref. [@guillot97]. Note that similar discrepancies between experiments and simulations with various effective interactions have already been observed in previous studies [@vashishta93; @vollmayr96; @elliott_197; @elliott_297; @guillot97]; hence we conclude that many other types of force fields also lead to a density of states which is not trustworthy, and that most probably the conclusions drawn in this paper hold for these other potentials as well. However, the good agreement between the CP and the experimental DOS
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'We present (with proof) a new family of decomposable Specht modules for the symmetric group in characteristic $2$. These Specht modules are labelled by partitions of the form $(a,3,1^b)$, and are the first new examples found in thirty years. Our method of proof is to exhibit summands isomorphic to irreducible Specht modules, by constructing explicit homomorphisms between Specht modules.'
author:
- |
Craig J. Dodge\
Department of Mathematics, University at Buffalo, SUNY,\
244 Mathematics Building, Buffalo, NY 14260, U.S.A.\
\
Matthew Fayers\
Queen Mary, University of London, Mile End Road, London E1 4NS, U.K.
title: Some new decomposable Specht modules
---
[This is the second author’s version of a work that was accepted for publication in the Journal of Algebra. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in *J. Algebra*.]{}
Introduction
============
Let $n$ be a positive integer, and let ${\mathfrak{S}_}n$ denote the symmetric group on $n$ letters. For any field $\bbf$, the *Specht modules* form an important family of modules for $\bbf{\mathfrak{S}_}n$. If $\bbf$ has characteristic zero, then the Specht modules are precisely the irreducible modules for $\bbf{\mathfrak{S}_}n$. If $\bbf$ has positive characteristic, the simple $\bbf{\mathfrak{S}_}n$-modules arise as quotients of certain Specht modules. In addition, the Specht modules arise as the ‘cell modules’ for Murphy’s cellular basis of $\bbf{\mathfrak{S}_}n$.
A great deal of effort is devoted to determining the structure of Specht modules; in particular, finding the composition factors of Specht modules and the dimensions of the spaces of homomorphisms between Specht modules. In this paper, we consider the question of which Specht modules are decomposable. It is known that in odd characteristic the Specht modules are all indecomposable, so we can concentrate on the case where ${\operatorname{char}}(\bbf)=2$. In fact, since any field is a splitting field for ${\mathfrak{S}_}n$, we can assume that $\bbf=\bbf_2$. In this case, there are decomposable Specht modules, but remarkably few examples are known. Murphy [@gm] analysed the Specht modules labelled by ‘hook partitions’, i.e. partitions of the form $(a,1^b)$, computing the endomorphism ring of every such Specht module (and thereby determining which ones are decomposable). However, in the last thirty years no more progress seems to have been made.
Our main result is the discovery of a new family of decomposable Specht modules, the first examples of which were discovered by the two authors independently using computations with GAP and MAGMA. These new decomposable Specht modules are labelled by partitions of the form $(a,3,1^b)$, where $a,b$ are even. So in this paper we make a case study of partitions of this form; we are unable to apply Murphy’s method to determine exactly which of these Specht modules are decomposable, but by considering homomorphisms between Specht modules, we are able to show which irreducible Specht modules arise as summands of these Specht modules. We then apply this result to determine which of our Specht modules have a summand isomorphic to an irreducible Specht module.
We now briefly indicate the layout of this paper. In the next section, we recall some basic definitions and results in the representation theory of the symmetric group, which enable us to state our main results in Section \[resultsec\]. In Section \[homsec\] we go into more detail on homomorphisms between Specht modules. In Sections \[uvsec\] and \[uv2sec\] we consider the two classes of irreducible Specht modules which can occur as summands of our decomposable Specht modules. We then apply these results in Section \[whichdec\] to complete the proof of our main results. Finally, we make some concluding remarks in Section \[concsec\].
The authors are indebted to David Hemmer, who first made us aware of each other’s work and initiated this collaboration, and also invited the second author to SUNY Buffalo in September 2011, where some of this work was carried out. This work continued during the ‘New York workshop on the symmetric group’; we are grateful to Rishi Nath of CUNY for inviting us to this conference.
The research of the first author was supported in part by NSA grant H98230-10-1-0192.
Background results {#backsec}
==================
In this section, we summarise some basic results on the representation theory of the symmetric group. For brevity, we specialise some results to characteristic $2$, referring the reader to the literature for general results.
We begin by fixing a field $\bbf$; all our modules will be modules for the group algebra $\bbf{\mathfrak{S}_}n$. We assume familiarity with James’s book [@j2]; in particular, we refer the reader there for the definitions of partitions, the dominance order, the permutation modules $M^\la$, the Specht modules $S^\la$ and the simple modules $D^\la$. We shall also briefly use the Nakayama Conjecture [@j2 Theorem 21.11] which describes the block structure of the symmetric group.
We also need the following two results; recall that if $\la$ is a partition then $\la'$ denotes the conjugate partition.
\[isospecht\] Suppose ${\operatorname{char}}(\bbf)=2$ and $\la$ is a partition such that $S^\la$ is irreducible. Then $S^\la\cong S^{\la'}$.
By [@j2 Theorem 8.15] we have $S^\la\cong(S^{\la'})^\ast$, since the sign representation is trivial in characteristic $2$. But by [@j2 Theorem 11.5], all simple modules for the symmetric group are self-dual.
\[815hom\] If $\la,\mu$ are partitions of $n$, then $$\dim_\bbf{\operatorname{Hom}}_{\bbf{\mathfrak{S}_}n}(S^\la,S^\mu)=\dim_\bbf{\operatorname{Hom}}_{\bbf{\mathfrak{S}_}n}(S^{\mu'},S^{\la'}).$$
This also follows from [@j2 Theorem 8.15].
Regularisation
--------------
We recall here a useful lemma which we shall use later; this is due to James, although it does not appear in the book [@j2]. We concentrate on the special case where $\bbf$ has characteristic $2$, referring to [@j1] for the full result.
For any $l\gs1$, the $l$th *ladder* in $\bbn^2$ is $$\call_l=\lset{(i,j)}{i+j=l+1}.$$ If $\la$ is a partition, the *$2$-regularisation* of $\la$ is the partition $\la{^{\operatorname{reg}}}$ whose Young diagram is obtained by moving the nodes in $[\la]$ as high as possible within their ladders. For example, $(8,3,1^6){^{\operatorname{reg}}}=(8,7,2)$, as we see from the following Young diagrams, in which nodes are labelled according to the ladders in which they lie. $$\young(12345678,234,3,4,5,6,7,8)\qquad\qquad
\young(12345678,2345678,34)$$ It is a simple exercise to show that $\la{^{\operatorname{reg}}}$ is a $2$-regular partition, and we have the following result.
[ ]{}\[jreg\] Suppose $\la$ and $\mu$ are partitions of $n$, with $\mu$ $2$-regular. Then $[S^\la:D^{\la{^{\operatorname{reg}}}}]=1$, while $[S^\la:D^\mu]=0$ if $\mu\ndom\la{^{\operatorname{reg}}}$.
In this paper we shall be concerned with the Specht modules labelled by partitions of the form $(a,3,1^b)$; so we compute the regularisations of these partitions.
\[reg\] Suppose $a\gs4$ and $b\gs2$. Then $$(a,3,1^b){^{\operatorname{reg}}}=
\begin{cases}
(a,b+1,2)&(a>b)\\
(b+2,a-1,2)&(a\ls b).
\end{cases}$$
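Lemma \[reg\] is straightforward to check mechanically. A short Python sketch of the ladder construction (written for this note; it is not part of the original argument) confirms both the worked example above and the two cases of the lemma:

```python
from collections import Counter

def regularise2(la):
    """2-regularisation: slide the nodes of the Young diagram of la as high
    as possible within their ladders (ladder l holds the cells (i, j) with
    i + j = l + 1, rows and columns 1-indexed)."""
    ladder = Counter(i + j - 1 for i, row in enumerate(la, 1) for j in range(1, row + 1))
    return [sum(1 for c in ladder.values() if c >= i)
            for i in range(1, max(ladder.values()) + 1)]

assert regularise2([8, 3, 1, 1, 1, 1, 1, 1]) == [8, 7, 2]   # the example above
assert regularise2([6, 3] + [1] * 4) == [6, 5, 2]           # a=6 > b=4: (a, b+1, 2)
assert regularise2([4, 3] + [1] * 6) == [8, 3, 2]           # a=4 <= b=6: (b+2, a-1, 2)
```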
Irreducible Specht modules
--------------------------
It will be very helpful to know the classification of irreducible Specht modules, which (in characteristic $2$) was discovered by James and Mathas [@jmp2]. If $k$ is a non-negative integer we let $l(k)$ denote the smallest positive integer such that $2^{l(k)}>k$.
[ ]{}\[irrs
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'We provide a local class purity theorem for Lipschitz continuous, half-rectified DNN classifiers. In addition, we discuss how to train to achieve a classification margin about training samples. Finally, we describe how to compute margin p-values for test samples.'
author:
- 'George Kesidis and David J. Miller [^1]'
title: |
Notes on Lipschitz Margin,\
Lipschitz Margin Training,\
and Lipschitz Margin p-Values\
for Deep Neural Network Classifiers
---
Introduction
============
A variety of papers have recently been produced on “robustifying” Deep Neural Networks (DNNs), particularly against adversarial Test-Time Evasion (TTE) attacks [@Tsuzuku18; @shiqi; @Madry-robust]. We discuss some of this work in Sections III.A and IV.A of [@Miller19] and argue for the need for TTE-attack detection [@MLSP18-ADA] for robustness.
In this note, we derive a [**local class purity**]{} result under the assumption of Lipschitz continuity, discuss Lipschitz margin training, and define an associated p-value. Estimation of the Lipschitz parameter for a given DNN is discussed in [@Szegedy_seminal; @Tsuzuku18; @Weng18; @Pappas19].
Margin in DNN classifiers
=========================
Consider the DNN $f:\Reals^n\rightarrow (\Reals^+)^C$ where $C$ is the number of classes. Further suppose that for a test-time input pattern $x\in\Reals^{n}$ to the DNN, the class decision is $$\hatc(x) \;=\; \arg\max_i f_i(x),$$ where $f_i$ is the $i$th component of the $C$-vector $f$. That is, we have defined a class-discriminant output layer of the DNN. Here we assume that a class for $x$ is chosen arbitrarily among those that tie for the maximum.
Define the [**margin**]{} of $x$ as $$\label{margin-def}
\mu_f(x) \;:=\; f_{\hatc(x)}(x)-\max_{i\neq \hatc(x)} f_i(x) \;\geq\; 0.$$ The normalized Lipschitz margin \[nlm\] can roughly be interpreted as a kind of confidence in classifying $x$ to class $\hatc(x)$, see Section \[sec:margin-atypical\].
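Computationally, the margin is a one-line reduction over the $C$ outputs; a minimal sketch (the array-of-logits input is an assumption of the illustration):

```python
import numpy as np

def margin(logits):
    """mu_f(x) of (margin-def): top output minus the best runner-up (>= 0)."""
    top2 = np.sort(np.asarray(logits))[-2:]   # two largest outputs f_i(x)
    return top2[1] - top2[0]

print(margin([0.1, 2.3, 0.7]))                # 2.3 - 0.7 = 1.6
```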
Now suppose the $\ell_{\infty}$ (max-norm) Lipschitz continuity parameter for $f$ is estimated as $L_\infty>0$ satisfying:[^2] $$\forall x,y,\qquad |f(x)-f(y)|_\infty \;\leq\; L_\infty\, |x-y|_\infty.$$
Now consider samples in an open $\ell_\infty$ hypercube centered at $x$, $$y\in \mbox{B}_\infty(x,\varepsilon):=\{z\in\Reals^n~:~|x-z|_\infty < \varepsilon\}$$ for $\varepsilon>0$.
The following “locally robust classification” result depends on the sample-dependent margin. This result is similar to that of [@Tsuzuku18].\
\[thm:local-purity\] If $f$ is $\ell_\infty$ Lipschitz continuous with parameter $L_\infty >0$ and $\mu_f(x)>0$ then $$B\left( x,\frac{\mu_f(x)}{2L_\infty}\right)$$ is class pure.
\
[**Proof:**]{} For any $y\in B(x,\frac{1}{2}\mu_f(x)/L_\infty)$ we get by the assumed Lipschitz continuity that $$\begin{aligned}
\tfrac{1}{2}\mu_f(x) \;>\; |f(x)-f(y)|_\infty
& := & \max_i |f_i(x)-f_i(y)| \\
& \geq & \max_i \big(|f_i(x)|-|f_i(y)|\big) \\
& = & \max_i \big(f_i(x)-f_i(y)\big) \\
& \geq & f_{\hatc(x)}(x)-f_{\hatc(x)}(y).\end{aligned}$$ So, $$\begin{aligned}
\label{y-bound1} f_{\hatc(x)}(y) & > & f_{\hatc(x)}(x) -\tfrac{1}{2}\mu_f(x).\end{aligned}$$
If we instead write $|f_i(y)|-|f_i(x)|$ in the triangle inequality above and then replace $\hatc(x)$ by any $i\not=\hatc(x)$, we get that $$\begin{aligned}
\label{y-bound2} \forall i\neq\hatc(x),\quad f_{i}(y) & < & f_{i}(x) +\tfrac{1}{2}\mu_f(x).\end{aligned}$$ So, by (\[y-bound1\]) and (\[y-bound2\]), for all $i\neq\hatc(x)$, $$\begin{aligned}
f_{i}(y) & < & f_{i}(x) +\tfrac{1}{2}\mu_f(x)\\
& \leq & f_{\hatc(x)}(x) -\tfrac{1}{2}\mu_f(x) \\
& < & f_{\hatc(x)}(y),\end{aligned}$$ where the middle inequality uses the definition (\[margin-def\]) of the margin; hence $\hatc(y)=\hatc(x)$.
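Read computationally, the theorem attaches a certified $\ell_\infty$ radius to every sample. A sketch on top of the `margin` helper above (the estimate of $L_\infty$ is an input, obtained e.g. by the estimation methods cited in the introduction):

```python
def certified_radius(logits, L_inf):
    """Radius of the class-pure l_inf ball of Theorem [thm:local-purity]."""
    return margin(logits) / (2.0 * L_inf)
```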
Lipschitz-margin training
=========================
Robust training is surveyed in [@shiqi; @Miller19]. We focus herein on attempting to achieve a prescribed Lipschitz margin. Recall that, by Cover’s theorem [@Cover65], class separation is achieved if the DNN’s penultimate layer is sufficiently large.
Let $\theta$ represent the DNN parameters. Let $\Tcal$ represent the training dataset and let $c(x)$ for any $x\in\Tcal$ be the [*ground truth*]{} class of $x$.
To try to achieve a common Lipschitz margin of $\mu$ for all training samples, [@Tsuzuku18] suggests to add the margin “to all elements in logits except for the index corresponding to” $c(x)$. For example, train the DNN by finding: $$\begin{aligned}
\hat{\theta} & = & \arg\max_{\theta}\sum_{x\in\Tcal}\Big(f_{c(x)}(x)-\max_{i\neq c(x)}\big(f_i(x)+\mu\big)\Big). \label{tsuzuku-obj}\end{aligned}$$ For a softmax example, one could train the DNN using the modified cross-entropy loss[^3]: $$\begin{aligned}
\hat{\theta} & = & \arg\min_{\theta}\; -\sum_{x\in\Tcal}\log\frac{\exp\big(f_{c(x)}(x)\big)}{\exp\big(f_{c(x)}(x)\big)+\sum_{i\neq c(x)}\exp\big(f_i(x)+\mu\big)}. \label{cel-obj}\end{aligned}$$ These DNN objectives do not guarantee that the margins for training samples will be met.
Alternatively, for each training sample $x$, one could augment the training set with multiple samples $y$ such that $|x-y|_\infty = \mu$ and simply train using an unmodified logit or cross-entropy loss objective.
Alternatively, one could first train an “original" DNN with an unmodified objective and unaugmented training dataset. Then the original DNN is used to produce [*adversarial examples*]{} by some strategy, [@Papernot; @Goodfellow; @CW; @MLSP18-ADA], each of bounded perturbation ($\sim\mu$) starting from training samples. The training dataset is then augmented by these adversarial examples and the DNN retrained (say starting from the parameters of the original DNN). See [@Madry-robust; @Zhang-blindspot] (and Sections III.A, IV.A of [@Miller19]).
Alternatively, one can achieve Lipschitz-margin DNN training by (dual) optimization of the weighted margin constraints, [*e.g.*]{}, $$\max_{\theta} \sum_{x\in\Tcal} \lambda_x\big(\mu_f(x)-\mu\big), \label{cel-dual0}$$ where the DNN mappings $f_i$ obviously depend on the DNN parameters $\theta$, and the weights $\lambda_x\geq 0$ $\forall x\in\Tcal$. For hyperparameter $\delta>1$, training can proceed simply as follows (a code sketch is given after the list):
0. Select initially equal $\lambda_x >0$, say $\lambda_x=1$ $\forall x\in\Tcal$.
1. Optimize over $\theta$ (train the DNN).
2. If all margin constraints are satisfied then stop.
3. For all $x\in \Tcal$: if margin constraint $x$ is not satisfied then $\lambda_x \rightarrow\delta \lambda_x$.
4. Go to step 1.
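A minimal PyTorch sketch of this loop follows. The toy data, the architecture, the step counts, and the hinge surrogate $(\mu-\mu_f(x))_+$ standing in for the weighted margin constraints are all illustrative assumptions, not the paper's prescription:

```python
import torch

def margins(logits, y):
    """Per-sample mu_f(x): true-class output minus the best other output."""
    true_out = logits.gather(1, y.unsqueeze(1)).squeeze(1)
    rest = logits.clone()
    rest.scatter_(1, y.unsqueeze(1), float("-inf"))
    return true_out - rest.max(dim=1).values

X, y = torch.randn(64, 2), torch.randint(0, 3, (64,))   # toy stand-ins for the training set
model = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.ReLU(), torch.nn.Linear(32, 3))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
lam = torch.ones(len(X))     # one multiplier lambda_x per training sample
mu, delta = 1.0, 2.0         # prescribed margin and growth factor delta > 1

for _ in range(20):                  # outer loop over multiplier updates
    for _ in range(200):             # step 1: optimize over theta for fixed lambda_x
        opt.zero_grad()
        viol = (mu - margins(model(X), y)).clamp(min=0)  # hinge surrogate of violations
        (lam * viol).mean().backward()
        opt.step()
    with torch.no_grad():
        unmet = margins(model(X), y) < mu
    if not unmet.any():              # step 2: stop once every margin constraint holds
        break
    lam[unmet] *= delta              # step 3: grow weights of the violated constraints
```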
Again, the parameters of the previous DNN could initialize the training of the next, and an initial DNN can be trained instead by using a logit or cross-entropy loss objective, as above. There are many other variations, including also decreasing $\lambda_x$ when the $x$-constraint is satisfied, or additively (rather than exponentially) increasing $\lambda_x$ when they are not, and changing $\lambda_x$ in a way that depends on the degree of the corresponding margin violation. Clearly this approach may require frequent retraining of the DNN. Finally, let $-\sum_{x\in\Tcal} L(\theta,x,c(x))$ be a cross-entropy loss. For example, [@shiqi] discloses the training problem $$\min_{\theta}\;\max_{z\in B(x,\mu),\, x\in\Tcal}\; -\sum_{x\in\Tcal} L(\theta,z,c(x)),$$ but notes that the inner maximization is NP hard [@Reluplex].
Low-margin atypicality of test samples {#sec:margin-atypical}
======================================
Given an arbitrary DNN $f:\Reals^n\rightarrow (\Reals^+)^C$, let $\Tcal_\kappa$ be the training samples of class $\kappa\in\{1,2,...,C\}$, i.e., $\forall x\in\Tcal_\kappa$, $\hatc(x)=c(x)=\kappa$. Recall (\[margin-def\]) and suppose a Gaussian Mixture Model (GMM
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'The possible transport of fibers by fluid flow in fractures is investigated experimentally in transparent models using flexible polyester thread (mean diameter $280 \mu\mathrm{m}$) and Newtonian and shear thinning fluids. In the case of smooth parallel walls, fibers of finite length $\ell = 20-150\, \mathrm{mm}$ move at a constant velocity of the order of the maximum fluid velocity in the aperture. In contrast, for fibers lying initially at the inlet side of the model and dragged by the flow inside it, the velocity increases with the depth of penetration (this results from the lower velocity, and drag, in the inlet part). In both cases, the friction of the fiber with the smooth walls is weak. For rough self-affine walls and a continuous gradient of the local mean aperture transverse to the flow, transport of the fibers by a water flow is only possible in the region of larger aperture ($\bar{a} \gtrsim 1.1 \mathrm{mm}$) and is of “stop and go” type at low velocities. Time dependent distortions of the fiber are also often observed. When water is replaced by a shear thinning polymer solution, the fibers move faster and continuously in high aperture regions and their friction with the walls is reduced. Fiber transport becomes also possible in narrower regions where irreversible pinning occurred for water. In a third rough model with no global aperture gradient but with rough walls and a channelization parallel to the mean flow, fiber transport was only possible in shear-thinning flows, and pinning and entanglement effects were studied.'
author:
- 'Maria Veronica D’Angelo$^{1}$'
- Harold Auradou$^1$
- Guillemette Picard$^2$
- 'Martin E. Poitzsch$^2$'
- 'Jean-Pierre Hulin$^1$'
title: 'Single fiber transport by a fluid flow in a fracture with rough walls: influence of the fluid rheology'
---
Introduction {#intro}
============
Fiber transport by flowing fluids is of interest in many areas of physics, biology and engineering: examples include the transport of paper pulp [@Stockie1998], the manufacturing of fiber-reinforced composites [@Yasuda2002], the rheology of biological polymers [@Lagomarsino2005] and the motility of micro-organisms [@Lowe2003; @Purcell1977]. Recently, also, using long optical fibers has been suggested as a method to realize distributed in-situ measurements (temperature for instance) on natural water flows [@Selker2006]: this raises the problem of the possible transport of the fiber by the moving fluid which often occurs in geometries confined by solid walls. More precisely, little work has been devoted to flow channels with rough walls such as fractures of natural rocks and of materials of industrial interest in civil, environmental and petroleum engineering. In that case, the interactions of the fibers with the walls are particularly important and may lead to blockage of the motion of the fibers and/or to clogging of the channels. In addition to their practical applications, these processes raise fundamental questions regarding the motion of flexible solid bodies in complex flow fields.
The objective of the present experimental study is to understand the transport of fibers by the flow and to investigate the role of the flow geometry and fluid rheology. Here, single fractures are modeled by the space (saturated by a flowing fluid) between either two parallel plane walls or two complementary rough self-affine walls with a relative shear displacement from their contact position. This latter configuration allows one to reproduce preferential flow channels [@NAS; @Adler1999] which are a widespread feature of natural fractures and influence strongly their transport properties. The walls of the fracture are transparent to allow for optical observations of the motion of the fibers.
In addition to hydrodynamic forces on the fibers due to the relative velocity with the fluid, their motion and deformation are influenced by different effects. A first effect is the interaction forces with the walls: they are particularly important when the diameter of the fibers is comparable to the local channel aperture or when they are close to one of the walls [@Sugihara-Seki1993; @Petrich1998]. Second, tension forces reflecting the mechanical cohesion of the fiber are present all along its length so that the motion of each region influences the other ones. Tension and hydrodynamic forces add up, and their spatial variations in shear flows or flows with curved streamlines deform the fibers (although their length remains constant): this may finally lead to entanglement and blockage. Next, these deformations are opposed by elastic forces reflecting the non-zero stiffness of the fibers: their relative magnitude with respect to the hydrodynamic forces is a particularly important parameter of the problem [@Forgac1959]. On the one hand, very flexible fibers would seem to be able to follow the streamlines but, on the other hand, loops may appear easily and lead to trapping in narrow zones [@Forgac1959b]. Another key element is the rheology of the fluids, which strongly influences the hydrodynamic forces on slender bodies, even in simple shear flows [@Leal1975]. Finally, inertia (particularly in regions of large spatial flow velocity variations) and gravity may also be of importance by inducing motions transverse to the flow and towards the walls.
In the following, the feasibility of fiber transport and its dependence on the mechanisms discussed above are studied in three different model fractures. The first one has smooth walls and is used as a reference case, while the two others have rough walls with self-affine geometries. For one of the rough fractures ($F3$), the mean planes of the walls are parallel. For the other one ($F2$), they have a small angle resulting in a non-zero transverse gradient of the mean aperture: this wedge shape mimics the edge of many natural fractures. In this work, the influence of the geometry of the fibers on their transport is analyzed by comparing the motion of finite length segments (still with an aspect ratio greater than $200$) and of continuous threads injected at the inlet of the fracture. The influence of deformations due to flow velocity gradients could be investigated by using flexible thin fibers made of polyester thread. Special attention has been paid to the mechanisms of pinning during the motion of the fibers and of possible depinning after enough elastic energy has been accumulated (see sections \[sec:F2W\] and \[sec:F2NN\]). Finally, the influence of the fluid rheology has been studied by comparing fiber transport by Newtonian and shear thinning fluids.
Experimental set-up and procedure {#sec:exp}
=================================
Fiber characteristics {#fiber}
---------------------
The experimental fibers are prepared from commercial polyester thread used for needlework and made of two strands twisted together. The section is not circular, so that the diameter varies between $220$ and $340\ \mu m$. The specific mass of the fiber is $\rho=1.8 \pm 0.1 \times 10^{3} \, kg/m^3$. Its bending stiffness $EI$ (ratio of the applied bending moment to the curvature) is of the order of $10^{-9}\, kg.m^3/s^2$ (a value similar to that reported in ref. [@habibi07] for a comparable material). This value has been estimated by measuring the deflection under its own weight of a horizontal fiber segment attached at one end [@Landau].
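For orientation, the deflection method just cited inverts the small-deflection cantilever formula $\delta = q L^4/(8EI)$, with $q$ the fiber weight per unit length. A hypothetical numerical sketch (the segment length and tip deflection below are assumed values, chosen only to reproduce the quoted order of magnitude):

```python
import numpy as np

rho, g, d = 1.8e3, 9.81, 280e-6        # density (kg/m^3), gravity (m/s^2), mean diameter (m)
q = rho * g * np.pi * (d / 2.0) ** 2   # weight per unit length (N/m)
L, delta = 10e-3, 1.4e-3               # hypothetical segment length and tip deflection (m)
EI = q * L**4 / (8.0 * delta)          # ~1e-9 kg m^3/s^2, the order quoted in the text
print(f"EI ~ {EI:.1e} N m^2")
```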
The choice of these multifilament fibers reflects a trade-off between several requirements. The fiber diameter is large enough to be clearly visible, and its flexibility is both high enough to allow for deformations by the fluid velocity gradients and low enough so that coils do not build up too easily (see introduction). We verified indeed that monofilament wires of similar (and even smaller) diameters were too rigid and often retain, in addition, a permanent curvature.
Both segments of fiber and continuous fibers were used in the present work. The segments are cut out of polyester thread with a length $20\, \mathrm{mm} \le \ell \le 150\, \mathrm{mm}$. The continuous fibers are also cut out of a spooled thread but with a length larger than that of the fracture: they are progressively injected from above into the fracture under zero applied tension conditions. A specific procedure is used to avoid blunting the ends of the fibers and the fibers are carefully saturated with liquid prior to the experiments.
Model fractures {#model}
---------------
![Schematic view of the experimental models. (a) fracture with flat parallel walls - (b) fracture with complementary self-affine walls with a relative displacement $\vec{u}$. For all cells : $w=90\ \mathrm{mm}$, $a_b =20\ \mathrm{mm}$, $a_e = 5\ \mathrm{mm}$, $l_e=52\ \mathrm{mm}$, $l_b \sim 20\ \mathrm{mm}$ and $L=288\ \mathrm{mm}$. The mean flow is vertical and parallel to $x$[]{data-label="fig:setup"}](figure1.eps){width="\W"}
The fracture models are manufactured with the same technique as in Ref. [@Boschan2006] by carving two parallelepipedic plexiglas blocks using a computer controlled milling machine. The two blocks are then clamped together in a preset position determined by the geometry of the sides of the block: these act as spacers leaving a controlled interval between the surfaces for the fluid flow (Fig. \[fig:setup\]). This procedure allows for the realization of model fractures with rough (or smooth) walls of arbitrary geometries and three of them have been used.
The first sample (referred to as $F1$) has smooth parallel plane walls separated by a fixed distance $a(x,y)=0.65\ \mathrm{mm}$ (Fig.
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'Central exclusive double diffractive Higgs boson production, $pp\to p \oplus H \oplus p$, is now recognised as an important search scenario for the LHC. We consider the case when the Higgs boson decays to two $W$ bosons, one of which may be off-mass-shell, that subsequently decay to the $q\bar q l \nu$ final state. An important background to this is from the QCD process $gg\to Wq\bar q$, where the two gluons are required to be in a $J_z=0$, colour-singlet state. We perform an explicit calculation and investigate the salient properties of this potentially important background process.'
---
IPPP/05/12\
DCPT/05/24\
13th April 2005\
[**Diffractive $W+2$ jet production: a background to\
exclusive $H\to WW$ production at hadron colliders**]{}
<span style="font-variant:small-caps;">V.A. Khoze$^{a,b}$, M.G. Ryskin$^{a,b}$ and W.J. Stirling$^{a,c}$</span>\
$^a$ Department of Physics and Institute for Particle Physics Phenomenology,\
University of Durham, DH1 3LE, UK\
$^b$ Petersburg Nuclear Physics Institute, Gatchina, St. Petersburg, 188300, Russia\
$^c$ Department of Mathematical Sciences, University of Durham, DH1 3LE, UK\
Introduction
============
Within the last few years the unique environment for investigating new physics using forward proton tagging at the LHC has become fully appreciated; see for example [@KMRProsp; @DKMOR; @KKMRext; @cox; @JE; @CR] and references therein. Of particular interest is the ‘central exclusive’ Higgs boson production $pp\to p \oplus H \oplus p$. The $\oplus$ signs are used to denote the presence of large rapidity gaps; here we will simply describe such processes as ‘exclusive’, with ‘double-diffractive’ production being implied. In these exclusive processes there is no hadronic activity between the outgoing protons and the decay products of the central system. The predictions for exclusive production are obtained by calculating the diagram of Fig. \[fig:H\] using perturbative QCD [@KMR; @KMRProsp]. In addition, we have to calculate and include the probability that the rapidity gaps are not populated by secondary hadrons from the underlying event [@KMRsoft].
There are three reasons why central exclusive production is so attractive. First, if the outgoing protons remain intact and scatter through small angles then, to a very good approximation, the primary active di-gluon system obeys a $J_z=0$, CP-even selection rule [@Liverpool; @KMRmm]. Here $J_z$ is the projection of the total angular momentum along the proton beam axis. This selection rule readily permits a clean determination of the quantum numbers of the observed Higgs-like resonance which will be dominantly produced in a scalar state. Secondly, because the process is exclusive, the energy loss of the outgoing protons is directly related to the mass of the central system, allowing a potentially excellent mass resolution, irrespective of the decay mode of the produced particle.[^1] And, thirdly, a signal-to-background ratio of order 1 (or even better) is achievable, even with a moderate luminosity of 30 fb$^{-1}$ [@DKMOR; @cox]. In some MSSM Higgs scenarios central exclusive production provides an opportunity for lineshape analysis [@KKMRext; @JE] and offers a way for direct observation of a CP-violating signal in the Higgs sector [@KMRCP; @JE]. The analysis in [@KMR; @DKMOR; @KKMRext] was focused primarily on light SM and MSSM Higgs production, with the Higgs decaying to 2 $b-$jets. The potentially copious $b-$jet (QCD) background is controlled by a combination of the spin-parity selection rules [@Liverpool; @KMRmm], which strongly suppress leading-order $b \bar b$ production, and the mass resolution from the forward proton detectors. The missing mass resolution is especially critical in controlling the background, since poor resolution would allow more background events into the mass window around the resonance. Assuming a mass window $\Delta M \sim 3 \sigma \sim 3-4$ GeV, it is estimated that 11 signal events, with a signal-to-background ratio of order 1, can be achieved with a luminosity of 30 fb$^{-1}$ in the $b \bar b$ decay channel [@KMRmm; @DKMOR].[^2] Whilst the $b \bar b$ channel is theoretically very attractive, allowing direct access to the dominant decay mode of the light Higgs boson, there are some basic problems which render it challenging from an experimental perspective; see [@ww] for more details. First, it relies heavily on the quality of the mass resolution from the proton taggers to suppress the background. Secondly, triggering on the relatively low-mass dijet signature of the $H \rightarrow b \bar b$ events is a challenge for the Level 1 triggers of both ATLAS and CMS. And, thirdly, this measurement requires double $b-$tagging, with a corresponding price to pay for tagging efficiencies. In Ref. [@ww], attention was turned to the $WW^*$ decay mode of the light Higgs boson, and above the 2 $W$ threshold, the $WW$ decay mode.[^3] This channel does not suffer from any of the above problems: suppression of the dominant backgrounds does not rely so strongly on the mass resolution of the detectors, and, certainly, in the semi-leptonic decay channel of the $WW$ system Level 1 triggering is not a problem. The advantages of forward proton tagging are, however, still explicit. Even for the double leptonic decay channel (i.e. with two leptons and two final state neutrinos), the mass resolution will be very good, and of course observation of the Higgs in the double tagged channel immediately establishes its quantum numbers. It is worth mentioning that the mass resolution should improve with increasing Higgs mass [@RO].
Moreover, the semileptonic ‘trigger cocktail’ may allow the combination of signals not only from $H\to WW$ decays but also from the $\tau\tau$, $ZZ$ and even the semileptonic $b-$decay channels.
The central exclusive production cross section for the Standard Model Higgs boson was calculated in [@KMR; @KMRProsp]. In Fig. \[fig:tanbeta\] we show the cross section for the process $pp \rightarrow p H p \rightarrow p WW p$ as a function of the Higgs mass $M_H$ at the LHC. The increasing branching ratio to $WW^{(*)}$ (from $12 \%$ at $M_H = 120$ GeV to $\sim 100 \%$ at $160$ GeV) as $M_H$ increases (see for example [@CH]) compensates for the falling central exclusive production cross section. For comparison, we also show the cross section times branching ratio for $pp \rightarrow p H p \rightarrow p b \bar b p$. Here, and in what follows, we use version 3.0 of the HDECAY code [@HDEC]. For reference purposes, the cross sections in Fig. \[fig:tanbeta\] are normalised in such a way that $\sigma_H = 3$ fb for $M_H = 120$ GeV.
Note also that nowadays there is renewed interest in MSSM scenarios with low $\tan\beta$. This is because the most recent value of the top quark mass weakens the low $\tan\beta$ exclusion bounds from LEP (see for example [@tanbeta]), and the experimental coverage of this range of the MSSM parameter space becomes more attractive. In Fig. \[fig:tanbeta\] we show the results for $\tan\beta=2,3,4$. Evidently the expected central exclusive diffractive production yield is promising in the low $\tan\beta$ region.
Experimentally, events with two $W$ bosons in the final state fall into 3 broad categories — fully-hadronic, semi-leptonic and fully-leptonic — depending on the decay modes of the $W$’s. Events in which at least one of the $W$s decays in either the electron or muon channel are by far the simplest, and Ref. [@ww] focuses mainly on these semi- and fully-leptonic modes. As mentioned above, one of the attractive features of the $WW$ channel is the absence of a relatively large irreducible background, [*cf.*]{} the large central exclusive $b \bar b$ QCD background in the case of $H \rightarrow b \bar b$, suppression of which relies strongly on the experimental missing mass resolution and di-jet identification. The primary exclusive backgrounds in the case of the $WW$ channel can be divided into two broad categories:
1. Central production of a $WW^*$ pair $pp\to p+(WW^*)+p$ from either the (a) $\gamma\gamma\to WW^*$ or (b) $gg^{PP}\to WW^*$ subprocess.
2. The $W$-strahlung process $pp\to p+Wjj+p$ originating in the $gg^{PP}\to Wq \bar q$ subprocess, where the $W^*$ is ‘faked’ by
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'The paper considers a coupled system of linear Boltzmann transport equations (BTE), and its Continuous Slowing Down Approximation (CSDA). This system can be used to model the relevant transport of particles used e.g. in dose calculation in radiation therapy. The evolution of charged particles (e.g. electrons and positrons) is in practice often modelled using the CSDA version of BTE because of the so-called forward peakedness of scattering events contributing to the particle fluencies (or particle densities), which causes severe problems in numerical methods. We shall verify, after the preliminary discussion, that CSDA-type modelling is actually necessary due to hyper-singularities in the differential cross-sections of certain interactions, that is, first-order partial derivatives with respect to energy must be included in the transport part of charged particles. The existence and uniqueness of (weak) solutions are shown, under sufficient criteria and in appropriate $L^2$-based spaces, for a single (particle) CSDA-equation by using three techniques, the Lions-Lax-Milgram Theorem (variational approach), the theory of $m$-dissipative operators and the theory of evolution operators (semigroup approach). The necessary a priori estimates are derived. In addition, we prove the corresponding results and estimates for the system of coupled transport equations. The related results are given for the adjoint problem as well. We also give some computational points (e.g. certain explicit formulas), and we outline a related inverse problem at the end of the paper.'
address:
- '$^1$University of Eastern Finland, Department of Applied Physics, Kuopio, Finland'
- '$^2$Varian Medical Systems Finland Oy, Helsinki, Finland'
- '$^3$RWTH Aachen University, MATHCCES, Schinkelstrasse 2, 52062, Germany'
- '$^4$RWTH Aachen University, IGPM, Templergraben 5, 52062, Germany'
author:
- 'J. Tervo$^1$'
- 'P. Kokkonen$^2$'
- 'M. Frank$^3$'
- 'M. Herty$^4$'
title: 'On existence of $L^2$-solutions of Coupled Boltzmann Continuous Slowing Down transport equation system'
---
Introduction {#intro}
============
The *Boltzmann transport equation* (BTE) models changes of the number density of particles in phase space (position, velocity direction, energy). In this paper the species of particles include photons, electrons and positrons, and the explored analysis of transport equations is mainly intended for dose calculation in radiation treatment planning. However, various other kinds of transport phenomena can be modelled by equations of similar type, including e.g. transport of particles in optical tomography ([@anikonov02], [@arridge]), in cosmic radiation ([@wilson]) and in solid state physics ([@madelung78]). For the general theory of linear BTE with relevant boundary conditions we refer to [@dautraylionsv6] and [@agoshkov]. See also [@case], [@cercignani], [@duderstadt], [@pomraning], where the subject is considered from a physical point of view. Some recent issues (including certain inverse problems) related to linear BTE can be found in [@mokhtarkharroubi], and general non-linear aspects e.g. in [@ukai], [@bellamo]. A thorough mathematical survey (mathematical and physical foundations, results, problems) of the non-linear collision theory of particle transport is given in [@villani]. This survey is mainly intended for collision processes in dilute gases and plasmas, but analogous results and problems arise in other fields of particle physics. Finally, for topics related to Monte–Carlo methods in the context of BTE, both from a theoretical and a practical point of view, we refer to [@lapeyre], [@seco] and [@spanier08].
Dose calculation is of crucial importance in radiation therapy. Relevant dose calculation models require (approximate) solution of a coupled system of (linear) transport equations for fluencies (number densities in the phase space) for all considered particles. This is a difficult problem, at least from a computational point of view, due to the different particle species and their dependence on a high–dimensional phase space. For that reason traditional dose calculation algorithms have applied some closed-form formulas which have their origins in analytical solutions, or Monte–Carlo derived solutions, of simplified problems. The latter, however, often contain empirically derived corrections to take the underlying particle physics more accurately into account ([@mayles07], [@seco]). Certain “factors” which account for e.g. the spatial inhomogeneities must be included to improve the accuracy of the final solution. These approaches lead to methods that are fast enough but typically have a limited accuracy. Commonly used models are based on the so-called *pencil beams*, or *point kernels*; see [@asadzadeh], [@borgers], [@larsen], [@mayles07], [@tillikainen08], [@ulmer] for more details. A notable exception to these approximate (deterministic) methods is the Acuros code [@vassiliev], which is based on a discretization of the BTE.
In radiation therapy BTE describes the evolution of radiative particles due to scattering and absorption in tissue. The dose delivery methods can be roughly divided into two categories. In *external therapy* the sources (below denoted by $g$) of high energy particles (usually photons, electrons or protons) are on patches of the patient’s surface. In *internal therapy*, on the other hand, the sources (below denoted by $f$) are inside the patient, close to the cancerous tissue. In the energy range, say up to 25 MeV, relevant for photon and electron therapy, the three species of particles whose simultaneous evolution should be taken into account in a realistic transport model are photons, electrons and positrons. In this setting, the potential creation of (or contamination by) other heavy particles will not be taken into account since their contribution to the dose is negligible (see [@seco]).
The transport of relevant particles in tissue (in an appropriate energy range) can be modelled by the following linear *coupled system of three BTEs* $$\omega\cdot\nabla_x\psi_j(x,\omega,E)+\Sigma_j(x,\omega,E)\psi_j(x,\omega,E)-(K_j\psi)(x,\omega,E)=f_j(x,\omega,E), \label{intro1}$$ for $j=1,2,3$, combined with an *inflow boundary condition* (for the definition of $\Gamma_-$, see section \[fs\]) $${\psi_j}_{|\Gamma_-}=g_j,\quad j=1,2,3, \label{intro2}$$ where for $j=1,2,3$, $$(K_j\psi)(x,\omega,E)=\sum_{k=1}^3\int_{S\times I}\sigma_{kj}(x,\omega',\omega,E',E)\psi_k(x,\omega',E')\,{\mathrm{d}}\omega'\,{\mathrm{d}}E'. \label{intro3}$$ For a derivation of linear BTE, see e.g. [@agoshkov], [@allaire12], [@duderstadt], [@stacey01]. The first term on the left in (\[intro1\]) is called a *convection (or advection) operator*, the second term is a *(total) scattering operator* and the third one is a *collision operator*. Notice that the (total) scattering operator $$\Sigma=\Sigma_{\rm t}=\Sigma_{\rm a}+\Sigma_{\rm s}$$ (we drop the index $j$ here to simplify the notation) contains contributions from both the absorption (term $\Sigma_{\rm a}$) and the scattering (term $\Sigma_{\rm s}$), see [@stacey01 Sec. 9.1]. On the right in (\[intro1\]), the functions $f_j$ represent (internal) sources and $g_j$ in (\[intro2\]) are (inflow) boundary sources. The system is coupled through the integral operators $K_j$ (unless, of course, $\sigma_{kj}=0$ for $j\not= k$). The solution $\psi=(\psi_1,\psi_2,\psi_3)$ of the problem (\[intro1\])-(\[intro2\]) is a vector-valued function whose components describe the radiation fluxes of photons, electrons and positrons, respectively. Roughly speaking, the flux $\psi(x,\omega,E)$ is the flux of energy through a surface located at $x$ and normal to the direction $\omega$. The particle number density $N$, which is another usual unknown in kinetic theory, is related to $\psi$ by $\psi = {\left\Vert v\right\Vert} N$, where ${\left\Vert v\right\Vert}$ is the particle speed ([@stacey01]), which is often relativistic.
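For orientation, once directions and energies are discretized, the coupling operator (\[intro3\]) at a fixed spatial point is a dense tensor contraction. A minimal numpy sketch (the grid shapes, the direction rule, and the uniform energy step are illustrative assumptions):

```python
import numpy as np

def collision(sigma, psi, w_dir, dE):
    """Quadrature sketch of the operator K in (3) at fixed x.
    sigma[k, j, m, p, n, q] ~ sigma_{kj}(omega'_m, omega_p, E'_n, E_q) on an
    M-point direction rule with weights w_dir (e.g. Lebedev) and an N-point
    uniform energy grid of step dE; psi[k, m, n] ~ psi_k(omega'_m, E'_n)."""
    return np.einsum("kjmpnq,kmn,m->jpq", sigma, psi, w_dir) * dE
```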
The equation (\[intro1\]) is a steady state counterpart of the dynamical equation $$\frac{1}{v_j}\frac{\partial\psi_j}{\partial t}+\omega\cdot\nabla_x\psi_j+\Sigma_j\psi_j-K_j\psi=f_j,\quad j=1,2,3, \label{intro4}$$ where $v_j$ is the velocity of the $j$-th particle type. In radiation therapy related applications, it is sufficient to consider the steady state equations because the flux $\psi$ reaches the steady state nearly instantly ([@borgers99]). The existence of solutions for the problem (\[intro1\]), (\[intro2\]), as well as for the time-dependent problem (\[intro4\]), (\[intro2\]) (with an appropriate initial condition) in $L^1$-based spaces has been studied in [@tervo14] (the results of which remain valid, after slight modifications, for any
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'We calculate the entropy in a trapped, resonantly interacting Fermi gas as a function of temperature for a wide range of magnetic fields between the BCS and Bose-Einstein condensation endpoints. This provides a basis for the important technique of adiabatic sweep thermometry, and serves to characterize quantitatively the evolution and nature of the excitations of the gas. The results are then used to calibrate the temperature in several ground breaking experiments on $^6$Li and $^{40}$K.'
author:
- Qijin Chen
- Jelena Stajic
- 'K. Levin'
title: Thermodynamics of Interacting Fermions in Atomic Traps
---
The claims [@Jin4; @Ketterle3a; @Thomas2a; @Grimm3a; @KetterleV] that superfluidity has been observed in fermionic atomic gases have generated great excitement. Varying a magnetic field, one effects a smooth evolution from BCS superfluidity to Bose-Einstein condensation (BEC) [@Eagles; @Leggett]. In this Letter we use a BCS-BEC crossover theory to study the entropy $S$ over the entire experimentally accessible crossover regime. Our goal is to help establish a methodology for obtaining the temperature $T$ of a strongly interacting Fermi gas via adiabatic sweeps. This addresses an essential need of the experimental cold atom community by providing a temperature calibration for their ground breaking experiments [@Jin3; @Grimm4a; @Ketterle3a]. In the process, we characterize quantitatively the evolution of the excitations and show how their character evolves smoothly from fermionic to bosonic, and conversely. In adiabatic sweeps, the starting $T$ at either a BEC or BCS endpoint is estimated from the “known” shape of the profile in the trapped cloud. Then, the temperature (near unitarity, say) is obtained by equating the entropy before the sweep to that in the strongly interacting regime after the sweep. Conventionally, the temperature scale which appears in the superfluid phase diagram [@Jin3; @Ketterle3a] involves an isentropic sweep between the unitary and the non-interacting Fermi gas regimes. The direction of the sweep is irrelevant in these reversible processes. The important experimental phase diagrams plot the condensate fraction $N_s/N$ near unitarity vs this Fermi gas-projected temperature, $T_{\text{eff}}$. In this paper, our thermodynamical calculations are used to relate the actual physical temperatures $T$ to $T_{\text{eff}}$, where, in general, $T$ is significantly greater than $T_{\text{eff}}$. A calculation of $N_s(T)$ is simultaneously undertaken [@footnoteonN0; @ourreview], which provides an important self-consistency condition on the thermodynamics, since the same excitations appear in both. Moreover, a calculation of $N_s$ has to be done with proper attention paid to collective modes and gauge invariance [@Kosztin2]. Here we address the various condensate fractions found experimentally [@Jin4; @Ketterle3a] (with emphasis on $^6$Li), as a function of $T_{\text{eff}}$, in the experimental range of magnetic fields.
Our work is based on the BCS-Leggett ground state [@Eagles; @Leggett] and its finite $T$ extension [@ourreview]. Four different classes of experiments have been successfully addressed in this framework. These include (i) $T \approx 0$ breathing modes experiments [@Thomas2a; @Grimm3a] and theory [@Tosia; @Heiselberg], (ii) radio frequency (RF) pairing gap experiments [@Grimm4a] and theory [@Torma2; @yanhe], and (iii) $T$-dependent density profiles [@JS5]. Finally, (iv) plots of the energy $E$ vs $T$ at unitarity [@ThermoScience] yield very good agreement with experiment and serve to calibrate the present thermometry. Two well-known weaknesses of the mean field approach (an underestimate of $\beta$ and an overestimate of the inter-boson scattering length $a_B$ in the deep BEC regime) should be noted. The first affects $E(T)$ but not $S(T)$. However, for the second we introduce a caveat: if the initial endpoint of sweep thermometry is sufficiently deep in the BEC regime (say, $k_F a \leq 0.3$), the accuracy of the final temperature we calculate for the unitary regime could be improved by computing the initial $S$ in the deep BEC regime using a pure-boson model with $a_B$ set by hand to the Petrov result [@Petrov].
Because previous thermodynamic theories did not address unitarity, it has not been possible until now to arrive at a temperature scale in the experimentally interesting resonant superfluid regime. Carr *et al.* [@Carr; @Carr3] calculated $S$ at the BCS and weakly interacting, deep-BEC endpoints. The latter true Bose limit which they considered does not appear to be appropriate to current collective mode experiments [@Thomas2a; @Grimm3a], which show [@Tosia; @Heiselberg] that for physically accessible (i.e., near-BEC) fields, fermions are playing an important role. Thus, the BCS-Leggett ground state appears to be more appropriate than one deriving from Bose-liquid-based theory. Williams *et al.* [@Williams] calculated $S$ for a BCS-BEC crossover theory using a mixture of noninteracting fermions and bosons [@Williams]. This work omits the important, self-consistently determined fermionic excitation gap $\Delta$, which is an essential component for describing the thermodynamics of fermionic superfluids.
Our thermodynamical calculations focus on this self-consistently determined $\Delta$; they are based, for completeness, on a two-channel Hamiltonian [@ourreview; @Griffin; @Milstein]. Here $\Delta$ appears in the fermionic dispersion ${E_{\mathbf{k}}}= \sqrt{({\epsilon_{\mathbf{k}}}-\mu)^2 + \Delta^2}$. (We define ${\epsilon_{\mathbf{k}}}=\hbar^2 k^2/2m$ as the kinetic energy of free atoms, and $\mu$ the fermionic chemical potential.) Importantly, this $\Delta$ provides a measure of bosonic degrees of freedom. In the fermionic regime ($\mu > 0)$, $\Delta$ is just the energy required to dissociate the pairs and thereby excite fermions. At finite $T$, the closed-channel molecular bosons and the open-channel finite momentum Cooper pairs are strongly hybridized with each other, making up the “bosonic" excitations which contribute to thermodynamics.
Our many-body formalism has been described below the superfluid transition temperature $T_c$ [@ourreview]. The parameter $\Delta$ (when squared) is the analogue of the total number of particles in the simplest theory of BEC. Just as in BEC, there are two self-consistency conditions: (i) the effective chemical potential of the pairs, $\mu_{pair}$, is zero for $ T \leq T_c$ (as is that of the closed-channel molecular bosons $\mu_{mb}$), and (ii) the number of pairs, reflected in $\Delta^2(T)$, contains two additive contributions representing condensed ($\tilde{\Delta}_{sc}^2$) and noncondensed ($\Delta_{pg}^2$) pairs. The first condition implies that $\Delta(T)$ satisfies a BCS-like gap equation. Then, the condensate is deduced, just as in BEC, by determining the difference between $\Delta^2$ and $\Delta_{pg}^2$. In this approach the hybridized pairs have dispersion ${\Omega_{\mathbf{q}}}= \hbar^2 q^2/2M^*$, with effective pair mass $M^*$.
We now extend this approach above $T_c$. Our first equation represents the important defining condition on $\mu_{pair}$: that the inverse pair propagator (or $T$-matrix) $\left. t^{-1}(Q)\right|_{Q \equiv 0} = Z \mu_{pair}$, with (inverse) “residue” $Z$. While in the superfluid regions $\mu_{pair}$ and $\mu_{mb}$ vanish, in general, we have $$U^{-1}_{eff}(0) + \sum_{\bf k}
\frac{1-2 f({E_{\mathbf{k}}})}{2 {E_{\mathbf{k}}}}= Z\mu_{pair} \,,
\label{eq:1}$$ where $U_{eff}(0)=U+g^2/(2\mu-\nu)$ involves the sum of the direct attraction $U$ between open-channel fermions, as well as the virtual processes associated with the Feshbach resonance. Here $f(x)$ is the Fermi distribution function. The determination of the inter-channel coupling constant, $g$, and the magnetic field detuning, $\nu$, is described elsewhere [@ClosedChannel], as are the residues $Z$ and $Z_b$ [@ourreview]. The contribution from hybridized bosons will lead to a normal state excitation gap [@JS2a; @ourreview; @Grimm4a; @Jin5] or pseudogap (pg). This can be written in terms of the Bose distribution function $b(x)$ as $$\Delta_{pg}^2=Z^{-1} \sum_{\bf q}\, b(\Omega_q -\mu_{pair})\,.
\label{eq:2}$$
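As an aside on evaluating such expressions, in the continuum limit the momentum sum in (\[eq:2\]) reduces to a one-dimensional radial integral. A brief numerical sketch (units $\hbar=k_B=$ volume $=1$; the parameter values in the example call are made up, not fits):

```python
import numpy as np
from scipy.integrate import quad

def delta_pg_sq(T, mu_pair, M_star, Z):
    """Continuum version of Eq. (2):
    Delta_pg^2 = Z^{-1} Int d^3q/(2 pi)^3 b(q^2/(2 M*) - mu_pair),
    with mu_pair = 0 at and below T_c and mu_pair < 0 above."""
    b = lambda x: 1.0 / np.expm1(x / T)   # Bose distribution
    integrand = lambda q: q * q / (2.0 * np.pi ** 2) * b(q * q / (2.0 * M_star) - mu_pair)
    val, _ = quad(integrand, 0.0, np.inf)
    return val / Z

print(delta_pg_sq(T=0.1, mu_pair=-0.01, M_star=2.0, Z=1.0))
```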
We use the
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'Electroweak, gluon, and gravity fields arise as gauge fields from probabilities of physical events.'
author:
- |
G. Quznetsov\
gunn@mail.ru, gunn@chelcom.ru
date: 'August 17, 2007'
title: |
All four forces without superstrings.\
(Other game in town)
---
Introduction
============
As is obvious, string theory is arriving at its logical finish [@Sch]. But it is possible that Nature is made more simply and more naturally than that. In this article I propose the deduction of the electroweak, gluon, and gravity forces from the representation of physical events’ probabilities by spinors, as an “other game in town” [@Sch1].
Electroweak fields
==================
Let $\left\langle \rho \left( \underline{x}\right) ,j_1\left( \underline{x}%
\right) ,j_2\left( \underline{x}\right) ,j_3\left( \underline{x}\right)
\right\rangle =\left\langle \rho \left(t,\mathbf{x}\right) ,\mathbf{j}%
\left( t,\mathbf{x}\right) \right\rangle $ be a probability density 3+1-vector of any physical event [@Q0].
Complex functions $\varphi _1\left( \underline{x}%
\right) $, $\varphi _2\left( \underline{x}\right) $, $\varphi _3\left(
\underline{x}\right) $, $\varphi _4\left( \underline{x}\right) $ exist [@Q1] such that
$$\begin{aligned}
\rho &=&\sum_{s=1}^4\varphi _s^{*}\varphi _s\mbox{,}
\label{j} \\
\frac{j_{\alpha}}{\mathrm{c}} &=&-\sum_{k=1}^4\sum_{s=1}^4%
\varphi _s^{*}\beta _{s,k}^{\left[ \alpha \right] }\varphi _k \nonumber\end{aligned}$$
for every such density vector. Here $\alpha \in \left\{ 1,2,3\right\} $ and $\beta ^{\left[ \alpha \right] }$ are diagonal elements of the light Clifford’s pentad [@Q2].
If
$$\varphi =\left[
\begin{array}{c}
\varphi _1 \\
\varphi _2 \\
\varphi _3 \\
\varphi _4
\end{array}
\right]$$
then [@Q4]
$$\frac 1{\mathrm{c}}\partial _t\varphi +\left( \mathrm{i}\Theta _0+\mathrm{i}%
\Upsilon _0\gamma ^{\left[ 5\right] }\right) \varphi =\left(
\begin{array}{c}
\beta ^{\left[ 1\right] }\partial _1+\mathrm{i}\Theta _1\beta ^{\left[
1\right] }+\mathrm{i}\Upsilon _1\beta ^{\left[ 1\right] }\gamma ^{\left[
5\right] }+ \\
+\beta ^{\left[ 2\right] }\partial _2+\mathrm{i}\Theta _2\beta ^{\left[
2\right] }+\mathrm{i}\Upsilon _2\beta ^{\left[ 2\right] }\gamma ^{\left[
5\right] }+ \\
+\beta ^{\left[ 3\right] }\partial _3+\mathrm{i}\Theta _3\beta ^{\left[
3\right] }+\mathrm{i}\Upsilon _3\beta ^{\left[ 3\right] }\gamma ^{\left[
5\right] }+ \\
+\mathrm{i}M_0\gamma ^{\left[ 0\right] }+\mathrm{i}M_4\beta ^{\left[
4\right] }- \\
-\mathrm{i}M_{\zeta ,0}\gamma _\zeta ^{[0]}+\mathrm{i}M_{\zeta ,4}\zeta
^{[4]}- \\
-\mathrm{i}M_{\eta ,0}\gamma _\eta ^{[0]}-\mathrm{i}M_{\eta ,4}\eta ^{[4]}+
\\
+\mathrm{i}M_{\theta ,0}\gamma _\theta ^{[0]}+\mathrm{i}M_{\theta ,4}\theta
^{[4]}
\end{array}
\right) \varphi \mbox{.} \label{ham0}$$
with real $\Theta _k$, $\Upsilon _k$, $M_0$, $M_4$, $M_{\zeta ,0}$, $%
M_{\zeta ,4}$, $M_{\eta ,0}$, $M_{\eta ,4}$, $M_{\theta ,0}$, $M_{\theta ,4}$ and $$\gamma ^{\left[ 5\right] }\stackrel{def}{=}\left[
\begin{array}{cc}
1_2 & 0_2 \\
0_2 & -1_2
\end{array}
\right] \mbox{,} \label{g5}$$
here $\gamma ^{\left[ 0\right] }$, $\beta ^{\left[ 4\right] }$ are antidiagonal elements of the light Clifford’s pentad, and $\gamma _\zeta
^{[0]}$, $\zeta ^{[4]}$, $\gamma _\eta ^{[0]}$, $\eta ^{[4]}$, $\gamma
_\theta ^{[0]}$, $\theta ^{[4]}$ are antidiagonal elements of colored Clifford’s pentads [@Q3].
If $M_{\zeta ,0}=0$, $M_{\zeta ,4}=0$, $M_{\eta ,0}=0$, $M_{\eta ,4}=0$, $%
M_{\theta ,0}=0$, $M_{\theta ,4}=0$, then the Dirac equation of motion for a lepton is derived from (\[ham0\]):
$$\left( \mathrm{i}\frac 1{\mathrm{c}}\partial _t-\Theta _0-\Upsilon _0\gamma
^{\left[ 5\right] }\right) \varphi =\sum_{k=1}^3\left( \beta ^{\left[
k\right] }\left( \mathrm{i}\partial _k-\Theta _k-\Upsilon _k\gamma ^{\left[
5\right] }\right) -m\gamma \right) \varphi \label{eq3}$$
with $m=\sqrt{M_0^2+M_4^2}$ and $\gamma =\left( \frac{M_0}{\sqrt{M_0^2+M_4^2}%
}\gamma ^{\left[ 0\right] }+\frac{M_4}{\sqrt{M_0^2+M_4^2}}\beta ^{\left[
4\right] }\right) $.
Let $x_4$, $x_5$ be some real variables such that
$$-\frac {\pi\mathrm{c}}{\mathrm{h}}\leq x_5\leq \frac {\pi\mathrm{c}}{\mathrm{%
h}},-\frac {\pi\mathrm{c}} {\mathrm{h}}\leq x_4\leq \frac {\pi\mathrm{c}}{%
\mathrm{h}}\mbox{.}$$
and let
$$\begin{aligned}
\widetilde{\varphi }\left( t,x_1,x_2,x_3,x_5,x_4\right) \stackrel{def}{=}%
\varphi \left( t,x_1,x_2,x_3\right) \cdot \nonumber \\
\cdot \left( \exp \left(\mathrm{i}\left( x_5M_0\left( t,x_1,x_2,x_3\right)
+x_4M_4\left( t,x_1,x_2,x_3\right) \right) \right) \right) \mbox{.}
\nonumber\end{aligned}$$
In this case $\widetilde{\varphi }$ obeys the following equation of motion:
$$\left( \sum_{s=0}^3\beta ^{\left[ s\right] }\left( \mathrm{i}\partial
_s-\Theta _s-\Upsilon _s\gamma ^{\left[ 5\right] }\right) -\gamma ^{\left[
0\right] }\mathrm{i}\partial _5-\beta ^{\left[ 4\right] }\mathrm{i}\partial
_4\right) \widetilde{\varphi }=0$$
(here $\beta ^{\left[ 0\right] }=-1$).
This equation can be transformed into the following form [@Q5]:
$$\left( \sum_{s=0}^3\beta ^{\left[ s\right] }\left( \mathrm{i}\partial
_s+F_s+0.5g_1YB_s\right) -\gamma ^{\left[ 0\right] }\mathrm{i}\partial
_5-\beta ^{\left[ 4\right] }\mathrm{i}\partial _4\right
|
{
"pile_set_name": "ArXiv"
}
|
**NEW INSIGHTS INTO THE PRODUCTION**
**OF HEAVY QUARKONIUM[^1]**
ERIC BRAATEN[^2]
*Department of Physics and Astronomy, Northwestern University*
*Evanston IL 60208 USA*
E-mail: braaten@nuhep.phys.nwu.edu
Introduction
============
The typical high energy physics conference these days includes talk after talk showing remarkable agreement between experiment and theory. There is an occasional two-sigma discrepancy, but most such problems will go away if you have the patience to wait for better data. However there is one problem where experimental results have differed from theoretical predictions by orders of magnitude. This problem is the production of charmonium at large transverse momentum at the Tevatron.
Color-singlet Model
===================
Until recently, the conventional wisdom on the production of heavy quarkonium was based primarily on the [*color-singlet model*]{}.[@schuler] In this model, the cross section for producing a charmonium state is proportional to the perturbative cross section for producing a color-singlet $c \bar c$ pair with vanishing relative momentum and with appropriate angular-momentum quantum numbers: $^1S_0$ for $\eta_c$, $^3S_1$ for $J/\psi$ and $\psi'$, $^3P_J$ for $\chi_{cJ}$, etc. The color-singlet model has great predictive power. The cross section for producing a quarkonium state in any high energy process is predicted in terms of a single nonperturbative parameter for each orbital-angular-momentum multiplet. The nonperturbative factor is $|R(0)|^2$ for S-wave states, $|R'(0)|^2$ for P-wave states, etc., where $R(r)$ is the radial wavefunction. For example, the inclusive differential cross sections for producing $J/\psi$ and $\chi_{cJ}$ in the color-singlet model have the form $$\begin{aligned}
d \sigma (\psi + X) &=&
d \widehat{\sigma}(c \bar c(\underline{1},{}^3S_1) + X) \; |R_\psi(0)|^2 ,
\\
d \sigma (\chi_{cJ} + X) &=&
d \widehat{\sigma}(c \bar c(\underline{1},{}^3P_J) + X) \; |R_{\chi_c}'(0)|^2 .
\label{csm-P}\end{aligned}$$ As the name suggests, the color-singlet model is not a complete theory of quarkonium production derived from QCD. The model ignores relativistic corrections, which take into account the nonzero relative velocity $v$ of the quark and antiquark. These corrections may be numerically significant, since the average value of $v^2$ is only about 1/3 for charmonium and 1/10 for bottomonium. The color-singlet model also assumes that a $c \bar c$ pair produced in a color-octet state will never bind to form charmonium. This assumption must break down at some level, since a color-octet $c \bar c$ pair can make a nonperturbative transition to a color-singlet state by radiating a soft gluon. The clearest evidence that the color-singlet model is incomplete comes from radiative corrections. In the case of S-waves, these can be calculated consistently within the color-singlet model. However, in the case of P-waves, the radiative corrections contain infrared divergences that cannot be factored into $|R'(0)|^2$. This problem was first noted in 1976 in connection with the decays of $\chi_c$ states,[@barbieri] but it was solved only recently.[@bbly] The divergence arises from the radiation of a soft gluon from either the quark or the antiquark that form the color-singlet $^3P_J$ bound state. The infrared divergence can be factored into a matrix element $\langle {\cal O}^{\chi_c}_8(^3S_1) \rangle$ that is proportional to the probability for a pointlike $c \bar c$ pair in a color-octet $^3S_1$ state to form $\chi_c$ plus anything. Thus, perturbative consistency demands that the formula (\[csm-P\]) of the color-singlet model be modified to take into account the nonzero probability for a $c \bar c$ pair produced in a color-octet state to bind to form charmonium: $$\begin{aligned}
d \sigma (\chi_{cJ} + X) &=&
d \widehat{\sigma}(c \bar c(\underline{1},{}^3P_J) + X) \; |R'_{\chi_c}(0)|^2
\nonumber \\
&& \;+\; (2J+1) \;
d \widehat{\sigma}(c \bar c(\underline{8},{}^3S_1) + X) \;
\langle {\cal O}^{\chi_c}_8(^3S_1) \rangle .
\label{Pwave}\end{aligned}$$ The color-singlet model can be used to predict the production rate of charmonium at large transverse momentum in hadron colliders. The first thorough treatment of this problem was given by Baier and Rückl in 1981, and their analysis remained the conventional wisdom for the next decade.[@baier-ruckl] A $\psi$ with large $p_T$ can be produced either directly, or from a $\chi_{cJ}$ with large $p_T$ that decays via $\chi_{cJ} \to \psi \gamma$, or by the decay of a $B$ hadron with large $p_T$. Baier and Rückl assumed that the direct production of charmonium is dominated by the parton processes that are lowest order in the QCD coupling constant $\alpha_s$. The relevant parton processes that produce $c \bar c$ pairs at large $p_T$ are $g g \to c \bar c + g$, $g q \to c \bar c + q$, $g \bar q \to c \bar c + \bar q$, and $q \bar q \to c \bar c + g$. The cross sections $d \widehat{\sigma}$ for these processes are all of order $\alpha_s^3$, but they have different dependences on $p_T$. The only parton process that produces direct $\psi$ is $g g \to c \bar c + g$, and it gives a cross section that has the behavior $d \widehat{\sigma}/d p_T^2 \sim 1/p_T^8$ at large $p_T$. The dominant parton process for direct $\chi_{cJ}$ is $g g \to c \bar c + g$, and it gives $d \widehat{\sigma}/d p_T^2 \sim 1/p_T^6$. Both of these cross sections fall more rapidly with $p_T$ than typical jet production cross sections, which behave like $d \widehat{\sigma}/d p_T^2 \sim 1/p_T^4$. The cross section for $b$ quark production has this scaling behavior when $p_T \gg m_b$. Thus the conventional wisdom was that $\psi$’s at large $p_T$ should come predominantly from $b$ quarks, with direct $\chi_{cJ}$’s being the next most important source, and direct $\psi$’s being negligible. This conventional wisdom has been completely overthrown by recent experimental data from the Tevatron.
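The scaling argument above is easy to make concrete; the snippet below simply tabulates the quoted power laws, with each channel arbitrarily normalized to unity at $p_T = 5$ GeV (an illustrative choice, not a fit to data).

```python
import numpy as np

pT = np.linspace(5.0, 20.0, 4)   # GeV, illustrative range
# Normalize each channel to 1 at pT = 5 GeV (arbitrary units):
direct_psi = (5.0 / pT) ** 8     # gg -> ccbar + g, ~ 1/pT^8
direct_chi = (5.0 / pT) ** 6     # dominant chi_cJ channel, ~ 1/pT^6
b_quark    = (5.0 / pT) ** 4     # jet-like b production, ~ 1/pT^4

for row in zip(pT, direct_psi, direct_chi, b_quark):
    print("pT=%5.1f  psi:%9.2e  chi:%9.2e  b:%9.2e" % row)
# At large pT the 1/pT^4 channel dominates, which was the conventional wisdom.
```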
Prompt Charmonium at the Tevatron
=================================
In the 1988-89 run of the Tevatron, the cross section for $\psi$ production at large $p_T$ was measured by the CDF collaboration. They also measured the cross section for $\chi_c$ at large $p_T$. Assuming the conventional wisdom that $\psi$’s at large $p_T$ come predominantly from $b$ quarks and from direct $\chi_c$’s, they inferred that the fraction $f_b$ of $\psi$’s that come from $b$ quarks was about 60%.[@CDF] This number was then used to determine the $b$-quark cross section. Unfortunately, the conventional wisdom proved to be wrong. The fraction $f_b$ is actually closer to $15 \%$, and the $b$ quark cross section is significantly smaller than the result obtained in the 1988-89 run.
The breakthrough came in the 1992-93 run of the Tevatron, with the installation of a silicon vertex detector at CDF. This device can be used to measure the separation between the collision point of the $p$ and $\bar p$ and the point where the $\psi$ decays into leptons. If a $\psi$ with large $p_T$ is produced by QCD mechanisms, then the leptons from its decay will trace back to the $p \bar p$ collision point and the $\psi$ is called [*prompt*]{}. Similarly, if a $\chi_c$ is produced by QCD mechanisms and decays radiatively, the resulting $\psi$ is also prompt. On the other hand, the $\psi$’s coming from $b$ quarks are not prompt. A $B$ hadron with large $p_T$ will travel a distance on the
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'We analyze multidimensional BSDEs in a filtration that supports a Brownian motion and a Poisson random measure. Under a monotonicity assumption on the driver, the paper extends several results from the literature. We establish existence and uniqueness of solutions in $L^p$ provided that the generator and the terminal condition satisfy appropriate integrability conditions. The analysis is first carried out under a deterministic time horizon, and then generalized to random time horizons given by a stopping time with respect to the underlying filtration. Moreover, we provide a comparison principle in dimension one.'
author:
- 'T. Kruse [^1] , A. Popier [^2]'
bibliography:
- 'biblio\_revised\_version.bib'
title: BSDEs with monotone generator driven by Brownian and Poisson noises in a general filtration
---
Introduction {#introduction .unnumbered}
============
The notion of nonlinear backward stochastic differential equations (BSDEs for short) was introduced by Pardoux and Peng [@pard:peng:90]. A solution of this equation, associated with a [*terminal value*]{} $\xi$ and a [*generator or driver*]{} $f(t,\omega,y,z)$, is a pair of stochastic processes $(Y_t,Z_t)_{t\leq T}$ such that $$\label{eq:standbsde}
Y_t=\xi+\int_t^Tf(s,Y_s,Z_s)ds-\int_t^TZ_sdW_s,$$ a.s. for all $t\le T$, where $W$ is a Brownian motion and the processes $(Y_t,Z_t)_{t\leq T}$ are adapted to the natural filtration of $W$.
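For orientation, here is a minimal least-squares Monte Carlo sketch of a backward Euler scheme for the standard Brownian BSDE above. The terminal condition, the driver, and the polynomial regression used for the conditional expectations are illustrative assumptions, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, M = 1.0, 50, 20000                    # horizon, time steps, MC paths
dt = T / N
dW = rng.normal(0.0, np.sqrt(dt), size=(M, N))
W = np.concatenate([np.zeros((M, 1)), np.cumsum(dW, axis=1)], axis=1)

g = lambda w: np.cos(w)                     # terminal condition xi = g(W_T)
f = lambda t, y, z: -0.5 * y                # Lipschitz driver f(t, y, z)

Y = g(W[:, -1])
for i in range(N - 1, 0, -1):
    x = W[:, i]
    # Conditional expectations E[ . | F_{t_i}] via degree-4 polynomial regression:
    Zi = np.polyval(np.polyfit(x, Y * dW[:, i], 4), x) / dt
    Y = np.polyval(np.polyfit(x, Y + f(i * dt, Y, Zi) * dt, 4), x)

Z0 = (Y * dW[:, 0]).mean() / dt             # at t = 0 the filtration is trivial
Y0 = (Y + f(0.0, Y, Z0) * dt).mean()
print("Y_0 ~ %.4f (exact value e^{-1} ~ 0.3679 for this toy choice)" % Y0)
```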
In their seminal work [@pard:peng:90], Pardoux and Peng proved existence and uniqueness of a solution under suitable assumptions, mainly square integrability of $\xi$ and of the process $(f(t,\omega,0,0))_{t\leq T}$, on the one hand, and, the Lipschitz property w.r.t. $(y,z)$ of the generator $f$, on the other hand. Since this first result, BSDEs have proved to be a powerful tool for formulating and solving a lot of mathematical problems arising for example in finance (see e.g. [@barr:elka:05; @elka:peng:quen:97; @roug:elka:00]), stochastic control and differential games (see e.g. [@hama:lepe:95; @hama:lepe:peng:97]), or partial differential equations (see e.g. [@pard:99; @pard:peng:92]).
Main results {#main-results .unnumbered}
------------
The aim of this paper is to establish existence and uniqueness of solutions to BSDE in a general filtration that supports a Brownian motion $W$ and an independent Poisson random measure $\pi$. We consider the following multi-dimensional BSDE: $$\label{eq:gene_BSDE}
Y_t = \xi + \int_t^T f(s,Y_s, Z_s,\psi_s) ds - \int_t^T\int_\cU \psi_s(u) \tpi(du,ds) -\int_t^TZ_sdW_s- \int_t^T dM_s.$$ The solution is given by the usual triple $(Y,Z,\psi)$ and also an orthogonal local martingale $M$ which cannot be reconstructed from the integrals w.r.t. the Brownian and Poisson noise. We assume that the generator $f$ is monotonic (one-sided Lipschitz continuous) w.r.t. the $y$-variable and Lipschitz continuous w.r.t. $z$ and $\psi$. Under the condition that the data $\xi$ and $f(t,0,0,0)$ are in $L^p$, $p > 1$, we provide existence and uniqueness results in $L^p$ spaces (the precise definition will be given in Section \[sect:setting\]).
Further contributions are a comparison result in dimension one, and existence and uniqueness when the terminal time is a not necessarily bounded stopping time.
Related literature {#related-literature .unnumbered}
------------------
There are already a lot of works which provide existence and uniqueness results under weaker assumptions than the ones of Pardoux and Peng [@pard:peng:90] or El Karoui et al [@elka:kapo:pard:97]. A huge part of the literature focuses on weakening the Lipschitz property of the coefficient $f$ w.r.t. the $y$-variable. For example, Briand and Carmona [@bria:carm:00] and Pardoux [@pard:99] consider the case of a monotonic generator w.r.t. $y$ with different growth conditions. There have been relatively few papers which deal with the problem of existence and uniqueness of solutions in the case where the coefficients are not square integrable. El Karoui et al. [@elka:peng:quen:97] and Briand et al. [@bria:dely:hu:03] have proved existence and uniqueness of a solution for the standard BSDE in the case where the data belong only to $L^p$ for some $p\geq 1$.
Another strand of research in the theory of BSDEs concerns the underlying filtration. In [@pard:peng:90] the filtration is generated by the Brownian motion $W$. Since the work of Tang and Li [@tang:li:94], a lot of papers (see e.g. [@barl:buck:pard:97; @bech:06; @morl:10; @pard:97; @royer:06] or the books of Situ [@situ:05] or recently of Delong [@delo:13]) treat the case where the filtration is generated by the Brownian motion $W$ and a Poisson random measure $\pi$ independent of $W$. In most of these papers, the generator $f$ is supposed to be Lipschitz in $y$, even if the monotonic case is mentioned (see [@royer:06]) and all coefficients are square integrable. Yao [@yao:10] studies the $L^p$ case, $p>1$, and gives an existence and uniqueness result in the case where the generator is monotone but with at most linear growth w.r.t. $y$. Li and Wei [@li:wei:14] give existence and uniqueness results for a fully coupled forward-backward SDE under some monotonicity condition and $L^p$ coefficients, $p\geq 2$. Note that this monotonicity condition involves the coefficients of the forward diffusion and is not the same as the assumption imposed on the generator in this paper. An extension to BSDEs driven by a continuous local martingale $X$ and an integer-valued random measure $\pi$ has been studied by Xia [@xia:00]. Xia supposes that the filtration satisfies the representation property with respect to $X$ and $\pi$ and that the driver is Lipschitz continuous and square integrable.
For more general filtrations, the representation property of a local martingale is no longer true (see Section III.4 in [@jaco:shir:03]) and an additional (orthogonal) martingale term has to be introduced in the definition of a solution. This approach was developed in the seminal work of El Karoui and Huang [@elka:huan:97] and by Carbone et al. [@carb:ferr:sant:07] for càdlàg martingales. The filtration $\bF$ is supposed to be complete, right continuous and quasi-left continuous. For a given square integrable martingale $X$ ($\langle X \rangle$ denotes the predictable projection of the quadratic variation), the BSDE becomes $$\label{eq:bsde_gene_filt}
Y_t=\xi+\int_t^T f(s,Y_s,Z_s)d\langle X \rangle_s-\int_t^T Z_s dX_s - M_T + M_t.$$ The solution is now the triple $(Y,Z,M)$ where $M$ is a square integrable martingale orthogonal to $X$. Øksendal and Zhang [@okse:zhan:12] analyse BSDEs of the form (\[eq:bsde\_gene\_filt\]) where $f$ does not depend on $z$, with an application to insider finance (see also Ceci et al. [@ceci:cret:russ:14]). Liang et al. [@lian:lyon:qian:11] also obtain results for a particular class of BSDE on an arbitrary filtered probability space. In these papers, existence and uniqueness of the solution of (\[eq:bsde\_gene\_filt\]) is proved for a Lipschitz continuous function $f$ and under a square integrability condition (in [@okse:zhan:12] the monotone case is treated but $f$ does not depend on $z$). The Hilbertian structure of $L^2(\Omega,\F_T,\P)$ is used in Cohen and Elliott [@cohe:elli:12] (see also [@klim:14]). If $L^2(\Omega,\F_T,\P)$ is a separable Hilbert space, then an orthogonal basis of martingales can be introduced instead of $X$ and there is no additional orthogonal term $M$ in (\[eq:bsde\_gene\_filt\]). $Z$ becomes a sequence of predictable processes. The special case of a Lévy noise was treated earlier by Nualart and
|
{
"pile_set_name": "ArXiv"
}
|
March 2015\
revised July 2015
[ **Four-Dimensional Entropy from Three-Dimensional Gravity**]{}\
[**Abstract**]{}
[At the horizon of a black hole, the action of (3+1)-dimensional loop quantum gravity acquires a boundary term that is formally identical to an action for three-dimensional gravity. I show how to use this correspondence to obtain the entropy of the (3+1)-dimensional black hole from well-understood conformal field theory computations of the entropy in (2+1)-dimensional de Sitter space. ]{}
The ability to explain black hole thermodynamics is a key test of any quantum theory of gravity. In this regard, loop quantum gravity has a mixed record. The correct area dependence of black hole entropy appears quite naturally [@ABCK; @ABK]. But to obtain quantitative agreement with the semiclassical results of Bekenstein and Hawking, it seems necessary to tune a rather mysterious parameter, the Barbero-Immirzi parameter $\gamma$, to a peculiar value determined by a complex combinatorial computation [@Domagala; @Meissner].
In the past few years, there have been intriguing hints that the entropy can also be obtained by setting $\gamma=i$ [@Geiller; @Achour; @GhoshPran; @Carlipz]. This is the natural value: it makes the theory self-dual [@Ashtekar], and is the only choice for which the Ashtekar-Barbero-Sen connection (\[a1\]) is a fully diffeomorphism-invariant spacetime connection [@Samuel; @Alexa]. Unfortunately, with this choice one must impose reality conditions, a procedure that remains poorly defined. As a consequence, the theory with $\gamma=i$ is not nearly as mathematically sophisticated as the version with real $\gamma$, and far fewer results have been established.
In this paper, I will describe a simple new method for computing black hole entropy in loop quantum gravity with $\gamma=i$. The key observation is that loop quantum gravity requires a boundary term at a black hole horizon that is formally identical to an action for three-dimensional gravity with a positive cosmological constant. The identification is not an obvious geometric one, but the four-dimensional horizon maps to a well understood three-dimensional spacetime, and one can exploit this association to use standard techniques of conformal field theory to count the states.
Two $\hbox{SL}(2,\mathbb{C})$ actions
=====================================
We start with (3+1)-dimensional gravity in first-order form, treating the tetrad one-form $e^I = e_\mu{}^Idx^\mu$ and the spin connection one-form $\omega^{IJ} =\omega_\mu{}^{IJ}dx^\mu$ as independent variables. The Ashtekar-Sen self-dual connection [@Ashtekar; @Sen] is $A^{IJ} = \frac{1}{2}\left(\omega^{IJ} + \frac{i}{2}\epsilon^{IJ}{}_{KL}\omega^{KL} \right)$, but to avoid double-counting components, it is sufficient to consider the complexified $\hbox{SU}(2)$—or equivalently, $\hbox{SL}(2,\mathbb{C})$—connection $$\begin{aligned}
A^i = i\omega^{0i} + \frac{1}{2}\epsilon^{ijk}\omega_{jk} ,
\label{a1}\end{aligned}$$ where lower case Roman indices run from $1$ to $3$ (see, for instance, section 4.3 of [@Rovelli]). The gravitational action can then be written in the form [@JacobsonSmolin; @Samuelb] $$\begin{aligned}
I_4 = -\frac{i}{16\pi G_4}\int\! d^4x\, \Sigma_i\wedge F^i ,
\label{a2}\end{aligned}$$ where $F^i = dA^i + \epsilon^{ijk}A_j\wedge A_k$ is the curvature of the connection and $\Sigma^i = ie^0\wedge e^i + \frac{1}{2}\epsilon^{ijk}e_j\wedge e_k$ is the self-dual projection of $e^I\wedge e^J$. The real part of (\[a2\]) is equal to the standard Einstein-Hilbert action, while the imaginary part is essentially irrelevant: it is extremal whenever the real part is, so it does not change the equations of motion, and it vanishes on shell. In loop quantum gravity, the factor of $i$ in (\[a1\]) is often replaced by an arbitrary parameter $\gamma$, the Barbero-Immirzi parameter. The quantization becomes much simpler when $\gamma$ is chosen to be real, but as noted above, hints are now appearing that the self-dual choice $\gamma=\pm i$ simplifies and clarifies the description of black hole entropy.
Now suppose that a black hole is present, with a horizon $\Delta$ of area $A_\Delta$. For the surface $\Delta$ to be an isolated horizon [@ISO], it must obey a geometric restriction, which translates to the condition [@ABCK; @Krasnova] $$\begin{aligned}
F^i = -\frac{2\pi}{A_\Delta}\Sigma^i \quad\hbox{on $\Delta$.}
\label{a5}\end{aligned}$$ Although the horizon is not a physical boundary, the imposition of (\[a5\]) forces us to add a “boundary” term to the action. As first noted by Smolin in a slightly different context [@Smolin], the required term is a Chern-Simons action. The specific form depends on the Barbero-Immirzi parameter; for our choice $\gamma = i$, it is a chiral $\hbox{SL}(2,\mathbb{C})$ Chern-Simons action $$\begin{aligned}
I_\Delta = \frac{k}{4\pi} \int_\Delta \mathrm{Tr}
\left\{A\wedge dA + \frac{2}{3} A\wedge A\wedge A \right\} ,
\label{a6}\end{aligned}$$ where $A = A^iT_i$ is the $\hbox{sl}(2,\mathbb{C})$-valued connection with generators normalized so $\mathrm{Tr}(T_iT_j) = \frac{1}{2}\eta_{ij}$, and the coupling constant $k$ is expressed in terms of (3+1)-dimensional gravitational quantities as $$\begin{aligned}
k_{4D} = \frac{iA_\Delta}{8\pi G_4} .
\label{a7}\end{aligned}$$ Moreover, the symplectic form—that is, the set of Poisson brackets—also acquires a boundary term for the connection at the horizon, which is identical to the symplectic form of Chern-Simons theory (see, e.g., [@Pranzetti]). Thus components of the connection, which commute in the bulk, become canonically conjugate at $\Delta$, and by the usual rules of quantization we expect a Hilbert space $\mathcal{H}_{\mathrm\scriptstyle bulk}\otimes \mathcal{H}_\Delta$, with the bulk and horizon states related by the operator version of the boundary conditions (\[a5\]) [@ABK].
So far, I have not used loop quantum gravity. I now exploit one general feature of that quantization. Classically, the boundary conditions (\[a5\]) imply that the boundary $\hbox{SL}(2,\mathbb{C})$ connection is not flat, and is thus not an extremum of the Chern-Simons action. In loop quantum gravity, though, quantum states are described by spin networks, and the area element on the right-hand side of (\[a5\]) is distributional, differing from zero only at the “punctures” where spin network edges intersect the horizon. The boundary conditions *are* then equivalent to the equations of motion for a Chern-Simons theory, but now on a sphere with punctures (or, technically, a manifold $\mathbb{R}\times S^2$ with Wilson lines) [@WittenCS]. Hence the boundary Hilbert space $\mathcal{H}_\Delta$ is that of a Chern-Simons theory on a sphere with punctures. In standard loop quantum gravity, one can say much more—holonomies around punctures give calculable elements of area—but we shall not need any of those details; it is enough that the boundary theory acts as an independent Chern-Simons theory coupled to the bulk through a set of punctures.
The action (\[a6\]) also appears in a very different context, though: it is the first-order action for (2+1)-dimensional gravity with a positive cosmological constant $\Lambda=1/\ell^2$ [@Wittenx]. The connection is now $$\begin{aligned}
{\tilde A}^a = \frac{1}{2}\epsilon^{abc}{\tilde\omega}_{bc}
+ \frac{i}{\ell}{\tilde e}^a ,
\label{a8}\end{aligned}$$ where ${\tilde e}^a$ and ${\tilde\omega}^{bc}$ are the three-dimensional triad and spin connection, and the coupling constant $k$ is $$\begin{aligned}
k_{3D} = \frac{i\ell}{2G_3} ,
\label{a9}\end{aligned}$$ now expressed in terms of (2+1)-dimensional quantities. Much as in the four-dimensional case, the real part of (\[a6\]) gives the usual
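One immediate bookkeeping relation, spelled out here as a short worked step (it is implicit in the correspondence being described): identifying the two formally identical Chern-Simons couplings (\[a7\]) and (\[a9\]) ties the three-dimensional scales to the horizon data, $$k_{4D}=k_{3D}\quad\Longrightarrow\quad \frac{iA_\Delta}{8\pi G_4}=\frac{i\ell}{2G_3}\quad\Longrightarrow\quad \frac{\ell}{G_3}=\frac{A_\Delta}{4\pi G_4}\,.$$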
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'We describe the universe as a local, inhomogeneous spherical bubble embedded in a flat matter dominated FLRW universe. Generalized exact Friedmann equations describe the expansion of the universe and an early universe inflationary de Sitter solution is obtained. A non-perturbative expression for the deceleration parameter $q$ is derived that can possibly describe the acceleration of the universe without dark energy, due to the effects associated with very long wavelength super-horizon inflationary perturbations. The suggestion by Kolbe et al. [@Kolbe] that long wavelength super-horizon inflationary modes can affect a local observable through inhomogeneities is considered in the light of our exact inhomogeneous model.'
---
[ **Large Scale Cosmological Inhomogeneities, Inflation And Acceleration Without Dark Energy** ]{}

[J. W. Moffat]{}

[*The Perimeter Institute for Theoretical Physics, Waterloo, Ontario, N2J 2W9, Canada*]{}

and

[*Department of Physics, University of Waterloo, Waterloo, Ontario N2Y 2L5, Canada*]{}

e-mail: jmoffat@perimeterinstitute.ca
Introduction
============
In a recent article, we investigated a cosmology in which a spherically symmetric perturbation enhancement is embedded in an asymptotic FLRW universe [@Moffat]. The perturbation enhancement is described by an exact inhomogeneous solution of Einstein’s field equations. We found that the large-scale inhomogeneities can lead to a reinterpretation of the luminosity distance $d_L$ of a cosmological source in terms of its red shift $z$, owing to the observer dependence of these quantities. The time evolution and the expansion rate of the inhomogeneous universe can lead to intrinsic effects such as cosmic variance at large angles and long-wavelength perturbations not described by a FLRW homogeneous and isotropic universe. Therefore, the interpretation of the data using a FLRW model that the accelerating expansion of the universe is caused by dark energy may be misleading. This is important, for it is difficult to explain theoretically the postulated dark energy that causes the acceleration of the universe. The model also leads to an axis pointing towards the center of the spherically symmetric large scale perturbation enhancement with dipole, quadrupole and octopole moments aligned with the axis. It was shown that the luminosity distances and red shifts observed by different observers located at spatially different points of causally disconnected parts of the universe can have varying values. A spatial average of all these observations leads to an intrinsic cosmic variance in e.g. the deceleration parameter $q$. The distribution of CMB temperature fluctuations can be unevenly distributed in the northern and southern hemispheres.
The popular explanation for the observed large-scale homogeneity of the universe is that the universe underwent an initial inflationary period with more than 60 e-folds [@Guth]. The inflationary cosmic expansion can stretch an initially small, smooth spatial region to a size larger than the horizon size today and explain the present day large-scale homogeneity. The question arises as to whether the initial [*local*]{} patch can be sufficiently homogenized to allow inflation to begin [@Trodden]. In the following, we shall consider the universe as an expanding bubble with an inhomogeneous metric and generalized Friedmann equations, including inhomogeneous density and pressure and a cosmological constant. Our main assumptions are spherical symmetry and an inhomogeneous barytropic fluid that satisfies an equation of state. For the case of a spatially flat inhomogeneous early universe, we obtain a de Sitter inflationary solution.
The acceleration of the expansion of the universe deduced from Type Ia supernovae observations and the CMB WMAP data [@Perlmutter; @Riess; @Spergel] has been interpreted as due to the cosmological constant (vacuum energy), modifications of Einstein’s gravitational field equations at large distances [@Turner], and quintessence fields [@Peebles]. The quintessence explanations postulate a new form of matter with negative pressure called dark energy. Recently, it has been suggested that the acceleration is caused by very long wavelength, super-horizon perturbations generated by a period of inflation in the early universe [@Kolbe; @Barausse]. The backreaction of perturbations on an FLRW background universe has been the subject of investigation by several authors [@Brandenberger]. The predictions based on perturbation theory are limited by the condition $\Phi\ll 1$, where $\Phi$ is the gravitational potential.
The inflationary perturbation modes with present wavelengths $\lambda\leq 10$ Mpc have entered the non-linear regime and have generated galaxies and clusters of galaxies, while longer wavelength modes at super-horizon scales $\geq c/H$ are entering the linear regime today. The effects of the sub-horizon modes are small due to the fact that $\delta\rho/\rho\sim 10^{-5}$ at the surface of last scattering. Therefore, these sub-horizon modes produce negligible corrections at second order $\sim 10^{-8}$. However, the super-horizon modes could potentially create a correction to the deceleration parameter $q$, large enough to remove the need for dark energy. It has been argued recently [@Chung; @Flanagan; @Seljak; @Wiltshire] that second order perturbation effects of the form $\Phi\nabla^2\Phi$ are described by a renormalization of the local spatial curvature and cannot (for a positive energy density) produce a negative deceleration parameter.
One problem with the perturbation calculations is that they ignore all higher gradient terms $\nabla^n\Phi$, and any [*non-perturbative effects*]{} that can have a significant influence on the inhomogeneity contributions due to very long wave length modes at super-horizon. These effects will occur for inflationary models in which the number of e-folds of inflation is much larger than the 60 e-folds required to create physically satisfactory fluctuations. In the following, we shall use the exact inhomogeneous model of ref. [@Moffat] to derive a formula for the deceleration parameter $q$ that is non-perturbative and whose variance can lead to a negative value for $q$ without dark energy and a cosmological constant $\Lambda$.
Inhomogeneous Friedmann Equations
=================================
Our action takes the form $$S=S_G+S_M,$$ where $$S_G=\frac{1}{16\pi G}\int d^4x\sqrt{-g}(R-2\Lambda).$$ The matter action is given by $$S_M=\int
d^4x\sqrt{-g}[\frac{1}{2}g^{\mu\nu}\partial_\mu\phi\partial_\nu\phi-V(\phi)],$$ where $\phi$ is a scalar matter field and $V(\phi)$ is a potential.
For the sake of notational clarity, we write the FLRW line element $$ds^2=dt^2-a^2(t)\biggl(\frac{dr^2}{1-kr^2}+r^2d\Omega^2\biggr),$$ where $d\Omega^2=d\theta^2+\sin^2\theta\, d\phi^2$. The general, spherically symmetric inhomogeneous line element is given by [@Lemaitre; @Tolman; @Bondi; @Bonnor; @Moffat2; @Moffat3; @Krasinski; @Moffat]: $$\label{inhomometric} ds^2=dt^2-X^2(r,t)dr^2-R^2(r,t)d\Omega^2.$$ The energy-momentum tensor ${T^\mu}_\nu$ takes the barytropic form $$\label{energymomentum} {T^\mu}_\nu=(\rho+p)u^\mu u_\nu
-p{\delta^\mu}_\nu,$$ where $u^\mu=dx^\mu/ds$ and, in general, the density $\rho=\rho(r,t)$ and the pressure $p=p(r,t)$ depend on both $r$ and $t$. We have for comoving coordinates $u^0=1, u^i=0,\,
(i=1,2,3)$ and $g^{\mu\nu}u_\mu u_\nu=1$.
The Einstein gravitational equations are $$\label{Einstein} G_{\mu\nu}\equiv
R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}{\cal R}+\Lambda g_{\mu\nu}=-8\pi
GT_{\mu\nu},$$ where ${\cal R}=g^{\mu\nu}R_{\mu\nu}$ and $\Lambda$ is the cosmological constant. Solving the $G_{01}=0$ equation for the metric (\[inhomometric\]), we find that $$X(r,t)=\frac{R'(r,t)}{f(r)},$$ where $R'=\partial R/\partial r$ and $f(r)$ is an arbitrary function of $r$.
We obtain the two generalized Friedmann equations [@Moffat]: $$\label{inhomoFriedmann} \frac{{\dot R}^2}{R^2}+2\frac{{\dot
R}'}{R'}\frac{{\dot R}}{R}+\frac{1}{R^2}(1-f^2)
-2\frac{ff'}{R'R}=8\pi G\rho+\Lambda,$$ $$\label{inhomoFriedmann2} \frac{\ddot
R}{R}+\frac{1}{3}\frac{\dot{R}^2}{R^
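As a sanity check (our addition, using only the first of the two generalized Friedmann equations above): in the homogeneous limit $R(r,t)=a(t)r$ and $f(r)=\sqrt{1-kr^2}$, one has $\dot R/R=\dot R'/R'=\dot a/a$, $(1-f^2)/R^2=k/a^2$ and $-2ff'/(R'R)=2k/a^2$, so (\[inhomoFriedmann\]) reduces to $$3\left(\frac{\dot a^2}{a^2}+\frac{k}{a^2}\right)=8\pi G\rho+\Lambda\,,$$ which is the standard FLRW Friedmann equation.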
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: |
We combine existing multiwavelength data (including an HST/GHRS UV spectrum and a ground based optical spectrum) with unpublished HST/WFPC2 images, near-IR photometry and K band spectroscopy. We use these data to constrain the young, the intermediate age and the old stellar populations in the central regions of the starburst galaxy NGC7714.
In a previous paper (González Delgado et al. 1999), the stellar features in the HST/GHRS ultraviolet (UV) spectrum and the optical emission lines were used to identify a $\sim$5Myr old, only slightly reddened stellar population as the main source of UV light in the central $\sim$330pc. The optical data indicated the existence of an older population. The nature of the latter is investigated here. Stellar absorption features in the optical and the near-IR are used to partly break the strong degeneracy between the effects of ageing and those of the inhomogeneous dust distribution on the UV–optical–IR colors. Consistency with far-IR, X-ray and radio data is also addressed. The successful models have essential features in common. We find that the young burst responsible for the UV light represents only a small part of an extended episode of enhanced star formation, initiated a few 10$^8$yrs ago. The star formation rate is likely to have varied on this timescale, averaging about 1[M$_{\odot}$]{}yr$^{-1}$. The mass of young and intermediate age stars thus formed equals at least 10% of the mass locked in pre-existing stars of the underlying spiral galaxy nucleus, and fractions around 25% are favored. The spectrophotometric star formation timescale is long compared to the $\sim 110$Myr elapsed since closest contact with the neighboring NGC7715, according to the dynamical models of Smith & Wallin (1992). The initial trigger of the starburst thus remains elusive.
NGC7714 owes its brightness in the UV to a few low extinction lines of sight towards young stars. Our results based on the integrated spectrophotometry of the central $\sim\,330$pc are supported by high resolution images of this area. The different extinction values obtained when different spectral indicators are used result naturally from the coexistence of populations with various ages and obscurations. The near-IR continuum image looks smoothest, as a consequence of lower sensitivity to extinction and of a larger contribution of old stars.
We compare the nuclear properties of NGC7714 with results from studies in larger apertures. We emphasize that the global properties of starburst galaxies are the result of the averaging over many lines of sight with very diverse properties in terms of obscuration and stellar ages. The overall picture is strongly reminiscent of the other nearby “prototypical” starburst, M82.
author:
- Ariane Lançon
- 'Jeffrey D. Goldader'
- Claus Leitherer
- 'Rosa M. González Delgado'
title: |
Multiwavelength Study of the Starburst Galaxy NGC 7714.\
II. The Balance between Young, Intermediate Age and Old Stars
---
Introduction {#Intro.sec}
============
A burst of star formation in a galaxy affects the galaxy’s energy output across the entire electromagnetic spectrum. Supernovae emit X-rays; the continua of hot massive stars are strong in the ultraviolet (UV); gaseous recombination lines dominate the optical spectra; cool stars are strong emitters in the near-infrared (near-IR); dust heated by the absorption of energetic photons can produce strong far-IR emission; and synchrotron radiation from electrons accelerated by supernova remnants is important at radio wavelengths. Yet, the physical manifestations of a starburst depend on its age. Old starbursts ($\ga$ tens of Myr), where the majority of the massive stars have evolved off the main sequence or already died, will have relatively little UV emission, though red supergiants (RSG), red giants, or asymptotic giant branch (AGB) stars could cause them to be quite bright in the near-IR. On the other hand, a very young starburst ($\la$ few Myr) will have strong UV emission, yet relatively weak IR emission, since IR luminous stars have not yet formed. For a recent review see, e.g., @lei00.
The effects of starbursts have been studied in great detail for large samples at individual wavelengths. Multiwavelength studies of starbursts at galaxy-scale resolution were done by, e.g., @cal97, @sch97, and @mas99. But few studies have attempted a panchromatic approach that combines high spatial resolution with both photometric and spectroscopic information, focussing on one object, thereby studying star formation in an individual galaxy at the greatest possible detail.
One motivation for programs aiming at very high spatial resolution is disentangling the complex effects of reddening by dust (with various extinction geometries) from “secular” age-induced reddening as stellar populations age. For all dusty starbursts, the reddening correction is an essential step in the determination of the total amount of star formation. The conversion of reddening measurements into attenuation factors is non-trivial, even for “simple” stellar populations (Witt & Gordon 2000). For instance, it has often been suggested that emission lines are affected by more extinction than stellar continuum emission (e.g. Calzetti et al. 1994). Only the detailed analysis of spectrophotometric properties in the light of high resolution imaging data will allow us to understand what is really happening and to gain confidence in results for more distant objects, for which only integrated spectrophotometry is available.
We have chosen to study the prototypical starburst galaxy NGC 7714 [@wee81]. This spiral galaxy at a distance of 37.3 Mpc (for $H_{\rm0} = 75$ [km s$^{-1}$]{} Mpc$^{-1}$ and $cz = 2798$ [km s$^{-1}$]{}) is interacting with its smaller neighbor NGC 7715 (see references in González Delgado et al. 1999, hereafter Paper I). The inclination of NGC 7714 is $\sim$45$^\circ$, allowing a fairly clear view of the inner few hundred pc ($1\arcsec$ is equivalent to 181 pc at 37.3 Mpc), where the strongest star formation is occurring. Though luminous in the IR ($L_{\rm{IR}} = 3 \times 10^{10}$ [L$_{\odot}$]{}; see Paper I), NGC 7714 is a strong UV source as well. Together, these facts are evidence that the mean dust obscuration is not too severe. This gave us hope that we could use data from the spacecraft UV to model directly the UV continuum of the hot, young stars, providing powerful constraints on the starburst age. This was the major topic of Paper I.
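The angular scale quoted above is a one-line computation (input values are those from the text; the snippet is just the small-angle arithmetic):

```python
import numpy as np

H0, cz = 75.0, 2798.0                  # km/s/Mpc and km/s, as quoted above
d_Mpc = cz / H0                        # ~37.3 Mpc
pc_per_arcsec = d_Mpc * 1e6 * np.radians(1.0 / 3600.0)
print("%.0f pc per arcsec" % pc_per_arcsec)   # ~181 pc
```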
The [*global*]{} properties of NGC 7714 were studied previously by @cal97, who analyzed large-aperture ($10\arcsec \times 20\arcsec$, 1.9 kpc $\times$ 3.8 kpc) multiwavelength photometry and spectroscopy similar to the data in our study. However, we concentrate on the inner few hundred pc of the nucleus, where the most intense star formation is occurring.
As part of Hubble Space Telescope (HST) Guest Observer program 6672, we obtained a UV spectrum (1180 – 1680 Å in the rest frame) of the nucleus of NGC 7714 using the Goddard High Resolution Spectrograph (GHRS). Our analysis of the UV spectrum and a comparison with available X-ray, optical, and radio data was reported in Paper I. We also included in that paper an F606W image obtained with the Wide-Field Planetary Camera 2 (WFPC2) on the HST, taken from the HST archive.
In this paper, we extend our analysis into the IR. This wavelength range is particularly sensitive to older, evolved stars, and to stars of all ages with heavy dust obscuration. We have obtained near-IR JHK$n$ images, and a K-band spectrum, which we present and analyze here. We also present a new near-UV image of NGC 7714 taken with HST. By combining the UV data from HST with optical and near-IR spectroscopy from the ground, we have accumulated high-quality spectra, spanning the range 1200 Å to 2.3 $\mu$m, for very nearly the same spatial regions of the galaxy. With photometric points from the X-ray to the radio, our spatial coverage spans several decades in frequency.
The new observations are presented in Sect.\[obs.sec\]. In Sect.\[data.anal.sec\], the data are analysed with a focus on the morphological and structural information they contain for the central regions of NGC7714. The coexistence of regions with very different properties, in particular in terms of extinction, already becomes evident in that section. The integrated spectrophotometric properties of the nucleus are analyzed in Sect.\[Spectrum.sec\]. We successively consider individual wavelength ranges, the full broad band energy distribution, and finally the full spectrum, and show that a variety of models remains consistent with even this amount of combined data. The predictions common to all successful models are highlighted and analyzed in terms of the nuclear morphology of the galaxy. Implications of our study for NGC7714 itself and for studies of other starbursts are discussed in Sect.\[Discussion.sec\]. The conclusions are given in Sect.\[Concl.sec\].
Observations and data reduction {#obs.sec}
===============================
Spectroscopy at 2 $\mu$m {#Kspec.obs.sec}
------------------------
NGC 77
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'We present a novel [*ab initio*]{} approach for computing intramolecular charge and energy transfer rates based upon a projection operator scheme that parses out specific internal nuclear motions that accompany the electronic transition. Our approach concentrates the coupling between the electronic and nuclear degrees of freedom into a small number of reduced harmonic modes that can be written as linear combinations of the vibrational normal modes of the molecular system about a given electronic minima. Using a time-convolutionless master-equation approach, parameterized by accurate quantum-chemical methods, we benchmark the approach against experimental results and predictions from Marcus theory for triplet energy transfer for a series of donor-bridge-acceptor systems. We find that using only a single reduced mode–termed the “primary” mode, one obtains an accurate evaluation of the golden-rule rate constant and insight into the nuclear motions responsible for coupling the initial and final electronic states. We demonstrate the utility of the approach by computing the inelastic electronic transition rates in a model donor-bridge-acceptor complex that has been experimentally shown that its exciton transfer pathway can be radically modified by mode-specific infrared excitation of its vibrational mode.'
author:
- Xunmo Yang
- 'Eric R. Bittner'
title: 'Inelastic Charge Transfer Dynamics in Donor-Bridge-Acceptor Systems Using Optimal Modes'
---
Introduction
============
Energy and electronic transport plays a central role in a wide range of chemical and biological systems. It is the fundamental mechanism for transporting the energy of an absorbed photon to a reaction center in light harvesting systems and for initiating a wide range of photo-induced chemical processes, including vision, DNA mutation, and pigmentation. The seminal model for calculating electron transfer rates was developed by Marcus in the 1950s [@marcus1956theory; @marcus1965theory; @marcus1993electron]: $$k_{Marcus}=\frac{2\pi}{\hbar}|V_{ab}|^{2}\frac{1}{\sqrt{4\pi k_{B}T\lambda}}e^{-(\lambda+ \Delta\epsilon)^{2}/4\lambda k_{B}T},\label{eq:marcus}$$ where $\lambda$ is the energy required to reorganize the environment following the transfer of an electron from donor to acceptor, and $\Delta\epsilon$ is the driving force for the reaction, as illustrated in Fig. \[marcus\]. If we assume that the nuclear motions about the equilibrium configurations of the donor and acceptor species are harmonic, the chemical reactions resulting from energy or charge transfer events can be understood in terms of intersecting diabatic potentials as sketched. The upper and lower curves are the adiabatic potential energy surfaces describing the nuclear dynamics resulting from an energy or charge transfer event, taking the geometry of the donor state as the origin.
![Sketch of Marcus parabolas for a model energy or charge transfer system. Labeled are the key parameters used to compute the Marcus rate constant (Eq. \[eq:marcus\]). Energies are given in eV and the collective nuclear displacement is dimensionless. []{data-label="marcus"}](Figure1.pdf){width="0.5\columnwidth"}
As the transfer occurs by crossing an energy barrier, the transfer rate can be expected to be in the Arrhenius form $$\begin{aligned}
k\propto e^{-E_{A}/k_{B}T},\end{aligned}$$ with $E_{A}$ as the activation energy. Using $E_{A}={(\lambda+\Delta \epsilon)^{2}}/{4\lambda}$ we can relate the activation energy to both the reorganization energy and driving force, $-\Delta \epsilon$. One of the most profound predictions of the theory is that as the driving force increases, the transfer rate reaches a maximum and further increases in the driving force lead to lower reaction rates, termed the inverted regime. The existence of the inverted region was demonstrated unequivocally by Miller [*et al.*]{} [@miller1984intramolecular] in an elegant series of experiments that systematically tuned the driving force, reorganization energy, and diabatic coupling by careful chemical modification of the donor and acceptor.
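Equation (\[eq:marcus\]) makes the inverted region easy to see numerically; the parameter values in the sketch below (coupling, reorganization energy, temperature) are illustrative assumptions, not taken from any of the systems discussed here.

```python
import numpy as np

# Illustrative parameters (assumed):
hbar = 6.582e-16   # eV s
kB_T = 0.025       # eV (room temperature)
lam  = 0.75        # eV, reorganization energy
V_ab = 0.01        # eV, diabatic coupling

def k_marcus(d_eps):
    """Rate from Eq. (1); d_eps is the (signed) energy gap."""
    pref = (2.0 * np.pi / hbar) * V_ab**2 / np.sqrt(4.0 * np.pi * kB_T * lam)
    return pref * np.exp(-(lam + d_eps)**2 / (4.0 * lam * kB_T))

# Sweeping the driving force shows the turnover into the inverted regime:
for d_eps in [-0.25, -0.50, -0.75, -1.00, -1.25]:
    print("d_eps = %+.2f eV  ->  k = %.3e 1/s" % (d_eps, k_marcus(d_eps)))
# The rate peaks at d_eps = -lam and falls again for larger driving force.
```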
A number of years ago, our group developed a time-convolutionless master equation approach for computing state-to-state rates in which the coupling between states depends upon the nuclear coordinates[@pereverzev2006time]. This approach incorporates a fully quantum-mechanical treatment of both the nuclear and electronic degrees of freedom and recovers the well-known Marcus expression in the semiclassical limit. The model is parameterized by the vibrational normal mode frequencies, and the electronic energies and energy derivatives at a reference configuration. The approach has been used by our group to compute state-to-state transition rates in semi-empirical models for organic semiconducting light-emitting diode and photovoltaics [@tamura2008phonon; @tamura2007exciton; @bittner2014noise; @singh2009fluorescence].
We recently made a significant breakthrough in using this approach by tying it to a fully [*ab initio*]{} quantum chemical approach for determining the diabatic states and electron/phonon coupling terms, allowing unprecedented accuracy and utility for computing state-to-state electronic transition rates. Our methodology consists of two distinct components. The first is the use of a diabatization scheme for determining donor and acceptor states in a molecular unit. The other is a projection scheme which enables us to analyze the contribution of vibrations in reactions. Similar decomposition schemes have been presented by Burghardt [@cederbaum2005short; @gindensperger2006shortI; @gindensperger2006shortII] and the approach used here builds upon the method given in Ref. . We recently benchmarked this approach against both the experimental rates and recent theoretical rates presented by Subotnik [*et al.*]{} [@subotnik2008constructing; @subotnik2009initial; @subotnik2010predicting] and successfully applied the approach to compute state-to-state transition rates in a series of Pt bridged donor-acceptor systems recently studied by Weinstein’s group. We review here these latter results along with the details of our methods.
Theoretical Approach {#section:theory}
====================
Model Hamiltonian
-----------------
We consider a generic model for $n$ electronic states coupled linearly to a phonon bath. Taking the electronic ground state of the system as a reference and assuming that the electronic states are coupled linearly to a common set of modes, we arrive at a generic form for the Hamiltonian, here written for two coupled electronic states: $$\begin{aligned}
H=\left(\begin{array}{cc}
\epsilon_{1} & 0\\
0 & \epsilon_{2}
\end{array}\right)+\left(\begin{array}{cc}
{\mathbf g}_{11}&{\mathbf g}_{12} \\
{\mathbf g}_{21} &{\mathbf g}_{22}
\end{array}\right)\cdot{\mathbf q} +\frac{{\mathbf p}^{2}}{2}+\frac{1}{2}\mathbf{q}^{T}\cdot\mathbf\Omega\cdot\mathbf{q}.
\nonumber \\
\label{ham1}\end{aligned}$$ Here, the first term contains the electronic energies, $\epsilon_{1}$ and $\epsilon_{2}$ computed at a reference geometry–typically that of the donor or acceptor state. The second term represents the linearized coupling between the electronic and nuclear degrees of freedom given in terms of the mass-weighted normal coordinates $\mathbf q$. The diagonal terms give the adiabatic displacement forces between the reference geometry and the two states. If we choose one of the states as the reference state, then either $\mathbf g_{11}$ or $\mathbf g_{22}$ will vanish. The remaining two terms correspond to the harmonic motions of the nuclear normal modes, given here in mass-weighted normal coordinates. In the normal mode basis, the Hessian matrix, $\mathbf \Omega$, is diagonal with elements corresponding to the normal mode frequencies, $\omega_{j}^{2}$.
We now separate Eq. \[ham1\] into diagonal and off-diagonal terms $$\begin{aligned}
\hat H = \hat H_{o} + \hat V\end{aligned}$$ and perform a polaron transform using the unitary transformation [@grover1970exciton; @rice1994excitons; @pereverzev2006time]. $$\begin{aligned}
U&=&e^{-\sum_{ni}\!\!\frac{g_{nni}}{\hbar\omega_i}|n\rangle \langle
n|(a^{\dagger}_i-a_i)}
\nonumber \\
&=&
\sum_{n}|n\rangle \langle n|e^{-\sum_{i}\!\!\frac{g_{nni}}{\hbar\omega_i}(a^{\dagger}_i-a_i)}
\label{unitary}\end{aligned}$$ under which the transformed Hamiltonian is written in terms of the diagonal elements $$\begin{aligned}
\tilde H_0=U^{-1}H_0U
=\sum_n\tilde\epsilon_n |n\rangle \langle
n|+\sum_i\hbar\omega_ia^{\dagger}_ia_i,
\end{aligned}$$ with the renormalized electronic energies, $$\begin{aligned}
\tilde\epsilon_n=\epsilon_n-\sum_{i}\frac{g_{nni}^2}{\hbar\omega_i},\end{aligned}$$ and off-diagonal terms, $$\begin{aligned}
\hat V_{nm}=\sum_{i}g_{nmi}\left(a^{\dagger}_i+
a_i-\frac{2g_{nni}}{\hbar\omega_i}\right)
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'This paper studies the asymptotic behavior of eigenvalues of random abelian $G$-circulant matrices, that is, matrices whose structure is related to a finite abelian group $G$ in a way that naturally generalizes the relationship between circulant matrices and cyclic groups. It is shown that, under mild conditions, when the size of the group $G$ goes to infinity, the spectral measures of such random matrices approach a deterministic limit. Depending on some aspects of the structure of the groups, whether the matrices are constrained to be Hermitian, and a few details of the distributions of the matrix entries, the limit measure is either a (complex or real) Gaussian distribution or a mixture of two Gaussian distributions.'
address: 'Department of Mathematics, Case Western Reserve University, 10900 Euclid Ave., Cleveland, Ohio 44106, U.S.A.'
author:
- 'Mark W. Meckes'
bibliography:
- 'G-circ-abelian.bib'
title: 'The spectra of random abelian $G$-circulant matrices'
---
Introduction
============
Given a finite group $G$ and a function $f : G \to {\mathbb{C}}$, the matrix $M
= \bigl[f(ab^{-1})\bigr]_{a,b \in G}$ is called a $G$-circulant matrix by Diaconis [@Diaconis-book; @Diaconis-matrices]. This generalizes the classical notion of circulant matrices, which arise as the special case in which $G$ is a finite cyclic group. The action of such a matrix $M$ on the vector space $\{ g : G \to {\mathbb{C}}\}$ is as a convolution operator: for $g : G \to {\mathbb{C}}$ and $a \in G$, $$\label{E:convolution}
(Mg)(a) = \sum_{b \in G} f(ab^{-1}) g(b) =: (f*g)(a).$$
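For intuition, the identification of a $G$-circulant matrix with a convolution operator can be checked numerically in a few lines; the choice $G = \mathbb{Z}_2 \times \mathbb{Z}_4$ below is arbitrary, and the fact used (the eigenvalues of $M$ are the Fourier coefficients $\widehat f(\chi)$) is a standard consequence of the Fourier analysis reviewed in Section \[S:Fourier\].

```python
import numpy as np
from itertools import product

# G = Z_2 x Z_4 (an arbitrary illustrative abelian group, order 8).
shape = (2, 4)
elems = list(product(*[range(n) for n in shape]))
rng = np.random.default_rng(1)
f = rng.standard_normal(shape)

def sub(a, b):
    """a * b^{-1} in additive notation, componentwise mod the cyclic orders."""
    return tuple((ai - bi) % n for ai, bi, n in zip(a, b, shape))

# The G-circulant matrix M = [f(a b^{-1})]_{a,b in G}:
M = np.array([[f[sub(a, b)] for b in elems] for a in elems])

# Its eigenvalues are the Fourier coefficients of f, here computed with the
# multidimensional DFT (the characters of a product of cyclic groups):
eig = np.sort_complex(np.linalg.eigvals(M))
dft = np.sort_complex(np.fft.fftn(f).ravel())
print(np.allclose(eig, dft))   # True, up to round-off
```

By the structure theorem every finite abelian group is a product of cyclic groups, so the same check covers the general abelian case.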
This paper considers the asymptotic behavior of the spectra of random $G$-circulant matrices, or equivalently random convolution operators on $G$, when $G$ is a large abelian group. (For the rest of this paper, $G$ will always stand for a *finite abelian* group.) Such random matrices will be generated by picking the values $f(a)$ independently, with or without imposing a constraint $f(a^{-1}) =
\overline{f(a)}$ which is equivalent to insisting that the matrix $M$ is Hermitian. This generalizes the study of random circulant matrices, whose theory has already been developed in [@BoMi; @BoSe; @BrSe; @Meckes; @BoHaSa] among many other papers, with applications discussed in [@JaSr; @YiMoYaZh]. The richer structure of arbitrary abelian groups relative to cyclic groups leads to the appearance of some interesting phenomena which do not occur for circulant matrices, or the more familiar setting of random matrices with independent entries.
The prototypical situation (exemplified in Corollaries \[T:C-Ginibre-limit\], \[T:GUE-limit\], and \[T:Z2-GOE-limit\], and Theorems \[T:circular-law-uncorrelated\] and \[T:semicircle-law-special\] below) is that when the size of $G$ grows the empirical spectral distribution of a (properly normalized) random $G$-circulant matrix $M$ approaches a Gaussian distribution. When $M$ is constrained to be Hermitian the limit will be a real Gaussian distribution; without such a constraint it will be a complex Gaussian distribution. These situations may be thought of as analogous to the semicircle law for Hermitian random matrices and circular law for non-Hermitian random matrices with independent entries, respectively. This behavior, which has already been observed for random circulant matrices in [@BoMi; @Meckes], occurs in particular if only a negligible fraction of the elements of $G$ are of order $2$, and also if every nonidentity element of $G$ is of order $2$. On the other hand, if neither of these is the case then more complicated limiting distributions occur which are mixtures of two Gaussian distributions (as in Theorems \[T:circular-law-correlated\] and \[T:semicircle-law-general\] below).
Another perspective on these results, which is crucial in the proofs, is that they describe the distribution of values of random Fourier series on $G$. The supremum of such a random Fourier series is already a thoroughly studied quantity [@Kahane; @MaPi]. In particular, results of Marcus and Pisier [@MaPi] include as special cases estimates of the spectral norms of random $G$-circulant matrices, as pointed out in Proposition \[T:norm\] below.
Section \[S:Fourier\] below briefly reviews the facts about Fourier analysis on finite abelian groups which are used here and points out their immediate consequences for $G$-circulant matrices; some notation and conventions used in the remainder of the paper are established there. Section \[S:Gaussian\] investigates the spectra of some random $G$-circulant matrices whose entries are Gaussian random variables. The invariance properties of Gaussian random variables allow an easy detailed study to be undertaken which illuminates the general situation, in particular the role of the number of elements of order $2$. Finally, Section \[S:general\] determines the asymptotic behavior of the spectrum for general entries with finite variances.
The cases of $G$-circulant matrices with heavy-tailed entries, and of random $G$-circulant matrices when $G$ is a nonabelian finite group, will be investigated in future work.
Acknowledgements {#acknowledgements .unnumbered}
================
The author thanks Persi Diaconis for encouragement and pointers to the literature, John Duncan for helpful discussions about character theory, and the referee for careful reading and useful comments. This research was partly supported by National Science Foundation grant DMS-0902203.
Some Fourier analysis and notation {#S:Fourier}
==================================
For a finite abelian group $G$, we denote by $\widehat{G}$ the family of group homomorphisms $\chi : G \to {\mathbb{T}}$, where ${\mathbb{T}}$ is the multiplicative group $\{z \in {\mathbb{C}}\mid {\left\vert z \right\vert} = 1\}$. The elements of $\widehat{G}$ are called characters of $G$; $\widehat{G}$ is a group under the operation of pointwise multiplication. The multiplicative inverse of a character $\chi$ is its pointwise complex conjugate $\overline{\chi}$. From the homomorphism property it follows that for $a \in G$ and $\chi \in \widehat{G}$, $\chi(a^{-1}) =
\overline{\chi}(a)$.
We denote by $\ell^2(G)$ the space of functions $f: G \to {\mathbb{C}}$ equipped with the inner product $${\left\langle f, g \right\rangle} = \sum_{a \in G} f(a) \overline{g(a)},$$ and $\ell^2(\widehat{G})$ is defined analogously. The Fourier transform of $f \in \ell^2(G)$ is the function $\widehat{f} \in
\ell^2(\widehat{G})$ given by $$\widehat{f}(\chi) = {\left\langle f, \overline{\chi} \right\rangle}
= \sum_{a \in G} f(a) \chi(a).$$ This includes as special cases both the classical discrete Fourier transform (when $G$ is cyclic) and the Walsh–Hadamard transform (when $G$ is a product of cyclic groups of order $2$). The following lemma summarizes the most important fundamental facts about the Fourier transform for our purposes.
\[T:FT-isometry\] Let $G$ be a finite abelian group with ${\left\vert G \right\vert}$ elements.
1. \[I:onb\] The functions $\bigl\{
\frac{1}{\sqrt{{\left\vert G \right\vert}}}\chi \mid \chi \in \widehat{G}\bigr\}$ form an orthonormal basis of $\ell^2(G)$.
2. \[I:isometry\] The map $f \mapsto
\frac{1}{\sqrt{{\left\vert G \right\vert}}}\widehat{f}$ is a linear isometry of $\ell^2(G)$ onto $\ell^2(\widehat{G})$.
3. \[I:convolution\] If $f, g \in \ell^2(G)$, then for each $\chi \in \widehat{G}$, $\widehat{f*g}(\chi) = \widehat{f}(\chi)\widehat{g}(\chi)$, where the convolution is given by $(f*g)(a) = \sum_{b \in G} f(b) g(b^{-1}a)$.
1. See Theorem 6 on [@Serre p. 19].
2. This follows easily from Proposition 7 on [@Serre p. 20] (which is a consequence of part (\[I:onb\])).
3. This follows directly from the definitions by a straightforward computation.
Observe that contained in Lemma \[T:FT-isometry\](\[I:onb\]) is the fact that ${\left\vert G \right\vert} = \bigl\vert\widehat{G}\bigr\vert$.
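The lemma can also be checked directly in a few lines of code; the following sketch (added for illustration, with the arbitrary choice $G = \mathbb{Z}_2 \times \mathbb{Z}_4$, written additively) verifies all three statements from the character table.

```python
# Numerical check of Lemma [T:FT-isometry] for G = Z_2 x Z_4.
import itertools
import numpy as np

n1, n2 = 2, 4
G = list(itertools.product(range(n1), range(n2)))   # elements of G
# Characters of a product of cyclic groups are indexed by G itself:
def chi(j, a):
    return np.exp(2j * np.pi * (j[0] * a[0] / n1 + j[1] * a[1] / n2))

T = np.array([[chi(j, a) for a in G] for j in G])   # character table

# (1) {chi / sqrt|G|} is an orthonormal basis of l^2(G):
assert np.allclose(T @ T.conj().T, len(G) * np.eye(len(G)))

# (2) f -> fhat / sqrt|G| is an isometry:
rng = np.random.default_rng(1)
f = rng.standard_normal(len(G)) + 1j * rng.standard_normal(len(G))
g = rng.standard_normal(len(G)) + 1j * rng.standard_normal(len(G))
fhat, ghat = T @ f, T @ g                           # fhat(chi) = sum_a f(a) chi(a)
assert np.isclose(np.vdot(fhat, fhat), len(G) * np.vdot(f, f))

# (3) the transform turns convolution (f*g)(a) = sum_b f(b) g(b^{-1}a)
#     into the pointwise product fhat * ghat:
idx = {a: i for i, a in enumerate(G)}
conv = np.zeros(len(G), dtype=complex)
for i, a in enumerate(G):
    for k, b in enumerate(G):
        conv[i] += f[k] * g[idx[((a[0] - b[0]) % n1, (a[1] - b[1]) % n2)]]
assert np.allclose(T @ conv, fhat * ghat)
```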
We will need two additional facts about characters of
---
abstract: 'The possibility of synthesizing heavier superheavy elements in massive nuclei reactions is strongly limited by the hindrance of complete fusion of the reacting nuclei, due to the onset of the quasifission process in the entrance channel, which competes with complete fusion, and by the strong increase of the fission yield along the de-excitation cascade of the compound nucleus in comparison with the evaporation residue formation. We present a wide and detailed procedure allowing us to describe the experimental results (evaporation residue nuclei and fissionlike products) in mass asymmetric and symmetric reactions. Very reliable estimates of, and perspectives for, the synthesis of superheavy elements in many massive nuclei reactions up to $Z=120$, and eventually also for $Z>120$, have been obtained.'
address: |
$^1$ Dipartimento di Fisica e di Scienze della Terra, Università di Messina, I-98166 Messina, Italy\
$^2$ Istituto Nazionale di Fisica Nucleare, Sezione di Catania, I-95123 Catania, Italy\
$^3$ Centro Siciliano di Fisica Nucleare e Struttura della Materia, I-95123 Catania, Italy\
$^4$Joint Institute for Nuclear Research, 141980, Dubna, Russia\
$^5$ Institute of Nuclear Physics, 100214, Tashkent, Uzbekistan
author:
- 'G Mandaglio$^{1,2,3}$, A K Nasirov$^{4,5}$, F Curciarello$^{1,2}$, V De Leo$^{1,2}$, M Romaniuk$^{1,2}$, G Fazio$^{1,2}$ and G Giardina$^{1,2}$'
title: 'What perspectives for the synthesis of heavier superheavy nuclei? Results and comparison with models'
---
Introduction and status {#intro}
=======================
Many laboratories are strongly engaged in investigating massive nuclei reactions, with the aim of analyzing and understanding the characteristics and variety of the reaction dynamics, and then of planning new experiments for the synthesis of other, heavier superheavy nuclei. In the last decade many superheavy elements with $Z > 110$ were successfully reached by cold and hot fusion reactions, but in some other cases of symmetric or almost symmetric massive nuclei reactions the investigations were unsuccessful. New experiments have been performed to synthesize superheavy elements with $Z=120$, and other massive nuclei reactions are believed to be able to reach superheavy elements with $Z>120$.
The possibility of synthesizing new elements with Z=120, 122, 124, 126 was explored in some hot-fusion reactions (for example the $^{54}$Cr+$^{248}$Cm, $^{54}$Cr+$^{249}$Cf, $^{58}$Fe+$^{249}$Cf, and $^{64}$Ni+$^{249}$Cf reactions) and in cold-fusion reactions (for example the $^{132}$Sn+$^{174}$Yb, $^{132}$Sn+$^{176}$Hf, $^{132}$Sn+$^{186}$W and $^{84}$Kr+$^{232}$Th reactions), which could lead to the formation of nuclei in the Z=120-126 range. Moreover, various studies were conducted by different authors [@Siwek07; @Swiatecki04; @Smolanczuk01; @Zagreb07] on mass symmetric and asymmetric reactions ($^{136}$Xe+$^{136}$Xe, $^{149}$La+$^{149}$La, $^{86}$Kr+$^{208}$Pb, $^{58}$Fe+$^{244}$Pu), estimating relevant or promising prospects for the synthesis of superheavy elements, but in some of the experiments that were conducted no events were found [@Gregorich03; @Ogan09; @Ogan2009]. Since some laboratories are planning to perform experiments in this field of nuclear reactions, the present study can be a useful body of knowledge before such difficult tasks are attempted. It is therefore necessary to investigate the conditions under which, and the limits within which, reactions can form compound nuclei (CN) and produce evaporation residues of superheavy elements. Three processes hinder evaporation residue formation in reactions with massive nuclei: quasifission, fusion-fission, and fast fission [@nasirov09; @fazio05; @fazio08]. The quasifission process competes with the fusion process during the evolution of the dinuclear system (DNS). It occurs when the dinuclear system prefers to break into fragments instead of being transformed into a fully equilibrated CN. The number of events going to quasifission increases drastically with the sum of the Coulomb interaction and rotational energy in the entrance channel. The next effect reducing the ER yield is the fission of the heated and rotating CN that is formed in competition with quasifission. The stability of a massive CN decreases because its fission barrier decreases with increasing excitation energy $E^*_{\rm CN}$ and angular momentum $\ell$. The stability of the transfermium nuclei is connected with the shell corrections in their binding energies, which are sensitive to $E^*_{\rm CN}$ and to the angular momentum. A further effect reducing the ER yield is the fast fission process, i.e. the inevitable decay of a fast rotating mononucleus into two fragments without reaching the equilibrium compact shape of a CN. Such a mononucleus is formed from a DNS that survives quasifission at large values of the orbital angular momentum, which decrease the fission barrier down to zero. Thus, the main channels decreasing the compound nucleus cross section are quasifission and fast fission. These channels produce binary fragments which can overlap with those of the fusion-fission channel, and the amount of mixed detected fragments depends on the mass asymmetry of the entrance channel, on the beam energy, and on the shell structure of the reaction fragments being formed. Therefore, the experimental method of extracting the fusion-fission contribution from the analysis of the mass and angular distributions of the binary fragments of full momentum transfer events is not unambiguous.
The failure of many experiments is connected not only with the difficulty of measuring evaporation residue cross sections lower than 0.5 pb, but also with the inadequate estimation of the complete fusion probability [@Siwek07; @Smolanczuk01; @Zagreb07] and, consequently, of the evaporation residue cross section. The reported difficulties are related not only to the theoretical estimation of the complete fusion and evaporation residue cross sections, but also to the ambiguous experimental identification of the fusion-fission fragments among the quasifission and fast fission fragments. We will also discuss the limits on reaching compound nuclei heavier than $Z=120$ due to the dominant repulsive Coulomb effects and strong centrifugal forces in very massive nuclei reactions.
In order to give realistic estimates of the cross sections of the reaction products in a mass symmetric or almost symmetric entrance channel, it is necessary to develop an adequate model capable of reliably describing the complex dynamics of the mechanisms during all stages of the reaction. In fact, in the last stage of a nuclear reaction, the formed CN may de-excite by fission (producing fusion-fission fragments) or by emission of light particles. The reaction products that survive fission are the evaporation residues (ER) [@epja222004; @fazio05]. The registration of ER is clear evidence of CN formation but, in reactions with massive nuclei, knowledge of the ERs alone is generally not enough to determine the complete fusion cross section and to understand the dynamics of the de-excitation cascade of the CN if the true fission fragments are not included in the analysis. On the other hand, the correct identification of an evaporation residue nucleus by the observation of its $\alpha$-decay chain is not assured if the target material contains other isotopes of the nucleus under consideration. For example, in the case of the $^{48}$Ca+$^{249}$Cf reaction, the identification of the $^{294}$118 nucleus as the evaporation residue of the $^{297}$118 compound nucleus after the emission of 3 neutrons (see the experiment reported in Ref. [@Ogan2006]) cannot guarantee that the collected events of the $^{294}$118 nucleus are due only to the mentioned process involving the formation of the $^{297}$118 CN, because the $^{250}$Cf nucleus, which is inevitably present in the target due to the finite resolution of the mass separation, also contributes, through the $^{48}$Ca+$^{250}$Cf reaction (leading to the $^{298}$118 CN), to the synthesis of the same $^{294}$118 evaporation residue nucleus after the emission of 4 neutrons from the CN. This effect changes with the beam energy and with the excitation energy $E^*_{\rm CN}$ of the CN. In addition, the use of assumptions in separating the fissionlike fragments according to the mechanism of their origin does not allow a reliably correct determination of the fusion-fission contribution when the fragment mass distributions of the different processes (quasifission, fast fission and fusion-fission) overlap. The need for a multiparameter and sensitive model is strongly connected with the requirement of reaching reliable results and with the possibility of giving reliable estimates of the perspectives for the synthesis of superheavy elements. If the estimates reported in Figs. \[fig3\] (a) and (b) of Ref. [@Zag12] for the evaporation residue cross sections after 2n, 3n, and 4n emission, which peak at about the same excitation energy $E^* = 40$ MeV of $^{298}$116, are reliable results, then the question immediately arises: what process and barriers can describe with appreciable probabilities the emission of 2 and 3 neutrons that take away about 43 and 48 MeV (or also
---
abstract: 'The fidelity decay in a microwave billiard is considered, where the coupling to an attached antenna is varied. The resulting quantity, coupling fidelity, is experimentally studied for three different terminators of the varied antenna: a hard wall reflection, an open wall reflection, and a 50$\,\Omega$ load, corresponding to a totally open channel. The model description in terms of an effective Hamiltonian with a complex coupling constant is given. Quantitative agreement is found with the theory obtained from a modified VWZ approach \[Verbaarschot et al, Phys. Rep. **129**, 367 (1985)\].'
author:
- 'B. Köber'
- 'U. Kuhl'
- 'H.-J. Stöckmann'
- 'T. Gorin'
- 'D. V. Savin'
- 'T. H. Seligman'
title: Microwave fidelity studies by varying antenna coupling
---
Introduction {#sec:Intro}
============
Fidelity is a standard benchmark in quantum information, and plays a relevant role in discussions on quantum chaos [@gor06c]. The corresponding fidelity amplitude can be interpreted as the overlap between two wave functions obtained from the propagation of the same initial state with two different time evolutions or, alternatively, as the overlap of the initial state with itself after being propagated forward in time with one evolution and backward in time with the other. In the latter case one often speaks of Loschmidt echo. Fidelity contains information on both eigenfunctions and spectra of the original and perturbed systems in a non-trivial way. One can show, however, that there exists a profound relation [@koh08] between fidelity decay and purely spectral universal parametric correlations [@sim93b; @tan95; @ale98; @mar03] in chaotic and disordered systems. The connection holds in quite general settings [@smo08]. Efforts to measure fidelity are therefore very important.
There was an early proposal (without the name fidelity) in quantum optics [@gar97]. Along these lines the perturbation of a kicked rotor was discussed in great detail [@hau05]. A realization of that idea is not available today, although an experiment of this type, but with a more complicated process, was conducted [@and03].
Experiments with microwave cavities or elastic bodies seem to provide good options to study the decay of fidelity [@sch05b], but a difficulty arises. Fidelity implies an integration over the entire space. In two-dimensional microwave billiards the antenna always represents a perturbation, and thus moving the antenna defeats the purpose of a fidelity measurement, as the wave-function taken at any point is that of a slightly different system. In contrast to wave function measurements, in fidelity experiments we are precisely interested in such differences, and thus wave functions measured with moveable antennas [@ste92; @ste95; @kuh07b] or a moveable perturbation body [@sri91; @bog06; @lau07] are not appropriate. In elastic experiments on solid blocks [@lob03b; @gor06b; @lob08] or three-dimensional (3D) microwave billiards the wave function inside the volume seems to be inaccessible anyway [@doer98b; @alt97a]. This leads to the development of the concept of scattering fidelity [@sch05b] which tests the sensitivity of $S$-matrix elements to perturbations. This is also of intrinsic interest since the scattering matrix may be considered as the basic building block at least in the case of quantum theory [@str00; @leh55].
In former studies the scattering fidelity has been investigated in chaotic microwave billiards by considering a perturbation of the billiard interior. It can be shown that in such a case the random character of wave functions causes the scattering fidelity to represent the usual fidelity, provided that appropriate averaging is taken [@sch05b; @hoeh08a]. Scars and parabolic manifolds will obviously change that correspondence, but their effect can be avoided in experiment. Specifically, two different types of interior perturbations were experimentally studied. In the first set of experiments a billiard wall was shifted, realizing the so-called global perturbation [@sch05b; @sch05d], meaning that there is a total rearrangement of both spectrum and eigenfunctions already for moderate perturbation strengths. Good agreement with prediction from random matrix theory (RMT), expecting Gaussian or exponential decay depending on perturbation strength, was found. In the second experiment a small scatterer was shifted inside the billiard, the wave function being influenced only locally [@hoeh08a]. Using the random plane wave conjecture, an algebraic decay was predicted and confirmed experimentally.
Actually, any measurement opens the system. Coupling to the continuum changes drastically the system properties by converting discrete energy levels into unstable resonance states. The latter reveal rich dynamics when the coupling strength to the scattering channels is varied [@sok92], see also [@per00] for relevant microwave studies. Since the time evolution operator is subunitary in this case, there appears the leakage of the norm inside the scattering system [@sav97]. This decay is fully controlled by the degree of system openness and may also be considered as a remote analog of fidelity decay for open systems. In the framework of the scattering fidelity coupling to the continuum can be taken into account naturally.
It seems therefore attractive to study the sensitivity of $S$-matrix elements to perturbations in the coupling between the scattering system and decay channels. This will be the central purpose of the present paper. Experimentally, we realize the system by a flat microwave billiard with two attached antennas and measure the reflection in one antenna while modifying the coupling in another, see Sec. \[sec:exper\] for details on the experimental setup. Section \[sec:theory\] presents a theoretical consideration based on RMT and the effective Hamiltonian approach. In Sec. \[sec:results\] we discuss in detail the experimental results and compare them with the theory. Our main findings are then summarized in the concluding Sec. \[sec:conclusions\].
Experiment {#sec:exper}
==========
The basic principles of billiard experiments with microwave cavities as a paradigm of quantum chaos research are described in detail in [@stoe99]. Therefore, we concentrate on the aspects of relevance to the present study. Reflection and transmission measurements have been performed in a flat resonator, with top and bottom plate parallel to each other. The cavity can be considered as two-dimensional for frequencies $\nu\, <\,\nu_{\rm max} = c/(2h)$, where $h=\rm 8\,mm$ is the height of the resonator.
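As a quick sanity check of this cutoff (an illustrative back-of-the-envelope computation, not part of the original text):

```python
# nu_max = c / (2h) for a resonator of height h = 8 mm
c = 299_792_458.0            # speed of light in m/s
h = 8e-3                     # resonator height in m
print(c / (2 * h) / 1e9)     # ~ 18.7 GHz, consistent with measuring up to 18 GHz
```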
![\[fig:01\] Geometry of the chaotic Sinai billiard, length $l=\rm 472\,mm$, width $w=\rm 200\,mm$ and a quarter-circle of radius $r =\rm 70\, mm$ where an antenna with different terminations may be introduced at position $c$. $a$ denotes the measuring antenna. The additional elements were inserted to reduce the influence of bouncing balls.](fig1){width=".9\columnwidth"}
The setup, as illustrated in Fig. \[fig:01\], is based on a quarter-Sinai-shaped billiard. Additional elements were inserted into the billiard to reduce the influence of bouncing-ball resonances. The classical dynamics for the chosen geometry of the billiard is dominantly chaotic. At position $a$ one antenna is fixed and connected to an Agilent 8720ES vector network analyzer (VNA), which was used for measurements in a frequency range from $\rm 2$ to $\rm 18\,GHz$ with a resolution of $\rm 0.1\,MHz$. We measured the reflection $S$-matrix element $S_{aa}$ first for the unperturbed system, which corresponds to the situation where no additional antenna is inserted at position $c$. Then we perturbed the system by inserting another antenna at position $c$, which was terminated consecutively in three different ways:
(a) connection to the VNA (total absorption),\
(b) standard open (open end reflection),\
(c) standard short (hard wall reflection),\
and again measured the corresponding reflection at antenna $a$ for each case. The connection of antenna $c$ to the VNA corresponds to a termination of antenna $c$ with a $50\,\Omega$ load. The terminators for the cases (b) and (c) have been taken from the standard calibration kit (Agilent 85052C Precision Calibration Kit) being part of our microwave equipment. For case (a) the reflection amplitude $S_{cc}$ was also measured. From this measurement the coupling strength of antenna $c$ can be obtained, see Eq. (\[eq:Tc\]) below. For all four cases we measured 18 different realizations by rotating an ellipse (see Fig. \[fig:01\]) to perform ensemble averages.
An alternative to the coupling of an antenna with variable end is an open wave guide whose coupling to the billiard can be varied by a variable slit. It turned out that, contrary to intuition, for this setup the main effect of the variation of the slit does not correspond to a change of the coupling to the outside, but to a distortion of the wave functions in the billiard, thus corresponding more to the case of a local scattering fidelity [@hoeh08a]. This system is discussed in Appendix \[app:Exp\].
Theory {#sec:theory}
======
Generalized VWZ approach to fidelity {#subsec:VWZ}
------------------------------------
The general case of $M$ scattering channels connected to $N$ levels of the closed cavity can be described in terms of the following effective non-Hermitian Hamiltonian $$\label{eq:Heff}
---
address: 'Laure Marêché, LPSM UMR 8001, Université Paris Diderot, Sorbonne Paris Cité, CNRS, 75013 Paris, France'
author:
- 'Laure <span style="font-variant:small-caps;">Marêché</span>'
title: Exponential convergence to equilibrium in supercritical kinetically constrained models at high temperature
---
**Abstract:** Kinetically constrained models (KCMs) were introduced by physicists to model the liquid-glass transition. They are interacting particle systems on $\mathds{Z}^d$ in which each element of $\mathds{Z}^d$ can be in state 0 or 1 and tries to update its state to 0 at rate $q$ and to 1 at rate $1-q$, provided that a constraint is satisfied. In this article, we prove the first non-perturbative result of convergence to equilibrium for KCMs with general constraints: for any KCM in the class termed “supercritical” in dimension 1 and 2, when the initial configuration has product $\mathrm{Bernoulli}(1-q')$ law with $q' \neq q$, the dynamics converges to equilibrium with exponential speed when $q$ is close enough to 1, which corresponds to the high temperature regime.
**2010 Mathematics Subject Classification:** 60K35.
**Key words:** Interacting particle systems; Glauber dynamics; kinetically constrained models; bootstrap percolation; convergence to equilibrium.
Introduction
============
Kinetically constrained models (KCMs) are interacting particle systems on $\mathds{Z}^d$, in which each element (or *site*) of $\mathds{Z}^d$ can be in state 0 or 1. Each site tries to update its state to 0 at rate $q$ and to 1 at rate $1-q$, with $q \in [0,1]$ fixed, but an update is accepted if and only if a *constraint* is satisfied. This constraint is defined via an *update family* $\mathcal{U}=\{X_1,\dots,X_m\}$, where $m \in \mathds{N}^*$ and the $X_i$, called *update rules*, are finite nonempty subsets of $\mathds{Z}^d \setminus \{0\}$: the constraint is satisfied at a site $x$ if and only if there exists $X \in \mathcal{U}$ such that all the sites in $x+X$ have state zero. Since the constraint at a site does not depend on the state of the site, it can be easily checked that the product $\mathrm{Bernoulli}(1-q)$ measure, $\nu_q$, satisfies the detailed balance with respect to the dynamics, hence is reversible and invariant. $\nu_q$ is the *equilibrium measure* of the dynamics.
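For concreteness, the sketch below simulates these dynamics on a finite ring (an illustrative addition, not taken from the article: the East-type update family $\mathcal{U}=\{\{1\}\}$, the ring size and the parameters are arbitrary choices; each site carries an independent rate-1 Poisson clock and, when its clock rings and the constraint holds, resamples its state to 0 with probability $q$ and to 1 with probability $1-q$, which reproduces the stated rates).

```python
# Minimal KCM simulation on a ring of L sites (East-type rule: a site may
# update only when its right neighbour is in state 0).
import random

L, q, T = 50, 0.3, 100.0
eta = [random.random() < (1 - q) for _ in range(L)]  # initial law Bernoulli(1-q)

t = 0.0
while t < T:
    t += random.expovariate(L)              # next ring among L rate-1 clocks
    x = random.randrange(L)                 # site whose clock rang
    if not eta[(x + 1) % L]:                # constraint: the update rule sits on zeroes
        eta[x] = random.random() < (1 - q)  # resample: 0 w.p. q, 1 w.p. 1-q
print(sum(eta), "sites in state 1 at time", T)
```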
KCMs were introduced in the physics literature by Fredrickson and Andersen [@Fredrickson_et_al1984] to model the liquid-glass transition, an important open problem in condensed matter physics (see [@Ritort_et_al; @Garrahan_et_al]). In addition to this physical interest, KCMs are also mathematically challenging, because the presence of the constraints make them very different from classical Glauber dynamics and prevents the use of most of the usual tools.
One of the most important features of KCMs is the existence of blocked configurations. These blocked configurations imply that the equilibrium measure $\nu_q$ is not the only invariant measure, which considerably complicates the study of the out-of-equilibrium behavior of KCMs; even the basic question of their convergence to $\nu_q$ remains open in most cases.
Because of the blocked configurations, one cannot expect such a convergence to equilibrium for all initial laws. Initial measures particularly relevant for physicists are the $\nu_{q'}$ with $q' \neq q$ (see [@Leonard_et_al2007]). Indeed, $q$ is a measure of the temperature of the system: the closer $q$ is to 0, the lower the temperature is. Therefore, starting the dynamics with a configuration of law $\nu_{q'}$ means starting with a temperature different from the equilibrium temperature. In this case, KCMs are expected to converge to equilibrium with exponential speed as soon as no site is blocked for the dynamics in a configuration of law $\nu_{q}$ or $\nu_{q'}$. However, there have been few results in this direction so far (see [@Cancrini_et_al2010; @Blondel_et_al2013; @stretched_exponential_East-like; @Mountford_FA1f; @Mareche2019Est]), and they have been restricted to particular update families or initial laws.
Furthermore, general update families have attracted a lot of attention in recent years. Indeed, there recently was a breakthrough in the study of a monotone deterministic counterpart of KCMs called bootstrap percolation. Bootstrap percolation is a discrete-time dynamics in which each site of $\mathds{Z}^d$ can be *infected* or not; infected sites are the bootstrap percolation equivalent of sites at zero. To define it, we fix an update family $\mathcal{U}$ and choose a set $A_0$ of initially infected sites; then for any $t \in \mathds{N}^*$, the set of sites that are infected at time $t$ is $$A_t = A_{t-1} \cup \{x \in \mathds{Z}^d \,|\, \exists X \in \mathcal{U}, x+X \subset A_{t-1}\},$$ which means that the sites that were infected at time $t-1$ remain infected at time $t$ and a site $x$ that was not infected at time $t-1$ becomes infected at time $t$ if and only if there exists $X \in \mathcal{U}$ such that all sites of $x + X$ are infected at time $t-1$. Until recently, bootstrap percolation had only been considered with particular update families, but the study of general update families was opened by Bollobás, Smith and Uzzell in [@Bollobas_et_al2015]. Along with Balister, Bollobás, Przykucki and Smith [@Balister_et_al2016], they proved that general update families satisfy the following universality result: in dimension 2, they can be sorted into three classes, *supercritical*, *critical* and *subcritical* (see definition \[def\_universality\_classes\]), which display different behaviors (their result for the critical class was later refined by Bollobás, Duminil-Copin, Morris and Smith in [@Bollobas_et_al2017]).
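A direct transcription of this update into code may help fix ideas (a sketch only; the finite torus and the classical 2-neighbour update family chosen below are illustrative and not taken from the article).

```python
# One-step bootstrap map A_t = A_{t-1} ∪ {x : ∃ X ∈ U, x + X ⊆ A_{t-1}},
# iterated to its closure on an N x N torus.
from itertools import combinations

N = 8
NEIGHBOURS = [(1, 0), (-1, 0), (0, 1), (0, -1)]
U = [list(pair) for pair in combinations(NEIGHBOURS, 2)]  # 2-neighbour family

def step(A):
    infected = set(A)
    for x in ((i, j) for i in range(N) for j in range(N)):
        if any(all(((x[0] + dx) % N, (x[1] + dy) % N) in A for dx, dy in X)
               for X in U):
            infected.add(x)
    return infected

A = {(i, j) for i in range(N) for j in range(N) if (i + j) % 3 == 0}
while True:
    B = step(A)
    if B == A:
        break          # closure reached: no new site gets infected
    A = B
print(len(A), "of", N * N, "sites infected in the closure")
```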
These works opened the study of KCMs with general update families. In [@MMT; @lbounds_infection_time; @Hartarsky_et_al2019; @Hartarsky_et_al2019bis], Hartarsky, Martinelli, Morris, Toninelli and the author showed that the grouping of two-dimensional update families into supercritical, critical and subcritical is still relevant for KCMs, and established an even more precise classification. However, these results deal only with equilibrium dynamics. Until now, nothing had been shown on out-of-equilibrium KCMs with general update families, apart from a perturbative result in dimension 1 [@Cancrini_et_al2010].
In this article, we prove that for all supercritical update families, for any initial law $\nu_{q'}$, $q'\in]0,1]$, when $q$ is close enough to 1, the dynamics of the KCM converges to equilibrium with exponential speed. This result holds in dimension 2 and also in dimension 1 for a good definition of one-dimensional supercritical update families. It is the first non-perturbative result of convergence to equilibrium holding for a whole class of update families.
This result may help to gain a better understanding of the out-of-equilibrium behavior of supercritical KCMs. In particular, such results of convergence to equilibrium were key in proving “shape theorems” for specific one-dimensional constraints in [@Blondel2013; @Ganguly_et_al; @Blondel_et_al2018].
Notations and result
====================
Let $d \in \mathds{N}^*$. We denote by $\|.\|_\infty$ the $\ell^\infty$-norm on $\mathds{Z}^d$. For any set $S$, $|S|$ will denote the cardinality of $S$.
For any configuration $\eta \in \{0,1\}^{\mathds{Z}^d}$, for any $x\in \mathds{Z}^d$, we denote $\eta(x)$ the value of $\eta$ at $x$. Moreover, for any $S \subset \mathds{Z}^d$, we denote $\eta_S$ the restriction of $\eta$ to $S$, and $0_S$ (or just 0 when $S$ is clear from the context) the configuration on $\{0,1\}^S$ that contains only zeroes.
We set an update family $\mathcal{U}=\{X_1,\dots,X_m\}$ with $m \in \mathds{N}^*$ and the $X_i$ finite nonempty subsets of $\mathds{Z}^d \setminus \{0\}$. To describe the classification of update families, we need the concept of *stable directions*.
For $u \in S^{d-1}$, we denote $\mathds{H}_u =
Quiver Grassmannians and Auslander varieties
for wild algebras.
Claus Michael Ringel
Let $k$ be an algebraically closed field and $\Lambda$ a finite-dimensional $k$-algebra. Given a $\Lambda$-module $M$, the set $\Bbb G_{\bold e}(M)$ of all submodules of $M$ with dimension vector $\bold e$ is called a quiver Grassmannian. If $D,Y$ are $\Lambda$-modules, then we consider $\Hom(D,Y)$ as a $\Gamma(D)$-module, where $\Gamma(D) =
\End(D)^\op$, and the Auslander varieties for $\Lambda$ are the quiver Grassmannians of the form $\Bbb G_{\bold e}\Hom(D,Y)$. Quiver Grassmannians, thus also Auslander varieties are projective varieties and it is known that every projective variety occurs in this way. There is a tendency to relate this fact to the wildness of quiver representations and the aim of this note is to clarify these thoughts: We show that for an algebra $\Lambda$ which is (controlled) wild, any projective variety can be realized as an Auslander variety, but not necessarily as a quiver Grassmannian.
[**1. Introduction.**]{} Let $k$ be an algebraically closed field and $\Lambda$ a finite-dimensional $k$-algebra. A [*dimension vector*]{} $\bold d$ for $\Lambda$ is a function defined on the set of isomorphism classes of simple $\Lambda$-modules $S$ with values $d_S$ being non-negative integers. If $M$ is a $\Lambda$-module, its dimension vector $\bdim M$ attaches to the simple module $S$ the Jordan-Hölder multiplicity $(\bdim M)_S = [M:S].$
Given a $\Lambda$-module $M$, the set $\Bbb G_{\bold e}(M)$ of all submodules of $M$ with dimension vector $\bold e$ is called a quiver Grassmannian. Quiver Grassmannians are projective varieties and every projective variety occurs in this way (see the Appendix). If $D,Y$ are $\Lambda$-modules, then we consider $\Hom(D,Y)$ as a $\Gamma(D)$-module, where $\Gamma(D) =
\End(D)^\op$. The easiest way to define the Auslander varieties for $\Lambda$ is to say that they are just the quiver Grassmannians $\Bbb G_{\bold e}\Hom(D,Y)$ (here, we rely on the Auslander bijections; the proper definition of the Auslander varieties would have to refer to right equivalence classes of right $D$-determined maps ending in $Y$, see \[Ri\]). The Auslander varieties are part of Auslander’s approach to describing the global directedness of the category $\mod\Lambda$. Let us add that the quiver Grassmannians for $\Lambda$ are special Auslander varieties, namely the Auslander varieties $\Bbb G_{\bold e}\Hom(D,Y)$ with $D = \Lambda$.
According to Drozd \[D1\], any finite dimensional $k$-algebra is either tame or wild (note that there are few tame algebras, most of the algebras are wild; for example, the path algebra of a connected quiver is tame only if we deal with a Dynkin or an extended Dynkin quiver). It has been conjectured that wild algebras are actually controlled wild (the definition will be recalled in section 2). A proof of this conjecture has been announced by Drozd \[D2\] in 2007, but apparently it has not yet been published. We show that for a fixed (controlled) wild algebra $\Lambda$, any projective variety can be realized as an Auslander variety, but not necessarily as a quiver Grassmannian. We denote by $\mod\Lambda$ the category of all (finite-dimensional left) $\Lambda$-modules. Let $\rad$ be the radical of $\mod\Lambda$; this is the ideal generated by all non-invertible maps between indecomposable modules. If $\Cal C$ is a collection of objects of $\mod \Lambda$, we denote by $\add \Cal C$ the closure under direct sums and direct summands. For every pair $X,Y$ of $\Lambda$-modules, $\Hom(X,\Cal C,Y)$ denotes the subgroup of $\Hom(X,Y)$ given by the maps $X \to Y$ which factor through a module in $\add\Cal C$. Here is now the definition. The algebra $\Lambda$ is said to be [*controlled wild*]{} provided for any finite-dimensional $k$-algebra $\Gamma$, there is an exact embedding functor $F\:\mod \Gamma \to \mod \Lambda$ and a full subcategory $\Cal C$ of $\mod \Lambda$ (called the [*control class*]{}) such that for all $\Gamma$-modules $X,Y$, the subgroup $\Hom(FX,\Cal C,FY)$ is contained in $\rad (FX,FY)$ and we have $$\Hom(FX,FY) = F\Hom(X,Y) \oplus \Hom(FX,\Cal C,FY).$$
In order to check that $\Lambda$ is controlled wild, it is sufficient to exhibit such a functor $F$ for just one suitable algebra $\Gamma$, for example for the 3-Kronecker algebra (this is the path algebra of the quiver with two vertices, say $a$ and $b,$ and three arrows $b \to a$).
We also mention that $\Lambda$ is said to be [*strictly wild*]{} provided for any finite-dimensional $k$-algebra $\Gamma$, there is a full exact embedding functor $F\:\mod \Gamma \to \mod \Lambda$ (thus, strictly wild algebras are controlled wild and we can take as control class $\Cal C$ the zero subcategory). The 3-Kronecker algebra is a typical strictly wild algebra. The special case of strictly wild algebras has already been considered in \[Ri\]. The proof of Proposition 1 will be given in this section. We start with the following Lemma. Proof: Let $X = \bigoplus X_i$. It is sufficient to show that $\Hom(X,C,X) = \Hom(X,\Cal C,X)$ for some module $C\in \add \Cal C$. Since the subgroups $\Hom(X,C,X)$ with $C \in \add\Cal C$ are subspaces of the finite-dimensional vector space $\Hom(X,X)$, there is $C\in \add\Cal C$ such that $\Hom(X,C,X)$ is of maximal dimension. Let $C'\in \add\Cal C$. Then also $C\oplus C'$ belongs to $\add\Cal C$ and we have $\Hom(X,C,X) \subseteq
\Hom(X,C\oplus C',X)$. The maximality of the dimension of $\Hom(X,C,X)$ implies that $\Hom(X,C,X) =
\Hom(X,C\oplus C',X),$ and thus $\Hom(X,C',X) \subseteq \Hom(X,C,X)$. But $\Hom(X,\Cal C,X) = \bigcup_{C'} \Hom(X,C',X)$.
Proof. Let $U$ be an element of $\Bbb G_{\bold g+\bold c} N$. We want to show that $U \supseteq ReN$. Given dimension vectors $\bold d, \bold d'$ for $\Lambda$, one writes $\bold d'\le \bold d$ provided $\bold d-\bold d'$ has non-negative coefficients. Since $\bdim U = \bold g+\bold c,$ we have $\bdim U \ge \bold c.$ Let $S$ be a simple $R$-module with $eS \neq 0$. Then $$[U:S] = (\bdim U)_S \ge \bold c_S = (\bdim \Lambda eN)_S = [\Lambda eN:S],$$ and therefore $eN \subseteq U$, thus also $\Lambda eN \subseteq U.$
[**Proof of Proposition 1.**]{} Let $V$ be a projective variety. There is a finite-dimensional algebra $\Gamma$, a $\Gamma$-module $M$ and a dimension vector $\bold g$ for $\Gamma$ such that $\Bbb G_{\bold g}M$ is of the form $V$ (see the Appendix). Since $\Lambda$ is controlled wild, there is a controlled embedding $F$ of $\mod
\Gamma$ into $\mod\Lambda$, say with control class $\Cal C$. Let $G = F({}_\Gamma\Gamma)$ and $Y = F(M).$ According to Lemma 1, there is $C\in \
---
abstract: 'In this article we study left I-orders in the bicyclic monoid $\mathcal{B}$. We give necessary and sufficient conditions for a subsemigroup of $\mathcal{B}$ to be a left I-order in $\mathcal{B}$. We then prove that any left I-order in $\mathcal{B}$ is straight.'
address: |
Department of Mathematics\
University of York\
Heslington\
York YO10 5DD\
UK
author:
- 'N. Ghroda'
title: 'Bicyclic semigroups of left I-quotients'
---
Introduction
============
The first published description of the bicyclic semigroup was given by Evgenii Lyapin in 1953 [@Lyp]. A description of the subsemigroups of the bicyclic monoid was given in 2005 [@ruskuc]. In this article, we use this description to study left I-orders in the bicyclic monoid.
Many definitions of semigroups of quotients have been proposed and studied. The first, that was specifically tailored to the structure of semigroups was introduced by Fountain and Petrich in [@pjhon], but was restricted to completely 0-simple semigroups of left quotients. This definition has been extended to the class of all semigroups [@bisGould]. The idea is that a subsemigroup $S$ of a semigroup $Q$ is a *left order* in $Q$ or $Q$ is a *semigroup of left quotients* of $S$ if every element of $Q$ can be written as $a^{\sharp}b$ where $a , b \in S$ and $a^{\sharp}$ is the inverse of $a$ in a subgroup of $Q$ and if, in addition, every *square-cancellable* element (an element $a$ of a semigroup $S$ is square-cancellable if $a\, \mathcal{H}^{*}\,a^{2}$) lies in a subgroup of $Q$. *Semigroups of right quotients* and *right orders* are defined dually. If $S$ is both a left order and a right order in a semigroup $Q$, then $S$ is an *order* in $Q$ and $Q$ is a semigroup of *quotients* of $S$. This definition and its dual were used in [@bisGould] to characterize semigroups which have bisimple inverse $\omega$-semigroups of left quotients.
On the other hand, Clifford [@clifford] showed that from any right cancellative monoid $S$ with (LC) we can construct a bisimple inverse monoid $Q$ such that $Q=S^{-1}S$; that is, every element $q$ in $Q$ can be written as $a^{-1}b$ where $a ,b \in S$ and $a^{-1}$ is the inverse of $a$ in $Q$ in the sense of inverse semigroup theory. By saying that a semigroup $S$ has the (LC) *condition* we mean that for any $a,b\in S$ there is an element $c\in S$ such that $Sa\cap Sb=Sc$. The author and Gould in [@GG] have extended Clifford’s work to a left ample semigroup with (LC) where they introduced the following definition of left I-orders in inverse semigroups:
Let $Q$ be an inverse semigroup. A subsemigroup $S$ of $Q$ is a *left I-order* in $Q$ or $Q$ is a semigroup of *left I-quotients* of $S$, if every element in $Q$ can be written as $a^{-1}b$ where $a ,b \in S$. The notions of *right I-order* and *semigroup of right I-quotients* are defined dually. If $S$ is both a left I-order and a right I-order in $Q$, we say that $S$ is an *I-order* in $Q$ and $Q$ is a semigroup of *I-quotients* of $S$. It is clear that, if $S$ a left order in an inverse semigroup $Q$, then it is certainly a left I-order in $Q$; however, the converse is not true (see for example [@GG] Example 2.2).
A left I-order in an inverse semigroup $Q$ is *straight left I-order* if every element in $Q$ can be written as $a^{-1}b$ where $a,b \in S$ and $a\,\mathcal{R}\,b$ in $Q$; we also say that $Q$ is a *straight left I-quotients* of $S$. If $S$ is straight in $Q$, we have the advantage of controlling the product in $Q$.
In [@NG] the author has given necessary and sufficient conditions for a semigroup $S$ to have a bisimple inverse $\omega$-semigroup of left I-quotients, modulo left I-orders in the bicyclic semigroup $\mathcal{B}$, which is the most straightforward example of a bisimple inverse $\omega$-semigroup. In fact, it is a semigroup with many remarkable properties. Left I-orders in the bicyclic semigroup are interesting in their own right. By describing left I-orders in $\mathcal{B}$, we obtain:
\[main\] Let $S$ be a subsemigroup of $\mathcal{B}$. If $S$ is a left I-order in $\mathcal{B}$, then it is straight.
In the preliminaries, after introducing the necessary notation, we recall some previous results describing the subsemigroups of $\mathcal{B}$.

We use the classification of subsemigroups of $\mathcal{B}$ in [@ruskuc] to investigate which of them are left I-orders in $\mathcal{B}$. Subsemigroups of $\mathcal{B}$ fall into three classes: upper, lower and two-sided. In Sections 3, 4 and 5 we give necessary and sufficient conditions for upper, lower and two-sided subsemigroups of $\mathcal{B}$, respectively, to be left I-orders in $\mathcal{B}$. In each case, such left I-orders are straight, and this proves Theorem \[main\].
Preliminaries {#prelim}
=============
Throughout this article we shall follow the terminology and notation of [@clifford]. The symbol $\mathbb{N}$ will denote the set consisting of the natural numbers and $\mathbb{N}^0=\mathbb{N}\cup \{0\}$. Let $ \mathcal{R} , \mathcal{L} , \mathcal{H}$ and $\mathcal{D}= \mathcal{R} \circ \mathcal{L}=\mathcal{L} \circ \mathcal{R}$ be the usual Green’s relations. A semigroup $S$ is called *simple* if $S$ does not contain proper two-sided ideals and *bisimple* if it consists of a single $\mathcal{D}$-class.
The bicyclic semigroup $\mathcal{B}(a,b)$ is defined as the monoid generated by two elements $a$ and $b$ subject only to the condition that $ba=1$. It follows that its elements can all be written in the standard form $a^ib^j$ where $i,j \geq 0$. We can write out the elements of $\mathcal{B}$ in an array.
$$\begin{array}{c|ccccc}
1& b & b^{2} & b^{3} & b^{4} & \ldots\\ \hline
a & ab & ab^{2} & ab^{3} & ab^{4} & \ldots \\
a^{2} & a^{2}b & a^{2}b^{2} & a^{2}b^{3} & a^{2}b^{4} & \ldots\\
a^{3}& a^{3}b &a^{3}b^{2} & a^{3}b^{3} & a^{3}b^{4} & \ldots \\
a^{4} & a^{4}b& a^{4}b^{2} & a^{4}b^{3} & a^{4}b^{4} & \ldots \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots\end{array}$$
The multiplication on $\mathcal{B}$ is defined as follows: $$a^{k}b^{l}a^{m}b^{n} = \begin{cases}
a^{k+m-l}b^{n} & l\leq m, \\
a^{k}b^{l-m+n} & l> m.
\end{cases}$$ We can put the two cases together as follows: $$a^kb^la^mb^n =a^{k-l+t}b^{n-m+t}\ \mbox{where}\ t=\mbox{max}\{l,m\}.$$ The monoid $\mathcal{B}$ is thus isomorphic to the monoid $\mathbb{N}^0 \times \mathbb{N}^0$ with multiplication $$(k,l)(m,n)=(k-l+t,n-m+t)\ \mbox{where}\ t=\mbox{max}\{l,m\}.$$
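In this coordinate form the product is easy to experiment with; below is a minimal sketch (an illustrative addition).

```python
# Bicyclic monoid as pairs (k, l) in N^0 x N^0, mirroring a^k b^l.
def mult(p, q):
    (k, l), (m, n) = p, q
    t = max(l, m)
    return (k - l + t, n - m + t)

# ba = 1: with b = (0, 1) and a = (1, 0),
assert mult((0, 1), (1, 0)) == (0, 0)
# ab = (1, 1) is a nontrivial idempotent:
assert mult((1, 1), (1, 1)) == (1, 1)
```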
---
abstract: 'Since the LEPS collaboration reported the first evidence of the $\Theta^+$ pentaquark in early 2003, eleven other experimental groups have confirmed this exotic state, while many other groups did not see any signal. If this state is further established by future high-statistics experiments, its discovery will be one of the most important events in hadron physics of the past three decades. This exotic baryon, with such a low mass and so narrow a width, poses a big challenge to hadron theorists. Up to now, more than two hundred theoretical papers have appeared trying to interpret this charming state. I will review some important theoretical developments on pentaquarks based on my biased personal views.'
author:
- 'Shi-Lin Zhu'
title: Pentaquarks
---
[****]{}
- I. Quark Model and Exotic Hadron Search
- II\. Pentaquarks: Discovery or Fluctuations?
- III\. Group Theory Of Pentaquarks
- IV\. Theoretical Models of Pentaquarks
- Chiral soliton model
- Jaffe and Wilczek’s Diquark model
- Other clustered quark model
- Non-clustered quark model
- QCD sum rules
- Heptaquarks
- V. Narrow width puzzle
- VI\. Pentaquark flavor wave functions, masses and magnetic moments
- VII\. Heavy flavored pentaquarks
- VIII\. Chiral Lagrangian formalism for pentaquarks
- Mass splitting
- Selection rules from octet pentaquark decays
- IX\. Diquarks, pentaquarks and dibaryons
- X. Summary
Quark Model and Exotic Hadron Search {#sec1}
====================================
Quantum Chromodynamics (QCD) is believed to be the underlying theory of the strong interaction, which has three fundamental properties: asymptotic freedom, color confinement, and approximate chiral symmetry with its spontaneous breaking. In the high energy regime, QCD has been tested up to the $1\%$ level. In the low energy sector, QCD is highly nonperturbative due to the non-abelian SU$_c$(3) color group structure. It is very difficult to calculate the whole hadron spectrum from first principles in QCD. With the rapid development of new ideas and computing power, lattice gauge theory may provide the final solution to the spectrum problem in the future. But for now, lattice QCD has only just been able to understand the first orbital and radial excitations of the nucleon in the baryon sector [@liu].
Under such circumstances, various models which are QCD-based or incorporate some important properties of QCD were proposed to explain the hadron spectrum and other low-energy properties. Among them, it is fair to say that the quark model has been the most successful one. It is widely used to classify hadrons and calculate their masses, static properties and low-energy reactions [@isgur]. According to the quark model, mesons are composed of a quark-antiquark pair while baryons are composed of three quarks. Both mesons and baryons are color singlets. Most of the experimentally observed hadrons can be easily accommodated in the quark model. Any state with quark content other than $q\bar q$ or $qqq$ is beyond the quark model and is termed non-conventional or exotic. For example, it is hard for $f_0(980)/a_0(980)$ to find a suitable position in the quark model. Instead it could be a kaon molecule or a four-quark state [@pdg].
However, besides conventional mesons and baryons, QCD itself does not exclude the existence of non-conventional states such as glueballs ($gg, ggg, \cdots$), hybrid mesons ($q\bar q g$), and other multi-quark states ($qq\bar q \bar q$, $qqqq\bar q$, $qqq\bar q \bar q \bar q$, $qqqqqq, \cdots$). In fact, hybrid mesons can mix freely with conventional mesons in the large $N_c$ limit [@cohen]. In the early days of QCD, Jaffe proposed the H particle [@jaffeold], a six-quark state, using the MIT bag model. Unfortunately it was not found experimentally.
In past years, some experimental evidence has accumulated for the possible existence of glueballs and hybrid mesons with exotic quantum numbers like $J^{PC}=1^{-+}$ [@pdg]. Recently the BES collaboration observed a possible signal of a proton anti-proton baryonium in $J/\Psi$ radiative decays [@bes]. But none of these states had been pinned down without controversy until the surprising discovery of pentaquarks by the LEPS collaboration [@leps].
Pentaquarks: Discovery or Fluctuations? {#sec2}
=======================================
Early last year the LEPS Collaboration at the SPring-8 facility in Japan observed a sharp resonance $\Theta^+$ at $1.54\pm 0.01$ GeV with a width smaller than 25 MeV and a statistical significance of $4.6\sigma$ in the reaction $\gamma n \to K^+ K^- n$ [@leps]. This resonance decays into $K^+ n$, hence carries strangeness $S=+1$. Later, many other groups claimed the observation of this state [@diana; @clas; @saphir; @itep; @clasnew; @hermes; @svd; @cosy; @Yerevan; @zeus; @forzeus]. All known baryons with $B=+1$ carry negative or zero strangeness. Such a resonance is clearly beyond the conventional quark model, with the minimum quark content $uudd\bar s$. It is now called the $\Theta^+$ pentaquark in the literature. A compilation of the $\Theta^+$ mass and decay width is presented in Figure (\[mao\]).
NA49 Collaboration announced evidence for the existence of a new narrow $\Xi^- \pi^-$ baryon resonance $\Xi^{--}_5$ with mass of $(1.862\pm 0.002) $ GeV and width below the detector resolution of about 0.018 GeV in proton-proton collisions at $\sqrt{s}=17.2$ GeV [@na49]. The quantum number of this state is $Q=-2, S = -2, I
= 3/2$ and its quark content is $(d s d s \bar u)$. They also observed signals for the $Q=0$ member of the same isospin quartet with a quark content of $(d s u s \bar d)$ in the $\Xi^- \pi^+$ spectrum. The corresponding anti-baryon spectra also show enhancements at the same invariant mass. H1 Collaboration claimed the discovery of a heavy pentaquark around 3099 MeV with the quark content $udud\bar{c}$ [@H1]. Very recently, STAR collaboration at RHIC observed a narrow peak at $1734\pm 0.5\pm 5$ MeV in the $\Lambda K_s^0$ invariant mass which was interpreted as an $I={1\over 2}$ pentaquark [@star-rhic]. However its antiparticle was not observed yet.
There is preliminary evidence that the $\Theta^+$ is an iso-scalar because no enhancement was observed in the $pK^+$ invariant mass distribution [@saphir; @clasnew; @hermes; @forzeus]. The third component of its isospin is $I_z=0$. So the $\Theta^+$ pentaquark is very probably an iso-scalar if it is a member of the anti-decuplet. At present, the possibility of this state being a member of another multiplet is not completely excluded. Hence its total isospin is probably zero. Most of the theoretical models assume that $\Theta^+$ is in the $SU(3)_f$ ${\bf\bar{10}}$ representation. All the other quantum numbers, including its angular momentum and parity, remain undetermined. However, most theoretical work has postulated its angular momentum to be one half because of its low mass. But the possibility of $J={3\over 2}$ still cannot be excluded completely.
It is important to point out that many other experimental groups reported negative results [@bes1; @hera-b; @rhic]. For example, the existence of $\Xi^{--}_5$ is still under debate [@doubt]. Compared with the 1640 $\Xi^-$ candidates produced in proton-proton collisions in NA49's analysis, the WA89 collaboration found no signal of the $\Xi^{--}$ pentaquark among $676000$ $\Xi^-$ candidates in their data sample [@wa89]. A long list of experiments yielding negative results, including unpublished ones, can be found in Ref. [@long]. Although the $\Theta^+$ pentaquark has been listed as a three-star resonance in the 2004 PDG, its existence is still not completely established.
Group Theory Of Pentaquarks {#sec3}
===========================
One can use some textbook group theory to write down the pentaquark wave functions in the framework of the quark model. Because of its low mass, a high orbital excitation with $L\ge 2$ seems unlikely. The Pauli principle requires a totally anti-symmetric wave function for the four light quarks. Since the anti-quark is in the $[11]_C$ representation, the four-quark color wave function is $[211]_C$.
With $L=0$, hence $P=-$, the 4q spatial wave function is symmetric, i.e., $[4]_O$. Their $SU(6)_{FS}$ spin-flavor wave function must be
---
abstract: 'The net Fisher information measure $I_{T}$, defined as the product of position and momentum Fisher information measures $I_{r}$ and $I_{k}$ and derived from the non-relativistic Hartree-Fock wave functions for atoms with $Z=1-102$, is found to correlate well with the inverse of the experimental ionization potential. Strong direct correlations of $I_{T}$ are also reported for the static dipole polarizability of atoms with $Z=1-88$. The complexity measure, defined as the ratio of the net Onicescu information measure $E_{T}$ to $I_{T}$, exhibits clearly marked regions corresponding to the periodicity of the atomic shell structure. The reported correlations highlight the need for using the net information measures in addition to either the position or momentum space analogues. With reference to the correlation of the experimental properties considered here, the net Fisher information measure is found to be superior to the net Shannon information entropy.'
author:
- |
K. D. Sen$^{1}$, C. P. Panos$^{2}$, K. Ch. Chatzisavvas$^{2}$[^1],\
and Ch. C. Moustakidis$^{2}$\
\
[*$^{1}$School of Chemistry, University of Hyderabad*]{},\
[*Hyderabad, 500046 India*]{}\
[*$^{2}$Department of Theoretical Physics*]{},\
[*Aristotle University of Thessaloniki,*]{}\
[*54124 Thessaloniki, Greece*]{}
title: Net Fisher information measure versus ionization potential and dipole polarizability in atoms
---
*Key words*: Fisher Information; Information Entropy; Atoms; Ionization potential; Dipole polarizability; Complexity.
Introduction
============
Two of the most popular information measures due to Shannon [@Shannon48] and Fisher [@Fisher25] respectively, are being increasingly applied in studying the electronic structure and properties of atoms and molecules. The Shannon information entropy $S_{r}$ of the electron density $\rho(\textbf{r})$ in coordinate space is defined as $$\label{eq:eq1}
S_{r}=-\int
\rho(\textbf{r})\,\ln{\rho(\textbf{r})}\,d\textbf{r},$$ and the corresponding momentum space entropy $S_{k}$ is given by $$\label{eq:eq2}
S_{k}=-\int n(\textbf{k})\,\ln{n(\textbf{k})}\,d\textbf{k},$$ where $n(\textbf{k})$ denotes the momentum density. The densities $\rho(\textbf{r})$ and $n(\textbf{k})$ are respectively normalized to unity and all quantities are given in atomic units. The Shannon entropy sum $S_{T}=S_{r}+S_{k}$ contains the net information and obeys the well known lower bound by Bialynicki-Birula and Mycielski [@Bialynicki75] who obtained the entropic uncertainty relation (EUR) which represents a stronger version of the Heisenberg uncertainty principle of quantum mechanics. Accordingly, the entropy sum in D-dimensions satisfies the inequality [@Bialynicki75; @Sears80] $$\label{eq:eq3}
S_{T}=S_{r}+S_{k} \geq D\,(1+\ln{\pi}).$$ Individual entropies $S_{r}$ and $S_{k}$ depend on the units used to measure $r$ and $k$ respectively, but their sum $S_{T}$ does not, i.e., it is invariant under uniform scaling of coordinates.
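As a concrete illustration (an added numerical sketch, not part of the original analysis), a one-dimensional Gaussian wave packet saturates this bound: with conjugate widths $\sigma_{x}\sigma_{k}=1/2$ the entropy sum equals $1+\ln\pi$.

```python
# Check S_r + S_k >= 1 + ln(pi) in D = 1 for a Gaussian wave packet (a.u.).
import numpy as np

def shannon(p, dq):
    """Discretized  -∫ p ln p  on a uniform grid with spacing dq."""
    return -np.sum(p * np.log(p)) * dq

sigma_x = 0.7
sigma_k = 1.0 / (2.0 * sigma_x)       # conjugate Gaussian width

x = np.linspace(-12, 12, 200_001); dx = x[1] - x[0]
rho = np.exp(-x**2 / (2 * sigma_x**2)) / (sigma_x * np.sqrt(2 * np.pi))

k = np.linspace(-12, 12, 200_001); dk = k[1] - k[0]
nk = np.exp(-k**2 / (2 * sigma_k**2)) / (sigma_k * np.sqrt(2 * np.pi))

S_T = shannon(rho, dx) + shannon(nk, dk)
print(S_T, 1 + np.log(np.pi))         # both ~ 2.14473: the bound is saturated
```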
The Shannon information entropies (uncertainties) provide a global measure of information about the probability distribution in the respective spaces. A more localized distribution in position space corresponds to a *smaller* value of the information entropy. For applications of the Shannon information entropy in chemical physics we refer the reader to the published literature [@Gadre100; @Gadre101; @Gadre102]. An example of the quantification of the order of chemical bonding employing Shannon information is given in [@Karafiloglou04].
Analogous applications for other quantum many-body systems (nuclei, atomic clusters and correlated atoms in a trap-bosons) have been reported recently [@Panos100].
The Fisher information measure or intrinsic accuracy in position space is defined as $$\label{eq:eq4}
I_{r}=\int \frac{\left|\nabla\rho(\textbf{r})\right|^2}{\rho(\textbf{r})}
\,d\textbf{r},$$ and the corresponding momentum space measure is given by $$\label{eq:eq5}
I_{k}=\int \frac{\left|\nabla n(\textbf{k})\right|^2}{n(\textbf{k})}
\,d\textbf{k}.$$
The individual Fisher measures are bounded through the Cramer-Rao inequality [@Rao59] according to $\displaystyle{I_{r}\geq
\frac{1}{V_{r}}}$ and $\displaystyle{I_{k}\geq \frac{1}{V_{k}}}$, where the $V$’s denote the corresponding spatial and momentum variances respectively. In position space, the Fisher information measures the sharpness of the probability density; for a Gaussian distribution it is exactly equal to the reciprocal of the variance [@Frieden04]. A sharp and strongly localized probability density gives rise to a *larger* value of the Fisher information in position space. The Fisher measure in this sense is complementary to the Shannon entropy, and their *reciprocal* proportionality is, in fact, utilized in this work. The Fisher measure has desirable properties: it is always positive and it reflects the localization characteristics of the probability distribution more sensitively than the Shannon information entropy [@Carroll06]. However, for the electronic density distribution in atoms, the enhanced sensitivity of the Fisher measure has not been demonstrated explicitly. The lower bounds of the Shannon sum ($S_{r}+S_{k}$) and the Fisher product ($I_{r}I_{k}$) are saturated for Gaussian distributions. For a variety of applications of the Fisher information measure we refer to the recent book [@Frieden04] and, for applications to the electronic structure of atoms, to the pioneering work of Dehesa et al. [@Dehesa01].
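A companion sketch (again an illustrative addition) checks the Gaussian statement numerically: the discretized Fisher integral reproduces $1/\sigma^{2}$, so the Cramér-Rao bound $I_{r}\geq 1/V_{r}$ holds with equality.

```python
# Check I_r = ∫ (rho')^2 / rho dx = 1/sigma^2 for a 1-D Gaussian density.
import numpy as np

sigma = 0.7
x = np.linspace(-12, 12, 200_001); dx = x[1] - x[0]
rho = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

drho = np.gradient(rho, dx)
I_r = np.sum(drho**2 / rho) * dx
print(I_r, 1 / sigma**2)   # both ~ 2.0408: Fisher info equals 1/variance
```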
In the context of density functional theory (DFT), Sears, Parr and Dinur [@Sears80b] were the first to highlight the importance of Fisher information, by showing explicitly that the quantum mechanical kinetic energy is a measure of the information content of a distribution. A link of Shannon information entropy with the kinetic energy for atomic clusters and nuclei has also been indicated in [@Massen01]. The electron localization function [@Becke01], which has been widely successful in revealing the localization properties of electron density in molecules, has been interpreted in terms of Fisher information [@Roman01]. Recently, the Euler equation of density functional theory has been derived from the principle of the minimum Fisher information within the time dependent versions [@Nagy03]. The Shannon information sum $S_{T}$ has been used in a large majority of applications of information theory in the electronic structure studies involving atoms and molecules. In this work we define the net information $I_{T}$ as the product $I_{r}I_{k}$ and consider its inverse $I_{T}^{-1}$ as representing the net information similar to $S_{T}$. In this sense we propose to employ $I_{T}$ instead of $S_{T}$ to assess the utility of the net Fisher information vis-a-vis the Shannon entropy sum. This is done in the analysis of experimental properties such as the ionization potential and polarizability, corresponding to the neutral atoms in their ground electronic states. It is worth noting here that the net uncertainty measures defined in the conjugate spaces are at the foundation of the quantum mechanical probability distribution. As noted above, the quantities $I_{T}$ and $S_{T}$ measure the net information content of the probability distribution including its spatial characteristics. Such measures could therefore be tested in their ability to reproduce the trends in atomic sizes, ionization potentials and the polarizabilities, respectively.
Very recently, the question whether atoms can grow in complexity with the increase in nuclear charge has been addressed [@Chatzisavvas05; @Chatzisavvas06]. In particular, the Onicescu information measures [@Onicescu66] in position space, $E_{r}$, and in momentum space, $E_{k}$, have been defined as the corresponding density expectation values $E_{r}=\int
\rho(\textbf{r})^2\,d\textbf{r}=\langle \rho(\textbf{r})
\rangle$ and $E_{k}=\int n(\textbf{k})^2\,d\textbf{k}=\langle
n(\textbf{k}) \rangle$, respectively.
The complexity $C$ is measured according to the prescription due to López-Ruiz, Mancini and Calbet (LMC) [@Lopez95; @Chatzisavvas06] as $$\label{eq:eq6}
C=S_{T} E_{T},$$ where $E_{T}=E_{r}E_{k}$.
$S_{T}$ denotes the information content stored in the system, and $E_{T}$ corresponds to the disequilibrium of the system, i.e. the distance from its actual state to equilibrium, according to [@Lopez95]. The Shiner-Davison-Landsberg (SDL) [@Shiner99] and LMC measures were criticized in [@Crutchfield00; @Feldman98; @Stoop05]; a related discussion can be found in [@Chatzisavvas05; @Chatzisavvas06].
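As a toy illustration of the prescription (\[eq:eq6\]) (our addition; a one-dimensional Gaussian pair of widths $\sigma$ and $1/\sigma$ serves here as a crude stand-in for conjugate position and momentum densities, not for actual atomic densities), the following sketch computes $C=S_{T}E_{T}$ and exhibits its independence of the scale $\sigma$.

```python
import numpy as np

def measures(sigma, x):
    """Shannon entropy S and Onicescu disequilibrium E of a 1D Gaussian."""
    rho = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    dx = x[1] - x[0]
    pos = rho > 0                              # guard against tail underflow
    S = -np.sum(rho[pos] * np.log(rho[pos])) * dx   # Shannon information entropy
    E = np.sum(rho**2) * dx                    # Onicescu measure <rho>
    return S, E

x = np.linspace(-30.0, 30.0, 120001)
for sigma in (0.5, 1.0, 2.0):
    S_r, E_r = measures(sigma, x)              # "position-space" Gaussian
    S_k, E_k = measures(1.0 / sigma, x)        # conjugate-width Gaussian
    C = (S_r + S_k) * (E_r * E_k)              # complexity C = S_T * E_T
    print(sigma, C)                            # ~ 0.226, independent of sigma
```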
In the
---
abstract: 'In this paper, we address the fundamental problem of line spectral estimation in a Bayesian framework. We target model order and parameter estimation via variational inference in a probabilistic model in which the frequencies are continuous-valued, i.e., not restricted to a grid; and the coefficients are governed by a Bernoulli-Gaussian prior model turning model order selection into binary sequence detection. Unlike earlier works which retain only point estimates of the frequencies, we undertake a more complete Bayesian treatment by estimating the posterior probability density functions (pdfs) of the frequencies and computing expectations over them. Thus, we additionally capture and operate with the uncertainty of the frequency estimates. Aiming to maximize the model evidence, variational optimization provides analytic approximations of the posterior pdfs and also gives estimates of the additional parameters. We propose an accurate representation of the pdfs of the frequencies by mixtures of von Mises pdfs, which yields closed-form expectations. We define the algorithm VALSE in which the estimates of the pdfs and parameters are iteratively updated. VALSE is a gridless, convergent method, does not require parameter tuning, can easily include prior knowledge about the frequencies and provides approximate posterior pdfs based on which the uncertainty in line spectral estimation can be quantified. Simulation results show that accounting for the uncertainty of frequency estimates, rather than computing just point estimates, significantly improves the performance. The performance of VALSE is superior to that of state-of-the-art methods and closely approaches the Cramér-Rao bound computed for the true model order.'
author:
- 'Mihai-Alin Badiu, Thomas Lundgaard Hansen, and Bernard Henri Fleury [^1][^2]'
bibliography:
- 'IEEEabrv.bib'
- 'References.bib'
title: Variational Bayesian Inference of Line Spectra
---
Line spectral estimation, complex sinusoids, model order selection, Bayesian inference, von Mises distribution, super-resolution, Bernoulli-Gaussian model, sparse estimation
Introduction
============
The problem of line spectral estimation (LSE) [@StoicaMoses2005], i.e., extracting the parameters of a superposition of complex exponential functions from noisy measurements, is fundamental in numerous disciplines in engineering, physics, and the natural sciences. To name a few examples, solutions to this problem have applications to range and direction estimation in sonar and radar, channel estimation in wireless communications, speech analysis, spectroscopy, molecular dynamics, power electronics, and geophysical exploration.
In LSE, the original signal $\vx = (x_0,\ldots,x_{N-1})^{\operatorname{T}}\in\mathbb{C}^N$ is a superposition of $K$ complex sinusoids, i.e., $$\label{eq:OrigSig}
x_n = \sum_{k=1}^{K} \alpha_k e^{j\omega_k n}, \quad n\in\{0,\ldots,N-1\},$$ where $\alpha_k\in\mathbb{C}$ and $\omega_k\in[-\pi,\pi)$ are the complex amplitude and (angular) frequency, respectively, of the $k$th component. We are given the vector $\vy$ containing $M\leq N$ noisy measurements of those components of $\vx$ with indices in $\mathcal{M}\subseteq\{0,\ldots,N-1\}$, $|\mathcal{M}|=M$. Defining the function $\va:[-\pi,\pi)\rightarrow\mathbb{C}^M$, $\omega\to\va(\omega) = (e^{j\omega m} \mid m\in\mathcal{M})^{\operatorname{T}}$ and the vector $\bm{\epsilon}$ representing additive noise, we write $$\label{eq:SigModel}
\vy = \sum_{k=1}^{K} \alpha_k \va(\omega_k) + \bm{\epsilon}.$$ The problem of LSE involves estimating the number $K$ of sinusoidal components, also referred to as model order selection, and their associated parameters $\alpha_k$ and $\omega_k$. Even if the model order $K$ is given, LSE is still nontrivial because of the nonlinear dependency of (\[eq:SigModel\]) on the frequencies.[^3]
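For concreteness, a minimal sketch of generating synthetic measurements according to (\[eq:SigModel\]) is given below; the number of components, the frequencies (including a closely spaced pair), the SNR and the sampling pattern are arbitrary illustrative choices rather than values used in our simulation study.

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 64, 3
omega = np.array([-1.7, 0.4, 0.45])            # true frequencies in [-pi, pi)
alpha = rng.normal(size=K) + 1j * rng.normal(size=K)   # complex amplitudes

M_set = np.sort(rng.choice(N, size=48, replace=False)) # observed indices, M <= N
def a(w):
    """Steering vector a(omega) restricted to the observed indices."""
    return np.exp(1j * w * M_set)

x = sum(alpha[k] * a(omega[k]) for k in range(K))      # noise-free superposition
snr_db = 10.0
noise_var = np.mean(np.abs(x)**2) / 10**(snr_db / 10)
eps = np.sqrt(noise_var / 2) * (rng.normal(size=M_set.size)
                                + 1j * rng.normal(size=M_set.size))
y = x + eps                                    # measurement vector
```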
Prior Work
----------
Under the assumption of known $K$, the $\omega_k$’s are traditionally estimated using the maximum-likelihood (ML) technique or subspace methods, such as [@Schmidt1986; @RoyKailath1989]. The ML method involves the hard task of maximizing a nonconvex function that has a multimodal shape with a sharp global maximum. The maximizer is typically searched using iterative algorithms (e.g., [@ZiskindWax1988; @Feder1988; @Fleury1999]) which, however, require accurate initialization and, at best, are guaranteed to converge to a local optimum. Nonetheless, the performance of the ML technique is superior to that of subspace methods, the difference being especially evident when the sample size $M$ or the signal-to-noise ratio (SNR) is small. Since $K$ is typically unknown in practice, the model order is conventionally selected based on an information criterion, which comprises a data term representing the fitting error and a penalty term that increases with the model order (see [@Stoica2004] and references therein). Assuming a range of potential model orders, the parameters corresponding to each possible order are estimated using, e.g., one of the aforementioned methods. Finally, the tradeoff between fitting error and model complexity is made by selecting the configuration that minimizes the criterion. Scanning a range of model orders can be computationally expensive. Also, in non-asymptotic regimes (particularly for limited $M$ or SNR), information criteria tend to provide an incorrect model order. A comprehensive review of classical approaches can be found in [@StoicaMoses2005].
A more recent approach to LSE is dictionary-based model estimation, see [@Austin2013] and the references therein. In this approach, nonlinear estimation of the frequencies is avoided by discretizing the range $[-\pi,\pi)$ into a finite set (grid) of samples that represent the candidate frequency estimates. The signal model is then approximated with a linear system comprising a so-called dictionary matrix (whose columns are given by $\va(\cdot)$ evaluated at the grid samples) and a vector of weights. Thus, the original nonlinear problem is replaced by a linear inverse problem to which a sparse solution is sought. The nonzero entries of the sparse estimate of the weight vector encode the model order and parameter estimates. There is a plethora of techniques that can be used for sparse signal representation, see the detailed survey [@Tropp2010]. However, restricting the candidate frequency estimates to a discrete grid induces spectral leakage due to the model mismatch. Consequently, $\vx$ can admit only an approximately sparse representation (or may even be incompressible) in a finite dictionary [@Chi2011; @Duarte2013]. On the one hand, a denser grid provides a better sparse approximation and higher accuracy of frequency estimation. On the other hand, increasing the grid density makes the dictionary columns highly coherent, which might affect the sparse reconstruction capability, and boosts the computational complexity. To alleviate the mismatch issues, several approaches have been conceived, e.g.: in [@Duarte2013], the concept of structured sparsity is utilized to inhibit closely-spaced frequency estimates; the method in [@Malioutov2005] starts with a coarse grid and heuristically iterates between estimating the weights and placing a finer grid around the locations of the non-zero weight estimates; in [@Ekanadham2011; @Yang2013; @Hu2013; @Fyhn2015], a less fine grid is used as a baseline and the dictionary matrix is modified to include auxiliary interpolation functions.
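The coherence issue can be quantified with a small numerical sketch (ours, with illustrative sizes): for a uniform grid of $L$ candidate frequencies and full sampling, the mutual coherence of the dictionary grows from zero in the orthogonal case $L=N$ towards one as the grid is refined.

```python
import numpy as np

N = 64
m = np.arange(N)                               # full sampling for simplicity

def dictionary(L):
    """Dictionary of L normalized candidate steering vectors on a uniform grid."""
    grid = -np.pi + 2 * np.pi * np.arange(L) / L
    return np.exp(1j * np.outer(m, grid)) / np.sqrt(N)

for L in (64, 128, 256, 512):
    A = dictionary(L)
    G = np.abs(A.conj().T @ A)                 # magnitudes of column inner products
    np.fill_diagonal(G, 0.0)
    print(L, G.max())                          # mutual coherence: ~0, 0.64, 0.90, 0.97
```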
In the quest for gridless methods which work directly with continuously parameterized dictionaries, i.e., dictionaries whose parameter ranges in $[-\pi,\pi)$, several works depart from using a static dictionary given by a fixed grid. By including the parameters that dictate the dictionary in the estimation problem, they obtain dynamic dictionary algorithms in which the candidate frequencies and hence the dictionary columns are gradually refined. In [@Austin2013], two such algorithms are designed based on the $\ell_p$ regularized least squares objective by adding a penalty term to prohibit closely spaced frequencies and respectively imposing a hard constraint on the minimum distance between frequencies. The algorithms approximately solve the involved nonlinear estimation and still require an initial grid [@Austin2013]. A different line of works adopts the Bayesian framework and augments the probabilistic model of sparse Bayesian learning (SBL) [@Tipping2001; @Wipf2004] to incorporate the candidate frequencies. In SBL, a sparse weight vector is promoted by selecting a parameterized/hierarchical prior model for its entries [@Tipping2001; @Wipf2004]. Estimation in the augmented model is performed using variational inference methods [@ShutinFleury2011; @Hu2012; @Shutin2013] or maximization of the marginalized posterior pdf [@Hansen2014]. Common to all existing SBL-based approaches is that they restrict to compute point estimates of the frequencies (i.e., MAP/ML estimates), which implies nontrivial maximization of highly multimodal functions (similar to classical ML frequency estimation) in each iteration. The maximization is accomplished approximately by using a grid followed by refinement with Newton’s method or interpolation. Another limitation is that, while providing good reconstruction performance, the SBL-based methods reportedly overestimate the model order, i.e., they consistently output additional spurious components (artifacts) of small power [@ShutinFleury2011; @Shutin2013].
A different gridless approach that avoids the frequency discretization issues is
---
abstract: 'We perform a numeric study (worm algorithm Monte Carlo simulations) of ultracold two-component bosons in two- and three-dimensional optical lattices. At strong enough interactions and low enough temperatures the system features magnetic ordering. We compute critical temperatures and entropies for the disappearance of the Ising antiferromagnetic and the *xy*-ferromagnetic order and find that the largest possible entropies per particle are $\sim 0.5 k_B$. We also estimate (optimistically) the experimental hold times required to reach equilibrium magnetic states to be on a scale of seconds. Low critical entropies and long hold times render the experimental observations of magnetic phases challenging and call for increased control over heating sources.'
author:
- 'B. Capogrosso-Sansone'
- 'Ş.G. Söyler'
- 'N.V. Prokof’ev'
- 'B.V. Svistunov'
title: Critical entropies for magnetic ordering in bosonic mixtures on a lattice
---
Introduction
============
At the moment, one of the prominent focuses and major challenges of experiments with ultracold gases is the realization of configurations which can be used to study quantum magnetism [@Sachdev_review; @Lewenstein_review]. Though interesting and fundamental on its own, better understanding of (frustrated) magnetic systems is further motivated by its relevance to high-$T_c$ superconductivity and applications to quantum information processing. Direct experimental studies of condensed-matter spin systems are limited by the lack of control over interactions, geometry, and frustration, and by the contaminating effects of other degrees of freedom. A new approach consists of using ultracold atoms in optical lattices (OL), provided that the system is driven towards regimes where it is possible to map the corresponding (Bose-)Hubbard Hamiltonian to spin models.\
Striking advances in experimental techniques, e.g. high controllability and tunability of Hamiltonian parameters and, more recently, single-site and single-particle imaging [@QGM_1; @QGM_2; @QGM_3; @QGM_4], brought forward the idea, originally proposed by Feynman, of quantum simulation/emulation [@Feynmann]. In the last decade, a considerable amount of theoretical and experimental research has been devoted to the objective of using ultracold lattice bosons and fermions to address many outstanding condensed matter problems via Hamiltonian modeling. Perhaps the biggest remaining experimental challenge consists of reaching low enough temperatures/entropies for the observation of ordered magnetic states. Theoretical insight on optimal conditions for such observations is greatly needed. While Mott insulator (MI) phases of single-component bosonic systems have been observed experimentally [@Greiner; @Porto_2D; @Bloch_review], and finite temperature effects have been extensively investigated recently [@Bloch-Umass; @Pollet-Van_houcke; @Ho; @Capogrosso; @Ketterle], the multi-component case is still a work in progress.\
In the present work, we address the issue for the case of two-component bosonic systems. We obtain such important numbers as the critical temperatures and, more importantly, the critical entropies below which magnetic phases can be observed experimentally. With these numbers in hand, we provide rough estimates of the hold times required for observing thermally equilibrated ordered magnetic states.\
We consider a homogeneous system of two-component bosons in a cubic (square) lattice with repulsive inter-species interaction and half-integer filling of each component. This system can be realized by loading an OL with two different atomic species, see, e.g., experiments at LENS with rubidium and potassium mixtures [@mixture_hetero_1; @mixture_hetero_2], or with the same atomic species in two different internal energy states, see, e.g., recent experiments done at MIT [@Ketterle] and ongoing experiments at Stony Brook [@Stony_Brook]. The inter- and intra-species interaction strengths, $U_{ab }\equiv U$, $U_{aa}$, and $U_{bb}$, can be tuned via Feshbach resonance or by changing the Wannier function overlap (in the presence of state-dependent lattices). If the intra-species interactions $U_{aa}$ and $U_{bb}$ are made much larger than any other energy scale, and the temperature is low enough, the system is accurately described by the two-component *hard-core* Bose-Hubbard Hamiltonian: $$\begin{aligned}
H=-t_a\! \sum_{<ij>} a^{\dag}_i\,a^{}_j \,
-t_b\! \sum_{<ij>} b^{\dag}_i\,b^{}_j
\, +U\sum_{i}n^{(a)}_i n^{(b)}_i
\, . \label{hamiltonian}\end{aligned}$$ Here $a^{\dag}_i(a^{ }_i)$, $b^{\dag}_i(b^{}_i)$ are bosonic creation (annihilation) operators and $t_a$, $t_b$ are hopping matrix elements for the two species of bosons ($A$ and $B$), respectively; the symbol $<\!\ldots\!>$ imposes the nearest-neighbor constraint on the summation over site subscripts; $n^{(a)}_i=a^{\dag}_i a^{ }_i$ and $n^{(b)}_i=b^{\dag}_i b^{ }_i$.\
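For a concrete handle on model (\[hamiltonian\]), the following exact-diagonalization sketch (our illustration; the results in this paper are instead obtained with path integral Monte Carlo) assembles the two-component hard-core Hamiltonian on a tiny open chain. The system size and couplings are arbitrary, and no projection onto half filling is performed.

```python
import numpy as np

L = 4                                  # chain length; Hilbert space dim 4**L = 256
t_a, t_b, U = 1.0, 1.0, 8.0            # illustrative couplings only

adag = np.array([[0.0, 0.0], [1.0, 0.0]])   # hard-core creation operator, one site
id2 = np.eye(2)

def site_op(op, species, i):
    """Embed a one-site operator for species 0 (a) or 1 (b) at site i."""
    factors = [op if (j == i and s == species) else id2
               for j in range(L) for s in (0, 1)]
    out = factors[0]
    for f in factors[1:]:
        out = np.kron(out, f)          # hard-core bosons commute across sites
    return out

A = [site_op(adag, 0, i) for i in range(L)]   # a_i^dagger
B = [site_op(adag, 1, i) for i in range(L)]   # b_i^dagger

H = np.zeros((4**L, 4**L))
for i in range(L - 1):                 # open-boundary nearest neighbours
    H -= t_a * (A[i] @ A[i + 1].T + A[i + 1] @ A[i].T)
    H -= t_b * (B[i] @ B[i + 1].T + B[i + 1] @ B[i].T)
for i in range(L):                     # on-site inter-species repulsion U n_a n_b
    H += U * (A[i] @ A[i].T) @ (B[i] @ B[i].T)

print(np.linalg.eigvalsh(H)[0])        # ground-state energy of the toy chain
```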
Model (\[hamiltonian\]) displays a very rich ground state phase diagram [@Kuklov_Svistunov; @Demler_Lukin; @Soyler], see Fig. \[fig1\]. For strong enough interactions, the system is incompressible in the particle-number sector, i.e. it is a MI. The remaining degree of freedom describing the boson type on a given site can be mapped onto the effective iso-spin variable [@Boninsegni; @Kuklov_Svistunov; @Demler_Lukin] and gives rise to two possible MI states: a double checker-board (2CB) solid phase, equivalent to the Ising antiferromagnet, and a super-counter-fluid (SCF), equivalent to a planar ferromagnet in the iso-spin terminology. For large enough hoppings the MI state undergoes a transition to a double superfluid state (2SF). Finally, as it has been shown recently [@Soyler], for strong asymmetry between the hopping amplitudes and relatively weak inter-species interaction a solid phase in the (heavy) component is stabilized via a mechanism of *inter-site* effective interactions mediated by the (light) superfluid component. In what follows we will focus on the magnetic states, namely the Ising antiferromagnet and the *xy*-ferromagnet. We present the first precise results, based on path integral Monte Carlo (PIMC) simulations by the Worm algorithm [@WA], for transition lines to magnetic phases in two- and three-dimensions (2D and 3D) at zero and finite temperature, and discuss experimental parameters required for reaching them.
Ground state
============
We begin with results for the ground state. In Fig. \[fig1\] we show the complete zero temperature phase diagram of model (\[hamiltonian\]) for the 2D system calculated in Ref. [@Soyler]. We also sketch (dashed line) the transition line for the disappearance of magnetic order for the 3D system by computing benchmark transition points (down triangles) for the strongly anisotropic and isotropic limits. These points correspond to the disappearance of the insulating Ising and the (*xy*)-ferromagnetic phases, respectively. While, as expected, the 3D case is better captured by the mean-field theory [@Demler_Lukin; @Soyler], the discrepancy between mean-field and Monte Carlo results is still sizable: $\sim$50%.
These results provide quantitative guidance for experimentally achieving the regime of quantum magnetism. In experiments with two different species this can be done by using Feshbach resonances [@mixture_hetero_1] in order to reach the desired $t_{a,b}/U$ value; in the case of the same species but different internal states, one can load state-dependent lattices and tune the interspecies interaction by changing the overlap of the Wannier functions of the two components.
Finite-temperature results
==========================
Turning to the issue of reaching magnetic phases in realistic experimental setups—with an adiabatic protocol of turning on the optical lattice—we look for the highest possible values of the critical entropy for the appearance of magnetically ordered states. The critical values of the temperature come as a natural ‘by-product’ of the simulations. In what follows we use $t_b\ge t_a$ as the energy unit.
Critical temperatures
---------------------
We start with the Ising antiferromagnet-to-normal transition. It belongs to the *d*-dimensional Ising universality class, the order parameter being the staggered magnetization along the *z*-axis or, equivalently, in bosonic language, the structure factor (which is the square of the order parameter): $$S^{(a,b)}_\textbf{\scriptsize K} =\, \sum_{\textbf{r},\textbf{r}'}\,
\exp\left[i\textbf{K}\!\cdot\! \left(\textbf{r}-\textbf{r}'\right)\right] \,
{ \langle n^{(a)}_{\textbf{r}} n^{(b)}_{\textbf{r}'}\rangle \over N^{(a)}N^{(b)}
}\; ,
\label{StrF_def}$$ with **K** the reciprocal lattice vector of the CB solid, i.e. **K**=$(\pi
---
abstract: 'We apply adaptive feedback for the refrigeration of a mechanical resonator, i.e. with the aim of simultaneously cooling the classical thermal motion of more than one vibrational degree of freedom. The feedback is obtained from a neural network trained via a reinforcement learning strategy to choose the correct sequence of actions from a finite set in order to reduce the total energy of all modes of vibration. The actions are realized either as optical modulations of spring constants or as radiation pressure induced momentum kicks. As a proof of principle we numerically show simultaneous cooling of four independent modes with an overall strong reduction of the total system temperature.'
author:
- Christian Sommer
- Muhammad Asjad
- Claudiu Genes
bibliography:
- 'bibfileReinforcement.bib'
title: Prospects of reinforcement learning for simultaneous damping of many mechanical modes
---
The radiation pressure effect of light onto the motion of mechanical resonators has been extensively employed to bring such macroscopic systems towards the quantum ground state [@Aspelmeyer2014cavity; @windey2019cavity; @deli2019cavity; @Rossi2017; @Clark2017; @Qiu2019; @Asenbaum2013; @Mancini1998; @Clemens2016; @Kiesel2013; @Millen2015]. In a standard approach, the aim is to isolate a single vibrational mode and bring it to its quantum ground state, where the only relevant motion is given by the zero-point fluctuations. Cold damping is one of the techniques used: one detects motionally-induced phase changes in the cavity output, and an electronic feedback loop is implemented to dynamically modify the cavity drive so as to produce an extra optical damping effect [@Genes2008Ground; @Steixner2005; @Bushev2006; @Rossi2018; @Cohadon1999; @Poggio2007; @Wilson2015; @Tebbenjohanns2018Cold]. Alternatively, in the good cavity limit, where the photon loss rate is smaller than the mechanical frequency, the resolved sideband technique can be implemented by detuning the drive to the cooling sideband [@Gigan2006self; @Braginsky2007parametric; @Marquardt2007Quantum; @Wilson2007Theory; @Teufel2011Sideband]. As the effect stems from the inherent time delay between the action of the mechanical resonator onto the cavity field and the back-action of light, this can be seen as a sort of automatic cavity-induced feedback. While both techniques are successfully applied to isolate and cool a single vibrational mode, they are not necessarily optimal to induce full refrigeration of the mechanical resonator, i.e. to simultaneously cool all the vibrational modes among which the thermal energy is distributed. The main impediment is that the detected output signal only gives information on a generalized collective quadrature, but not on all modes. This leads to efficient cooling of some collective mode (for example, the center of mass) while other collective modes become dark and remain in a high-temperature state.\
Here, we propose a machine learning approach towards devising a strategy capable of providing refrigeration of the classical motion of a mechanical resonator based on the feedback obtained from the detection of a single optical mode. To this end we provide a proof-of-principle multi-mode numerical simulation using a neural network trained using a reinforcement learning algorithm to generate the feedback signal capable of simultaneously extracting thermal energy from four distinct modes of a single mechanical resonator.\
Machine learning techniques have recently been applied to various problems in quantum physics, ranging from the identification of phases in many-body systems, the prediction of ground-state energies for electrostatic potentials, and active learning approaches that propose and optimize experimental setup configurations, to applications in quantum control and quantum-error correction [@Chen2014Fidelity; @Carrasquilla2017Machine; @Nieuwenburg2017Learning; @Dunjko2017Machine; @Carleo2017Solving; @Mills2017Deep; @Melnikov2018Active; @Bukov2018Reinforce; @Foesel2018Reinforce]. In particular, a few studies [@Chen2014Fidelity; @Bukov2018Reinforce; @Foesel2018Reinforce] successfully applied the technique of reinforcement learning with neural networks [@Russel2018Modern]. This approach originates from the idea of letting an intelligent agent that observes its environment choose actions, determined by a given policy, so as to maximize a particular reward and minimize a punishment.\
Here, we employ such a technique for the optically assisted cooling of the classical thermal state of a multi-mode mechanical resonator system [@Nielsen2017Multimode; @Piergentili2018; @PhysRevA.99.023851]. The learning technique allows one to acquire a nonlinear function that chooses a feedback action to be applied to the dynamical system, taking the fully or partially measured state of the system as input. The training of this function, which is given by a dense neural network, proceeds by trial and error and is quantified by an increased reward obtained by successfully reducing the energy of the resonators.\
The physical systems considered are depicted in Fig. \[fig1\]. The mechanical resonator is subject to environmental noise described by a standard Brownian motion stochastic force leading to thermalization at some equilibrium temperature $T$. The feedback action is implemented via the radiation pressure force, i.e. photon kicks either from one or from two sides. The induced damping is straightforward in the two-sided kicking case \[illustrated in Fig. \[fig1\]a\]: the read-out of the motion is followed by an appropriate kicking action from the side towards which the resonator is moving. However, one-sided kicking \[illustrated in Fig. \[fig1\]b\] already suffices, as the oscillator can be displaced by a constant force and damped around the modified equilibrium position. The typically weak free-space photon-phonon interaction can be drastically increased by the filtering of the action field through a high-finesse optical cavity, as shown in Fig. \[fig1\]c. Such a situation is characterized by a linear coupling of the photon number to the membrane’s displacement and has been extensively studied in single-mode cooling via cavity time-delayed effects [@Genes2008Ground] or by implementation of cold damping techniques [@Genes2008Ground], especially in the bad cavity regime. The membrane-in-the-middle [@Thompson_2008; @Jayich_2008; @Asjad2014Robust] scenario in Fig. \[fig1\]d,e corresponds to a quadratic coupling in the displacement, leading to the possibility of optically modulating the mechanical oscillation frequency [@Asjad2014Robust]. We describe in Fig. \[fig1\]e a possible approach for feedback cooling via cavity field detection and neural-network-assisted feedback.\
We will focus in the following on the bad cavity case, where the cavity back-action is negligible and where standard cold damping techniques are used for cooling. In such a case, the situations described in Fig. \[fig1\]b and Fig. \[fig1\]c are physically equivalent, with the difference that in Fig. \[fig1\]c the action of a single photon is multiplied by a large number roughly proportional to the finesse of the cavity. We therefore distinguish between physically distinct situations, namely parametric cooling and linear cooling. First, we analyze the performance of a neural-network-suggested set of actions on the cooling of a single mode via parametric modulation of the oscillation frequency: we describe the shape of the action and numerically show the efficient reduction of energy from the initial thermal distribution. We then apply the technique to the linear cooling of four distinct modes of the resonator and find that a more complex set of actions is required for efficient simultaneous cooling of all four modes (with limitations arising due to the numerical complexity of the simulations).
**Model and equations** — We consider a membrane resonator with a few modes of oscillation of frequencies $\omega_j$ (where $j=1,\ldots,N$). We start with a quantum formulation of the system’s dynamics aimed at future treatments of cooling in the presence of quantum noise. However, the current formulation aims only at the reduction of classical thermal noise and is therefore obtained as a classical averaging of the quantum equations of motion. The Hamiltonian for the collection of modes is written as $H_m =\sum_{j=1}^{N} \hbar\omega_j/2\left(p_{j}^{2} + q_{j}^{2} \right),$ in terms of dimensionless position and momentum quadratures $q_{j}$ and $p_{j}$ for each independent membrane oscillation mode. The effect of the thermal reservoir can be easily included in a set of equations of motion supplemented with the proper input stochastic noise terms:
\[FreeEq.1\] $$\begin{aligned}
\dot{q}_j &=\omega_{j} p_j \\
\dot{p}_j &= -\omega_{j}q_j - \gamma_{j} p_j + \xi_j + F_{j}(t).\end{aligned}$$
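As a minimal illustration of these dynamics (our construction, not the trained network discussed below), the following sketch integrates a single mode of (\[FreeEq.1\]) with an Euler-Maruyama step and replaces the learned policy by a naive momentum-kick rule $F(t)=-F_0\,\operatorname{sign}(p)$; all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Single-mode Euler-Maruyama integration with a hand-written bang-bang feedback
# F = -F0 * sign(p), a crude stand-in for the neural-network policy.
omega, gamma, D = 1.0, 1e-3, 0.05      # frequency, damping, noise strength
F0, dt, steps = 0.2, 1e-2, 200000

q, p = 5.0, 0.0                        # hot, displaced initial condition
energies = []
for _ in range(steps):
    F = -F0 * np.sign(p)               # kick against the instantaneous momentum
    xi = np.sqrt(2 * gamma * D / dt) * rng.normal()
    q += omega * p * dt
    p += (-omega * q - gamma * p + xi + F) * dt
    energies.append(0.5 * omega * (q**2 + p**2))

print(energies[0], np.mean(energies[-1000:]))   # strong reduction of the energy
```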
The parameter $\gamma_{j}$ describes the damping of the $j$th resonator mode. Its associated zero-average Gaussian stochastic noise term, leading to thermalization with the environment, is fully described by the two-time correlation function: $$\begin{aligned}
\label{FreeEq.2}
\langle \xi_j(t) \xi_{j'}(t')\rangle &=& \frac{\gamma_{j}}{\omega_j}\int_{0}^{\Omega} \frac{d\omega}{2\pi}e^{-i\omega(t-t')} S_{\text{th}}(\omega)\delta_{jj'},\end{aligned}$$ where $\Omega$ is the frequency cutoff of the reservoir and the thermal noise spectrum is given by $
---
abstract: 'Motivated by the recent work about a new physical interpretation of quasinormal modes by Maggiore, we investigate the quantization of near-extremal Schwarzschild-de Sitter black holes in the four dimensional spacetime. Following Kunstatter’s method, we derive the area and entropy spectra of near-extremal Schwarzschild-de Sitter black holes, which differ from Setare’s result. Furthermore, we find that the derived universal area spectrum is $2\pi n$, which is equally spaced.'
author:
- Wenbo Li
- 'Lixin Xu[^1]'
- Jianbo Lu
title: 'Area spectrum of near-extremal SdS black holes via the new interpretation of quasinormal modes'
---
Introduction
============
The quantization of the black hole horizon area is a fascinating subject. Since an equally spaced entropy spectrum was first predicted by Bekenstein in 1974 [@beken1], there have been many attempts to derive the entropy spectrum directly from the dynamical modes of the classical theory [@beken2; @louko; @dolgov; @barvin; @kastrup; @beken3]. However, little has been known about the direct physical connection between the classical dynamical quantities that give rise to the Bekenstein-Hawking entropy and the corresponding microscopic degrees of freedom of the quantum black hole. An important step in this direction was made by Hod [@hod1] through a semiclassical consideration of the macroscopic oscillation modes of black holes. In particular, he assumed an equally spaced discrete area spectrum and used the existence of a unique quasinormal mode frequency in the large damping limit to uniquely fix the spacing. Dreyer [@dreyer] demonstrated that Hod’s result could be recovered in loop quantum gravity if the relevant group is taken to be $SO(3)$ rather than $SU(2)$. These developments have spurred much subsequent activity [@frit; @kunst; @motl; @corichi; @abdalla0; @card1; @card2; @hod2; @birmin; @ploy; @setare1; @lepe; @setare2; @hod3; @keshet1; @keshet2; @daghigh].
Recently, a new physical interpretation of the quasinormal modes of black holes was presented by Maggiore [@maggiore]. According to Maggiore’s proposal, in order to overcome, or at least alleviate, some difficulties raised by Hod’s proposal [@hod1] in the interpretation of quasinormal frequencies, black hole perturbations are modeled in terms of a collection of damped harmonic degrees of freedom. In addition, he indicated that the real frequencies of the equivalent damped harmonic oscillators are $(\omega^2_R+\omega^2_I)^{1/2}$, rather than simply $\omega_R$. Motivated by Maggiore’s work, Vagenas [@vagenas] applied the new proposal to the interesting case of the Kerr black hole and, following Maggiore’s idea, proposed a new interpretation of the frequency that appears in the adiabatic invariant of a black hole. In a more recent paper [@medved], a universal form for the Kerr and Schwarzschild quantum area spectra was established by Medved by presenting a simple but vital modification to a recent treatment of the Kerr (or rotating black hole) spectrum. Although the above considerations are still somewhat speculative, they certainly propose a reasonable physical interpretation of the spectrum of the black hole quasinormal modes. It is, therefore, of interest to study the area spectrum of other black holes, in particular near-extremal black holes, for which the quasinormal frequencies are quite different from those of non-extremal black holes.
In this article our aim is to investigate the area and entropy spectra of near-extremal Schwarzschild-de Sitter black holes in four dimensional spacetime by adopting Maggiore’s proposal. According to Maggiore’s work, the frequency of the harmonic oscillator is $\omega_0=(\omega^2_R+\omega^2_I)^{1/2}$. Although the author of Ref. [@setare2] has discussed the area and entropy spectra, the most interesting case is that of highly excited quasinormal modes, whose frequency is $\omega_0=|\omega_I|$ rather than simply $\omega_0=\omega_R$, since $\omega_I\gg\omega_R$ in this limit. This observation changes the physical understanding of the black hole spectrum and calls for a re-examination of various results in the literature. In the next section, we will try to derive the area and entropy spectra by extending Kunstatter’s method.
Quasinormal Modes of Near Extremal SdS Black Holes
==================================================
The near-extremal Schwarzschild-de Sitter (SdS) spacetime in four dimensions is a non-trivial case with a non-asymptotically flat spacetime. General SdS spacetimes have a metric of the form $$\begin{aligned}
ds^2=-f(r)dt^2+f^{-1}(r)dr^2+r^2d\Omega^2_2,\end{aligned}$$ with $$\begin{aligned}
f(r)=1-\frac{2M}{r}-\frac{r^2}{L^2_{ds}},\end{aligned}$$ where $M$ denotes the black hole mass and $L_{ds}$ is the de Sitter curvature radius, which is related to the cosmological constant $\Lambda$ by $L^2_{ds}=3/\Lambda$. The spacetime possesses two horizons: the usual black hole horizon located at $r=r_+$ and the cosmological horizon located at $r=r_c$, where $r_+<r_c$. We assume that the three roots of the equation $f(r)=0$ are $r_+$, $r_c$, and $r_0$, respectively. In terms of these roots, $f(r)$ can be rewritten as $$\begin{aligned}
f(r)=\frac{1}{L^2_{ds}r}(r-r_+)(r_c-r)(r-r_0),\end{aligned}$$ with $r_0=-(r_++r_c)$. In addition, $M$ and $L^2_{ds}$ as functions of these roots can be expressed as $$\begin{aligned}
L^2_{ds}=r_+^{2}+r_+r_c+r_c^{2},\nonumber\\
2ML^2_{ds}=r_+r_c(r_++r_c).\label{m}\end{aligned}$$ Defined by the relation $\kappa_+\equiv\frac{1}{2}(df/dr)|_{r=r_+}$, the surface gravity $\kappa_+$ can be written as $$\begin{aligned}
\kappa_+=\frac{(r_c-r_+)(r_+-r_0)}{2L^2_{ds}r_+}.\end{aligned}$$
Let us now specialize to a non-trivial case with a non-asymptotically flat spacetime, the near-extremal SdS black hole. In this case, the cosmological horizon $r_c$ is very close to the black hole horizon $r_+$. Hence, one can make the following approximations: $$\begin{aligned}
r_0\sim-2r_+,\;\;\;\;\;\;\;\;L^2_{ds}\sim3r^2_+,\;\;\;\;\;\;\;
\kappa_+\sim\frac{r_c-r_+}{2r^2_+}.\label{kappa}\end{aligned}$$
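The quality of the approximations (\[kappa\]) is easy to check numerically. The sketch below (our addition) solves $f(r)=0$ for a mass close to the extremal (Nariai) value $M_{\max}=L_{ds}/(3\sqrt{3})$ and compares the exact surface gravity with its near-extremal form; the chosen distance from extremality is arbitrary.

```python
import numpy as np

L_ds = 1.0
M = 0.99 * L_ds / (3 * np.sqrt(3))     # 99% of the extremal (Nariai) mass

# f(r) = 1 - 2M/r - r^2/L^2 = 0 is equivalent to -r^3/L^2 + r - 2M = 0.
roots = np.sort(np.roots([-1.0 / L_ds**2, 0.0, 1.0, -2.0 * M]).real)
r0, rp, rc = roots                     # r_0 < 0 < r_+ < r_c

kappa_exact = (rc - rp) * (rp - r0) / (2 * L_ds**2 * rp)
kappa_near = (rc - rp) / (2 * rp**2)   # near-extremal approximation
print(kappa_exact, kappa_near)         # close agreement near extremality
```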
Cardoso and Lemos were the first to study the analytical quasinormal mode spectrum of the near-extremal SdS black hole [@card2]; they concluded that the asymptotic quasinormal frequencies of the near-extremal SdS black hole are given by the simple expression $$\begin{aligned}
\omega=\kappa_+\bigg[\sqrt{\frac{\upsilon_0}{\kappa^2_+}-1/4}-i(n+1/2)\bigg],\;\;\;\;\;\;n=0,1,2,...\label{QNM1}\end{aligned}$$ where $$\begin{aligned}
\upsilon_0=\kappa^2_+l(l+1),\end{aligned}$$ for scalar and electromagnetic perturbations, and $$\begin{aligned}
\upsilon_0=\kappa^2_+(l+2)(l-1),\end{aligned}$$ for gravitational perturbations, where $l$ is the angular quantum number. Very recently, by computing the Lyapunov exponent, which is the inverse of the instability timescale associated with the geodesic motion, Cardoso et al. showed that the quasinormal modes of black holes are determined by the parameters of the circular null geodesics in the eikonal limit ($l\gg1$) [@card3]. They then found a simple analytical quasinormal mode formula for the near-extremal SdS black holes in $d=4$: $$\begin{aligned}
\omega_{BQNM}=\kappa_+[l-i(n+1/2)].\label{QNM2}\end{aligned}$$
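Maggiore’s identification $\omega_0=(\omega^2_R+\omega^2_I)^{1/2}$ can be probed directly on the spectrum (\[QNM2\]). The sketch below (our illustration, with placeholder values of $\kappa_+$ and $l$) shows that the spacing of $\omega_0$ approaches $\kappa_+$ for highly excited modes, which is the behavior underlying an equally spaced spectrum.

```python
import numpy as np

kappa, l = 0.05, 2                     # placeholder surface gravity and multipole
n = np.arange(0, 2000)
omega_R = kappa * l                    # real part of the QNM frequency
omega_I = -kappa * (n + 0.5)           # imaginary part of the QNM frequency
omega0 = np.sqrt(omega_R**2 + omega_I**2)

spacing = np.diff(omega0)
print(spacing[0], spacing[-1], kappa)  # spacing tends to kappa_+ as n grows
```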
Though the form of Eq. (\[QNM2\]) is similar to that of Eq. (\[QNM1\]), there exists an important difference, namely, Eq. (\[QNM2\]) is valid for the low-lying modes $n\ll l$ with $l\gg1$, while Eq. (\[QNM1\]) is valid in the limit $n\rightarrow \infty$. Here we are interested in
---
abstract: 'ALOHA-type protocols have become a popular solution for distributed and uncoordinated multiple random access in wireless networks. However, such distributed operation of the Medium Access Control (MAC) layer leads to sub-optimal utilization of the shared channel. One of the reasons is the occurrence of collisions when more than one packet is transmitted at the same time. These packets cannot be decoded and retransmissions are necessary. However, it has been recently shown that it is possible to apply signal processing techniques to these collided packets so that useful information can be decoded. This was recently proposed in Irregular Repetition Slotted ALOHA (IRSA), achieving a throughput $T \simeq 0.97$ for very large MAC frame lengths as long as the number of active users is smaller than the number of slots per frame. In this paper, we extend the operation of IRSA with *i)* an iterative physical-layer decoding process that exploits the capture effect and *ii)* a Successive Interference Cancellation (SIC) process at the slot level, named intra-slot SIC, to decode more than one colliding packet per slot. We evaluate the performance of the proposed scheme, referred to as Extended IRSA (E-IRSA), in terms of throughput and channel capacity. Computer-based simulation results show that the E-IRSA protocol reaches the maximum theoretical achievable throughput even in scenarios where the number of active users is higher than the number of slots per frame. Results also show that the E-IRSA protocol significantly improves the performance even for the small MAC frame lengths used in practical scenarios.'
author:
-
title: 'Intra-Slot Interference Cancellation for Collision Resolution in Irregular Repetition Slotted ALOHA'
---
random access protocols, slotted ALOHA, irregular repetition slotted ALOHA, bipartite graphs, capture effect, intra-slot interference cancellation, successive interference cancellation, collision resolution, iterative decoding.
Introduction
============
Uncoordinated Medium Access Control (MAC) protocols, such as ALOHA or Carrier Sensing Multiple Access (CSMA), are used in today’s communication networks due to their capability for managing the access to a shared communication channel in a distributed manner. A clear example is the operation of the Random Access Channel (RACH) of LTE, which consists of a framed slotted ALOHA scheme where slots represent orthogonal preambles that users use to contend for access to the resources [@LAYA2014].
Despite the congestion problems that these protocols suffer from in highly dense networks, they are still the best solutions available for completely distributed access in wireless networks. There are many scenarios where centralized access is not possible due to long propagation delays (e.g. satellite communications) or due to scalability issues when the number of contending devices is extremely high and unpredictable, e.g. Machine-to-Machine (M2M) networks.
Therefore, when it comes to highly dense dynamic networks, random-based distributed protocols are the only viable solution known to date. It has been proven in the literature that, among the existing alternatives, frame-based ALOHA-type protocols can perform best when optimally configured. However, the high probability of collision will still yield low performance. To overcome this limitation, the use of Successive Interference Cancellation (SIC) techniques is becoming a hot topic in the area of MAC design. The combination of the MAC layer with SIC techniques, traditionally employed at the PHY layer for coding purposes, is deemed to lead to a major breakthrough in the performance of MAC protocols by turning collisions into useful information.
Recently, approaches based on multiple packet transmission [@DA] and iterative interference cancellation (IC) [@CRDSA], [@IRSA] have shown to yield dramatic performance improvements in terms of throughput with respect to previous existing solutions.
The Contention Resolution Diversity Slotted ALOHA (CRDSA) protocol proposed in [@CRDSA] was the first ALOHA-based protocol to adopt SIC techniques for resolving collisions. More specifically, each packet is transmitted in two different randomly selected slots within a MAC frame. Even though this approach apparently increases the network load, it provides time-domain diversity through the transmission of a redundant copy of each packet. The replicas of each packet carry a pointer to the slot where the other replica was sent. Whenever a packet is successfully decoded, the pointer is extracted and the interference contribution caused by the twin replica on the corresponding slot is removed. The procedure is iterated, eventually permitting the recovery of the whole set of packets transmitted within the same frame. CRDSA achieves a maximum throughput, defined as the probability of successful packet transmission per slot, of $T \simeq 0.55$, while the peak throughput of Framed Slotted ALOHA is just $T \simeq 0.37$.
The CRDSA protocol was later generalized in [@IRSA], allowing users to transmit more than 2 copies of the same packet per frame. In particular, the actual number of packet replicas is drawn from a probability mass function, referred to as the *degree distribution* [@IRSA], that is optimized to achieve the maximum supportable load on the shared medium. Since the number of transmitted replicas differs from user to user, this scheme is dubbed Irregular Repetition Slotted Aloha (IRSA). In [@IRSA], the operation of IRSA is described by borrowing concepts from graph codes, such as belief propagation at the packet level for resolving collisions. It provides a bipartite graph representation allowing a fast analytical characterization of the IRSA performance. The convergence analysis of the SIC process shows that IRSA provides a throughput equal to $T \simeq 0.97$ if a suitable degree distribution is selected and as long as the number of available slots is greater than the number of contending devices. Despite these promising performance figures, IRSA cannot perform optimally when the number of devices is greater than the number of slots. This behavior represents a limitation in scenarios suffering from channel overload, such as M2M networks, where a massive number of devices restricts its applicability in realistic settings.
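The packet-level SIC (‘peeling’) decoding of IRSA is simple to prototype. The following Monte Carlo sketch (ours) uses an arbitrary two-point degree distribution rather than the optimized distributions of [@IRSA], and implements neither capture nor intra-slot SIC, so it reflects plain IRSA behavior only.

```python
import numpy as np

rng = np.random.default_rng(2)

def irsa_frame(m, n, degrees=(2, 3), probs=(0.5, 0.5)):
    """One IRSA frame: random replica placement plus iterative inter-slot SIC."""
    slots = [[] for _ in range(n)]
    for user in range(m):
        d = rng.choice(degrees, p=probs)           # repetition rate of this user
        for s in rng.choice(n, size=d, replace=False):
            slots[s].append(user)
    decoded, progress = set(), True
    while progress:                                # peel clean slots repeatedly
        progress = False
        for s in range(n):
            remaining = [u for u in slots[s] if u not in decoded]
            if len(remaining) == 1:                # slot became clean: decode it
                decoded.add(remaining[0])
                progress = True
    return len(decoded)

m, n, frames = 80, 100, 200                        # load G = m/n = 0.8
T = np.mean([irsa_frame(m, n) for _ in range(frames)]) / n
print(T)                                           # throughput (packets per slot)
```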
This is the main motivation for the work presented in this paper, where we propose an extension of IRSA, referred to as Extended IRSA (E-IRSA), which operates well even when the number of devices is above the number of available slots per frame. In the proposed scheme, the receiver attempts to recover as many data packets as possible in each single slot by exploiting the capture effect, which enables the decoding of the packets received with the strongest signal in a given slot. Whenever a packet is decoded, its interference contribution is subtracted first from the overall signal received in that slot, i.e., *intra-slot SIC*, and then, as in IRSA, from the signals received in the slots where the related packet replicas have also been transmitted, i.e., *inter-slot SIC*.
In summary, E-IRSA extends IRSA in two ways: $(i)$ it applies iterative physical-layer decoding that exploits the capture effect in order to decode more than one data packet per slot, and $(ii)$ it applies intra-slot SIC in order to increase the decoding probability of the remaining colliding packets.
This extension has been motivated by the promising results published in [@MUDIRSA] and [@Stephan]. The work in [@MUDIRSA] presents a theoretical study on a generalized IRSA scheme assuming that the receiver is capable of decoding multiple colliding packets jointly using *multiuser detection* (MUD) techniques in systems adopting code-division multiple access (CDMA).
In turn, the work in [@Stephan] describes a practical implementation of a further generalization of IRSA, the so-called Coded Slotted Aloha (CSA) [@CSA], where several options for decoding more than one packet per slot in case of collision are considered. This work relies on concepts from physical layer network coding (PNC) [@PNC1], [@PNC2] and MUD, and it also shows how it is possible to perform intra-slot SIC by removing one or more packets from the overall signal received in a slot.
The rest of the paper is organized as follows. Section II introduces the system model and notations of E-IRSA. The description of the proposed collision resolution scheme is then provided in Section III. Simulation results are provided in Section IV. Finally, Section V concludes the paper.
System Model and Notation
=========================
We consider a network composed of one receiver (also referred to as the coordinator) and $m$ devices (also referred to as users) located at one-hop distance from the coordinator, forming a star topology. Every user is frame- and slot-synchronous, and has only one data packet (also referred to as a burst or message) ready to transmit to the coordinator per MAC frame. The latter is divided into $n$ slots of equal length. The transmission of a packet takes at most one slot. According to [@IRSA], each of the $m$ users transmits a random number of replicas of the same packet, referred to as the *repetition rate*, drawn from a probability mass function dubbed the *degree distribution*. Furthermore, the users transmit in randomly selected slots and without performing carrier sensing. Hence, each slot can be in one of three states: $(i)$ empty, i.e., no user has transmitted in the slot; $(ii)$ clean, i.e., exactly one user has transmitted in the slot; or $(iii)$ collision, i.e., two or more users have transmitted in the same slot. As introduced by [@IRSA], the IRSA operation can be described by a bipartite graph $\mathcal{G}=(U,S,E)$ consisting of a set $U$ of $m$ *user nodes*, i.
---
abstract: 'In our previous paper, we discussed the hyperbolization of the configuration space of $n\,(\geq 5)$ marked points with weights in the projective line up to projective transformations. A variation of the weights induces a deformation. It was shown that this correspondence, from the set of weights to the Teichmüller space when $n = 5$ and to the Dehn filling space when $n = 6$, is locally one-to-one near the equal weight. In this paper, we establish its global injectivity.'
address:
- |
Department of Information and Computer Sciences\
Nara Women’s University\
Kita-Uoya Nishimachi\
Nara 630-8506, Japan
- |
Department of Mathematics\
Kyushu University\
33, Fukuoka 812-8581 Japan
- |
Department of Mathematical and Computing Sciences\
Tokyo Institute of Technology\
Ohokayama, Meguro\
Tokyo 152-8552 Japan
author:
- Yasushi Yamashita
- Haruko Nishi
- Sadayoshi Kojima
date: ' Version 1.0 (November 26, 1998)'
title: ' Configuration spaces of points on the circle and hyperbolic Dehn fillings, II'
---
Introduction
============
In [@KojimaNishiYamashita], we have shown that the configuration space of $n \, (\geq 5)$ marked points with weights on the real projective line up to projective transformations admits a natural hyperbolization, so that the result becomes a hyperbolic cone-manifold of dimension $n-3$. In brief, we identify each component of the configuration space with the space of similarity classes of convex $n$-gons in the complex plane with fixed external angles, via the Schwarz-Christoffel map. Then there is a beautiful way, due to Thurston, to hyperbolize such a space as the interior of a hyperbolic polyhedron (see [@BavardGhys; @KojimaYamashita]). Each point on the boundary can be encoded by an appropriately degenerate configuration. Pasting the polyhedra together along the same degenerate configurations, we obtained the resultant cone-manifold. Kapovich and Millson discussed the same hyperbolization via their duality in [@KapovichMillson].
The external angles can in fact be chosen freely, and we regard them as the weights. A variation of the weights induces a deformation. We restricted the set of possible external angles so that the $n$-gons are convex and can be represented as the inner polygon of a star-shaped $n$-gon for any marking. More concretely, we set $$\Theta_n=\{(\theta_1, \ldots, \theta_n)\,| \,
\sum_{i=1}^n \theta_i= 2\pi, \,
\theta_i>0, \, \theta_i + \theta_j < \pi \text { for any }i \ne j\}.$$ This is equivalent to saying that the number of faces appearing in Thurston’s polyhedralization is constant, namely $n$. Under this assumption, the topology of a deformation will be almost constant.
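For reference, the defining conditions of $\Theta_n$ are straightforward to check mechanically; the snippet below (our addition) tests a weight vector against them, with the tolerance and the sample vectors chosen arbitrarily.

```python
import numpy as np

def in_Theta_n(theta, tol=1e-12):
    """Check the three defining conditions of Theta_n for a weight vector."""
    theta = np.asarray(theta, dtype=float)
    if abs(theta.sum() - 2 * np.pi) > tol:         # angles must sum to 2*pi
        return False
    if np.any(theta <= 0):                         # positivity
        return False
    pair_sums = np.add.outer(theta, theta)         # all theta_i + theta_j
    np.fill_diagonal(pair_sums, 0.0)
    return bool(np.all(pair_sums < np.pi))         # pairwise sums below pi

n = 5
print(in_Theta_n([2 * np.pi / n] * n))                          # equal weight: True
print(in_Theta_n([np.pi - 0.1, np.pi - 0.1, 0.1, 0.05, 0.05]))  # False: a pair sum exceeds pi
```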
In [@KojimaNishiYamashita], we discussed local behavior of the deformations appeared in our setting near the equal weight. When $n = 5$, the deformations are topologically a connected sum of five copies of the real projective space, ${\#}^5 {{\bold R}}{{\bold P}}^2$. The assignment of the hyperbolic structure of a deformation to each weight was shown to be a local embedding at the equal weight. When $n = 6$ and with the equal weight, the result of hyperbolization is a $3$-dimensional hyperbolic manifold of finite volume with ten cusps, which we denoted by $\overline{X_6}$. Any deformation induced by a variation of the weights can be regarded as some Dehn filled resultant of $\overline{X_6}$. The assignment of the deformation to each weight was also shown to be a local embedding at the equal weight.
In this paper, we prove the global injectivity of the above assignment. Namely, we show that $\Theta_n$ is mapped by the above assignment injectively to the deformation space, in Theorem 1 when $n = 5$ and in Theorem 2 when $n = 6$. The local injectivity in [@KojimaNishiYamashita] is proven by computing the derivative of the map at the equal weight. The proof of the global injectivity we present here is based on a rather geometric observation on the variation of polygons developed in [@KojimaYamashita; @AharaYamada], and is independent of the argument in [@KojimaNishiYamashita].
We review some of the material in [@KojimaNishiYamashita] to set up the notation in the next section, and prove the theorems in the sections that follow.
Preliminaries
=============
We here very briefly recall the hyperbolization in [@KojimaNishiYamashita].
The configuration space of $n$ marked points in the real projective line ${{\bold R}}{{\bold P}}^1$ is, by definition, the quotient of $({{\bold R}}{{\bold P}}^1)^n$ minus the big diagonal set by the diagonal action of the projective linear group ${\operatorname{PGL}}(2,{{\bold R}})$. It has $(n-1)!/2$ connected components, each of which is homeomorphic to a cell of dimension $n-3$. Reading off the markings of the points in counterclockwise order, each component can be labeled by a circular permutation $p=\langle i_1 i_2
\ldots i_n \rangle$ of $n$ numbers $1$ to $n$ up to reversing the order.
Fix an element $\theta = (\theta_1, \ldots, \theta_n)$ of $\Theta_n$. Then there is a one-to-one Schwarz-Christoffel correspondence between the configuration space and the set of similarity classes $X_{n,\theta}$ of marked $n$-gons in the complex plane with external angles $\{\theta_1, \cdots, \theta_n\}$ compatible with the markings (see Lemma 1 in [@KojimaNishiYamashita]). Fix a label $p=\langle i_1 i_2 \ldots i_n \rangle$. Then the component of $X_{n,\theta}$ with label $p$ can be identified with the subset of the space of all congruence classes of Euclidean $n$-gons with external angles $\theta_{i_1}, \cdots, \theta_{i_n}$ in cyclic order which consists of the ones with area 1.
Let $x_i$ denote the edge of an $n$-gon starting from the vertex with angle $\theta_i$ in counterclockwise order, and simultaneously its length. Then, we have $$\sum_{j=1}^n x_{i_j} \exp (\sum_{k=1}^j \sqrt{-1} \theta_{i_k}) =0.$$
Let ${\cal E}_{p,\theta}$ be the $(n-2)$-dimensional vector space satisfying the above constraint. Then the space of congruence classes of $n$-gons is identified with the polyhedral cone ${\cal E}_{p,\theta}\cap\,\bigcap_{j=1}^n \{x_{i_j} >0\}$ in ${\cal E}_{p,\theta}$. The area determines a quadratic form ${\operatorname{Area}}$ of signature $(1, n-3)$ on ${\cal E}_{p,\theta}$ (see Lemma 2 in [@KojimaNishiYamashita]). Thus ${\cal E}_{p,\theta}$ together with ${\operatorname{Area}}$ becomes a Minkowski space, and ${\operatorname{Area}}^{-1}(1)$ is the hyperbolic space of dimension $n-3$. Therefore the space of similarity classes of $n$-gons with fixed external angles $\theta_{i_1}, \cdots, \theta_{i_n}$ lies in the hyperbolic space bounded by the hyperbolic hyperplanes ${\operatorname{Area}}^{-1}(1)\cap \{x_{i_j}=0\}$ for $j = 1,\cdots, n$. We denote by $\Delta_{p,\theta}$ such a hyperbolic polyhedron. Then the conditions on $\theta$ in the definition of $\Theta_n$ ensure that the hyperbolic polyhedron $\Delta_{p,\theta}$ has exactly $n$ facets.
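The signature $(1, n-3)$ of the area form can also be verified numerically. The sketch below (our illustration) builds the closing constraint and the quadratic form ${\operatorname{Area}}$ for $n=5$ at the equal weight, recovering one positive and two negative eigenvalues on ${\cal E}_{p,\theta}$.

```python
import numpy as np

n = 5
theta = np.full(n, 2 * np.pi / n)          # equal weight in Theta_n
phi = np.cumsum(theta)                     # cumulative edge directions
e = np.exp(1j * phi)                       # unit vectors along the edges

# The closing condition sum_j x_j e_j = 0 gives two real linear constraints;
# E below is an orthonormal basis of their (n-2)-dimensional null space.
C = np.vstack([e.real, e.imag])
E = np.linalg.svd(C)[2][2:].T

def area(x):
    """Signed area of the polygon with edge lengths x (shoelace formula)."""
    v = np.concatenate([[0.0], np.cumsum(x * e)])
    return 0.5 * np.sum(np.imag(np.conj(v[:-1]) * v[1:]))

dim = n - 2
Q = np.zeros((dim, dim))                   # matrix of Area on E, by polarization
for i in range(dim):
    for j in range(dim):
        Q[i, j] = 0.5 * (area(E[:, i] + E[:, j]) - area(E[:, i]) - area(E[:, j]))

print(np.linalg.eigvalsh(Q))               # two negative, one positive: signature (1, 2)
```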
Let us denote by $(i_1 i_2)i_3 \ldots i_n$ or simply $(i_1 i_2)$ the face of $\Delta_{p,\theta}$ represented by $\Delta_{p,\theta} \cap \{x_{i_1}=0\}$ since it corresponds to the degenerate configurations where the points marked by $i_1$ and $i_2$ collide. Similarly we use $(i_1i_2)(i_3 i_4)i_5 \ldots i_n$ or $(i_1 i_2 i_3) i_4 \ldots i_n$, etc. to represent the codimension two faces of $\Delta_{p,\theta}$.
Now gluing $(n-1)!/2$ hyperbolic polyhedra $\Delta_{p, \theta}$ for all labels $p$ along the faces which represent the same degenerate configurations, we obtain $\overline{X_{n,\theta}}$ in which $X_{n, \theta}$ lies as an open dense subset.
The
---
abstract: 'In a recent breakthrough, Babai (STOC 2016) gave a quasipolynomial time graph isomorphism test. In this work, we give an improved isomorphism test for graphs of small degree: our algorithm runs in time $n^{\mathcal{O}((\log d)^{c})}$, where $n$ is the number of vertices of the input graphs, $d$ is the maximum degree of the input graphs, and $c$ is an absolute constant. The best previous isomorphism test for graphs of maximum degree $d$, due to Babai, Kantor and Luks (FOCS 1983), runs in time $n^{\mathcal{O}(d/ \log d)}$.'
author:
- |
Martin Grohe\
RWTH Aachen University\
`grohe@informatik.rwth-aachen.de`
- |
Daniel Neuen\
RWTH Aachen University\
`neuen@informatik.rwth-aachen.de`
- |
Pascal Schweitzer\
TU Kaiserslautern\
`schweitzer@cs.uni-kl.de`
bibliography:
- 'literature.bib'
title: A Faster Isomorphism Test for Graphs of Small Degree
---
Introduction
============
Luks’s polynomial time isomorphism test for graphs of bounded degree [@luks82] is one of the cornerstones of the algorithmic theory of graph isomorphism. With a slight improvement given later [@BKL83], it tests in time $n^{\mathcal{O}(d/\log d)}$ whether two $n$-vertex graphs of maximum degree $d$ are isomorphic. Over the past decades Luks’s algorithm and its algorithmic framework have been used as a building block for many isomorphism algorithms (see, e.g., [@BKL83; @BL83; @GM15; @KS17; @Luks91; @Ponomarenko91; @Seress03]). More importantly, it also forms the basis for Babai’s recent isomorphism test for general graphs [@Babai15-full; @Babai16] which runs in quasipolynomial time (i.e., the running time is bounded by $n^{{\operatorname{polylog}(n)}}$). Indeed, Babai’s algorithm follows Luks’s algorithm, but attacks the obstacle cases for which the recursion performed by Luks’s framework does not lead to the desired running time. Graphs whose maximum degree $d$ is at most polylogarithmic in the number $n$ of vertices are not a critical case for Babai’s algorithm, because for such graphs no large alternating or symmetric groups appear as factors of the automorphism group, and therefore the running time of Babai’s algorithm on the class of all these graphs is still quasipolynomial. Hence graphs of polylogarithmic maximum degree form one of the obstacle cases towards improving Babai’s algorithm. This alone is a strong motivation for trying to improve Luks’s algorithm. In view of Babai’s quasipolynomial time algorithm, it is natural to ask whether there is an $n^{{\operatorname{polylog}(d)}}$-isomorphism test for graphs of maximum degree $d$. In this paper we answer this question affirmatively.
\[thm:main-result-degree-d\] The Graph Isomorphism Problem for graphs of maximum degree $d$ can be solved in time $n^{\mathcal{O}((\log d)^c)}$, for an absolute constant $c$.
To prove the result we follow the standard route of considering the *String Isomorphism Problem*, which is an abstraction of the Graph Isomorphism Problem that has been introduced by Luks in order to facilitate a recursive isomorphism test based on the structure of the permutation groups involved [@BL83; @luks82]. Here a *string* is simply a mapping $\mathfrak x:\Omega\to\Sigma$, where the *domain* $\Omega$ and *alphabet* $\Sigma$ are just finite sets. Given two strings $\mathfrak x,\mathfrak y:\Omega\to\Sigma$ and a permutation group $G\le\operatorname{Sym}(\Omega)$, the objective of the String Isomorphism Problem is to compute the set $\operatorname{Iso}_G(\mathfrak x,\mathfrak y)$ of all *$G$-isomorphisms* from $\mathfrak x$ to $\mathfrak y$, that is, all permutations $g\in G$ mapping $\mathfrak x$ to $\mathfrak y$. We study the String Isomorphism Problem for groups $G$ in the class $\operatorname{\widehat{\Gamma}}_d$ of groups all of whose composition factors are isomorphic to subgroups of $S_d$, the symmetric group acting on $d$ points. Luks introduced this class because he observed that, after fixing a single vertex, the automorphism group of a connected graph of maximum degree $d$ is in $\operatorname{\widehat{\Gamma}}_d$[^1]. Our main technical result, Theorem \[thm:main-result-gamma-d\], states that we can solve the String Isomorphism Problem for groups $G\in\operatorname{\widehat{\Gamma}}_d$ in time $n^{{\operatorname{polylog}(d)}}$, where $n=|\Omega|$ is the length of the input strings. This implies Theorem \[thm:main-result-degree-d\] (as outlined in Section \[sec:applications\]).
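At the level of this definition, $\operatorname{Iso}_G(\mathfrak x,\mathfrak y)$ can of course be computed by brute force over the elements of $G$; the sketch below (our illustration, with an arbitrary toy group and strings) does exactly this, which is exponential in general and bears no resemblance to the recursion of Luks’s framework.

```python
from sympy.combinatorics import Permutation, PermutationGroup

def iso_G(G, x, y):
    """All g in G mapping string x to string y, i.e. y[i] == x[g^{-1}(i)].

    Plain enumeration of G: a definition-level illustration only.
    """
    npts = len(x)
    return [g for g in G.elements
            if all(x[(~g).array_form[i]] == y[i] for i in range(npts))]

# Toy example: G = cyclic rotations of 4 points, strings over {'a', 'b'}.
G = PermutationGroup([Permutation([1, 2, 3, 0])])
sols = iso_G(G, list("aabb"), list("abba"))
print([g.array_form for g in sols])        # the rotation [3, 0, 1, 2]
```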
To prove this result, we introduce the new concept of an *almost $d$-ary sequence* of invariant partitions. More precisely, we exploit for the group $G$ a sequence $\{\Omega\} = \mathfrak{B}_0 \succ \dots \succ \mathfrak{B}_m =
\{\{\alpha\} \mid \alpha \in \Omega\}$ of $G$-invariant partitions $\mathfrak B_i$ of $\Omega$, where $\mathfrak B_{i-1}\succ\mathfrak B_i$ means that $\mathfrak B_i$ refines $\mathfrak B_{i-1}$. For this sequence we require that, for all $i$, the induced group of permutations of the subclasses in $\mathfrak B_i$ of a given class in $\mathfrak B_{i-1}$ is permutationally equivalent to a subgroup of the symmetric group $S_d$ or is semi-regular (i.e., only the identity has fixed points). Our algorithm exploiting such a sequence is heavily based on techniques introduced by Babai for his quasipolynomial time isomorphism test; we even use Babai’s algorithm as a black box in one case. One of our technical contributions is an adaptation of Babai’s Unaffected Stabilizers Theorem [@Babai16 Theorem 6] to groups constrained by an almost $d$-ary sequence of invariant partitions. In [@Babai16], the Unaffected Stabilizers Theorem lays the groundwork for the group theoretic algorithms (the Local Certificates routine), and it plays a similar role here; however, we need a more refined running time analysis. Based on this, we can then adapt the Local Certificates routine to our setting.
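For intuition, the combinatorial skeleton of such a sequence — a chain of partitions from $\{\Omega\}$ down to singletons in which each partition refines its predecessor — can be checked mechanically (a toy Python sketch of ours; it verifies only the refinement property, not the group-theoretic condition on the induced actions):

```python
def is_refinement_chain(chain, omega):
    """Check that chain[0] = {Omega}, chain[-1] = all singletons, and each
    partition refines the previous one (every finer block sits inside some
    coarser block). Partitions are represented as lists of frozensets."""
    omega = frozenset(omega)
    if set(chain[0]) != {omega}:
        return False
    if set(chain[-1]) != {frozenset({a}) for a in omega}:
        return False
    return all(any(block <= big for big in coarse)
               for coarse, fine in zip(chain, chain[1:]) for block in fine)

chain = [[frozenset(range(4))],
         [frozenset({0, 1}), frozenset({2, 3})],
         [frozenset({a}) for a in range(4)]]
print(is_refinement_chain(chain, range(4)))  # True
```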
However, not every group in $\operatorname{\widehat{\Gamma}}_d$ admits the almost $d$-ary sequence required by our technique. We remedy this by changing the operation of the group while preserving string isomorphisms. The structural and algorithmic results enabling such a change of operation form the second technical contribution of our work. For this we employ some heavy group theoretic results. First, applying the classification of finite simple groups via the O’Nan-Scott Theorem and several other group theoretic characterizations, we obtain a structure theorem for primitive permutation groups in $\operatorname{\widehat{\Gamma}}_d$ showing that they are either small (of size at most $n^{{\operatorname{polylog}(d)}}$) or have a specific structure. More precisely, large primitive groups in $\operatorname{\widehat{\Gamma}}_d$ are composed, in a well defined manner, of Johnson groups (i.e., symmetric or alternating groups with an induced action on $t$-element subsets of the standard domain). Second, to construct the almost $d$-ary sequence of partitions, we exploit the existence of these Johnson schemes and introduce subset lattices, which are unfolded to yield the desired group operation.
With Luks’s framework being used as a subroutine in various other algorithms, one can ask about the impact of the improved running time in such contexts. As a first, simple application we obtain an improved isomorphism test for relational structures (Theorem \[thm:relational\]) and hypergraphs (Corollary \[cor:hypergraph\]). A deeper application is an improved fixed-parameter tractable algorithm for graph isomorphism parameterized by tree width [@GroheNSW18], which substantially improves the algorithm from [@LPPS14].
#### Outline
Section \[sec:characterization-primitive\] is concerned with the structure of primitive $\operatorname{\widehat{\Gamma}}_d$ groups; it culminates in Theorem \[thm:first-main-theorem\] with a structural description. In Section \[sec:almost:d:ary\] we describe how to algorithmically change the operation of a group in $\operatorname{\widehat{\Gamma}}_d$ to force the existence of an almost $d$-ary sequence of invariant partitions $\{\Omega\} = \mathfrak{B}_0 \succ \dots \succ \mathfrak{B}_m =
\{\{\alpha\} \mid \alpha \in \Omega\}$ without changing string isomorphisms. In Sections \[sec:affected:orbits\] and \[sec:local-certificates\] we extend Bab
---
abstract: 'Strong and fragile glass relaxation behaviours are obtained simply by changing the constraints of the kinetically constrained Ising chain from symmetric to purely asymmetric. We study the out–of–equilibrium dynamics of those two models, focusing on the Kovacs effect and the fluctuation–dissipation (FD) relations. The Kovacs or memory effect, commonly observed in structural glasses, is present for both constraints but enhanced with the asymmetric ones. Most surprisingly, the related FD relations satisfy the FD theorem in both cases. This result strongly differs from the simple quenching procedure, where the asymmetric model presents strong deviations from the FD theorem.'
address: 'UMR 5819 (CNRS, CEA, UJF), SI3M/DRFMC, CEA Grenoble, 17 rue des Martyrs, 38054 Grenoble cedex 9, France.'
author:
- Arnaud Buhot
title: 'Kovacs effect and fluctuation-dissipation relations in 1D kinetically constrained models'
---
Introduction
============
The Kovacs or memory effect was first observed by Kovacs himself in the 1960s on structural glassy systems [@Kovacs]. It is a surprising memory effect of the energy (or volume) of the system following a particular quenching procedure. This procedure consists of suddenly cooling a glassy system from a high temperature (an infinite one in our case) to a very low intermediate temperature $T_i$ and letting the system relax. When the waiting time $t_w$ necessary for the energy of the system to reach the equilibrium value for a final temperature $T_f$ is attained ($e(t_w) = e_{eq}(T_f)$), the temperature of the system is set to this value $T_f \geq T_i$. The result found by Kovacs is that, even though the system is at the equilibrium energy (the volume in his case) corresponding to the imposed temperature, the system and its energy are still evolving. The system keeps a memory of its history and of the fact that equilibrium has not been reached. After a rapid increase, the energy decreases back and levels off at the equilibrium value, leading to a hump in the energy as a function of time.
In this paper, we are interested in a comparison of the Kovacs effect for two simple (even simplistic) models with strong and fragile glass behaviours, respectively. We consider the symmetric and purely asymmetric kinetically constrained Ising chain (KCIC) models [@Fredrickson; @Jackle]. This simple change of the kinetic constraints allows one to switch from strong to fragile glass behaviour. Since the underlying equilibrium is the same for both models, a direct comparison of the dynamical effects is possible.
After a short presentation of the models in section 2, the Kovacs effect is discussed in section 3. The effect is observed in both models but is enhanced in the asymmetric case. This section also contains simple rescaling arguments to explain the effect and a comparison with recent works on the Kovacs effect [@Berthier; @Berthier2; @Bertin; @Mossa]. The fluctuation-dissipation (FD) relations are studied during the Kovacs quenching procedure and presented in section 4. The FD relations satisfy the FD theorem in both cases. These results are in strong contrast with those obtained using a simple quenching procedure from high temperature to a low final temperature after a waiting time $t_w$. That quenching procedure leads to FD relations satisfying the FD theorem for the symmetric KCIC model for waiting times well below the relaxation time. In contrast, strong deviations from the FD theorem have been observed for the asymmetric KCIC model for waiting times smaller than the equilibration time. We give some conclusions in section 5.
Presentation of the models
==========================
In this paper, we are interested in possible differences in the Kovacs effect due to strong or fragile glass behaviour. We thus consider the KCIC model [@Fredrickson; @Jackle], for which the constraints may be chosen to model strong (symmetric constraints) or fragile (purely asymmetric chain) glass relaxations.
Let us consider a chain of $N$ Ising spins ($\sigma_i =
0,1$ with $i=1,\cdots,N$) without interactions where spins $\sigma_i = 1$ are considered as defects. The corresponding Hamiltonian is thus trivial ($H =
\sum_i \sigma_i$) as well as the equilibrium thermodynamic properties. The equilibrium energy at temperature $T$ or inverse temperature $\beta$ is given by $e_{eq}(T=1/\beta)
= 1/(1+e^{\beta})$, which is also the concentration of defects, i.e., the probability to have a defect at site $i$. It is possible to determine exactly the probability for a defect to have its next defect (on the left) at a distance $d$: $$P_{eq}(d,T) = e_{eq} (1-e_{eq})^{d-1}.$$ The first factor on the right hand side corresponds to the probability to have a defect, whereas the second factor (with a power $d-1$) is the probability to have no defects on the intermediate $d-1$ sites. Since the spins are non-interacting, the equilibrium probabilities at different sites are independent of each other, which leads to this simple product in $P_{eq}(d,T)$.
All these equilibrium properties are independent of the dynamics considered. However, the introduction of kinetic constraints allows one to obtain the slowing down of the dynamics characteristic of glassy systems before the equilibrium properties are reached. The probability for a spin to flip is constrained in the following way: in the symmetric case, a spin is able to flip as soon as a neighbour (left or right) is a defect, whereas in the asymmetric case the defect has to be on the left. Such spins are said to be facilitated, and their transition probabilities are given by the following equation: $$P(\sigma_i \rightarrow 1 -\sigma_i) = \min(1,e^{\beta
(2\sigma_i-1)}) (b \, \sigma_{i-1} + (1-b) \, \sigma_{i+1}).
\label{EqPd}$$ The first factor on the right hand side is the usual Metropolis probability and ensures detailed balance. The second factor encodes the general kinetic constraint, with a probability $b$ to flip the spin if the left neighbour is a defect and $1-b$ if the right neighbour is a defect. The symmetric model ($b=1/2$) and the purely asymmetric one ($b=0$ or $1$) correspond to particular values of this parameter $b$.
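To make the dynamics concrete, here is a minimal Monte Carlo sketch of the KCIC model (our own illustration in Python; the chain length, temperature and number of sweeps are assumed values, and one sweep is $N$ attempted flips):

```python
import numpy as np

rng = np.random.default_rng(0)

def kcic_sweep(sigma, beta, b):
    """One Monte Carlo sweep of the kinetically constrained Ising chain.

    Acceptance per attempted flip: min(1, exp(beta*(2*sigma_i - 1)))
    times the kinetic constraint b*sigma_{i-1} + (1-b)*sigma_{i+1},
    with periodic boundaries; b = 1/2 is symmetric, b = 0 or 1 asymmetric.
    """
    n = len(sigma)
    for _ in range(n):
        i = rng.integers(n)
        constraint = b * sigma[(i - 1) % n] + (1 - b) * sigma[(i + 1) % n]
        metropolis = min(1.0, np.exp(beta * (2 * sigma[i] - 1)))
        if rng.random() < metropolis * constraint:
            sigma[i] = 1 - sigma[i]
    return sigma

# Quench from T = infinity (random configuration) to T_i and watch e(t) decay.
n, beta_i = 2000, 3.0
sigma = rng.integers(2, size=n)
for _ in range(200):
    sigma = kcic_sweep(sigma, beta_i, b=0.5)
print("e after 200 sweeps:", sigma.mean(), "; e_eq(T_i) =", 1 / (1 + np.exp(beta_i)))
```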
These models have been extensively studied (for more information and references on kinetically constrained models see the recent review by Ritort and Sollich [@Ritort]). With the symmetric constraints, the dynamical behaviour is reminiscent of a strong glass, with a relaxation time following an Arrhenius law. At sufficiently low temperature, the defects are mainly isolated and may be considered as simple particles diffusing with a temperature-dependent diffusion rate $\Gamma \sim \exp(-1/T)$. The energy (or concentration of particles) evolves through creation and annihilation processes. Similar reaction-diffusion models have long been used to study domain growth, coarsening and aging [@Doering; @benAvraham; @Lindenberg]. With the asymmetric constraints, the energy barriers involved in the motion of defects increase logarithmically with the distance to the next defect [@Sollich]. As a consequence, the relaxation time follows the Bässler law [@Bassler]: $t_{relax}
\sim \exp(1/T^{2} \ln 2)$. Whereas the Arrhenius behaviour is associated with a strong glass in Angell’s classification [@Angell], the super-Arrhenius behaviour of the asymmetric model is associated with a fragile glass. Intermediate constraints allow the system to cross over continuously from fragile to strong glass behaviour [@Buhot] but will not be considered in this study.
Kovacs effect
=============
As already mentioned in the introduction, the Kovacs effect is observed following a particular quenching procedure. At time $t=0$, a system, equilibrated at high temperature ($T = \infty$ in our case), is suddenly quenched to an intermediate low temperature $T_i$. The system starts to relax and the energy decreases until it reaches the equilibrium energy corresponding to a final temperature $T_f \geq T_i$ after a waiting time $t_w (T_i, T_f)$ defined by the following equation: $$e(t_w) = \frac{1}{N}\sum_i \sigma_i(t_w) = e_{eq}(T_f) =
(1+e^{\beta_f})^{-1}$$ where $e(t)$ is the energy of the system at time $t$ and $\beta = 1/T$ is the inverse temperature. The temperature of the system is set to $T_f$ at this waiting time $t_w$.
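The protocol itself translates directly into a few lines (continuing the Monte Carlo sketch above, with assumed temperatures; the hump appears as a transient maximum of the recorded energy):

```python
# Kovacs protocol, reusing kcic_sweep, rng and n from the sketch above.
beta_i, beta_f = 5.0, 3.0                  # T_i < T_f (assumed values)
e_eq_f = 1 / (1 + np.exp(beta_f))

sigma = rng.integers(2, size=n)            # equilibrated at T = infinity
while sigma.mean() > e_eq_f:               # relax at T_i until e(t_w) = e_eq(T_f)
    sigma = kcic_sweep(sigma, beta_i, b=0.5)

energies = []                              # switch to T_f and record e(t)
for _ in range(500):
    sigma = kcic_sweep(sigma, beta_f, b=0.5)
    energies.append(sigma.mean())
# max(energies) exceeding e_eq_f signals the Kovacs hump.
```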
If the system were characterized only by thermodynamic parameters (energy, volume and temperature), we would expect the energy of the system to stay constant and equal to $e_{eq}(T_f)$ after the waiting time $t_w$. However, even though its energy corresponds to the equilibrium value at the imposed temperature, the particular configuration of the system at $t_w$ is still far from an equilibrium configuration at $T_f$. As a consequence, the energy is still evolving after $t
---
author:
- 'L. Arturo Ureña-López'
bibliography:
- 'references.bib'
title: 'Scalar field dark matter with a cosh potential, revisited'
---
Introduction {#sec:intro}
============
There is no doubt that one of the most fascinating riddles of modern cosmology is the dark matter (DM) that seems to be a ubiquitous component in the universe, and especially one that is indispensable for the formation of cosmological structure. DM is an essential part of the successful Lambda Cold Dark Matter ($\Lambda$CDM) model, which has become the standard paradigm to understand the cosmos and its evolution up to its present state. In this model, DM is simply described by collisionless particles that interact mostly gravitationally with other matter components, and it makes up about $26\%$ of the total energy budget. Quite amazingly, the theoretical predictions of the $\Lambda$CDM model agree well with a wide range of cosmological and astrophysical observations [@PlanckCollaboration2018; @Bull2016; @Bertone2016]. However, the physical properties of the DM component remain as elusive as ever, mostly because the Weakly Interacting Massive Particle (WIMP) hypothesized to describe it seems to be absent from most of the so-called direct detection experiments (e.g., [@Akerib2014; @Aprile2017; @Akerib2018; @Baudis2018]).
Given the crisis of the WIMP hypothesis, a major trend in modern studies of DM is characterized by the ‘no stone left unturned’ approach [@Bertone2018], which calls for a thorough search of different alternatives to the standard CDM model. One possibility that has shown a rich phenomenology is the so-called Scalar Field Dark Matter (SFDM) model, which has been studied relentlessly for almost two decades now (under different names: fuzzy dark matter, wave dark matter, ultra-light axion particles, etc.), see [@Hui2017; @Magana2012; @Marsh2016; @Lee:2017qve]. The common characteristics in all these variants are, firstly, the presence of a scalar field (SF), whether complex or real, endowed with a potential that contains, explicitly or implicitly, a mass term of the form $m^2_a \phi^2$, and, secondly, a very light (bare) SF mass, of the order of $m_a \sim 10^{-22} \, \mathrm{eV}/c^2$[^1]. The foregoing properties suffice for the rich phenomenology mentioned before, which allows the comparison of the model with a wide range of cosmological and astrophysical data, including, among others, gravitational waves, black holes, $21$-cm constraints, etc., see [@Amendola2006; @Hlozek2015a; @Urena-Lopez2016; @Ikeda2019; @Barack2018; @Brito2017; @Nebrin2018; @Sarkar2016; @Baumann2018] for some selected examples.
The properties of SFDM can be extended if one includes the presence of a self-interacting term of the fourth order, in the form $V(\phi) =
(1/2)m^2_a \phi^2 + (g_4/4) \phi^4$, where $g_4 > 0$ is a dimensionless constant. Self-interacting SF models have also been studied, and their signatures as DM models have been widely discussed in [@Fan2016; @Li2014; @RINDLER-DALLER2014; @Goodman2000]. One can even consider the inclusion of higher order terms in the SF potential. The most famous case is the axion-like potential, $$V(\phi) = m^2_a f^2_a \left[ 1-\cos\left( \phi/f_a \right) \right] = \frac{m^2_a}{2} \phi^2 - \frac{m^2_a}{24 f^2_a} \phi^4 + \ldots \, , \label{eq:00}$$ where $f_a$ is called, for historical reasons, the axion decay constant. Models of this type are known as axion-like particles (ALP) [@Marsh2016a], and the anharmonic nature of the ALP potential produces observable signatures. The most noticeable effect appears in the evolution of linear density perturbations: there is an overgrowth of the density contrast with respect to the CDM case, which is characterized as a bump in the mass power spectrum (MPS) [@Zhang2017c; @Zhang2017b; @Cedeno2017].
As for the non-linear formation of structure, one has to consider the non-relativistic limit of the Einstein-Klein-Gordon (EKG) system, which is the so-called Schrödinger-Poisson (SP) system [@Ruffini1969; @Seidel1990; @Guzman2004a; @Guzman2006]. There was early evidence of the similarities between the CDM model and the solutions of the SP system [@Widrow1993; @Woo2009], but it was not until the results in [@Schive2014c], see also [@Schive2016; @Mocz2017; @Amin2019; @Li2018], that a clear and separate picture appeared for the formation of structure under the SFDM hypothesis, especially for the differences with respect to CDM at small scales.
The gravitationally bound objects that one could identify as DM galaxy halos all have a common structure: a central soliton surrounded by a Navarro-Frenk-White-like envelope created by the interference of the Schrödinger wave functions [@Schive2014c], features that have been confirmed by dedicated numerical simulations [@Schwabe2016; @Veltmaat2016; @Veltmaat2018; @Du2017; @Du2018].
The presence of a central soliton in all SFDM galaxy halos has motivated studies of galactic kinematics to infer, first, the presence of such a soliton structure and, second, the mass scale of the underlying SF particle [@Schive2014c; @Marsh2015a; @Gonzalez-Morales2012; @Gonzalez-Morales2017; @Bernal2018; @DeMartino2018; @Calabrese2016; @Marsh2018; @Lora2015; @Lora2012; @Urena-Lopez2017; @Robles2015; @Robles2018; @Bar2018a; @Bar:2019bqz; @Broadhurst2019a; @Lee2019]. The results are not yet conclusive: depending on the analysis, one may argue for the positive presence of a soliton object and a SF mass of around $m_{a 22} \simeq 1$, or just for an upper bound on the latter, $m_{a 22} < 0.4$. More recently, the presence of a soliton structure has been tested using data from rotation curves in galaxies, from which it is inferred that $m_{a22} > 10$ [@Bar2018a; @Bar:2019bqz].
The foregoing results on the SF mass, given that small scale structure seems to require light masses, are in tension, to put it mildly, with Lyman-$\alpha$ observations, which in contrast seem to demand larger values of the SF mass [@Amendola2006; @Irsic2017; @Armengaud2017]. According to the latter, the lower bound on the SF mass is $m_{a 22} > 21.3$, but this result is obtained from $N$-body simulations that cannot yet capture all the properties of the SFDM model [@Nori2018AX-GADGET:Models; @Nori2018a], see for instance [@Zhang2018; @Zhang2017; @Zhang2017a; @Zhang2017d; @Li2018] for critical comments.
Given the motivations above, and as an added contribution to the studies of SFDM, the main aim in this paper is to revisit the properties of the hyperbolic counterpart of the axion-like potential , the cosh potential that was first studied in [@Sahni2000; @Matos2000; @Matos2001; @Matos2004], $$V(\phi) = m^2_a f^2_a \left[\cosh\left( \phi/f_a \right) - 1\right] = \frac{m^2_a}{2} \phi^2 + \frac{m^2_a}{24 f^2_a} \phi^4 + \ldots\, . \label{eq:0}$$
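As a quick numerical cross-check of the two expansions above (our own sketch; units with $m_a = f_a = 1$ are assumed):

```python
import numpy as np

phi = np.linspace(-0.5, 0.5, 201)              # the |phi|/f_a << 1 regime
v_axion = 1 - np.cos(phi)                      # axion-like potential
v_cosh = np.cosh(phi) - 1                      # cosh potential

# The quartic truncations differ only in the sign of the phi^4 term.
print(np.abs(v_axion - (phi**2 / 2 - phi**4 / 24)).max())  # O(phi^6): ~2e-5
print(np.abs(v_cosh - (phi**2 / 2 + phi**4 / 24)).max())   # O(phi^6): ~2e-5
```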
Although we have expanded the cosh potential similarly to the axion-like one , there are differences that go beyond the fourth order. Firstly, the cosh potential resembles the exponential one, $V(\phi) \simeq (m^2_a f^2/2) e^{\pm \phi/f_a}$, for $|\phi|/f_a \gg 1$, whereas it becomes the standard free one, $V(\phi) \simeq (m^2_a/2) \phi^2$, for $|\phi|/f_a \ll 1$. The latter is the desired form for the cosh potential to work as CDM at late times, whereas the former is an additional advantage of the cosh potential that has been used before to avoid any fine tuning of the initial conditions within a cosmological setting [@F
---
abstract: |
The Schrödinger picture of the Dirac quantum mechanics is defined in charts with spatially flat Robertson-Walker metrics and Cartesian coordinates. The main observables of this picture are identified, including the interacting part of the Hamiltonian operator produced by the minimal coupling with the gravitational field. It is shown that in this approach new Dirac quantum modes on de Sitter spacetimes may be found by analytically solving the Dirac equation.
Pacs: 04.62.+v
author:
- |
Ion I. Cotăescu [^1]\
[*West University of Timişoara,*]{}\
[*V. Pârvan Ave. 4, RO-300223 Timişoara, Romania*]{}
title: 'The Schrödinger picture of the Dirac quantum mechanics on spatially flat Robertson-Walker backgrounds'
---
The relativistic quantum mechanics of the spin-half particle on a given background can be constructed as the one-particle restriction of the quantum theory of the free Dirac field on this background, considered as a perturbation that does not affect the geometry. The central piece is the Dirac equation, whose form depends on the local chart (or natural frame) and the tetrad fields defining the local frames and co-frames. This type of quantum mechanics has two virtues. First of all, the charge conjugation of the Dirac field is point-independent, indicating that the vacuum of the original field theory is stable in any geometry [@co; @cot]. Therefore, the resulting one-particle Dirac quantum mechanics can be seen as a coherent theory similar to that of special relativity. The second virtue is the spin itself, which generates specific terms that help us correctly interpret the physical meaning of the principal operators.
In non-relativistic quantum mechanics the time evolution can be studied in different pictures (e.g., Schrödinger, Heisenberg, interaction) which transform among themselves through specific time-dependent unitary transformations. It is known that the form of the Hamiltonian operator and the time dependence of other operators strongly depend on the choice of picture. In special and general relativity, despite its importance, the problem of time-evolution pictures is less studied because of the difficulties in finding suitable Hamiltonian operators for scalar or vector fields. However, the Dirac quantum mechanics is a convenient framework for studying this problem since the Dirac equation can be put in Hamiltonian form at any time.
In this paper we would like to show that at least two different pictures of the Dirac quantum mechanics can be identified in the case of backgrounds with spatially flat Robertson-Walker (RW) metrics. We start with the simple choice of the Dirac equation in the diagonal gauge and Cartesian coordinates, considering that this constitutes the [*natural*]{} picture. Furthermore, we define the Schrödinger picture such that the kinetic part of the Dirac equation takes the standard form known from special relativity. In this picture we identify the momentum and the Hamiltonian operators, pointing out that they represent a generalization of the similar operators we obtained previously on de Sitter spacetimes [@cot].
Let us start by denoting by $\{t,\vec{x}\}$ the Cartesian coordinates $x^{\mu}$ ($\mu,\nu,...=0,1,2,3$) of a chart with the RW line element $$ds^2=g_{\mu\nu}(x)dx^{\mu}dx^{\nu}=dt^2-\alpha(t)^2 (d\vec{x}\cdot d\vec{x})$$ where $\alpha$ is an arbitrary time-dependent function. In this chart we introduce the tetrad fields $e_{\hat\mu}(x)$ that define the local frames, and those defining the corresponding coframes, $\hat e^{\hat\mu}(x)$ [@SW]. These fields are labeled by the local indices ($\hat\mu,\hat\nu,...=0,1,2,3$) of the Minkowski metric $\eta=$diag$(1,-1,-1,-1)$, satisfy $e_{\hat\mu}(x)\hat
e^{\hat\mu}(x)=1_{4\times4}$ and give the metric tensor as $g_{\mu
\nu}=\eta_{\hat\alpha\hat\beta}\hat e^{\hat\alpha}_{\mu}\hat
e^{\hat\beta}_{\nu}$. Here we consider the tetrad fields of the diagonal gauge that have non-vanishing components [@BD; @SHI], $$\label{tt}
e^{0}_{0}=1\,, \quad e^{i}_{j}=\frac{1}{\alpha(t)}\delta^{i}_{j}\,,\quad \hat
e^{0}_{0}=1\,, \quad \hat e^{i}_{j}=\alpha(t)\delta^{i}_{j}\,,\quad
i,j,...=1,2,3\,,$$ determining the form of the Dirac equation [@BD], $$\label{ED1}
\left(i\gamma^0\partial_{t}+i\frac{1}{\alpha(t)}\gamma^i\partial_i
+\frac{3i}{2}\frac{\dot{\alpha}(t)}{\alpha(t)}\gamma^{0}-m\right)\psi(x)=0\,.$$ This is expressed in terms of Dirac $\gamma$-matrices [@TH] and the fermion mass $m$, with the notation $\dot{\alpha}(t)=\partial_t\alpha(t)$. Thus we obtain the natural picture in which the time evolution is governed by the Dirac equation (\[ED1\]). The principal operators of this picture, the energy $\hat
H$, momentum $\vec{\hat P}$ and coordinate $\vec{\hat X}$, can be defined as in special relativity, $$\label{ON}
(\hat H \psi)(x)=i\partial_t\psi(x)\,,\quad (\hat P^i
\psi)(x)=-i\partial_i\psi(x)\,,\quad (\hat X^i \psi)(x)=x^i\psi(x)\,.$$ The operators $\hat X^i$ and $\hat P^i$ are time-independent and satisfy the well-known canonical commutation relations $$\label{com}
\left[\hat X^i, \hat P^j\right]=i\delta_{ij}I\,,\quad \left[\hat H, \hat
X^i\right]=\left[\hat H,\hat P^i\right]=0\,,$$ where $I$ is the identity operator. Other operators are formed by orbital parts and suitable spin parts that can be point-dependent too. In general, the orbital terms are freely generated by the basic orbital operators $\hat X^i$ and $\hat P^i$. An example is the total angular momentum $\vec{J}=\vec{L}+\vec{S}$ where $\vec{L}=\vec{\hat X}\times\vec{\hat P}$ and $\vec{S}$ is the spin operator. We specify that the operators $\hat P^i$ and $J^i$ are generators of the spinor representation of the isometry group $E(3)$ of the spatially flat RW manifolds [@cot]. Therefore, these operators are [*conserved*]{} in the sense that they commute with the Dirac operator [@CML; @ES].
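A trivial sanity check of the commutation relations (\[com\]) in one spatial dimension can be done symbolically (a SymPy sketch of ours, acting on an arbitrary test function):

```python
import sympy as sp

x = sp.symbols('x', real=True)
psi = sp.Function('psi')(x)

X = lambda f: x * f                     # position operator
P = lambda f: -sp.I * sp.diff(f, x)     # momentum operator

# [X, P] psi = i psi on any state, while X and P are time-independent.
print(sp.simplify(X(P(psi)) - P(X(psi))))   # I*psi(x)
```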
The natural picture can be changed using point-dependent operators, which may even be non-unitary since the relativistic scalar product does not have a direct physical meaning as in non-relativistic quantum mechanics. We exploit this opportunity to define the Schrödinger picture as the picture in which the kinetic part of the Dirac operator takes the standard form $i\gamma^0\partial_t+i\gamma^i\partial_i$. The transformation $\psi(x)\to \psi_S(x)=U_S(x)\psi(x)$ leading to the Schrödinger picture is produced by the operator of time-dependent [*dilatations*]{} $$\label{U}
U_S(x)=\exp\left[-\ln(\alpha(t))(\vec{x}\cdot\vec{\partial})\right]\,,$$ which has the following suitable action $$U_S(x)F(\vec{x})U_S(x)^{-1}=F\left(\frac{1}{\alpha(t)}\vec{x}\right)\,,\quad
U_S(x)G(\vec{\partial})U_S(x)^{-1}=G\left(\alpha(t)\vec{\partial}\right)\,,$$ for any analytical functions $F$ and $G$. Performing this transformation we obtain the Dirac equation of the Schrödinger picture $$\label{ED2}
\left[i\gamma^0\partial_{t}+i\vec{\gamma}\cdot\vec{\partial} -m
+i\gamma^{0}\frac{\dot{\alpha}(t)}{\alpha(t)}
\left(\vec{x}\cdot\vec{\partial}+\frac{3}{2}\right)\right]\psi_S(x)=0\,.$$ Hereby we have to identify the specific operators of this picture, the energy $H_S$ and the operators $P^i_S$ and $X^i_S$ that must be time-independent, as in the non-relativistic case. We assume that these operators are defined as $$\label{OS}
(H_S \psi_S)(x)=i\partial_t\psi_S(x)\,,\quad (P^i_S
\psi_S)(x)=-i\partial_i\psi_S(x)\,,\quad (X^i_S \psi_S)(x)=x^i\psi_S(x)\,,$$ obeying commutation relations similar to Eqs. (\[com\]). The Dirac equation (\[ED2\]) can be put in
---
abstract: 'The observation of isolated positive and negative charges, but not isolated magnetic north and south poles, is an old puzzle. Instead, evidence of effective magnetic monopoles has been found in the abstract momentum space. Apart from Hall-related effects, few observable consequences of these abstract monopoles are known. Here, we show that it is possible to manipulate the monopoles by external magnetic fields and probe them by universal conductance fluctuation (UCF) measurements in ferromagnets with strong spin-orbit coupling. The observed fluctuations are not noise, but reproducible quasiperiodic oscillations as a function of magnetisation direction, a novel Berry phase fingerprint of the magnetic monopoles.'
author:
- 'Kjetil M.D. Hals$^{1}$, Anh Kiet Nguyen$^1$, Xavier Waintal$^2$ and Arne Brataas$^1$'
title: Effective Magnetic Monopoles and Universal Conductance Fluctuations
---
Quantum states in solids are classified by a crystal momentum vector and a band index. The space spanned by the momentum vectors is known as the momentum space. Each band index defines an energy band of allowed electronic energy levels in the momentum space. Momentum-space magnetic monopoles arise from energy-band crossings [@Bohm:book03]. Each band crossing point produces a magnetic monopole with a quantised topological magnetic charge, characterised by a Chern number [@Bohm:book03]. A charged particle traversing a closed curve in momentum space accumulates a geometric phase from the monopole fields [@Berry:prca84; @Sundaram:prb99]. So far, these abstract monopoles have revealed themselves only through Hall-related effects [@Fang:science03; @Nagaosa:RMF10], but we show that they can also be manipulated and probed by UCF measurements.
UCFs are observed experimentally as reproducible fluctuations in the conductance in response to an applied external magnetic field $\rm{B}$ [@Lee:prb86]. The fluctuation pattern is known as the magneto-fingerprint of the sample [@Lee:prb86]. Recent experiments on the ferromagnetic semiconductor (Ga, Mn)As report two different $\rm{B}_c$ periods in the conductance fluctuations [@Vila:prl07; @Neumaier:prl07]: a slow, conventional oscillation for high magnetic fields and a much faster oscillation for low fields when the magnetisation rotates. The present work reinterprets these recent experimental results and shows that the fast oscillations are caused by the relocation of momentum-space magnetic monopoles. Rotation of the magnetisation relocates the monopoles, which leads to a geometric phase change of the closed momentum-space curves. The numerical results demonstrate, in good agreement with the experiments, that a geometric phase change is observed with fast UCF oscillations, implying a novel Berry phase fingerprint of the monopoles.
The underlying physics of UCFs is quantum interference between different paths across the sample [@Lee:prb86]. Let $\rm{A}_c$ denote the quantum mechanical probability amplitude for propagating along the classical path $\mathbf{x}_c (t)$. The amplitude can be expressed as $\rm{A}_c= \left| \rm{A}_c\right| \exp(i\rm{S}[\mathbf{x}_c(t)]/\hbar)$ in terms of the action $\rm{S}[\mathbf{x} (t)]= \int \rm{dt L }(\mathbf{x},\mathbf{\dot{x}})$, where $\rm{L}(\mathbf{x},\mathbf{\dot{x}})$ is the Lagrangian, $\mathbf{\dot{x}}\equiv \rm{d}\mathbf{x}/\rm{dt}$, and $\left| \rm{A}_c\right|^2$ is the probability to follow the path $\mathbf{x}_c(t)$ [@Rammer:book07]. When an external magnetic field $\mathbf{B}$ is applied, the term $-e\int \rm{dt}\mathbf{\dot{x}} \cdot \mathbf{A}$ should be added to the action [@Rammer:book07], where $e$ is minus the electron charge, and $\mathbf{A}$ is the vector potential corresponding to $\mathbf{B}=\boldsymbol{ \nabla } \times\mathbf{A}$. Let us separate out the magnetic field-dependent phase and rewrite the amplitude as $\rm{A}_c = \tilde{\rm{A} }_c\exp (-i e /\hbar \int \rm{dt}\mathbf{\dot{x}}_c\cdot \mathbf{A} )$. The conductance $\rm{G}$ is proportional to the total probability of propagating across the sample, $\rm{G(B)}\propto \left|\sum_c \rm{A}_c \right|^2$. Reformulating the line integral associated with the vector potential as a surface integral using Green’s theorem, one finds $\rm{G(B)}\propto \sum_{c c^{'}} \tilde{\rm{A}}_c^{*} \tilde{\rm{A}}_{c^{'}} \exp(i 2\pi \Phi_{c c^{'}}(B)/\Phi_0)$, where $\Phi_{c c^{'}}(B)$ is the magnetic flux enclosed by the loop formed by the paths $\mathbf{x}_{c}(t)$ and $\mathbf{x}_{c^{'}}(t)$ and $\Phi_0 \equiv h/e$. Changing the magnetic field randomises the phase difference between different pairs of paths, causing the conductance to fluctuate. A typical period $\rm{B}_c$ of these quasiperiodic oscillations corresponds to the dominant paths experiencing a relative phase shift of $2\pi$. Assuming that typical paths approximately enclose the sample area $\mathcal{A}$ leads to $\rm{B}_c = \Phi_0 / \mathcal{A}$ [@Lee:prb86].
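For orientation (a back-of-the-envelope sketch of ours, with a hypothetical sample area):

```python
h = 6.62607e-34          # Planck constant, J*s
e = 1.60218e-19          # elementary charge, C
phi0 = h / e             # flux quantum h/e, in Wb

area = 1e-12             # hypothetical sample area of 1 um^2, in m^2
print(f"B_c = Phi_0 / A = {phi0 / area * 1e3:.1f} mT")   # ~ 4 mT
```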
A closed loop in real space also corresponds to a closed loop in momentum space. In systems with either broken inversion or time-reversal symmetry, there is also a phase associated with paths in momentum space [@Sundaram:prb99; @Bohm:book03]. Semiclassically, this Berry phase effect is included in the Lagrangian as $\hbar\mathbf{A}^{(n)}\cdot \mathbf{\dot{k}}$, where $\mathbf{A}^{(n)} (\mathbf{k})=i \left\langle u_n \left| \boldsymbol{ \nabla }_{\mathbf{k}} \right| u_n \right\rangle$ is the Berry connection, $\left| u_n \right\rangle$ is the periodic part of the Bloch function, and $n$ is the band index [@Sundaram:prb99]. The propagation amplitudes accumulate a geometric phase factor $\exp(i\int \rm{d}\mathbf{k}\cdot \mathbf{A}^{(n)}(\mathbf{k}))$ along a path in momentum space. A closed momentum-space curve acquires a phase equal to the flux of the effective field $\mathbf{\Omega}^{(n)} (\mathbf{k})= \boldsymbol{ \nabla }_{\mathbf{k}}\times \mathbf{A}^{(n)}(\mathbf{k})$ that the loop encloses [@Sundaram:prb99; @Bohm:book03]. The effective field is known as the Berry curvature [@Berry:prca84; @Bohm:book03]: $$\begin{aligned}
\mathbf{\Omega}^{(n)} (\mathbf{k}) & = & i\sum_{m\neq n}
\frac{ \left\langle u_n \left| \boldsymbol{ \nabla }_{\mathbf{k}} H \right| u_m \right\rangle \times
\left\langle u_m \left| \boldsymbol{ \nabla }_{\mathbf{k}} H \right| u_n \right\rangle }{ \left( \rm{E}_{n}(\mathbf{k}) - \rm{E}_{m}(\mathbf{k}) \right) ^2 }, \label{BerryCurvature} \end{aligned}$$ where $H$ is the Hamiltonian of the system and $\rm{E}_n(\mathbf{k})$ is the dispersion relation of the $n$th band. Momentum-space magnetic monopoles are singularities in the Berry curvature where energy bands cross at isolated points [@Fang:science03; @Nagaosa:RMF10]. In ferromagnets with strong spin-orbit coupling, the Hamiltonian is not invariant under rotation of the magnetisation [@Jungwirth:RMP06]. Changing the magnetisation direction relocates the magnetic monopoles, inducing a geometric phase change in the propagation amplitudes. The external magnetic field can rotate the magnetisation in UCF experiments on ferromagnets. The phase change of a closed real-space curve then also acquires important contributions from the geometric phase change of the corresponding closed momentum-space curve. We demonstrate that the magnetic monopoles give rise to fast conductance oscillations at low magnetic fields. This novel and large magnetic monopole effect is qualitatively different from the studies of Berry phase effects in two-dimensional electron gases with Rashba spin-orbit coupling since these systems exhibit no effective momentum-space monopoles [@Engel:prb00]. Moreover, the effect we compute differs quantitatively from the weak peak-splitting effects seen therein by one order of magnitude.
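For a concrete feel of such a monopole (our own toy model, not the (Ga,Mn)As Hamiltonian studied below), the two-band crossing $H(\mathbf{k}) = \mathbf{k}\cdot\boldsymbol{\sigma}$ carries a unit topological charge; the Berry flux of the lower band through a sphere around $\mathbf{k}=0$ can be computed gauge-invariantly with the link-variable (Fukui-Hatsugai-Suzuki) method:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def lower_band(k):
    """Lower eigenvector of H(k) = k . sigma (band crossing at k = 0)."""
    vals, vecs = np.linalg.eigh(k[0] * sx + k[1] * sy + k[2] * sz)
    return vecs[:, 0]

# Eigenstates on a (theta, phi) grid over a unit sphere around the crossing.
m = 60
theta = np.linspace(1e-3, np.pi - 1e-3, m)
phi = np.linspace(0.0, 2 * np.pi, m)
u = np.empty((m, m, 2), dtype=complex)
for i, t in enumerate(theta):
    for j, p in enumerate(phi):
        k = np.array([np.sin(t) * np.cos(p), np.sin(t) * np.sin(p), np.cos(t)])
        u[i, j] = lower_band(k)

# Berry flux = sum of plaquette phases; link products are gauge invariant,
# so the arbitrary eigenvector phases returned by eigh do not matter.
flux = 0.0
for i in range(m - 1):
    for j in range(m - 1):
        w = (np.vdot(u[i, j], u[i + 1, j]) * np.vdot(u[i + 1, j], u[i + 1, j + 1])
             * np.vdot(u[i + 1, j + 1], u[i, j + 1]) * np.vdot(u[i, j + 1], u[i, j]))
        flux += np.angle(w)
print(flux / (2 * np.pi))   # magnitude ~1: the quantised monopole charge
```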
In the following discussion, the Berry phase effect on UCFs will be investigated for the ferromagnetic semiconductor (Ga, Mn)As. The system is modeled by the Hamiltonian [@Jung
---
abstract: '[ We propose a radiative seesaw model based on a modular $A_4$ symmetry, which has good predictability in the lepton sector. We execute a numerical analysis to search for parameters that satisfy the experimental constraints such as those from neutrino oscillation data and lepton flavor violations. Then, we present several predictions in our model that originate from the modular symmetry.]{}'
author:
- Hiroshi Okada
- Yutaro Shoji
title: 'A radiative seesaw model in modular $A_4$ symmetry'
---
[APCTP Pre2020 - 005]{}
Introduction
============
One of the big mysteries in the standard model (SM) is the origin of the flavor structures. In particular, the flavor structure of the neutrino mass matrix is very important for understanding the lepton sector. Historically, models with non-Abelian discrete flavor symmetries have been widely discussed since they not only reproduce the experimental results but also make several model specific predictions. Recently, starting from the papers [@Feruglio:2017spp; @deAdelhartToorop:2011re], modular-symmetry motivated non-Abelian discrete flavor symmetries have attracted the attention of many authors as a way to realize more predictive flavor structures in the quark and lepton sectors. One of their remarkable features is that coupling constants, as well as fields, can transform as non-trivial representations of those symmetries. Thus, we do not need to introduce many scalar fields such as flavons to realize the flavor structure. As a result, we obtain a more minimal scenario without assumptions such as vacuum alignments among scalar fields. Here, we list several references where this kind of symmetry is applied to flavor models: the $A_4$ modular group [@Feruglio:2017spp; @Criado:2018thu; @Kobayashi:2018scp; @Okada:2018yrn; @Nomura:2019jxj; @Okada:2019uoy; @deAnda:2018ecu; @Novichkov:2018yse; @Nomura:2019yft; @Okada:2019mjf; @Ding:2019zxk; @Nomura:2019lnr; @Kobayashi:2019xvz; @Asaka:2019vev; @Zhang:2019ngf; @Gui-JunDing:2019wap; @Nomura:2019xsb; @Kobayashi:2019gtp; @Wang:2019xbo; @King:2020qaj; @Abbas:2020qzc], $S_3$ [@Kobayashi:2018vbk; @Kobayashi:2018wkl; @Kobayashi:2019rzp; @Okada:2019xqk], $S_4$ [@Penedo:2018nmg; @Novichkov:2018ovf; @Kobayashi:2019mna; @King:2019vhv; @Okada:2019lzv; @Criado:2019tzk; @Wang:2019ovr], $A_5$ [@Novichkov:2018nkm; @Ding:2019xna; @Criado:2019tzk], larger groups [@Baur:2019kwi], multiple modular symmetries [@deMedeirosVarzielas:2019cyj], and the double covering of $A_4$ [@Liu:2019khw], in which masses, mixings, and CP phases for quarks and leptons are predicted [^1]. A possible correction from the Kähler potential is also discussed in Ref. [@Chen:2019ewa]. Furthermore, a systematic approach to understanding the origin of CP transformations has recently been discussed in Ref. [@Baur:2019iai], and CP violation in models with modular symmetry is discussed in Ref. [@Kobayashi:2019uyt]. Another big mystery in the SM is the lack of a dark matter (DM) candidate. Even though many experiments are searching for DM signatures from different angles, no decisive proof has been obtained yet. However, there are many attractive scenarios in which DM is connected to other observables. One interesting model is the radiative seesaw model [@Ma:2006km]. This scenario not only explains the neutrino sector and DM at the same time but also gives rise to a lot of new phenomena at a low energy scale, such as lepton flavor violations, the muon anomalous magnetic moment, collider signatures, etc. Since such a model connects the DM sector and the neutrino sector, the understanding of the neutrino nature leads to the understanding of the DM nature, and vice versa. In this paper, we work on a radiative seesaw scenario with a Dirac DM candidate based on our previous work [@Okada:2020oxh], applying a modular $A_4$ flavor symmetry. Then, we try to find several predictions in the lepton sector.
The manuscript is organized as follows. [In Sec. \[sec:realization\], we give our model setup under the $A_4$ modular symmetry, in which we review the modular $A_4$ symmetry and define relevant interactions needed to formulate the neutrino mass matrix and lepton flavor violations (LFVs). Then, we execute a numerical analysis and give several predictions in the lepton sector in Sec. III. Finally, we give our conclusion and discussion in Sec. \[sec:conclusion\].]{}
Model {#sec:realization}
=====
In this section, we introduce our model, which is based on a modular $A_4$ symmetry. The leptonic fields and the scalar fields of the model, their representations under the $A_4\times Z_3$ symmetry and their modular weights are given in Tab. \[tab:fields\]. We also show the representations of the Yukawa couplings in Tab. \[tab:couplings\]. Under these symmetries, we write the renormalizable Lagrangian for the lepton sector as follows: $$\begin{aligned}
-{\cal L}_L &=
\sum_{\ell=e,\mu,\tau}y_\ell \bar L_{L_\ell} H_{SM} e_{R_\ell}{\nonumber}\\
&\hspace{3ex}+\alpha_\nu \bar L_{L_e} (Y^{(2)}_{\bf 3}\otimes N_{R})_{\bf1}\tilde H_1
+\beta_\nu \bar L_{L_\mu}(Y^{(2)}_{\bf 3}\otimes N_{R})_{\bf1''}\tilde H_1
+\gamma_\nu \bar L_{L_\tau}(Y^{(2)}_{\bf 3}\otimes N_{R})_{\bf1'}\tilde H_1{\nonumber}\\
&\hspace{3ex}+a_\nu (\bar N_{L_e} \otimes Y^{(6)*}_{\bf 3})_{\bf1} {L^C_{L_e}}\tilde H_2
+b_\nu (\bar N_{L_\mu}\otimes Y^{(6)*}_{\bf 3})_{\bf1'} {L^C_{L_\mu}}\tilde H_2
+c_\nu (\bar N_{L_\tau}\otimes Y^{(6)*}_{\bf 3})_{\bf1''} {L^C_{L_\tau}}\tilde H_2{\nonumber}\\
&\hspace{3ex}+a'_\nu (\bar N_{L_e}\otimes Y'^{(6)*}_{\bf 3})_{\bf1} {L^C_{L_e}}\tilde H_2
+b'_\nu (\bar N_{L_\mu}\otimes Y'^{(6)*}_{\bf 3})_{\bf1'} {L^C_{L_\mu}}\tilde H_2
+c'_\nu (\bar N_{L_\tau}\otimes Y'^{(6)*}_{\bf 3})_{\bf1''} {L^C_{L_\tau}}\tilde H_2{\nonumber}\\
&\hspace{3ex}+ {M_D} (\bar N_{L}\otimes N_{R})_{\bf1}
+ {\rm h.c.},
\label{eq:lag-lep}\end{aligned}$$ where $\tilde H\equiv i\sigma_2 H^*$, with $\sigma_2$ the second Pauli matrix, and $(A\otimes B)_{\bf R}$ indicates that the representation $\bf R$ is contracted from $A$ and $B$. Here, $M_D$ includes a modular invariant coefficient, $1/(i\tau-i\tau^*)$, and the charged-lepton mass matrix is diagonal thanks to the $A_4$ symmetry.
----------- --------------------------------------------------- ----------------------------------------- ---------- ------------ ------------ ------------
$(\bar L_{L_e},\bar L_{L_\mu},\bar L_{L_\tau})$ $(e_{R_e},e_{R_{\mu}},e_{R_{\tau}})$ $N_{}$ $H_{SM}$ $H_1^*$ $H_2$
$SU(2)_L$ $\bm{2}$ $\bm{1}$ $\bm{1}$ $\bm{2}$ $\bm{2}$ $\bm{2}$
$U(1)_Y$ $\frac12$ $-1$ $0$ $\frac12$ $\frac12$ $\frac12$
$A_4$ ${(1,1',1'')}$ ${(1,1'',1')}$ $3$ $1$ $1$ $1$
$-k$ $0$ $0$ $-
---
abstract: 'We investigate coordinated regularized zero-forcing precoding for limited feedback multicell multiuser multiple-input single-output systems. We begin by deriving an approximation to the expected signal-to-interference-plus-noise ratio (SINR) for the proposed scheme with perfect channel direction information (CDI) at the base station (BS). We also derive an expected SINR approximation for limited feedback systems with random vector quantization (RVQ) based codebook CDI at the BS. Using the expected interference result for the RVQ based limited feedback CDI, we propose an adaptive feedback bit allocation strategy to minimize the expected interference by partitioning the total number of bits between the serving and out-of-cell interfering channels. Numerical results show that the proposed adaptive feedback bit allocation method offers a spectral efficiency gain over the existing coordinated zero-forcing scheme.'
author:
- 'Jawad Mirza, Peter J. Smith, and Pawel A. Dmochowski'
bibliography:
- 'MIRZA\_VT-2015-01172.bib'
title: 'Coordinated Regularized Zero-Forcing Precoding for Multicell MISO Systems with Limited Feedback'
---
limited feedback MISO, RZF precoding.
Introduction {#intro}
============
In multicell systems, due to neighboring co-channel cells, the level of interference is high, especially at the cell-edge, thus degrading the spectral efficiency of the cell. Such a loss can be mitigated using BS coordination, where information is exchanged among the BSs via a backhaul link to suppress the inter-cell interference (ICI) in the downlink [@4487516].
In codebook-based limited feedback multiuser (MU) multiple-input multiple-output (MIMO) systems [@jindal2006mimo], the user feeds back the index of the appropriate codebook entry or codeword to the BS, via a low-rate feedback link. This information is then used to compute precoders for the users. In [@bhagavatula2011adaptive], a limited feedback strategy for MISO multicell systems at high signal-to-noise ratio (SNR) is developed using random vector quantization (RVQ) codebooks [@au2007performance]. An adaptive bit allocation method which maximizes the spectral efficiency is proposed in [@zhang2010adaptive] for limited feedback systems. In [@5648782], an adaptive feedback scheme for limited feedback MISO systems is proposed with a zero-forcing (ZF) precoding scheme which minimizes the expected spectral efficiency loss.
Regularized zero-forcing (RZF) [@1391204] is a linear precoding technique shown to be effective for single-cell communication systems. RZF has also been used extensively in the analysis of 5G technologies such as massive MIMO [@HoydisBD13]. Despite the numerous studies on coordinated multicell systems, little attention was paid to coordinated RZF precoding prior to the development of massive MIMO [@6779600]. Thus, in this paper we investigate coordinated RZF precoding for conventional (small-scale) multicell MU MISO systems, where BSs share out-of-cell interfering channel state information (CSI) to coordinate transmission.
We also derive expected SINR approximations for the proposed scheme with perfect channel direction information (CDI) and with RVQ codebook CDI at the BS. Furthermore, we develop an adaptive bit allocation scheme that distributes the bits between the serving and out-of-cell interfering channels, minimizing the interference at the users. We assume perfect knowledge of the channel quality indicator (CQI) at the BS [@5648782]. The main contributions of this paper are summarized below.
- We investigate a coordinated RZF precoding scheme for multicell MU MISO systems, where interfering channels are shared among BSs.
- Analytical expressions are derived to approximate the expected SINR for the proposed system with perfect CDI and limited feedback RVQ CDI.
- We propose a novel adaptive bit allocation method that minimizes ICI.
Downlink System Model
=====================
$$\label{2}
\textrm{SINR}_{l,k} = \frac{ \frac{P_{l,k,k}}{\gamma_{k}} \left| \mathbf{h}_{l,k,k} \mathbf{w}_{l,k} \right|^2}{1 +\frac{P_{l,k,k}}{\gamma_{k}} \sum_{\substack{m=1\\m \neq l}}^{L} \left| \mathbf{h}_{l,k,k} \mathbf{w}_{m,k} \right|^2 + \sum_{\substack{j=1\\j \neq k}}^{K} \frac{P_{l,k,j}}{\gamma_{j}} \sum_{q=1}^{L} \left| \mathbf{h}_{l,k,j} \mathbf{w}_{q,j}\right|^2}.$$ $$\label{3}
\mathbb{E}\left[\textrm{SINR}_{l,k}\right] \approx \frac{\frac{P_{l,k,k}}{\bar{\gamma}_k} \mathbb{E}\left[\left| \mathbf{h}_{l,k,k} \mathbf{w}_{l,k} \right|^2\right]}{1+\frac{P_{l,k,k}}{\bar{\gamma}_k} \sum_{\substack{m=1\\m \neq l}}^{L} \mathbb{E}\left[\left| \mathbf{h}_{l,k,k} \mathbf{w}_{m,k} \right|^2\right] + \sum_{\substack{j=1\\j \neq k}}^{K} \frac{P_{l,k,j}}{\bar{\gamma}_j} \sum_{q=1}^{L} \mathbb{E} \left[\left| \mathbf{h}_{l,k,j} \mathbf{w}_{q,j}\right|^2\right]},$$
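To illustrate how expected-SINR quantities of this kind can be estimated by simulation, consider the following Monte Carlo sketch (our own single-cell RZF example with i.i.d. Rayleigh channels and an assumed regularization; it is not the multicell coordinated scheme analyzed in this paper):

```python
import numpy as np

rng = np.random.default_rng(1)
M, L, snr = 8, 4, 10.0            # BS antennas, users, per-user SNR (linear)

def rzf_sinr_draw():
    """Per-user SINRs for one channel realization under RZF precoding."""
    H = (rng.standard_normal((L, M)) + 1j * rng.standard_normal((L, M))) / np.sqrt(2)
    alpha = L / snr                                   # a common regularization choice
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T + alpha * np.eye(L))
    gamma = np.trace(W @ W.conj().T).real             # power normalization factor
    sinr = np.empty(L)
    for l in range(L):
        gains = np.abs(H[l] @ W) ** 2                 # |h_l w_j|^2 for all users j
        sinr[l] = (snr / gamma) * gains[l] / (1.0 + (snr / gamma) * (gains.sum() - gains[l]))
    return sinr

print("average SINR:", np.mean([rzf_sinr_draw() for _ in range(1000)]))
```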
Consider a multicell MU MISO system with $K$ cells having a single BS each. Each BS has $M$ transmit antennas and simultaneously serves $L$ single antenna users with $KL\leq M$[^1]. All the $K$ cells are interconnected via backhaul links assumed to be error free without delay. The $1 \times M$ channel vector between the $l^{\textrm{th}}$ user in the $k^{\textrm{th}}$ cell and the serving BS is given by $\mathbf{h}_{l,k,k}$. The interfering channel vector between the $l^{\textrm{th}}$ user in the $k^{\textrm{th}}$ cell and the $j^{\textrm{th}}$ interfering BS is denoted by $\mathbf{h}_{l,k,j}$, where $j \neq k$. The channel entries $\mathbf{h}_{l,k,k}$ and $\mathbf{h}_{l,k,j}$ are independent and identically distributed (i.i.d.) complex Gaussian $\mathcal{CN}(0,1)$. The downlink received signal at the $l^{\textrm{th}}$ user in the $k^{\textrm{th}}$ cell is given by[^2] $$\begin{aligned}
y_{l,k} \hspace{-.3em}&= \hspace{-.3em} \sqrt{\frac{P_{l,k,k}}{\gamma_{k}}} \mathbf{h}_{l,k,k} \mathbf{w}_{l,k} s_{l,k} \hspace{-.1em}+ \hspace{-.1em} \sqrt{\frac{P_{l,k,k}}{\gamma_{k}}} \hspace{-.2em}\sum_{\substack{m=1\\ m\neq l}}^{L} \hspace{-.2em}\mathbf{h}_{l,k,k} \mathbf{w}_{m,k} s_{m,k}\nonumber \\
&+ \sum_{j=1, j\neq k}^{K} \sqrt{\frac{P_{l,k,j}}{\gamma_{j}}} \mathbf{h}_{l,k,j} \sum_{q=1}^{L} \mathbf{w}_{q,j} s_{q,j} + n_{l,k},\label{rec}\end{aligned}$$ where $\mathbf{w}_{l,k}$ is the non-normalized precoding vector for the $l^{\textrm{th}}$ user in the $k^{\textrm{th}}$ cell and $\gamma_k$ is the normalization parameter (to be discussed later) for the $k^{\textrm{th}}$ cell. $s_{l,k}$ and $n_{l,k}\sim\mathcal{CN}(0,N_0)$ denote the data symbol and the noise for the $l^{\textrm{th}}$ user in the $k^{\textrm{th}}$ cell. The data symbols are selected from the same constellation with $\mathbb{E} \left[ | s_{l,k} |^2 \right] = 1$. $P_{l,k,k}$ and $P_{l,k,j}$ are the received powers at the $l^{\textrm{th}}$ user in the $k^{\textrm{th}}$ cell from serving and interfering BSs, respectively, given by
![The system model for $K=2$ and $L=2$ cell-edge users.[]{data-label="Fig1"}](Fig1.eps){width="7cm" height="5cm"}
---
abstract: 'We examine the long-term asymptotic behavior of dissipating solutions to aggregation equations and Patlak-Keller-Segel models with degenerate power-law and linear diffusion. The purpose of this work is to identify when solutions decay to the self-similar spreading solutions of the homogeneous diffusion equations. Combined with strong decay estimates, entropy-entropy dissipation methods provide a natural solution to this question and make it possible to derive quantitative convergence rates in $L^1$. The estimated rate depends only on the nonlinearity of the diffusion and the strength of the interaction kernel at long range.'
author:
- 'Jacob Bedrossian [^1]'
bibliography:
- 'nonlocal\_eqns.bib'
- 'dispersive.bib'
title: 'Intermediate Asymptotics for Critical and Supercritical Aggregation Equations and Patlak-Keller-Segel models'
---
Introduction
============
The most widely studied mathematical models of nonlocal aggregation phenomena are the Patlak-Keller-Segel (PKS) models, originally introduced to study the chemotaxis of microorganisms [@Patlak; @KS; @Hortsmann; @HandP]. Similar models are also used to study the formation of herds and flocks in ecological systems [@Bio; @Topaz; @Milewski; @GurtinMcCamy77]. A common theme is the competition between the tendency for organisms to diffuse, e.g. under Brownian motion or to avoid over-crowding, and for organisms to aggregate into groups through nonlocal self-attraction. The parabolic-elliptic PKS models are a subclass of the general aggregation-diffusion equations $$u_t + {\nabla}\cdot (u {\nabla}{\mathcal{K}}\ast u) = \Delta A(u). \label{def:ADD_general}$$ The local and global existence and uniqueness of models such as is well studied (see for instance [@BRB10; @BertozziSlepcev10; @Blanchet09; @BlanchetEJDE06; @SugiyamaDIE06; @SugiyamaADE07; @SugiyamaDIE07; @Corrias04]). However, less is known about the long-term qualitative behavior of solutions. In this work, we are interested in examining the asymptotic profiles of dissipating solutions to in the special case $$\label{def:ADD}
\left\{
\begin{array}{l}
u_t + {\nabla}\cdot (u {\nabla}{\mathcal{K}}\ast u) = \Delta u^m, \;\; m \geq 1, \\
u(0,x) = u_0(x) \in L_+^1({\mathbb R}^d;(1+{\left\vertx\right\vert}^2)dx)\cap L^\infty({\mathbb R}^d),
\end{array}
\right.$$ where $L_+^1({\mathbb R}^d;\mu) := {\left\{f \in L^1({\mathbb R}^d;\mu): f \geq 0\right\}}$. In particular, we are interested in determining when solutions to converge in $L^1({\mathbb R}^d)$ as $t \rightarrow \infty$ to the self-similar spreading solutions of the diffusion equation $$u_t = \Delta u^m \label{def:PME}.$$ All dissipating solutions are weak$^\star$ converging to zero as $t \rightarrow \infty$, but this kind of result implies that for $1 << t < \infty$, the dissipating solutions all look more or less like self-similar solutions of . For this reason, these results are often referred to as *intermediate asymptotics*.
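Purely for illustration of the dynamics in , here is a crude one-dimensional finite-difference sketch (ours, with a smooth attractive kernel and assumed parameters; it makes no claims about the regimes analyzed below):

```python
import numpy as np

# Explicit scheme for u_t + (u (K' * u))_x = (u^m)_xx on a periodic interval.
n, lx, m_exp, dt = 256, 20.0, 1.5, 1e-4
dx = lx / n
x = np.arange(n) * dx - lx / 2

kernel = -np.exp(-x**2)                         # smooth attractive K (W^{1,1}-type)
k_hat = np.fft.fft(np.fft.ifftshift(kernel)) * dx   # center kernel for convolution
ik = 1j * 2 * np.pi * np.fft.fftfreq(n, d=dx)       # spectral derivative symbol

u = np.exp(-x**2)
u /= u.sum() * dx                               # unit mass

for _ in range(20000):
    drift = np.real(np.fft.ifft(ik * k_hat * np.fft.fft(u)))   # (K' * u)(x)
    flux = u * drift
    div = (np.roll(flux, -1) - np.roll(flux, 1)) / (2 * dx)
    um = np.maximum(u, 0.0) ** m_exp           # degenerate diffusion, clip negatives
    lap = (np.roll(um, -1) - 2 * um + np.roll(um, 1)) / dx**2
    u = u + dt * (lap - div)

print("mass:", u.sum() * dx, " max u:", u.max())   # mass (nearly) conserved; bump spreads
```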
*Supercritical* problems are those in which the aggregation is dominant at high concentrations, *subcritical* problems are those in which the diffusion dominates at high concentrations, and *critical* problems are those in which the effects are in approximate balance. It is known that supercritical problems exhibit finite time blow up for solutions of arbitrarily small mass, while subcritical problems have global solutions [@SugiyamaDIE06; @SugiyamaADE07; @BRB10; @Blanchet09]. The critical case is more interesting; data with small mass exists globally, whereas finite time blow up is possible for large mass [@Blanchet09; @BRB10; @BlanchetEJDE06; @SugiyamaADE07]. In this work, we will refer to the case $m < 2-2/d$ as supercritical and $m = 2-2/d$ as critical. This is in contrast to the definition used in [@BRB10], where the critical diffusion exponent was taken to depend on the singularity of the kernel. Here, achieving such a precise balance is not the primary interest; moreover, we are concerned with examining the limit of low concentrations. In the sense of [@BRB10; @Blanchet09; @SugiyamaADE07; @SugiyamaDIE06], $m = 2-2/d$ is the critical exponent for the Newtonian potential, which is the most singular kernel known to have unique, local-in-time solutions [@BRB10].
As strong nonlinearities vanish quickly near zero, scaling heuristics suggest that the nonlocal aggregation term should become irrelevant for small data in the critical and supercritical regime. We use entropy-entropy dissipation methods [@CarrilloToscani98; @Toscani99; @CarrilloToscani00; @CarrilloEntDiss01; @CarrilloDiFranToscani06; @BilerDolbeaultEsteban02] to obtain several intermediate asymptotics results which show this to be true, and that solutions of converge to self-similar solutions of . Entropy-entropy dissipation methods are well-suited for proving the convergence to equilibrium states of nonlinear Fokker-Planck-type equations for arbitrary data [@CarrilloToscani98; @CarrilloEntDiss01]. Through a change of variables employed below, this also provides convergence to self-similarity of nonlinear homogeneous diffusion equations [@CarrilloToscani00]. In contrast to these works, we employ such methods to prove a *small data* result, treating the nonlocal aggregation term as a perturbation. For this to work, sufficiently strong decay estimates on the solution must be obtained. Indeed, strong decay estimates imply the intermediate asymptotics results, and so we have chosen to state them separately in Theorem \[thm:Decay\] below. Here, we obtain these estimates using iteration methods, discussed in more detail below, which are a refinement of the local theory of (see e.g. [@Blanchet09; @BRB10]). While nonlinear, they are essentially perturbative in nature and thus somewhat limited against arbitrary data, using basic dissipation estimates to overpower the nonlocal advection term only under certain conditions. Analogous to related models, such as the nonlinear Schrödinger equations, it is likely that a fully non-perturbative theory will need to be applied in order to treat large data, which is sometimes significantly more difficult (see for instance [@Tao; @KillipVisanClay]).
The first of our intermediate asymptotics results, Theorem \[thm:IA\], covers the case ${\mathcal{K}}\in W^{1,1}({\mathbb R}^d)$. Here, the nonlocal term can be considered to have a finite characteristic length-scale which becomes vanishingly small relative to the length-scale of the solution as it dissipates. A result similar to Theorem \[thm:IA\] for $L^p, \; 1 < p < \infty$, was proved for the special case of the Bessel potential in [@LuckhausSugiyama06; @LuckhausSugiyama07] with the soft compactness method of [@KaminVazquez88] (see also [@VazquezPME]). In contrast to methods based on compactness, the entropy-entropy dissipation methods obtain quantitative convergence rates in $L^1$, which, by interpolation against the decay estimates, provide convergence in all $L^p$, $1 \leq p < \infty$. For supercritical problems, the convergence rate is shown to be the same as the optimal rates for [@CarrilloToscani98; @Toscani99; @CarrilloToscani00; @CarrilloEntDiss01; @VazquezPME].
In general, if the kernel does not have critical scaling at large length-scales, the long-range effects should still become irrelevant as the solution dissipates. That is, we should expect results similar to the ${\mathcal{K}}\in W^{1,1}({\mathbb R}^d)$ case to hold, except when $m = 2-2/d$ and ${\nabla}{\mathcal{K}}\sim {\left\vertx\right\vert}^{1-d}$ as ${\left\vertx\right\vert} \rightarrow \infty$. Indeed, when ${\mathcal{K}}$ is the Newtonian potential, there exists at least one self-similar spreading solution to when $m = 2-2/d$ [@BlanchetEJDE06; @Blanchet09; @BlanchetDEF10; @CalvezCarrillo10]. In the presence of linear diffusion, these are additionally known to be the global attractors [@BlanchetEJDE06; @Bl
---
abstract: 'We experimentally study a vacuum-induced Autler-Townes doublet in a superconducting three-level artificial atom strongly coupled to a coplanar waveguide resonator and simultaneously to a transmission line. The Autler-Townes splitting is observed in the reflection spectrum from the three-level atom in a transition between the ground state and the second excited state when the transition between the two excited states is resonant with the resonator. By applying a driving field to the resonator, we observe a change in the regime of the Autler-Townes splitting from quantum (vacuum-induced) to classical (with many resonator photons). Furthermore, we show that the reflection of propagating microwaves in a transmission line can be controlled by single photons of different frequencies in the resonator.'
author:
- 'Z.H. Peng'
- 'J.H. Ding'
- 'Y. Zhou'
- 'L.L. Ying'
- 'Z. Wang'
- 'L. Zhou'
- 'L.M. Kuang'
- 'Yu-xi Liu'
- 'O. Astafiev'
- 'J.S. Tsai'
title: 'Vacuum-induced Autler-Townes splitting in a superconducting artificial atom'
---
Electromagnetic waves propagating through a medium of identical atoms are resonantly absorbed. The absorption can be eliminated when a strong driving field couples other atomic transitions in, for example, three-level atoms, creating a transparency window for the waves. This results in either Autler-Townes splitting (ATS) [@Autler1955] or electromagnetically induced transparency (EIT) [@Harris1990; @Harris1997] and is extensively studied in quantum optics [@Fleischhauer2005; @Scully-book]. The same phenomena can be observed even if the medium is replaced by a single atom or a molecule which is coupled to either a cavity [@Muller2007; @Mucke2010] or an open space [@Tey2008; @Hwang2009]. However, the strong coupling between the driving field and a single natural atom is difficult to achieve. Recently, the strong coupling has been experimentally achieved between a superconducting artificial atom and non-quantized microwave fields confined in a one-dimensional transmission line [@Astafiev2010]. This enables ATS and EIT to be observed using a single superconducting artificial atom [@Murali2004; @Ian2010; @Sun2014; @Gu2016]. Several experiments have already shown ATS [@Baur2009; @Sillanpaa2009; @Abdumalikov2010; @Li2012; @Hoi2013PRL; @Hoi2013; @Novikov2013; @Liu2017] and state manipulation [@Kelly2010; @Xu2016] in a single three-level artificial atom. ATS [@Suri2013] and EIT [@Sovikov2015] have been demonstrated in a Jaynes-Cummings ladder system of a single two-level artificial atom dispersively coupled to a cavity.
![([a]{}) Micrograph of the device, comprising a superconducting artificial atom capacitively coupled to a transmission line resonator (meandering structure). Additionally, the artificial atom is coupled to a transmission line (on the left side) used to directly measure the reflection spectrum from the atom. ([b]{}) Magnified micrograph of the artificial atom of the tunable-gap superconducting flux qubit geometry. ([c]{}) Schematic of the three-level artificial atom coupled to a resonator. The transition from the ground state to the second excited state is probed at frequency $\omega_p$. ([d]{}) Left panel: The dressed state picture of the vacuum induced (quantized) ATS due to coupling to a resonator in $|e\rangle \leftrightarrow |f\rangle$. Right panel: The dressed state picture of the classical ATS with many resonator photons. []{data-label="picture"}](Fig1.pdf)
The observation of both ATS and EIT in a medium (an ensemble of atoms) usually requires strong classical driving fields. However, the conditions are different: the driving field required for the observation of EIT is weaker than that for ATS, and there is a crossover from ATS to EIT [@Sanders; @Yang]. In particular, in ATS the splitting is determined by the Rabi frequency of the driving field, whereas in EIT the dip between the two peaks is a result of quantum interference. These effects take place even if the medium is scaled down to a single atom and the driving field is quantized. The condition for the observation of these effects can then be reduced to a small number of photons, or even to no photons at all, owing to coupling to a vacuum mode. Theoretical investigations show that the transparency for the classical probing field can still occur [@Field1993]. This has potentially important applications, for example, in single-photon switches and transistors, all-optical quantum logic, quantum communication, and metrology [@Chang2014]. However, vacuum-induced transparency has only recently been observed in an ensemble of three-level atoms [@Tanji-Suzuki2011]. Here, we demonstrate ATS with a single three-level artificial atom, which is controlled by a quantized single-mode field in a transmission line resonator. Our device, shown in Fig. \[picture\](a), consists of a three-level artificial atom capacitively and strongly coupled to two macroscopic objects simultaneously: a coplanar waveguide resonator (CPWR) and a transmission line. Although the device is reminiscent of the one studied in Ref. [@Peng2016], where the atom was coupled to two transmission lines, it is essentially different. We emphasize that in spite of the strong coupling of the atom both to the transmission line and to the resonator, the line and resonator are decoupled from each other (details are given in Supplementary Material [@Suppl]). This is a peculiarity of the device owing to the unique property of superconducting quantum systems: they are micrometer-scale electronic circuits. Namely, the two macroscopic objects (i.e. open lines, resonators) can be effectively decoupled from each other but, at the same time, strongly coupled to the quantum circuit. The artificial atom has the geometry of a tunable-gap flux qubit: a superconducting loop with two Josephson junctions and a dc-SQUID ($\alpha$-loop) [@Fedorov2010; @Peng2016], shown in Fig. \[picture\](b), is fabricated near the voltage antinode (open end) of the CPWR using electron-beam lithography and Al/AlO$_x$/Al shadow evaporation (see the SEM image in [@Suppl]). A weak probe signal with frequency $\omega_p$ is applied from the transmission line (left-hand side). Both the half-wavelength CPWR and the transmission line are etched from a $50\,\rm nm$ thick Nb thin film sputtered on an oxidized undoped silicon wafer. The lines have a center conductor of $10\,\mu\rm m$ width separated from the ground planes by $6\,\mu\rm m$ gaps, resulting in a $50\,\Omega$ wave impedance.
Our experiment is carried out in a dilution refrigerator with a base temperature of about 30 mK. The fundamental frequency of the CPWR is $\omega_r=2\pi\times8.794\,$GHz with the decay rate $\kappa=2\pi\times3.6\,$MHz. The three lowest energy levels of the artificial atom, denoted by $|g\rangle$ for the ground state, $|e\rangle$ for the first excited state, and $|f\rangle$ for the second excited state, are controlled by the magnetic flux $\Phi$ in the loop. The minimum transition energies from the ground to both excited states occur at half-integer flux quanta $\Phi_{N}=(N+\frac{1}{2})\Phi_{0}$, where $N$ is an integer and $\Phi_{0}$ is the flux quantum. The minimal energy gap $\Delta$ for the transition from $|g\rangle$ to $|f\rangle$ can be tuned by controlling the magnetic flux $\Phi_{\alpha}$ penetrating through the $\alpha$-loop. When the bias flux $\Phi$ is close to $\Phi_{N}$, the two lowest-energy eigenstates ($|g\rangle$ and $|e\rangle$) are superpositions of the two oppositely circulating persistent current states of the loop. The energy splitting between the two lowest levels $\hbar\omega_{ge}$ in the vicinity of $\Phi_{N}$ is approximated by the expression $\sqrt{(2I_{p}\delta\Phi)^2+\Delta(\Phi_{N})^2}$, where $\delta\Phi=\Phi-\Phi_{N}$ and $I_{p}$ is the persistent current in the qubit loop. The weak dependence of $\Delta$ on $\Phi$ can be safely neglected when $\delta\Phi\ll\Phi_0$, which means that $\Delta(\Phi)\approx\Delta(\Phi_{N})$. As shown in Fig. \[picture\](c), by choosing $\Phi_{N}$ and $\delta\Phi$, we can tune both transition frequencies $\omega_{ef}$ and $\omega_{gf}$. The ATS can be observed by directly measuring the reflection spectrum through the left transmission line.
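As a quick numerical illustration of this dispersion relation (the parameter values below are illustrative placeholders, not the calibrated device values):

```python
import numpy as np

# Two-level dispersion of the flux qubit near a half-integer flux quantum:
# hbar*omega_ge = sqrt((2*I_p*dPhi)^2 + Delta(Phi_N)^2)
h = 6.62607015e-34           # Planck constant [J s]
Phi0 = 2.067833848e-15       # magnetic flux quantum [Wb]

Ip = 300e-9                  # persistent current [A] (illustrative value)
Delta = h * 5.0e9            # minimal gap, taken here as 5 GHz (illustrative)

for d in np.linspace(-5e-3, 5e-3, 5):        # delta_Phi in units of Phi0
    E = np.sqrt((2.0 * Ip * d * Phi0)**2 + Delta**2)
    print(f"dPhi = {d:+.4f} Phi0  ->  f_ge = {E / h / 1e9:.3f} GHz")
```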
As shown in Fig. \[picture\](c), the transition energy between the $|e\rangle$ and $|f\rangle$ states is aligned to the resonator resonance. The probe field is applied at the $|g\rangle$ to $|f\rangle$ transition. Thus, the effective Hamiltonian of the whole system driven by classical waves with amplitude $\Omega$ at frequency $\omega_{p}$ close to the $|g\rangle \leftrightarrow |f\rangle$ transition is given by $$\begin{
---
abstract: 'We show that in some cases, catalyst-assisted entanglement transformation cannot be implemented by multiple-copy transformation for pure states. This fact, together with the result we obtained in \[R. Y. Duan, Y. Feng, X. Li, and M. S. Ying, Phys. Rev. A 71, 042319 (2005)\] that the latter can be completely implemented by the former, indicates that catalyst-assisted transformation is strictly more powerful than multiple-copy transformation. In the purely probabilistic setting we find, however, that these two kinds of transformations are geometrically equivalent, in the sense that the sets of pure states which can be converted into a given pure state with maximal probability not less than a given value have the same closure, regardless of whether catalyst-assisted or multiple-copy transformation is used.'
address: 'State Key Laboratory of Intelligent Technology and Systems, Department of Computer Science and Technology Tsinghua University, Beijing, China, 100084'
author:
- Yuan Feng
- Runyao Duan
- Mingsheng Ying
bibliography:
- 'relation.bib'
title: 'Relation between catalyst-assisted transformation and multiple-copy transformation for bipartite pure states'
---
Introduction
============
Quantum entanglement, which is essential in quantum information processing such as quantum cryptography [@BB84], quantum superdense coding [@BW92] and quantum teleportation [@BBC+93], has been extensively studied. One fruitful research direction on quantum entanglement is to study the possibility of transforming a bipartite entangled pure state into another using only local operations on the separate subsystems and classical communication between them (LOCC for short). The asymptotic case, when an arbitrarily large number of copies is provided, was considered by Bennett and his collaborators [@BBPS96]. In the deterministic, finite-copy setting, the first significant step was made by Nielsen [@NI99], who discovered the connection between the theory of majorization in linear algebra [@MO79] and entanglement transformation. Nielsen proved that a bipartite entangled pure state $|\psi_1\rangle$ can be transformed into another bipartite entangled pure state $|\psi_2\rangle$ by LOCC if and only if $\lambda_{\psi_1}\prec\lambda_{\psi_2}$, where the probability vectors $\lambda_{\psi_1}$ and $\lambda_{\psi_2}$ denote the Schmidt coefficient vectors of $|\psi_1\rangle$ and $|\psi_2\rangle$, respectively. Here the symbol $\prec$ stands for the ‘majorization relation’. Generally, an $n$-dimensional real vector $x$ is said to be majorized by another $n$-dimensional real vector $y$, denoted by $x\prec y$, if the following relations hold: $$\sum_{i=1}^l x^\downarrow_i \leq \sum_{i=1}^l y^\downarrow_i \quad \text{for any } 1\leq l\leq n,$$ with equality holding when $l=n$, where $x^\downarrow$ denotes the vector obtained by rearranging the components of $x$ in nonincreasing order.
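Numerically, the majorization test is a direct comparison of partial sums of the sorted vectors; a minimal sketch:

```python
import numpy as np

def majorized(x, y, tol=1e-12):
    """Return True if x is majorized by y (x < y in the majorization order),
    for equal-length nonnegative vectors (pad the shorter with zeros)."""
    xs = np.sort(np.asarray(x, dtype=float))[::-1]   # nonincreasing order
    ys = np.sort(np.asarray(y, dtype=float))[::-1]
    cx, cy = np.cumsum(xs), np.cumsum(ys)
    return bool(np.all(cx <= cy + tol) and abs(cx[-1] - cy[-1]) <= tol)
```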
Nielsen’s theorem gives a necessary and sufficient condition when two entangled pure states are comparable in the sense that one can be transformed into another by LOCC. There exist, however, incomparable states such that any one cannot be transformed into another only using LOCC. To treat the case of transformations between incomparable states, Vidal [@Vi99] generalized Nielsen’s work by allowing probabilistic transformations. He found that the maximal probability of transforming $|\psi_1\rangle$ into $|\psi_2\rangle$ by LOCC can be calculated by $$\label{eq:Vidal}
P(|\psi_1\rangle \rightarrow |\psi_2\rangle)=\min_{1\leq l\leq n}
\frac{E_l(\lambda_{\psi_1})}{E_l(\lambda_{\psi_2})},$$ where $E_l(x)$ denotes $\sum_{i=l}^n x^{\downarrow}_i$.
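Vidal's formula translates directly into a few lines of code; a minimal sketch (vectors are zero-padded to a common length; the function and variable names are ours):

```python
import numpy as np

def max_conversion_probability(lam1, lam2):
    """Vidal's formula: P = min_l E_l(lam1)/E_l(lam2), where E_l(x) is the
    sum of components l..n of x sorted in nonincreasing order (1-based l)."""
    n = max(len(lam1), len(lam2))
    x = np.sort(np.pad(np.asarray(lam1, float), (0, n - len(lam1))))[::-1]
    y = np.sort(np.pad(np.asarray(lam2, float), (0, n - len(lam2))))[::-1]
    Ex = np.cumsum(x[::-1])[::-1]        # Ex[l-1] = E_l(lam1)
    Ey = np.cumsum(y[::-1])[::-1]
    mask = Ey > 0                        # l with E_l(lam2) = 0 cannot attain the min
    return float(np.min(Ex[mask] / Ey[mask]))

# Comparable states give probability 1, e.g. a state converted to itself:
print(max_conversion_probability([0.5, 0.5], [0.5, 0.5]))   # 1.0
```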
In Ref.[@JP99], Jonathan and Plenio discovered a very surprising phenomenon that sometimes an entangled state can enable otherwise impossible entanglement transformations without being consumed at all. A simple but well known example is $|\psi_1\rangle\nrightarrow
|\psi_2\rangle$ but $|\psi_1\rangle\otimes|\phi\rangle \rightarrow
|\psi_2\rangle\otimes|\phi\rangle$, where $|\psi_1\rangle=\sqrt{0.4}|00\rangle+\sqrt{0.4}|11\rangle+\sqrt{0.1}|22\rangle+\sqrt{0.1}|33\rangle,$ $|\psi_2\rangle=\sqrt{0.5}|00\rangle+\sqrt{0.25}|11\rangle+\sqrt{0.25}|22\rangle,$ and $|\phi\rangle=\sqrt{0.6}|44\rangle+\sqrt{0.4}|55\rangle.$ The role of the state $|\phi\rangle$ is just like that of a catalyst in a chemical process. Daftuar and Klimesh [@DK01] examined catalyst-assisted entanglement transformation and derived some interesting results. In [@FD05a], we investigated catalyst-assisted transformation in the probabilistic setting. A necessary and sufficient condition was presented under which there exist partial catalysts that can increase the maximal transforming probability of a given entanglement transformation. The mathematical structure of catalyst-assisted probabilistic transformation was also carefully investigated.
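This catalysis example is easy to verify numerically against Nielsen's criterion; a minimal self-contained sketch (working with the squared Schmidt coefficients):

```python
import numpy as np

def majorized(x, y, tol=1e-12):
    xs, ys = np.sort(x)[::-1], np.sort(y)[::-1]
    cx, cy = np.cumsum(xs), np.cumsum(ys)
    return bool(np.all(cx <= cy + tol) and abs(cx[-1] - cy[-1]) <= tol)

psi1 = np.array([0.4, 0.4, 0.1, 0.1])      # squared Schmidt coefficients
psi2 = np.array([0.5, 0.25, 0.25, 0.0])    # padded to the same dimension
phi  = np.array([0.6, 0.4])                # the catalyst

print(majorized(psi1, psi2))                               # False: direct LOCC fails
print(majorized(np.kron(psi1, phi), np.kron(psi2, phi)))   # True: catalysis succeeds
```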
Another interesting phenomenon of entanglement transformation was noticed by Bandyopadhyay *et al.* [@SRS02]. On some occasions, increasing the number of copies of the original state can also help entanglement transformations. Take the above example. Instead of introducing a catalyst state $|\phi\rangle$, providing 3 copies of $|\psi_{1}\rangle$ is also sufficient to transform these copies together into the same number of $|\psi_{2}\rangle$. A question that naturally arises here is: what is the relation between catalyst-assisted entanglement transformation and multiple-copy transformation? In [@DF05b], we found that multiple-copy entanglement transformation can be completely implemented by a catalyst-assisted one. Furthermore, the mixture of these two also has the same power as pure catalyst-assisted transformation. In other words, any transformation which can be realized collectively on multiple copies and with the aid of a catalyst can be exactly implemented by only providing some appropriate catalyst. Later on, we proved that these two kinds of transformations are asymptotically equivalent in the sense that they can simulate each other’s ability to implement a desired transformation with the same optimal success probability, when the dimension of catalysts and the number of copies provided tend to infinity [@DF05a].
The contribution of the current paper is twofold. First, we show that in some cases catalyst-assisted entanglement transformation is strictly more powerful than the multiple-copy one, by deriving a sufficient condition under which the former cannot be implemented by the latter. Second, in the purely probabilistic setting we find, however, that these two kinds of transformations are geometrically equivalent. That is, regardless of whether catalyst-assisted or multiple-copy transformations are used, the sets of quantum states that can be converted into a given state with maximal probability not less than a given value have the same closure. It is worth noting that the geometrical equivalence between these two kinds of transformations proved in the current paper is different from the asymptotic equivalence shown in [@DF05a]. We will elaborate on the difference at the end of Section III after the necessary notation has been introduced.
For simplicity, in what follows we denote a bipartite pure state by the probability vector of its Schmidt coefficients. This will not cause any confusion because it is well known that the fundamental properties of a bipartite pure state under LOCC are completely determined by its Schmidt coefficients. Therefore, from now on, we consider only probability vectors (sometimes we even omit the normalization of a nonnegative vector to be a probability one) instead of quantum states and always identify a probability vector with the bipartite pure state represented by it.
Deterministic case
==================
In this section, we study the relation between catalyst-assisted transformation and multiple-copy transformation in deterministic case. First, we introduce some notations.
Denote by $V^n$ the set of all $n$-dimensional nonnegative vectors and let $x,y,\cdots$ range over $V^n$. Let $$S(y)=\{x\in V^n\ |\ x\prec y\}$$ be the set of states that can be transformed into $y$ by LOCC directly, $$T(y)=\{x\in V^n\ |\ \exists \mbox{ probability vector } c,\ x\otimes c\prec y\otimes c\}$$ be the set of states that can be transformed into $y$ by LOCC with the aid of some catalyst, and $$M(y)=\{x\in V^n\ |\ \exists \mbox{ integer }k\ \geq 1,\ x^{\otimes{k}}\prec y^{{\otimes{k}}}\}$$ the set of states which, when some appropriate number of copies are provided, can be transformed into the same number of $y$ by LOCC.
Suppose $x\in T(y)$ and $x'\in T(y')$. Then $\bar{x}\in T(\bar{y})$ where $\bar{x}= x\oplus x'$ and $\bar{y}=
y\oplus y'$.
[*Proof.*]{} By definition, $x\in T(y)$ and $x'\in T(y')$ imply that there exist $c$ and $c'$ such that $x\otimes c\prec y\otimes c$ and $x'\otimes c'\prec y'\otimes c'$. It can be easily checked that the vector $c\otimes c'$ serves as a catalyst for the transformation from $\bar{x}$ to
---
abstract: |
State-space smoothing has found many applications in science and engineering. Under linear and Gaussian assumptions, smoothed estimates can be obtained using efficient recursions, for example Rauch-Tung-Striebel and Mayne-Fraser algorithms. Such schemes are equivalent to linear algebraic techniques that minimize a convex quadratic objective function with structure induced by the dynamic model.\
These classical formulations fall short in many important circumstances. For instance, smoothers obtained using quadratic penalties can fail when outliers are present in the data, and cannot track impulsive inputs and abrupt state changes. Motivated by these shortcomings, generalized Kalman smoothing formulations have been proposed in the last few years, replacing quadratic models with more suitable, often nonsmooth, convex functions. In contrast to classical models, these general estimators require the use of iterative algorithms, which have received increasing attention from the control, signal processing, machine learning, and optimization communities.\
In this survey we show that the optimization viewpoint provides the control and signal processing community great freedom in the development of novel modeling and inference frameworks for dynamical systems. We discuss general statistical models for dynamic systems, making full use of nonsmooth convex penalties and constraints, and providing links to important models in signal processing and machine learning. We also survey optimization techniques for these formulations, paying close attention to dynamic problem structure. Modeling concepts and algorithms are illustrated with numerical examples.
address:
- 'Department of Applied Mathematics, University of Washington, USA (e-mail: saravkin@uw.edu)'
- 'Department of Mathematics, University of Washington, Seattle, USA (e-mail: burke@math.washington.edu)'
- 'Division of Automatic Control, Linköping University, Linköping, Sweden (e-mail: ljung@isy.liu.se)'
- 'IBM T.J. Watson Research Center Yorktown Heights, NY, USA (e-mail: aclozano@us.ibm.com)'
- 'Department of Information Engineering, University of Padova, Padova, Italy (e-mail: giapi@dei.unipd.it)'
author:
- Aleksandr Aravkin
- 'James V. Burke'
- Lennart Ljung
- Aurelie Lozano
- Gianluigi Pillonetto
bibliography:
- 'kalmanSurvey.bib'
title: 'Generalized Kalman Smoothing: Modeling and Algorithms'
---
Introduction
============
The linear state space model
\[eq:Lin\] $$\begin{aligned}
x_{t+1} &=A_t x_t+B_t u_t +v_t\\
y_t &=C_t x_t + e_t\end{aligned}$$
is the bread and butter for analysis and design in discrete time systems, control and signal processing [@kalman; @KalBuc]. Application areas are numerous, including navigation, tracking, healthcare and finance, to name a few.
For a system model, $y_t \in {{\mathbb R}}^{m}$ and $u_t \in {{\mathbb R}}^{p}$ are, respectively, the output and input evaluated at the time instant $t$. The dimensions $m$ and $p$ may depend on $t$, but we treat them as fixed to simplify the exposition. In signal models, the input $u_t$ may be absent. The state vectors $x_t \in {{\mathbb R}}^n$ are the variables of interest; $A_t$ encodes the process transition, to the extent that it is known to the modeler, $C_t$ is the observation model, and $B_t$ describes the effect of the input on the transition. The *process disturbance* $v_t$ models stochastic deviations from the linear model $A_t$, while $e_t$ models *measurement errors*. We consider the [*state estimation problem*]{}, where the goal is to infer the values of $x_t$ from the input-output measurements. Given measurements $$\mathcal{Z}^N_0:=\{u_0,y_1,u_1,y_2,\ldots,y_N,u_N\},$$ we are interested in obtaining an estimate $\hat{x}^N_t$ of $x_t$. If $N>t$ this is called a *smoothing* problem, if $N=t$ it is a *filtering* problem, and if $N<t$ it is a *prediction* problem.
How well the state estimate fits the true state depends upon the choice of models for the stochastic term $v_t$, error term $e_t$, and possibly on the initial distribution of $x_0$. While $u_t$ is usually a known deterministic sequence, the observations $y_t$ and states $x_t$ are stochastic processes. We can consider using several estimators $\hat{x}^N_t$ of the state sequence $\{x_t\}$ (all functions of $\mathcal{Z}^N_0$):
$$\begin{aligned}
\label{eq:cm}
& E(x_t \big| \mathcal Z^N_0) && \mbox{conditional mean} \\ \label{eq:cm2}
& \max_{x_t} {{\bf p}}(x_t \big| \mathcal Z^N_0) && \mbox{maximum {\it a posteriori} (MAP)} \\ \label{eq:cm3}
& \min_{\hat{x}_t} E(\|x_t - \hat{x}_t\|^2) && \mbox{minimum expected mean square error (MSE)} \\ \label{eq:cm4}
& \min_{\hat{x}_t \in \mathrm{span}\left(\mathcal Z^N_0\right)} E(\|x_t - \hat{x}_t\|^2) && \mbox{minimum linear expected MSE} \end{aligned}$$
When $e_t,v_t$ and the initial state $x_0$ are jointly Gaussian, all four estimators coincide. In the general setting, the estimators (\[eq:cm\]) and (\[eq:cm3\]) are the same. Indeed, the conditional mean represents the minimum variance estimate. In the general (non-Gaussian) case, computing (\[eq:cm\]) may be difficult, while the MAP estimator can be computed efficiently using optimization techniques for a range of disturbance and error distributions.
Most models assume known means and variances for $v_t, e_t,$ and $x_0$. In the classic settings, these distributions are Gaussian: $$\label{eq:wgn}
\begin{aligned}
e_t & \sim \mathcal{N}(0,R_t) \\ v_t & \sim \mathcal{N}(0,Q_t)\\
x_0 & \sim \mathcal{N}(\mu,\Pi)
\end{aligned}, \qquad \text{all variables are mutually independent.}$$ Under this assumption, all the $y_t$ and $x_t$ become jointly Gaussian stochastic processes, which implies that the conditional mean (\[eq:cm\]) becomes a linear function of the data $\mathcal{Z}^N_0$. This is a general property of Gaussian variables. Many explicit expressions and recursions for this linear filter have been derived in the literature, some of which are discussed in this article. We also consider a far more general setting, where the distributions in can be selected from a range of densities, and discuss applications and general inference techniques.
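Since the survey later contrasts these classical recursions with general iterated schemes, a compact reference rendering may help; the following is a minimal sketch of the Kalman filter plus Rauch-Tung-Striebel smoother for the time-invariant, input-free case (variable names are ours, not from the survey):

```python
import numpy as np

def rts_smooth(y, A, C, Q, R, mu, Pi):
    """Kalman filter + Rauch-Tung-Striebel smoother for
    x_{t+1} = A x_t + v_t,  y_t = C x_t + e_t,  x_0 ~ N(mu, Pi)."""
    N, n = len(y), len(mu)
    xf = np.zeros((N + 1, n)); Pf = np.zeros((N + 1, n, n))   # filtered
    xp = np.zeros((N + 1, n)); Pp = np.zeros((N + 1, n, n))   # one-step predicted
    xf[0], Pf[0] = mu, Pi
    for t in range(N):                                        # forward pass
        xp[t+1] = A @ xf[t]
        Pp[t+1] = A @ Pf[t] @ A.T + Q
        S = C @ Pp[t+1] @ C.T + R                             # innovation covariance
        K = Pp[t+1] @ C.T @ np.linalg.inv(S)                  # Kalman gain
        xf[t+1] = xp[t+1] + K @ (y[t] - C @ xp[t+1])
        Pf[t+1] = (np.eye(n) - K @ C) @ Pp[t+1]
    xs, Ps = xf.copy(), Pf.copy()
    for t in range(N - 1, -1, -1):                            # backward (RTS) pass
        G = Pf[t] @ A.T @ np.linalg.inv(Pp[t+1])              # smoother gain
        xs[t] = xf[t] + G @ (xs[t+1] - xp[t+1])
        Ps[t] = Pf[t] + G @ (Ps[t+1] - Pp[t+1]) @ G.T
    return xs, Ps
```

Under the Gaussian model, the smoothed means produced by these recursions coincide with the MAP estimate, i.e. with the minimizer of the convex quadratic negative log-posterior derived next.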
We now make explicit the connection between [*conditional mean*]{} and [*maximum likelihood*]{} in the Gaussian case. By Bayes’ theorem and the independence assumptions (\[eq:wgn\]), the posterior of the state sequence $\{x_t\}_{t=0}^N$ given the measurement sequence $\{y_t\}_{t=1}^N$ is $$\begin{aligned}
\nonumber {{\bf p}}\left(\{x_t\} \big| \{y_t\}\right) &= \frac{{{\bf p}}\left(\{y_t\}\big|\{x_t\}\right){{\bf p}}\left(\{x_t\}\right)}{{{\bf p}}\left(\{y_t\} \right)} \\ \label{Bayes}
&= \frac{{{\bf p}}\left(x_0 \right) \prod_{t=1}^N {{\bf p}}\left( y_t \big| x_t \right) \prod_{t=0}^{N-1} {{\bf p}}\left( x_{t+1} \big| x_t \right)}{{{\bf p}}\left(\{y_t\} \right)} \\ \nonumber
&\propto {{\bf p}}\left(x_0 \right) \prod_{t=1}^N {{\bf p}}_{e_t} \left( y_t - C_t x_t \right) \prod_{t=0}^{N-1} {{\bf p}}_{v_t} \left( x_{t+1} -A_t x_t -B_t u_t \right),\end{aligned}$$ where we use ${{\bf p}}_{e_t}$ and ${{\bf p}}_{v_t}$ to denote the densities corresponding to $e_t$
---
abstract: 'In cloud computing systems slow processing nodes, often referred to as “stragglers”, can significantly extend the computation time. Recent results have shown that error correction coding can be used to reduce the effect of stragglers. In this work we introduce a scheme that, in addition to using error correction to distribute mixed jobs across nodes, is also able to exploit the work completed by [*all*]{} nodes, including stragglers. We first consider vector-matrix multiplication and apply maximum distance separable (MDS) codes to small blocks of sub-matrices. The worker nodes process blocks sequentially, working block-by-block, transmitting partial per-block results to the master as they are completed. Sub-blocking yields a more continuous completion process, which lets us exploit the work of a much broader spectrum of processors and reduces computation time. We then apply this technique to matrix-matrix multiplication using a product code. In this case, we show that the order of computing sub-tasks is a new degree of design freedom that can be exploited to reduce computation time further. We propose a novel approach to analyze the finishing time, which is different from typical order statistics. Simulation results show that the expected computation time decreases by a factor of at least two compared to previous methods.'
author:
- '\'
bibliography:
- 'reference.bib'
title: Exploitation of Stragglers in Coded Computation
---
Introduction
============
The advent of large scale machine learning algorithms and data analytics has increased the demand for computation. Modern massive-scale computing tasks can no longer be solved using a single processor. Parallelization is required. There has been a recent surge in literature proposing different techniques to parallelize the fundamental computing primitives of machine learning and data analytics. Many approaches are tailored to specific algorithms, but the general strategy is a classic one: decompose a computation task into a set of parallel sub-jobs. The number of sub-jobs determines the degree of acceleration. One such example is *matrix multiplication*, a task found in many machine learning algorithms, e.g., sub-gradient calculations in stochastic gradient descent. As matrix multiplication can be decomposed into many small parallel jobs, it is possible to realize high degrees of parallelism.
In practical distributed computing environments the promised theoretical speedups will often not be attainable. Among other reasons, “stragglers” are a significant impediment to acceleration. Stragglers are *slow workers*, who delay the computation of the final result. Recent work demonstrated that error correction coding (ECC) can be used to reduce the effect of stragglers [@Lee:ISIT16; @Salman:unified; @Lee:MATRIXISIT17; @ferdinand:allerton16; @Avestimehr:ISIT17; @Cadambe:ISIT17; @Alex2017:GC]. The central idea in [@Lee:ISIT16] is to use maximum-distance separable (MDS) codes [@Roth:2006] to generate redundant computations. The concept introduced in [@Lee:ISIT16] has been extended in a number of directions including matrix multiplication [@Lee:MATRIXISIT17], approximate computing [@ferdinand:allerton16], heterogeneous networks [@Avestimehr:ISIT17] and convolution [@Cadambe:ISIT17].
One key feature of the coded computation approach in [@Lee:ISIT16] (and all the papers that follow it) is that it ignores the work done by the worst $(n-k)$ nodes, which are thereby deemed to be stragglers. In the case of [*persistent*]{} stragglers, i.e., worker nodes that are unavailable permanently or for an extremely long period, this is the ideal strategy. However, in practice, there are many *non-persistent* stragglers, workers that, while slow, are able to do some amount of work. Non-persistent stragglers are present in practical cloud computing systems, and previous papers ignore the work they complete.
In this paper, we propose a method to exploit the work completed by all workers, including stragglers. We first apply our coding scheme to vector-matrix multiplication. We decompose the matrices into much smaller sub-matrices, encode them using MDS codes, and assign each worker a set of subtasks. Each worker then sequentially computes subtasks. They *transmit back* to the master the computed result of each subtask. I.e., a worker computes its first subtask and transmits back the result before starting on the second subtask, and so forth. The master node sequentially receives the completed subtasks from the workers. A faster worker may send a greater number of subtask results, while stragglers may send a smaller number. Once the master receives enough results, it can recover the desired solution. We extend this method to matrix-matrix multiplication using a product code. Through illustration we show that an “order of processing" effect is pre-eminent in matrix-matrix multiplication, an effect that is not present in the vector-matrix multiplication case. We then propose an order of processing that reduces compute time.
In contrast to previous work, an important aspect of our model and results is that it leverages the sequential processing nature of most computing systems. In our paper, each worker sequentially processes multiple (small) encoded tasks, in contrast to processing a single (big) encoded task in [@Lee:ISIT16; @Lee:MATRIXISIT17]. This means that in our paper, the processing times of encoded tasks are no longer independent and identically distributed as they are in [@Lee:ISIT16; @Lee:MATRIXISIT17]. Thus, standard order statistics cannot be used to analyze the latency performance of our scheme as was done in [@Lee:ISIT16; @Lee:MATRIXISIT17]. To this end, we propose a novel theoretical approach to study the variation of work done across workers. Our analysis illustrates how our strategy improves finishing times through effective exploitation of the work completed by all workers.
Vector-Matrix Multiplication {#secvector}
============================
In this section, we propose our straggler exploitation method for vector-matrix multiplication. We detail our proposed scheme in three sub-sections: the delegation of work by the master, the computation at the workers, and the combining operation at the master. Finally, we give an example and compare our scheme to existing schemes.
The delegation of work by the master {#sec.A}
------------------------------------
We consider a distributed computing environment that consists of a master and $n$ workers. The objective of the master is to perform the vector-matrix multiplication $\mathbf A \mathbf x$ where $\mathbf A$ is an $m\times q$ matrix and $\mathbf x$ is a $q\times 1$ vector. We first partition $\mathbf A$ into $k$ equally-sized sub-matrices ($k$ is a parameter of our scheme): $$\mathbf A =
\begin{bmatrix}
\mathbf A_1 ; \ \
\mathbf A_2 ; \ \
\hdots \ \ ;
\mathbf A_k
\end{bmatrix}.$$ Each sub-matrix $\mathbf A_i$ is of size $m/k\times q$. We next define an $L\times k$ matrix $\mathbf G$ in which any $k$ row vectors of $\mathbf G$ are linearly independent and any square matrix formed using any $k$ columns of $\mathbf G$ is invertible. These conditions can be satisfied with high probability by selecting the elements of $\mathbf G$ in an independent and identically distributed (i.i.d.) manner from the Gaussian normal distribution. Let $\mathbf I_{m/k}$ be the $m/k
\times m/k$ identity matrix. The master computes $$\begin{aligned}
\label{eqn:encoding}
\mathbf { \Bar{A}}= \left(\mathbf G \otimes \mathbf I_{m/k}\right)\mathbf A \end{aligned}$$ where $\otimes$ denote the Kronecker product and $\mathbf { \Bar{A}}$ is an $Lm/k \times q$ matrix. The matrix $\mathbf { \Bar{A}}$ is composed of $L$ distinct sub-matrices (each of size $m/k\times q$): $$\mathbf { \Bar{A}}=
\begin{bmatrix}
\mathbf{ \Bar{A}}_1; \ \
\mathbf { \Bar{A}}_2; \ \
\hdots \ \ ;
\mathbf { \Bar{A}}_L
\end{bmatrix},$$ The matrix $\mathbf{ \Bar{A}_i}$ is a linear combination of the $\mathbf A_j$: $$\begin{aligned}
\mathbf{\Bar{A}}_i = \sum_{j=1}^{k} g_{ij} \mathbf A_j\end{aligned}$$ where $g_{ij}$ is the $ij$-th element of $\mathbf G$. The master transmits $l_i$ distinct sub-matrices to worker $i$ where $\sum_{i=1}^nl_i=L$ and $l_i>1$. All sub-matrices are distributed to distinct workers, i.e., no single matrix is given to two workers. Finally, the master sends $\mathbf x$ to all workers.
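A minimal sketch of the encoding just described and of the recovery from any $k$ completed sub-tasks (sizes, seed, and the set of returned indices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
m, q, k, L = 12, 8, 4, 6               # illustrative sizes; m divisible by k

A = rng.standard_normal((m, q))        # matrix to be multiplied
x = rng.standard_normal(q)
G = rng.standard_normal((L, k))        # i.i.d. Gaussian encoding matrix

Abar = np.kron(G, np.eye(m // k)) @ A  # the L encoded sub-matrices, stacked

# Any k completed sub-task results Abar_i @ x suffice to recover A @ x:
done = [0, 2, 3, 5]                    # indices of the results received first
B = np.vstack([Abar[i*(m//k):(i+1)*(m//k)] @ x for i in done])
S = np.linalg.solve(G[done], B)        # rows of S are A_1 x, ..., A_k x
print(np.allclose(S.reshape(-1), A @ x))   # True
```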
The computation at workers {#sec.B}
--------------------------
The $i$-th worker receives $\mathbf{ \Bar{A}}_{(i-1)L/n+1}, \ldots
---
abstract: 'In this paper, we show that the eccentricity of a planet on an inclined orbit with respect to a disc can be pumped up to high values by the gravitational potential of the disc, even when the orbit of the planet crosses the disc plane. This process is an extension of the Kozai effect. If the orbit of the planet is well inside the disc inner cavity, the process is formally identical to the classical Kozai effect. If the planet’s orbit crosses the disc but most of the disc mass is beyond the orbit, the eccentricity of the planet grows when the initial angle between the orbit and the disc is larger than some critical value which may be significantly smaller than the classical value of 39 degrees. Both the eccentricity and the inclination angle then vary periodically with time. When the period of the oscillations of the eccentricity is smaller than the disc lifetime, the planet may be left on an eccentric orbit as the disc dissipates.'
author:
- |
Caroline Terquem$^{1,2}$[^1] and Aikel Ajmia$^1$\
$^1$ Institut d’Astrophysique de Paris, UPMC Univ Paris 06, CNRS, UMR7095, 98 bis bd Arago, F-75014, Paris, France\
$^2$ Institut Universitaire de France
title: Eccentricity pumping of a planet on an inclined orbit by a disc
---
celestial mechanics — planetary systems — planetary systems: formation — planetary systems: protoplanetary discs — planets and satellites: general
Introduction {#sec:intro}
============
Among the 240 extrasolar planets that have been detected so far with a semi–major axis larger than 0.1 astronomical unit (au), about 100 have an eccentricity $e > 0.3$. Five of them even have $e>0.8$. Such large eccentricities, which cannot be the result of disc–planet interaction (Papaloizou et al. 2001), are probably produced by planet–planet interactions, either through scattering or secular perturbation (see Ford & Rasio 2008 and references therein), that occur after the disc dissipates (Juric & Tremaine 2008, Chatterjee et al. 2008, Ford & Rasio 2008).
Here, we show that high eccentricities can be pumped [*by the disc*]{} if the orbit of the planet is inclined with respect to the disc. The process involved is an extension of the Kozai mechanism, in which a planet is perturbed by a distant companion on an inclined orbit (Kozai 1962). While the Kozai effect has always been studied for the case in which the companion is far away from the planet, the process investigated here is shown to be efficient even if the orbit of the planet crosses the disc. The classical Kozai effect has of course been very well studied. Here, we show that some significant differences occur when the classical scenario is extended to apply to a disc.
In section \[sec:Kozai\] we review the Kozai effect, and show that the same behaviour is expected whether the planet is perturbed by a distant companion or by a ring of material orbiting far away. In section \[sec:simulations\], we present the results of numerical simulations of the interaction between a planet on an inclined orbit and a disc. We show that, provided most of the mass in the disc is beyond the orbit, and the initial inclination is larger than some critical value, the gravitational potential from the disc causes the eccentricity and the inclination of the planet’s orbit to oscillate with time. This may occur even if the orbit crosses the disc. In section \[sec:discussion\] we summarise our findings, and discuss under which conditions this mechanism could operate. The important result is that a planet on an inclined orbit with respect to the disc and located at or within the planet formation region may have its eccentricity pumped up to high values by the interaction with the disc. This is of astronomical interest, since inclinations are beginning to be measured for extrasolar planets.
Review of the Kozai effect and extension to a disc {#sec:Kozai}
==================================================
We consider a planet of mass $M_p$ orbiting around a star of mass $M_\star$ which is itself surrounded by a ring of material of mass $M_{\rm disc}$. The ring is in the equatorial plane of the star whereas the orbit of the planet is inclined with respect to this plane. The motion of the planet is dominated by the star, so that its orbit is an ellipse slightly perturbed by the gravitational potential of the ring. We study the secular perturbation of the orbit due to the ring. We denote by $(X,Y,Z)$ the Cartesian coordinate system centred on the star and $(r, \varphi, \theta)$ the associated spherical coordinates. The ring is in the $(X,Y)$–plane between the radii $R_i$ and $R_o>R_i$. We suppose that the angular momentum of the disc is large compared to that of the planet’s orbit so that the effect of the planet on the disc is negligible: the disc does not precess and its orientation is invariable. The gravitational potential exerted by the ring at the location of the planet is:
$$\Phi = -G \int_{R_i}^{R_o} \Sigma(r) r dr \int_0^{2 \pi}
\frac{d \alpha}{\left( r^2+r_p^2-2rr_p \cos \alpha \sin \theta_p \right)^{1/2}},
\label{Phi}$$
where the subscript $p$ refers to the planet and $\Sigma(r)$ is the mass density in the ring. We assume:
$$\Sigma(r)=\Sigma_0 \left( \frac{r}{R_o} \right)^{-n},
\label{sigma}$$
where:
$$\Sigma_0 = \frac{(-n+2) M_{\rm disc}}{2 \left( 1-\eta^{-n+2} \right)\pi R_o^2 },
\label{sigma0}$$
with $\eta \equiv R_i/R_o$. We suppose that $R_i \gg r_p$, so that the square root in equation (\[Phi\]) can be expanded in $r_p/r$ and integrated to give:
$$\Phi = - \frac{-n+2}{1-\eta^{-n+2}}\, \frac{G M_{\rm disc}}{R_o}
\left[ \frac{1 - \eta^{1-n}}{1-n}
+ \frac{-1 + \eta^{-1-n}}{1+n}\, \frac{r_p^2}{2R_o^2}
\left( -1 + \frac{3}{2} \sin^2 \theta_p \right) \right].$$
In the classical Kozai effect, the planet is perturbed by a distant companion of mass $M$. If we assume the orbit of this outer companion is circular of radius $R \gg r_p$ and lies in the $(X,Y)$–plane, then the time-averaged potential it exerts at the location $(r_p, \theta_p)$ is:
$$\Phi_{\rm Kozai} = -\frac{GM}{R} \left[ 1+ \frac{r_p^2}{2R^2}
\left( -1 + \frac{3}{2} \sin^2 \theta_p \right) \right] .$$
Because $r_p$ and $\theta_p$ appear in exactly the same way in $\Phi$ and $\Phi_{\rm Kozai}$, the secular perturbation on the inner planet, obtained by averaging over the mean anomaly of its orbit, is the same in both cases to within an overall multiplicative factor. The results obtained for the classical Kozai effect can therefore be extended to the case of the disc. In particular, the perturbation due to the disc makes the eccentricity $e$ of the planet oscillate with time if the initial inclination angle $I_0$ between the orbit of the planet and the plane of the disc is larger than a critical angle $I_c$ given by:
$$\cos^2 I_c = \frac{3}{5}.$$
The maximum value reached by the eccentricity is then (Innanen et al. 1997):
$$e_{\rm max}=\left( 1- \frac{5}{3} \cos^2 I_0 \right)^{1/2},
\label{emax}$$
and the time $t_{\rm evol}$ it takes to reach $e_{\rm max}$ starting from $e_0$ is (Innanen et al. 1997):
$$\frac{t_{\rm evol}}{\tau} =
0.42
\left( \sin^2 I_0 - \frac{2}{5} \right)^{-1/2}
\ln \left( \frac{e_{\rm max}}{e_0} \right),
\label{tevol}$$
with the time $\tau$ defined as:
$$\tau=\frac{(
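For reference, the critical angle and the maximum eccentricity (\[emax\]) above are straightforward to evaluate numerically; a minimal sketch (the initial inclination values are illustrative):

```python
import numpy as np

I_c = np.degrees(np.arccos(np.sqrt(3.0 / 5.0)))
print(f"critical inclination: {I_c:.1f} deg")          # ~39.2 deg

for I0 in (40.0, 50.0, 60.0, 80.0):                    # initial inclinations [deg]
    e_max = np.sqrt(1.0 - (5.0 / 3.0) * np.cos(np.radians(I0))**2)
    print(f"I0 = {I0:4.1f} deg  ->  e_max = {e_max:.3f}")
```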
---
abstract: 'We describe Monte Carlo models for the dynamical evolution of the nearby globular cluster M4. The code includes treatments of two-body relaxation, three- and four-body interactions involving primordial binaries and those formed dynamically, the Galactic tide, and the internal evolution of both single and binary stars. We arrive at a set of initial parameters for the cluster which, after 12Gyr of evolution, gives a model with a satisfactory match to the surface brightness profile, the velocity dispersion profile, and the luminosity function in two fields. We describe in particular the evolution of the core, and find that M4 (which has a classic King profile) is actually a [*post-collapse*]{} cluster, its core radius being sustained by binary burning. We also consider the distribution of its binaries, including those which would be observed as photometric binaries and as radial-velocity binaries. We also consider the populations of white dwarfs, neutron stars, black holes and blue stragglers, though not all channels for blue straggler formation are represented yet in our simulations.'
author:
- |
Mirek Giersz$^{1}$[^1] and Douglas C. Heggie$^{2}$\
$^{1}$Nicolaus Copernicus Astronomical Centre, Polish Academy of Sciences, ul. Bartycka 18, 00-716 Warsaw, Poland\
$^{2}$University of Edinburgh, School of Mathematics and Maxwell Institute for Mathematical Sciences, King’s Buildings, Edinburgh EH9 3JZ, UK
date: 'Accepted …. Received …; in original form …'
title: 'Monte Carlo Simulations of Star Clusters - V. The globular cluster M4'
---
\[firstpage\]
stellar dynamics – methods: numerical – binaries: general – globular clusters: individual: M4
Introduction
============
The present paper opens up a new road in the study of the dynamical evolution of globular clusters. We adopt the Monte Carlo method of Giersz [@Gi1998; @Gi2001; @Gi2006], which in recent years has been enhanced to deal quite realistically with the stellar evolution of single and binary stars, to study the dynamical history of the nearby globular cluster M4. An earlier version of the code had already been used to study the dynamical history of $\omega$ Cen [@GH2003], but at that time the treatment of stellar evolution was primitive and there were no binaries. The new code has been thoroughly tested on smaller systems, by comparison with $N$-body simulations and observations of the old open star cluster M67 [@GH2008]. There we showed that the Monte Carlo code could produce data of a similar level of detail and realism as the best $N$-body codes. Now for the first time we consider much richer systems, with about half a million stars initially, which are at present beyond the reach of $N$-body methods.
This paper has a place within a long tradition of the modelling of globular star clusters, but the place is a distinctive one. First, we are not concerned with a static model of a star cluster at the present day, like a King model. We are concerned with issues where the dynamical history of the star cluster is important, where static models are uninformative. Secondly, our aim is to construct a model of a specific star cluster, rather than trying to understand the general properties of the evolution of a population of star clusters. This has been done before, and a brief history is outlined in @GH2008, but the present work takes these efforts onto a new level of realism, in terms of the description of stellar evolution, and dynamical interactions involving binary stars.
This problem is not easy. Not only is it necessary to use an elaborate technique for simulating the relevant astrophysical processes, but it is necessary also to search for initial conditions which, after about 12Gyr of evolution, lead to an object resembling a given star cluster. By “resembling” we do not simply mean matching the overall mass, radius and binary fraction of a cluster, for example, for two reasons:
1. We have found that the values quoted in the literature are highly uncertain, and different sources are contradictory. These quantities are usually derived, in some model-dependent way, from data such as surface-brightness profiles and velocity dispersion profiles, and we prefer to compare our models directly with these data, not with inferred global parameters.
2. We have found that, even if one achieves a satisfactory fit to these profiles, the model may give a very poor comparison with the luminosity function.
From these considerations we conclude that a model which aims to fit only the mass and radius of a star cluster (say) may be very far from the truth.
Tackling this difficult problem is not just interesting, however. We have been motivated by a number of pressing astrophysical problems. For example, the two nearby star clusters M4 (the subject of this study) and NGC 6397 (which we shall consider in our next paper in this series) have rather similar mass and radius, and yet one has a classic King profile, while the other is a well-studied example of a cluster with a “collapsed core” [@Tr1995]. Among possible explanations one may consider differences in the population of binaries, which are known to affect core properties, or in tidal effects. Indeed the present paper will show that these two clusters may be much more similar than one would suppose from the surface brightness profiles alone.
A second motivation for our work is our involvement in observational programmes aimed at characterising the binary populations in globular clusters. What differences (e.g. in the distributions of periods and abundances) should one expect to find between the core and the halo? These issues are important in the planning of observations, and in their interpretation.
The cluster M4 is the focus of much of this effort because it is nearby, making it a relatively easy target for deep observational study. It was the first globular cluster to yield a deep sequence of white dwarfs [@Ri1995]. More recently it has been subjected to an intensive observational programme by the Padova group [@bedinetal2001; @bedinetal2003; @andersonetal2006], which includes searches for radial-velocity binaries in the upper main sequence [@Somm]. It also turns out to be a cluster which (we conclude) started with only about half a million stars, which facilitates the modelling. Along with the open cluster M67, M4 was chosen by the international MODEST consortium, at a meeting in Hamilton in 2005, as the focus for joint effort by theorists and observers, to cast light on its binary population and dynamical properties. M67 has been modelled very successfully by @Hu2005, using $N$-body techniques, and this paper represents the first theoretical step in a similar study of M4.
The paper is organised as follows. First, we summarise features of the code and the models, the data we used, and our approach to the problem of finding initial conditions for M4. Then we present data for our best models: surface brightness and velocity dispersion profiles, luminosity functions, the properties of the binary population, white dwarfs and other degenerate remnants, and the inferred dynamical state of the cluster. The final section summarises our findings and discusses them in the context of work on other clusters, including objects to which we will turn in future papers.
Methods
=======
The Monte Carlo Code
--------------------
The details of our simulation method have been amply described in previous papers in this series. Each star in a spherical star cluster is represented by its mass, energy and angular momentum, and its stellar evolutionary state may be computed at any time using synthetic formulae for single and binary evolution. It may be a binary or a special kind of single star that has been created in a collision or merger event.
Neighbouring stars interact with each other in accordance (in a statistical sense) with the theory of two-body relaxation. If one or both of the participants is a binary, the probability of an encounter affecting the internal dynamics is calculated according to analytic cross sections, which also determine the outcome. This is one of the main shortcomings of the code, as these cross sections are not well known in the case of unequal masses, and also the possibility of stellar collisions during long-lived temporary capture is excluded.
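For readers unfamiliar with the Monte Carlo treatment of relaxation, the following is a schematic sketch of a single pairwise relaxation step in the spirit of Hénon's method; it is *not* the implementation used in the code described here, and the deflection-angle prefactor is only indicative:

```python
import numpy as np

rng = np.random.default_rng(1)

def relax_pair(v1, v2, m1, m2, n_local, lnL, dt, G=1.0):
    """Perturb the velocities of two neighbouring stars so that repeated
    encounters reproduce, on average, the two-body relaxation rate.
    Momentum and kinetic energy of the pair are conserved exactly."""
    w = v1 - v2                                  # relative velocity
    wmag = np.linalg.norm(w)
    # schematic effective deflection: sin^2(b/2) ~ G^2 (m1+m2)^2 n lnL dt / w^3
    s2 = min(1.0, 2.0*np.pi * G**2 * (m1 + m2)**2 * n_local * lnL * dt / wmag**3)
    beta = 2.0 * np.arcsin(np.sqrt(s2))
    # rotate w by beta, with a random azimuth psi about the w axis
    e1 = np.cross(w, [0.0, 0.0, 1.0])
    if np.linalg.norm(e1) < 1e-12 * wmag:        # w along z: use x instead
        e1 = np.cross(w, [1.0, 0.0, 0.0])
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(w, e1) / wmag
    psi = rng.uniform(0.0, 2.0*np.pi)
    w_new = (np.cos(beta) * w
             + wmag * np.sin(beta) * (np.cos(psi) * e1 + np.sin(psi) * e2))
    vcm = (m1 * v1 + m2 * v2) / (m1 + m2)
    return vcm + m2/(m1 + m2) * w_new, vcm - m1/(m1 + m2) * w_new
```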
A star or binary may escape if its energy exceeds a certain value, which we choose to be lower than the energy at the nominal tidal radius, in order to improve the scaling of the lifetime with $N$, as explained in @GH2008. This is the second main shortcoming of the models, as it leads to a cutoff radius of the model that is smaller than the true tidal radius, and this lowers the surface density profile in the outer parts of the system.
A difficulty in applying the Monte Carlo code to M4 is that it employs a static tide, whereas the orbit of M4 appears to be very elliptical [@Di1999]. We have to assume that a cluster can be placed in a steady tide of such a strength that the cluster loses mass at the same average rate as it would on its true orbit. Some support for this procedure comes from $N$-body modelling. @BM2003 show that clusters on an elliptical orbit between about 2.8 and 8.5kpc dissolve on a time scale intermediate between that for circular orbits at these two radii, and that the dissolution time scales in almost the same way with the size of the system. @wilkinsonetal2003 show that the core radius of a cluster on an elliptic orbit evolves in very nearly the same way as in a cluster with a circular orbit at the time-averaged galactocentric distance.
All other free parameters of the code (e.g. the coefficient of $N$ in the Coulomb logarithm) take the optimal values found in the above study
---
abstract: 'We present an evaluation of the atmospheric tau neutrino flux in the energy range between $10^2$ and $10^6$ GeV. The main source of tau neutrinos is from charmed particle production and decay. The $\nu_\tau N\rightarrow \tau X$ event rate for a detector with a water equivalent volume of 1 km$^3$ is on the order of 60-100 events per year for $E_\tau>100$ GeV, reducing to 18 events above 1 TeV. Event rates for atmospheric muon neutrino oscillations to tau neutrinos are also evaluated.'
address: ' Department of Physics and Astronomy, University of Iowa, Iowa City, Iowa 52242'
author:
- 'L. Pasquali and M. H. Reno'
title: Tau Neutrino Fluxes from Atmospheric Charm
---
INTRODUCTION
============
Recent measurements of the atmospheric neutrino flux by the Super-Kamiokande Collaboration[@sk; @superk] show a deficit of muon neutrinos in comparison to theoretical predictions, while the measured electron neutrino flux is consistent with theory assuming that all neutrino masses vanish. Earlier, lower statistics experiments already showed inconsistencies with theoretical atmospheric flux predictions [@imb]. On the basis of event rates and the zenith angle dependence of the muon neutrino deficit, the Super-Kamiokande Collaboration has shown that their results could be explained by neutrino oscillations between $\nu_\mu$ and $\nu_\tau$ [@superk]. Oscillations imply at least one non-zero neutrino mass. Definitive evidence of massive neutrinos requires modifying the standard model of electroweak interactions.
The Super-Kamiokande Collaboration measures neutrino fluxes from observations of electrons and muons in neutrino-nucleon interactions: $\nu_l + N\rightarrow l + X$. In view of the importance of the question of whether or not neutrinos have mass, one would like to see not just $\nu_\mu$ disappearance, but also $\nu_\tau$ appearance coming from $\nu_\mu\rightarrow \nu_\tau$ oscillations. Oscillation sources of $\nu_\tau$’s include oscillations on the terrestrial scale from atmospheric $\nu_\mu$’s as well as oscillations over large astronomical distances of $\nu_\mu$’s produced in, for example, active galactic nuclei [@agn]. A background to the flux of neutrinos from $\nu_\mu\rightarrow \nu_\tau$ oscillations is the flux of tau neutrinos produced directly in the atmosphere.
Tau neutrinos are produced in the atmosphere by cosmic ray collisions with nuclei in the atmosphere, which produce charm quark pairs. A fraction of the time, the emerging hadrons are $D_s$’s, which have a leptonic decay channel $D_s\rightarrow \tau \nu_\tau$ with a branching ratio of a few percent. The subsequent $\tau$ decays also contribute to the atmospheric $\nu_\tau$ flux. Heavier mesons contribute to the flux of tau neutrinos, but as we show below, they are negligible compared to the $D_s$ contribution.
In this letter, we outline the procedure to calculate the atmospheric tau neutrino flux. The details of the method as applied to atmospheric electron neutrino, muon neutrino and muon fluxes from charm decay appear in Refs. [@us] and [@tig]. We present our flux results for the neutrino energy range of $10^2-10^6$ GeV, followed by the resulting $\nu_\tau + N
\rightarrow \tau +X$ event rates. For tau energies above 100 GeV, the rate is on the order of $60-100$ events per year per km$^3$ water equivalent volume. With a 1 TeV threshold, there are on the order of 20 events. We also evaluate the expected event rate for the tau neutrino flux coming from $\nu_\mu\rightarrow \nu_\tau$ oscillations based on a range of parameters consistent with the Super-Kamiokande results [@superk]. For tau energies above a few hundred GeV, the atmospheric tau neutrino background flux from $D_s\rightarrow \nu_\tau \tau$ dominates the tau neutrino flux from atmospheric muon neutrino oscillations.
TAU NEUTRINO FLUX CALCULATION
=============================
The main source of atmospheric tau neutrinos is the leptonic decay of the $D_s$: $D_s\rightarrow \tau\nu_\tau$, followed by $\tau\rightarrow \nu_\tau X$. For relativistic particles, a semianalytic, one-dimensional approximate solution to the cascade equations describing proton, meson and lepton fluxes is a reliable approximation [@book; @lipari; @tig]. The solutions rely on factorizing the source terms in the cascade equations into factors which depend weakly on energy, times the incident cosmic ray flux, here approximated by a proton flux. The source term for $p\,$Air$\rightarrow D_s$, for a $D_s$ of energy $E$ and column depth $X$ as measured from the top of the atmosphere, is $$\begin{aligned}
S(p\rightarrow D_s) &\simeq {\phi_p(E,X)\over \lambda_p(E)}
\int_E dE_p\,
{\phi_p(E_p,0)\over \phi_p(E,0)}\,{\lambda_p(E)\over \lambda_p(E_p)}\,
{dn_{p\rightarrow D_s}\over dE}(E;E_p)\\
&\equiv {\phi_p(E,X)\over \lambda_p(E)}\, Z_{pD_s}(E) \ .\end{aligned}$$ Here $\phi_p(E,X)$ is the flux of cosmic ray protons at column depth $X$. At the top of the atmosphere ($X=0$), following Ref. [@tig], we set
&\equiv & {\phi_p(E,X)\over \lambda_p(E)} Z_{pD_s}(E) \ .\end{aligned}$$ Here $\phi_p(E,X)$ is the flux of cosmic ray protons at column depth $X$. At the top of the atmosphere ($X=0$), following Ref. [@tig], we set $$\phi_p(E,0)=1.7\ (E/{\rm GeV})^{-2.7}\ {\rm cm}^{-2}{\rm s}^{-1}
{\rm sr}^{-1}{\rm GeV}^{-1},$$ valid for energies below $5 \cdot 10^6$ GeV. In Eq. (2.1), $\lambda_p$ is the proton interaction length and $dn/dE$ is the cross-section-normalized energy distribution of the $D_s$ emerging from the proton-Air collision. The quantity $Z_{pD_s}(E)$ is called a $Z$-moment. Generically, $Z$-moments describe sources of particles of energy $E$, whether by production, decay or energy loss through scattering. A complete discussion of the $Z$-moment method of solution appears in Refs. [@book] and [@lipari]. Its recent application to atmospheric muon, muon neutrino and electron neutrino fluxes from charm decays is found in Refs. [@tig] and [@us].
Solutions to the cascade equations in terms of $Z$-moments have two separate forms, depending on whether the decay lengths are short compared to the height of production (“low energy”) or long (“high energy”). For $\nu_\tau$’s from $D_s$ (and $\tau$) decays, we confine our attention to the neutrino energy range $10^2-10^6$ GeV. These neutrino energies are well below the critical energy of $\sim 10^8$ GeV, above which decay lengths of the relativistic $D_s$’s and $\tau$’s are longer than the vertical distance to height of production. Tau neutrinos are called “prompt” in the low energy regime. The approximate solution for the $\nu_\tau+\bar{\nu}_\tau$ flux, at the surface of the Earth, is $$\phi_{\nu_\tau}(E) = {{Z_{pD_s}(E) Z_{D_s\nu_\tau}(E)}\over {1-Z_{pp}(E)}}
{\phi_p(E,0)} \ .$$ The prompt tau neutrino flux is isotropic. $Z_{pp}(E)$ accounts for the proton energy loss in proton-air collisions. For $Z_{pp}(E)$, we use the results obtained by Thunman et al. in their recent evaluation [@tig] using the Monte Carlo PYTHIA [@pythia]. A similar, energy independent value was used in Ref. [@lipari].
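As a rough illustration of how the pieces combine, the following Python sketch evaluates the prompt flux formula above with toy $Z$-moments; the parametrisations of $Z_{pD_s}$, $Z_{D_s\nu_\tau}$ and $Z_{pp}$ below are assumptions for illustration only, not the NLO pQCD or PYTHIA values used in the text.

```python
import numpy as np

def phi_p(E):
    """Primary proton flux at the top of the atmosphere:
    1.7 (E/GeV)^-2.7 cm^-2 s^-1 sr^-1 GeV^-1, valid below ~5e6 GeV."""
    return 1.7 * E ** -2.7

# Toy, slowly varying Z-moments (placeholder values, not the text's):
Z_pDs  = lambda E: 1e-4 * (E / 1e3) ** 0.1   # p Air -> D_s production
Z_Dsnu = lambda E: 5e-2 * np.ones_like(E)    # D_s -> nu_tau decay
Z_pp   = lambda E: 0.26                      # proton regeneration

def phi_nutau(E):
    """Prompt nu_tau + nubar_tau flux in the low-energy regime:
    phi = Z_pDs * Z_Dsnu / (1 - Z_pp) * phi_p, isotropic."""
    return Z_pDs(E) * Z_Dsnu(E) / (1.0 - Z_pp(E)) * phi_p(E)

E = np.logspace(2, 6, 50)   # the 10^2 - 10^6 GeV range studied here
flux = phi_nutau(E)         # cm^-2 s^-1 sr^-1 GeV^-1
```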
For $Z_{pD_s}(E)$, there are several approaches. Here we show the results from next-to-leading order (NLO) perturbative QCD production of charmed quark pairs, scaled by a factor of 0.13 to account for the fraction of $c\rightarrow D_s$ [@review]. Details of the NLO calculation in the context of the prompt muon, muon neutrino and electron neutrino fluxes appear in Ref. [@us]. A second evaluation relies on Thunman et al.’s [@tig] $Z_{pD^0}(E)$, rescaled by the ratio of $D_s$ to $D^0$ production taken to be 0.25.
The $D_s\rightarrow \nu_\tau$ decay $Z$-moments have several contributions. The most straightforward is the direct $D_s\rightarrow \nu_\tau$ in the two body decay, where [@book; @lipari] $$Z_{D_s\nu_\tau}^{(2\ body)}(E)=\int_0^{1-R_{D_s}} {dx\over x}
{Z_{pD_s}(E/x)\over Z_{pD_s}(E)}{\sigma_{pA}(E/x)\over \sigma_{pA}(E)}
{\phi_p(E/x,0)\over \phi_p(E,0)} {B\over 1-R_{D_s}}$$
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'We present the Photon-Plasma code, a modern high order charge conserving particle-in-cell code for simulating relativistic plasmas. The code uses a high order implicit field solver and a novel high order charge conserving interpolation scheme for particle-to-cell interpolation and charge deposition. It includes powerful diagnostic tools with on-the-fly particle tracking, synthetic spectra integration, 2D volume slicing, and a new method to correctly account for radiative cooling in the simulations. A robust technique for imposing (time-dependent) particle and field fluxes on the boundaries is also presented. Using a hybrid OpenMP and MPI approach, the code scales efficiently from 8 to more than 250,000 cores with almost linear weak scaling on a range of architectures. The code is tested with the classical benchmarks: particle heating, cold beam instability, and two-stream instability. We also present particle-in-cell simulations of the Kelvin-Helmholtz instability, and new results on radiative collisionless shocks.'
author:
- 'Troels Haugb[ø]{}lle'
- Jacob Trier Frederiksen
- '[Å]{}ke Nordlund'
bibliography:
- 'references.bib'
title: 'Photon-Plasma: a modern high-order particle-in-cell code'
---
Introduction
============
Particle-In-Cell models have gained widespread use in astrophysics as a means to understand plasma dynamics, particularly in collisionless plasmas, where non-linear instabilities can play a crucial role for launching plasma waves and accelerating particles. The advent of tera- and now peta-flop computers has made it possible to study the macroscopic properties of both relativistic and non-relativistic plasmas from first principles in large scale 2D and 3D models, and sophisticated methods, such as the extraction of synthetic spectra, are bridging the gap between models and observations.
While Particle-In-Cell codes were some of the first codes to be developed for computers[@Harlow:1957; @Harlow:1964], and several classic books have been written on the subject[@birdsall:1985; @hockney:1988], numerical methods that did not exist then are in use in the community today, and the temporal and spatial scales of the problems have grown enormously. Furthermore, in the context of astrophysics the modeling of relativistic plasmas has become of prime importance.
In this paper we present the relativistic particle-in-cell [<span style="font-variant:small-caps;">PhotonPlasma</span> code]{} in use at the University of Copenhagen, and the numerical and technical methods implemented in the code. The code was initially created during the ’Thinkshop’ workshop at Stockholm University in 2005, and has since then been under continuous development. It has been used on many architectures from SGI, IBM, and SUN shared memory machines to modern hybrid Linux GPU clusters. Currently our main platforms of choice are Blue-Gene and Linux clusters with 8-16 cores per node and InfiniBand. We have also developed a special GPU version that achieves a 20x speedup compared to a single 3 GHz Nehalem core (to be presented in a forthcoming paper). The code has excellent scaling, with more than 80% efficiency on Blue-Gene/P scaling weakly from 8 to 262,144 cores, and on Blue-Gene/Q from 64 to 524,288 threads. The I/O and diagnostics are fully parallelized, and on some installations we reach more than 45 GB/s read and 8 GB/s write I/O performance.
In Sections II and III we introduce the underlying equations of motion, and the numerical techniques for solving the equations. We present our formulation of radiative cooling in Section IV, and in Section V the various initial and boundary conditions supported by the code. Section VI presents on-the-fly diagnostics, including the extraction of synthetic spectra. Section VII describes the binary collision modules, while Section VIII contains test problems. In Section IX we discuss aspects of parallelization and scalability, and finally in Section X we finish with concluding remarks.
Equations of motion
===================
The [<span style="font-variant:small-caps;">PhotonPlasma</span> code]{} is used to find an approximate solution to the relativistic Maxwell-Vlasov system $$\label{eq:vlasov}
{\frac{\partial f^s}{\partial t}} + {\bm{u}}\cdot\frac{\partial f^s}{\partial {\bm{x}}} +
\frac{q^s}{m^s}({\bm{E}}+ {\bm{u}}\times{\bm{B}})\cdot\frac{\partial f^s}{\partial ({\bm{u}\gamma})} = \mathcal{C}$$ $$\begin{aligned}
\label{eq:gauss}
{\nabla\cdot}{\bm{E}}& = \frac{\rho_c}{\epsilon_0} \\
\label{eq:divb} {\nabla\cdot}{\bm{B}}& = 0 \\
\label{eq:faraday} {\frac{\partial {\bm{B}}}{\partial t}} & = - {\nabla\times}{\bm{E}}\\
\label{eq:ampere} \frac{1}{c^2}{\frac{\partial {\bm{E}}}{\partial t}} & = {\nabla\times}{\bm{B}}- \mu_0{\bm{J}}\,,\end{aligned}$$ where $s$ denotes particle species in the plasma (electrons, protons, …), $\gamma = {(1-(u/c)^2)^{-1/2}}$ is the Lorentz factor, and $\mathcal{C} \equiv \partial{f^s}/\partial{t}\big|_{coll}$ denotes some collision operator.
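To make the field half of Eqs. \[eq:faraday\]–\[eq:ampere\] concrete, here is a minimal 1D leapfrog sketch in Python, in normalised units. It is only the textbook explicit scheme on a staggered mesh, not the high-order implicit solver used in the [<span style="font-variant:small-caps;">PhotonPlasma</span> code]{}; the grid size, time step, and seed perturbation are arbitrary toy choices.

```python
import numpy as np

c, mu0 = 1.0, 1.0                  # normalised units (assumption)
nx, dx = 256, 1.0
dt = 0.5 * dx / c                  # CFL-limited explicit time step
Ey = np.zeros(nx); Bz = np.zeros(nx); Jy = np.zeros(nx)
Ey[nx // 2] = 1.0                  # toy seed perturbation

def field_step(Ey, Bz, Jy):
    # Faraday: dBz/dt = -dEy/dx, with Bz on half-integer mesh points
    Bz[:-1] -= dt * (Ey[1:] - Ey[:-1]) / dx
    # Ampere: (1/c^2) dEy/dt = -dBz/dx - mu0 Jy (1D reduction)
    Ey[1:] -= dt * c**2 * ((Bz[1:] - Bz[:-1]) / dx + mu0 * Jy[1:])
    return Ey, Bz

for _ in range(100):               # leapfrog the two updates in time
    Ey, Bz = field_step(Ey, Bz, Jy)
```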
In a completely collisionless plasma $\mathcal{C}$ is zero, but in the code we also allow for binary collisions between particles. As shown below in the tests, discretization effects in the interpolation of fields and sources between the mesh and the particles and the integration of the equations of motion lead to a non-zero, non-physical, collision term, which should be minimized, especially in the case of collisionless plasmas, but also with respect to any collisional modeling term introduced explicitly. The charge and current densities are given by taking moments of the distribution function over momentum space $$\begin{aligned}
\label{eq:rho}
\rho_c({\bm{x}}) = \int \textrm{d}{\bm{u}}\sum_s q^s f^s({\bm{x}},{\bm{u}}) \\
{\bm{J}}({\bm{x}}) = \int \textrm{d}{\bm{u}}\sum_s {\bm{u}}\, q^s f^s({\bm{x}},{\bm{u}}) \,.\end{aligned}$$ To find an approximate representation of this six-dimensional system, the particle-in-cell method introduces so-called macro particles to sample phase space. Macro particles can either be thought of as Lagrangian markers that measure the distribution function in momentum space at certain positions, or as ensembles of particles that maintain the same momentum while moving through the volume. Given the linearity of the Vlasov equation, if the trajectory of a macro particle is a solution, then a set of macro particles is also a solution to the system. Other continuum fields, which only depend on position, are sampled on a regular mesh. Macro particles are characterized by their positions ${\bm{x}}_p$ and proper velocities ${\bm{p}}_p={\bm{u}\gamma}$. They have a weight factor $w_p$, giving the number density of physical particles inside a macro particle, and a physical shape $S$. The shape is chosen to be a symmetric, positive and separable function, with a volume that is normalized to 1. For example in three dimensions it can be written $S({\bm{x}}-{\bm{x}}_p) = S_\textrm{1D}(x-x_p)S_\textrm{1D}(y-y_p)S_\textrm{1D}(z-z_p)$, and $\int S({\bm{x}}-{\bm{x}}_p) \textrm{d}{\bm{x}}= 1$. The full distribution function of a single macro particle is then $$\label{eq:pseudop}
f_p({\bm{x}},{\bm{p}}) = w_p\, \delta({\bm{p}}-{\bm{p}}_p)\,S({\bm{x}}-{\bm{x}}_p)\,.$$ Inserting the above in [Eq. \[eq:vlasov\]]{} and taking moments [@Lapenta:2006; @hockney:1988; @birdsall:1985] we find the equations of motion for a single macro particle, $$\begin{aligned}
\label{eq:pmotion}
\frac{\textrm{d}{\bm{x}}_p}{\textrm{d}t} &= {\bm{u}}_p &
\frac{\textrm{d}{\bm{u}}_p\gamma_p}{\textrm{d}t} &= \frac{q}{m}\left({\bm{E}}_p + {\bm{u}}_p \times {\bm{B}}_p \right)\,,\end{aligned}$$ where the electromagnetic fields are sampled at the macro particle position through the shape functions $$\begin{aligned}
{\bm{E}}_p &= {\bm{E}}({\bm{x}}_p) = \int \!\! d{\bm{x}}\, {\bm{E}}({\bm{x}}) \, S({\bm{x}}-{\bm{x}}_p) \\
{\bm{B}}_p &= {\bm{B}}({\bm{x}}_p) = \int \!\! d{\bm{x}}\, {\bm{B}}({\bm{x}}) \, S({\bm{x}}-{\bm{x}}_p)\,.\end{aligned}$$
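A discrete version of this sampling integral, using the lowest-order (linear, cloud-in-cell) member of the separable shape-function family, might look as follows in Python; the actual code uses higher-order shapes, so this is only a sketch of the principle, with toy field and particle values.

```python
import numpy as np

def gather_1d(field, xp, dx):
    """Interpolate a mesh `field` to particle position `xp`: a discrete
    E_p = int dx E(x) S(x - x_p) with a linear (CIC) shape of width dx."""
    s = xp / dx
    i = int(np.floor(s))        # index of the mesh point left of xp
    w = s - i                   # fractional distance to the right node
    return (1.0 - w) * field[i] + w * field[i + 1]

E_mesh = np.sin(2 * np.pi * np.arange(64) / 64)   # toy mesh field
E_p = gather_1d(E_mesh, xp=10.3, dx=1.0)
```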
|
{
"pile_set_name": "ArXiv"
}
|
It was suggested more than thirty years ago by Fulde and Ferrell[@ff], and Larkin and Ovchinnikov[@lo], that an inhomogeneous superconductor with an order parameter that oscillates spatially may be stabilized by a large external magnetic or internal exchange field. Such a Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) state has never been observed in conventional low-$T_c$ superconductors. Recently it has attracted renewed interest in the context of organic, heavy-fermion, and high-$T_c$ cuprate superconductors[@gloos; @yin; @norman; @rainer; @shimahara; @murthy; @dupuis; @modler; @tachiki; @geg; @sh2; @maki; @samokhin; @buzdin; @yang; @symington; @pickett]. These new classes of superconductors are believed to provide conditions that are favorable to the formation of the FFLO state, because many of them are (i) strongly type II superconductors, so that the upper critical field $H_{c2}$ can easily approach the Pauli paramagnetic limit; and (ii) layered compounds, so that when a magnetic field is applied parallel to the conducting plane, the orbital effect is minimal, and the Zeeman effect (which is the driving force for the formation of the FFLO state) dominates the physics. Indeed, some experimental indications of the existence of the FFLO state have been reported[@gloos; @modler; @geg; @symington].
The main difficulty in the experimental search for the FFLO state is that just like the BCS state, the FFLO state is a superconducting state. The distinction between the two is a subtle difference in the structure of the superconducting order parameter, which is difficult to detect using ordinary experimental techniques. Previous experiments have focused on thermodynamic signatures of possible phase transitions from the BCS to FFLO state, which is believed to be first order (see, however, Ref. ). But such signatures can be caused by other phase transitions that have nothing to do with the superconducting order parameter. Thus it is very difficult to establish the presence of an FFLO state this way without any ambiguity.
In this paper we propose using the Josephson effect to detect the existence of FFLO states. Our proposal has some similarity in spirit to the basic ideas behind the “phase sensitive" experiments[@phase] that established the predominant $d$-wave symmetry of the order parameter in cuprate superconductors. Specifically, we predict: (i) The Josephson current in a Josephson junction between a conventional BCS superconductor and an FFLO superconductor is suppressed, due to the difference in momenta of the order parameters. (ii) The Josephson current may be recovered by applying a properly chosen magnetic field in the junction, with field strength and direction depending on the momentum of the order parameter of the FFLO superconductor; it thus provides a way to measure the momentum of the order parameter directly.
In the rest of the paper, we demonstrate the above effects first by using the Ginsburg-Landau theory, and then by presenting a microscopic derivation. We discuss the experimental implication and feasibility of our proposal toward the end of the paper.
[*Ginsburg-Landau Theory*]{}. The effects we predict are most easily demonstrated using a Ginsburg-Landau theory. For simplicity we consider a two-dimensional BCS superconductor, described by a spatially dependent superconducting order parameter $\Psi_{BCS}({\bf r})$, which is coupled to a two-dimensional FFLO superconductor, described by an order parameter $\Psi_{FFLO}({\bf r})$[@note]. We consider the two Josephson junction geometries shown in Figure 1. Since the physics for the two geometries is similar we focus our discussion on geometry a) and simply state the results for geometry b). For geometry a) the Josephson coupling term in the free energy takes the form (in the absence of any magnetic field) $$H_J=-t\int{d^2{\bf r}}[\Psi_{FFLO}^*({\bf r})\Psi_{BCS}({\bf r})+ c.c.].
\label{eq1}$$ In the ground state of a BCS superconductor, $\Psi_{BCS}({\bf r})= \psi_0$ is a constant. However, in an FFLO superconductor the order parameter is a superposition of components carrying finite momenta: $$\Psi_{FFLO}({\bf r})=\sum_{m}\psi_m e^{i{\bf k}_m\cdot{\bf r}},$$ and is oscillatory in space. In the absence of magnetic flux inside the junction, the total Josephson current is $$\begin{aligned}
I_J&=& {\rm Im}\left[t
\int{d^2{\bf r}}\Psi^*_{BCS}({\bf r})\Psi_{FFLO}({\bf r})\right]
\nonumber\\
&=&\sum_m {\rm Im}
\left[t\psi_0^*\psi_m\int{d^2{\bf r}}e^{i{\bf k}_m\cdot{\bf r}}\right].\end{aligned}$$ Clearly, due to the oscillatory nature of the integrand, the Josephson current is suppressed in such a junction. The same result is clearly true for geometry b).
Mathematically, the reason that the Josephson current is suppressed here is similar to the suppression of Josephson current by an applied magnetic field in an ordinary Josephson junction between two BCS superconductors[@tinkham]. However, the physics is very different: here the suppression is due to the spatial oscillation of the [*order parameter*]{} in the FFLO state, while in the case of ordinary Josephson junction in a magnetic field, the phase of the Josephson tunneling [*matrix element*]{} is oscillatory (in a proper gauge choice). Nevertheless, the mathematical similarity allows these two effects to [*cancel*]{} each other and restore the Josephson current, as we demonstrate below.
Consider geometry a). Imagine applying a magnetic field ${\bf B}\bot \hat{z}$ parallel to the planes, where $\hat{z}$ is a unit vector along the normal direction of the plane. Using the gauge ${\bf A}({\bf r})={\bf r}\times {\bf B}$, we have ${\bf A}({\bf r})=A({\bf r})\hat{z}$. The appearance of the ${\bf A}$ field affects the phase of the Josephson tunneling matrix element only; the in-plane properties in the two individual superconductors are unaffected because the ${\bf A}$ field is perpendicular to the planes. Specifically, the Josephson coupling term in the free energy becomes $$H_J=-t\int{d^2{\bf r}}[e^{{2ieA({\bf r})d\over \hbar c}}
\Psi_{FFLO}^*({\bf r})\Psi_{BCS}({\bf r})+ c.c.],$$ and the total Josephson current takes the form $$\begin{aligned}
I_J&=& {\rm Im}\left[t
\int{d^2{\bf r}}e^{{-2ieA({\bf r})d\over \hbar c}}\Psi^*_{BCS}({\bf r})
\Psi_{FFLO}({\bf r})\right]
\nonumber\\
&=&\sum_m {\rm Im}\left[t\psi_0^*\psi_m\int{d^2{\bf r}}
e^{{-2ieA({\bf r})d\over \hbar c}}e^{i{\bf k}_m\cdot{\bf r}}\right].\end{aligned}$$ Here $d$ is the distance between the two planes. It is clear from the equation above that the two oscillatory factors cancel each other when $${\bf B}={\hbar c\over 2ed}\hat{z}\times {\bf k}_m.
\label{restore}$$ At this particular ${\bf B}$, the Josephson current gets partially restored. One can determine the momenta of the order parameter in the FFLO state by searching for the ${\bf B}$’s that restore the Josephson current.
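The suppression and its field-induced restoration are easy to see numerically. The Python sketch below evaluates the Josephson current integral for a single FFLO component on a finite $L\times L$ junction; the momentum ${\bf k}_m$, the amplitude $t\psi_0^*\psi_m$, and the replacement of the phase $2eA({\bf r})d/\hbar c$ by a linear form ${\bf b}\cdot{\bf r}$ are all toy assumptions for illustration, not values from the text.

```python
import numpy as np

L, N = 50.0, 256
x = np.linspace(-L / 2, L / 2, N)
X, Y = np.meshgrid(x, x, indexing='ij')
k_m = np.array([0.5, 0.0])      # FFLO momentum (assumed toy value)
tpsi = 1j                       # t * conj(psi_0) * psi_m, toy amplitude

def I_J(b):
    """Im[t psi0* psi_m int d^2r e^{i(k_m - b).r}], where b.r stands in
    for the field-induced phase 2 e A(r) d / (hbar c), linear in r."""
    phase = np.exp(1j * ((k_m[0] - b[0]) * X + (k_m[1] - b[1]) * Y))
    return float(np.imag(tpsi * phase.sum() * (L / N) ** 2))

print(I_J(np.array([0.0, 0.0])))  # suppressed: oscillatory integrand
print(I_J(k_m))                   # restored when the field matches k_m
```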
For geometry b) consider applying a perpendicular magnetic field ${\bf B}\parallel \hat{z}$. Arguments similar to those of the last paragraph imply that the Josephson current becomes restored only when the momentum of the FFLO order parameter is along the junction (note that typically this momentum will lie along high symmetry directions of the superconductor). In this case the Josephson current is partially restored when $${\bf B}=\frac{\hbar c}{2e(d+\lambda_1+\lambda_2)}\hat{z}
\label{eq2}$$ where $\lambda_1$ ($\lambda_2$) is the penetration depth of the FFLO (BCS) superconductor for the field applied normal to the superconducting plane. The appearance of the factor $d+\lambda_1+\lambda_2$ in Eq. \[eq2\] is because the magnetic field is assumed to enter each superconductor a distance equal to the penetration depth from the edge of the junction.
[*Microscopic Derivation*]{}. These effects can also be demonstrated using a microscopic approach, as we illustrate below. For simplicity and clarity, we take the zero temperature limit, and initially consider the usual Josephson effect between two BCS superconductors. When two superconductors are placed in proximity so that electrons can tunnel from one to the other, the total energy of the system contains a term that depends on the phase difference $\phi$ between the order parameters of the two superconductors: $$E_J(\phi)=E_J\cos\phi;$$ and the Josephson current is $$I_J(\phi)={2eE_J\over\hbar}\sin\phi\,.$$
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: |
Random variables equidistributed on convex bodies have received quite a lot of attention in the last few years. In this paper we prove the negative association property (which generalizes the subindependence of coordinate slabs) for generalized Orlicz balls. This allows us to give a strong concentration property, along with a few moment comparison inequalities. Also, the theory of negatively associated variables is being developed in its own right, which allows us to hope that more results will become available.
Moreover, a simpler proof of a more general result for $\ell_p^n$ balls is given.
author:
- 'Marcin Pilipczuk (malcin@duch.mimuw.edu.pl)'
- |
Jakub Onufry Wojtaszczyk[^1] (onufry@duch.mimuw.edu.pl)\
Department of Mathematics, Computer Science and Mechanics\
University of Warsaw\
ul. Banacha 2, 02-097 Warsaw, Poland\
title: The negative association property for the absolute values of random variables equidistributed on a generalized Orlicz ball
---
Introduction
============
Notation
--------
We shall begin by introducing the notation used throughout the paper. For any set $A$ by $\1_A$ we shall denote the characteristic function of $A$. As usual, $\R$ and ${\R_{+}}$ will denote the reals and the non-negative reals respectively. By $\R^k$ we shall mean the $k$-dimensional Euclidean space equipped with the standard scalar product $\is{\cdot}{\cdot}$, the Lebesgue measure denoted by $\lambda$ or $\lambda_k$ and a system of orthonormal coordinates $x_1,x_2,\ldots,x_k$. By ${\R_{+}}^k$ we mean the generalized positive quadrant, that is the set $\{(x_1,\ldots,x_k) \in \R^k
: \forall_i\ x_i \geq 0\}$. For a given set $K\subset \R^k$ by ${K_{+}}$ we shall denote the positive quadrant of $K$, that is $K \cap {\R_{+}}^k$. For a given set $A$ by ${{\bar{A}}}$ we will denote the complement of $A$.
For a measure $\mi$ on $\R^n$ and an affine subspace $H \subset
\R^n$, by the [*projection of $\mi$ onto $H$*]{} we mean the measure $\mi_H$ defined by $\mi_H(C) = \mi(\{x \in \R^n : P(x) \in C\})$, where $P$ is the orthogonal projection onto $H$. If $\mi$ is given by a density function $m$ and $K \subset H \subset \R^n$, then by the [*restriction of $\mi$ to $K$*]{} we mean the measure $\mi_{|K}$ on $H$ given with the density $m \cdot \1_K$. By the support of a function $m:X \to \R$, denoted ${{\rm supp}}m$, we mean ${{\rm cl}}\{x \in X : m(x) \neq 0\}$. If $\mi$ is a measure, then by ${{\rm supp}}\mi$ we mean the smallest closed set $A$ such that $\mi(\bar{A}) = 0$. In the cases we consider, when $\mi$ will be given by a density $m$, we will always have ${{\rm supp}}\mi = {{\rm supp}}m$.
We shall call a set $K \subset \R^n$ a [*symmetric body*]{} if it is convex, bounded, central-symmetric (i.e. if $x \in K$ then $-x \in K$) and has a non-empty interior. A body $K \subset \R^n$ is called [*1-symmetric*]{} if for any $(\eps_1,\ldots,\eps_n) \in \{-1,1\}^n$ and any $(x_1,\ldots,x_n) \in K$ we have $(\eps_1 x_1, \ldots, \eps_n
x_n) \in K$. Such a body is sometimes called [*unconditional*]{}.
A function $f : {\R_{+}}{{\rightarrow}}{\R_{+}}\cup \{\infty\}$ is called a [*Young function*]{} if it is convex, $f(0) = 0$ and $\exists_x : f(x) \neq 0$, $\exists_{x\neq 0} : f(x) \neq \infty$. If we have $n$ Young functions $f_1,\ldots,f_n$, then the set $$K = \{(x_1,\ldots,x_n) : \sum_{i=1}^n f_i(|x_i|) \leq 1\}$$ is a 1-symmetric body in $\R^n$. Such a set is called a [*generalized Orlicz ball*]{}, also known in the literature as a modular sequence space ball.
We shall call a Young function $f$ [*proper*]{} if it does not attain the $+\infty$ value and $f(x) > 0$ for $x > 0$. A generalized Orlicz ball is called [*proper*]{} if it can be defined by proper Young functions.
If the coordinates of the space $\R^n$ are denoted $x_1,x_2,\ldots,x_n$, the appropriate Young functions will be denoted $f_1,f_2,\ldots,f_n$, with the assumption $f_i$ is applied to $x_i$. If some of the coordinates are denoted $x,y,z,\ldots$, the appropriate Young functions will be denoted $f_x, f_y, f_z, \ldots$, with the assumption that $f_x$ is applied to $x$, $f_y$ to $y$ and so on.
A function $f: \R {{\rightarrow}}\R$ is called increasing (decreasing) if $x \geq y$ implies $f(x) \geq f(y)$ ($f(x) \leq f(y)$) — we do not require a sharp inequality. A function $f:\R^k {{\rightarrow}}\R$ or $f:{\R_{+}}^k {{\rightarrow}}\R$ is called [*coordinate-wise increasing (decreasing)*]{}, if for $x_i \geq y_i$, $i = 1,2,\ldots,k$ we have $f(x_1,\ldots,x_k) \geq f(y_1,\ldots,y_k)$ ($f(x_1,\ldots,x_k) \leq f(y_1,\ldots,y_k)$). A set $A \subset {\R_{+}}^k$ is called a [*c-set*]{}, if for $x_i \geq y_i \geq 0$, $i = 1,2,\ldots,k$ and $(x_1,\ldots,x_k) \in A$ we have $(y_1,\ldots,y_k) \in A$. For a coordinate-wise increasing function $f : {\R_{+}}^k {{\rightarrow}}\R$ the sets $f^{-1}((-\infty,t])$ are c-sets, and conversely the characteristic function of a c-set is a coordinate-wise decreasing function on ${\R_{+}}^k$. Similarly, a function $f : {\R_{+}}^k {{\rightarrow}}\R$ is [*radius-wise increasing*]{} if $f(tx_1,tx_2,\ldots,tx_k) \geq f(x_1,x_2,\ldots,x_k)$ for $t > 1$, and a set $A$ is a [*radius-set*]{} if its characteristic function is radius-wise decreasing.
We say a function $f : \R^n {{\rightarrow}}\R_+$ is [*log-concave*]{} if $\ln f$ is concave. A measure $\mi$ on $\R^n$ is called log-concave if for any nonempty $A, B \subset \R^n$ and $t \in (0,1)$ we have $\mi(tA + (1-t)B) \geq \mi(A)^t\mi(B)^{1-t}$. A classic theorem by Borell (see [@Bo]) states that any log-concave measure not concentrated on any affine hyperplane has a density function, and that function is log-concave. A random vector in $\R^n$ is said to be log-concave if its distribution is log-concave.
A sequence of random variables $(X_1,\ldots,X_n)$ is said to be [*negatively associated*]{}, if for any coordinate-wise increasing bounded functions $f,g$ and disjoint sets $\{i_1,\ldots,i_k\}$ and $\{j_1,\ldots,j_l\} \subset \{1,\ldots,n\}$ we have $$\label{NegAss} {{\rm Cov}}\big(f(X_{i_1},\ldots,X_{i_k}), g(X_{j_1},\ldots,X_{j_l})\big) \leq 0.$$
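A quick Monte Carlo sanity check of inequality (\[NegAss\]) for the absolute values is straightforward; the Python sketch below uses assumed Young functions $f_i(x) = |x|^{p_i}$ and rejection sampling from the bounding cube, so it illustrates the statement rather than proving anything.

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([2.0, 4.0, 3.0])            # assumed Young exponents

def sample_ball(n):
    """Uniform samples of |X| for X equidistributed on the generalized
    Orlicz ball sum_i |x_i|^{p_i} <= 1, by rejection from [-1, 1]^3."""
    pts = []
    while len(pts) < n:
        x = rng.uniform(-1, 1, size=3)   # f_i(1) = 1, so |x_i| <= 1
        if np.sum(np.abs(x) ** p) <= 1.0:
            pts.append(np.abs(x))
    return np.array(pts)

X = sample_ball(20000)
f = X[:, 0]                 # increasing function of coordinate set {1}
g = X[:, 1] + X[:, 2]       # increasing function of the disjoint set {2, 3}
print(np.cov(f, g)[0, 1])   # should be <= 0 up to sampling noise
```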
We say that the sequence $(X_j)$ is [*weakly negatively associated*]{} if inequality (\[NegAss\]) holds for $l = 1$, and [*very weakly
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'In order to mimic the human ability of continual acquisition and transfer of knowledge across various tasks, a learning system needs the capability for continual learning, effectively utilizing the previously acquired skills. As such, the key challenge is to transfer and generalize the knowledge learned from one task to other tasks, avoiding forgetting and interference of previous knowledge and improving the overall performance. In this paper, within the continual learning paradigm, we introduce a method that *effectively forgets* the less useful data samples continuously and allows beneficial information to be kept for training of the subsequent tasks, in an online manner. The method uses statistical leverage score information to measure the importance of the data samples in every task and adopts the frequent directions approach to enable a continual or life-long learning property. This effectively maintains a constant training size across all tasks. We first provide mathematical intuition for the method and then demonstrate its effectiveness in avoiding catastrophic forgetting, as well as its computational efficiency, on continual learning of classification tasks when compared with the existing state-of-the-art techniques.'
author:
- |
Dan Teng\
Neuri\
`dan@neuri.ai`\
Sakyasingha Dasgupta\
Neuri\
`sakya.dasgupta@gmail.com `\
title: Continual Learning via Online Leverage Score Sampling
---
Introduction {#sec: intro}
============
It is a typical practice to design and optimize machine learning (ML) models to solve a single task. On the other hand, humans, instead of learning over isolated complex tasks, are capable of generalizing and transferring knowledge and skills learned from one task to another. This ability to remember, learn and transfer information across tasks is referred to as continual learning [@Thrun1995; @Ruvolo2013; @Hassabis2017; @Parisi2019]. The major challenge for creating ML models with continual learning ability is that they are prone to *catastrophic forgetting* [@McClelland1995; @McCloskey1989; @Goodfellow2013; @French1999]. ML models tend to forget the knowledge learned from previous tasks when re-trained on new observations corresponding to a different (but related) task. Specifically when a deep neural network (DNN) is fed with a sequence of tasks, the ability to solve the first task will decline significantly after training on the following tasks. The typical structure of DNNs by design does not possess the capability of preserving previously learned knowledge without interference between tasks or catastrophic forgetting. In order to overcome catastrophic forgetting, a learning system is required to continuously acquire knowledge from the newly fed data as well as to prevent the training of the new data samples from destroying the existing knowledge.
In this paper, we propose a novel approach to continual learning with DNNs that addresses the catastrophic forgetting issue, namely a technique called *online leverage score sampling (OLSS)*. In OLSS, we progressively compress the input information learned thus far, along with the input from current task and form more efficiently condensed data samples. The compression technique is based on the statistical leverage scores measure, and it uses the concept of frequent directions in order to connect the series of compression steps for a sequence of tasks.
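A minimal sketch of the two numerical ingredients named above (row leverage scores, and a frequent-directions style shrinkage that keeps the summary at a constant size) is given below in Python. The sampling rule, the sketch size, and the interplay with the actual DNN training loop are simplified toy choices; only the two primitives themselves are standard.

```python
import numpy as np

def leverage_scores(A):
    """Statistical leverage score of each row of A: the squared row
    norms of U in a thin SVD A = U S V^T."""
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return np.sum(U**2, axis=1)

def frequent_directions_step(sketch, new_rows, ell):
    """Append new rows, then shrink back to at most ell rows by
    subtracting the ell-th squared singular value (frequent directions)."""
    B = np.vstack([sketch, new_rows])
    _, s, Vt = np.linalg.svd(B, full_matrices=False)
    if len(s) > ell:
        s2 = np.maximum(s[:ell]**2 - s[ell - 1]**2, 0.0)
    else:
        s2, ell = s**2, len(s)
    return np.sqrt(s2)[:, None] * Vt[:ell]

rng = np.random.default_rng(1)
sketch = np.zeros((0, 10))
for task in range(5):                                # toy task sequence
    A_i = rng.normal(size=(100, 10))                 # task input matrix
    keep = np.argsort(leverage_scores(A_i))[-50:]    # high-leverage rows
    sketch = frequent_directions_step(sketch, A_i[keep], ell=8)
```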
When thinking about continual learning, a major source of inspiration is the ability of biological brains to learn without destructive interference between older memories and to generalize knowledge across multiple tasks. In this regard, the typical approach is enabling some form of episodic memory in the network and consolidation [@McClelland1995] via replay of older training data. However, this is an expensive process and does not scale well for learning a large number of tasks. As an alternative, taking inspiration from the neuro-computational models of complex synapses [@BennaFusi2016], recent work has focused on assigning some form of importance to parameters in a DNN and performing task-specific synaptic consolidation [@Kirkpatrick2017; @Zenke2017]. Here, we take a very different view of continual learning and find inspiration in the brain's ability for dimensionality reduction [@Pang2016] to extract meaningful information from its environment and drive behavior. As such, we enable progressive dimensionality reduction (in terms of number of samples) of previous task data combined with new task data, in order to preserve only a good summary (discarding the less relevant information, i.e., effective forgetting) before further learning. Repeating this process in an online manner, we enable continual learning for a large sequence of tasks. Much like the brain, our method employs a central strategy of striking a balance between dimensionality reduction of task-specific data and dimensionality expansion as processing progresses throughout the hierarchy of the neural network [@FusiMiller2016].
Related Work
------------
Recently, a number of approaches have been proposed to adapt a DNN model to the continual learning setting, from an adaptive model architecture perspective such as adding columns or neurons for new tasks [@Rusu2016; @Yoon2018; @Schwarz2018]; model parameter adjustment or regularization techniques like, imposing restrictions on parameter updates [@Kirkpatrick2017; @Zenke2017; @Li2016; @Titsias2019]; memory revisit techniques which ensure model updates towards the optimal directions [@Lopez-Paz2017; @Rebuffi2017; @Shin2017]; Bayesian approaches to model continuously acquired information [@Titsias2019; @Nguyen2018; @Garnelo2018]; or on broader domains with approaches targeted at different setups or goals such as few-shot learning or transfer learning [@Finn2017; @Nichol2018].
In order to demonstrate our idea in comparison with the state-of-the-art techniques, we briefly discuss the following three popular approaches to continual learning:\
I) **Regularization**: It constrains or regularizes the model parameters by adding additional terms in the loss function that prevent the model from deviating significantly from the parameters important to earlier tasks. Typical algorithms include elastic weight consolidation (EWC) [@Kirkpatrick2017] and continual learning through synaptic intelligence (SI) [@Zenke2017].\
II) **Architectural modification**: It revises the model structure successively after each task in order to provide more memory and additional free parameters in the model for new task input. Recent examples in this direction are progressive neural networks [@Rusu2016] and dynamically expanding networks [@Yoon2018].\
III) **Memory replay**: It stores data samples from previous tasks in a separate memory buffer and retrains the new model based on both the new task input and the memory buffer. Popular algorithms here are gradient episodic memory (GEM) [@Lopez-Paz2017], incremental classifier and representation learning (iCaRL) [@Rebuffi2017].
Among these approaches, regularization is particularly prone to saturation of learning when the number of tasks is large. The additional regularization term in the loss function soon loses its effectiveness when important parameters from different tasks have been overlapped too many times. Modifications of network architectures like progressive networks resolve the saturation issue, but do not scale when the number and complexity of tasks increase. The scalability problem is also prominent for current memory replay techniques, which often suffer from high memory and computational costs.
Our approach resembles the use of memory replay since it preserves the original input data samples from earlier tasks for further training. However, it does not require extra memory for training and is cost efficient compared to previous memory replay methods. It also makes more effective use of the model structure by exploiting the model capacity to adapt with more tasks, in contrast to constant addition of neurons or additional network layers for new tasks. Furthermore, unlike the importance assigned to model specific parameters when using regularization methods, we assign importance to the training data that is relevant in effectively learning new tasks, while forgetting less important information.
Online Leverage Score Sampling {#sec: olss}
==============================
Before presenting the idea, we first setup the problem: Let $\{(A_1, B_1), (A_2, B_2), ..., (A_i, B_i), ...\}$ represent a sequence of tasks, each task consists of $n_i$ data samples and each sample has a feature dimension $d$ and an output dimension $m$, i.e., input $A_i\in\mathbb{R}^{n_i\times d}$ and true output $B_i\in\mathbb{R}^{n_i\times m}$. Here, we assume the feature and output dimensions are fixed for all tasks [^1]. The goal is to train a DNN over the sequence of tasks and ensure it performs well on all of them, without catastrophic forgetting. Here, we consider that the network’s architecture stays the same and the tasks are received in a sequential manner. Formally, with $f$ representing a DNN, our objective is to minimize the loss [^2]: $$\label{eq: main}
\min_f\norm{f(A) - B}_2^2 \text{ where } A = \begin{bmatrix} A_1 \\ A_2 \\... \\ A_i \\ ...
\end{bmatrix} \text{ and } B = \begin{bmatrix} B_1 \\ B_2 \\ ... \\ B_i \\ ... \end{bmatrix}.$$ Under this setup, we look at some of the existing models:
Online EWC trains $f$ on the $i$th task $(A_i, B_i)$ with a loss function containing additional penalty terms $$\min_f\norm{f(A_i) - B_i}_2^2 + \sum_{j=1}^{i-1}\sum_{p=1}^{w} \lambda F_p^j (\theta_p - \theta_p^{j*})^2,$$ where $\lambda$ indicates the importance level of the previous tasks compared to task $i$, $F_p
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'In this paper, we discuss possible qualitative approaches to the problem of KPZ universality. Throughout the paper, our point of view is based on the geometrical and dynamical properties of minimisers and shocks forming interlacing tree-like structures. We believe that the KPZ universality can be explained in terms of statistics of these structures evolving in time. The paper is focussed on the setting of the random Hamilton–Jacobi equations. We formulate several conjectures concerning global solutions and discuss how their properties are connected to the KPZ scalings in dimension 1+1. In the case of general viscous Hamilton–Jacobi equations with non-quadratic Hamiltonians, we define generalised directed polymers. We expect that their behaviour is similar to the behaviour of classical directed polymers, and present arguments in favour of this conjecture. We also introduce a new renormalisation transformation defined in purely geometrical terms and discuss conjectural properties of the corresponding fixed points. Most of our conjectures are wide open, supported only by partial rigorous results for particular models.'
author:
- Yuri Bakhtin
- Konstantin Khanin
bibliography:
- 'Burgers.bib'
title: 'On global solutions of the random Hamilton–Jacobi equations and the KPZ problem'
---
Introduction {#sec:intro}
============
The problem of the KPZ phenomenon and universality has been one of the most active directions in statistical physics in the last decade, see [@Alberts-Khanin-Quastel:MR3189070], [@Amir-Corwin-Quastel:CPA20347], [@Baik-Deift-Johansson:MR1682248], [@Balazs-Cator-Seppalainen:MR2268539], [@Borodin-Corwin:MR3152785], [@Borodin-Corwin-Ferrari:MR3207195], [@Borodin-Corwin-Ferrari-Vetho:MR3366125], [@Borodin-Ferrari:MR2438811], [@Borodin-Ferrari-Sasamoto:MR2430639], [@Borodin-Gorin:MR3526828], [@Calabrese-Le-Doussal-Rosso:0295-5075-90-2-20002], [@Calabrese-Le-Doussal:PhysRevLett.106.250603], [@Cator-Groeneboom:MR2257647], [@CaPi-ptrf], [@Corwin:MR2930377], [@Corwin-Quastel-Remenik:MR3373642], [@Dotsenko:0295-5075-90-2-20003], [@Gubinelli-Perkowsky:MR3592748], [@Hairer:MR3274562], [@Hairer:MR3071506], [@Le-Doussal-Calabrese:1742-5468-2012-06-P06001], [@Moreno-Remenik-Quastel:MR3010188], [@Prahofer-Spohn:MR1933446], [@Quastel-Spohn:MR3373647], [@Sasamoto-Spohn:MR2628936], [@Sasamoto-Spohn:PhysRevLett.104.230602], [@Seppalainen:MR2917766], [@Tracy-Widom:MR1385083], and multiple other contributions. A fascinating feature of the problem is a combination of two factors: exact solvability and universality. On the one hand, one can write exact formulas for the limiting objects of some particular models. On the other hand, these formulas are supposed to describe the limiting behavior of a huge class of systems that are not integrable. The universality is so global that in a certain sense we do not know how far it stretches. One can say that any large scale “directed” variational problem in 2-dimensional disordered media is expected to belong to the KPZ universality class. In this paper, we concentrate not on exact solutions but rather on the universality phenomenon.
Another remarkable aspect of the problem is the fact that it is tightly connected to diverse areas of mathematics and theoretical physics: PDEs, stochastic analysis, dynamical systems, statistical mechanics, random matrix theory, stochastic geometry, representation theory, to name a few.
We want to emphasize the connection with the problem of global solutions to random Hamilton–Jacobi equations. We consider both cases: classical Hamilton–Jacobi equations, which correspond to the study of minimisers of the Lagrangian action (also called geodesics), and their parabolic regularizations, where action minimisers are replaced by directed polymers. In our approach, geometrical properties of action minimisers and polymers play an important role. We believe that these geometrical properties make random Hamilton–Jacobi equations a better playground than random matrices where no geometric aspects are currently known. We should warn the reader that the number of rigorous results in the area is rather limited. In most cases we provide explanations and conjectures rather than theorems. We believe that many problems that we discuss can be attacked mathematically. Others are more challenging. However, we see value in presenting the general picture and the set of ideas describing the phenomena.
The structure of the paper is the following. In Section \[sec:HJ-eq\], we define random Hamilton–Jacobi equations in any dimension and discuss their global solutions. We consider the general case of convex Hamiltonians rather than only quadratic ones.
To discuss properties of solutions, we introduce the notion of directed polymer that generalizes the usual one defined through the Hopf–Cole transformation and Feynman–Kac formula in the quadratic Hamiltonian case. Such generalized polymers are discussed in Section \[sec:HJ-polymers\]. We postpone the discussion of the equivalence of two notions of polymers in the quadratic Hamiltonian case to Section \[sec:equiv-of-poly-for-Burgers\] playing the role of an appendix.
In Section \[sec:shape\], we discuss the phenomenology of KPZ scaling exponents. We introduce the notion of shape function and demonstrate how its strong convexity property is related to $1:2:3$ KPZ scalings in dimension $1+1$.
In Section \[sec:min-shocks-1d\], we discuss the properties of minimisers and shocks in the 1D case. We present rigorous results in the compact case of periodic forcing potentials, and discuss how the behaviour of minimisers and shocks changes in the non-compact setting. We also discuss hyperbolic properties of minimisers playing an important role in the picture.
In Section \[sec:renorm-point\], we discuss point fields that play the role of structural backbones of the global geometry of minimisers and directed polymers. These point fields correspond to locations of concentration of minimisers/polymers and shocks separating the domains of attraction to those locations. We then define the renormalisation transformation acting on these point fields and formulate several conjectures related to fixed points of this transformation and their stability. The transformation is defined in purely geometrical terms without involving action values. This is a reflection of monotonicity which is present only in dimension 1. The scheme we discuss is more general than the KPZ phenomenon, with a 1-parameter family of fixed points, where the parameter is the exponent describing the rate of decay of density of the point field with time. We conjecture that the statistics of these fixed points is universal and reflects only the properties of interlacing between concentration points and shocks.
The KPZ case corresponds to the density decay rate exponent being equal to $2/3$. Another special value is $1/2$, which is related to stochastic flows of diffeomorphisms and coalescing Brownian motion. This is the simplest instance of the fixed point, and we discuss it in Section \[sec:colescing-BM\]. We also discuss the correlation functions of the fixed point which have been shown to exhibit Pfaffian structure. It is natural to ask whether a similar Pfaffian property may hold for other density exponents.
Another renormalization scheme based on Airy sheet is discussed in Section \[sec:renorm-airy\].
We finish with concluding remarks in Section \[sec:concluding\].
[**Acknowledgements.**]{} Large parts of this paper were written in KITP in Santa Barbara and CIRM in Luminy, and we are grateful to these centers for hospitality. The work of Yuri Bakhtin was partially supported by NSF through grant DMS-1460595. The work of Konstantin Khanin was supported by NSERC Discovery Grant RGPIN 328565.
Hamilton–Jacobi equations {#sec:HJ-eq}
=========================
We will consider the following randomly forced Hamilton–Jacobi equation:
$$\label{eq:general-Hamilton--Jacobi}
\partial_t \Phi + H(\nabla \Phi)={\nu}\Delta \Phi - F.$$
Here $\Phi=\Phi^\nu(t,x)$, $t\in{{\mathbb R}}$, $x\in{{\mathbb R}}^d$, is a scalar function. The Hamiltonian $H=H(p)$ is assumed to be a convex function of momentum $p=\nabla\Phi\in{{\mathbb R}}^d$; $F=F(t,x)$ is the external force potential, i.e., $f(t,x)=-\nabla F(t,x)$ is the external force. The viscosity parameter $\nu\ge 0$. We shall consider both the viscous case where $\nu>0$ and the inviscid case where $\nu=0$. In the former case, solutions are smooth in space, while in the latter they typically develop shocks, i.e., gradient discontinuities.
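For concreteness, a minimal 1D finite-difference sketch of the viscous equation with quadratic Hamiltonian $H(p)=p^2/2$ is shown below in Python; the smooth random-phase forcing used here is a toy stand-in for the forcing potentials discussed in the text, and the explicit scheme is chosen for brevity, not accuracy.

```python
import numpy as np

nx, Lx = 256, 2 * np.pi
nu, dt, T = 0.05, 1e-3, 1.0          # toy viscosity and time step
x = np.linspace(0.0, Lx, nx, endpoint=False)
dx = Lx / nx
rng = np.random.default_rng(2)
Phi = np.zeros(nx)

def lap(u):   # periodic second difference
    return (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2

def grad(u):  # periodic centered first difference
    return (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)

for _ in range(int(T / dt)):
    F = np.cos(x + 2 * np.pi * rng.random())   # random-phase potential
    # d Phi/dt = -H(grad Phi) + nu lap Phi - F, with H(p) = p^2 / 2
    Phi += dt * (-0.5 * grad(Phi)**2 + nu * lap(Phi) - F)
```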
|
{
"pile_set_name": "ArXiv"
}
|